Uncommon Descent Serving The Intelligent Design Community

Controlling the waves of dynamic, far from equilibrium states: the NF-kB system of transcription regulation.

Share
Facebook
Twitter
LinkedIn
Flipboard
Print
Email

I have recently commented on another thread:

https://uncommondesc.wpengine.com/intelligent-design/researcher-asks-is-the-cell-really-a-machine/

about a paper that (very correctly) describes cells as dynamic, far from equilibrium systems, rather than as “traditional” machines.

That is true. But, of course, the cell implements the same functions as complex machines do, and much more. My simple point is that, to do that, you need much greater functional complexity than you need to realize a conventional machine.

IOWs, dynamic, far from equilibrium systems that can be as successful as a conventional machine, or more, must certainly be incredibly complex and amazing systems, systems that defy everything else that we already know and that we can conceive. They must not only implement their functional purposes, but they must do that by “harnessing” the constantly changing waves of change, of random noise, of improbability. I have commented on those ideas in the mentioned thread, at posts #5 and #8, and I have quoted at posts #11 and #12 a couple of interesting and pertinent papers, introducing the important concept of robustness: the ability to achieve reliable functional results in spite of random noise and disturbing variation.

In this OP, I would like to present in some detail a very interesting system that shows very well what we can understand, at present, of that kind of amazing systems.

The system I will discuss here is an old friend: it is the NF-kB system of transcription factors (nuclear factor kappa-light-chain-enhancer of activated B cells). We are speaking, therefore, of transcription regulation, a very complex topic that I have already discussed in some depth here:

https://uncommondesc.wpengine.com/intelligent-design/transcription-regulation-a-miracle-of-engineering/

I will remind here briefly that transcription regulation is the very complex process that allows cells to be completely different using the same genomic information: IOWs, each type of cell “reads” differently the genes in the common genome, and that allows the different types of cell differentiation and the different cell responses in the same cell type.

Transcription regulation relies on many different levels of control, that are summarized in the above quoted OP, but a key role is certainly played by Transcription Factors (TFs), proteins that bind DNA and act as activators or inhibitors of transcription at specific sites.

TFs are a fascinating class of proteins. There are a lot of them (1600 – 2000 in humans, almost 10% of all proteins), and they are usually medium sized proteins, about 500 AA long, containing at least one highly conserved domain, the DNA binding domain (DBD), and other, often less understood, functional components.

I quote again here a recent review about human TFs:

The Human Transcription Factors

The NK-kB system is a system of TFs. I have discussed it in some detail in the discussion following the Ubiquitin thread, but I will describe it in a more systematic way here.

In general, I will refer a lot to this very recent paper about it:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The NF-kB system relies essentially on 5 different TFs (see Fig. 1 A in the paper):

  1. RelA  (551 AAs)
  2. RelB  (579 AAs)
  3. c-Rel  (619 AAs)
  4. p105/p50 (968 AAs)
  5. p100/p52  (900 AAs)

Those 5 TFs work forming dimers, homodimers or heterodimers, for a total of 15 possible compbinations, all of which have been found to work in the cell, even if some of them are much more common.

Then there are at least 4 inhibitor proteins, collectively called IkBs.

The mechanism is apparently simple enough. The dimers are inhibited by IkBs and therefore they remain in the cytoplasm in inactive form.

When an appropriate signal arrives to the cell and is received by a membrane receptor, the inhibitor (the IkB molecule) is phosphorylated and then ubiquinated and detached from the complex. This is done by a protein complex called IKK. The free dimer can then migrate to the nucleus and localize there, where it can act as a TF, binding DNA.

This is the canonical activation pathway, summarized in Fig. 1. There is also a non canonical activation pathway, that we will not discuss for the moment.


Mechanism of NF-κB action. In this figure, the NF-κB heterodimer consisting of Rel and p50 proteins is used as an example. While in an inactivated state, NF-κB is located in the cytosol complexed with the inhibitory protein IκBα. Through the intermediacy of integral membrane receptors, a variety of extracellular signals can activate the enzyme IκB kinase (IKK). IKK, in turn, phosphorylates the IκBα protein, which results in ubiquitination, dissociation of IκBα from NF-κB, and eventual degradation of IκBα by the proteasome. The activated NF-κB is then translocated into the nucleus where it binds to specific sequences of DNA called response elements (RE). The DNA/NF-κB complex then recruits other proteins such as coactivators and RNA polymerase, which transcribe downstream DNA into mRNA. In turn, mRNA is translated into protein, resulting in a change of cell function.

Attribution: Boghog2 at English Wikipedia [Public domain]

Now, the purpose of this OP is to show, in greater detail, how this mechanism, apparently moderately simple, is indeed extremely complex and dynamic. Let’s see.

The stimuli.

First of all, we must understand what are the stimuli that, arriving to the cell membrane, are capable to activate the NF-kB system. IOWs, what are the signals that work as inputs.

The main concept is: the NF-kB system is a central pathway activated by many stimuli:

  1. Inflammation
  2. Stress
  3. Free radicals
  4. Infections
  5. Radiation
  6. Immune stimulation

IOWs, a wide variety of aggressive stimuli can activate the system

The extracellular signal arrives to the cell usually through specific cytokines, for example TNF, IL1, or through pathogen associated molecules, like bacterial lipopolysaccharides (LPS). Of course there are different and specific membrane receptors, in particular IL-1R (for IL1) , TNF-R (for TNF), and many TLRs (Toll like receptors, for pathogen associated structures). A special kind of activation is implemented, in B and T lymphocytes, by the immune activation of the specific receptors for antigen epitopes (B cell receptor, BCR, and T cell receptor, TCR).

The process through which the activated receptor can activate the NF-kB dimer is rather complex: it involves, in the canonical pathway, a macromolecular complex called IKK (IkB kinase) complex, comprising two catalytic kinase subunits (IKKa and IKKb) and a regulatory protein (IKKg/NEMO), and involving in multiple and complex ways the ubiquitin system. The non canonical pathway is a variation of that. Finally, a specific protein complex (CBM complex or CBM signalosome) mediates the transmission from the immune BCR or TCR to the canonical pathway. See Fig. 2:

From: NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy – Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Increased-activity-of-the-CARMA1-BCL10-MALT1-signalosome-drives-constitutive-NF-kB_fig2_324089636 [accessed 10 Jul, 2019]
Figure 3 – NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy
available via license: Creative Commons Attribution 4.0 International

I will not go into further details about this part, but those interested can have a look at this very good paper:

TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme

In particular, Figg. 1, 2, 3.

In the end, as a result of the activation process, the IkB inhibitor is degraded by the ubiquitin system, and the NK-kB dimer is free to migrate to the nucleus.

An important concept is that this is a “rapid-acting” response system, because the dimers are already present, in inactive form, in the cytoplasm, and must not be synthesized de novo: so the system is ready to respond to the activating signal.

The response.

But what is the cellular response?

Again, there are multiple and complex possible responses.

Essentially, this system is a major regulator of innate and adaptive immune responses. As such, it has a central role in the regulation of inflammation, in immunity, in autoimmune processes, and in cancer.

Moreover, the NF-kB system is rather ubiquitous, and is present and active in many different cell types. And, as we have seen, it can be activated by different stimuli, in different ways.

So, the important point is that the response to activation must be (at least):

  1. Lineage-specific
  2. Stimulus-specific

IOWs, different cells must be able to respond differently, and each cell type must respond differently to different stimuli. That gives a wide range of possible gene expression patterns at the transcription level.

The following paper is a good review of the topic:

Selectivity of the NF-κB Response

For example, IL2 is induced by NF-kB activayion in T cells, but not in B cells (lineage specific response). Moreover, specific cell types can undergo specific, and often different, cell destinies after NF-kB activation: for example, NK-kB is strongly involved in the control and regulation of T and B cell development.

From:

30 years of NF-κB: a blossoming of relevance to human pathobiology

“B and T lymphocytes induce NF-κB in adaptive immune responses through the CARD11:Bcl10:MALT1 (CBM) complex (Hayden and Ghosh, 2008). Newly expressed genes promote lymphocyte proliferation and specific immune functions including antibody production by B cells and the generation of cytokines and other anti-pathogen responses by T cells.”

And, in the same cell type, certain promoters regulated by NF-kB require additional signaling (for example,  in human dendritic cells promoters for Il6Il12b, and MCP-1 require additional p38 histone phosphorylation to be activated), while others can be activated directly (stimulus-specific response).

So, to sum up:

  1. A variety of stimuli can activate the system in different ways
  2. The system itself has its complexity (different dimers)
  3. The response can be widely different, according to the cell type where it happens, and to the type of stimuli that have activated the system, and probably according to other complex variables.
  4. The possible responses include a wide range of regulations of inflammation, of the immune system, of cell specifications or modifications, and so on.

How does it work?

So, what do we know about the working of such a system?

I will ignore, for the moment, the many complexities of the activation pathways, both canonical and non canonical, the role of cyotkines and receptors and IKK complexes, the many facets of NEMO and of the involvement of the ubiquitin system.

For simplicity, we will start with the activated system: the IkB inhibitor has been released from the inactive complex in the cytoplasm, and some form of NF-kB dimer is ready to migrate to the nucleus.

Let’s remember that the purpose of this OP is to show that the system works as a dynamic, far from equilibrium system, rather than as a “traditional” machine. And that such a way to work is an even more amazing example of design and functional complexity.

To do that; I will rely mainly on the recent paper quoted at the beginning:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The paper is essentially about the NF-kB Target Selection Puzzle. IOWs, it tries to analyze what we know about the specificity of the response. How are specific patterns of transcription achieved after the activation of the system? What mechanisms allow the selection of the right genes to be transcribed (the targets) to implement the specific patterns according to cell type, context, and type of stimuli?

A “traditional” view of the system as a machine would try to establish rather fixed connections. For example, some type of dimer is connected to specific stimuli, and evokes specific gene patterns. Or some other components modulate the effect of NK-kB, generate diversification and specificity of the response.

Well, those ideas are not completely wrong. In a sense, the system does work also that way. Dimer specificity has a role. Other components have a role. In a sense, but only in a sense, the system works as though it were a traditional machine, and uses some of the mechanisms that we find in the concept of a traditional biological machine.

But that is only a tiny part of the real thing.

The real thing is that the system really works as a dynamic, far from equilibrium system, harnessing huge random/stochastic components to achieve robustness and complexity and flexibility of behavior in spite of all those non finalistic parts.

Let’s see how that happens, at least for the limited understanding we have of it. It is important to consider that this is a system that has been studied a lot, for decades, because of its central role in so many physiological and pathological contexts, and so we know many things. But still, our understanding is very limited, as you will see.

So, let’s go back to the paper. I will try to summarize as simply as possible the main concepts. Anyone who is really interested can refer to the paper itself.

Essentially, the paper analyzes three important and different aspects that contribute to the selection of targets at the genomic level by our TFs (IOWs, our NF-kB dimers, ready to migrate to the nucleus. As the title itself summarizes, they are:

  1. Abundance
  2. Affinity
  3. Binding site availability

1. Abundance

Abundance is referred here to two different variables: abundance of NF-kB Binding Sites in the genome and abundance of Nucleus-Localized NF-kB Dimers. Let’s consider them separately.

1a) Abundance of NF-kB Binding Sites in the genome:

It is well known that TFs bind specific sites in the genome. For NF-kB TFs, the following consensus kB site pattern has been found:

 5′-GGGRNWYYCC-3′

where R, W, Y, and N, respectively denote purine, adenine or thymine, pyrimidine, and any nucleotide.

That simply means that any sequence corresponding to that pattern in the genome can, in principle, bind NF-kB dimers.

So the problem is: how many such sequences do exist in the human genome?

Well, a study based on RelA has evaluated about 10^4 consensus sequences in the whole genome, but as NF-kB dimers seem to bind even incomplete consensus sites, the total number of potential binding sites could be nearer to 10^6

1b) Abundance of Nucleus-Localized NF-kB Dimers:

An estimate of the abundance of dimers in the nucleus after activation of the system is that about 1.5 × 10^5 molecules can be found, but again that is derived from studies about RelA only. Moreover, the number of molecules and type of dimer can probably vary much according to cell type.

So, the crucial variable, that is the ratio between binding sites and available dimers, and which could help undertsand the rate of sites saturation in the nucleus, remains rather undecided, and it seems very likely that it can vary a lot in different circumstances.

But there is another very interesting aspect about the concentration of dimers in the nucleus. According to some studies, NF-kB seems to generate oscillations of its nuclear content in some cell types, and those oscillation can be a way to generate specific transcription patterns:

NF-kB oscillations translate into functionally related patterns of gene expression

For example, this very recent paper :

NF-κB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

shows at Fig. 3 the occupancy curve of binding sites at nuclear level after NF-kB activation in two different cell types.

In fibroblasts, the curve is a periodic oscillation, with a frequency that varies according to various factors, and translates into different transcription scenarios accordingly:

Gene expression dynamics scale with the period (g1) and amplitude (g2) of these oscillations, which are influenced by variables such as signal strength, duration, and receptor identity.


In macrophages, instead, the curve is rather:

a single, strong nuclear translocation event which persists for as long as the stimulus remains and tends to remain above baseline for an extended period of time.

In this case, the type of transcription will be probably regulated by the are under the curve, ratehr than by the period and amplitude of the oscialltions, as happened in fibroblasts.

Interestingly, while in previous studies it seemed that the concentration of nuclear dimers could be sufficient to saturate most or all binding sites, that has been found not to be the case in more recent studies. Again from the paper about abundance:

in fact, this lack of saturation of the system is necessary to generate stimulus- and cell-type specific gene expression profiles

Moreover, the binding itself seems to be rather short-lived:

Interestingly, it is now thought that most functional NF-kB interactions with chromatin—interactions that lead to a change in transcription—are fleeting… a subsequent study using FRAP in live cells expressing RelA-GFP showed that most RelA-DNA interactions are actually quite dynamic, with half-lives of a few seconds… Indeed, a recent study used single-molecule tracking of individual Halo-tagged RelA molecules in live cells to show that the majority (∼96%) of RelA undergoes short-lived interactions lasting on average ∼0.5 s, while just ∼4% of RelA molecules form more stable complexes with a lifetime of ∼4 s.

2. Affinity

Affinity of dimers for DNA sequences is not a clear cut matter. From the paper:

Biochemical DNA binding studies of a wide variety of 9–12 base-pair sequences have revealed that different NF-kB dimers bind far more sequences than previously thought, with different dimer species exhibiting specific but overlapping affinities for consensus and non-consensus kB site sequences.

IOWs, we have different dimers (15 different types) binding with varying affinity different DNA sequences (starting from the classical consensus sequence, but including also incomplete sequences). Remember that those sequences are rather short (the consensus sequence is 10 nucleotides long), and that there are thousands of such sequences in the genome.

Moreover, different bindings can affect transcription differently. Again, from the paper:

How might different consensus kB sites modulate the activity of the NF-kB dimers? Structure-function studies have shown that binding to different consensus kB sites can alter the conformation of the bound NF-kB dimers, thus dictating dimer function When an NF-kB dimer interacts with a DNA sequence, side chains of the amino  acids located in the DNA-binding domains of dimers contact the bases exposed in the groove of the DNA. For different consensus kB site sequences different bases are exposed in this groove, and NF-kB seems to alter its conformation to maximize interactions with the DNA and maintain high binding affinity. Changes in conformation may in turn impact NF-kB binding to co-regulators of transcription, whether these are activating or inhibitory, to specify the strength and dynamics of the transcriptional response. These findings again highlight how the huge array of kB binding site sequences must play a key role in modulating the transcription of target genes.

Quite a complex scenario, I would say!

But there is more:

Finally, as an additional layer of dimer and sequence-specific regulation, each of the subunits can be phosphorylated at multiple sites with, depending on the site, effects on nearly every step of NF-kB activation.

IOWs, the 15 dimers we have mentioned can be phosphorylated in many different ways, and that changes their binding affinities and their effects on transcription.

This section of the paper ends with a very interesting statement:

Overall, when considering the various ways in which NF-kB dimer abundances and their affinity for DNA can be modulated, it becomes clear that with these multiple cascading effects, small differences in consensus kB site sequences and small a priori differences in interaction affinities can ultimately have a large impact on the transcriptional response to NF-kB pathway activation.

Emphasis mine.

This is interesting, because in some way it seems to suggest that the whole system acts like a chaotic system, at least at some basic level. IOWs, small initial differences, maybe even random noise, can potentially affect deeply the general working of the whole systems.

Unless, of course, there is some higher, powerful level of control.

3. Availability of high affinity kB binding sequences

We have seen that there is a great abundance and variety of binding sequences for NF-kB dimers in the human genome. But, of course, those sequences are not necessarily available. Different cell types will have a different scenario of binding sites availability.

Why?

Because, as we know, the genome and chromatin are a very dynamic system, that can exist in many different states, continuosly changing in different cell types and, in the same cell type, in different conditions..

We know rather well the many levels of control that affect DNA and chromatin state. In brief, they are essentially:

  1. DNA methylation
  2. Histone modifications (methylation, acetylation, etc)
  3. Chromatin modifications
  4. Higher levels of organization, including nuclear localization and TADs (Topologically Associating Domains)

For example, from the paper:

The promoter regions of early response genes have abundant histone acetylation or trimethylation prior to stimulation [e.g., H3K27ac, (67) and H4K20me3, (66)], a chromatin state “poised” for immediate activation…  In contrast, promoters of late genes often have hypo-acetylated histones, requiring conformational changes to the chromatin to become accessible. They are therefore unable to recruit NF-kB for up to several hours after stimulation (68), due to the slow process of chromatin remodeling.

We must remember that each wave of NK-kB activation translates into the modified transcription of a lot of different genes at the genome level. It is therefore extremely important to consider what genes are available (IOWs, their promoters can be reached by the NF-kB signal) in each cell type and cell state.

The paper concludes:

Taken together, chromatin state and chromatin organization strongly influence the selection of DNA binding sites by NF-kB dimers and, most likely, the selection of the target genes that are regulated by these protein-DNA interaction events. Analyses that consider binding events in the context of three-dimensional nuclear organization and chromatin composition will be required to generate a more accurate view of the ways in which NF-kBDNA binding affects gene transcription.

This is the main scenario. But there are other components, that I have not considered in detail for the sake of brevity, for example competition between NF-kB dimers and the complex role and intervention of other co-regulators of transcription.

Does the system work?

But does the system work?

Of course it does. It is a central regulator, as we have said, of many extremely important biological processes, above all immunity. This is the system that decides how immune cells, T and B lymphocytes, have to behave, in terms of cell destiny and cell state. It is of huge relevance in all inflammatory responses, and in our defense against infections. It works, it works very well.

And what happens if it does not work properly?

Of course, like all very complex systems, errors can happen. Those interested can have a look at this recent paper:

30 years of NF-κB: a blossoming of relevance to human pathobiology

First of all, many serious genetic diseases have been linked to mutations in genes involved in the system. You can find a list in Table 1 of the above paper. Among them, for example, some forms of SCID, Severe combined immunodeficiency, one of the most severe genetic diseases of the immune system.

But, of course, a dysfunction of the NF-kB system has a very important role also in autoimmune diseases and in cancer.

Conclusions.

So, let’s try to sum up what we have seen here in the light of the original statement about biological systems that “are not machines”.

The NF-kB system is a perfect example. Even if we still understand very little of how it works, it is rather obvious that it is not a traditional machine.

A traditional machine would work differently. The signal would be transmitted from the membrane to the nucleus in the simplest possible way, without ambiguities and diversions. The Transcription Factor, once activated, would bind, at the level of the genome, very specific sites, each of them corresponding to a definite cascade of specific genes. The result would be clear cut, almost mechanical. Like a watch.

But that’s not the way things happen. There are myriads of variations, of ambiguities, of stochastic components.

The signal arrives to the membrane in multiple ways, very different one from the other: IL1, IL17, TNF, bacterial LPS, and immune activation of the B cell receptor (BCR) or the T cell receptor (TCR) are all possible signals.

The signal is translated to the NF-kB proteins in very different ways: canonical or non canonical activation, involving complex protein structures such as:

The CBM signalosome, intermediate between immune activation of BCR or TCR and canonical activation of the NF-kB. This complex is made of at least three proteins, CARD11, Bcl10 and MALT1.

The IKK complex in canonical activation: this is made of three proteins, IKK alpha, IKK beta, and NEMO. Its purpose is to phosphorylate the IkB, the inhibitor of the dimers, so that it can be ubiquinated and released from the dimer. Then the dimer can relocate to the nucleus.

Non canonical pathway: it involves the following phosphorylation cascade: NIK -> IKK alpha dimer -> Relb – p100 dimer -> Relb – p50 dimer (the final TF). It operates during the development of lymphoid organs and is responsible for the generation of B and T lymphocytes.

Different kinds of activated dimers relocate to the nucleus.

Different dimers, in varying abundance, interact with many different binding sites: complete or incomplete consensus sites, and probably others. The interaction is usually brief, and it can generate an oscillating pattern, or a more stable pattern

Completely different sets of genes are transcribed in different cell types and in different contexts, because of the interaction of NF-kB TFs with their promoters.

Many other factors and systems contribute to the final result.

The chromatin state of the cell at the moment of the NF-kB activation is essential to determine the accessibility of different binding sites, and therefore the final transcription pattern.

All these events and interactions are quick, unstable, far from equilibrium. A lot of possible random noise is involved.

In spite of that amazing complexity and potential stochastic nature of the system, reliable transcripion regulation and results are obtained in most cases. Those results are essential to immune cell differentiation, immune response, both innate and adaptive, inflammation, apoptosis, and many other crucial cellular processes.

So, let’s go back to our initial question.

Is this the working of a machine?

Of course it is! Because the results are purposeful, reasonably robust and reliable, and govern a lot of complex processes with remarkable elegance and efficiency.

But certainly, it is not a traditional machine. It is a lot more complex. It is a lot more beautiful and flexible.

It works with biological realities and not with transistors and switches. And biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.

It is more similar to a set of extremely clever surfers who succeed in performing elegant and functional figures and motions in spite of the huge contrasting waves.

It is, from all points of view, amazing.

Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.

And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer it must be for a system like NF-kB? Or, for that, like any cellular complex system?

Do you still have any doubts?

Added graphic: The evolutionary history, in terms of human conserved information, of the three proteins in the CBM signalosome.
On the y axis, homologies with the human protein as bits per aminoacid (bpa). On the x axis, approximate time of appearance in million of years.
The graphic shows the big information jump in vertebrates for all three protens , especially CARD11.


Added graphic: two very different proteins and their functional history


Added graphic (for Bill Cole). Functional history of Prp8, collagen, p53.
Comments
Missing gpuccio’s technical posts. jawa
Transposable elements as a potent source of diverse cis-regulatory sequences in mammalian genomes
Eukaryotic gene regulation is mediated by cis-regulatory elements, which are embedded within the vast non-coding genomic space and recognized by the transcription factors in a sequence- and context-dependent manner. A large proportion of eukaryotic genomes, including at least half of the human genome, are composed of transposable elements (TEs)
Transcription factors (TF) are proteins that regulate gene expression by binding to DNA at specific sequence motifs.
differences across species are thought to be largely driven by changes in gene expression, mediated by divergence in cis-regulatory elements [106,107]. Recent progress in the field revealed that a substantial portion of mammalian cis-regulatory sequences is derived from TEs. These TE-derived cis-regulatory elements are often cell type- and species/clade-specific and can contribute to gene expression regulation through many diverse mechanisms
The prevalence of TE utilization for regulatory functions may differ between cell types and developmental stages. Indeed, TEs seem to play an outsized role during mammalian pre- and peri-implantation development, where whole subclasses of TEs (such as MERVL in mice, or HERV-K and HERV-H in humans) function in host gene regulation as alternative promoters, enhancers or boundary elements. This widespread utilization of TEs during early development is likely facilitated both by the global epigenomic de-repression during this time of embryogenesis and by the fact that to successfully propagate through vertical transmission
Since we have only begun to systematically assess the function of TEs in gene control, it is likely that we are still vastly underestimating their impact as well as the diversity of mechanisms by which TEs can influence transcription, post-transcriptional gene regulation, genome organization
our understanding of TEs has come a long way from the notion of ‘junk’ DNA. What persists is Barbara McClintock's early vision of TEs as ‘controlling elements'
  OLV
GP: We’re looking forward to reading your next OP on the immune system and the RAG proteins. PeterA
GP, This article in EN reminds of a topic you have discussed here: https://evolutionnews.org/2020/01/can-new-genes-emerge-from-scratch/ PeterA
GP: You may want to look at this: https://www.biorxiv.org/content/10.1101/2020.01.12.903138v1.full OLV
more ID? :) VEGF/VEGFR2 signaling regulates hippocampal axon branching during development
This study reports a novel molecular mechanism by which direct VEGF/VEGFR2 signaling on hippocampal pyramidal neurons regulates axon branching during development.
VEGF/VEGFR2 signaling falls into the category of bimodal regulators that differentially direct axonal and dendritic development in an opposite manner.
Our data provide for the first time evidence that in mammals similar processes can also modulate axon branching.
Two possibilities, non-exclusive, could explain the increased axon branching upon VEGFR2 deletion. On the one hand, the lack of VEGFR2 could lead to a decrease in the dynamics of protrusions turnover and thus increase the probability of a protrusion to become a filopodium. Our data support such a model as we observe that the absence of VEGFR2 increases the percentage of filopodia. On the other hand, a compensatory mechanism, yet unidentified, might become activated to overcome the inhibition of VEGFR2, resulting in higher branch number than in control conditions.
OLV
more ID: EphrinB2 regulates VEGFR2 during dendritogenesis and hippocampal circuitry development
Vascular endothelial growth factor (VEGF) is an angiogenic factor that play important roles in the nervous system, although it is still unclear which receptors transduce those signals in neurons.
Our results demonstrate the functional crosstalk of VEGFR2 and ephrinB2 in vivo to control dendritic arborization, spine morphogenesis and hippocampal circuitry development.
During development, the hippocampus undergoes typical stages of neuronal development involving proliferation, differentiation, synapse and circuit formation, and the maturation of synaptic connections. CA3 neurons are generated around E14.5 and CA1 neurons one day later at E15.5 (Grove, 1999). Neurogenesis of the dentate gyrus granule cell starts shortly before birth, peaks in the first postnatal week and continues until adulthood.
 
Our study describes a novel function for neuronal expressed VEGFR2 signaling in the development of the hippocampus.
      OLV
DAZL Regulates Germ Cell Survival through a Network of PolyA-Proximal mRNA Interactions
The RNA binding protein DAZL is essential for gametogenesis, but its direct in vivo functions, RNA targets, and the molecular basis for germ cell loss in Dazl-null mice are unknown.
Our results reveal a mechanism for DAZL-RNA binding and illustrate that DAZL functions as a master regulator of a post-transcriptional mRNA program essential for germ cell survival.
RNA binding proteins (RBPs) are potent post-transcriptional regulators of gene expression.
The necessity of the DAZ family of RBPs for germ cell survival is well established in multiple species. However, the direct targets, regulatory roles, and biological functions of these RBPs remained unclear. Our integrative analyses combining transgenic mice, FACS, and a panel of unbiased, transcriptome-wide profiling tools provide important insights into the molecular and biological functions of this important family of RBPs.
In conclusion, our study provides insights into the molecular basis of germ cell loss in Dazl KO mice and demonstrates that germ cell survival depends on a DAZL-dependent mRNA regulatory program. Given the functional conservation between mouse DAZL, human DAZL, and DAZ (Vogel et al., 2002), our findings shed light on the molecular basis for azoospermia in 10%–15% of infertile men with Y chromosome microdeletions. The RNA targets of DAZL extend far beyond germ cell-specific genes and include many that encode core components of macromolecular complexes present in all proliferating cells. Therefore, our findings may also be relevant to other human diseases because Dazl is a susceptibility gene for human testicular cancer (Ruark et al., 2013) and is amplified or mutated in nearly 30% of breast cancer patient xenografts examined in a single study (Eirew et al., 2015). We propose a general model (Figure S7) whereby DAZL binds a vast set of mRNAs via polyA-proximal interactions facilitated by PABPC1-polyA binding and post-transcriptionally enhances the expression of a subset of mRNAs, namely a network of genes that are essential for cell-cycle regulation and mammalian germ cell maintenance. These observations provide insights into molecular mechanisms by which a single RBP is recruited to its RNA targets and coordinately controls a network of mRNAs to ensure germ cell survival.
OLV
ID on steroids? :) 3? End Formation and Regulation of Eukaryotic mRNAs
The polyadenosine (polyA) “tail” is an essential feature at the 3? end of nearly all eukaryotic mRNAs. This appendage has roles in many steps in the gene expression pathway and is subject to extensive regulation. Selection of alternative sites for polyA tail addition is a widely used mechanism to generate alternative mRNAs with distinct 3?UTRs that can be subject to distinct forms of posttranscriptional control. One such type of regulation includes cytoplasmic lengthening and shortening of the polyA tail, which is coupled to changes in mRNA translation and decay. Here we present a general overview of 3? end formation in the nucleus and regulation of the polyA tail in the cytoplasm, with an emphasis on the diverse roles of 3? end regulation in the control of gene expression in different biological systems.
    OLV
off topic: Transposons remind me of some comments GP has made about them. Here's an interesting paper: The RNA Helicase BELLE Is Involved in Circadian Rhythmicity and in Transposons Regulation in Drosophila melanogaster  
Circadian clocks control and synchronize biological rhythms of several behavioral and physiological phenomena in most, if not all, organisms. Rhythm generation relies on molecular auto-regulatory oscillations of interlocked transcriptional-translational feedback loops. Rhythmic clock-gene expression is at the base of rhythmic protein accumulation, though post-transcriptional and post-translational mechanisms have evolved to adjust and consolidate the proper pace of the clock.
We suggest that BELLE acts as important element in the piRNA-mediated regulation of the TEs and raise the hypothesis that this specific regulation could represent another level of post-transcriptional control adopted by the clock to ensure the proper rhythmicity.
BELLE Is a Putative Circadian Clock Component
Aiming at finding new molecular component of the clock machinery in Drosophila, we have identified the DEAD-box RNA helicase BELLE as interactor of CRY.
BELLE Has a Role in the Regulation of the TEs in the Nervous System and in Gonads
A possible role of BELLE in the piRNA pathway is only starting to be elucidated: our experiments suggest that it might act in maintaining precise levels of TE RNAs, probably regulating the activity of other piRNA components. The involvement of belle in both circadian rhythmicity and piRNA mediated transposon regulation suggests association between these two biological processes. This hypothesis is supported by indirect, though reasonable, observations.
we have described an emerging role of belle in both circadian rhythmicity and transposon regulation, that leads to the attractive hypothesis that piRNA-mediated regulation could be another level of post-transcriptional control adopted by the clock to ensure the proper rhythmicity.
  OLV
Why don’t we see more ID objectors in this discussion? Are they afraid of scientific discussions? We know that a few distinguished biology or biochemistry professors* have commented here in this website before, but have left after running out of valid anti-ID arguments. But why do they let GP get away with his extensive ID-supporting OPs and follow up comments? Have they hidden for good? :) (*) professors LM (UofT) & AH (UofK) for example jawa
Maybe GP could comment on some of those papers ? PeterA
PW, That’s quite a juicy list indeed. Thanks. OLV
OLV, Here’s a list of papers that may relate to this OP: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6590518/citedby/ pw
Another recent in topic paper: 2019 Cell Reports
Compensation among paralogous transcription factors (TFs) confers genetic robustness of cellular processes, but how TFs dynamically respond to paralog depletion on a genome-wide scale in vivo remains incompletely understood. Using single and double conditional knockout of myocyte enhancer factor 2 (MEF2) family TFs in granule neurons of the mouse cerebellum, we find that MEF2A and MEF2D play functionally redundant roles in cerebellar-dependent motor learning. Although both TFs are highly expressed in granule neurons, transcriptomic analyses show MEF2D is the predominant genomic regulator of gene expression in vivo. Strikingly, genome-wide occupancy analyses reveal upon depletion of MEF2D, MEF2A occupancy robustly increases at a subset of sites normally bound to MEF2D. Importantly, sites experiencing compensatory MEF2A occupancy are concentrated within open chromatin and undergo functional compensation for genomic activation and gene expression. Finally, motor activity induces a switch from non-compensatory to compensatory MEF2-dependent gene regulation. These studies uncover genome-wide functional interdependency between paralogous TFs in the brain.
Due to the diverse states a neuron undergoes during development and plasticity, the context-dependent nature of compensation by TF family members should advance our understanding of brain development and function. As we learn more about the interdependency of paralogous TFs, we should gain further insight into how paralogs respond to TF loss-of-function mutations in the context of disease [...] Furthermore, identifying the genomic signatures of non-compensatory sites may allow us to predict regulatory elements that might be more susceptible to gene dysregulation upon perturbation of different TF paralogs in disease states.
  OLV
This paper seems more in topic with this OP: 2019 Nature Genetics
Core regulatory transcription factors (CR TFs) orchestrate the placement of super-enhancers (SEs) to activate transcription of cell-identity specifying gene networks, and are critical in promoting cancer. Here, we define the core regulatory circuitry of rhabdomyosarcoma and identify critical CR TF dependencies. These CR TFs build SEs that have the highest levels of histone acetylation, yet paradoxically the same SEs also harbor the greatest amounts of histone deacetylases. We find that hyperacetylation selectively halts CR TF transcription. To investigate the architectural determinants of this phenotype, we used absolute quantification of architecture (AQuA) HiChIP, which revealed erosion of native SE contacts, and aberrant spreading of contacts that involved histone acetylation. Hyperacetylation removes RNA polymerase II (RNA Pol II) from core regulatory genetic elements, and eliminates RNA Pol II but not BRD4 phase condensates. This study identifies an SE-specific requirement for balancing histone modification states to maintain SE architecture and CR TF transcription.
OLV
Jawa, Here's another paper for your talented "all knowing" buddy to explain to the rest of us here: Front. Genet., 08 November 2019
The hedgehog (Hh) family comprises sonic hedgehog (Shh), Indian hedgehog (Ihh), and desert hedgehog (Dhh), which are versatile signaling molecules involved in a wide spectrum of biological events including cell differentiation, proliferation, and survival; establishment of the vertebrate body plan; and aging. These molecules play critical roles from embryogenesis to adult stages
The Hh family involves many signaling mediators and functions through complex mechanisms, and achieving a comprehensive understanding of the entire signaling system is challenging.
The regulatory mechanisms of the Hh pathway are complex, and new mechanisms are continuously being identified. Because each new finding triggers another question, many researchers in various fields including molecular and cell biology, genetics, medicine, biochemistry, protein structure, chemistry, and mathematical biology have chosen Hh as the focus of their research.
Hh proteins are involved in a variety of biological events such as cell differentiation, proliferation, and survival. The fact that multiple processes that are apparently distinct from each other are induced by a single Hh protein should be addressed in the future. Future studies on Hh could focus on the cell type-specific expression levels of each mediator of the Hh signaling pathway. Although there are more than 30 mediators of Hh signaling, the expression levels of these proteins are likely cell type-specific, which may confer variation in the kinetics and responsiveness to the signal.
The mechanisms underlying the cell type specificity of target genes involved in the Hh signaling pathway should be investigated. Although most signaling mediators are common to Shh, Ihh, and Dhh, the downstream genes induced are context-dependent. This variation may be achieved through crosstalk with other signaling molecules, or differences in the transcription factors interacting with Gli or the epigenetic background (chromatin status) of cells. Even in the same neural progenitor cells, early and late progenitor cells show differential responses to the same Shh protein
Despite extensive research, many mechanisms underlying Hh signaling may remain undiscovered, and cutting-edge approaches, such as chasing single cells or single proteins, computational prediction, and genome-wide functional screens, are warranted to elucidate these mechanisms.
Perhaps he will tell us how to explain all that using RV+NS. :) OLV
Jawa, yeah, right. :) PeterA
OLV @731: I'm sure PavelU could cite some papers that answer those questions. :) jawa
An interview with a leading Biology researcher
the fundamental question in developmental biology is how the right cells are produced in the right place, at the right time and in the right amounts in a developing tissue.
Addressing these issues covers some of the most basic questions in biology. How is gene activity controlled? How is cell function determined? How are tissues shaped and organised from cells?
Given the huge volume of published research, it's increasingly difficult to keep up with the scientific literature.
OLV
KF and EugeneS, Excellent comments @728 & @729 respectively. Thanks. PeterA
PavelU
to explain the forces that must have shaped...
Must have... Natural selection does not select for a future function. It selects from among existing functions. The semiotic triple {code-interpretant-referent} is a prerequisite of evolution (whatever its capabilities in reality), not a consequent. For evolution to kick-start there must be a population of functional systems capable of replicating themselves. Replication is impossible without read/write from/to memory. The memory contents must be treated in two substantially different ways by the same replication system: as data to make a copy of, and as a program to build a next-generation organism. Additionally, to make sure the system is semantically closed, memory should contain a program not only to reconstruct the organism itself but also to reconstruct the reconstructor. And this complexity must be present before evolution can even start! EugeneS
PU, more exaggerated speculation: ABSTRACT: The universal triple-nucleotide genetic code is often viewed as a given, randomly selected through evolution. However, as summarized in this article, many observations and deductions within structural and thermodynamic frameworks help to explain the forces that must have shaped the code during the early evolution of life on Earth. Thermodynamics at micro-level, is in large part about random molecular activity. Insofar as it addresses constraints of necessity, that still does not get us to language, code, organised execution machinery, encoding of replication information or the associated metabolic, encapsulated, smart gated entity capable of protecting its internal systems. KF kairosfocus
This paper explains how the genetic code appeared and evolved: “many observations and deductions within structural and thermodynamic frameworks help to explain the forces that must have shaped the code during the early evolution of life on Earth.” https://jb.asm.org/content/201/15/e00091-19.long PavelU
GP, Thank you very much for the clear -as usual- explanation @723. Also enjoyed reading your insightful comment on the interesting question by EugeneS. PeterA
GP, I had a look into the paper. They report that it is possible to climb halfway towards peaks of wild type functions with only random substitutions. They hypothesize that given recombinations it may be possible to reach the peaks with realistic constraints on the number of available sequences. Now I vaguely remember that we might have discussed this already in a different thread. Bringing recombinations in makes it a search rather than adaptive walk, which sort of begs the question because evolution is not a proper search. Do you agree? EugeneS
GP, Now I remember the context. Thanks very much. EugeneS
PeterA at #720: The exocyst? Amazing, certainly. It is indeed an extremely important complex for intracellular transport (from Golgi to the plasma membrane) and exocytosis. Eight subunits, each of them about 700-900 AAs long. And all of them are required for the function, because the loss of any of them results in cell death or an embryonic lethal phenotype. The eight subunits are indivisual and different proteins: none of them shows any significant sequence similarity to any of the others. The complex is conserved in all eukaryotes, and many aspects of its structure and function are conserved too. However, the subunits that make the complex are often very different in different species. The sequence specificity of human sequences is definitely acquired mainly at the transition to vertebrates. All eight proteins exhibit a definite information jump at that transition. For example, for EXOC1 the jump is 0.759 baa and 679 bits, and it is very similar in all the other components. So, even considering an average of 600 bits, we have an irreducibly complex system, essential to organism survival, which shows a global information jump of about 4800 bits at the vertebrate transition, is highly dynamic and still very much incompletely understood! Not bad! :) gpuccio
EugeneS: Hi Eugene. Good question. I think it is a generally accepted idea that wild type proteins as we observe them are highly optimized. This is exactly one of the main "arguments" of neo-darwinists, who of course explain that optimization as the result of a long neo-darwinian process, and infer therefore (without any evidence of that) that the optimized protein is derived from some imaginary ancestor which had to be at the same time much simpler and still functional enough to be naturally selected. This is one of the fairy tales of neo-darwinism. There is no evidence at all that supports the existence of those long neo-darwinian pathways of optimization. Of course we have a few examples of optimization due to neo-darwinisn processes, for example in antibiotic resistance, but it is always a very short pathway (2, 3 AAs in most cases), and the basic variation that brings the new function in the beginning is always very simple (1 or 2 AAs), and works by degrading partially some already existing, comples functional structure. Behe has argued very well about that general scenario in his last book. That existing wild types are highly optimized, and that they cannot be reached by a neo-darwinian pathway, is shown very well by the precious paper about rugged landscapes: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0000096 Here, the wild type could not be reached (in the experiment, and probably in any real world scenario) in a very simple experiment of function retrieval, where instead only many less functional states (by far less functional) could be easily reached. This, in a phage system, the best context probably for RV + NS to act. So, in this case we have very important evidence that the wild type protein is an exceptionally rare state with the highest known functionality. A very strong evidence for design, if I ever saw one. Of course, that paper was quoted to me, in the beginning, by some neo-darwinist "friend", as a clear example supporting neo-darwinism! gpuccio
GP, Not necessarily related to your OP. You mentioned somewhere that there was evidence that protein wild types were globally optimal. Could you say more on this please? Thanks. EugeneS
I'd like to read GP's comment on the large hetero-octameric protein complex known as "exocyst", which apparently is very important. What's the history of those proteins? PeterA
I wouldn't take PavelU's senseless posts too seriously. His paper references are completely off-topic. The poor troll just overreacts to seeing the word "evolution" in the headline or the text of a paper and immediately assumes it's proving something. Very naive attitude, just to say it nicely. I'd rather call it stupid though. Apparently he hasn't got the memo explaining that evolutionary terms are sprinkled all over many papers because that's the dressing they have to put on their articles in order to make them acceptable to the publishing establishment. In most cases one can delete those terms and the meaning of the text doesn't get altered. Actually, sometimes they turn better readable after removing the 'evolutionary' additions. If PavelU is not a troll, then he definitely has a very poor reading comprehension that he should work on ASAP. The poor guy is a joke. jawa
PavelU, you claim that "Here’s a very recent paper that demonstrates evolution with much details:" From the paper we find:
"We found that neuronal number and soma position are highly conserved. However, the morphological elaborations of several amphid cilia are different between them, most notably in the absence of ‘winged’ cilia morphology in P. pacificus. We established a synaptic wiring diagram of amphid sensory neurons and amphid interneurons in P. pacificus and found striking patterns of conservation and divergence in connectivity relative to C. elegans, but very little changes in relative neighborhood of neuronal processes." https://elifesciences.org/articles/47155
PavelU, in case you are unaware, their finding of " the absence of ‘winged’ cilia morphology in P. pacificus' and their finding of "very little changes in relative neighborhood of neuronal processes" are actually evidence against the reductive materialism of Darwinian evolution. Same with your second paper:
Twenty million years of evolution: The embryogenesis of four Caenorhabditis species are indistinguishable despite extensive genome divergence "The four Caenorhabditis species C. elegans, C. briggsae, C. remanei and C. brenneri show more divergence at the genomic level than humans compared to mice (Stein et al., 2003; Cutter et al., 2006, 2008). However, the behavior and anatomy of these nematodes are very similar. " https://www.sciencedirect.com/science/article/pii/S0012160618306870?via%3Dihub
LOL, PavelU, do you not realize that this paper directly contradicts YOUR Darwinian assumption that mutations to DNA are SUPPOSE to effect body plan morphology ?
Darwinism vs Biological Form https://www.youtube.com/watch?v=JyNzNPgjM4w
Your last claim is a "recent paper that proves protein evolution". Yet the paper 'proves' nothing of the sort. From the paper,
Methods: We examined evolution of the exocyst by comparative genomics, phylogenetics and structure prediction. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6784791/#!po=0.467290
PavelU, do you not know in science that to 'prove' that something is feasible, you actually have to empirically demonstrate that it is possible? They have done nothing of the sort. The real world of empirical science is not kind to your Darwinian presuppositions of protein evolution in the least:
Right of Reply: Our Response to Jerry Coyne – September 29, 2019 by Günter Bechly, Brian Miller and David Berlinski Excerpt: David Gelernter observed that amino acid sequences that correspond to functional proteins are remarkably rare among the “space” of all possible combinations of amino acid sequences of a given length. Protein scientists call this set of all possible amino acid sequences or combinations “amino acid sequence space” or “combinatorial sequence space.” Gelernter made reference to this concept in his review of Meyer and Berlinski’s books. He also referenced the careful experimental work by Douglas Axe who used a technique known as site-directed mutagenesis to assess the rarity of protein folds in sequence space while he was working at Cambridge University from 1990-2003. Axe showed that the ratio of sequences in sequence space that will produce protein folds to sequences that won’t is prohibitively and vanishingly small. Indeed, in an authoritative paper published in the Journal of Molecular Biology Axe estimated that ratio at 1 in 10^74. From that information about the rarity of protein folds in sequence space, Gelernter—like Axe, Meyer and Berlinski—has drawn the rational conclusion: finding a novel protein fold by a random search is implausible in the extreme. Not so, Coyne argued. Proteins do not evolve from random sequences. They evolve by means of gene duplication. By starting from an established protein structure, protein evolution had a head start. This is not an irrational position, but it is anachronistic. Indeed, Harvard mathematical biologist Martin Nowak has shown that random searches in sequence space that start from known functional sequences are no more likely to enter regions in sequence space with new protein folds than searches that start from random sequences. The reason for this is clear: random searches are overwhelmingly more likely to go off into a non-folding, non-functional abyss than they are to find a novel protein fold. Why? Because such novel folds are so extraordinarily rare in sequence space. Moreover, as Meyer explained in Darwin’s Doubt, as mutations accumulate in functional sequences, they will inevitably destroy function long before they stumble across a new protein fold. Again, this follows from the extreme rarity (as well as the isolation) of protein folds in sequence space. Recent work by Weizmann Institute protein scientist Dan Tawfik has reinforced this conclusion. Tawfik’s work shows that as mutations to functional protein sequences accumulate, the folds of those proteins become progressively more thermodynamically and structurally unstable. Typically, 15 or fewer mutations will completely destroy the stability of known protein folds of average size. Yet, generating (or finding) a new protein fold requires far more amino acid sequence changes than that. Finally, calculations based on Tawfik’s work confirm and extend the applicability of Axe’s original measure of the rarity of protein folds. These calculations confirm that the measure of rarity that Axe determined for the protein he studied is actually representative of the rarity for large classes of other globular proteins. Not surprisingly, Dan Tawfik has described the origination of a truly novel protein or fold as “something like close to a miracle.” Tawfik is on Coyne’s side: He is mainstream. https://quillette.com/2019/09/29/right-of-reply-our-response-to-jerry-coyne/ Quantum criticality in a wide range of important biomolecules Excerpt: “Most of the molecules taking part actively in biochemical processes are tuned exactly to the transition point and are critical conductors,” they say. That’s a discovery that is as important as it is unexpected. “These findings suggest an entirely new and universal mechanism of conductance in biology very different from the one used in electrical circuits.” The permutations of possible energy levels of biomolecules is huge so the possibility of finding even one that is in the quantum critical state by accident is mind-bogglingly small and, to all intents and purposes, impossible.,, of the order of 10^-50 of possible small biomolecules and even less for proteins,”,,, “what exactly is the advantage that criticality confers?” https://medium.com/the-physics-arxiv-blog/the-origin-of-life-and-the-hidden-role-of-quantum-criticality-ca4707924552
Thus PavelU as usual with your hit and run tactics, you got nothing. In fact since your first two papers actually support ID and contradict Darwinism, you actually got less than nothing. If you had any real clue what you were actually talking about PavelU, these mistakes that you continually make here on UD, mistakes that constantly backfire on you, SHOULD be very embarrassing for you. And should serve as a wake-up call for you that you are not on the right path. bornagain77
Another recent paper that proves protein evolution https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6784791/#!po=0.467290 PavelU
More bad news for ID: Here's a very recent paper that demonstrates evolution with much details: https://elifesciences.org/articles/47155   And here is another: https://www.sciencedirect.com/science/article/pii/S0012160618306870?via%3Dihub PavelU
2019 ScienceDirect evo-devo nonsense?
Genomes vary significantly but the body plans do not of the Caenorhabditis family.The embryogenesis of four species of the Caenorhabditis family are similar.Patterns of cell migration, cell death and differentiation are nearly identical.Mechanism of establishment of left-right axis is conserved during evolution.
the embryonic development of all four Caenorhabditis species are nearly identical, suggesting that an apparently optimal program to construct the body plan of nematodes has been conserved for at least 20 million years. This contrasts the levels of divergence between the genomes and the protein orthologs of the Caenorhabditis species, which is comparable to the level of divergence between mouse and human. This indicates an intricate relationship between the structure of genomes and the morphology of animals.
Definition of intricate by Merriam-Webster 1 : having many complexly interrelating parts or elements : complicated 2 : difficult to resolve or analyze https://www.merriam-webster.com/dictionary/intricate
the general strategy to construct a nematode evolved considerably during evolution.
substantial genomic differences yet the four species are highly similar in external morphology and, as far as known, in their habitat and ecological requirements
this pattern would suggest an origin by allopatric, non-adaptive speciation.
Definition of Allopatric by Merriam-Webster  www.merriam-webster.com/dictionary/allopatric Allopatric definition is - occurring in different geographic areas or in isolation
would also agree with an adaptive species formation under at least partly sympatric conditions, driven by a yet-to-be-discovered subtle adaptive (ecological) differentiation.
it appears to be most astonishing that these organisms are apparently characterized by a high substitution rate (Cutter, 2008), genome reductions related to different reproductive modes (Fierst et al., 2015) and a low protein level genomic identity
OLV
Yes, that's a very interesting paper indeed. BTW, here's something you probably understand better than most folks here, including myself: https://ocw.mit.edu/courses/biology/7-91j-foundations-of-computational-and-systems-biology-spring-2014/video-lectures/lecture-2-local-alignment-blast-and-statistics/ OLV
OLV: Wonderful link at #712: Enhancer Features that Drive Formation of Transcriptional Condensates The free Graphical abstract is wonderfully clear: transcription happens often at specific dynamic structures, called here "transcriptional condensates". As seen in the abstract, the main somponents acting to build those condensates are: a) Strong interactions between proteins (Transcription factors, the orange circles) and DNA sites (enhancers). Many TFs, many enhancers. Of course the strong interaction here is mediated by the DNA binding domains (DBD) in TFs, the most conserved functional unit in TF molecules. b) Weak, multivalent protein-protein interactions between TFs (the non DNA binding part of the molecule) and other proteins (coactivators, the blue circles). c) These weak, flexible interactions are mediated mainly by so called "disordered regions" in the involved molecules. d) Finally, phase separation gas a fundamental role in making all that possible. As the authors say: "Our study provides a framework to understand how the genome can scaffold transcriptional condensates at specific loci and how the universal phenomenon of phase separation might regulate this process." Awsome. The usual tradeoff between stability and flexibility, literally "condensed" in amazing dynamic structures! :) gpuccio
Multi-enhancer transcriptional hubs confer phenotypic robustness Enhancer Features that Drive Formation of Transcriptional Condensates Enhancer Priming Enables Fast and Sustained Transcriptional Responses to Notch Signaling Dynamics of Notch-Dependent Transcriptional Bursting in Its Native Context OLV
Organizational principles of 3D genome architecture Chromatin accessibility and the regulatory epigenome Host–transposon interactions: conflict, cooperation, and cooption Highly structured homolog pairing reflects functional organization of the Drosophila genome The genome-wide multi-layered architecture of chromosome pairing in early Drosophila embryos The Role of Insulators in Transgene Transvection in Drosophila Position Effects Influence Transvection in Drosophila melanogaster OLV
Highly interacting regions of the human genome are enriched with enhancers and bound by DNA repair proteins The Cajal Body Protein WRAP53B Prepares the Scene for Repair of DNA Double-Strand Breaks by Regulating Local Ubiquitination OLV
Over on Peaceful Science they are asking for examples for design yielding a nested hierarchy. This is funny because they have never shown that they understand the concept. Universal common descent via gradual processes could never produce a nested hierarchy. There would be too many transitional forms blurring the lines of distinction. That and the fact traits can be lost which would make a descendent appear to be an ancestor. I would love to see them present a nested hierarchy produced by blind and mindless processes. They never will. However, in "Evolution: A Theory in Crisis" Dr Michael Denton started one for designed/ manmade objects under the title of "Transport". Under the "Transport" Kingdom you would have the Phyla- Land, Water, Air, and Hybrids. Under the Land Phylum you would have the classes: Surface; underground; hybrids. Each of those would have Orders pertaining to engine type: diesel; steam; electric; petrol; hybrids. The families would be the different variations- trucks; cars; etc. And so on until you come to the specific cars. Under the "Water" Phylum you would have the Classes: surface; submarine; hybrids Under the "Air" Phylum you would have the Classes: fixed wing; helicopter; hybrids So from there you just keep adding the criteria to fill out the rest of your nested hierarchy. ET
Daily Pageviews per Visitor PT. 2.0 UD. 1.4 SW. 1.0 TSZ. 1.0 PS. 1.0 PT has higher daily pageviews per visitor but much lower traffic than UD? How come? Bounce rate: Percentage of visits to the site that consist of a single pageview. UD. 69.0% SW. 100.0% PT. NA TSZ. NA PS. NA NA: information not available Daily time on site: PT. 3:00 SW. 2:50 UD. 1:24 TSZ. NA PS. NA PR has higher daily time on site than SW but it’s lower in the ranking? SW has higher daily time on site than UD but it’s lower in the ranking? How come? jawa
. jawa
Alexa rankings update. Compare to rankings @703. UD. 671,155 SW. 1,076,846 PT. 1,806,946 PS. 4,804,915 TSZ. 6,999,740 Total sites linking in: PT. 961 UD. 572 SW. 433 TSZ. 35 PS. 13 SW, PT and PS have shown substantial improvements in their rankings lately. Specially PS which apparently got a huge boost after GP posted his comments in their website. But TSZ has dropped dramatically. What’s wrong with that blog? could they benefit from inviting GP to explain ID to them too? :) Intriguing that the order according to the total sites linking in doesn’t correspond with the rankings. Why? How to explain that PT has so many more sites linking in than UD but it’s so far under UD in the ranking? jawa
DATCG: Yes, flexibility is a great thing! Moreover, rigidity and flexibility are indeed a continuous space, and important tradeoffs take place between the two. As we will see in my next OP, while immunoglobulins are not, certainly, IDPs, they can have different levels of flexibility, and the tradeoff between flexibility and rigidity perfectly mirrors, in this case, the tradeoff between sensitivity and specificity. In a perfectly engineered general plan. Fascinating stuff! :) gpuccio
OLV: Thank you for the many interesting links. Yes, this NF-kB signaling pathway os really remarkable, and central to many fundamental processes. As I said at #181: "This is a really amazing flexibility and polymorphism. A complex semiotic system that implements, with remarkable efficiency, a lot of different functions. This is engineering and programming of the highest quality." gpuccio
Alexa ranking today: UD. 673,409 1 SW. 1,105,906 2 PT. 2,004,951. 3 PS. 4,809,839. 5 TSZ. 6,991,172. 7 jawa
DATCG, Good to see your interesting comments again. Thanks. OLV
Gpucio #686 Thanks, so glad to see you're still posting these wonderful articles with great details and insights for ID at UD. Unfortunately been very busy, not enough time to participate as I'd like. But always try to catch your post. Great stuff you found in both papers. Thanks! I'd not put IDPs and Immune systems together until recently seeing your future OP and the recent IDP article at EvoNews. But remember thinking last year, why are IDP's "Flexible?" Must be a purpose or role for such a design. Each paper shows a propensity for IDPs usage. re: 2nd paper... "Such multifunctionality is commonly found among intrinsically disordered proteins. So was thinking, Flexible Proteins - can they provide a real role for Immune Systems? For an immune system to prosper, it needs multifunctional "wrachets" that can a) recognize invader b) lock on to binding site c) provide multiple recognition trackers for the Lock-down/rigid binding of the "partner" Otherwise, the increase in number of Rigid Proteins would grow. Would a Rigid Protein(Ordered Protein) only system become unmanageable by total numbers required? What numbers would it take to replace the Flexible Proteins(IDPs) with Rigid Proteins(Ordered Proteins)? In the Immune system? So add, d) reduce cellular traffic flow by reducing number of proteins required to bind e) operational management efficiency - less proteins to locate, transcribe, produce and manage f) power efficiency - reduces overhead and reproduction of proteins, reducing need for higher levels of ATP usage g) which in turn is a thermal issue for reducing overall operational temperature? All those Replications and ATPs whirring to produce more Rigid Proteins, that clog the system = more heat, and a need for greater coolant system OK, had more will try to post later. DATCG
Diet-Derived Fatty Acids, Brain Inflammation, and Mental Health https://www.frontiersin.org/articles/10.3389/fnins.2019.00265/full OLV
Omega-3 polyunsaturated fatty acid attenuates the inflammatory response by modulating microglia polarization through SIRT1-mediated deacetylation of the HMGB1/NF-kB pathway following experimental traumatic brain injury https://jneuroinflammation.biomedcentral.com/articles/10.1186/s12974-018-1151-3 OLV
The Therapeutic Effects of Treadmill Exercise on Osteoarthritis in Rats by Inhibiting the HDAC3/NF-KappaB Pathway in vivo and in vitro https://www.frontiersin.org/articles/10.3389/fphys.2019.01060/full OLV
Valproic acid attenuates traumatic spinal cord injury-induced inflammation via STAT1 and NF-kB pathway dependent of HDAC3 https://jneuroinflammation.biomedcentral.com/articles/10.1186/s12974-018-1193-6 OLV
Looking forward to reading GPuccio’s next OP. pw
Exactly three months after starting this topic this heavily technical OP remains in the top 5 most visited the last 30 days (around 6,400 in three months). jawa
Obviously this NF-kB signaling pathway is quite remarkable, isn’t it? PeterA
Azithromycin Polarizes Macrophages to an M2 Phenotype via Inhibition of the STAT1 and NF-kB Signaling Pathways https://www.jimmunol.org/content/203/4/1021.long OLV
Cargo-less nanoparticles program innate immune cell responses to toll-like receptor activation. https://www.sciencedirect.com/science/article/pii/S0142961219304326?via%3Dihub OLV
A novel curcumin analog inhibits canonical and non-canonical functions of telomerase through STAT3 and NF-kB inactivation in colorectal cancer cells http://www.oncotarget.com/index.php?journal=oncotarget&page=article&op=view&path%5B%5D=27000&path%5B%5D=86095 OLV
NF-kB Signaling Pathways in Osteoarthritic Cartilage Destruction https://www.mdpi.com/2073-4409/8/7/734 OLV
NF-kB Signaling in Ovarian Cancer https://www.mdpi.com/2072-6694/11/8/1182 OLV
Single-Cell Analysis of Multiple Steps of Dynamic NF-kB Regulation in Interleukin-1a-Triggered Tumor Cells Using Proximity Ligation Assays https://www.mdpi.com/2072-6694/11/8/1199 OLV
Adenovirus early region 3 RIDa protein limits NFkB signaling through stress-activated EGF receptors PLoS Pathog. 2019 Aug; 15(8): e1008017. Published online 2019 Aug 19. doi: 10.1371/journal.ppat.1008017 EGFRs activated by stress of adenoviral infection regulated signaling by the NFkB family of transcription factors, the NFkB p65 subunit was phosphorylated at Thr254 RIDa expression was sufficient to down-regulate the same EGFR/NFkB signaling axis https://journals.plos.org/plospathogens/article?id=10.1371/journal.ppat.1008017 OLV
DATCG: Hi, nice to hear from you! :) Good arguments, as usual. And interesting links. By the way, IDPs seem to have a special role in innate immunity, as a tool against viruses and bacteria, as shown by the following papers: Intrinsic disorder in proteins involved in the innate antiviral immunity: another flexible side of a molecular arms race. https://www.ncbi.nlm.nih.gov/pubmed/24184279
Abstract We present a comprehensive bioinformatics analysis of the abundance and roles of intrinsic disorder in human proteins involved in the antiviral innate immune response. The commonness of intrinsic disorder and disorder-based binding sites is evaluated in 840 human antiviral proteins and proteins associated with innate immune response and defense response to virus. Among the mechanisms engaged in the innate immunity to viral infection are three receptor-based pathways activated by the specific recognition of various virus-associated patterns by several retinoic acid-inducible gene I-like receptors, toll-like receptors, and nucleotide oligomerization domain-like receptors. These modules are tightly regulated and intimately interconnected being jointly controlled via a complex set of protein-protein interactions. Focused analysis of the major players involved in these three pathways is performed to illustrate the roles of protein intrinsic disorder in controlling and regulating the innate antiviral immunity. We mapped the disorder into an integrated network of receptor-based pathways of human innate immunity to virus infection and demonstrate that proteins involved in regulation and execution of these innate immunity pathways possess substantial amount of intrinsic disorder. Disordered regions are engaged in a number of crucial functions, such as protein-protein interactions and interactions with other partners including nucleic acids and other ligands, and are enriched in posttranslational modification sites. Therefore, host cells use numerous advantages of intrinsically disordered proteins and regions to fight flexible invaders and viruses and to successfully overcome the viral invasion. And: Abundance and functional roles of intrinsic disorder in the antimicrobial peptides of the NK-lysin family https://www.tandfonline.com/doi/abs/10.1080/07391102.2016.1164077
Abstract NK-lysins are antimicrobial peptides (AMPs) that participate in the innate immune response and also have several pivotal roles in various biological processes. Such multifunctionality is commonly found among intrinsically disordered proteins. However, NK-lysins have never been systematically analyzed for intrinsic disorder. To fill this gap, the amino acid sequences of NK-lysins from various species were collected from UniProt and used for the comprehensive computational analysis to evaluate the propensity of these proteins for intrinsic disorder and to investigate the potential roles of disordered regions in NK-lysin functions. We analyzed abundance and peculiarities of intrinsic disorder distribution in all-known NK-lysins and showed that many NK-lysins are expected to have substantial levels of intrinsic disorder. Curiously, high level of intrinsic disorder was also found even in two proteins with known 3D-strucutres (NK-lysin from pig and human granulysin). Many of the identified disordered regions can be involved in protein–protein interactions. In fact, NK-lysins are shown to contain three to eight molecular recognition features; i.e. short structure-prone segments which are located within the long disordered regions and have a potential to undergo a disorder-to-order transition upon binding to a partner. Furthermore, these disordered regions are expected to have several sites of various posttranslational modifications. Our study shows that NK-lysins, which are AMPs with a set of prominent roles in the innate immune response, are expected to abundantly possess intrinsically disordered regions that might be related to multifunctionality of these proteins in the signal transduction pathways controlling the host response to pathogenic agents.
gpuccio
And to follow-up... on IDPs, Why the word, "disorder?" Press Release... Supposed disorder NOT disorder after all Paper: Phosphorylation orchestrates the structural ensemble of the intrinsically disordered protein HMGA1a and modulates its DNA binding to the NF-kB promoter DATCG
Why use the word, "intrinsically" to describe a Flexible Protein? Because, they must remain ever vigilant to force Design out and keep blind, unguided mutations in.
intrinsically [in?trinz?k(?)l?, in?trins?k(?)l?] ADVERB - in an essential or natural way.
As if these proteins are not designed to be flexible and other proteins to be rigid, ordered and limited match by design. By being close-minded to only a natural cause, they become blind to Design. DATCG
Gpuccio, There's new article out about Intrinsically Disordered Proteins :) cited by Evolution News, woot, woot! So, thought I'd look IDPs up for Immune System which you've said you will be posting on in the future. Because certainly the Immune System needs Flexible Proteins, right? Here's an article on first search:
"The scientists, nine of whom are from Johns Hopkins and one from the University of Houston, set out to answer that question. They chose for their study a disordered protein taken from human cells called glucocorticoid receptor, which regulates genes that control, among other functions, metabolism and immune system response.
https://elifesciences.org/articles/30688 Smiles... IDPs - misnamed are Flexible Proteins, designed to be flexible. :) It goes on, sorry this is off-topic I know, but fun! We live in amazing times of incredible Design Discovery :)
By manipulating segments of the protein in the lab, they were able to show how one portion acts on another, and that the disordered protein creates versions of itself to act almost in place of regulator molecules that govern its activity. The disordered protein uses an activation-repression dynamic—described by Hilser as similar to attracting and repelling magnets—between sections within the disordered chain to regulate its own activities and those of other proteins. "Our work uncovered the language of how these spaghetti pieces communicate," Hilser said. "We showed that those pieces of spaghetti interact with each other sort of like attracting and repulsing magnets, creating a kind of tug-of-war, and that the body can make different versions of the protein to tune which part wins the tug-of-war." Yet to be explained, he said, is how the interactions among these proteins and the sub-sections happen and how all this can ultimately be used to treat disorders that emerge when things go awry with these molecules central to almost all life function.
Looking forward to your post on the Immune System and FI! DATCG
Jawa: Yes, it's been a long time! :) Now I am really working hard at the OP about the adaptive immune system. Indeed, I think it will have to be in two separate parts. The first part will be about the generation of the basic antibody repertoire, let's say the pre-antigen part. The second part will be about affinity maturation and other related post-antigen topics. The adaptive immune system is really an amazing example of protein engineering lab. That anybody in the world really believe that it is not an engineered system is completely beyond my understanding. Moreover, the engineering principles that govern the two different aspects I have mentioned are equally complex and efficient, but completely different in their working and perspective. Really fascinating stuff. And, luckily, much more is understood today than even a few years ago. gpuccio
Is this discussion still in the list? Popular Posts (Last 30 Days)
Controlling the waves of dynamic, far from… (1,290) Does The Bible “condone” slavery, even… (1,220) Sean Carroll: “Nowadays, when a more… (1,067) A world-famous chemist tells the truth:… (968) Researcher: Evidence for early man in Asia half a… (964)
It's almost 3 months old. It has been visited 6,329 times. jawa
GP
Of course we get a signal even after 60 – 100 million years only of separation (humans vs mice or other mammals). However, it is mixed with a non trivial component of passive conservation. It is a valid signal, however, and the different behaviour of different proteins can be observed just the same. I prefer to discuss the vertebrate transition because in that case I am more confident that most, or practically all of the conservation can be attributed to fnctional constraint.
I agree. These bottlenecks where additional functional constraint occur is very interesting. bill cole
Bill: Of course we get a signal even after 60 - 100 million years only of separation (humans vs mice or other mammals). However, it is mixed with a non trivial component of passive conservation. It is a valid signal, however, and the different behaviour of different proteins can be observed just the same. I prefer to discuss the vertebrate transition because in that case I am more confident that most, or practically all of the conservation can be attributed to fnctional constraint. Prp8 is really an amazing example of extreme conservation. TFs like p53 are probably not the best way to assess neutral divergence, because we know quite well that they are usually bimodal: highly conserved functional DBDs and poorly conserved functional sequences implied in complex interactions and regulations. A more "neutral" reference could be some structural protein, like collagen, even if those protein too have probably specific roles in different species. For your convenience, I am adding at the end of the OP another graph, with the evolutionary history, always in terms of human conserved sequence similarity, of these three proteins: Prp8, p53 and collagen. As you can see, both p53 and collagen have a rather "neutral" behaviour, even if probably for different reasons, as discussed. Both behave, in average, as the mean of the whole proteome. However, some minor adjustment seems to take place for both at the vertebrate transition (but that is true, in general, for the whole proteome too), and collagen seems to show some jump after marsupials. But, in general, the behaviour of collagen can be probably attributed mainly to neutral divergence, and for p53 there is probably the effect of a dual, inverse, functional effect. The really amazing thing is Prp8, which, as we know, exhibits practically the same sequence starting at the beginning of metazoa. The cnidarian protein has indeed 92.23% identity with the human protein. Which is then conserved, and only slightly refined, in all other metazoa. Amazing, especially for a protein which is 2335 AA long! gpuccio
GP
For my purposes, alignments bewteen humans and primates, or even other mammals like mice, are not really interesting, because for those species the split is toot recent. If I get 95% similarity between humans and mice, for example, I really cannot say how much of that is functional constraint and how much is passive similarity. Very simply, there has not been enough time for neutral varitaion to act significantly where it can act.
What would you expect from neutral mutations over 60 million years like the split between humans and mice? We get 99.9% identity between humans and mice with prp8. We only get 89% identity with the TTN protein. P53 we get 76% identical positions. This looks to me like different designs or more neutral mutations tolerated. Will look at it more tomorrow. Trying to think about how many neutral mutations we would expect if the protein could tolerate mutations in every location. No need to respond at this point just thinking out loud. bill cole
Bill at #675: Please, read also my #676. For my purposes, alignments bewteen humans and primates, or even other mammals like mice, are not really interesting, because for those species the split is toot recent. If I get 95% similarity between humans and mice, for example, I really cannot say how much of that is functional constraint and how much is passive similarity. Very simply, there has not been enough time for neutral varitaion to act significantly where it can act. That's why I look at least at cartilaginous fish (400+ million years of separations). Of course single celled eukaryotes are very distant, and probably different functional constraints can have a big role there, especially for regulatory proteins. Another problem is that many proteins exist in rather different isoforms, with rather different functions. Let's consider, for example, Prickle 1 and Prickle 2, which I have discussed in an older OP. They are 800+ AA long, but they share, in humans, "only" 711 bits. They are similar, but they are different proteins, with different roles, different tissue specificity, and so on. For comparison, Prickle 1 in humans and Prickle 1 in cartilaginous fishes share 1189 bits of sequence similarity, after 400+ million years of separation. This is a way of detecting how a difference between two similar proteins has certainly, in some cases, a heavy functional meaning. gpuccio
Bill: You say: "We also need to take into account that you are sampling many different applications of the protein across many different complexity of species." Correct. But that is one of the reasons that my measures are underestimating FI, not the opposite. When a specific sequence is conserved through hundreds of millions of years, it's beacuse it is highly functional, and it is needed in practically all the different adaptations of that protein in different species. The parts of the sequence which are not conserved, of course, are a mix of neutral or quasi neutral components, which are relatively free to change, and of other functional, maybe even highly functional, components, which need to change to adapt the protein to different contexts. It is not difficult to conceive those changing functional components. Proteins have many "signals" that are context specific: localization signals, parallel interactions with regulatory networks, phosphorylation sites, ubiquitination sites, and so on. Many of those signals are different in different species, even if the general function of the protein may appear similar. Some proteins are just machines that perform more or less the same thing in different species. But most proteins are not like that. Especially proteins that are involved in complex networks. Like most proteins that I have discussed here. Like most proteins that are long and complex, and which present clear information jumps at specific evolutionary nodes, like the transition to vertebrates. That's why species are different, and each of them is infinitely complex, but in different ways. So, it's not really a "sampling" that is at work in my procedure. I measure what is conserved thorugh long evolutionary times. That is not really a sampling, but a direct measure. You see, the simple truth is that here the background noise is not so important. Big samples are useful to detect a signal against the background noise. But when we observe signals that have the size of hundreds of bits of sequence similarities, the background noise of random similarities is completely irrelevant. Do you know why I uise the bitscore, and not the E value, which is another output os Blast? The E value is in itself a measure of probability, so it would be very simple to use directly that value. From the Blast FAQ: "The Expect value (E) is a parameter that describes the number of hits one can "expect" to see by chance when searching a database of a particular size. It decreases exponentially as the Score (S) of the match increases. Essentially, the E value describes the random background noise. " well, the simple reason that I cannot use the E value instead of the bitscore is that, for any of the protein comparisons that I discuss, which usually have bitscores of hundreds of bits, the E value is simply rounded to zero. For example: ATP synthase beta chain between E. coli and humans: bitscore = 663 bits. E value = 0 RAG1 between cartilaginous fishes and humans: bitscore = 1361 bits. E value = 0 So, as you can see, the bitscore can distinguish between proteins of different conserved complexity, while the E value (which measures the background noise) is flattened to 0 by the algorithm itself. Because the background noise is irrelevant, at those levels of signal. gpuccio
Hi GP Great explanation and reviewing the points. They have gone from you cannot make the measurement to too small of sampling as I pointed out this statistical technique used in polling. Now that population size does not matter if we take 9 samples than the error is 33%. 1/the square root of 9. By my arithmetic the error is less than one bit in each direction. Do you agree? I did some alignment work this am and found some interesting results. Different types of yeast have 70% alignment to prp8. If we look at humans and monkeys the alignment is 100% if we include mice it is 99.9%. When we compare yeast to slime mold we get 56 to 59% alignment same for mammals vs yeast. The issue is the functional constraint in yeast is much lower than in multicellular organisms. The functional constraint in earlier multicellular eukaryotic cells appears to be less than in mammals. This looks like design changes to me. What do you think? bill cole
Bill: I am not sure I understand your point about sampling. In general, it is true that the accuracy of an estimate depends on the sample size and not on the population size, provided that the population is big enough. But I am not sure of what this has to do with the context of our discussions. The problem in evaluating FI is usually to have some good estimate of the target sice. The search size can be usually estimated easily enough. And then same can be said of the probabilistic resources of the system. Remember, we don't need high precision here. We are dealing with very big numbers, and what really matters is the approximate order of magnitude of the target space/search space ratio. In general, functional spaces are extremely small if compared to search spaces, especially as the search space grows exponentially. That is rather evident in language and in software, for example. Instead, neo-darwinists (OK, I have decided that I will go on calling them that way :) ) try desperately to convince us (and probably themselves) that the functional space of proteins os special, that complex functions are connected (as if a watch were connected to a rifle, or to a book) and that complex functions abund so much in that space that they will be found like mushrooms. Or, in alternative, that complex functions are really simple when they first appear, as if a watch could evolve from one gross quasi-gear, a book from some random sign made by the wind, and so on. Of course, this is all mere imagination and dogma. Things are not that way. Complex functions are complex exactly because they require a lot of specific bit configurations. And complex functions do not emerge from simple functions. They emerge out of planning, out of understanding, out of purpose. The 2000 protein superfamilies are not a connected space where it is easy to go from one function to the other. Not at all. It is not a case that such transitions are never observed. They are never observed because they do not exist. These things are so obvious that every thinking person would admit them, if they were not, at the same time, too dangerous for the current ideology. So, we witness the sad show of intelligent people sticking to impossible explanations, and to the obstinate denial of very simple and powerful concepts like FI, and so on. So we see depressing "discussions" to deny that FI exists, or to affirm that it is everywhere, or to build it by summing simple bits of simple functions. How sad. If FI is such a wrong idea, why was Szostak so interested in it? Nobody knows. If function is so abundant in the protein functional space, why did the same Szostak have to build a big random library just to find some weak and useless ATP binding, and then engineer it thorugh random variation and intelligent selection just to get some stromg and equally useless ATP binding? If function is so abundant, why was it impossible for the researchers of the rugged landscape paper to find the functional island of the wildtype sequence? And so on, and so on. All the scenario of protein engineering shows how difficult it is to manifacture a functional protein, even using all our knowledge of biochemical laws and of the protein landscape, and all our most recent technology, and of course all possible imitations of what already exists in living beings. Yes, because, even using design, the best design we are able to produce, it is really difficult to engineer proteins. But certainly it must be us; we must be really dumb, given that complex functional proteins grow in the search space like mushrooms. gpuccio
Apparently T aquaticus thinks that science can be reduced to a parlor game.
Does this sequence have FI, and was it made by a mind? GVGICQSWMFVQKKMDCIGLCIPMIIMMIQGSSAYTKHKMAFTPRNSNLAFMVHHISQWG SGDARVDAEMQINKPQWLNEKNGNTHFNEYFMGDMYDQIGRKTRNQSGDFSGFALPCFFY TEYRNCHRLRIGNHRRNYFTHKYCSKEWPVFPCGPYFSKNDFGIMSYHQYSTALSHECLV TAGEHDHFQSNIKIMMHEYS
CONTEXT is important here. Clearly if we observed those letters scratched into a cave wall we would infer it was the product of a mind. That is an intelligent agent did it. If we received that on a radio channel we would infer there was a mind behind it. Context matters in science. Hopefully PS has something better to offer than sheer desperation. So far they do not. With respect to biology the relevant sequences produce observed functionality. The point is to determine whether said functionality arose by blind and mindless processes or by intelligent design. gpuccio, and many others, say that by measuring the information contained in that functional sequence such a determination can be made. These clowns think we should be able to predict that when handed random sequences, which sequences will be functional. When all that is happening is that we can predict which sequences required intelligent design. ET
Gpuccio Thanks for the response. I completely agree with you. Something came up this morning in a discussion with Rum. His claim is that the possible sequences are too large for and estimate. When I looked at political poll sampling strategy it was interesting that by their methods accuracy is dependent on sample size and not the size of the population. I would like to understand this better as this is very interesting for your method. We also need to take into account that you are sampling many different applications of the protein across many different complexity of species. bill cole
Bill: In the past, I have already made an explicit challenge (to those at TSZ, I believe) to offer any sequence so that I could infer, or not infer, design for it, without any false positive. Friends here at UD were ready to offer many examples of functional sequences for which I readily and correctly inferred design. Interlocutors from the other field tried all sorts of tricks, more or less on the line of T, trying to show, I don't know for what reason, that I could not detect design in all possible cases, or that there were many strings for which I could simply not infer design, without knowing if they were true negatives or false negatives. Which are of course very trivial truths, that I could have agreed upon in advence. Design inference is about inferring with extremely high certainty that some specific object is designed. It is not about recognizing all designed objects. It is not about recognizing all non designed objects. It is a procedure with virtually no false positive (if the threshold of FI is chosen appropriately), and with many, many false negatives. Again, these are really the basics of ID theory. gpuccio
Bill: This is all we can say, IMO: 1) It is a sequence of 200 characters, with no apparent linguistic meaning. 2) It can easily be intepreted as a sequence of 200 AAs (the characters are those of the AA one letter code). 3) I have blasted it, without finding any significant similarity. That said, the sequence appears to implement no specific function of any complexity, as far as I can see. So I agree with you, it has apparently extremely low FI (we could always define some very generic function for a non specific 200 character sequence). Therefore, there is no indication at all to infer design for the sequence. It is a negative, with the information we have about it. Now, there are of course two possibilities: a) It is a true negative. b) It is a false negative. There is some function for it, either as a character sequence or as a sequence of AAs, but we have not recognized it. In both cases, there is no problem. As should be very clear to whoever has some basic understanding of ID and of the design inference, the procedure for design inference is conceived to have practically no false positives, but many false negatives. Examples of false negatives are: a) Designed objects whose function is very simple, and does not qualify for a safe design inference. In a general sense, anything below 500 bits of FI, or any other appropriate threshold for the system that is being considered. b) Designed objects whose function is not recognized by the examiner. Really, it is rather tiresome to have to explain all the basics to people who arrogantly believe that they have already understood all, while they don't even know what they are speaking of. gpuccio
Gpuccio and all. Here is a post from PS I found interesting. Any thoughts? My thoughts are 0 FI unless the function can be clearly defined. Shows that minds are capable of FI and garbage :-) T aquaticus Friendly Atheist Biologist 6m colewd: The functional information inside DNA to start. Ok. Does this sequence have FI, and was it made by a mind? GVGICQSWMFVQKKMDCIGLCIPMIIMMIQGSSAYTKHKMAFTPRNSNLAFMVHHISQWG SGDARVDAEMQINKPQWLNEKNGNTHFNEYFMGDMYDQIGRKTRNQSGDFSGFALPCFFY TEYRNCHRLRIGNHRRNYFTHKYCSKEWPVFPCGPYFSKNDFGIMSYHQYSTALSHECLV TAGEHDHFQSNIKIMMHEYS How did you calculate FI, and how does the calculation test your hypothesis? bill cole
Communication codes in developmental signaling pathways. Review article Li P, et al. Development. 2019. https://dev.biologists.org/content/146/12/dev170977.long
Abstract A handful of core intercellular signaling pathways play pivotal roles in a broad variety of developmental processes. It has remained puzzling how so few pathways can provide the precision and specificity of cell-cell communication required for multicellular development. Solving this requires us to quantitatively understand how developmentally relevant signaling information is actively sensed, transformed and spatially distributed by signaling pathways. Recently, single cell analysis and cell-based reconstitution, among other approaches, have begun to reveal the 'communication codes' through which information is represented in the identities, concentrations, combinations and dynamics of extracellular ligands. They have also revealed how signaling pathways decipher these features and control the spatial distribution of signaling in multicellular contexts. Here, we review recent work reporting the discovery and analysis of communication codes and discuss their implications for diverse developmental processes.
OLV
Hugh Kenneth @664:
UD is a news aggregator site where someone is payed to post links to several articles every day.
Are you sure that that statement is exactly valid? :) The comparison between those websites was done basically motivated by curiosity about the generated traffic according to Alexa data. And it was done mainly is relation to GP having a discussion with folks who contribute to the other mentioned websites. Very simple. However, since you brought up the “news aggregator” parameter, should UD be compared with DR for example? Yes, why not? :) I literally predicted that PS could have a noticeable increase in traffic after GP was commenting in their site. Well, the jump was more impress than I expected. Having a discussion with GP was definitely a boost to PS and PT. The timing of the numbers shown in Alexa confirms this. :) Do you agree? BTW, here’s another way to look at the traffic issue: Popular Posts (Last 30 Days) Rare hominin skull upsets tidy human origins theory (3,669) Controlling the waves of dynamic, far from… (2,920) Once More from the Top on “Mechanism” (1,802) Chemist James Tour calls time out on implausible… (1,337) Does The Bible “condone” slavery, even… (1,205) Are those 5 OPs comparable as apples? jawa
Hugh Kenneth @664:
But even comparing the sites you did is not exactly comparing apples and apples either. UD is a news aggregator site where someone is payed to post links to several articles every day.
Do you mean “news aggregator” like “Drudgereport.com”? :) For example, couldn’t the following websites be comparable as news media? Alexa ranking: wsj.com 472 Drudgereport.com 639 Economist.com 3,183 I agree that there are several interesting parameters shown in the information provided by Alexa. Perhaps you understand them better than I do. Having GP explaining some ID concepts to the guys at PS definitely provides a tempting excuse to somehow compare UD with PS. They seem more closer in terms of covered topics than DR and UD are, even though both are in the category of news aggregators, according to your commentary. Seeing in PS folks who seem active contributors at PT and TSZ definitely creates another tempting excuse to put PT, TSW, PS in the same group. Seeing all those folks engaged in public discussion with GP adds support to such argument of “comparable”. jawa
GP
Sequence comparisons allow us, in the right context, to detect FI even when the function cannot be implemented, like in the case of pseudogenes.
Thanks. This really helped me solidify the concept in my mind. Even though Rum and T did not have the right concept this exercise of explaining it to them really helped. I think they are on the same page with us at this point. We will see. I reviewed (re read the conversation this AM) your discussion with Josh Art and Steve. I think if we do it again we should put the 500 bit discussion to the side as we can conclude whether it is design or evolutionary processes latter once we get agreement that the FI measurement is reasonable. Since Kirk is working to refine his methods right now it would be interesting if we could test correlation and repeatability between your results and his. Just a thought. bill cole
Jawa@659, yes, I do understand your point, and I admit using an absurd example to emphasize my point. But even comparing the sites you did is not exactly comparing apples and apples either. UD is a news aggregator site where someone is payed to post links to several articles every day. From what I have seen, these generally don’t trigger a lot of comments, but they do often lead to related OPs that do. To the best of my knowledge, these other sites limit themselves to relatively infrequent OPs. I would also be interested in what the traffic numbers mean. Are they simply hits, are they unique hits, are they filtered by duration on site? The reason I ask is two-fold. First, I compared the bounce rate of UD against a couple of the others on your list and UD’s bounce rate is much higher, suggesting lower average meaningful engagement by those that visit the sight. Second, there are a relatively few unique commenters who post comments many times every day. People like BA77, KF, Gpucio and ET. They alone are probably responsible for a not inconsequential fraction of UD traffic. Hugh Kenneth
Bill: We absolutely agree on the concept. My way of expressing it is that FI is a property of the function, not of the object. The object may implement the function or not. For the function to be implemented, by definition, all the necessary bits must be there (the complete FI). If an object has almost all the necessary bits, it has most of the FI linked to the function, but it cannot yet implement the function. So, I would not say that the object has zero FI, but that it has almost all, but not all, the FI linked to the function, but cannot yet implement the function. The substance is the same, but this is the terminology I prefer to use. Sequence comparisons allow us, in the right context, to detect FI even when the function cannot be implemented, like in the case of pseudogenes. Really, all this discussion generated by Rumracket and company is completely meaningless, and an obvious and desperate attempt to deny the evident truth. gpuccio
GP, How are the epigenetic markers associated with determining which DNA part can be expressed within a particular cell type? what’s the relation or association between the TFs and the epigenetic markers and the ncRNAs that serve as regulatory elements ? How’s that choreography composed before it’s needed? Thanks. OLV
Does anybody know a plant biologist who could explain this to us here? Who could answer some questions we may have about this paper? Single-cell three-dimensional genome structures of rice gametes and unicellular zygotes https://www.nature.com/articles/s41477-019-0471-3
Our results reveal specific 3D genome features of plant gametes and the unicellular zygote, and provide a spatial chromatin basis for ZGA and epigenetic regulation in plants.
OLV
What does it mean “established de novo”?
the emergence of totipotency after fertilization involves extensive rearrangements of the spatial positioning of the genome1,2. However, the contribution of spatial genome organization to the regulation of developmental programs is unclear nuclear organization is not inherited from the maternal germline but is instead established de novo shortly after fertilization. The two parental genomes establish lamina-associated domains (LADs)4 with different features that converge after the 8-cell stage. the mechanism of LAD establishment is unrelated to DNA replication. paternal LAD formation in zygotes is prevented by ectopic expression of Kdm5b, which suggests that LAD establishment may be dependent on remodelling of H3K4 methylation. Our data suggest a step-wise assembly model whereby early LAD formation precedes consolidation of topologically associating domains.
Genome–lamina interactions are established de novo in the early mouse embryo https://www.nature.com/articles/s41586-019-1233-0 OLV
Hugh Kenneth, Did the comment @650 clarify your confusion? Do you see now what’s the problem with your wrong example? It’s not about validity. It’s all about internet traffic compared between peers. As EricMH well said, ID is a topic that brings more traffic to their websites. Please feel free to ask any question you may still have. Thanks jawa
Hugh Kenneth, did the comment @650 clarify the valid issue you pointed to @649? thanks jawa
EricMH, SW is Professor LM’s website. You may want to look at the full list of website names @619. jawa
@Jawa indeed! I've not spent much time at SW (what is this?) or PT, but I have seen at TSZ the only threads that generate hundreds of comments are about ID. Definitely the case at PS. The only time that site gets interesting is when an ID proponent is 'peacefully' debated by Swamidass and his gang. EricMH
EricMH @652: “ID is the only topic that brings traffic to the PS site” I would extend it: “ID is the only topic that brings traffic to the SW, PT, TSZ, PS sites” :) jawa
Hi GP
The error is simple. FI is a property of the function, the minimal number of specific bits necessary to implement the function. The object can implement the function or not. A defective protein cannot implement the function, because something is still missing, for example one specific AA. However, the FI of the function is always the same.
Thank you The error is conflating functional bits with functional information. -A system can have 495.7 functional bits yet 0 functional information. -If you add 4.3 bits now it goes from 0 bits of functional information to 500 bits of functional information as it now has enough functional bits to function. The important point is functional bits is a separate concept from functional information. Agree? bill cole
GP: “these people are using the concept of FI in a completely wrong way” Perhaps to them it’s the most convenient way? To reduce the crime rate a city government may simply decide to change the parameters used to determine what may qualify as “crime”. :) Is it wrong? No, just convenient. :) There’s no right or wrong. What could be right to one may be considered wrong to another person. And viceversa too. It’s all relative, right? :) That’s why the “tornado” analogy seems so clever. :) And the “starry sky” poetically fantastic! :) They gleaned the benefits from having GP in PS for a short time: their internet traffic has gone up tremendously. :) Smart folks. ;) jawa
ID is the only topic that brings traffic to the PS site :D The irony of PS is that they claim to think ID is bad science, a distraction, and not worth their time. They've even discussed banning the subject. Same with Panda's Thumb and The Skeptical Zone. The spend all their time bashing a topic that provides them all their traffic! EricMH
Bill at #644: The error is simple. FI is a property of the function, the minimal number of specific bits necessary to implement the function. The object can implement the function or not. A defective protein cannot implement the function, because something is still missing, for example one specific AA. However, the FI of the function is always the same. The defective protein already has almost all the specific bits necessary to implement the function, except the last 4.3. So, we can say that it already has most of the necessary FI to implement that complex function. Of course, it cannot yet implement it, because something is still lacking. So, we cannot evaluate its FI by assessing the function, because it is not yet there. However, if we know the functional object, in this case the complete protein, we can easily see that the defective protein is almost the same, at sequence level: so we can know that the necessary FI is almost completely there. That's how we recognize pseudogenes in the genome, for example. That's how we know that a watch is a broken watch, and not a stone, even if it is not working. So, it is obvious that a final transition that adds the last 4.3 bits can make the function appear. There is no mystery in that, as even a child would understand. But not Rumracket, it seems. So, the transition from the defective protein to the complete protein is a simple transition, involving only 4.3 bits of FI. And the function that appears is a complex function, because the rest of the FI was already there, in the starting state. Of course, if we want to know the probability of getting the function from an unrelated state, which is the meaning of FI, we cannot start from the defective protein. We must start from an unrelated state. Otherwise, it is like evaluating the FI of a sonnet starting from the same sonnet with one important typo, that changes the meaning of some important word. Or evaluating the FI of a watch starting from the same watch lacking one small but important gear. Or evaluating the FI of a software starting from the same software blocked by one wrong byte. Reasoning like Rumracket, or his comrades, we could easily conclude that the FI of Excel is one byte! Again, the simple truth is: these people are using the concept of FI in a completely wrong way, because their only purpose is to discredit it. Again, this is shameful and ridiculous. gpuccio
Hugh Kenneth @649: Yes, that’s a valid point. However, Alexa ranking apparently has to do with internet traffic (whatever that means). I compare “peer” (kind of) websites that have some similarities in the topics they cover. In this case we’re dealing with websites that host discussions on similar topics that are rather boringly “nerdy” hence not too popular. :) For example, this very discussion where we’re posting our comments. How many people out there could be interested in quantifying FI in protein families or in NF-kB systems? A TV entertainer or a ML baseball player could earn several times more than several biologists working on research to find the cure for cancer or Alzheimer’s. Which case is more valid? Perhaps more people would find more attractive or entertaining to watch sport events than to discuss biomedical research progress. Hence those are different categories. The example you provided seems to fall in a completely different category than UD, hence it is not comparable to UD. Their respective rankings are not comparable. Actually they differ even in seriousness and truthfulness. UD seems more respectful, rational, truthful than the website you mentioned. Apparently there are over 100 million active websites but they can’t all be compared in one category. It won’t make sense. I provided several examples for illustration, but not all were comparable to one another. UD, SW, PT, TSZ, PS are comparable in certain areas. These days I would expect jawa
Jawa@648, site ranking isn’t always an indication of site validity, integrity or reliability. For example: UD (uncommondescent.com) ..........................698,761 Westboro baptist church (godhatesfags.com).. 50,430 Hugh Kenneth
Site .............rank ......... % UD .............698,761 ........ 1 SW ..........1,287,233 ........ 2 PT ............1,570,928 ........ 2 TSZ ..........3,845,426 ........ 4 PS .............4,522,159 ........ 5 jawa
This OP has been in the top 5 for the last 70 days. Popular Posts (Last 30 Days) 1. Rare hominin skull upsets tidy human origins theory (3,663) 2. Controlling the waves of dynamic, far from… (2,900) 3. Once More from the Top on “Mechanism” (1,783) 4. Chemist James Tour calls time out on implausible… (1,331) 5. Does The Bible “condone” slavery, even… (1,191) jawa
GP @643:
Rumracket’s “argument” is ridiculous. I have tried to explain him why, but of course he is so in love with it that he will never understand.
so in love with it ? Well, in addition to “peaceful” they’re also “loving” people. Both concepts go hand by hand. :) jawa
I knew it, I knew it. Even publicly predicted right here that it’s going to happen. Having GP in a discussion in PS gave that website a tremendous boost in online traffic reflected in the Alexa rankings. BTW, the website PT also went substantially up in the rankings after Art Hunt presented his “tornado” breakthrough idea. Curiously, even SW improved their position noticeably, though professor LM wasn’t directly involved in the discussion at PS. UD dropped in the rankings, probably after many frequent UD visitors went to PS to continue learning about the fascinating “starry sky” FI. :) :) jawa
Hi Guccio I think you are conceptually right but there may be a problem with H and S's definition as they define FI as 0 if there is no function. This is what Rumraket is arguing against. Has this argument created a case that the H and S definition needs modification as it cannot create differentiation comparing an almost sequence to a garbage sequence? Your response was exactly what I expected :-) bill cole
Bill: Rumracket's "argument" is ridiculous. I have tried to explain him why, but of course he is so in love with it that he will never understand. FI is related to a function. The function has that value of FI, which is the minimal number of specific bits that are necessary to implement the function. It is also the probability of generating that function in a random system from an unrelated state. So, the FI of ubiquitin is what it is (about 4.3 bits for each specific AA). And it corresponds to the probability of generating ubiquitin by a random walk from an unrelated state. If you take ubiquitin and change one AA, what you get is simply a protein that cannot implement the function of ubiquitin. But of course it is not an unrelated state. It is a sequence that already has all the specific bits for the ubiquitin function, minus one AA. It is a defective ubiquitin. Of course, the function will be retrieved if that AA is corrected, because all the other specific bits are already there. I really can't see what is the problem. I suppose he problem is simply Rumracket. Again, these people are simply using wrongly the concept of FI, with the only purpose of discrediting FI. Shameful. And ridiculous. gpuccio
Gpuccio Here is comment from PS by Rumraket. Have answered him. I committed to comparing your answer to mine. Could you give this a shot?
Also, if pre-ubiquitin cannot do ubiquitin’s function, and the function only emerges after one mutation in pre-ubiquitin, isn’t the number of sequences that meet the known minimal threshold for function(of ubiquitin), one sequence? The one being created when pre-ubiquitin mutates into ubiquitin. If one sequence meets M(EX)M(E_X) , then the ratio is 1 divided by all of sequence space. So the change in function is maximal for sequence length(which would be roughly 328 bits for a 76 aa protein like ubiquitin), not 4.3 bits.
bill cole
Still I don’t understand why they named it “peaceful” instead of “serious”. Isn’t it serious? :) BTW, is there any known “violent science” discussion blog out there? :) jawa
Gpuccio I would object to using design deniers as a label because is the same tactic Jock used with his Texas sharp shooter fallacy. The problem is that the scientific community has become politicized protecting weak theories like the grand claims of evolution. It's time to push open honest discussion and call out logical fallacies. Mike Behe does a very good job maintaining the high ground despite all the nonsense he has to deal with. I think he is gaining respect because of this. It just takes time. bill cole
Well, their website is called “peaceful science” jawa
Upright BiPed: Yes. And it's interesting that, after having played the good guys for some time, as far as my procedure was discussed, raising only a few "technical" objections (none of them relevant), as soon as the discussion shifted (because they asked) to the role of FI in the design inference, all hell broke loose. False arguments, manipulation, madness. Probably, design deniers is really the most appropriate way to name them, given that, for reasons that I cannot understand, they resent being called neo-darwinists. gpuccio
. The leader of PS attempted to equate biological information with the "information" supposedly contained in the configuration of stars in the night sky -- an not one of his little groupies even batted an eye. And he's an expert. Just ask him. Upright BiPed
Jawa at #633: Why waste all those monkeys and all that time? The verse was obviously generated in a tornado. gpuccio
Jawa at #634: Maybe he realized more than we can imagine. And his starry sky is simply shameful. gpuccio
GPuccio I agree that JS maybe didn’t realize that his starry sky is not even wrong. jawa
GPuccio, Was that sonnet produced by a million monkeys on a million typewriters for a million years ? :) jawa
Jawa: I was really surpised by Hunt's tornado argument. Is it possible that he does not understand? Yes, I suppose it's possible. Because I believe he is, after all, in some good faith. So, he does not understand. By the way, my final confutation of the tornado "argument" is here, at #614. Frankly, Swamidass' starry sky was much worse. And not in good faith, IMO. Perhaps it's easy to state falsities, or even frank lies, when you have some dozen fanatics ready to tell that you are right, or at least to stay silent. How was the sonnet? "Tir'd with all these, from these would I be gone" A treasure not only of functional information, but above all of precious, undaunted truth, in one verse. gpuccio
Does the motto of Art Hunt’s college mean that they won’t admit any student or hire anybody from the Namibian Himba tribe? :) jawa
Did professor Art Hunt patent his brilliant “tornado” argument yet? Does he teach his “tornado” argument at UKY? :) jawa
KF: "Now, ponder why in 90 years since Oparin et al, we are still stuck in speculative just so story myth-making like this." Please, be compassionate! It's really difficult to defent a theory which cannot be defended. As we have seen with the "arguments" at PS. :) gpuccio
OLV: From the second paper you quote at #625:
How broadly expressed repressors regulate gene expression is incompletely understood. To gain insight, we investigated how Suppressor of Hairless—Su(H)—and Runt regulate expression of bone morphogenetic protein (BMP) antagonist short-gastrulation via the sog_Distal enhancer. A live imaging protocol was optimized to capture this enhancer’s spatiotemporal output throughout the early Drosophila embryo, finding in this context that Runt regulates transcription initiation, Su(H) regulates transcription rate, and both factors control spatial expression. Furthermore, whereas Su(H) functions as a dedicated repressor, Runt temporally switches from repressor to activator. Our results demonstrate that broad repressors play temporally distinct roles and contribute to dynamic gene expression. Both Run and Su(H)’s ability to influence the spatiotemporal domains of gene expression may serve to counterbalance activators and function in this manner as important regulators of the maternal-to-zygotic transition in early embryos.
Perfectly appropriate to this OP and thread! :) gpuccio
PavelU: :) gpuccio
OLV: No, I have not yet discussed translation in detail, especially at the ribosomal level. Maybe in the future. The paper you point to seems very interesting. I will read it with great attention. Thank you! :) gpuccio
How to “Run” embryonic development https://thenode.biologists.com/how-to-run-embryonic-development/research/ Distinct Roles of Broadly Expressed Repressors Support Dynamic Enhancer Action and Change in Time https://www.sciencedirect.com/science/article/pii/S2211124719308368?via%3Dihub OLV
The below paper resolves the FI jumps and the OoL issue. Game over. ID fans should look for other things to be interested in.
Do you really believe this? bill cole
PU, lessee:
Abstract We argue for [--> speculative, not a demonstration with solid empirical basis] the existence of an RNA sequence, called the AL (for ALpha) sequence, which may have played a role at the origin of life [--> Having invented the character, you stage the play, and the script]; this role entailed [--> galloping hypothesis, a speculative scenario is now projected as a factual premise implying conclusions] the AL sequence helping generate the first peptide assemblies via a primitive network [--> a speculation now suggested to be historic OoL fact] . These peptide assemblies included “infinite” proteins [--> evading the information and organisation challenge for functional systems using protein nanomachines]. The AL sequence was constructed on an economy principle [--> this is now alleged history] as the smallest RNA ring having one representative of each codon’s synonymy class and capable of adopting a non-functional but nevertheless evolutionarily [--> magic word] stable hairpin form that resisted [--> speculation treated as fact] denaturation due to environmental changes in pH, hydration, temperature, etc. Long subsequences from the AL ring resemble sequences from tRNAs and 5S rRNAs of numerous species like the proteobacterium, Rhodobacter sphaeroides. Pentameric subsequences from the AL are present more frequently than expected in current genomes, in particular, in genes encoding [--> codes and algorithms, empirically, come from what known adequate cause, please? And more broadly aren't codes manifestations of LANGUAGE, a strong sign of intelligent, purposeful action?] some of the proteins associated with ribosomes like tRNA synthetases. Such relics may help explain [--> speculation turned "fact" now "explains" actual observations] the existence of universal sequences like exon/intron frontier regions, Shine-Dalgarno sequence (present in bacterial and archaeal mRNAs), CRISPR and mitochondrial loop sequences.
See the chain of fallacies? Now, ponder why in 90 years since Oparin et al, we are still stuck in speculative just so story myth-making like this. KF kairosfocus
The below paper resolves the FI jumps and the OoL issue. Game over. ID fans should look for other things to be interested in. Here’s the recent paper: Emergence of a “Cyclosome” in a Primitive Network Capable of Building “Infinite” Proteins https://www.mdpi.com/2075-1729/9/2/51 PavelU
GP, This paper may confirm that the fascinating topic of functional complexity and complex functionality associated with proteins is far from being wrapped up. Apparently there’s much work to be done in this area of research. Or at least that’s my impression. This recent paper may try to say something interesting: Nervous-Like Circuits in the Ribosome Facts, Hypotheses and Perspectives Youri Timsit Int. J. Mol. Sci. 2019, 20(12), 2911; by Daniel Bennequin https://www.mdpi.com/1422-0067/20/12/2911/htm
In the past few decades, studies on translation have converged towards the metaphor of a “ribosome nanomachine”; they also revealed intriguing ribosome properties challenging this view. Many studies have shown that to perform an accurate protein synthesis in a fluctuating cellular environment, ribosomes sense, transfer information and even make decisions. This complex “behaviour” that goes far beyond the skills of a simple mechanical machine has suggested that the ribosomal protein networks could play a role equivalent to nervous circuits at a molecular scale to enable information transfer and processing during translation. We analyse here the significance of this analogy and establish a preliminary link between two fields: ribosome structure-function studies and the analysis of information processing systems. This cross-disciplinary analysis opens new perspectives about the mechanisms of information transfer and processing in ribosomes and may provide new conceptual frameworks for the understanding of the behaviours of unicellular organisms. “in unicellular organisms, protein-based circuits act in place of a nervous system to control the behaviour” “because of the high degree of interconnection, systems of interacting proteins act as neural networks [...] to respond appropriately to patterns of extracellular stimuli” “the wiring of these networks depends on diffusion-limited encounters between molecules and for this and other reasons, they have unique features not found in conventional computer-based neural network”. The recent analysis of r-protein networks in the ribosomes of the three kingdoms [2] updates and further enhances this intriguing hypothesis. r-protein networks form complex circuits that differ from most known protein networks, in that they remain physically interconnected. these networks displayed some features of communication networks and an intriguing functional analogy with sensory-motor circuits found in simple organisms. these networks may play at a molecular scale, a role analogous to a sensory-motor nervous system, to assist and synchronize protein biosynthesis during translation. the nerve circuits do not have exactly the same properties that the ribosomal proteins circuits have Facts and Current Paradigms An Extensive Flow of Information Ribosome Choreography during Protein Biosynthesis Ribosome Heterogeneity and Open Questions Hypotheses Ribosome Behaviour The r-Protein–Neuron Equivalence Sensing the Ribosomal Functional Sites Transferring Information Molecular Synapses and Wires Molecular Communication A new Type of Allostery in r-Protein Networks Nervous-Like Circuits in the Ribosome? Number of Nodes, Connectivity and Evolution Functional Organization the r-protein networks may have an equivalent function to nervous systems at a nanoscale. Perspectives In conclusion, our study proposes that the r-protein networks may have an equivalent function to nervous systems at a nanoscale. These molecular systems are proposed to transfer and integrate the information flow that circulates between the remote functional sites of the ribosome to synchronize ribosome movements and to regulate the protein biosynthesis. Thus, r-proteins may collectively integrate the information taken from distinct sites and similar to a nervous circuit, may help to synchronize the correct tRNA recognition, the tRNA translocation and the growth of the nascent peptide. This hypothesis opens new perspectives in ribosome function, in the evolution of complex systems and in biomimetic technological research of nanoscale information transfer and processing. Considering a collective role of r-proteins may stimulate a new conceptual framework for both conceiving new antibiotics and better understanding the origin of ribosomopathies [86]. For example, mutations that impede the communication pathways such as the W255C [65] may have a general role in translation defects and pathologies. Inversely, specifically targeting some pathways in bacterial r-protein networks or sub-networks may help to produce new efficient antibiotics. On the other hand, this study stimulates and further characterizes and compares r-protein networks to understand how they have evolved. This would provide precious insights into the evolution of information processing in living organisms. It may also help to understand the complex behaviours of unicellular organisms that may use similar networks to integrate and respond to external stimuli. Finally, understanding the molecular mechanisms of information transmission and processing would constitute the basis for conceiving new computing nano-devices.
OLV
GP: At the beginning of this OP you cite another OP you wrote over a year ago on the fascinating transcription topic. Have you done an OP on translation? OLV
Alexa global internet traffic ranks Website.....………Rank……..Top % G ……………………1 …….0.0001 //Google AMZN…………….11 …….0.0001 //Amazon BS ………………..24 …….0.0001 // Blogspot B ………………….29 …….0.0001 // Bing WP ……………….54 …….0.0001 // Wordpress BG ………………801 …….0.001 // Biblegateway EN............147,621..........1 // EvolutionNews UD............669,566..........1 // this website SW.........1,323,657..........2 // Sandwalk PT..........2,199,162..........3 // Panda’s thumb TSZ.......3,056,322..........4 // the skeptical zone PS.........7,102,855..........8 // peaceful science BTW, apparently SW was in the top 1% not long ago. At one point PT apparently was in the top 1% too. Why have they dropped so drastically lately? Apparently TSZ was in the top 6% and has improved substantially (up 2%) Apparently PS was in the top 6% but has dropped dramatically 2%. jawa
PeterA at #616: Interesting article about ERV functions. In the new OP about the immune system, I will probably discuss briefly another important example of (probably) transposon derived new fundamental proteins, RAG1 and RAG2. :) gpuccio
Bill Cole: “ I am glad you stuck to your guns or kept your position on this issue.” Well, JS also kept his position on that issue and PS kept its position at the bottom of the ranking. :) Alexa global internet traffic ranks 2019-09-09 .........Rank........Top % ....... Today G ........................1 .......0.0001 AMZN................11 .......0.0001 BS ....................24 .......0.0001 B ......................29 .......0.0001 WP ...................54 .......0.0001 BG ..................801 .......0.001 EN ...........144,044 .......1 ....... 146,408 UD ...........641,186 .......1 ....... 654,747 SW ........1,341,960 .......2 ....... 1,271,258 PT .........2,108,860 .......3 ....... 2,117,137 TSZ .......3,329,139 .......4 ....... 3,054,286 PS .........7,074,812 ........8 ..... 7,099,106 The last few days SW and TSZ had substantial improvements in the ranking. The rest got worse. jawa
GP: Here’s an interesting article in the EN website that cites papers related to something you’ve referred to before: https://evolutionnews.org/2019/09/waste-not-research-finds-that-far-from-junk-dna-ervs-perform-critical-cellular-functions/ PeterA
Very good GP. I am glad you stuck to your guns or kept your position on this issue. bill cole
To all here: A few more words about tornadoes and the role of necessity. Dembski in his explanatory filter has explained very well that necessity must be reasonably excluded as a possible cause of what we observe, if we want to make a safe design inference. Speaking from the point of view of FI, I would like to stress again that FI is a measure of the improbability of obtaining a configuration implementing the function we are observing, as a result of the probability distributions that are working in the system. In that sense, necessity can be seen as the negation of FI: if some configuration is generated in the system as a result of well describable necessity laws, it means essentially that the probability of that configuration is 1. IOWs, there are no alternatives to that configuration in the system. Therefore, the FI is zero (target space = 1; search space =1; -log2 od 1/1 = 0). In a stochastic system, like the weather system on our planet, we can describe in part the evolution of the system by necessity laws, but there are also random configurations that emerge, and that can only be described by probability distributions. The important point is: any computation of FI refers only to the probabilistic component, because, as said, FI is zero when a necessity law acts and can explain what we observe. So, in the case of tornadoes, the point is: how much is necessity, how much is probabilistic? As said, I am not a meteorologist. Buy I believe that we can safely say that many events in the weather system has a strong necessity component, even if they cannot be explained completely by necessity, because many random variables are at play, too. However, weather forecast is a good science, and in many cases successful. Rains, winds, pressures, temperatures, can rather well be anticipated, in a certain measure. Tornadoes are more difficult, certainly. they are, essetially, a special kind of order, destructive order, that can be generated in the system. Again, not being a meteorologist, I quote, this time from a page from the National Geographic: https://www.nationalgeographic.com/environment/natural-disasters/tornadoes/
Tornadoes are vertical funnels of rapidly spinning air. ... Also known as twisters, tornadoes are born in thunderstorms and are often accompanied by hail. Giant, persistent thunderstorms called supercells spawn the most destructive tornadoes. These violent storms occur around the world, but the United States is a major hotspot with about a thousand tornadoes every year. ... What causes tornadoes? The most violent tornadoes come from supercells, large thunderstorms that have winds already in rotation. About one in a thousand storms becomes a supercell, and one in five or six supercells spawns off a tornado. ... Although they can occur at any time of the day or night, most tornadoes form in the late afternoon. By this time the sun has heated the ground and the atmosphere enough to produce thunderstorms. Tornadoes form when warm, humid air collides with cold, dry air. The denser cold air is pushed over the warm air, usually producing thunderstorms. The warm air rises through the colder air, causing an updraft. The updraft will begin to rotate if winds vary sharply in speed or direction. As the rotating updraft, called a mesocycle, draws in more warm air from the moving thunderstorm, its rotation speed increases. Cool air fed by the jet stream, a strong band of wind in the atmosphere, provides even more energy. Water droplets from the mesocyclone's moist air form a funnel cloud. The funnel continues to grow and eventually it descends from the cloud. When it touches the ground, it becomes a tornado.
Well, I would say that is a good explanation. Certainly, it does not explain everything. But it gives a good idea: certain conditions that occur rather often in the system, for example storms, generated tornadoes with a well known probability. "About one in a thousand storms becomes a supercell, and one in five or six supercells spawns off a tornado." Most of this is the result of well understood necessity laws operating in the system. The probability depends of course on some random variables: a storm must be there, temperatures, winds and other variables must form some specific configuration which can generate the tornado. But that specific configuration is not unlikely at all. Indeed, it happens about 1000 times a year in the US. Therefore, any attempt to intepret tornadoes as extremely unlikely events, considering all possibel states of water molecules, is simply wrong. Water molecules, in the weather system, are simply constrained by necessity laws, at least for the most part. What we must consider is the probability of the macrostates in the weather that are associated to tornadoes, and those macorstates are not unlikely at all. Therefore, tornadoes have extremely low FI, and require no design inference. IOWs, we need no tornado engineers. Now, am I critizing Art's analysis of tornadoes, while I make the same errors for proteins and other biological objects? Absolutely not. You see, it should be clear at this point that FI is about the probability of the target space linked to the function. In general, we compute it as -log2 of the ratio target space/search space. But the point is: the search space must be a true search space. IOWs, it must be the set of all possible states that are really available to the system, possibly with some grossly comparable probability. When strong necessity laws determine largely the outcome, as in the case of weather and tornadoes, the search space is not so big, because only a few states are really available to the system: those compatible with the necessity laws operating in its evolution. Water molecules have to follow those laws, to respect those constraints. Again, here it's the macrostates that count, because the microstates are largely constrained. So, a wind is not free to go in any direction. Rain cannot fall if clouds are not there. And so on. Is the system of protein coding genes the same type of system? Not at all. Let's see. Our system, whatever it is, will be essentially a pool of reproducing organisms with a genome. What we call a population. Now, let's say that the population has a definite genome, with its variability, and that the organisms reproduce themselves. What is the first necessity laws that works here? It is the simple fact that an organism reproduces itself by duplicating its genome, as precisely as possible. Why? Not because it is a law of nature, but because the organism is progarmmed to do exactly that thing. But, of course, we observe RV in reproducing organisms. It is the cause of novelty, otherwise genomes would remain essentially the same, with possibly some recombination (HGT, sexual reproduction). RV generates novelty. Now, RV has many forms, but in the end it changes something in the genome. The new genome changes a little. Some difference is generated, because some error takes place in the genome duplication. Now, let's consider protein coding genes. RV can change a nucleotide. That is the simplest case, probably the most common. Let's say that it is a mutation, not an indel, so the only change at protein level is, possibly, that one AA changes. If the mutation is not synonimous. OK? Now, the simple question is: is that variation constrained by strong necessity laws? And the answer is: no. Of course there are ncessity laws at play, and of course some variations are slightly more likely. The rate of variation can be different for different parts of the genome, and so on. But all thses considerations do not change the very simple fact: if we observe RV occurring in a protein coding gene, and we assume that no NS occurs, we are considering a random walk that, in the beginning, will explore just the sequence space that is nearer to the original sequence. But, after a few attempts, we are in the ocean of possible sequences, and there is really no known necessity laws that can favour some sequences over others. Especially if you consider that the necessity laws that cause the variation are acting on nucleotides, and that the functional result is instead an AA sequence, connected to the gene sequence only by the genetic code, which is certainly not known to the biochemical laws that operate the variations as errors in genome duplications. So, the point is: the space of all possible AA configurations for that sequence is really a search space, completely accessible by the system in all its parts as far as only RV is considered. There is no biochemical laws that excludes any posiible sequence, and for all practical purposes we can consider all possible sequences that are unrelated to the initial state as similarly probable. So, it is perfectly correct to use the space of all possible sequences as a search space available to the random system we are considering. These are not water molecules, constrained by well known necessity laws and macrostates. The different sequencesa are not constrained, at a probabilistic level. The only possible constraint here could be NS. Again, as I have always said, NS must be considered separately. But let's see briefly how it can act. If we start from an initial sequence that is functional, NS can only constrain change, and favour the conservation of the sequence. That's exactly what negative NS does in functional proteins. It is also the foundation fo my procedure to estimate FI. In some rare cases, if there is some space for optimization of the existing function, NS can favout that process. A process that, as discussed many times in detail, is severely limited in all known cases. Involving only a few AAs at most. However, optimization of an existing function is not generation of a new complex function. So, in general, negative NS operates against evolution. It just preserves what is already there, and is already functional. But to generate a new function, we must leave what already exists. That's why the origin of new protein families or superfamilies os often better conceived as happening in non functional sequences of the genome. Because there negative selection cannot work. What about positive NS? Of course, if we have a new complex function ready and operating, and if it gives a detectable reproductive advantage, it will possibly be positively selected. Fixed. But our new function, in the case we are discussing, is complex. Let's say that at least 500 specific bits must be found to implement that function, even at its simplest level. How can that function ever appear? Positive NS cannot act until the function appears. OK, if we are neo-darwinists (ehm... design deniers) we can dream that each single AA that is necessary to the new function can be positively selected for other reasons: it gives some increase in fiteness, or it just gets lucky in the lottery of genetic drift! OK. It's possible. But it is equally possible for all other possible variations, those variations that do not lead to our new protein, those variations that essentially lead nowhere, in the ocean of non functional sequences. Which, of course, are always extremely more likely than any functional one. So, why should we get lucky? Why should the intermediate bits of our final function be selected, if there is nothing special in them, nothing that makes them different from all other possible bits of variation, except that we know that, when 500 of them will be exactly what is needed, a new useful function will be there? That dream is the dream of getting Excel from Word, continuing to sell a constantly improved Word. One byte at a time. Good luck. gpuccio
Gpuccio
Can you go from Word to Excel, using just small changes, say one byte at a time, so that each time you can sell the existing software better, because it has become more efficient? That’s exactly what deconstructing a complex function into small naturally selectable steps would be. No surprise they can’t succeed!
Agree. They underestimate the problem that the observed AA sequences bring to selection and drift innovating. The chances are astronomically higher that they will de innovate. :-) bill cole
Bill Cole and others: As I often say: Can you go from Word to Excel, using just small changes, say one byte at a time, so that each time you can sell the existing software better, because it has become more efficient? That's exactly what deconstructing a complex function into small naturally selectable steps would be. No surprise they can't succeed! gpuccio
Bill Cole: "I have not seen empirical evidence that these pathways exist across protein families." Neither have I. All known examples of NS are very short optimizations, usually of some degradation of existing complex proteins, as in antibiotic resistance, as well explained by Behe, who seems to be so despised at PS that I could hardly mention his name without being reprimanded! :) gpuccio
Gpuccio
If they want to go on dreaming that complex proteins can be deconstructed into naturally selectable steps, they just have to show that this is true. I am not aware of those evolutionary pathways, nor of any reasonable motive why they should exist, if not in the imagination of our interlocutors.
The would have to be designed to exist IMO given the size of the sequence space. I have not seen empirical evidence that these pathways exist across protein families. bill cole
Mike1962, “Blind Watchmaker Devotees. BWDs for short.” :) jawa
Gpuccio: Don’t know well how to name them as a whole: design deniers? Blind Watchmaker Devotees. BWDs for short. Blind Evolution Devotees. BEDs for short. Maybe others can suggest summore candidates. mike1962
GP: Here's a list of articles on the topic "Analysis of RNA Polymerase II complexes". Is this related to the current topic in this discussion? Perhaps you cited some of these articles before. Role of integrative structural biology in understanding transcriptional initiation Functional assays for transcription mechanisms in high-throughput Full list. OLV
GP @600:
1) How can low but non trivial levels of FI arise in the extremely complex and organized protein engineering system which is the immune system? My new OP is meant to propose some detailed analysis of that question. 2) How can a complex and extremely efficient protein engineering system like the immune system arise in a system (the biological system) which has no tools to engineer it? You, like me, certainly know that there is only one reasonable answer to that question.
I like the two questions and answers. And I agree. \ OLV
Bill Cole @601: I see your point. Let's wait and see. Thanks. jawa
Very interesting discussion here. PaoloV
Bill Cole: "Joe Felsenstein are arguing for lots of FI being generated by evolution in small increments." Again! Just to clarify for the nth time. FI refers to the improbability of one single function to arise by purely random effects in the system we are considering. So, it is the number of specific bits necessary for that single function to be present. The function is considered as non deconstructable into simpler selectable steps, because if we introduce selection the evaluation must be different. Complex functions are not, as a rule, deconstructable into simpler increasingly functional steps. Least of all into naturally selectable steps. If they want to go on dreaming that complex proteins can be deconstructed into naturally selectable steps, they just have to show that this is true. I am not aware of those evolutionary pathways, nor of any reasonable motive why they should exist, if not in the imagination of our interlocutors. If some naturally selectable intermediate can be shown to exist, it's not a problem. As I have explained, the FI of a decosntructable protein is more or less equal to the FI of the most complex step in the deconstruction. So, given a real decosntruction, it's rather easy to update the computation of FI. It should be clear that our interlocutors have a simple way to falsify not ID itself, but its application to proteins. And that would certainly be a huge success for them, and a severe argument against ID theory in general. or at least against biological ID. They only need to demonstrate that the proteins that appear to be complex, to have high FI, say more than 500 bits, can as a rule be deconstructed into simpler, naturally selectable steps, well in the range of the probabilistic resources of the biological system where they arise. It's not really necessary thatthey do that for all proteins. If they can demonstrate that such a thing can be done for a relevant number of complex proteins, that would be enough. Maybe they could start with one! :) IOWs and very simply: the only case of true FI beyond a relevant threshold, in all the examples we have discussed about safes, is the big safe. That is the only case of an object with 100 bits of FI (which, of course, could be 500, or whatever we like). All the rest is only wishful thinking from our interlocutors, more or less in good faith, but completely wrong. And the simple fact remains that an object with more than 500 bits of FI will never arise in a non biological, non designed system. Tornadoes and starry skies are no counter-examples. And, of course, if we are ready to reason correctly and without dogmas, that means that such an object will never arise in a non designed biological system, too. Once we have clarified that NS being able to do that is only a myth. gpuccio
It doesn't matter how much FI is in a protein because it remains that "they" don't have a mechanism capable of producing proteins. All proteins exist in existing life. Nature cannot produce them. Also, ask them how they determined beta lactamase evolved via blind and mindless processes. ET
Jawa
I doubt it. Good luck.
We found common ground on the last debate. They are not ready to throw out evolutionary theory so they need to keep the plausibility of the current mechanisms alive. What Gpuccio is correctly pointing out is this type of argument leads to inconsistencies. Steve Schaffer and Joe Felsenstein are arguing for lots of FI being generated by evolution in small increments. Art Hunt and Rumraket have seen the light and realize that 500 bits of FI is very unlikely achievable by evolutionary mechanisms and so are arguing for low FI in proteins. The problem is when you put all the augments all together and take their low ball number we still end up with 14000 bits in the spliceosome as it is made of 200 proteins that work together. As such the first transition between prokaryotic and eukaryotic evolution by natural selection and neutral mutations fails. Gpuccio's argument created this divide which they must reconcile. This is perhaps the path foward to common ground. bill cole
OLV: "Now, how much FI could it be associated with the system underlying the antibody maturation that generates about 40 bits of FI? Does this question make sense?" It makes a lot of sense! :) Let's say that there are two different questions, one relatively simple, the other extremely interesting: 1) How can low but non trivial levels of FI arise in the extremely complex and organized protein engineering system which is the immune system? My new OP is meant to propose some detailed analysis of that question. 2) How can a complex and extremely efficient protein engineering system like the immune system arise in a system (the biological system) which has no tools to engineer it? You, like me, certainly know that there is only one reasonable answer to that question. gpuccio
Bill Cole: I am not following at present these new arguments from Art about how low FI would be in proteins, because very simply I have not the time. I am working hard at other things. Maybe in the future. But I would like to make a very simple observation. How quote here what I said at #570: "However, most of them (I have some problem in not using the easy term “neo-darwinists”, after the fierce declaration by Swamidass that no such thing exists any more. Don’t know well how to name them as a whole: design deniers? :) ), most of “them”, I was saying, seems to have as highest priority to discredit FI in all its forms. This strategy takes usually one of many different ways: a) To deny that FI exists b) To deny that it can be measured in any real system c) To affirm that it exists, but there is not a lot of it in proteins, or in biological objects d) To affirm that it exists, and there is a lot of it in all biological objects, even those that are relatively simple e) To affirm that it exists, and there is a lot of it in non designed, non biological objects All of those ideas are wrong. Of course, for different reasons. But it is interesting to observe how the need to deny something can take so many different, and frankly opposing, pathways." Now, I can understand that different people, in their urge to discredit FI, may take different and contrasting ways. But why should the same person use two completely different strategies, each totally inconsistent with the other? And yet, that seems to be Art's choice. I will be more clear. First of all, I will say that, if I were a design denier, I would definitely stick to strategy c: "To affirm that it exists, but there is not a lot of it in proteins, or in biological objects" All the other choices are simply gross logical errors, and have no value. Option c, instead, while wrong, can be reasonably discussed, because it relies on an issue that is still not clarified, IOWs the functional space of proteins and the sequence - function relationship in that space. While I am certain that what we alredy know is more than enough to make the ID point and falsify design deniers, there is certainly some space for discussion. However, Art has recently defended, certainly with honest passion, two points that, to me, seems to be inconsistent. IOWs, both strategy: e) To affirm that it exists, and there is a lot of it in non designed, non biological objects and strategy: c) To affirm that it exists, but there is not a lot of it in proteins, or in biological objects I am referring, of course, to his statements that: 1) There is huge FI in tornadoes, events that are easily generated in the weather system by reasonably understood necessity laws acting on some random configurations (about 1253 tprnadoes per year only in the US) 2) There is very low FI in complex proteins, which obviously have a lot of it, like prp8. Now, while I obviously believe that both those statements are grossly wrong, I really wonder if am am missing something about the general logic here. IOWs, if FI is so common in non designed, non biological objects, like tornadoes, and examples of extremely high FI are daily generated around us, as Art seems to believe (the tornado argument), then it is obviously useless in the design inference. Then, what is the point in demonstrating that FI is so low in proteins and other biological objects? Is Art defending the strange and rather paradoxical idea that FI is extremely high in non designed non biological objects, but it is definitely extremely low in biology? That is really strange, and it is the exact logical opposite of what ID believes. Just a random coincidence? :) gpuccio
Bill Cole,
Maybe we will find some common ground at Peaceful Science with your ideas
I doubt it. Good luck. They would have to add “serious” to “peaceful” before you can have a productive discussion there. So instead of PS it would be SPS. :) Maybe such a name change along with the corresponding attitude change would make them more attractive to visits? :) This is today: Alexa global internet traffic ranks 2019-09-09 .............Top % Google ..............1 .......1 UD ..........641,159 .......1 PT .......2,043,165 .......3 TSZ .....3,330,124 .......4 PS .......7,081,297 .......8 Google stats added for comparison only. jawa
Gpuccio
So, beta lactamase is probably not a conserved protein, but I am not sure what type of conservation we are discussing here. I have not really considered this issue, so I can offer just a few generic ideas. My impression is that some beta lactamases are highly conserved in some bacterial groups, and much less in others. For example, E. coli ampC (P00811) is highly conserved in enterobacteria (700-750 bits), much less in other gammaproteobacteria, like Pseudomonas (about 360 bits). Not much is certain about the evolution of these proteins, so it is rather difficult to intepret these data in terms of FI.
I agree with you that these enzymes are not good examples of long conservation and think that Art did not yet fully comprehend your method. He made the claim today that prp8 had 70 bits of FI. This number is so low given the facts that after he did not understand my attempt to show him how absurdly low this number is I asked that we delay the discussion. We are stuck discussing this because the data we are calculating falsifies the grand claims of evolutionary theory. Evolutionists like Art who have studied and supported this theory for several decades are not quite ready to give it up. That is understandable to me. Like you I don't see any value in prolonging irrational discussions based on trying to protect a theory that is clearly obsolete. I am excited to discuss your new project on the immune system. Maybe we will find some common ground at Peaceful Science with your ideas. bill cole
GP: Please, allow me to post this short "off topic" announcement: Models of Consciousness Conference at Oxford PaoloV
GPuccio, This transcription regulation topic is very interesting. I appreciate how much one can learn from reading this discussion, as well as the older OP you wrote on a similar topic, despite it being a little too technical for me sometimes. However, I did not understand very well the discussion you had with people outside this current topic. It will be exciting to read the new article you're currently working on. pw
GPuccio @590:
I can simply say that they are wrong, and I have tried to explain why as clearly as possible.
Agree. If they don't want to understand, too bad.
Let’s assume for the moment, while waiting for a more details analysis, that antibody maturation can generate about 40 bits of FI. This is not unreasonable. And that it can do that in a few weeks.
Now, how much FI could it be associated with the system underlying the antibody maturation that generates about 40 bits of FI? Does this question make sense? OLV
GPuccio @ 589: Apparently my message @588 was not clear. The link I provided points to a list of recent papers related to tRNA and aaRS which I posted for UB but wanted to share it with you too. My comment was not about the "teaching" topic discussed in that thread. Here are the links to the papers: https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000274 https://www.mdpi.com/1422-0067/20/1/140 https://www.mdpi.com/2075-1729/9/2/51 https://www.mdpi.com/1422-0067/20/12/2911 OLV
Bill Cole: "This is the point and the reason they are making irrational challenges. Your method is an understandable way to comprehend that evolution as it stands today is a very poor way to explain life’s diversity. A philosophic supporter of evolution has to fight it or give up his philosophy." So true. That's what I meant when I said that the discussion was a confrontation between two very different paradigms, and not a peer review of my procedure. That seems to have made them very angry. I really can't understand why. I was invited there, of course, as a defender of ID theory. Everybody there is well conscious of that. And yet, they desperately try to deny it. What is the sense in that? I say it again: I believe that our theory is right, and that their theory (however they like to name it) is wrong. That's why I spend my time writing things here (and there). There is no other reason. I am open to all resonable arguments and suggestions, but I am not seeking a peer review from those renowned scholars who believe things that, IMHO, are completely wrong. Is that my ego? Is that just being polemic? Maybe. But it's what I believe, and I will not change my mind only because those renowned scholars disagree with me. With us. "Art made a mistake saying Axe’s work refutes your hypothesis. " Of course. Axe's work is good evidence that protein space is not what our kind interlocutors think it is. That is absolutely consistent with what I have always said here. "Beta lactamase is not a conserved protein" OK. But I would like to know what beta lactamase we are considering, and what type of conservation. I paste here from Wikipedia a few words about the "evolution" of beta lactamases, just to give a general summary: "Beta-lactamases are ancient bacterial enzymes. The class B beta-lactamases (the metallo-beta-lactamases) are divided into three subclasses: B1, B2 and B3. Subclasses B1 and B2 are theorized to have evolved about one billion years ago and subclass B3s is theorized to have evolved before the divergence of the Gram-positive and Gram-negative eubacteria about two billion years ago.[23] The other three groups are serine enzymes that show little homology to each other. Structural studies have shown that groups A and D are sister taxa and that group C diverged before A and D.[24] These serine-based enzymes, like the group B betalactamases, are of ancient origin and are theorized to have evolved about two billion years ago.[25] The OXA group (in class D) in particular is theorized to have evolved on chromosomes and moved to plasmids on at least two separate occasions." So, beta lactamase is probably not a conserved protein, but I am not sure what type of conservation we are discussing here. I have not really considered this issue, so I can offer just a few generic ideas. My impression is that some beta lactamases are highly conserved in some bacterial groups, and much less in others. For example, E. coli ampC (P00811) is highly conserved in enterobacteria (700-750 bits), much less in other gammaproteobacteria, like Pseudomonas (about 360 bits). Not much is certain about the evolution of these proteins, so it is rather difficult to intepret these data in terms of FI. Here are some other data about beta lacatamases: https://www.asmscience.org/content/book/10.1128/9781555815639.ch22 gpuccio
Gpuccio
The simple truth is: my procedure is vastly underestimating FI. But it’s fine, because even so a design inference is the only reasonable explanation for most observed proteins in the biological world. I am satisfied with that.
This is the point and the reason they are making irrational challenges. Your method is an understandable way to comprehend that evolution as it stands today is a very poor way to explain life's diversity. A philosophic supporter of evolution has to fight it or give up his philosophy. It is most likely a conservative estimate of FI. When we observe a highly preserved long protein like prp8 there is no reasonable explanation how random change arrived at that position. If the protein was able to tolerate 10 different proteins at every position it would still contain 2335 bits. Art Hunt challenged my math but when I asked him for a counter estimate he changed the subject. Art made a mistake saying Axe's work refutes your hypothesis. Beta lactamase is not a conserved protein so it is acting exactly how your method would predict it would act. Catalytic activity despite AA substitutions at different positions. bill cole
To all here: While I work at the immune system (a lot of recent and interesting literature about that topic, big work!), I would like to clarify a very simple aspect. As I have said many times, and as I have summarized in some detail at # 577 here (but also in my posts at PS), the simple fact that, say, 10 different functions, each of it having 50 bits of FI, arise in a system does not in any way mean that 500 bits of FI have been generated, This is a point about which there is no possible negotiation. If people at PS or elsewhere want to think differenty, it's their choice. I can simply say that they are wron, and I have tried to explain why as cleraly as possible. I cannot do more than that. With those who disagree about that point, no further discussion about FI is possible. Of course. So, while I am trying to analyze as precisely as possible how many bits of FI are usually generated by the highly complex protein engineering lab that is the immune system, I would like to clarify a simple point. Let's assume for the moment, while waiting for a more details analysis, that antibody maruration can generate about 40 bits of FI. This is not unreasonable. And that it can do that in a few weeks. Of course, that happens many times in any individual, in the course of life. So, let's say that 20 antibodies are optimized by the immune lab in a reasonable window of time. In no way that means that 800 bits of FI have been generated. I just means that a system that can generate 40 bits in a few weeks can, obviously, do the same thing many different times. But that is not 800 bits of FI. That's all. gpuccio
OLV: I am not specially interested in teaching ID, but of course any dogmatic restraint of free thought and free learning is intrinsically sad. I believe that most philosophical works shoud be banned from school, according to that attitude. Certainly, many philosophers stated wrong things. But if we don't know what they said, and we are not free to discuss it, nobody can really decide for onself who is right and who is wrong. The final free cognitive choice for all humans. gpuccio
GPuccio, Off topic What do you think of this? https://uncommondesc.wpengine.com/education/demand-for-a-ban-on-teaching-creationism-in-welsh-schools/#comment-683607 Thanks OLV
This is a little confusing: Professor Art Hunt had a very successful “tornado” argument presented against GP’s teaching mission @ PS Apparently Dr Hunt’s brilliant “tornado” argument punched a hole in GP’s explanation, but apparently that brilliant contribution by Dr Art Hunt didn’t make a major difference in helping to improve the ranking of the websites where Dr Hunt and his party comrades post their clever comments. Why? Did I miss something in the picture? :) Alexa global internet traffic ranks and % 2019-09-08 ......Rank.......Top % Google ................1 .......1 UD ..........641,587 .......1 PT .......2,044,027 .......3 TSZ .....3,333,476 .......4 PS .......7,093,074 .......8 Apparently there are over 100 million active websites out of over 1.5 billion registered websites (most inactive) A few days ago PT was in the top 2% but now it dropped to the 3%? Apparently earlier this year PT was just a few 100K websites behind UD, both in the top 1%. But somehow PT has kept dropping. This is strange considering that their opinions seem to be more in synch with the popular trends lately? Is this right? Maybe I got this wrong ? :) Does that mean that both Google (ranked 1) and UD are in the top 1% of the active websites? :) jawa
GPuccio @585:
When we are not discussing in any way all the regulatory FI that is absolutely necessary for even the simplest and isolated functional protein to work.
...to work and even to get synthesized to begin with, as you explained so well a year ago in your own OP on transcription that you cleverly cited at the beginning of this OP. It could be only me, but I have to admit that I still don’t understand why it’s so difficult for those intelligent highly educated people at PS to understand the clear cases that GPuccio explained. Am I missing something in the picture? Regarding the next OP on the fascinating “immune system” topic, it seems like with every new OP you set the bar higher for yourself. It would take quite a creative effort to write something as interesting as the analogy with the surfers in this OP. Let’s wait and see. :) OLV
Bill Cole: It's difficult to discuss with those who don't want to listen. I have said many times that what I measure is not the exact number of functional sequences. It is a good estimator. People seem to forget very easily, on that side, that we are discussing logarithmic/exponential scales here. Now, we have used as a rule a threshold of 500 bits to infer design. That's something! With 240 bits as a higher threshold of the probabilistic resources of our planet (a very generous higher threshold). Now they argue that my estimator of FI could overestimate the true value. When I am using the Blast algorithm which, for identical aminoacid sequences, gives at most 2,2 bits per aminoacid site, when the full information value of one specific aminoacid site is of course 4.3 bits. Try to see how much is the difference, in a logarithmic scale. When, as explained, my method can only measure the conserved FI, and completely ignores the FI which is species or class specific, and that certainly is a relevant part of the whole FI in a protein. When we are not discussing in any way all the regulatory FI that is absolutely necessary for even the simplest and isolated functional protein to work. And OK, there is some space for optimization. Maybe the alpha and beta chains of bacterial ATP synthase started simpler, and then were oprimized by some NS. Maybe. Maybe not. For what we know, we observe the highly specific sequences that are almost the same in bacteria as in humans. But let's say that those sequences started different. Simpler, less functional. We have about 600 conserved aminoacids in the two chains. Which are only part of a bigger complex, in general much less conserved. So, hoe simple do these people believe that the initial protein subunit was? 10 specific AAs? 20? 30? 30 specific AAs are already almost at the very generous edge of probabilistic resources. And all optimizations driven by NS we know of are a few aminoacids at most. Maybe 5 or 10 in the very efficient protein engineering lab that is the immune system (which, of course, I am analyzing in detail at present), and which of course is not an example of RV + NS, but rather of how proteins can be designed by complex systems. But I don't want to anticipate. So, maybe there is some room for optimization. That would mean a few similar sequences with lower, or slightly different, function. And so? And maybe there are a few completely different complex solutions, which increase the target space for the general function os, what? 10 times? 100 times? 100 times is less than 7 bits. Wow! These people seem not to realize that, when I compute, by my procedure and using the Blast algorithm, a FI of 663 bits to the beta chain of ATP synthase blasting the human form against E. coli, with 335 identities and 383 positives in a 529 AA long sequence, I am proposing that the target space is 2^1653. Indeed, the potential information value for such a sequence is 2^2286, therefore 663 bits of FI correspond to an extremely big target space: 2^1653 sequences that can be functional in that context. Oh, but our interlocutors are worried that I may have missed a couple of similar sequences, or a few AAs of optimization! The simple truth is: my procedure is vastly underestimating FI. But it's fine, because even so a design inference is the only reasonable explanation for most observed proteins in the biological world. I am satisfied with that. gpuccio
Gpuccio Re interesting exchange at PS. rt Arthur Hunt Plant Biologist 23h colewd:
Curiously enough, Axe’s work refutes @colewd. Axe, if you may recall, crafted a beta-lactamase that was quite unlike any that is seen in databases. His work shows that “preservation” is not a good way to estimate the numbers of functional sequences, and thus FI.
colewd Bill Cole Uncommon Descent 12m Art: Curiously enough, Axe’s work refutes @colewd. Axe, if you may recall, crafted a beta-lactamase that was quite unlike any that is seen in databases. His work shows that “preservation” is not a good way to estimate the numbers of functional sequences, and thus FI.
Beta lactamase is not highly preserved.
bill cole
GPuccio, That’s exciting news. Thanks. I look forward to reading your next OP. OLV
To all here: I am definitely working at a new OP about the immune system from the point of view of FI. Fascinating topic! :) It will include answers about catalytic antibodies, affinity maturation, the formation of the basic repertoire, and so on. I don't know how much time it will take, but I am working at it. gpuccio
Groovy :) Looks like some interesting new post to read Gpuccio. Good to see you! Hopefully will read this soon. Thanks! DATCG
What does GAE mean?
Genealogical Adam and Eve. bill cole
GP @570:
I am not aware of any direct answer to that, or to the final direct questions I offered there.
Well, maybe their response was to shutdown the whole discussion and hide their embarrassing failure away from potential viewers. :) jawa
PeterA: "Is it the Texas… something?" The Texas something, yeah! :) gpuccio
To all here: Wrong examples of FI computation: This is a more important point, and I see that most people at PS are convinced of this wrong approach. And they are convinced that their wrong way of conceiving and computing FI is a confutation of ID theory. Well, they are almost right about that: the truth is that their wrong ways of conceiving and computing FI is a confutation of thir wrong ideas of what ID theory is. They are doing all by themselves: wrong understanding of tyhe theory, confutation of their wrong understanding. I have tried many times to explain that FI is simply the computation of the minimal number of bits necessary to implement some well defined function. IOWs, it is a property of the function. What is a function? A well defined description of something that can be done using the object. Now, there are simple functions and complex functions. Simple functions have only a few bits of FI, complex functions have high values of FI. As I have discussed in my first OP about FI, linked many times, a stone form a beach can implement the function of paperweight: maybe not the most elegant, but perfectly efficient is the form and weight of the stone are good enough. The target space her is big. We can find a lot of such stones on the beach. We usually do not find a watch on a beach, unless someone lost it. This is the very simple concept of FI: a stone has low FI, and the functions ot can implement are simple. A watch has high FI, if we define it for the function of measuring time accurately. Well, what do out friend seem to believe? They say that FI can be summed for simple functions, to get high values of FI. Because FI is multiplicative. Is that true? Of course no. I have given the example of the safes just to show that. But I will give here another one, even simpler. Let's go back to the beach. Let's say that a stone of appropriate weight and form has about 1 bit of FI: IOWs, in average, one stone out of two is good. It's like the small safes in my other example, where one solution out of two was correct. So, a boy who wants to sell simple paperweights to tourists go to the beach, and starts looking at the stones to choose those that are good for his trade. After ten minutes, he gathers 100 of them, and he goes home, happy. According to the reasonings of our "friends", the result of the search is an example of 100 bits of FI. Of course, that's completely wrong. It's simply a sum of 100 results of very low FI, in a low number of attempts. The FI is low, the probability is low, the required number of attempts is low, the wait time is low. Consistently. Why? Because FI is a measure of the improbability of getting to a result by chance, because of its specific comlexity. There is nothing complex in finding 100 examples of 1 bit stones in a beach. I use the binomial distribution to analyze that type of context.If each correct stone is a success, the boy has a probability of 0.5281742 to find 100 good stones in 200 attempts. An object exhibiting 100 bits of FI, instead, is simply an object so specific in its form and weight, for example, that we have only a probability of 1:2^100, IOWs a probability of 7.888609e-31 in one attempt. The probability of finding 100 such objects in 200 attempts is so low that my software (R) just gives me zero as a result. The reason for that abysmal difference? 100 instances of objects with 1 bit of FI are not an instance of an object with 100 bits of FI. So, how can we avoid that error? It's simple, We mjust simply use the concept of FI as it is, and not try to change it. FI is the specific improbability to find an object with the defined function in a random system, as a result of the random configurations that arise in the system. A few important rules to use FI properly: a) The configurations must not be determined by necessity laws: they must be random configurations really accessible to the system as a result of the laws acting in it. Possibly with comparable probabilities. We will see how important is this rule in the case of tornadoes. b) FI is computed for one object implementing a well defined function, or for a system of objects which are all necessary to implement the defined function. c) Functions must be real functions, IOWs, something independently defined that can be done using the object. d) The function for which we compute FI must be considered as not deconstructable (not decomposable) into simpler functions. IOWs, no such deconstruction must be known or available. Note: as we are discussing empirical science, it is not requested that we have a mathematical or logical demostration that no deconstruction is possible. The empirical absence of known deconstruction will do, of course with possible falsification, consistently with Popper's rule. Let's try to understand how important is this point. Let's go back to the 100 small safes and the one big safe. The 100 small safes are an example of 100 simple results. No high FI there. The 1 big safe is an example of 100 bits of FI in one object. But, of course, Swamidass has tried to confound even this simple context. And I have given some thoughts to his arguments, finding them wrong. I will try to give a few clarifications, to avoid confusion about this point. So, what if we define the function as "having 100 simple functions found" in some random system? No problem with that. It is not really a function, unless you can demonstrate that those 100 simple functions generate a higher meta-function. So, I would define that not a function, but rather the description of a state. However, it is a very likely result, if the rpobablisitc resoruces are adequate. For example, we have seen that about 200 attemps are needed to have something more than 0.5 probability of finding 100 one bit solutions. Even, if it is not a function, we can just the same compute the probability of that state. So, what about the case where those 100 simple functions do generate a new meta-function? Please, follow me with attention here. Let's say that. for some strange reason, the ordered bits that are the right solutions to open the 100 small safes (which of course are ordered too) are exactly the solution to open the big safe. So, the thief opens the 100 safes, writes the sequence of the bits, then goes to the big safe, opens it, and goes home doubly satisfied! Has he found a 100 bit function? Apparently, yes. He has opened the big safe. How did he do that? In such a short time? The simple asnwer is: because the 100 bit function was deconstructable into 100 simple functions, each of them of 1 bit, and each of them contributing to the final function. And, and this is the really important part, each of them selectable because of its simple function. Let's understand this point of the selection. Let's say that the thief tries to open the 100 small safes by trying one sequence of 100 bits at each attempt. After each attempt, a numaber of small safes will open. But the thief cannot see that result, or racognize it in any way. He cannot know which bits were "correct". He can have feedback from the system only if he finds the right sequence of bits. then, both the 100 small safes and the one big safe will open, and will be shown to him. Otherwise, the small safes close again, and he will have to try a new sequence. This particular case corresponds to a context where solutions for the simple functions can be easily found, but they are not in any way "selected" and fixed. It can be the case for some simple functions that can be easily found, but give no fitness advantage to the cell. They are not fixed, they undergo no form of recognition and selection, and they will be probably lost in the next few attempts. This is a situation where there are 100 bits of FI. The functioncannot appear and be recognized and selected unless the whole sequence of 100 bits is provided. And there is nothing in the system that can help find it. IOWs, the system can only use its probabilistic resources. If the probabilistic resoruces are mcuh lower than the FI, the result will never be found. So, we come to a statement that Ihave always given here at UD, every time I have debated NS. Some here do not like it too much, but it is perfectly true: Any form of selection, IOWs any form of process that can recognize a configuration which is a step towards a higher meta-function, and fix it, lowers the probabilistic barriers for that final result. IOWs, it lowers the FI of the final function. IOWs, deconstructable functions have lower FI than similar functions which are not deconstructabel. Sometimes much lower. My friends, don't worry about that: the simple truth is that complex functions cannot, as a rule, I would say almost never, be deconstructed in that way. That's why NS has only a minor role in the game. As discussed by me elsewhere. And, of course, by Behe in his wonderful books. (It seems that all my statements about Behe are annoying for the PS people. But I just say what I think) So, in theory, if we can deconstruct into 1 bit steps a compelx function, however complex I would say, and we can in some strange way recognize and fix each individual bit, we can easily find the final function. That's more or less what Dawkins did with the weasel. But there is an even easier and direct way to do it. Just use the final result, the correct phrase, as an oracle, and try a random letter for each position. Then keep the right ones, and try again with the rest. In a very limited number of attemps, you will get the right phrase in all its beauty. Dawkings used a more complex strategy, always using the text that he had to fins as an oracle, but making things a litlle more difficult, just to give the appearance of credibility. Of course, the simplest way, if you have the text, is to copy it directly. But then you are not using randmness at all, and there is no fun. So, we want to find the exact sequence of the beta chain in human ATP synthase? What's the problem? We can easily demostrate that GP is a liar in his silly reasonings about FI. We can just measure, after each random attempt, the sequence similarity of our random sequence to our oracle (of course, the right sequence). We keep only the sequences where the similarity grows. If it goes, down, we go back to the previous state. I am sure that we can find the solution in very reasonable times. After all, that's how antibody maturation is obtained, nore or less. But I am anticipating things. The point is: any form of selection lowers the probabilistic resources. How much? Let's say that our 100 bit solution can be deconstructed into 70 steps. For each of them the right solution, of found, can be selected and fixed, because a smaller safe opens. Each ordered solution os an ordered part of the final meta-solution (the key to the big safe). OK, the deconstruction is as follows: 35 small safes, each with a 1 bit key. 1 intermediate safe with a 30 bit key 35 small safes, each with a 1 bit key How do we compute the FI of the big safe, so deconstructed? It's easy. It is essentially the FI of the single 30 bit safe, plus a trivial contribution from the other ones. The thief will open quickly the first 35 safe, and the last 35 safes. But the problem is the 30 bit safe. That is not easy at all. He needs approximately 1 billion attempts to have a good probability of success. At 1 minute for attempt, that's about 1900 years! So, we can correctly say that the FI of the 100 bit function, so deconstructed, is about 30 bits, with a very trivial contribution from the other bits. So the point is: if our theory is that a complex function can be reasonably accessible to our random system, we have to demonstrate that there exists one deconstruction of the functioninot simpler steps, each of them contributing to the fuinal meta-function, each of them selectable. With a form of selection, fixation or expansion which is available in the system. OK, that's enough for the moment. gpuccio
Bill Cole,
His main objective is to keep both sides talking with the exception of areas that support his work on the GAE.
Why would he want to keep both sides talking? To increase visits to his website or to reach a compromised settlement on the discussed topic? If it’s the former, then he better does it right, because his website seems sinking in the Alexa ranking (close to 4M positions down in the last three months). What does GAE mean? jawa
PeterA, Here’s what GP was apparently referring to: https://en.m.wikipedia.org/wiki/Texas_sharpshooter_fallacy jawa
Now, I am not really convinced that he really believes the things he has stated about this point.
Gpuccio I think you are right here. Many times Josh will argue contrary positions just to stimulate conversations. He often uses logical fallacies that are obvious and I have a hard time believing he does not realize he is doing this. It is very hard to argue against Behe's carefully laid out positions without invoking logical fallacies. His main objective is to keep both sides talking with the exception of areas that support his work on the GAE. bill cole
GP @570: Excellent explanation of the mistaken FI examples raised by some folks at PS. Thanks.
Asking: “I want exactly this configuration” is meaningless, from the point of view of FI. That is drawing the target after having made the shot, and drawing it where no target at all existed before (does that remind you of something?
Is it the Texas... something? :) PS.
However, her I want to deal briefly with option e), Of course, just after I had cautioned him that, as any reasoning person would understand without any need of being cautioned, using the observed bits to build an ad hoc function is the words logical fallacy one can imagine
here? worst? :) PeterA
Hi Olv
Also, Bill Cole -who did a nice work coordinating your teaching mission at PS- just mentioned a possible joint work with professor Behe? Did I get that right?
I would not rule out this possibility especially if we get more traction with GP's ideas within the scientific community. The take away for me is that both concepts, irreducibly complexity and showing high amounts to information gain are compatible as irreducibly complexity shows highlights where extensive random search is necessary. If we see 500 bits we can safely infer design. bill cole
OLV: "Also, Bill Cole -who did a nice work coordinating your teaching mission at PS- just mentioned a possible joint work with professor Behe? Did I get that right?" Well, that's probably Bill Cole's desire! :) gpuccio
To all here: Wrong examples of FI: First of all, I want to clarify why the attacks made to the concept of FI in the last part of the discussion there were completely wrong. It is strange to see that the need to dicredit, or simply deny, FI and its importance is, in the end, the main argument used against ID theory. This is surprising in a way, because some certainly understand what FI is and why it is so important even in the field of our "interlocutors", After all, my concept, and even my definition, of FI is not different from Szostak's. And Arthur Hunt has expressed some interest in the concept, if I understand well his comments. However, most of them (I have some problem in not using the easy term "neo-darwinists", after the fierce declaration by Swamidass that no such thing exists any more. Don't know well how to name them as a whole: design deniers? :) ), most of "the", I was saying, seems to have as highest priority to discredit FI in all its forms. This strategy takes usually one of many different ways: a) To deny that FI exists b) To deny that it can be measured in any real system c) To affirm that it exists, but there is not a lot of it in proteins, or in biological objects d) To affirm that it exists, and there is a lot of it in all biological objects, even those that are relatively simple e) To affirm that it exists, and there is a lot of it in non designed, non biologcial objects All of those ideas are wrong. Of course, for different reasons. But it is interesting to observe how the need to deny something can take so many different, and frankly opposing, pathways. However, here I want to deal briefly with option e), and in particular with the examples provided by Swamidass in response to my challenge to give even one counter-example of FI higher than 500 bits in non designed, non biological objects. This is an important point, because the simple fact that no example exists of high FI in non designed and non biological objects, in all the known universe, is absolutely true, and is one of the important empirical foundations of ID theory. But of course, I had underestimated the zeal of my interlocutors in stating the most irrationl things to make their wrong points. I want to clarify here that I am not saying that about the tornado example, which is wrong but has some reason to be discussed in this context. I will discuss that special case in a later comment. I am referring here to the examples provided by Swamidass, which are beyond any possible excuse. Now, I am not really convinced that he really believes the things he has stated about this point. Knowing that he is an intelligent and competent person, it is difficult for me to believe that. Maybe it was only stratefy. Not good, anyway. However, I have to accept his position as it was expressed. And his position is, as said, beyond excuse. He has given 4 examples. I have considered only the first, because the other ones are probably the same thing. The starry sky. According to Swamidass, the starry sky is a case of extremely high FI. Why? you may ask. And you are perfectly justified in asking why, because there is apparently no reason to think that. The answer: because it implements many functions, such as navigation, orientation, and myth telling. Yes, exactly that. Now, I have answered that in some detail with the arguments that you can find here at #470 and, more in detail, #492. A correct definition of function for the starry sky shows that it has functions, but that they have extremely low FI, because practically any random configuration, except for those highly ordered, can implement them efficiently (navigation etc.). Not happy with that, Swamidass precises that his function is not about any configuration that can implement navigation or myths (the only correct way to define that function independently), but that he wants exactly the configuration we observe, the myths we have, and so on. Of course, just after I had cautioned him that, as any reasoning person would understand without any need of being cautioned, using the observed bits to build an ad hoc function is the worst logical fallacy one can imagine. But that is exactly what he has done. Asking: "I want exactly this configuration" is meaningless, from the point of view of FI. That is drawing the target after having made the shot, and drawing it where no target at all existed before (does that remind you of something? :) ) With this logic, each deck of cards would be an example of high FI. Ah, but I suppose that is exactly what our interlocutors want to believe. FI everywhere, which is the same, after all, of having FI nowhere. So, I answered that with a rather hot comment, that you can find at #507. Including a confutation of Swamidass' false accusation to me of having broken my rules using the bits to define functions. Something that I have never done. I am not aware of any direct answer to that, ot to the final direct questions I offered there. So, this is the first serious error: FI does not abound in that type of systems, which could include winds, clouds, the form of continents, and whatever. It is, indeed, almost absent there. Of course we use the form of continents to navigate. But we would do the same if the form were different. Practically any form of the continents can and must be used to navigate efficiently (except maybe those incompatible with navigation. So, FI almost zero, in all those cases. That's it. gpuccio
GP:
I am also preparing some discussion about the immune system, mainly because the subject deserves an update, and also to answer some very wrong statements made at PS about that. I am not sure, but maybe that will go onto a new OP.
Excellent! Thanks! I look forward to reading more OPs from you. Also, Bill Cole -who did a nice work coordinating your teaching mission at PS- just mentioned a possible joint work with professor Behe? Did I get that right? Perhaps a new book is in the oven? :) OLV
OLV: "which to me personally seemed frustrating." It was, from many points of view. But some things were good. I have no regrets. :) gpuccio
GP, Glad to know you rested. Also glad to have you back after your stressful teaching mission at PS, which to me personally sometimes seemed frustrating because the audience did not look very interested in learning. However, your clear message could make some of your interlocutors there to think more seriously about what you told them. Perhaps some of them could even be persuaded to take ID more seriously in the near future? :) It’s encouraging to know that you’re planning a new OP. Thanks OLV
To all here: Hi guys, thank you for your interventions here. I have taken a little rest. Deserved, I dare to say! :) OK, discussing with the people at PS is rather frustrating, especially when the discussion, after an initial appearnce of scientific detachment, took the definite tone of desperate denial. My opinion, of course. So, I think that I will do the following: I will sum up here some of the major differences between my thoughts (and probably those of most people here) and those of most people at PS. As I see them, of course. I am firmly convinced that they are really wrong on those points, and I will explain again briefly the reasons for that. That will include a brief discussion about tornadoes, which is probably the single interesting example made by them (indeed, by Arthur Hunt, who has been, to be frank, probably the best inbterlocutor there. Always in my opinion). I am also preparing some discussion about the immune system, mainly because the subject deserves an update, and also to answer some very wrong statements made at PS about that. I am not sure, but maybe that will go onto a new OP. This work will be done here, in relative peace. Then I will see what is the best way to share it with those at PS, too. But, for the moment, I am writing for you, my friends! :) gpuccio
Bill Cole, Thanks for answering my questions. PeterA
Peter
Did professor JS respond to GP’s comment posted here @554? BTW, you did a nice work promoting and encouraging this interesting debate between GP and the folks at PS. Thanks. PS. It’s not your fault that the PS folks approached it in such a fuzzy manner. You did your part well.
-A few back and forth by Steve and Joshua this AM. -I think Gpuccio's work has some real potential. Mike Behe read the initial exchange and agrees. I think the merger of Gpuccio's ideas and Mikes has interesting possibilities. In the discussion above Gpuccio claims that wait time is about the same as FI and I agree for an irreducibly complex system. -Thanks for your kind words. I think the evolutionary position and population genetics are in trouble as the problem Gpuccio is surfacing in real, he is proposing a real measurement and a test, and the idea of adding bits of different functions to calculate FI violates Hazen's and Szostak's definition of functional information. bill cole
Hi Peter
I would like to commend you for broaching this subject, as it stands in contrast to the approaches used by the ID vanguard. I have long been of the opinion that the relevant metric that ID proponents should be measuring is something akin to informational work, which may be like what you describe here. broaching this subject? What subject? the approaches used by the ID vanguard? Huh? the relevant metric that ID proponents should be measuring? Huh?
I will answer the best I can. 1. The subject is the Intelligent design theory. 2. No trying to disqualify evolution by a single protein analysis as some did with Axe's work.https://pandasthumb.org/archives/2007/01/92-second-st-fa.html 3. Measuring information. He is interested in Gpuccio's method to explore as a possibility. bill cole
Out of one side of his mouth Joshua insists he isn't a Neo-Darwinist. And yet his words and actions say that he is. It is a safe bet he doesn't know what then term means as Nathan Lents, Lenski, Dawkins, Coyne, et al, are all Neo-Darwinists. It's sad watching him live a lie... ET
GP @554:
why don’t you try to make some analysis of that type, and let’s see the results? I am ready to consider them.
Excellent suggestion. Let’s see how he responds to this. PS. What is EVD? PeterA
Bill Cole, Did professor JS respond to GP’s comment posted here @554? BTW, you did a nice work promoting and encouraging this interesting debate between GP and the folks at PS. Thanks. PS. It’s not your fault that the PS folks approached it in such a fuzzy manner. You did your part well. PeterA
Bill Cole, I didn’t understand what you wrote @552. Perhaps my question wasn’t clear enough. Let me try it differently: Did you understand what Art Hunt meant by the below quoted comment (specially the highlighted text)?
I would like to commend you for broaching this subject, as it stands in contrast to the approaches used by the ID vanguard. I have long been of the opinion that the relevant metric that ID proponents should be measuring is something akin to informational work, which may be like what you describe here.
broaching this subject? What subject? the approaches used by the ID vanguard? Huh? the relevant metric that ID proponents should be measuring? Huh? Emphasis added. Thanks. PeterA
Us lurkers are very much thankful to you, gpuccio. This has been most interesting and clarifying to my own understanding. And thanks to the others involved too. Keep up the great work. mike1962
I was just saying what they already have been saying. ET
ET: There is no Sharp Shooter Fallacy. The functions exist independently and objectively, we are not inventing them. Period. gpuccio
OK, even if you could get a new gene by chance- that is a gene with a start codon, stop codon, a nucleotide sequence between them and a binding site- if it doesn't code for the right sequence of amino acids it won't fold. And even if it does fold that isn't any guarantee it will be functional. That "sharp shooter fallacy" may not be anything of the kind. The sad part is they think that because there wasn't a target, it somehow, magically, makes it even odds of happening. "Oh, there wasn't any target, you IDiot. It all just happened. And we know that because there it is. The odds of you being you are ginormous and yet here you are- your ancestors could have never figured the odds of you being here and here you are. So your probability arguments are ignorant." That is the PS POV summary of events ET
Swamidass at PS:
In a non-decomposible system (1 safe, with a 100-bit combination), the wait time is 2^100.
OK, I will try to simplify this point. FI, if correctly understood and applied, is related to the wait time. More or less as 2^FI The point is, FI is the number of bits necessary to implement one well defined function. Without those bits, the function simply does not exist. That means that the function is treated as non decomposable. Therefore, the wait time is approximately 2^FI. Therefore, FI, used correctly, expresses the probability of finding the function in a purely random system, if no necessity intervention, like NS, intervenes. That is the purpose of FI. That is the reason it is useful. Now, if the function can be demonstrated to be decomposable, FI must be analyzed taking into account the decomposition. Which, in a biological context, means the effects of NS. It is not true that decomposition of a function has nothing to do with selection. In the case of the samlle safes, the wait time is very short because the simpler functions are recognized as such (the safe opens, and the thief gets the money. In a biological system, that means that the simpler function must work so that it can be recognized and in some way selected. Otherwise, those simpler functions would not change at all the probability of the final result, or the wait time. If the thief had to try all possible combinations of 0 and 1 for the 100 safes, and become aware that something has happened only when all 10 safes are open, then the problem would be exactly the same as with the big safe. So, intermediate function is always a form of selection and as such it should be treated. So, any intermediate function that has any influence of the wait time has also the effect of lowering the FI, if correctly taken into consideration. Moreover, a function must be a function. some definite task that we can accomplish with the object. The simple existence of 10, or 100, simpler functions is not a new function. Not from the point of view of FI as it must be correctly conceived and applied. The correct application of FI is the computation of the bits necessary to implement a function, a function that does not exist without all those bits, and which is not the simple co-existence of simpler functions. IOWs, there must be no evidence that the function can be decomposed into simpler function. That said, 10 objects having 50 bits of FI do not mean 500 bits of FI. And the wait time for a complex function, if FI is correctly applied, is more or less 2^FI. If you want to conceive and apply FI differently, and apply it to co-existing and unrelated simpler functions, or to functions that can be proved to be decomposable, you are free to do as you like. But your application of the concept, of course, will not work, and it will be impossible to use it for a design inference. Which is probably, your purpose. But not mine. So, if you insist that FI is everywhere in tons, in the starry sky, in the clouds, maybe even in the grains of sands of a beach, you are free to think that way. Of course, that FI is useless. But it is your FI, not mine. And if you insist that the 100 safes and the big safe have the same FI, and that therefore FI is not a measure of the probability and of the wait time, you are free to think that way. Of course, that type of FI will be completely useless. But again, it is your FI, not mine. I believe that FI, correctly understood and used, is a precious tool. That’s why I try to use it well. Regarding the EVD, I am not convinced. However, if you think that such an analysis is better than the one performed by the binomial distribution, which seems to me the natural model for bynary outcomes of success and failure, why don’t you try to make some analysis of that type, and let’s see the results? I am ready to consider them. The objection of paralelism in some measure I understand. But you must remember that I have computed the available attempts of the biological system as the total number of different genomes that can be reached in the whole life of our planet. And it is about 140 bits, after a very generous gross estimate of the higher threshold. So, the simple fact here is: we are dealing (always for a pure random system) with at most, at the very most, 140 bits of possible attempts everywhere, in problems that have, in most cases, values of FI much higher than 500 bits for proteins for which no decomposition has ever be shown. Why should parallelism be a problem? Considering all the possible paralllel attempts in all existing organisms of all time, we are still at about 140 bits. OK, I am tired now. Again, excuse me, I will probably have to slow down my interventions. I will do what I can. I would like to deal, is possible, with the immune system model, because it is very interesting. Indeed, I have dedicated a whole OP to that, some time ago. Antibody Affinity Maturation As An Engineering Process (And Other Things) And I think that this too is pertinent: Natural Selection Vs Artificial Selection And, of course, tornadoes, tornadoes… :) Ah, and excuse me if I have called you, and your friends, neo-darwinist. I tend to use the expression in a very large sense. I apologize if you don’t recognize yourself in those words. From now on, at least here, I will use the clearer term: “believer in a non designed origin of all biological objects”. Which, while a little bit long, should designate more unequivocally the persons I have come here to confront myself with. Including, I suppose, you. gpuccio
GP @550: “Oh, it seems that the thread at PS is going to close in one day.” If they don’t do something to improve their Alexa Global Internet Traffic Ranking position, they might have to close the entire website. UD 631,311 627,114 612,722 602,627 602,965 UP 191 K 578 Total Sites Linking In PT 1,732,931 1,736,969 1,743,372 1,592,453 1,628,896 DN 150 K 950 Total Sites Linking In TSZ 3,215,461 3,222,145 3,226,071 3,228,639 3,323,453 DN 830 K 37 Total Sites Linking In PS 7,036,059 7,051,188 7,059,655 7,064,442 7,067,236 DN 3.7 M 12 Total Sites Linking In jawa
Hi Peter
Do you understand the text quoted @524?
He is raising an issue with Gpuccio's method of calculating FI. There was nothing new gained regarding known strengths and weakness of the method. At the end of the day since sequences observed are so long there is almost no window for RMNS to work. Gpuccio's results are shedding great doubt if that window exists at all. bill cole
GPuccio @550:
Oh, it seems that the thread at PS is going to close in one day.
Time for a break? :)
May be they are sure they have reached something final.
Good for them. :)
OK, as I still have many things to say, in case I will continue here.
Welcome back! Thanks! Good for us here! PeterA
To all here: Oh, it seems that the thread at PS is going to close in one day. May be they are sure they have reached something final. OK, as I still have many things to say, in case I will continue here. :) gpuccio
To all here: This is my comment at PS about the question of probabilities. I still have to discuss the specific case of antibody maturation in the immune system. gpuccio (quote): "Let’s state things clearly: 10 objects with 50 bits of FI each are not, in any way, 500 bits of FI." Swamidass:
This is a new one. Probabilities are multiplicative, so information is additive. Information is the log of a probability. So yes, 10 objects with 50 bits of FI each are exactly 500 bits of FI..
OK, let’s clarify this. 10 objects with 50 bits of FI each are not, in any way, 500 bits of FI. 10 objects with 50 bits of FI each are 500 bits of FI only if those 10 exact objects are needed to give some new defined function. Let’s see the difference. Let’s say that there is a number of possible functions in a genome that have, each of them, 50 bits of FI. Let’s call the acquisition of the necessary information to get one of those functions “a success”. These functions are the small safes in my example. The probability of getting a success in one attempt is, of course, 1:2^50. How many attempts are necessary to get at least one success? This can be computed using the binomial distribution. The result is that with 2^49 attempts we have a more than decent probability (0.3934693) of getting at least one success. How many attempts are necessary to have a decent probability of gettin at least 10 successes, each of them with that probability of success, each of them with 50 bits of FI? Again, we use the binomial distribution. The result is that with 2^53 attempts (about 16 times, 4 bits, the number of attempts used before) we get more or less the same probability: 0.2833757 That means that the probability of getting 10 successes is about 4 bits lower than the probability of getting one success. The FI of the combined events is therefore about 54 bits. Why is that? Why do probabilities not multiply, as you expect? It’s because the 10 events, while having 50 bits of FI each, are not generating a more complex function. They are individual successes, and there is no relationship between them. That’s why the statement: 10 objects with 50 bits of FI each are not, in any way, 500 bits of FI. is perfectly correct. Those ten objects have 500 bits of FI only if, together, they, and only they, can generate a new function. In terms of the safes, solving the 100 keys to the samell functions generates 100 objects, each with 1 bit of FI. But finding those 100 objects does not generate in any way 100 bits of FI, because the 100 functional values found by the thief have no relationship at all with the 100 bit sequence that is the solution for the big safe. I hope that is clear. We can rather easily find a number of functions with lower FI, but their FI cannot be summed, unless those functions are the only components that can generate a new function, a function that needs all of them exactly as they are. Please, give me feedback on this point, before I start examining the example of affinity maturation in the immune system. This is not only to Swamidass, but to all those who have commented on this point. By the way, I was forgetting: using the binomial distribution, we can easily compute that the number of attempts needed to get at least one success when the probability of success if 1:2^500 (500 bits of FI) is 2^499, with a global probabilty of 0.3934693 gpuccio
Case 2. Two independent events, each with probability p, and each event is independently usefule, so it can be retained by negative selection when found.
This is too vague to tell what the FI of this event is imo. bill cole
All...here is an post by Dr Swamadass for comments. swamidass S. Joshua Swamidass Confessing Scientist 26m @glipsnort that is not a well defined example. Case 1.Two independent events, each with probability p, and success requires both at same time, and there is no benefit to one alone. Case 2. Two independent events, each with probability p, and each event is independently usefule, so it can be retained by negative selection when found. Case 3. One event with probability p^2. All else being equal, perhaps with some caveats to be clarified: The FI is the same for all three cases (success at all events). Single trial success is identical in all cases: p^2, with FI 2 log p. Evolutionary wait time in Case 1 and 3 is the same: p^2, with FI 2 log p. Evolutionary wait time in Case 2 is much less: p * 2, with FI 2 log p. Case 1 is equivalent to the strictest (and known to be false) version of irreducible complexity (IC1). Even Behe acknowledges that this is not how biology works. For very good reason, modern evolutionary theory works like Case 2, which had far lower wait times than Case 1 and 3. FI does not correlated with wait time! The decomposability of the system breaks this relationship. This result does not depend on fitness landscapes at all, just random sampling (tornado in a junkyard) plus NEGATIVE selection, not Darwinistic positive selection. bill cole
Joshua wrote:
We have read Behe’s three books @gpuccio. We have assessed them carefully. Have you read our response to Darwin Devolves?
I have and all three of you struck out. Swamidass et al., just blindly accept any narrative against Dr Behe, even when the narrative is devoid of science. Dr. Behe didn't take their review seriously. Only the willfully ignorant did. ET
Bill Cole, Do you understand the text quoted @524? PeterA
This discussion is starting to remind me of the "Methinks it is like a weasel" problem. It's one thing to get that sentence. But in reality that sentence only has a function in one and only one literary work. It could never work in any of Mark Twain's books. It could never work in a Hemmingway novel. Could you imagine Mark Antony saying "Friends, Romans, countrymen. Methinks it is like a weasel."? HT "Disinherit the Wind"- I highly recommend anyone and everyone read that play ET
To break this down against Hazen and Szostak's definition given Steves example we need to know: -What is the defined function? An effective antibody -What is the functional information contained in bits. 60 bits or 1e-18 chances a random sequence will solve the problem. As I see it all Steve is generating with new sequences are additional tries at hitting the target. The functional information pertaining to this anti body remains fixed at 60 bits IMO. Thoughts? If Steve had to generate two different antibodies with FI=60 bits that bound to two different lethal hosts and without success the animal would die then I think his math works. - bill cole
Steves latest post: Gpuccio: So, you see, 100 objects with one bit of FI each do not make 100 bits of FI. One object with 100 bits of FI is the real thing. The rest is simply an error of reasoning. Steve:
You are quite correct. My mistake was in treating the 60 bits as representing the probability of finding a particular antibody per infection rather than per B cell. In the former case, my calculation would be correct. (In the 100 safes scenario, the correct analogy would be the probability of unlocking all 100 safes by flipping a coin once as the thief encounters each safe. That probability is indeed the same as that for guessing the 100-bit combination by flipping 100 coins.) But since the 60 bits is per B cell, the probability per infection is much higher. So let’s ballpark some numbers for the real case. We’re assuming the probability of hitting on the correct antibody is ~1e-18, which is 60 bits worth. How many tries do the B cells get at mutating to hit the right antibody? Good question. There seem to be about 1e11 naive B cells in an adult human. Only a fraction of these are going to proliferate in most infections. Let’s say 10% of naive B cells each proliferate 100-fold. That give 1e12 tries at a 1e-18 target, for a probability of randomly hitting the target of 1 in a million per infection. That corresponds to ~20 bits. So each week in this scenario only contributes 20 bits of probability, not 60, and the time to reach 500 bits is 25 weeks, not 8. (Note: this 500 bits represents the same probability as hitting a 500 bit target in a single try.) If my guess of the proliferation is off by an order of magnitude, knock off a few more bits. It still takes less than a year to get to 500 bits, and a lot less than 1e38.
bill cole
glipsnort at PS:
Yes, your math is off by 37 orders of magnitude.
Wow, you guys in the anti-ID field seem to be really fond of this error. Let’s state things clearly: 10 objects with 50 bits of FI each are not, in any way, 500 bits of FI. Which is what Rumracket (and maybe you) seems to believe when he says: If natural selection can add 60 bits of FI in a few weeks, why can’t it add 500 bits of FI over the course of (say) 20 million years? To make things more clear, I will briefly propose again here my example of the thief and the safes, that I used some time ago to make the same point with Joe Felsenstein. It goes this way. A thief enters a building, where he finds the following objects: a) One set of 100 small safes. b) One big safe. The 100 small safes contain, each, 1/100 of the sum in the big safe. Each small safe is protected by one electronic key of one bit: it opens either with 0 or with 1. The big safe is protected by a 100 bit long electronic key. The thief does not know the keys, any of them. He can do two different things: a) Try to open the 100 small safes. b) Try to open the big safe. What would you do, if you were him? Rumracket, maybe, would say that there is no difference: the total sum is the same, and according to his reasoning (or your reasoning, maybe) we have 100 bits of FI in both cases. My compliments to your reasoning! If the thief reasoned that way, he could choose to go for the big safe, and maybe spend his whole life without succeeding. He has to find one functional combination out of 2^100 (about 10^30). Not a good perspective. On the other hand, if he goes for the small safes, he can open one in, what? one minute? Probably less. Even giving one more minute to take the cash, he would probably be out and rich after a few hours of honest work! :slight_smile: So, you see, 100 objects with one bit of FI each do not make 100 bits of FI. One object with 100 bits of FI is the real thing. The rest is simply an error of reasoning. gpuccio
ET
Ask them for the methodology used to determine that blind and mindless processes- NS, drift, constructive neutral evolution- produced the differences in the proteins gpuccio’s methodology says required design intervention. Then we can all compare to see which is the more robust. Make sure you get the methodology used to determine the mutations were happenstance events. Otherwise we will be hit with another barrage of equivocation.
We know there is no model here. I think it is best not to shut down the discussion. bill cole
All Abstract
Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define “functional information,” I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA–GTP binding energy), I(Ex) = ?log2[F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function ? Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of function.
This is Hazen and Szostak's definition of FI. FI points to a single function and single configuration. bill cole
To gpuccio an Bill Cole- et al- From an ID perspective the immune system was intelligently designed. And it does exactly what it was intelligently designed to do. That said, if they can demonstrate that the immune system evolved via natural selection, drift, CNE or any other blind and mindless mechanism, then gpuccio's argument is falsified and there isn't any need to talk about what the immune system produces. So all they are doing by using the immune system's products to try to refute gpuccio, is engaging in question-begging. They are using what has to be explained in the first place as something that can produce FI, thereby refuting gpuccio. A similar thing is seen with the type 3 secretory system and bacterial flagella. They try to use one unexplainable structure in an attempt to explain another unexplainable structure. The hypocrisy is clear. Any ID methodology will be made into a strawman and refuted. All the while they get away with the "glossy narrative" methodology. Pathetic, really Ask them for the methodology used to determine that blind and mindless processes- NS, drift, constructive neutral evolution- produced the differences in the proteins gpuccio's methodology says required design intervention. Then we can all compare to see which is the more robust. Make sure you get the methodology used to determine the mutations were happenstance events. Otherwise we will be hit with another barrage of equivocation. Or let them push you around and get nowhere. Unless that helps you refine your argument. Then it is a positive. Contingencies abound... ET
Here I think is the weakness in Steves answer. The events are not improbable because of the resources available to generate 60 bits. Could the same resources generate 500 bits certainly not. If it was a single draw he has a point but it is not. So generating 60 bits one week with the available resources to do so and then generating 60bits the second week is not the equivalent of generating 120 bits. The number of bits is a function of the length of the sequence in bits divided by functional possibilities. I don't think Steve is right but I have to commend his effort here. bill cole
Jawa Here is Steve Schaffner's retort.
Yes, your math is off by 37 orders of magnitude. You’re essentially calculating probabilities of independent low-probability events – create one improbable antibody this week, create another next week. The probability of creating both is the product of the individual probabilities, which is equivalent to adding the logs of the probabilities. So you add the number of bits. Generating 60 bits per week, and assuming independence, means that it will take a little over 8 weeks to generate 500 bits.
bill cole
Jawa
Did they answer that question yet?
Not yet but I proposed this as a rhetorical question just to start the discussion because this is a point of confusion for most evolutionary biologists. If you look at GP's post at PS you will see he answered exactly as I thought he would. bill cole
Alexa global ranking among all websites in the internet Google............1........--..........-- unchanged Bing..............29........UP..........12 last 90 days UD........602,973.......UP.......184 K last 90 days PT......1,628,401.......DN......162 K last 90 days TSZ....3,230,378.......DN......698 K last 90 days PS......7,068,065.......DN......3.72 M last 90 days I still don't understand why PS has experienced such a drastic drop in ranking lately. BTW, there were times when PT was just a couple of 100K under UD. But now they seem to be 1M under UD. Where did all those viewers go? Any clues? jawa
GP @531: Your math is not accurate, because Rumracket said precisely that "natural selection can add 60 bits of FI in a few weeks" but you did your calculations assuming that "60 bits are added in one week" which is very different than the proven facts stated by Rumracket. You changed the numbers to your favor. Texas sharpshooter fallacy! There you go again! :) What you did is unfair to the poor guy. :) Don't I sound like some folks at PS? :) jawa
Bill Cole @529: I see your point and fully agree. Now, what about the other folks you mentioned @523?
This is a good question. I am wondering if @gpuccio, @glipsnort @swamidass or @Joe_Felsenstein can answer it.
Did they answer that question yet? jawa
To all: Here it is: This came to me via Bill Cole. Rumracket:
If natural selection can add 60 bits of FI in a few weeks, why can’t it add 500 bits of FI over the course of (say) 20 million years?
I have no idea of what the “60 bits in a few weeks” is about, but at least we can clarify the math. Let’s say that 60 bits are added in one week, whatever the source of this statement may be. 500 bits is a quantity which is 2^440 times bigger than 2^60. We have about 2^38 weeks in 5 billion years. So, at that rate, NS would be able to add about 98 bits of FI in 5 billion years. I hope I made no errors. Just check the math. gpuccio
Bill Cole at #529: Bravo! I have just answered exactly that. To be precise, 98 bits in 5 billion years! For those with a suspicious mind: I had not yet read Bill's comment, when I answered. :) gpuccio
Hi Jawa He would explain that there are not the resources or time in the biosphere to get to 500bits by 60bit increments. One hundred billion 60 bit increments is about 90 bits. Let me know if you want a more detailed explanation of why this is true. This is the problem evolution faces. The fact that DNA and proteins contain specified sequences they have almost infinite ways to be arranged. bill cole
Bill Cole- It is up to Rumraket et al to show that natural selection can A) add 60 bits of FI and B) that said FI can accumulate on ONE sequence to reach 500. For it to be natural selection they have to demonstrate the thing that is being debated- that all the mutations were happenstance events. ET
Bill Cole @523: What would GP say about that? jawa
Popular Posts (Last 30 Days)
Controlling the waves of dynamic, far from… (2,460) Darwinist Jeffrey Shallit asks, why can’t… (1,447) Are extinctions evidence of a divine purpose in life? (1,323) UD Newswatch: Epstein Suicide (991) “Descartes’ mind-body problem” makes nonsense of materialism (981)
At the top of the list, this discussion has over 1,000 more visits than the second most popular discussion for the last 30 days. jawa
For those readers interested in following GP’s discussion with PS, here are the associated post numbers: 343 Bill Cole 351 GP to Bill Cole 354 Bill Cole 356 GP to Bill Cole and PS 357 Bill Cole 360 Bill Cole 368 GP to PS 369 GP to Davecarlson 370 GP to JS 374 GP to UD 375 GP to JS 381 GP to JS 387 GP to JS 388 GP to JS 395 GP to JS 398 GP to JS 401 GP to Art Hunt 402 GP to Rumracket 406 GP to JS 408 GP to JS 411 GP to PS 416 GP to sfmatheson and JS 431 GP to JS 432 GP to JS 433 GP to JS 434 GP to glipsnort 438 GP to Art Hunt 445 GP to sfmatheson 446 GP to sfmatheson and JS 449 GP to all 451 GP to JS 461 GP to sfmatheson 462 GP to glipsnort 465 GP to JS 466 GP to Art Hunt 468 GP to JS 469 GP to all 470 GP to JS 472 GP to JS 474 GP to JS 487 GP to glipsnort 488 GP to JS 489 GP to JS 490 GP to JS 491 GP to JS 492 GP to JS 495 GP to Art Hunt 503 GP to JS 504 GP to glipsnort 505 GP to sfmatheson 506 GP to sfmatheson 507 GP to JS 516 GP to UB 518 GP on glipsnort 519 GP to sfmatheson 520 GP to all 521 GP on Art Hunt to be continued... PeterA
GP @466: What did Art Hunt mean by this (specially the highlighted text)?
Thanks again, @gpuccio. I would like to commend you for broaching this subject, as it stands in contrast to the approaches used by the ID vanguard. I have long been of the opinion that the relevant metric that ID proponents should be measuring is something akin to informational work, which may be like what you describe here. I suspect that there are serious issues with the approaches one may take to estimate this property, but the concept seems to make sense to me.
Emphasis added. Thanks. PeterA
Gpuccio This question from Rumraket that I did not answer tells us a lot about mis understanding of the design argument. This is an error both Felsenstein and Schaffner have made.
Rumraket:mIf natural selection can add 60 bits of FI in a few weeks, why can’t it add 500 bits of FI over the course of (say) 20 million years? Cole:This is a good question. I am wondering if @gpuccio, @glipsnort @swamidass or @Joe_Felsenstein can answer it.
bill cole
GPuccio @517:
So, at risk of finding myself in a torpedoed battleship and in the midst of a tornado, I will go on my way. Until they allow me to do that, or until I become too tired.
Well, you could always look up at the "starry sky" for some orientation while navigating those rough waters. BTW, who interprets the positions, configurations and brightness of the constellations on the night sky? Conscious agents? Here's an example of an otherwise function-less set of objects that is used as reference for orientation. Sometimes we could use certain rocks as reference while hiking in the mountains. PeterA
Art remains, IMO, one of the best interlocutors there. Art at PS: Agreed. I just expect that @gpuccio will not acknowledge that my post posed some problems, and that the inconsistencies that are apparent are genuine issues. I want to state up front that I won’t be re-hashing the same points over and over. gpuccio: And I really agree with you on that. I hate repeating arguments, if they have been already clearly stated. To agree to disagree is certainly a much better option. Look, I habe been forced to slow down my comments here, because it was really too exacting. But I am available to continue the discussion, if it remains interesting. My only aim is to defend ID theory, as well as I can. And to get interesting and constructive intellectual confrontation with those who think differently. Regarding the problem of FI in non biological, non designed systems, I remain firmly convinced of my statement: there is no example of non trivial values. I don’t think I will discuss further the starry sky, because I believe that I have already shown beyond any possible dount that it is a completely wrong exampler. Unless, of course, Swamidass brings new arguments. But I feel that I still owe you some better clarification about my position regarding the tornado example. I am absolutely convinced that your analysis of that system oin term of FI is wrong, but maybe I have not explained my points clearly enough. So, I will make a last attempt at clarification, but I need some more time for that. After that, I leave the last word to you, and we can peacefully agree to disagree. :slight_smile: One last poiny. I have read in my e-mail a comment by you about my arguments here and the semantic argument. I cannot find it here, maybe it is in the parallel thread. However, I wanted to confirm that you are perfectly right about that: while I believe that the semantic argument is very important and valid, I have not used it here up to now, and I probably won’t. THe reason is simple: the arguments I have presented here do not need it. So, I confirm that all my arguments here, the statement that no high levels of FI can be found in non biological and non designed objects, the estimate of FI in proteins, the estimate of the biological resources of our biological planet, and everything else, do not depend in any way on the semantic argument, at least not in the form that I have expressed them. They could certainly be strengthened by semantic considerations, and I will probably mention in the future discussion a minor aspect that is probably pertinent to my discussion, but essentially all my reasonings here are independent from that. I hope this clarifies the point. Another point is that, while my argument is more easily shown for digital information, it perfectly applies to non digital systems, too. I will clarify that better in my final discussion about tornadoes. gpuccio
To all here: sfmatheson's quote requires some clarification. It seems that I have touched some sensitive cord by saying that I did not consider mt contributions at PS a "peer review" of my methodology, but rather an intellectual confrontation between two very different paradigms, ID theory and neo-darwinism. THat's exactly what I think, and what I have always thought. I have not spent so much time and resources posting there because I needed a peer review of my methodology. And I do not feel guilty because, in someone's opinion, I am wasting a precious opportunbity of being corrected by so many scholars in my errors. While all contributions that help me undertstand things are very much welcome, be they from scholars or passersby, I am well aware that at PS I am facing scholars (or simply people) who are strong adversaries of ID theory, which happens to be what I consider the truth about biological functions. So, there is a strong underlying confrontation of paradigms, however kind and tolerant and understanding we may try to be (and I am not sure that everybody is trying to do that). The simple truth is: I believe that what they believe is seriously wrong, and they believe the same of me. No problem there, constructive scientific confrontation and exchange can happen just the same (at least in theory). But simply denying the existence of the difference in paradigm will certainly not help. gpuccio
sfmatheson at PS:
Oh. I was misinformed about the purpose of the conversation. I thought it was about your methodology,
So why the many questions about ID theory, which I have been answering? gpuccio
glipsnort at PS provides a rather lucid contribution (I am serious):
I would rather focus on larger issues that I think do matter.
Right. I do agree. You raise classical objections, that I know very well. Probably, I have not had to time to discuss them here, up to now.
Sequence conservation is not a valid estimator of the ratio you’re interest in. Conservation tells you nothing about most of the search space; it tells you only about the immediate mutational neighborhood of the existing sequence, which is a vanishingly small fraction of the total. More importantly, it does not give information about the number of nearby states that possess the function in question. Instead, it gives information about the number of states with higher function. (Higher fitness, actually, which need not be the same thing, but that’s a minor concern here.) But in standard evolutionary theory, the theory you’re challenging, adaptation involves passing through states with lower fitness (less functional states) until the local maximum is reached, and not returning to those states. Conservation cannot tell you whether there are less functional but still selectable states nearby in mutation space, and therefore cannot tell you anything about the size of the target space. This alone invalidates conservation as a proxy for the ratio you’re interested in.
This objection can be summarized as follows: we are looking to optimized functions, and they are conserved in such an optimized form. But, of course, we belive that they started much simpler, and evolved gradually to the optimized state by RV + NS. Therefore, the target space for the initial, simpler form of the function must be much bigger". Is that fine? This objection is very reasonable from the point of view of a fervent believer in the powers, of NS, but irrelevant if we consider what NS can really do according to facts. Optimizations are short and scarcely important. New fucntions are complex already in their starting form. Even if some optimization can cerianly occur, and does indeed occur, a complex function is complex, and cannot be decostructed into simpler stpes, each of them naturally selectable. I cannot enter into details about that just now, given the bulk of things that I still have to clarify, but I have discussed this point in great detail in my OP about the limits of NS, alredy lnked here. I would also recommend Behe’s last book, Darwin devolves, which is essentially about this problem (how NS really works). I know that these brief statements will immediately draw hundreds of fierce attacks here. So be it.
The target space you’ve considered consists of a single function, the function of the gene you’re looking at. To the extent that evolution can be considered a search algorithm, though, it is not a search for “the function performed by gene X”. It is a search for any function that will increase fitness. The only target space that will let you assess whether evolution could have produced some gene without design, then, is the space of all possible beneficial functions. Considering the probability post facto of achieving the function that did arise is indeed the Texas sharpshooter fallacy.
No, the Texas sharpshooter has nothing to do with this. This objection is often referred by me as the “any possible function” objection. In brief, evolution is not searching for anything, so it can find any possible function. Therefore, it must be much more powerful than a search for a specific function. Again, this seems to be a reasonable objection from the point of view of a good believer in the neo-darwinian algorithm, but it is irrelevant when we consider facts. First of all, it is not “any possible function”: it is any change that gives some definite reproductive advantage, and can therefor be expanded and fixed, with reasonable probability, by positive NS. That is much more restricted than “any possible function”. Moreover, the number of functions that can really be useful in a context is sverely limited by the complex organization of the context itself. An existing set of complex functions, well organized, can use only a few new functions, which must anyway be well integrated into what already exists. Behe’s book, and known facts, show clearly that in known cases of NS acting, the variation is very simple, and it is variation of some already existing complex structure, with some impairment of its original function, but at the same time some collateral advantage in a specific environment. Like in antibiotic resistance. Again, the main point is, again, that complex functions are already complex even in their minimally complex form. And adding the target space of many complex functions changes only trivially the target space - search space ratio, when we are already above the 500 bits, even a lot above. The key point is: these are logarithmic - exponential values. But, again, I cannot deal with this point in greater detail, for lack of time.
The claim that only processes that incorporate design can generate 500 bits of FI has been challenged by two examples of two biological processes that observably produce large amounts of FI, in cancer and the immune system. Those challenges have not been addressed.
The claim remains valid. The examples offered for non biological objects are simply wrong. I am trying to show why, even if I don’t expect to convince anyone here. This is, it seems, a very sensitive points, and it evokes terrible resistances. Again, so be it. Regarding cancer and the immune system, I will treat those two cases in great detail. If I survive. After all, that is my field, much more than meteorology! gpuccio
ET: "I tried to warn you. Those guys are so desperate they will say anything." Not that I did not expect it. Now I have been kindly cautioned (by Swamidass) that if my argument depends on Behe's books, I just torpedoed my own battleship. I apologize to Behe for that. Unfortunately, I cannot simply ignore that his ideas are correct! So, at risk of finding myself in a torpedoed battleship and in the midst of a tornado, I will go on my way. Until they allow me to do that, or until I become too tired. :) gpuccio
UB: Thank you, as usual, for your contribution. I have received your beautiful e-mail, and I will answer as soon as I find a little of time. I am rather pressed at PS, even if I have decided to slow down my contributions there. The results, as expected, are quickly deteriorating. I had to clarify there that, while I hold in the highest regard the semiotic argument in ID, and in particualr the way you defend it. my arguments presented there up to now do not depend on it. For example, it is perfectly true that non biological, non designed objects never exhibit high values of FI. While the semiotic arguments certainly strengthens the point, it is not necessary to reach that conclusion. The examples brought there, starry skies and similar, and tornadoes, are not counter-examples at all. Those systems show no high levels of FI at all. And I am trying to explain why. Even if the results there will probably be what they will be. gpuccio
UB,
Since when is the information scientist Wolfgang Johannsen an “ID author”?
Who knows? Maybe he hasn't come out of the closet yet. :)
the undeniable distinction between the semantic information contained in the gene, versus the “physical information” projected onto stars, islands, and raindrops in Hunt’s and Swamidass’s irrelevant counter-examples.
Do you think they understand such a distinction?
their counter-examples have absolutely nothing in common with biological information.
Exactly right. jawa
. Since when is the information scientist Wolfgang Johannsen and "ID author"? I bet that would be a surprise to him, but who knows, you learn something new every day. In any case ... Grasping at straws, those men of great character at PS want to now see if they can make GP squirm because I dared to point out the undeniable distinction between the semantic information contained in the gene, versus the "physical information" projected onto stars, islands, and raindrops in Hunt's and Swamidass's irrelevant counter-examples. The promise of yet another tender morsel of rhetoric is apparently more important to their "science" than the well-documented fact that their counter-examples have absolutely nothing in common with biological information. Such rhetoric is invaluable when you have no intentions whatsoever of exposing yourself to empirical evidence anyway. Oh, and while I am here, let me give you a helping hand Mr Schaffner. Upright BiPed
PT ranking had a substantial jump up perhaps after Art Hunt’s successful “tornado” argument against GP. :) But PS kept sinking in the ranking. TSZ too. UD 631,311 627,114 612,722 602,627 578 Total Sites Linking In PT 1,732,931 1,736,969 1,743,372 1,592,453 950 Total Sites Linking In TSZ 3,215,461 3,222,145 3,226,071 3,228,639 37 Total Sites Linking In PS 7,036,059 7,051,188 7,059,655 7,064,442 12 Total Sites Linking In jawa
Faizal Ali:
But science does create models, few if any of which entail the existence of any gods. And to the extent that these models accurately explain and predict the observations we make, it demonstrates that gods are not necessary to explain those observations. No?
Clearly not in the case of evolution by means of blind and mindless processes. There aren't any models for blind and mindless processes producing vision systems. There aren't any models for blind watchmaker evolution, period. Evolutionary algorithms exemplify evolution by telic processes. So could someone please ask him what the heck he is talking about? https://discourse.peacefulscience.org/t/comments-on-gpuccio-functional-information-methodology/7560/259 ET
The stupidity, it burns:
The claim that only processes that incorporate design can generate 500 bits of FI has been challenged by two examples of two biological processes that observably produce large amounts of FI, in cancer and the immune system. Those challenges have not been addressed.
Cancer is a loss of information by any measure. Cancer cells are more primitive than their non-cancerous cells. Joshua's information measure is based on his misunderstanding. It has no basis in reality. The immune system example is just question-begging. You have to show that it arose via blind and mindless processes, which you cannot. If you could then ID would be a moot point. It is a lost cause trying to communicate with PS ET
“Natural selection is the simple result of variation, differential reproduction, and heredity—it is mindless and mechanistic.” UCBerkley. The variation is what has to be heritable. And it also has to be happenstance in nature, ie an accident, error or mistake- not planned or directed. “Natural selection is the blind watchmaker, blind because it does not see ahead, does not plan consequences, has no purpose in view.” Dawkins in “The Blind Watchmaker”. Natural selection is a process of elimination. There is a huge difference between selection and elimination, as Ernst Mayr wrote:
Do selection and elimination differ in their evolutionary consequences? This question never seems to have been raised in the evolutionary literature. A process of selection would have a concrete objective, the determination of the “best” or “fittest” phenotype. Only a relatively few individuals in a given generation would qualify and survive the selection procedure. That small sample would be only to be able to preserve only a small amount of the whole variance of the parent population. Such survival selection would be highly restrained. By contrast, mere elimination of the less fit might permit the survival of a rather large number of individuals because they have no obvious deficiencies in fitness. Such a large sample would provide, for instance, the needed material for the exercise of sexual selection. This also explains why survival is so uneven from season to season. The percentage of the less fit would depend on the severity of each year’s environmental conditions.
That process of elimination is in no way a magical shape-shifting feedback, capable of producing the appearance of design. Natural selection pertains to the individuals and not individual genes or proteins. That is something else that needs to be considered. ET
So, then, do people who lie on behalf of their cause know they are being dishonest? Some do, and some do not. The test comes when they are shown their supporting data is not accurate--do they then stop making that argument? If they do, they are honest people who made an error; and errors are not lies. If they do not, they place ideology above truth. That, unfortunately, is not only common, it is probably the greatest source of mass evil in the world. - Dennis Prager
Sadly they have become what they despise – zealots… Heartlander
I tried to warn you. Those guys are so desperate they will say anything. ET
Swamidass at PS:
@gpuccio do you see how you argument is grounded in microstates, not macrostate? Your objection to me about the start constellations applies here.
With all respect, you are using a strategy which is unfair and unacceptable. Because it generates only confusion in a discussion which is already confused enough. And not, I believe, because of me. In brief, I asked for any possible counter-example to my statement that there exists no object, non biological and non designed, which exhibits high FI (more than 500 bits). I firmly stick to this statement. It is true, beyond any doubt. So, you readily offer not one, but four counter-examples. Very good. Now, I am not joking here. I mean what I say. And I am not here ot waste time, or make strategies. One counter-example will falsify my position. This point is very important. to me and to the discussion here. So I take your examples very seriously. And I start from the first (the olthers are nor essentially different. And I point to the reasons why it is not a counter-example. Indeed, it is rather obvious for me that it is not a counter-example at all, so much so that I really wonder why you are offering it. At this point, you justify your choice using an old trick, that I know very well, and that indeed I had cautioned you against just before you used it: you use the bits to build an ad hoc function. This is not new. I remember that many years ago, in a similar discussion (yes, I am not at all new at these confrontations and at these objections, whatever you seem to believe), Mark Frank, a very serious and intelligent friend from the other side, when challenged to offer one counter-example, did exactly the same thing: he used the bits to build an ad hoc function. And believe me, he was in perfect good faith. IMO, this seems to show two things: how intelligent people can say and believe obviously wrong things whem they have a desperate need to deny some important truth that is not comforting for their worldview, and how the same intelligent people (in this case MF and you) cannot obviously find a better argument, if they have to recur to this very indirect, and completely wrong, argument. However, when I pointed that you were using the wrong trick of using the bits to build an ad hoc function, you change again your position: instead of simply saying if you agree or disagree with my point, you “justify” it saying that I did the same, and broke my rules in the same way. Which is absolutely not true. So, when I point to the simple fact that I have never, never used the bits to define a function, which can be checked by anybody just by looking at everything that I have written in more than ten years, you “justify” your behaviour by saying that my methodology to estimate FI is based on micorstates. Which is not true, either, but requires a more detailed answer, that I hope to give later. So, when I point to the simple fact that one thing is the definition of the function, another thing the procedure to estimate FI, and that the discussion was about the first thing, not the second, you still do nothing to acknoledge my point abd clarify your position. I have appreciated your behaviour up to now. Very much. But I don’t like this. Not at all. This discussion is very serious for me, and for obvious reasons difficult to manage here. Tricks and strategies and unnecessary confusion are very bad. So, I ask again. Do you insist that the starry sky exhibits more than 500 bits of FI? Under my definition of FI? Do you insist that I have broken my own rules in the definition of a function? Where? Thank you. (I will discuss the tornado separately with Art, at least for a last attempt) gpuccio
sfmatheson at PS:
Does this mean that you take “random walk” to have no component other than randomness? I remain unconvinced that your methods have any meaning for evolution.
The random walk is random. It is not, of course, the only component of the neo-darwinian model. The model is RV + NS. I happen to know this. The probabilistic analysis applies only to RV. NS must be analyzed separately. But for NS to act, there must be something that is naturally selectable. And that naturally selectable function must arise by RV alone. Can you agree on that?
You are mistakenly equating peer review with publication.
My point is simply that this is not a peer review, but a confrontation between two very different paradigms.
If you did not make this error,
I never did this error. Nor, as far as I know, any important source of ID theory. gpuccio
sfmatheson at PS:
My understanding of your writings here is that you have calculated/estimated a probability (referring to “probabilistic barriers” and “probabilistic resources of the known universe”) that a particular outcome could come about. That is the probability I was referring to.
OK.
Until we know more about that outcome, most importantly whether it could have gone in other ways, the probability of that particular outcome is meaningless. This is a tired old topic in discussions of design and I know you are aware of it.
I certainly am. There are two different aspects. The probabilistic resources of our biological world can definitely be computed, at least as a geneorus higher threshold. That what I have done in my OP about that issue: What Are The Limits Of Random Variation? A Simple Evaluation Of The Probabilistic Resources Of Our Biological World https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ You will se in the first Table there that I give a very generous estimate of 140 bits as the limit (for bacteria). That means that, at a very generous most, only 2^140 different states could have been reached and tested in 5 billion years on our planet. Then there is the problem of evaluating the target space. That is more difficult, and there are objective problems. Look also at my comment #133. I am very confident that my procedure is very good to estimate a specific form of FI, linked to long evolutionary conservation of sequence. Of course, we must address some potential difficulties, and you list some of them: alternative solutions, and so on. Indeed, I have discussed those things many times . Of course, I cannot address everything here an all at the same time. I have given links, but probably nobody here has the time to check them. IOWs, I am human, and you are too. But I cannot understand why, every time there is some aspect that I have not yet discussed, you all draw final conclusions on what I think or am doing. That is wrong. I am here to answer your questions, when they are good questions. I give you a link here of an OP where I have discussed many of the things you mention: Defending Intelligent Design Theory: Why Targets Are Real Targets, Probabilities Real Probabilities, And The Texas Sharp Shooter Fallacy Does Not Apply At All. https://uncommondesc.wpengine.com/intelligent-design/defending-intelligent-design-theory-why-targets-are-real-targets-propabilities-real-probabilities-and-the-texas-sharp-shooter-fallacy-does-not-apply-at-all/
Until we know more about that outcome, most importantly whether it could have gone in other ways, the probability of that particular outcome is meaningless.
This is one of the things I discuss in that OP. Possible alternative solutions. In brief, if there were many other complex alternative solutions (and probably there are), the computation of FI, at the levels we are considering, woyuld change very little. See the section about clocks in the mentioned OP. If there were much simpler solutions, we would definitely observe those ones, and not the complex solution we observe.
[Side note: you may notice that I am using design-ish language here (“to make”, “to achieve”) because I don’t think there is anything wrong or unscientific about talking design-ishly.]
That’s perfectly fine. :)
By what process are we pursuing these outcomes? Any probability calculation that assumes a single-step flying-together of amino acids is, at this point, flat-out dishonest. So, while calculating probability, are we considering the fact that an evolutionary “walk” is almost as far from a random flying-together as one might get?
No, here you don’t understand my point. I have never said that there is a single step transition. Sometimes I really think many of you believe I am a complete fool. Maybe, maybe not. Not in this case. :slight_smile: Of couse the transition happens in many steps. That’s why I have clearly said that the best model is a random walk from some unrelated sequence state. And the concept of probabilistic resources has exactly that meaning: how many attempts can the system make? How many steps are allowed in the random walk. The probability of finding an unrelated state by a random walk, let’s say buy 2^100 steps, is practically the same as the probability of finding that same target by a random search in the same number oif attempts. The two systems differ in the initial steps, of course, but with a big numer of steps there is no great difference. It is just related to the rate between target space and search space, and to the number os attempts/steps allowed to the system. And of course, one thing should be clear. All probabilistic evaluations refer only to the RV part of the neo-darwinisn mechanism, which include Neutral drift. The NS part must be evaluated separately and differently. And I have not even begun to do that here. Finally, my contribution here is not aimed to publish a paper. So, I do not consider it as some form of peer review. It is, instead, aimed at intellectual confrontation about a very inmportant paradigm difference: design against neo-darwinism to explain biological functions. In that sense, it is much more precious than a peer review to me. But for very different reasons. gpuccio
glipsnort at PS:
Your comments here make sense to me for some definitions of FI, but I don’t see how they work in the context of your stated definition. If FI is purely a measure of the ratio of target space to search space, what does “specific sequence information” mean? Or 1250 bits of new FI? The function of the protein is unchanged, as are the target and search spaces. How do you decompose the ratio into old and new FI?
Please, look also at my comment #121 to Swamidass. The point is: one thing is the definition of functional information, another thing is my strategy (or anyone else’s) to get an indirect estimate of it. The definition is what it is. A direct method to measure the target space (the thing we really don’t know) would be to sythesize all possible sequences in the search space, and test each one in the lab for the defined function. Of course that is not possible, and never will be. Another semi-direct way would be to have such a good understanding of the sequence - function relationship in proteins as to be able to compute the target space. That is promising, but I believe that we are still very far away from that. So, we haver to use indirect estimates of FI. My procedure is exactly that. Being based on long conservation of sequence, the interesting thing is that we estimate FI without any explicit reference to the function. IOWs, we have a protein that certainly has some function in its context. We trace the appearance of some new sequence specificity at some definite evolutionary step, and we classify that new sequence specificity as highly functionally constrained, because we observe tfhat, after its first appearance, it is conserved for 400+ million years. So, we have an indirect estimate of the FI in that specific sequence in that specific context, even oif we have not defined the function explicitly. Of course, we can have a good idea of what the function is just by looking the protein at Uniprot. At least for many proteins. Now, to answer your questions. Let’s say we have a protein in pre-vertebrates which has low sequence similarity with the human form in pre-vertebrates. There are different possibilities. Fir example, in the case of CARD11, as you can see in the graph I have posted, the protein probably did not exist in pre-vertebrates. The bitscore is extremely low, probably just background noise, or just some limited domain similarity is some part of the long molecule. That is perfectly consisten with the protein being involved in the immune system, which as we know appears in jawed fishes. So, this is probably a new vertebrate protein. As such, the explanation is rather straightforward. The protein appears in cartilaginous fishes, and right from the beginning of its existence it has already more than half of its potential FI (about 1.3 baa). The remaining history of the protein in vertebrates, as can be seen in the graph, is not very interesting. There seems to be another minor adjustment in reptiles. Maybe, but it is difficult to be sure. The rest is compatible with passive conservation, increasing as the evolutionary distance decreases. So, the history of this protein is clear enough: it exhibit a major engineering (be patient, let’s say it could be by design or bt neo-darwinist mechanisms, for the moment) at the beginning of its existence, and then not much happens. Except maybe at the transition to reptiles. Now, let’s say that we have, instead, a protein that alredy exists in pre-vertebrates. Let’s call it protein A. Let’s say that it has a value of human conserved sequence similarity, in pre-vertebrates, of 0.7 baa. That is alredy something. Let’s say the protein is 500 AA long. We have already a major similarity here, and reasonably a bitscore of a few hundred bits. Now, the same protein jumps in cartilaginous fishes to 1.5 baa, presenting a jump of 0.8 baa. So, a few hundred bits of new human conserved sequence similarity. Of new FI. What does it mean? We already know that we are not measuring total FI, nor total FI change. The protein in pre-vertebrates could well have higher total FI, or less, or the same. We don’t know, because my methodology cannor detect FI which is not linked to long sequence conservation. IOWs, what I have called functional divergence. So, we just stick to the new FI that appears at the transition and is then conserved: those 0.8 baa. What is their meaning? The reasonable answer is that the protein function undegoes a major adaptation at the transition to vertebrates. Maybe the basic function, the basic structure, remain the same. Maybe the total FI remains the same. As said, that we don’t know. But the appearance of such a big new component of FI in vertebrates means that now the protein does the same things in a different context, or that it does some new things. That is perfectly conmpatible with what we know about the big changes that happen in new classes of organisms, especially at regulation level, and in protein networks that control transcription or other major pathways. TFs, for example, as already discussed, often retain their DBDs, but change the rest of their sequences. And acquire new functions, or differently tailored functions. I hope this answers your questions. gpuccio
Swamidass at PS:
I actually agree with you in much of this analysis, but you are missing the point. I’m applying the rules that you laid out, and it leads to this problem. You are making a parallel mistake in your analysis of protiens. I made the parallel mistake to make this point.
It seems that I don’t understand the point. Please, don’t be too confident in my intelligence. Could you please explain better what you think? :) gpuccio
Recorded the Alexa Global Internet Traffic Ranking for these four related websites on three recent days and the trend doesn’t seem encouraging for the bottom three sites. Actually, unexpectedly to me PS hasn’t got much ranking boost if any at all from their current exposure to UD viewers through GP’s discussion. I don’t know how to explain it. PT -the closest to UD ranking- is over one million sites below UD, though they have more sites linking in theirs. I don’t understand that either. TSZ is much lower. PS is at the bottom. PT, TSZ and PS used to be much higher not so long ago. I don’t understand why they have dropped so low lately. Please, note that Google has ranking position 1. UD is 612,721 positions under Google. IOW, there are 612,721 websites above UD. Those positions change daily. UD 631,311 627,114 612,722 578 Total Sites Linking In PT 1,732,931. 1,736,969 1,743,372 950 Total Sites Linking In TSZ 3,215,461 3,222,145 3,226,071 37 Total Sites Linking In PS 7,036,059 7,051,188 7,059,655 12 Total Sites Linking In jawa
Coded information. All Art is doing is pointing out desperation loop-holes because of a perceived flaw in the definition of functional information (FI). So coded information has to be part of the definition for functional information. Because that is exactly what proteins are. They are the object code to the nucleic acid's source code. And the differences gpuccio measures are actual differences in the coding sequences. That leads us back to Francis Crick:
Information means here the precise determination of sequence, either of bases in the nucleic acid or on amino acid residues in the protein.
That is the way both Drs Dembski and SC Meyer use it with respect to biology. In the "Signature in the Cell", Meyer says that information the way he uses it, anyway, refers to
the attribute inherent in and communicated by one of two or more alternative sequences or arrangements of something (such as nucleotides in DNA or binary digits in a computer program) that produce specific effects
That's right, the standard dictionary definition. That also falls in line with functional sequence complexity (Durston et al). And it all comes back to the sequence specificity required to produce proteins- biologically relevant proteins. Meaning proteins that carry out a biologically related function. The more essential the protein is should be considered in the amount of FI it exhibits. The special effects Meyer talks about. The point is we need to agree on a standard definition of terms like "functional information". We cannot go around with our own versions and by doing so muddy the waters. ET
Art is on a Crusade. But he is tilting at windmills:
Recall that the informational content of genomes is usually estimated by “calculating” that fraction of all possible sequences (nominally, amino acid sequences) that can satisfy a particular specification. We can use a similar strategy to guesstimate the “information” carried by water vapor molecules in a storm.
Again, the water molecules do NOT specify the tornado. The amino acid sequences do specify the polypeptide. And many times it takes other proteins to specify the protein from that polypeptide. So no, Art. Yours is nothing like gpuccio's. ET
191,193,195 is my discussion with Art. This post was an interesting idea but I agree with ET it is not a real analogy to protein sequences. If you cannot identify the specific sequence there is no way to measure it. https://discourse.peacefulscience.org/t/comments-on-gpuccio-functional-information-methodology/7560 bill cole
Art Hunt posted:
But what I do with water molecules is what you do with amino acids when you estimate FI in proteins, …
Seems to me the amino acid methodology depends on sequence specificity, sequence similarity and divergence. The water molecules do not specify the tornado. The amino acid sequence needs to specify a protein. So, at least to me, what Art is doing is nothing like what gpuccio does. But it does look like desperation has become the norm over there. As for the alleged transition to vertebrates- until we have a testable mechanism we cannot say anything about any patterns. Mechanism determines pattern. Bones are useless without articulated joints. Articulated joints are useless without correctly attached muscles. And muscles are useless without electricity to power them. The alleged transition to vertebrates exists only in imagination-land. ET
Roy posted the following about evolutionary algorithms:
EAs can and have been used to find solutions to problems where the solution was not previously known, and so could not possibly have been smuggled it.
Earth to Roy- It was smuggled in via the program which was intelligently designed to solve a problem. The main thing smuggled in is the ability to replicate. The next is the fitness function which is telic in nature. All solutions are checked and then guided towards a final target. EA's are programmed with the parameters and specifications the final solution must meet. They do nothing but mimic tens, to hundreds to millions of people working to solve the problem, all in their own way and all with an overseer directing them. It's like designing something new. You know what you want your design to do. And then you use existing knowledge to solve the problem of making it so. There isn't anything blind and mindless about it- well, not the successful designs, anyway. https://discourse.peacefulscience.org/t/comments-on-gpuccio-functional-information-methodology/7560/178 ET
With a tornado there isn't any sequence one can use for quantification. Art just wants to play games with the water molecule schtick. And others want to play games by saying their own genomes could never have happened because they are too improbable to have happened- really. They even have an entire thread about it. https://discourse.peacefulscience.org/t/falter-every-birth-is-a-statistical-impossibility/4915 However, if a tornado appeared without the required conditions, one would have every reason to think something else was at work. ET
To all here: This is my anwer to Art about tornadoes. I cannot yet post it at PS, because of some rule of the blog system which does not allow more than three consecutive posts. So I will post this there as soon a someone stops the sequence of my comments. :) Meanwhile, here it is for you. Art: Now I come to your tornado. It is an interesting example, because, even if I am no expert of meteorology, it is probably one of those cases where some form of order comes out of a system including events that can be easily explained in form of necessity laws, random events and chaotic components. I think there are many examples like that. None of them is designed, of course. But do they exhibit high FI. The answer is no. I will try to explain how the system should be considered in terms of FI, even if I am not a meteorologist. Of course, you are free to stick you your analysis in terms of water molecules, but I cannot agree. Well, here the system is our planet, and its meteorologic phenomena. I think we can agree on that. What type of system is it? IOWs how can we describe and analyze meteorologic phenomena? I think we can agree that many of those events follow, more or less precisely, some well known laws, derived of course from the laws of physics applied to this particular system. That's why many events can be more or less anticipated. Weather previsions are everywhere, and I would say that at present they are often rather good. So, that part is a necessity system, more or less precise. No FI in that. But of course, not all can be anticipated. Least of all, I think, tornadoes. Probably, necessity laws act on random components and chaotic components to generetae a tornado. I don't know, I am not a meteorologist. So, being not a meteorologist, I paste here some explanation taken somewhere on the web, https://weatherstreet.com/weatherquestions/What_causes_tornadoes.htm hoping it is not so bad:
What causes tornadoes? Tornadoes form in unusually violent [thunderstorms](https://weatherstreet.com/weatherquestions/What_causes_thunderstorms.htm) when there is sufficient (1) instability and (2) wind shear present in the lower atmosphere. Instability (https://www.weatherquestions.com/What_is_an_unstable_air_mass.htm) refers to unusually warm and humid conditions in the lower atmosphere, and possibly cooler than usual conditions in the upper atmosphere. Wind shear (https://weatherstreet.com/weatherquestions/What_is_wind_shear.htm) in this case refers to the wind direction changing, and the wind speed increasing, with height. An example would be a southerly wind of 15 mph at the surface, changing to a southwesterly or westerly wind of 50 mph at 5,000 feet altitude. This kind of wind shear and instability usually exists only ahead of a cold front and low pressure system (https://weatherstreet.com/weatherquestions/What_is_a_cyclone.htm). The intense spinning of a tornado is partly the result of the updrafts and downdrafts in the thunderstorm (caused by the unstable air) interacting with the wind shear, resulting in a tilting of the wind shear to form an upright tornado vortex. Helping the process along, cyclonically flowing air around the cyclone, already slowly spinning in a counter-clockwise direction (in the Northern Hemisphere), converges inward toward the thunderstorm, causing it to spin faster. This is the same process that causes an ice skater to spin faster when she pulls her arms in toward her body. Other processes can enhance the chances for tornado formation. For instance, dry air in the middle atmosphere can be rapidly cooled by rain in the thunderstorm, strengthening the downdrafts that asist in tornado formation. Notice that in virtually every picture you see of a tornado the tornado has formed on the boundary between dark clouds (the storm updraft region) and bright clouds (the storm downdraft region), evidence of the importance of updrafts and downdrafts to tornado formation. Also, an isolated strong thunderstorm just ahead of a squall line that then merges with the squall line often becomes tornadic; isolated storms are more likely to form tornadoes than squall lines, since an isolated storm can form a more symmetric flow pattern around it, and the isolated storm also has less competition for the unstable air which fuels the storm than if it were part of a solid line (squall line) of storms. Because both instability and wind shear are necessary for tornado formation, sometimes weak tornadoes can occur when the wind shear conditions are strong, but the atmosphere is not very unstable. For instance, this sometimes happens in California in the winter when a strong low pressure system comes ashore. Similarly, weak tornadoes can occur when the airmass is very unstable, but has little wind shear. For instance, Florida -- which reports more tornadoes than any other state in the U.S. -- has many weaker tornadoes of this variety. Of course, the most violent tornadoes occur when both strong instability and strong wind shear are present, which in the U.S. occurs in the middle part of the country during the spring, and to a lesser extent during fall. Contrary to popular opinion, tornadoes have not increased (https://weatherstreet.com/weatherquestions/Are_tornadoes_getting_worse.htm) in recent years.
OK, so I would say that there is some good understanding of the conditions that generate a tornado. In general, we can say that some specific configurations of the basic components of the weather (distribution of winds, temperatures, pressures, and so on) allow tornadoes to be generated. So, there is nothin mysterious in the process. It is well understood, even if its mathematical and empirical treatment is certainly difficult. Some configurations of weather conditions lead to tornadoes. It's those configurations that we must consider, not the configurations of individual molecules of water. The basic components of weather follow, more or less, precise necessity laws, and of course molecules of water follow those laws too. The random-stochastic component, the only one that generates specific configurations that can act as "configurable switches", is caused by the complex interaction of those necessity laws. So, the correct question in terms of FI is: how many weather configuration (target space) lead to a tornado, in the search space of all possible weather configurations? I am certainly the last persom that can solve such a problkem quantitatively, but I believe that it can be solved in principle. There is nothing mtsterious here. Now, tornadoes are not too common (luckily), but they are certainly not exceedingly rare. I suppose therefore that, analyzing the space of configurations above mentioned, it will not be difficult to show that the configurations that lead to a tornado are not exceedingly rare. I cannot make that kind of analysis, unfortunately. Can you? So, I am rather confident that tornadoes, like many manifestations of necessity acting on random and chaotic components, are certainly fascinating, but can be perfectly explained in terms of the physics of these system. And the configurations that lead to them are a non trivial part of the space of configurations. IOWs, FI is low, and there is absolutely no reason to infer design. OK, so I would say that there is some good understanding of the conditions that generate a tornado. In general, we can say that some specific configurations of the basic components of the weather (distribution of winds, temperatures, pressures, and so on) allow tornadoes to be generated. So, there is nothin mysterious in the process. It is well understood, even if its mathematical and empirical treatment is certainly difficult. Some configurations of weather conditions lead to tornadoes. It's those configurations that we must consider, not the configurations of individual molecules of water. The basic components of weather follow, more or less, precise necessity laws, and of course molecules of water follow those laws too. The random-stochastic component, the only one that generates specific configurations that can act as "configurable switches", is caused by the complex interaction of those necessity laws. So, the correct question in terms of FI is: how many weather configuration (target space) lead to a tornado, in the search space of all possible weather configurations? I am certainly the last persom that can solve such a problkem quantitatively, but I believe that it can be solved in principle. There is nothing mtsterious here. Now, tornadoes are not too common (luckily), but they are certainly not exceedingly rare. I suppose therefore that, analyzing the space of configurations above mentioned, it will not be difficult to show that the configurations that lead to a tornado are not exceedingly rare. I cannot make that kind of analysis, unfortunately. Can you? So, I am rather confident that tornadoes, like many manifestations of necessity acting on random and chaotic components, are certainly fascinating, but can be perfectly explained in terms of the physics of these system. And the configurations that lead to them are a non trivial part of the space of configurations. IOWs, FI is low, and there is absolutely no reason to infer design. gpuccio
ET, Yes, it’s discouraging to see such a pathetic bunch, which includes educated folks, being so obliviously vague and shallow in their discussion with GPuccio. Now maybe their attitude makes it easier for us to understand why their website is doing so poorly on the Alexa Global Internet Traffic ranking. Even in a world where superficiality and shallowness are increasingly the norm everywhere. What a shame! Oh, well. jawa
Reading the comments @ Peaceful Science provides plenty of evidence that the anti-ID ilk are liars, bluffing equivocators and willfully ignorant. They couldn't assess any evidence if their lives depended on it. And they will NEVER support their claims with actual science. They can't even devise a methodology to test their claims. And it is sad watching gpuccio try to educate them while they flail away all the while refusing to grasp what he says. ET
Swamidass: OK, now after some rest, let me go back to the starry sky example. I will show how it should be treated in terms of FI. We have a system where 9000 stars can have an independent position in the sky. Also, they can have different brighteness. We define a function: that the stars can help orientation and navigation. Let’s assume that the position of the stars is a random configuration. We have a system with a very big search space. A lot of possible configurations, considering both position and brightness. The right question, from the point of vew of FI, is: how many of the possible configutations would satisfy the independently defined function? How big is the target space? Is it an infinitesimal fraction of the search space (high FI), or rather a big part of it? The answer here, while difiicult to compute in detail, is easy enough in principle: the target space is almost as big as the search space. Therefore, FI is really trivial, almost zero. Why? Because, of course, almost all the possible random configurations of position and brightness can help orientation and navigation. Not all of them, however. Very ordered configurations, those where all the stars are more or less equally distributed in the sky, and brghtness is more or less the same for all of them, would not help orientation and navigation. We would just see a sky that is the same everywhere, and does not allow us to get information about earth rotation and our position. But of course, those highly ordered configurations are really, really rare in the search space. This is an interesting example, and I thank you for providing it, because it is a case where order does not satisfy the function, while randomness does. Unfortunately (for your argument), the FI linked to such a system is almost zero. I think this is a good answer for your other examples too, but if you believe that they have different merits, please explain why. gpuccio
Swamidass at PS:
Okay, can you clarify how you implemented this rule in your analysis?
In my discussion about the relationship between FI and the design inference (that has nothing to do with my methodology to estimate FI) I have given a clear example of FI in language and how to measure it. Please, refer to that. The Shakespeare sonnet. You will find many possible functional definitions for the sonnet, each of them implying different levels of FI. The bits in the sonnet (the sequence of letters) are of course never used in any definition. In my procedure to estimate FI in proteins, the function is not defined (it is supposed to be the one described in Uniprot, if and when known). The estimate is based on conservation, which is an indicator of functional constraint, but does not tell us what the function is. Those are two different things. gpuccio
(Note: after writing the previous comment, Swamidass helped me retrieve what I had written before about the starry sky example. So I completed it and posted it. I thank Swamidass for his kind help, and apologize in advance for some of the tones in the following comment, however justified! :) ) Swamidass at PS:
No, I’m just using a particular definition of function, which parallels yours in biology. If you don’t want me to use my definition, I am not sure you can use yours.
It seems you are defining function by the already observed configuration of proteins in extant biology. This does not take into account the configurations that would produce the same high level functions, but we just don’t see because it is not what happened.
It is subjective how we define function. I chose a definition of function that paralleled yours in biology, so I am not sure how you can object to me “breaking the rule” while breaking yourself with your own definition! Yes, this highlights the problems with using FI as a way of determining if something is designed or not
This is bad. Frankly, I would probably not even answer this kind of arguments, if they did not come from you. I don’t know if you really believe that the stars in the sky exhibit high values of FI, or if you are only provoking (without any good reason, IMO). If you really believe that, there is proibably no purpose in continuing any discussion about FI. If you are provoking, it’s not a good sign just the same. However, here is a brief answer. The simple rule I have described (and which is rather obvious in any possible serious discussion about FI) is that we cannot use the observed bits to define the function. The function must be defined undependently from the knowledge of the observed bits. So: “a configuration of stars that favors storytelling” is valid. But probably almost all possible configurations would do that. While “a confifuraion of stars where the first has these celestial coordinates, the second these other ones”, and so on for all 9000 visible stars, is not valid. So, a binary number of 100 digits is a good definition. And, of course, has no relevant FI. A binary number that is 00110100… is not a valid definition. It can be used only as a pre-specification. This is the rule, and I have never broken it. I have never defined a protein function that says: “a protein with the following sequence: …” I have always used for proteins the function described in Uniprot for the observed protein, or something like that. IOWs, a protein which can do this and that. Never: “a protein with this sequence”. But you say: no, I wnat the stars that must have exactly the position that we know. That is breaking the rules. You are using the bits. I have never done that. You say: "It seems you are defining function by the already observed configuration of proteins in extant biology. " Not at all. That statement is unfair, wrong and confounding. I am always defining function as what a protein can do. I am using observed configurations, in a precisely described way and accordign to well explained assumptions, only to estimate FI in proteins, not to define function. You are equivocating, and rather badly. You raise the problem of other sequences that could implement the function. But my methodology is aimed exactly at that: having an estimate of the target space. If you do the math, you will see that the estimates of the target space in my results are very big. Of course, there is always the problem of possible alternative solutions, similarly complex, but completely different. Those cannot easily be anticipated. They certainly exist, in some measure. That is a completely different problem. It has nothing to do with the definition of the function, but rather with the estimate. I have discussed that problem in detail in the past. You will find a long discussion about that in this OP and in the following thread: Defending Intelligent Design Theory: Why Targets Are Real Targets, Probabilities Real Probabilities, And The Texas Sharp Shooter Fallacy Does Not Apply At All. 2 Look at the part about clocks. gpuccio
Swamidass at PS:
Overestimate is the opposite of underestimate. We are saying you are wildly overestimating FI, not underestimating it.
Did you misread @glipsnort? He is saying that it is possible, even likely, to underestimate the total FI present (true) while while overestimating the change in FI. And have you read my answer to him? I quote myself:
As for overestimating the change in FI, again I have never tried to estimate the absolute change in the full content of FI. That should be clear from what I have written. I quote myself:
gpuccio: It should be clear that my methodology is not measuring the absolute FI present in a protein. It is only measuring the FI conserved up to humans, and specific to the vertebrate branch. So, let’s say that protein A has 800 bits of human conserved sequence similarity (conserved for 400+ million years). My methodology affirms that those 800 bits are a good estimator of specific FI. But let’s say that the same protein A, in bees, has only 400 bits of sequence similarity with the human form. Does it mean that the bee protein has less FI? Absolutely not. It probably just means that the bee protein has less vertebrate specific FI. But it can well have a lot of Hymenoptera specific FI. That can be verified by measuring the sequence similarity conserved in that branch for a few hundred million years, in that protein.
That should be clear enough, but still you insist about the danger of overestimating the change in FI, when I have never tried to do that. Maybe you are confused by the fact that I speak of information jumps. But, you see, my term has always been “information jumps in human conserved sequence similarity”. It’s not a jump in the full content of FI, as I clearly explain in the above quote. IOWs, when I say that CARD11 shows and information jump of 1250 bits at the transition to vertebrates, I simply mean that 1250 bits of new FI that is similar to the form observed today in humans appear at that transition. It is a jump, because new sspecific sequence information arises, that was not there before. But I have never said the the total FI was lower before. I simply don’t measure it, because my methodology cannot do that.
My procedure cannot overestimate FI, only underestimate it. My estimate of the change is only an estimate of the change (jump) in human conserved similarity. It is not, and never has been said to be, a measure of the change in total FI. OK, I was answering to your very disappointing post about the starry sky, but for some strange reason I have lost all that I had written. Maybe it is better. Now I am tired. Tomorrow I will see if I really want to say those things. gpuccio
Swamidass at PS:
Which is, once again, why it is important to do this analysis with a phylogeny.
OK, I will try to be simple and clear. I am here to discuss a methopdology that can, I believe, give important indications about a certain type of FI in proteins (the one that can be revealed by long sequence conservation), its appearance at certain evolutionary times, its different behaviour in different proteins. And that can give a good idea, by establishing a reliable lower threshold of new FI appearing at certain steps, of how bif the functional content of many proteins is. These data are very interesting, in my opinion, tgo sipporft a design inference in many cases. This is my purpose, and nothing else. Now, may be that a phylogenetic analysis could do that better. Or maybe not. I don’t know, and I cannot certainly perform a phylogenetic analysis now. I am not aware of phylogenetic analyses that are centered on the concept of FI as formulated in ID, least of all on design detection. So, I have my doubts. However, I am not here to perform a phylogenetic analysis, I am here only to explain and defend my ideas, and I try to do exactly that. So, again, I am comvinced that my methodology is a good estimator of that part of FI which is connected to sequence conservsation, for example from cartilaginous fish to humans, and that appears in vertebrates. The same procedure can also be applied to other contexts, of course. I have received, from you and others, a few recurring criticism that are simply not true ot not pertinent. Here are a couple of examples:
Your estimate of FI seems to be, actually, FI + NE (neutral evolution), where NE is expected to be a very large number. So the real FI is some number much lower than what you calculated.
Wrong. My estimate of FI is, rather, Total FI - functional divergence (FI not conserved up to humans). Therefore, as stated also by glipsnort, my estimate is underestimating FI, not overestimating it. Moreover, NE has nothing to do with this.
We are convinced that neutral evolution will be mistaken as FI gains.
Why? And what do you mean by NE? Do you mean NV and ND? Why should that “be mistaken” as FI gain? By my procedure? There is absolutely no reason for that. Neutral variation is the cause of divergence in non functional sequences. Why should it be mistaken as FI gain by a proicedure based on sequence conservation? I really don’t understand what you mean. And so on. How can we discuss with such basic misunderstandings repeated so many times, and without any real explanation of what is meant? gpuccio
glipsnort at PS:
When using conservation it’s possible, even likely, to underestimate the total FI present while overestimating the change in FI.
Again, my purpose is not to measure the full content of FI in a protein. I ageree that it is possible to underestimate the full content of FI, but we can have a good idea of the component revealed by sequence conservation from that point on. That’s what I do, that’s what I get as result, and that’s what I use for my inferences. I have never made reference to the full content of FI. My purpose is to demonstrate the presence, at a certain point in evolutionary history, of a definite quantity of FI that has a sequence configuration that will be conserved up to humans. This should be clear, by now, Either you agree that my methodology does that, or you don’t. Fell free to decide, and if you want to explain why. But there is no sense in requiring from my methodology what it has never tried to measure. As for overestimating the change in FI, again I have never tried to estimate the absolute change in the full content of FI. That should be clear from what I have written. I quote myself:
It should be clear that my methodology is not measuring the absolute FI present in a protein. It is only measuring the FI conserved up to humans, and specific to the vertebrate branch. So, let’s say that protein A has 800 bits of human conserved sequence similarity (conserved for 400+ million years). My methodology affirms that those 800 bits are a good estimator of specific FI. But let’s say that the same protein A, in bees, has only 400 bits of sequence similarity with the human form. Does it mean that the bee protein has less FI? Absolutely not. It probably just means that the bee protein has less vertebrate specific FI. But it can well have a lot of Hymenoptera specific FI. That can be verified by measuring the sequence similarity conserved in that branch for a few hundred million years, in that protein.
That should be clear enough, but still you insist about the danger of overestimating the change in FI, when I have never tried to do that. Maybe you are confused by the fact that I speak of information jumps. But, you see, my term has always been “information jumps in human conserved sequence similarity”. It’s not a jump in the full content of FI, as I clearly explain in the above quote. IOWs, when I say that CARD11 shows and information jump of 1250 bits at the transition to vertebrates, I simply mean that 1250 bits of new FI that is similar to the form observed today in humans appear at that transition. It is a jump, because new sspecific sequence information arises, that was not there before. But I have never said the the total FI was lower before. I simply don’t measure it, because my methodology cannot do that. And this is it. You think as you like, but at least try to understand what I say and what I am doing. Or justr don’t try, if you prefer so. gpuccio
As for this from Joshua:
Where is the empirical evidence natural processes can’t produce 500 bits of FI in biological life?
There isn't any evidence that they can. No one knows how to test such a thing. And Joshua's "examples" just demonstrate sheer desperation or a complete misunderstanding of the argument. Again, there isn't any positive evidence nature, operating freely, can produce 500 bits of FI. So it would be up to the people who says it can to demonstrate such a thing. What gpuccio is saying is based on everything we know about functional information. 100% of our observations and experiences say that functional information (500 bits) only comes via intelligent agency volition. And that nature always takes the line of least resistance. It is OK with producing rocks. Spiegelman's Monster is also testimony to the fact that nature chooses the simplest way. ET
Timothy Horton wonders:
I’m still waiting for your explanation as to why the example of the evolving soft robots don’t constitute a natural process increasing FI.
Because it has nothing to do with blind and mindless processes. For one it was designed with the ability to reproduce. That means it was given the very functional information that blind and mindless processes needs to account for. And what they do in no way exceeds the amount of information in the program. So they didn't increase FI. The organisms were intelligently designed with working muscle groups, bones and electric stimulus. Again, absolutely nothing to do with blind and mindless processes. Nature, operating freely cannot produce 500 bits of FI. ID has said nothing about intelligently designed computer programs. The desperation is starting to peak... ET
psnort Steve Schaffner Computational Biologist 26m Rumraket: This wall of substance-less but technical-sounding fog left me still wondering what it has to do with evolution or the evolution of protein coding genes. SteveSchaffner Or with @gpuccio’s argument, which makes no reference to these claims about information. (I note that a google search for “physics of symbol systems”, quotation marks included, returns only three hits, two of them to Uncommon Descent. ) This is from a very competent Scientist at PS. Very surprising. Are trying to relive the argument that DNA is a code or not? bill cole
Faizal Ali is more than welcome to post here. But we all know why he won't. And Rumraket is again just hand-waving away upright biped's argument. Until someone demands that they present the methodology, not model, to test the claim that blind and mindless processes did it, they can always just hand-wave and back-peddle. But if they do actually ante up then we have something to compare methodologies. Then all one has to do is show which methodology has the substance. How did they determine that blind and mindless processes produced the immune system? The genetic code? ATP synthase? Why don't they lead by example instead of double-talk and bald assertions? Push back and see what they have. My bet is more bluffing and equivocation. ET
Gpuccio I made the following argument. Do you agree with this?
There are a couple issues with Josh’s analogy. He is not looking at a translated sequence. He is also not looking at a changing or potentially changing configuration over time. This change allows you to look at comparative data and make a judgement about cause of the configuration. In much the same way the observation of 500 heads is significant in that we have something to compare it to. That is the normal statistical expectation of the quantity of heads and tails after tossing a coin 500 times.
bill cole
gpuccio:
Can you please explain why neutral evolution would be part of the FI I measure? This is complete mystery to me.
Would anyone seriously consider that neutral changes could account for more than two, maybe three, substitutions in one gene? Unless there was a serious bottleneck drastically increasing the odds, it would be a stretch. But I get the feeling they think that neutral changes can account for many more than that. That is nothing more than the "theory" of sheer dumb luck. ET
These people @ PS will NEVER put up any methodology that shows blind and mindless processes can produce 500 bits of FI. All they will do is continue to baldly claim that it can. Joe Felsenstein has chimed in and it has already been proven that he doesn't understand the argument. And people are taking cues from him. Sad, really... ET
It just gets worse:
Over millions of consecutive generations, this protein has incrementally grown larger by adding hundreds of amino acids.
What? If you add amino acids to an existing protein the chances are you are going to bury the active site. And most likely change the structure. When it gets too long it will no longer fold properly without the aid of chaperones. There isn't any evidence that proteins can grow as Rumraket suggests https://discourse.peacefulscience.org/t/comments-on-gpuccio-functional-information-methodology/7560/110 ET
Faizal Ali,
The problem is the inclusion of the latter under the category “semantic” information is the point in question. The large majority of experts who do not accept the creationist argument also do not accept the creationist claim that the physical and chemical interactions of the molecules involved in biological processes are directly analogous to sort of semantic information involved in, say, a written novel or a computer program.
No, sorry, it is not in question. The physics of symbol systems remain the same regardless of the medium, or its origin, and your attempt to dismiss the issue as a “creationist claim” displays an embarrassing lack of knowledge regarding the recorded history of the issue. You are likely unaware of this because you have not educated yourself on the literature. Additionally, empirical science is not established by consensus; it is established by what can be demonstrated and repeated. I would think you might have been aware of this, but I could be mistaken. In any case, if you find someone who as shown that the genetic material is not rate-independent, or that the process is not irreversible, or that no distinctions need be made between laws and initial conditions, or perhaps if you find someone who has solved the measurement problem, or any of the other physical observations recorded in the physics literature regarding symbol systems over the past half century, then be bure to let us know. Until then, I plan to stick with the science. You are free to continue otherwise. Upright BiPed
@UB 467 Letting you know iI copied your comment and posted at PS: https://discourse.peacefulscience.org/t/comments-on-gpuccio-functional-information-methodology/7560/97 equate65
GP @474: If Dr. JS has problems understanding such a basic but fundamental concept, then your discussion at PS is really going to slow down considerably. That's why it looks like there is not so much progress, except the comment you quoted at 466, which I don't understand. Even that quoted comment could be a product of misunderstanding rather than a sign of progress in the discussion. At least that's my perception before you explain what Art Hunt meant by what he wrote in that comment you quoted @466. No need to respond this now. I can wait until you're done with your discussion at PS. Thanks. PeterA
GP @466: What did Art Hunt mean by this?
Thanks again, @gpuccio. I would like to commend you for broaching this subject, as it stands in contrast to the approaches used by the ID vanguard. I have long been of the opinion that the relevant metric that ID proponents should be measuring is something akin to informational work, which may be like what you describe here. I suspect that there are serious issues with the approaches one may take to estimate this property, but the concept seems to make sense to me.
Emphasis added. No need to respond this now. I can wait until you're done with your discussion at PS. Thanks. PeterA
Swamidass at PS:
A new configuration would not be equally precious for telling the stories we have now. We would have different constellations, and therefore different myths about these constellations. My function is to tell this specific (specified!) stories, not any old stories you might want to come up with in place of them. So no, a new conversation would break the storytelling function. Remember also, that some configurations (e.g. a regular grid or a repeating pattern) are useless for navigation or time-telling. Very quickly, we would get over 500 bits with a careful treatment, well into the thousands if not millions of bits.A new configuration would not be equally precious for telling the stories we have now. We would have different constellations, and therefore different myths about these constellations. My function is to tell this specific (specified!) stories, not any old stories you might want to come up with in place of them. So no, a new conversation would break the storytelling function. Remember also, that some configurations (e.g. a regular grid or a repeating pattern) are useless for navigation or time-telling. Very quickly, we would get over 500 bits with a careful treatment, well into the thousands if not millions of bits.
But you are doing exactly what I cautioned about. You are defining the function as a consequence of an already observed configuration. If the configuration were different, we woukld be telling different stories. Are you really so confused about the meaning of FI? The function must be defined independently You can define the function as “telling stories about the stars”. You cannot define the function as “telling storeis about the stars in this specific configuration”. How can you not understand that this is conceptually wrong? gpuccio
GP @ 469:
It is becoming a little difficult to paste here all the relevant posts at PS. I apologize in advance if something becones lost in translation, and maybe there are some obscure passages in the discussion. I am doing my best!
I can see how difficult that double posting must be. I suggest that those of us who are interested in following your discussion at PS just go there and read it directly, so you can have more time to concentrate on the discussion. Another option could be that someone from here does the copying/pasting of your comments from PS to here. PeterA
Swamidass at PS:
That last paragraph is key. Your estimate of FI seems to be, actually, FI + NE (neutral evolution), where NE is expected to be a very large number. So the real FI is some number much lower than what you calculated.
I really don’t understand. Can you please explain why neutral evolution would be part of the FI I measure? This is complete mystery to me. Neutral evolution explains the conservation of sequences? Why? I really don’t understand. gpuccio
For those readers interested in following GP’s discussion with PS, here are the associated post numbers: 343 Bill Cole 351 GP to Bill Cole 354 Bill Cole 356 GP to Bill Cole and PS 357 Bill Cole 360 Bill Cole 368 GP to PS 369 GP to Davecarlson 370 GP to JS 374 GP to UD 375 GP to JS 381 GP to JS 387 GP to JS 388 GP to JS 395 GP to JS 398 GP to JS 401 GP to Art Hunt 402 GP to Rumracket 406 GP to JS 408 GP to JS 411 GP to PS 416 GP to sfmatheson and JS 431 GP to JS 432 GP to JS 433 GP to JS 434 GP to glipsnort 438 GP to Art Hunt 445 GP to sfmatheson 446 GP to sfmatheson and JS 449 GP to all 451 GP to JS 461 GP to sfmatheson 462 GP to glipsnort 465 GP to JS 466 GP to Art Hunt 468 GP to JS 469 GP to all 470 GP to JS 472 GP to JS 474 GP to JS to be continued... PeterA
Swamidass at PS:
gpuccio: What is the object? The starry sky? You mean our galaxy, or at least the part we can observe from our planet? What is the function? Swamidass: I said the information is the positions of visible stars in the sky. The function of this information, for many thousands of years, was navigation (latitude and direction), time-telling (seasons), and storytelling (constellations). Any change that would impact navigation, time-telling, or storytelling, or create a visual difference would impact one or all these things. There are about 9,000 visible stars in the sky (low estimate). Keeping things like visual acuity in mind (Naked eye - Wikipedia), we can compute the information. However, even if there are just two possible locations in the sky for every start (absurd) and only half the stars are important (absurd), we are still at 4,500 bits of information in the position of stars in the sky. That does not even tell us the region of sky we are looking at (determined by season and latitude), but we can neglect this for now.
I wll briefly answer this, and then for the moment I must go. What makes the current configuration of the stars specific to help navigation, time telling or story telling? If the configuration were a different random configuration, wouldn’t it be equally precious for navigation, time telling and storytelling? There is no specific functional information in the configuration we observe. Most other copnfigurations generated by cosmic events would satisfy the same functions you have defined. gpuccio
To all here: It is becoming a little difficult to paste here all the relevant posts at PS. I apologize in advance if something becones lost in translation, and maybe there are some obscure passages in the discussion. I am doing my best! :) gpuccio
Swamidass at PS:
But evolution is not a random walk!! It is guided by many things, including natural selection. If you neglect selection, you are not even modeling the most fundamental basics. There are other deviations from the random walk model too. Evolution is also not all or none, but demonstrably can accumulate FI gradually in a steady process. I could go on, but you need a better model of evolution.
You are anticipating too much. Have patience. I am only saying that the correct model for the RV part of the neo-darwinian model is a random walk. For the moment, I have not considered NS or other aspects. By the way, the random walk model is also valid for neutral drift because, as sait, it is part of the RV aspect.
First, there is a difference between FI estimates by your procedure and the true FI.
As said, my estimate is a good lower threshold. For design inference, that is fine.
For your argument to work, as merely a starting point, you have to demonstrate the FI you compute is a reasonable approximation if the true FI, not confused by neutral evolution and correctly calling negative controls.
I have discussed that. Why do you doubt that it is a reasonable approximation? It is not confused by neutral evolution, why should it? The measurement itself is based on the existence of neutral evolution. Why should that generate any confusion? I have said that my procedure cannot evaluate functional divergence as separate from neutral divergence. Therefore, what I get is a lower threshold. And so? What is the problem? As a lower threshold I declare it, and as a lower threshold I use it in my reasonings. Where is the problem?
You also have to use a better model of evolution than random trials or walks.
Of course, as said, I am not considering NS. Yet. I will. But I have already pointed to two big OPs of mine, one for RV and one for NS. You can find a lot of material there, if you have the time. However, I will come to that. And to the role of NS in generating FI. Just give me time. But RV is a random system of events. It must be treated and analyzed as such. gpuccio
Note to the "experts" at PS: You are equivocating on the vast (and very well-documented) difference between what physicists refer to as "physical" or "structural" information, and the semantic information contained in the gene system (the source of specification and control over protein synthesis). Joshua Swamidass, biological information is semantic and rate-independent, requiring a coordinated set of non-integrable constraints, to be actualized in a non-reversible process. The process itself requires complimentary descriptions in order to be understood. Structural or physical "information", on the other hand, is purely dynamic and reversible. Clearly, you should know these things, and should not present yourself as an expert on the subject while casually equivocating between these diametrically-opposed meanings. A protein is the product of an encoded description; the position of the stars in the night sky are not. Neither are the locations of islands on the open sea. Neither are weather patterns and tornadoes. Upright BiPed
Art at PS:
Thanks again, @gpuccio. I would like to commend you for broaching this subject, as it stands in contrast to the approaches used by the ID vanguard. I have long been of the opinion that the relevant metric that ID proponents should be measuring is something akin to informational work, which may be like what you describe here. I suspect that there are serious issues with the approaches one may take to estimate this property, but the concept seems to make sense to me.
Thank you! :) I will come to your tornado as soon as possible. In the meantime, the discussion with Swamidass can maybe help clarify some points. I will come back to the discussion later. gpuccio
Swamidass at PS:
One example is the configuration of stars in the sky. Far more than 500 bits. Another example is the location of islands in the sea. Another example is the weather patterns across the globe for the last century. And yes, all these objects can be defined by a functional specification. This is all functional information.
I start with you, because at least I have not to show meteorologic abilities that I do not posses! Art’s tornadoes will be more of a challenge. :slight_smile: I am not sure if the problem here is a big misunderstanding of what FI is. Maybe, let’s see. According to my definition, FI can be measured for any possible function. Any observer is free to define a function as he likes, but the definition must be explicit and include a level to assess the function as present. Then, FI can be measured for the function, and objects can be categorized as expressing that function or not. An important point is that FI can be generated in non design systems, but only at very low levels. The 500 bit threshold is indeed very high, and it is appropriate to really exclude any possible false positive in the design inference. I think that I must also mention a couple of criteria that could be important in the following discussion. I understand that I have not clarified them before, but believe me, it’s only because the discussion has been too rushed. Those ideas are an integral aprt of all ID thinking, and you can find long discussions made by me at UD in the past trying to explain them to other interlocutors. The first idea you should be familiar with, if you have considered Dembski’s explanatory filter. The idea is that, before making a design inference, we should always ascertain that the configurations we observe are not the simple result of known necessity laws. For the moment, I will not go deeper on this point. The second point is about specification, not only functional specification, but any kind of specification. IOWs, any type of rule that generates a binary partition in the search space, defining the target space. The rule is simple enough. If we are dealing with pre-specifications, everything can work. IOWs, let’s take the simple example of a deck of cards. If I declare in advance a specific sequence of them, and then I shuffle the cards and I get the sequence, something strange is happening. A design inference (some trick) is certainly allowed. But if we are dealing with post-specifications, IOWs we give the rule after the object had come into existence and after we have observed it, then the rule must be independent from the specific configuration of bits observed in the object. Another way to say that is that I cannot use the knowledge of the individual bits observed in the object to build the rule. In that case, I am only using an already existin generic infomration to build a function. So, going back to our deck of cards, observing a sequence that shows the cards in perfect order is always a strange result, but I cannot say: well, my function is that the cards must have the following order, and then just read the order of a sequence that has already been obtained and observed. This seems very trivial, but I want to make it clear because a lot of people are confused about these things. So, I can take a random sequence of 100 bits and then set it as electronic key to a safe. Of coruse, there is nothing surprising in that: the random series was a random series, maybe obtained by tossing a fair coin, and it had no special FI. But, when I set it as a key, the functional information in that sequence becomes 100 bits. Of course, it will be almost impossible to get that sequence by a new series of coin tossing. Another way to say these things is that FI is about configurations of configurable switches, each of which can in principle exist in at least two different states, so that the specific configuration is the one that can implement a function. This concept is due to Abel. OK, let’s go back to your examples. Let’s take the first one, the other will probably be solved automatically. The configuration of stars in the sky. OK, it is a complex configuration. As it is the configuration of grain of sands on a beach. So, what is the function? You have to define a function, and a level of it that can define it as present or absent in the object we are observing. What is the object? The starry sky? You mean our galaxy, or at least the part we can observe from our planet? What is the function? You have to specify all these things. Frankly, I cannot see any relevant FI in the configuration of stars. Maybe we can define some function for which a few bits could be computed, but no more than that. So, as it is your example, plese clarify better. Fior me, it is rather obvious that none of your examples shows any big value of FI for any possible function, And that includes Art’s tornado, which of course I will discuss separately with him. Looking forward to your input about that. gpuccio
And then we have the totally untestable imagination for "evidence" against gpuccio:
There could have been pre-cursor genes that had functional tertiary structures that were then modified through subsequent mutations. That pre-cursor gene could have been lost to history. If there is billions of years of evolution that led to a gene that was one mutation away from finding a novel function, then you would need to factor in those billions of years of evolution into your calculations.
Just say anything because you don't have to support it.
Nature is full of species that found different strategies for adapting to environmental challenges, and I would suggest this extends to the molecular level.
Because they were intelligently designed with the ability to do so.
For example, intron splicing is just one possible function that could have arisen. A completely different method for dealing with misfolded proteins could have emerged.
Just like that- magic! For example there isn't any evidence that blind and mindless processes can put together a system of components for intron splicing. And blind and mindless processes obviously cannot tell nor does it care about misfolded proteins. They don't care about proteins. The problem is they expect us to blindly accept that blind and mindless processes did all of that without trying to nor wanting to. That is why they get so frenzied when someone like gpuccio comes up with a methodology that threatens their core beliefs https://discourse.peacefulscience.org/t/comments-on-gpuccio-functional-information-methodology/7560/89 ET
And moar entertainment from Joshua:
It is guided by many things, including natural selection. If you neglect selection, you are not even modeling the most fundamental basics.
Natural selection is a process of elimination. It is NOT a process of selection. It is blind and mindless. It doesn't guide, it culls. And it is not some magical feedback. It boils down to nothing more than contingent serendipity. If you neglect that, Joshua, you wind up making bald assertions that you will never support. And Joshua is still hung up on his cancer strawman. I doubt anything will ever get him off of that. EricMH just gave up. https://discourse.peacefulscience.org/t/gpuccio-functional-information-methodology/7549/116 ET
glipsnort at PS:
I’ll offer the same example I gave in the other thread: the human immune system.
Are you saying that the human immune system is a non biological object? Interesting. You have some serious misconceptions about the immune system, however, but just now I have other things to do. If you have one single counter-example of a non biological object that exhibits more than 500 bits of FI and is not a designed human artifact, please do it. That was the request. gpuccio
sfmatheson at PS:
If I have misunderstood the metric, please let me know
Yes, you have. Definitely.
I know you are also calculating some kind of probability, but that probability is completely meaningless without very important additional information.
What additional information? I can get no sense from tour discourses. You are certainly in good faith, and you also admit that you are not an expert, if I understand well. Maybe that’s why your objections are not clear at all. I say that with no bad intentions, but because I really don’t understand what you mean. So please, explain what is the missing information without which probability would be meaningless here. But please, clarify of what probability you are speaking. The bitscore is linked to a probability in the Blast allgorithm itself. It is given as E value, and for values small enough it is practically the same thing as a p value. It expresses the expected number of similar homologies in a similar search in the database if the two sequences were unrelated. For many Blast resulta resulting in high similarity, that value is given as 0. The probability which I mention in ID theory is a different thing. It is the probability linked to the FI value, and expresses the probability to find the target in a random search or walk, in one attempt. I think these concepts are rather precise. What is the additional information you need?
I know I’m not missing anything about the probability calculations, because those can’t be meaningful by themselves. (That’s old, old hat in “design” debates.)
That’s simply a meaningless statement. What do you mean?
The point about neutral evolution is about applying a method to a negative control.
The point about neutral evolution is that it happens. What do you mean?
When you read phylogenetics papers, you should notice this.
I have not made a phylogenetic analysis, nor can I see any reason why I should do that. I have used common phylogenetic knowledge, at the best of my understanding, to make very simple assumptions. That vertebrates derive from some common chordate ancestor, that cartilaginous fishes split from bony fishes rather early in the natual history of vertebrates, that humans derive from bony fishes and not from cartilaginous fishes. Am I wrong? The times I have given in my graphs are only a gross approximation. They are not important in themselves.
Without good answers, you can only say that you used BLAST to “measure” sequence conservation in a few hand-picked evolutionary trajectories. And that, my friend, is not informative. My opinion is that it isn’t even a good start.
You are free to think as you like. But I have not analyzed a few hand picked trajectories. Now that I have access to my database, I can gibe you more precise data (but you could find them in my linked OPs). For example, I have evaluated the information jump at the vertebrate transition (IOWs the human conserved similarity in cartilaginous fishes minus the human conserved similarity in pre-vertebrates) for all human proteins, and using all protein sequences of non vertebrate deuterostomes and chordates and of cartilaginous fishes in the NCBI database. Here are the results, expressed both as absolute bitscore difference, and as bits per aminoacid (baa): Absolute difference in bitscore: Mean = 189.58 bits SD = 355.6 bits Median = 99 bits Difference in baa: Mean = 0.28796 baa SD = 0.3153 baa Median = 0.264275 baa As you can see from the values of the medians, half of human proteins have an information difference at the vertebrate transition that is lower than 99 bits and 0.26 baa. Is that a good negative control, if compared to the 1250 bits of CARD 11? Remember, these are logarithmic values! 75th centile is 246 bits and 0.47 baa. That means that 25% of human proteins have values higher than that. And I could show you that values are very significantly higher in proteins involved in the immune system and in brain maturation. I don’t know if that means anything for you. However, for me these are very interesting data. About FI in evolutionary history. gpuccio
ET at #459: Did Shaffner really write that? "coding for hundreds of specific antibodies that are highly functional, each precisely tuned to a protein on a particular pathogen or pathogen strain" This is just ignorance. However, I have not the time now to explain why. I will do it later. gpuccio
Steve Shaffner doubles-down:
I’ll offer the same example I gave in the other thread: the human immune system. Your body contains DNA with more than 500 bits of FI, coding for hundreds of specific antibodies that are highly functional, each precisely tuned to a protein on a particular pathogen or pathogen strain. You were not born with DNA that had that information in it; it was generated by a process of random mutation and selection.
Question-begging nonsense. And according to ID we were born with the information required to produce the immune system and the immune system was the product of intelligent design. There isn't any evidence nor a way to test the claim that any immune system evolved via blind and mindless processes. If those existed ID would have been a non-starter and this discussion would not be taking place. https://discourse.peacefulscience.org/t/gpuccio-functional-information-methodology/7549/95 ET
Apparently GPuccio's time and effort is highly appreciated by a number of readers that most probably are following GP's discussion with his objectors at PS. Note that the current GP's OP remains in the most popular list: Popular Posts (Last 30 Days)
Controlling the waves of dynamic, far from… (2,020) Darwinist Jeffrey Shallit asks, why can’t… (1,424) Are extinctions evidence of a divine purpose in life? (1,316) UD Newswatch: Epstein Suicide (971) “Descartes’ mind-body problem” makes nonsense of materialism (970)
Visited 3,961 times since posted July 10, 816 visits today! jawa
And Timothy Horton continues the question-begging and stupidity:
That also fails because biological systems show such high FI values without being designed.
No evidence provided.
Objects produced with evolutionary algorithms also show the process can increase FI with the amount only being limited by how long the algorithm is allowed to run.
Evolutionary algorithms are examples of evolution by means of telic processes. Evos will just say anything without caring how ignorant it makes them appear. ET
They exclude evolution by means of intelligent design. To them all evolution has to be blind and mindless and evolution by design can never even exist. And yet that is what ID says- that organisms were intelligently designed with the ability to adapt and evolve. As Dr. Spetner wrote:
He [the Designer] indeed seems to have “carefully crafted” information in His species giving them the ability to respond to environmental stimuli to alter their own genome to adapt to new environments. He then evidently let them wander where they will with the ability to adapt.- Dr. Lee Spetner “the Evolution Revolution” p 108
ET
Yes, gpuccio. As I have said you are facing a steep, uphill climb with that group. But it should allow you to refine your arguments, anyway. They might not listen but others will. ET
ET at #452: So, for Rumracket it is no issue at all? Just iterations? Good to know! :) gpuccio
Et at #450: "The immune system was intelligently designed with the ability to do that, Steve. Producing the immune system via blind and mindless processes is what is impossible." Of course. Are they really offering that kind of arguments? gpuccio
Rumraket:
Another issue is that no attempt is made at evaluating the probability of the design event.
LoL! That's because the odds of a designing agency being able to design what they do is exactly 1 to 1. The probability of being dealt a hand of cards in a poker game is 1. The probability of someone rolling dice in a craps game is 1.
Whether you look at the evolutionary history of individual proteins, or the complete genomes in which the genes encoding these proteins are encoded, you can see how through many iterations, a sequence that looks like it is very unlikely and has high FI could have evolved. It’s really no issue at all.
Especially if you don't care about science. However science requires your claims to be testable and as of today, they are not. ET
Swamidass et al. (at PS): So, here is your question #1:
What empirical evidence do you have that demonstrates FI increases are unique to design?
I have explained that the connection is empirical, even if with a good rationale. I quote myself: Leaving aside biological objects (for the moment), there is not one single example in the whole known universe where FI higher than 500 bits arises without any intervention of design. On the contrary, FI higher than 500 bits (often much higher than that) abunds in designed objects. I mean human artifacts here. This is the empirical connection. Based on observed facts. Of course, you are not convinced. You ask for more, and you raise objections and promises of counter-examples. That’s very good. So, let’s go to my two statements. I will try to support them both. But in reverse order. My second statement is: “FI higher than 500 bits (often much higher than that) abunds in designed objects. I mean human artifacts here.” Your objection:
As a technical point, without clarifying precisely how FI is defined, this is not at all clearly the case.
But I have given a very precise definition of FI. What is the problem here? To be more clear, I will describe here the three main classes of human artifacts, designed objects, where “FI higher than 500 bits (often much higher than that) abunds”. They are: a) Language b) Software c) Machines The first two are in digital form, so I will use one of them as an example, in particular language. I have shown in detail how FI can be indirectly computed, as a lower threshold, for a piece of language. I link here my OP about that: An Attempt At Computing DFSCI For English Language https://uncommondesc.wpengine.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/ A clarification: dFSCI is the acronym I used for some time in the past to point to the specific type of information I was discussing. It means digital Functionally Specified Complex Information. It was probably too complicated, so later I started to use just Funtional Information, specifying when it is in digital form. The piece of language I analyze in the OP is a Shakespeare Sonnet (one of my favourite, I must say). My simple conclusion is that a reliable lower threshold of FI of such a sonnet is more than 800 bits. The true FI is certainly much more than that. There has been a lot of discussion about that OP, but nobody, even on the other side, has really questioned my procedure. So, this is a good example of how to compute FI in language, and of one object that has much more than 500 bits of FI. And is designed. Of course, Hamlet or any other Shakespeare drama have certainly a much higher FI than that. The same point can be easily made for software, and for machines (which are usually analogic, so in that case the procedure is less simple). So, I think that I have explained and supported my second point. If you still do not have a clear understanding of my definition of FI, and how to apply it to that kind of artifacts, please explain why. So, let’s go to my first statement. “Leaving aside biological objects (for the moment), there is not one single example in the whole known universe where FI higher than 500 bits arises without any intervention of design.” I maintain that. Absolutely. Your objection:
Why should we agree with this? It seems obviously false. What evidence do can you present to support this assumption? There are examples of non-designed processes processes we can directly observe producing FI. We can observe high amounts of FI in cancer evolution too, which you agree is not designed. We also see high amounts of FI in viruses, which you also agree are not designed. All these, and more, are all counter examples to your assumption.
OK, I invite you and anybody else to present and defend one single counter-example. Please do it. You mention two things that you have offered before. a) Cancer b) Viruses. I have already declared that cancer is not an example of a design system, and I maintain it. Technically, it is a biological example, but as I have agreed that it is not a design system, I am ready to discuss it to show that you are wrong in this case. I want, however, to clarify that I stick to my declared principle to avoid any theological reference in my discussions. I absolutely agree that cancer is not designed, but the reason has nothing to do with the idea that “God would not do it”. It is not designed because facts show no indication of design there. I am ready to discuss that, referring to your posts here about the issue. For viruses, I was, if you remember, more cautious. And I still am. The reason is that I do not understand well your point. Are you referring to the existence of viruses, or to their ability to quickly adapt? For the second point, I would think that it is usually a non design scenario, fully in the range of what RV + NS can do. I must state again, however, that I am not very confident in that field, so I could be wrong in what I say. For the first point, I am rather confident that viruses have high levels of FI in their small genomes, and proteins. They are not extremely complex, but still the genes and proteins, IMO, are certainly designed. So, are viruses designed? Probably. My only doubt is that I don’t understand well what are the current theories about the origin of viruses. My impression is that there is still great uncertainty about that issue. I would be happy to hear what you think. In a sense, viruses could be derived from bacteria or other organisms. Their FI could originate elsewhere. But again, I have not dealt in depth with those issues, and I am ready to accept any ideas or suggestions. Again, I have no problem with the idea that viruses may be designed. If they are, they are. So, my support of my first statement is very simple. I maintain that empirically there is no known example of non biological objects exhibiting more than 500 bits of FI that are not designed human artifacts, I invite everyone, including you, to present one counter-example and defend it. I am also ready to discuss your biological example of cancer. That requires, of course, a separate discussion in a later comment. For viruses, please explain better what is your point. The information in their genes and proteins is of course complex, and designed. Their adapttations, instead, as far as I can understand, do not generate any complex FI. gpuccio
Steve Shaffner wrote:
Every functioning human immune system finds a target with FI>500 bits. The target is a set of antibodies that can effectively respond to hundreds of different pathogens. If an analysis concludes that hitting such a target is impossible, the analysis is wrong.
The immune system was intelligently designed with the ability to do that, Steve. Producing the immune system via blind and mindless processes is what is impossible. https://discourse.peacefulscience.org/t/comments-on-gpuccio-functional-information-methodology/7560/78 ET
To all: I have posted at PS a graph documenting the two examples given in the summary about my methodology posted there. The two proteins are: 1) ATP synthase beta chain: an old protein that presents amazing similarity to the human form alredy in bacteria. In metazoa, the curve is almost horizontal. The FI here is older than that. 2) CARD11, a protein that, as we well know, presents an amazing FI jump at the transition to vertebrates. I add the graph here, at the end of the OP. gpuccio
Swamidass asks:
So what evidence can you produce for a pre-existing specification?
Even Richard Dawkins agrees that biological reproduction is the thing specified in advance: In The Blind Watchmaker Dawkins writes, “Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality is specified in advance is…the ability to propagate genes in reproduction.ET
Then there is the scientifically illiterate and totally clueless Faizal Ali- thankfully gpuccio doesn't have to deal with that one:
Yes, but it is clear @gpuccio has lots and lots of things he wants to tell us about before he gets to the big reveal, if he ever does. You know and I know what a big disappointment that is going to be. But this should prove to be a very enlightening demonstration of how ID’ers think science works.
LoL! What empirical evidence do THEY have that demonstrates FI increases can happen via blind and mindless processes? What empirical evidence do THEY have that demonstrates FI can arise via blind and mindless processes? They have no way to even test such things... ET
sfmatheson and Swamidass at PS: I have always been interested in the issue of functional divergence. One way to test for functional divergence could be to use my methodology in two separate branches of evolutionary history. The sequence configuration that is not shared in the two branches but is highly conserved in each branch would be a good candidate for functional divergence. I have tried something on that line in this OP of mine: Information Jumps Again: Some More Facts, And Thoughts, About Prickle 1 And Taxonomically Restricted Genes. https://uncommondesc.wpengine.com/intelligent-design/information-jumps-again-some-more-facts-and-thoughts-about-prickle-1-and-taxonomically-restricted-genes/ Another way, of course, is to prove the function of the non conserved sequence directly. Transcription Factors are a very good example of that. They have one or more DNA binding domains that are usually highly conserved. The rest of the molecule (often half of the sequence or more) is not very conserved. However, there are many indications that important functions, different from DNA binding, are implemented by that part of the protein. I have discussed a recent, very interesting paper about RelA, an important TF, which demonstrates how much of the function is linked to the non DBD part of the molecule. If you are interested, you will find my comment about that at #29 in the thread linked at the start of this thread: Controlling The Waves Of Dynamic, Far From Equilibrium States: The NF-KB System Of Transcription Regulation. https://uncommondesc.wpengine.com/intelligent-design/controlling-the-waves-of-dynamic-far-from-equilibrium-states-the-nf-kb-system-of-transcription-regulation/ gpuccio
sfmatheson at PS:
Your answer, I think, missed the point. The issue is not, as you seem to think (and I could be wrong) that neutral drift precludes design. (Edit for clarity: what I mean is that the challenge posed by @swamidass is not an assertion that drift cannot lead to design.)
Maybe I was not clear enough. My point about neutral drift is not that it precludes design, or not. Or that it leads to design, or not. My point is that neutral drift is irrelevant for FI and the design inference. I have also specified the reson for that. Neutral drift does not change any of the factors that influence FI and the design inference: a) It does not change the target space b) It does not change the search space c) It does not change the probabilistic resources of the system. IOWs, neutral drift neither precludes design nor leads to it, and it makes the generation of FI neither easier nor more difficult. I hope that is clear. But, of course, neutral variation (which is the result of neutral drift) is instead an important part of my methodology to measure FI in proteins. Indeed, as explained many times, my whole procedure is based on the two pillars of neutral variation and negative (purifying) selection. You say:
The issue is that drift (the key concept being neutral drift) will create sequence divergence that is uncoupled from function. In situations in which you have neutral sequence divergence, you have a nice negative control, which is one theme that @swamidass has emphasized without success. Any metric that supposes itself to measure “functional information” must be able to distinguish random drift from functional difference.
And you are right, of course. I have clarified those things myself, in comment #36. I quote myself: gpuccio at PS:
It should be clear that my methodology is not measuring the absolute FI present in a protein. It is only measuring the FI conserved up to humans, and specific to the vertebrate branch. So, let’s say that protein A has 800 bits of human conserved sequence similarity (conserved for 400+ million years). My methodology affirms that those 800 bits are a good estimator of specific FI. But let’s say that the same protein A, in bees, has only 400 bits of sequence similarity with the human form. Does it mean that the bee protein has less FI? Absolutely not. It probably just means that the bee protein has less vertebrate specific FI. But it can well have a lot of Hymenoptera specific FI. That can be verified by measuring the sequence similarity conserved in that branch for a few hundred million years, in that protein.
So, I will try to be even more clear. When we have a bitscore from Blast, we are measuring of course both conservation and divergence. We can divide that into 4 different components: a) Sequence conservation due to passive common descent. That is the component that we cancel, or minimize, by guaranteeing that we are blastin sequences separated by a very long evolutionary time, because practically all passive similarity will have been erased by neutral variation, if that part of the sequence is not functional. b) Sequence similarity preserved by negative (purifying) selection. This is what we measure by the bitscor, of the condition above mebtioned is satisfied. This is what I call a good estimator of FI. c) Divergence due to neutral variation and drift. d) Divergence due to different functional specificities in the two organisms. Let’s call this “functional divergence”. Now your point seems to be that my method cannot distinguish between c) and d). And I perfectly agree. But I have never claimed that it could. My method is aimed to measure that part of FI that is linked to long sequence conservation. Nothing more. That is of course only a part of the total FI. Let’s say that it is the part that can be detected by long sequence conservation. I have always made that very clear. My method underestimates FI, and this is one of the resons for that. So, we can consider my methodology as a good estimator of a lower threshold of FI in a protein. That’s perfectly fine for the purpose of making a design inference. When I find 1250 bits of detectable FI in CARD11, I can well infer design according to ID theory. If the FI is more than that, OK. But that value is much more than enough. As stated many times, ID is a procedure to infer design with no false positives, and many false negatives. False negatives are part of the game, and the threshold is conceived to guarantee the practical absence of false positives. Of course, functional divergence is a very interesting issue too. But it requires a different apporach to be detected. I will discuss that briefly in next post. gpuccio
For those readers interested in following GP’s discussion with PS, here are the associated post numbers: 343 Bill Cole 351 GP to Bill Cole 354 Bill Cole 356 GP to Bill Cole and PS 357 Bill Cole 360 Bill Cole 368 GP to PS 369 GP to Davecarlson 370 GP to JS 374 GP to UD 375 GP to JS 381 GP to JS 387 GP to JS 388 GP to JS 395 GP to JS 398 GP to JS 401 GP to Art Hunt 402 GP to Rumracket 406 GP to JS 408 GP to JS 411 GP to PS 416 GP to sfmatheson and JS 431 GP to JS 432 GP to JS 433 GP to JS 434 GP to glipsnort 438 GP to Art Hunt PeterA
Intelligent Design doesn't stand a chance against the magical shape-shifting feedback of natural selection. So sayeth the Peaceful Science faithful. :) ET
It is just getting worse. Now there is some sort of magic in natural selection and its feedback:
The same thing that’s wrong with every ID-Creationist probability calculation. You make the demonstrably false assumption extant proteins formed through purely random processes instead of the empirically observed process of gradual development shaped by selection feedback.
Cuz humans reproducing more humans will eventually lead to more humans... https://discourse.peacefulscience.org/t/comments-on-gpuccio-functional-information-methodology/7560/68 ET
@Mike1962. Link you requested: https://discourse.peacefulscience.org/t/gpuccio-functional-information-methodology/7549 equate65
@ Mike 439: https://discourse.peacefulscience.org/t/gpuccio-functional-information-methodology/7549 equate65
Someone please provide a link to the gpuccio/swamidass thread on the other website. mike1962
Art at PS:
I would note that the bits calculated in this cited essay cannot be equated with bits derived from either BLAST analyses or the equation for FI that has been given above. They are quite completely different, and the calculation in the essay simply does not provide a bit-based estimate of probabilistic resources.
Why not? I have considered the probabilistic resources of our planet as a higher threshold of the number of possible states visited by a super population of bacteria inhabiting our planet for 5 billion years and reproducing at a very high rate. This is of course an exaggeration, and a big one, but the idea is correct, I believe. The probabilistic resources of a systerm are the number of states that can be randomly reached. It is similar to the number of times that i can toss a coin. They can be expressed as bits, just taking the positive log2 of the total number of states. So, if I have a sequence that has a FI of 500 bits, it means that there is a probability of 1:2^500 to get it in one random attempt. If my system has probabilistic resources of 120 bits (IOWs, 2^120 states can be reached), the probability of reaching the target using the whole probabilistic resources is still 1:2^380. What’s wrong with that? Of course, as I have said, the Blast bitscore is not the FI. But, provided that the conditions I have listed are satisfied, it is a good estimator of it. Look also at my answer to glipsnort, that I have just published. Please, let me know what you think. Thank you. :) gpuccio
Yes, I get it and it will be frustrating for you. But you seem to have the patience required and it is always a good thing to have others look over your methodology. I am glad that they restricted the number of people that you have to respond to. ET
ET: I am trying to clarisy a few points. Let's see how it goes. It's not easy. If I can agree with Swamidass at least on some basic ideas, I will try to show him why his reasoning about cancer is wrong. It's strange how a simple concept like FI is so often misunderstood. See #434, for example. gpuccio
If you read Joshua's Computing the Functional Information in Cancer you can see that his concept of FI is NOT the same as what gpuccio is trying to discuss. It will be very difficult to continue the dialog until there is an agreement about what is being discussed. People are trying to grasp at straws to try to refute gpuccio's methodology and assumptions. ET
glipsnort at PS:
Consider a cartoon piece of sequence that is 100 bp in length. The sequence has a function, every basepair contributes to the function, and loss of the function is lethal to the organism. 0.1% of all randomly chosen sequences could serve the same function equally well. This means the sequence has 7 bits of FI, correct? 300 different sequences can be reached from the functional sequence by one single-base substitution – which is the only kind of mutation in this cartoon world. The most likely case, then, is that all 100 bases will be conserved through evolution. Will blast return a bitscore of 7 bits for a 100 bp exact match? ETA: Sorry, that’s 10 bits – I was taking the natural log.
I don't follow your reasoning. A sequence of 100 bp where each bp must be specific for the function to be present has, of course, a FI of 200 bits: Target space = 1 Search space = 4^100 Target space/Search space = 6,22302E-61 FI = 200 bits The FI expresses the probability of finding the target space from an unrelated state in one attempt. In this case, it is 1:4^100 I don't follow your reasoning. Of course, the perfect conservation of that sequence would inform us that the sequence has 200 bits of FI. Indeed, the bitscore of a 100 bp sequence against itself is 185 bits. Which seems good enough. gpuccio
Swamidass at PS:
This is helpful and distinguishes you from @Kirk. We agree that common descent can produce similarity, and that this would look like FI in your computation. We have had a hard time establishing this point with other ID luminaries. The way you account for this is by only looking at ancient proteins, where you hope that millions of years would be sufficient to erase this effect. How do you know 400,000 million years is long enough to erase this effect?
Usually, when you look at synonimous sites, it is very difficult to detect any passive sequence similarity after such a time. IOWs, we reach “saturation” of the Ks. The R function that I use to compute Ks usually gives me a saturation message for synonimous sites in proteins that have such an evolutionary distance. Moreover, the fact is rather obvious when we look at the very different behaviour of proteins in the transition to vertebrates. The example of CARD11 is really an extreme case. Many proteins have very low sequence similarity between fishes and humans. The human configuration, in those proteins, begins to grow at later steps. There are proteins that have big neutral components, and the neutral part in not conserved at all thorughout that evolutionary window. So, I have all reasons to believe that 400 million years are enough to make conserved information a good estimator of FI. Remember, we are not looking for perfection here. Just a good estimate. gpuccio
Swamidass: I will start with an easy point: your question about neutral drift. I think that will be valid for coevolution also. First of all, I am well aware of nutralism and of drift as important actors in the game. Indeed, you can see that my whole reasoning to measure FI is based on the effects of neutral variation and neutral drift. What else erases non functional similarities given enough evolutionary time? So, I have no intention at all to deny the role of neutral variation, of neutral drift, and of anything else that is neutral. Or quasi neutral. My simple point is: all this neutral events, including drift, are irrelevant to ID and to FI. The reason is really very simple. FI is a measure of the probability to get one state from the target space by a random walk. High FI means an extremely low probability if reaching the target space. Well, neutral drift does not change anything. The number of states that is tested (the probabilistic resources of the system) remains the same. The ratio of the targte space to the search space remains the same. IOWs, neutral drift has no influence at all on the probabilistic barriers. Why? Because it is a random event, of course. Each neutral event that is fixed is a random variation. There is no reason to believe that the mutations that are fixed are better than those that are not fixed, in the persepctive of getting to the target. Nothing changes. Look, again I am trying to answer briefly. But I can deepen the discussion, if you let me know what you think. Just a hint. To compute FI in a well defined system, we havt to compute, durectly or indirectly, three different things: The search space. That is usually easy enough, with some practical approximations. The target space. This is usually the difficult part, and it usually requires indirect approximations. The probabilistic resources of the system. FI (-log2 the ratio of the target space to the search space) is a measure of the improbability of finding the target space in one random event. But of course the system can try many random events. So, we have to analyze the probabilistic resources of the system, in the defined time window. This can usually be donne by considering the number of reproductions in the population, and the number of mutations. IOWs, the total number of genetic states that will be available in the system in the time window. I have discussed many aspects of these things in this OP: What Are The Limits Of Random Variation? A Simple Evaluation Of The Probabilistic Resources Of Our Biological World https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ I give here also the link to my OP about the limits of NS: What Are The Limits Of Natural Selection? An Interesting Open Discussion With Gordon Davisson https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ In those two OPs, and in the following discussions, I have discussed many aspects of the questions that are being raised here. Of course, I will try to make again the important points. But please help me. When what I say seems too brief or not well argumented, consider that I am trying to give the general scenario first. Ask, and I will try to answer. gpuccio
(Swamidass has been trying to help me by implementing a new policy of moderation and limiting the number of contributors to the direct thread to a few. Here is my comment about that.) I thank Swamidass and the others for the attention to my “risk of being overwhelmed”. I would like to answer everyone, or at least all those who offer interesting contributions, but the risk os real. I am one, and my resources are limited. So, I will profit of this new “anti overwhelming” policy, but I will also try to have a look at waht others say, and if possible answer them. I will answer different points that have been raised as it comes. There is so much to say. I don’t want to convince anyone, but I would like very much to clarify what I believe as well as possible. So, let’s start. gpuccio
Chris Falter chimes in unaware that evolutionary algorithms use telic processes- they do what they were intelligently designed to do. How do you deal with people like that? ET
Alan Fox:
Maybe someone could offer a definition of “functional information”.
That has been explained to Alan on numerous occasions. Stat with Crick's definition of information with respect to biology:
Information means here the precise determination of sequence, either of bases in the nucleic acid or on amino acid residues in the protein.
Functional information would be in the sequence specificity required to produce functioning protein X (whatever protein is being investigated). Methinks Alan just wants to muddy the waters. Alan then points to this paper https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4476321/ and apparently never read it. They start with an existing abundance of polypeptides- not one of which arose via blind and mindless processes- and found 4 out of 6x10^12 that bound to ATP- that is the function. Alan thinks that does something for blind watchmaker evolution and is somehow an argument against ID. Clueless. ET
ET, Ok. Thanks. BTW, here’s another interesting piece of information provided by Alexa: UD 631,311 578 Total Sites Linking In PT 1,732,931 950 Total Sites Linking In TSZ 3,215,461 37 Total Sites Linking In PS 7,036,059 12 Total Sites Linking In Note that PT has a lot more sites linking in but a lot less traffic than UD? jawa
OK Jawa. Yes I get it. I misread it as being the number of visits for whatever reason. ET
ET @414: Did you see my comment @420? Did you understand it? Thanks. jawa
It just gets worse. Now a Timothy Horton is bluffing and equivocating. It's sad, really: https://discourse.peacefulscience.org/t/comments-on-gpuccio-functional-information-methodology/7560/5 Timothy sez:
If computer code which produces the performance observed in the visual graphics isn’t a sequence then what is it?
Did nature write that computer code? No. Humans did. Meaning intelligent agencies did. Peaceful Science wallows in its own willful ignorance and cowardly equivocations. ET
Oh my. The key itself was created by design, Neil. The random number generator worked as it was intelligently designed to work. Nature, operating freely, did not produce the key. See, it’s comments like that which demonstrate gpuccio faces a huge, steep hill.
The bigger issue is the number generator producing two like sequences is extremely remote where biology can do this. This goes to the heart of what functional information really is especially what gpuccio is observing when he sees information jumps and then long term preservation. What caused the jump? What caused the preservation? bill cole
Oh my. The key itself was created by design, Neil. The random number generator worked as it was intelligently designed to work. Nature, operating freely, did not produce the key. See, it’s comments like that which demonstrate gpuccio faces a huge, steep hill.
The bigger issue is the number generator producing two like sequences is extremely remote where biology can do this. This goes to the heart of what functional information really is especially what gpuccio is observing when he sees information jumps and then long term preservation. What caused the jump? What caused the preservation? bill cole
As for cancer, any cancer cells that utilize fermentation are more primitive than the normal cells. That alone means there isn't any gain in functional information. A rock is a primitive hammer. It contains less FI than a real hammer. Also, it is obvious that cancer cells lost some specificity along the way. Their coding no longer specifies the type of cell it should be. Joshua will not listen to any of that, though... ET
The RSA key itself was mostly produced by a random number generator, with some filtering. Yes, you could say that the random number generator was designed. And you could say that the RSA cryptosystem was designed. Still, the key itself is mostly generated randomly, so does not seem designed.
Oh my. The key itself was created by design, Neil. The random number generator worked as it was intelligently designed to work. Nature, operating freely, did not produce the key. See, it's comments like that which demonstrate gpuccio faces a huge, steep hill. ET
ET @414:
what are you talking about? Last I checked 7 million (PS) > 643,000 (UD)- Or am I misreading 405?
Sorry, my fault. I should have explained the meaning of the values shown in that table. I should not have assumed that others will know what they mean without my clarification. As far as I understand it, those numbers mean website ranking in the entire internet. I admit that those numbers are hard to believe. I could not believe that there can be more than 7 million websites out there! That's a big wow!!! But I convinced myself using this easy approach: open the links provided @346 and read the information written in there. Then compare those 4 links with this: https://www.alexa.com/siteinfo/google.com Note that the Alexa Ranking value for Google is 1. Yes, ONE. That's it. That means that there are over 600K websites out there that are ranked higher than UD. That's why the ranking comparison @346 is done among comparable peers. Otherwise, it wouldn't make much sense, at least to me. Again, read the information provided by Alexa in those links for UD, PT, TSZ, PS and see more interesting stuff that is in there. Do you understand this now? If not, I'll try to explain it another way. Now you should understand what I wrote about UD having incomparably much more internet traffic than PS does. Hence this discussion with GP allows PS to be exposed to at least the viewers at UD. Now UD is exposed to PS viewers, but that shouldn't make a big difference in number of viewers, because they don't have that many anyway. jawa
ET
gpuccio and Bill Cole need to ask them for examples from their position that falls in with what they are saying about science and testability.
As far as I am concerned this discussion has been successful. Gpuccio's method was generally accepted as a way to estimate FI. The 500bit issue is secondary. The competing methods (to design or mind) that claim they can generate FI like neutral mutations will fall away in my opinion given a little time. I have asked them for a model several times and they have not produced one any better then Dawkins Weasel. bill cole
For those readers interested in following GP’s discussion with PS, here are the associated post numbers: 343 Bill Cole 351 GP to Bill Cole 354 Bill Cole 356 GP to Bill Cole and PS 357 Bill Cole 360 Bill Cole 368 GP to PS 369 GP to Davecarlson 370 GP to JS 374 GP to UD 375 GP to JS 381 GP to JS 387 GP to JS 388 GP to JS 395 GP to JS 398 GP to JS 401 GP to Art Hunt 402 GP to Rumracket 406 GP to JS 408 GP to JS 411 GP to PS 416 GP to sfmatheson and JS to be continued... PeterA
Jawa- It is easy to see what is going on over there. And it is obvious that Joshua is bluffing and equivocating. And now Rumraket wants gpuccio to demonstrate a negative all the while they never ante up and never demonstrate anything. Where is the demonstration that blind and mindless processes can produce 500 bits of FI? It doesn't exist. ET
sfmatheson at PS:
Speaking only for myself, I don’t think there is any reason to discuss design, or “ID theory,” in this context until the very basic questions asked by @swamidass are addressed. And I would reiterate that asking this question outside of even a basic phylogenetic analysis/approach is futile. It is not possible to talk meaningfully about “functional information” without these basic foundational tasks being done.
Swamidass at PS:
I agree. Rather than a primer on ID generalities, let’s focus on what specifically you @gpuccio are doing. For example, with your definition of design, there must be a pre-existing design. You are empirically based. So what evidence can you produce for a pre-existing specification? As @nwrickert, it seems obvious that this does not exist, at least not in an human accessible form.
I was answering to the remarks made here asking how I got to the design inference from the simple observation of complex FI. If you are not interested in the theory you seem to discuss so often here, please clarify that. But that seems not to be the case. I see that, as expected, my “primer on ID generalities” has already generated some fierce response. So, I think I will go on, and answer them. gpuccio
ET @412:
Joshua is already bluffing and equivocating. So unless he is called on his obviously false statements, this is going to go exactly as I predicted- nothing will come of it and PS will claim victory over ID.
This discussion is just starting. I think it's too early to draw conclusions. Let's wait and see. jawa
Jawa, what are you talking about? Last I checked 7 million (PS) > 643,000 (UD)- Or am I misreading 405? ET
ET @407:
UD could easily increase the number of viewers just by going toe-to-toe with PS. Meaning UD should take on the anti-ID diatribe and put PS in its place.
Looking at the data @405 we may see that the number of viewers at PS is incomparably much lower than the number of viewers here at UD. Therefore technically PS may benefit more -in number of viewers- from this discussion, because their website is now exposed to the incomparably much larger number of viewers at UD. However, UD viewers should benefit from the learning experience that this discussion may result in. Since probably most of what GP may present to the PS folks is already known to us, the main learning in this case may come from seeing how different arguments are being presented and how different folks approach this kind of discussion, depending on their individual perspectives. Any person interested in the topic of human communication should enjoy this inter-blogging chat. :) jawa
Joshua is already bluffing and equivocating. So unless he is called on his obviously false statements, this is going to go exactly as I predicted- nothing will come of it and PS will claim victory over ID. Joshua actually believes that cancer demonstrates a gain in FI. And yet all evidence says the opposite- that cancer cells are more primitive than the normal cells they evolved from. ET
To all at PS: So, the central core of ID theory is the following: Leaving aside biological objects (for the moment), there is not one single example in the whole known universe where FI higher than 500 bits arises without any intervention of design. On the contrary, FI higher than 500 bits (often much higher than that) abunds in designed objects. I mean human artifacts here. Therefore, if we observe FI in any object (leaving aside for the moment biological objects) we can safely infer a design origin for that object. That procedure will generate no false positives. Of course, it will generate a lot of false negatives. The threshold of 500 bits has been chosen exactly to get that type of result. If those points are not clear, we are not really discussing iD theory, but something else. This strong connection between high FI levels and a design origin has, of course, a rationale. But its foundation is completey empirical. We can observe that connection practically everywhere. The rationale could be expressed as follows: there is no known necessity law that generates those levels of FI without any design intervention. Therefore, FI in non design systems can arise only by chance. But a threshold of 500 bits is so much higher than the probabilistic resources of the known universe, that we can be sure that such an event is empirically impossible. The probabilistic barriers of getting 500 bits of FI are simply too high to be overcome. Well, that’s ID theory in a nutshell. I will come to the application to biology later. But I am confident that this simple summary will be enough for the moment to generate some answers. gpuccio
This discussion between GP and the folks at PS is really interesting. Let's hope that GP's interlocutors at PS remain polite and that they all -but specially Dr. JS- focus in on understanding well the important concepts GP is trying to explain. PeterA
John Mercer:
In science, the term “theory” refers to a scientific hypothesis whose empirical predictions have a long track record of being correct. Nothing of the sort exists for ID. You might have a hypothesis, but only if it makes clear empirical predictions.
None of that exists for blind watchmaker evolution. It seems gpuccio's opponents do not even understand how lame their position is. There aren't any predictions borne from blind and mindless processes- well maybe genetic diseases and deformities. There isn't any way to test the claim that those types of processes produced vision systems. gpuccio and Bill Cole need to ask them for examples from their position that falls in with what they are saying about science and testability. ET
Swamidass: Please, let me go on with some linear explanation of ID theory and my approach to it. Then I will answer your three questions. You may have noticed that I have proposed two different questions about what ID theory is about: What is the connection between complex FI and the design inference? How does that apply to biological objects? Now, if we want to understand each other, we have to focus first on the first question. To do that, we must for the moment forget biological objects. After all, they are the object we are discussing about: are they designed or do they arise by other mechanisms? So, we will for the moment consider the origin of biological objects undecided, and try to understand ID theory without any reference to biology. To do that, we need an explicit definition of design and of functional information. I have offered a lnk to my two OPs about those two definitions. So, I will just remind here that: Design is any process where some conscious intelligent and purposeful agent imprints some specific configuration to a material object deriving it from subjective representations in his consciousness. The key point here is that the subjective representation must precede its output to the materila oobject. FI is the number of bits required to implement some explicitly defined function. Any function can be used. FI is always defined in relation to the defined function, whatever it is. n object exhibits the level of FI linked to the function if it can be used to implement the explicitly defined function at the explicitly defined level. In general, an explicitly defined function generates a binary partition in a well defined system and set of possible objects: those that can implement it, and those that cannot. FI, in general, is computed as -log2 of the ratio of the target space (the number of objects that can implement the function) to the search space (the number of possible objects) in the defined system. More in next post. gpuccio
Jawa, UD could easily increase the number of viewers just by going toe-to-toe with PS. Meaning UD should take on the anti-ID diatribe and put PS in its place. ET
Swamidass at PS:
gpuccio: IOWs, this protein was highly and specifically engineered during the transition to vertebrates, and that precious FI has then been preserved up to now. Swamidass: Well that inference is not warranted. As we know, there are several mechanisms that produce FI in biological sequences. He would have to rule all of these out to make this inference.
I will start from this statement to deal with what I call the central/core of ID theory. You must understand that when I was requested to write a summary of my methodology to measure FI in proteins, I did exactly that. I did not include a complete description of ID theory. Of course, being a very convinced supporter of ID theory, it was very natural for me to conflate the measurement of very high values of FI with an inference to design, because that’s exactly what ID theory warrants. But now, having discussed in some detail the rationale of my measurement of FI in proteins, the focus can shift to ID theory itself. In brief, what is the connection between complex FI and design? And how does that connection apply to biological objects? An important premise is that my personal approach to ID theory is completely empirical. It requires no special philosophy or worldview, except some good epistemology and philosophy of science. It is, I believe, completely scientific. And it has no connections with any theology. It has always been my personal choice to avoid any reference to theological arguments in all my scientific discussions about ID theory. And I will stick to that choice here too. More in next post. gpuccio
Alexa Global Internet Traffic Ranking for UD peers: Site.........Aug 23.........Aug 26...........Last 3 days...........Last 90 days UD.........643,052........631,311...........UP.......11K.........UP..........127K PT........1,693,214......1,732,931..........DOWN..39K.........DOWN....270K TSZ......3,205,404......3,215,461.........DOWN..10K..........DOWN...702K PS........7,057,312.......7,036,059.........UP........21K (*).....DOWN....4.22M (**) Note that PS relative position has deteriorated considerably the last 3 months (**), but then it has improved the last 3 days (*). Could (*) be related to their recent exposure to UD readers in GP's thread? Hard to tell, but it could be related. :) PS. See the corresponding Alexa links @346. jawa
Rumracket at PS:
Whether some protein grew in size during the evolution of some clade or along some branch, or how conserved that protein is in that clade, seems to me to have little to nothing to do with whether it was designed. What am I missing?
He's missing: 1. "the central core of ID theory, which connects complex FI to the design inference." - GP 2. a comprehensive and coherent explanation for an unguided cause, including RV+NS+whatever else... 3. perhaps open-mindedness and willingness to understand? 4. maybe some basic logic reasoning too? :) jawa
OLV at #399: Sometimes I post at strange times! :) gpuccio
Rumracket at PS:
Whether some protein grew in size during the evolution of some clade or along some branch, or how conserved that protein is in that clade, seems to me to have little to nothing to do with whether it was designed. What am I missing?
You are missing the central core of ID theory, which connects complex FI to the design inference. I will get to that soon enough. gpuccio
Art at PS:
I may have more later, but for now it should be noted that the bit score from a BLAST search and the informational number of bits used to identify design may be two rather different things. The usage adopted by gpuccio and most ID proponents is just a -log2 transformation of the ratio of functional to all possible sequences. I don’t believe the bit score from a BLAST search is the same thing.
Hi Art, thank you for your contribution, that allows me to clarify some important point. You are right of course, the bitscore of a BLAST search and the value of FI as -log2 of the ratio between target space and search space are not the same thing. But the point is: the first is a very good estimator of the second, provided that some conditions are satisfied. The idea of using conserved sequence similarity to estimate FI is not mine. I owe it completely to Durston, and probably others have pointed to that concept before. It is, indeed, a direct consequence of some basic ideas of evolutionary theory. I have just developed a simple method to apply that concept to get a quantitative foundation to the design inference in appropriate contexts. The condition that essentially has to be satisfied is: sequence conservation for long evolutionary periods. I have always tried to emphasize that it is not simply sequence conservation, but that long evolutionary periods are absolutely needed. But sometimes that aspect is not understood well,/ so i am happy that i can emphasize it here. I will be more clear. A strong sequence similarity between, say, a human protein and the chimp homologue of course is not a good estimator of FI. The reason for that should be clear enough: the split between chimps and humans is very recent. Any sequence configuration that was present in the common ancestor, be it functionally constrained or not, will probably still be there in humans and chimps, well detectable by BLAST, just because there has not been enough evolutionary time after he split for the sequences to diverge because of neutral variation. IOWs, we cannot distinguish between similarity due to functional constraint and passive similarity, if the time after split is too short. But what if the time after split is 400+ million years, like in the case of the transition to vertebrates, or maybe a couple billion years, like in the case of ATP synthase beta chain in E. coli and humans? According to what we know about divergence of synonimous sites, I would say that time windows higher than 200 million years begin to be interesting, and probably 400+ million years are more than enough to guarantee that most of all the sequence similarity can be attributed to strong functional constraint. For 2 billion years, I would say that there can be no possible doubt. So, in this particular case of long conservation, the degree of similarity becomes a good estimator of functional constraint, and therefore of FI. The unit is the same (bits). The meaning is the same, in this special context. Technically, the bitscore measures the improbability of finding that similarity by chance in the specific protein database we are using. FI measures the improbability of finding that specific sequence by a random walk from some unrelated starting point. If the sequence similarity can be attributed only to functional constraint, because of the long evolutionary separation, then the two measures are strongly connected. Of course, there are differences and technical problems. We can discuss them, if you want. The general idea is that the BLAST bitscore is a biased estimator, because it always underestimates the true FI. But that is not the important point, because we are not trying to measure FI with great precision. We just need some reliable approximation and order of magnitude. Why? Because in the biological world, a lot of objects (in this case, proteins) exhibit FI well beyond the threshold of 500 bits, that can be conventionally be assumed as safe for any physical system to infer design. So, when I get a result of 1250 bits of new FI added to CARD11 at the start of vertebrate evolution, I don’t really need absolute precision. The true FI is certainly much more than that, but who cares? 1250 bits are more than enough to infer design. To all those who have expressed doubts about the value of long conserved sequence similarity to estimate FI, I would simply ask the following simple question. Let’s take again the beta chain of ATP synthase. Let’s BLAST again the E. coli sequence against the human sequence. And, for a moment, let’s forget the bitscore, and just look at identities. P06576 vs WP_106631526. We get 335 identities. 72%. Conserved for, say, a couple billion years. My simple question is: if we are not measuring FI, what are we measuring here? IOWs, how can you explain that amazing conserved sequence similarity, if not as an estimate of functional specificity? Just to know. gpuccio
For those readers interested in following GP's discussion with PS, here are the associated post numbers: 343 Bill Cole 351 GP 354 Bill Cole 356 GP 357 Bill Cole 360 368 GP 369 370 374 375 381 387 388 395 398 to be continued... PeterA
GP @ 397: Please, don't pay attention to my posts until after you're done with the interesting discussion you're engaged in with the folks at another website. That should keep you quite busy for some time. I don't know how long that discussion may take, but it could last longer than some of us may expect. BTW, I noticed that apparently you posted @387-388 after 1:00 am (Aug 26) in your time zone. Then you posted @395-397 before 11:00 am (Aug 26) in your time zone. I think you should not be pressured to post your comments or answers. You should do it at your own convenience. Whoever might be interested in what you have to say should wait patiently. I look forward to learning from your polite discussion with another website. I appreciate your time and effort to post your comments here too for the rest of us. Thus we don't have to visit another website to follow your discussion. That's very nice of you. I'm also posting information for the other folks that comment here and for your relatively numerous anonymous readers in this thread. :) OLV
Swamidass et al. : I see that many comments here are about the relationship of my analysis with a possible philogenetic analysis. I am trying to understand better what you mnean and why you suggest that. Maybe some further feedback from you would help. I will try to clarify a few points about my analysis that, apparently, are not well understood. My analysis is focusing on the vertebrate transition only because it is very easy to study it. A number of circumstances are particularly favorable, as I have tried to explain. In particular, the time pttern of the pertinent splits, and the presence of sufficient protein sequences in the NCBI database to make the comparisons, and of course, the very good data about the human proteome. But in no way I am trying to affirm that there is something special in the vertebrate transition. There is a lot of functional information added at that time, and we can easily check the sequence conservation of that information up to humans. However, the same thing probably happens at many other transitions. So, why do I find a lot of FI at the vertebrate transition? It’s because I am looking for FI specific to the vertebrate branch. Indeed, I am using human proteins as a probe, and humans are of course vertebrate. My analysis shows that a big part of the specific FI found in vertebrates was added at the initial transition. It is not cpmparing that to what happens in other branches of natural history. Just to be clear, I could analyze in a similat way the transition to hymenoptera. In that case, I would take as probes the protein sequences in some bee, for example, and blast them against pre-hymenoptera and some common ancestor of the main branches in that tree. I have not done that, but it can be done, and it would have the same meaning of my vertebrate analysis: to quantify how much specific FI was added at the beginning of that evolutionary branch. I am not saying that vertebrates are in any way special. I am not saying that humans are in any way special (well, they are, but for different reasons). It should be clear that my methodology is not measuring the absolute FI present in a protein. It is only measuring the FI conserved up to humans, and specific to the vertebrate branch. So, let’s say that protein A has 800 bits of human conserved sequence similarity (conserved for 400+ million years). My methodology affirms that those 800 bits are a good estimator of specific FI. But let’s say that the same protein A, in bees, has only 400 bits of sequence similarity with the human form. Does it mean that the bee protein has less FI? Absolutely not. It probably just means that the bee protein has less vertebrate specific FI. But it can well have a lot of Hymenoptera specific FI. That can be verified by measuring the sequence similarity conserved in that branch for a few hundred million years, in that protein. OK, time has expired. More in next post. gpuccio
OLV: Thank you for the interesting links. I appreciate them. As you can see, I am rather busy now with our kind interlocutors at PS, but I hope I can have a look at them as soon as possible! :) gpuccio
ET at #389: You are correct, there is some difference between Swamidass'approach and mine, and I believe it is due to some misunderstanding on his part of the concept of FI. At least, that was my impression when I had a look at his posts about cancer, mentioned above. It is true that I talk of single proteins, however in principle one can consider the possible states of a whole genome in reference to some function, and that's what he seems to do in the cancer example. However, one must be very careful about FI computation. But I don't want to anticipate this discussion, because I will deal with it after I have answered other comments at PS. Thank you for your continued attention. As expected, this parallel discussion is requiring much of my time, so I apologize if I will maybe be slower in answering comments. gpuccio
Joshua Swamidass at PS:
Great. That is really helpful. You are agreeing then that: Human-to-human genetic variation is a negative control. Viral evolution is a negative control. Cancer evolution is a negative control. You add the, appropriate, caveat that the horizontal transfer of genes is not a design based infusion of information. You also suggest as a negative control: Experiments that do not include “intelligent” selection (for example, Lenski’s experiment) “Intelligent selection” is poorly defined by I think I get what you mean. It seems also that, correctly, this would include both in silico simulations and in vitro experiments. Both approaches are valid “experiments” if conducted correctly. That is great news. From here, there are two ways forward I see. First, I want to hear your response to what we have written already about your analysis. That seems the place to start. This should clarify a great deal about your methodology and what you are precisely claiming. Second, after your methodology and refined and clarified, perhaps we will circle back to looking at some of these other cases. To have a preview of what this might look like, see here: Computing the Functional Information in Cancer . https://discourse.peacefulscience.org/t/computing-the-functional-information-in-cancer/1646 However, let’s not distract from the first point too soon. Looking forward to it.
OK, now I will answer the main points raised in the comments here, and then we can discuss your arguments about cancer. It will be a good way to explain better what FI is, how it should be used, and its role in design inference. I am looking forward to it, too! :) gpuccio
The Standard Graphical Notation for Biological Networks OLV
Check this out: Hit and Run Transcriptional Repressors Are Difficult to Catch in the Act Video  
Transcriptional repressors and activators may function by different mechanisms and may be resident or absent at different stages of gene regulation. In this contribution, an example is shown of how an activator and a “hit and run” repressor may function and how this would affect detection by assays such as chromatin immunoprecipitation.
Transcriptional silencing may not necessarily depend on the continuous residence of a sequence?specific repressor at a control element and may act via a “hit and run” mechanism. Due to limitations in assays that detect transcription factor (TF) binding, such as chromatin immunoprecipitation followed by high?throughput sequencing (ChIP?seq), this phenomenon may be challenging to detect and therefore its prevalence may be underappreciated. To explore this possibility, erythroid gene promoters that are regulated directly by GATA1 in an inducible system are analyzed. It is found that many regulated genes are bound immediately after induction of GATA1 but the residency of GATA1 decreases over time, particularly at repressed genes. Furthermore, it is shown that the repressive mark H3K27me3 is seldom associated with bound repressors, whereas, in contrast, the active (H3K4me3) histone mark is overwhelmingly associated with TF binding. It is hypothesized that during cellular differentiation and development, certain genes are silenced by repressive TFs that subsequently vacate the region. Catching such repressor TFs in the act of silencing via assays such as ChIP?seq is thus a temporally challenging prospect. The use of inducible systems, epitope tags, and alternative techniques may provide opportunities for detecting elusive “hit and run” transcriptional silencing.
OLV
More on comment @390: Pathway Studio OLV
Check this out: Biological Pathway Specificity in the Cell—Does Molecular Diversity Matter? Video
Biology arises from the crowded molecular environment of the cell, rendering it a challenge to understand biological pathways based on the reductionist, low?concentration in vitro conditions generally employed for mechanistic studies. Recent evidence suggests that low?affinity interactions between cellular biopolymers abound, with still poorly defined effects on the complex interaction networks that lead to the emergent properties and plasticity of life. Mass?action considerations are used here to underscore that the sheer number of weak interactions expected from the complex mixture of cellular components significantly shapes biological pathway specificity. In particular, on?pathway—i.e., “functional”—become those interactions thermodynamically and kinetically stable enough to survive the incessant onslaught of the many off?pathway (“nonfunctional”) interactions. Consequently, to better understand the molecular biology of the cell a further paradigm shift is needed toward mechanistic experimental and computational approaches that probe intracellular diversity and complexity more directly.
  OLV
Off topic commercial break ;) GP could use this tool to illustrate his arguments when explaining functional complexity to the folks at PS or in his future OPs here. :) How to draw a diagram for a biological pathway? Example OLV
Methinks that Joshua is not understanding the point. Or maybe I don't. But it seems to me that gpuccio is talking about individual proteins that have some sequence similarity (such that they may share a common ancestor?). Individual. Proteins. And I get that from the following:
a) I use Blast to measure sequence homology between proteins, in bits. I take the bitscore from the Blast algorithm as it is, with some consideration of the number of identities and similarities, too.
We are now saying that sequence homology = sequence similarity. SEQUENCE. of. proteins. with. some. similarity. The point? Joshua jumps to:
1-Human-to-human genetic variation is a negative control. 2-Viral evolution is a negative control. 3-Cancer evolution is a negative control.
All of the above need to be about specific proteins, if I am reading gpuccio correctly. So 1 would be about a specific protein or proteins found in two different humans and comparing the sequences. I am not sure how to go about 2, you would need some known parent and the decedents. And 3 would also be about specific proteins from the normal cell and the cancer cell. Right? Strange how I thought I knew what was being discussed and yet by reading Joshua's responses I am not sure. One of us is not in the same ballpark. ET
Joshua Swamidass at PS:
Thanks @gpuccio, but you are jumping the gun here. I’m asking if any of these cases would constitute a good negative control for you, not what you would compute their FI. If none of these cases work as negative controls what would?
Yes, those cases are not examples of FI. We can use them as negative controls. But also any case where a protein is passed from one species to another one without any relevant modifications can be a negative control. There is no variation of FI there (for that protein). There are a lot of cases like that. Indeed, any case of variation in a lab where no designed information is added to the system can be used as a negative control. For example, Lenski’s experiment can be considered as a good negative control, a non design biological system, because the design intervention there is rather limited (setting the lab system and the general rules), but no other specific information is added. Instead, any experiment where intelligent selection takes place would not be a good negative. gpuccio
Joshua Swamidass at PS:
It seems that, measuring FI the way some people measure it, there is more than 500 bits of FI differing between individual humans. We see more than this amount of FI develop in the evolution of cancer. Also in the evolution of viruses.
There are many things to say, and many interesting issues to be anwered in the comments that have been offered here. I am really not sure where to begin. So, let’s begin at this last statement of yours, hoping that it can help me clarify a few things about FI and its measurement. First of all, it should be clear thatr all the information we are discussing here is in digital form. That reakky helps, because it is much easier to measure FI in digital form. However, we need to know the reral molecular basis of our functions. That’s why I rarely discuss fossil, morphology and similar issues, and stick rather ot protein sequences. It’s the only way to begin to be quantitative about FI. Even so, it is not an easy task. I will just remind here that FI is defined as the minimum number of specific bits that, to the best of our knowledge, are necessary to implement some explicitly defined function. You will find more details about my definitions of design and FI here: Defining Design https://uncommondesc.wpengine.com/intelligent-design/defining-design/ and here: Functional Information Defined https://uncommondesc.wpengine.com/intelligent-design/functional-information-defined/ These are, indeed, the first two OPs I wrote for UD. I like to have my definitions explicit, before going on. Now, very briefly, FI is usually measured directly as the rate between the target space and the search space, as defined by the function. The measure is completely empirical, and it must be referred to some definite system and time window. The purpose, of course, is to possibly infer design fro some object we are observing in the system. We infer design if we observe that some object can implement a function, explicitly defined, hich implies at least 500 bits of specific FI. This is a very simlified definition, and we may need to clarify many aspects later. For the moment, it will be a starting point. But, of course, those 500+ bits of FI must arise in the system at some time, and must not be present before. IOWs, we need the appearance of new complex FI in the system, to infer a design intervention. So, just to be brief, I believe that none of the three examples you offer is an example of new complex FI arising in a system. I will briefly discuss the first two, avoding for the moment the example of viruses (I am not really expert about that, and I may need some better clarifications about what you mean). So, the first point. You say: " There is more than 500 bits of FI differing between individual humans". Well, the point is not if there is such a difference. The point is: what is the origin of such a difference? Let’s see. The basic reference genome and proteome are rather similar in all human beings. The FI there is more or less the same, and we can wonder how it came into existence. Much of it comes, of course, from “lower” animals, but some of it is human specific. In all cases, according to my theory, complex FI arose as the result of design, in the course of natural history. Then there are the differences. Of course humans are different one from the other. There are many reasons for that. First of all, the greatest part of that difference is generated in the course of human procreation. We know how the combination of two different genomes (father and mother) into one generates a new individual genome, with the further contribution of recombination. That is a wonderful process, but essentially it is a way to remix FI that is already there, in a new configuration. The process is combinatorially rather simple. I don’t see how it should generate new FI. I will be more clear. We would observe new FI if some individual, for some strange reason, received a new complex protein capable of implementing a new function, a protein which did not exist at all before in all other humans. Let’s say a new enzyme, 500 AAs long, that implement a completely new biochemical reaction and has no sequence similarity with any other previous protein. That would be new FI. Or the addition of a new function to an existing protein by at least 500 new specific bits, as some new partial sequence configuration which did not exist before. These are the things that happened a lot of times in the course of natural history. The information jumps. But there is nothing like that in the differences between humans. There are also differences due to variation. Mainly neutral or quasi neutral variation, which generates known polymorphisms, or simply individual differences. The online Exac browser is a good repository of them. And there are the negative mutations, genetic diseases. Nothing of that qualifies as new complex FI. Let’s go for the moment to the second point. You say: “We see more than this amount of FI develop in the evolution of cancer”. I don’t think so. Could you give examples, please? What we see in the evolution of concer is a series of random mutations, most of them very deleterious, that in some cases confer reproductive advantage to cancer cell in the host environment. But those mutations are combinatorially simple. They are usually SNPs, or deletions, duplications, inversions, translocations and so on. Simple events. Many of them, but still simple events. We are exactly in the scenario described and analyzed by Behe in his very good last book. Simple mutations affect already existing complex structures, altering their previous functions in sucvh a way that, sometimes, a relative advantage is gained. For example, a cell can escape control, and start reproducing beyond its assigned limits. I will just give an example. Burkitt’s lymphoma is caused, among other things, by a translocation involving the c-mych gene, a very important TF. The most common event is a 8-14 translocation. The event is very simple, but the consequences are complex. However, the change in FI is trivial. A single frameshift mutation can easily cancel a whole gene and its functions. Still, the molecular event is very simple. FI arises when more than 500 specific bits for a new function appear. That is about 116 specific AAs. Do you know of any cancer cell where a completely new and functional enzyme appears? That did not exist before? Well, that’s enough for the moment. gpuccio
Check this out: The pause-initiation limit restricts transcription activation in human cells
Eukaryotic gene transcription is often controlled at the level of RNA polymerase II (Pol II) pausing in the promoter-proximal region. Pausing Pol II limits the frequency of transcription initiation (‘pause-initiation limit’), predicting that the pause duration must be decreased for transcriptional activation. To test this prediction, we conduct a genome-wide kinetic analysis of the heat shock response in human cells. We show that the pause-initiation limit restricts transcriptional activation at most genes. Gene activation generally requires the activity of the P-TEFb kinase CDK9, which decreases the duration of Pol II pausing and thereby enables an increase in the productive initiation frequency. The transcription of enhancer elements is generally not pause limited and can be activated without CDK9 activity. Our results define the kinetics of Pol II transcriptional regulation in human cells at all gene classes during a natural transcription response.
  OLV
Don't the folks at PS realize that the biology research literature increasingly reveals more and more examples of enormous functional complexity of complex functionality, which serve as evidences of conscious purposeful design in the biological systems? Note GP's comment @367 in reference to the 3rd Way of evolution scientists:
I can see no credible “non design mechanism” that can generate complex FI. And RV + NS remains the only one that really deserves to be falsified. Those people are only trying to combine good insights about the limits of current theories with the dogmatic need to avoid any reference to design.
In that same comment GP wrote:
science is made with facts, not with imaginary possibilities.
OLV
@366 update Since this discussion started to mention PS, their global internet traffic has gone up. Good for them! :) This thread alone seems to attract more visits than the whole PS website. Having a public discussion with GP should look good in JS's CV. :) jawa
From Joshua:
It seems that, measuring FI the way some people measure it, there is more than 500 bits of FI differing between individual humans. We see more than this amount of FI develop in the evolution of cancer. Also in the evolution of viruses.
For humans we need to compare two proteins to see if there is that big of a difference. The same for cancer. It is my understanding that cancerous cells are more primitive than what they came from. So, if anything, they should represent a loss of FI. So it could be that the loss is greater than 500 bits. But then again it depends on the sequences compared and if the results are additive. For viruses, again, you need to compare two proteins with sequence similarity. ET
GPuccio, Your discussion with folks in another website seems interesting. I'll try to keep an eye on how it progresses. I look forward to learning from that exchange. Again, thanks for the time and effort you put into this. pw
Joshua Swamidass at PS:
Do you have any taxa you would agree are negative controls? That their evolution by common descent did not require designed infusion of information? For example, what about viruses? Subspecies of rhinos? What are some examples of groups of organisms you think could have arisen by common descent without an “infusion of design”? How did you determine these negative controls? If you can’t give us these negative controls, are you arguing that every change in organisms is an infusion designed, no mater how small?
It all depends on the complexity of molecular changes in FI. My analysis is absolutely quantitative. So, if two taxa have no relevant differences in FI, they could well have arised from non design mechanisms. But I believe that in most cases it would be difficult to prove a negative. What do you think? gpuccio
Unfortunately he is serious... ET
.
JSwamidass: "It seems that, measuring FI the way you do, there is more than 500 bits of FI differing between individual humans".
Hello? !! Is he serious? Upright BiPed
gpuccio, Could you please ask them what their mechanism is and how it can be tested- for inserting FI into proteins. abbastanza per favore ET
According to the references your terminology was fine. But "sequence similarity" is also OK. As you say the important thing is everyone is talking about the same thing. ET
ET: Well, about homology it was just a question of terminology. No big problem. I think we are set about that. Of course, the big problem, as you say, is about the ability of RV + NS to generate complex FI. We will se what happens about that! :) gpuccio
Joshua Swamidass at PS:
Some Parallel Questions From hearing about your work from others, and reading your articles, I wanted to clarify your position. It sounds like you: 1. Believe in an old earth. 2. Affirm common descent of humans with the great apes. 3. Are arguing that there is evidence for design in the origin of phyla. 4. You conceive design as information infused into the common descent process. Is this about right?
I will start with these easy questions, as I have not much time right now. Here are my answers: 1. Yes, definitely. 2. I affirm common descent of all living organisms, at least as the best current theory. Including, of course, humans. But, as explained many times, I believe that design acts on common descent to input the new functional information any time it is required. 3. I am arguing that there is evidence of design any time that we observe new complex functional information, higher than 500 bits for example, arise. Of course, also in the origin of phyla. 4. Yes, definitely. Well, that was easy. gpuccio
To all: OK, I have set up an account at PS. I will do as follows: I will go on answering here, and then I will paste the answer, as it is, at PS. I think the extra work is worth the while, because the discussion there seems interesting. I invite you all to stay tuned! :) gpuccio
This was expected but it is still sad:
Of course, we know there are other ways that FI can accrue and that common descent with neutral drift can inflate FI estimates. Gpuccio, it seems, does not know about these other mechanisms and complications to his analysis. Not knowing these things, he does not have a way to explain FI increases without recoursing to design.
The entire world awaits the evidence that blind and mindless processes are capable, Joshua. Present it or admit that you are just grasping for straws. Peer-review is absent evidence of blind and mindless processes producing proteins. ET
Wikipedia on "homology":
Homology among proteins or DNA is inferred from their sequence similarity. Significant similarity is strong evidence that two sequences are related by divergent evolution from a common ancestor.
Then there is An Introduction to Sequence Similarity (“Homology”) Searching:
Sequence similarity searching, typically with BLAST (units 3.3, 3.4), is the most widely used, and most reliable, strategy for characterizing newly determined sequences. Sequence similarity searches can identify ”homologous” proteins or genes by detecting excess similarity – statistically significant similarity that reflects common ancestry. This unit provides an overview of the inference of homology from significant similarity, and introduces other units in this chapter that provide more details on effective strategies for identifying homologs.
and
Sequence similarity searching to identify homologous sequences is one of the first, and most informative, steps in any analysis of newly determined sequences.
So it looks like scientists do use sequence similarity to determine if two proteins are homologous. ET
GPuccio @367: I appreciate your answers. pw
Joshua Swamidass at PS:
The hypothesis he is testing is interesting
Thank you! Seriously, this is one of the biggest acknowledgments I have ever received from the other side.
Essentially, he is arguing there was a large jump in FI in proteins at the vertebrate transition.
Yes. Indeed, in many proteins. CARD11 is just one example.
It seems that he needs to run some systematic controls to be sure. It would be interesting to see the conservation (in bits) between homologs of the protein at different taxa levels.
That’s exactly what I have tried to do. My comment for your blog is, of course, only a brief summary with a couple of examples. I have described in detail my results for vertebrates in this OP: The Amazing Level Of Engineering In The Transition To The Vertebrate Proteome: A Global Analysis https://uncommondesc.wpengine.com/intelligent-design/the-amazing-level-of-engineering-in-the-transition-to-the-vertebrate-proteome-a-global-analysis/ In brief, I have tested the whole human reference proteome against 9 groups of organisms, chosen, with some practical compromise, to represent the natural history of metazoan. For each human protein, a blast comparison was made versus all the protein sequences present in the NCBI database for that group of organisms, and the best hit chosen. I used the donwnaded version of Blast to perform the comparisons automatically. So, my database has the best hit of each human protein with each class of organisms, in terms of bitscore, bits per aminoacid, and difference with the previous class. I use that database for all my analyses, and the R software to analyze results and draw graphics. You can read in the OP above mentioned that I have found about 1.7 million bits of new FI appearing in vertebrates, just at the start of their natural history.
If we looked that graph, would we see a discontinuity at vertebrates? Maybe, but it does not seem he has collected enough information to be sure. I would like to see the graph.
You can find the general graph with the mean values, in baa, for each class, in the above mentioned OP (Fig. 1). You can see the general jump in vertebrates, which is better analyzed in the following figures. However, the important point is not only the generic jump, but the fact that the significant jumps can be found in specific classes of proteins, especially those involved in immune response and brain development. That is good confirmation that my methodology is really measuring the relevant information novelties. Your next statement deserves a rather more detailed answer, so I will have to postpone it to next post (as soon as possible). gpuccio
Davecarlson at PS:
Obviously this is not among the most important critiques, but Gpuccio seems to not understand what homology means. It’s a binary state. Two sequences (or characters) are either homologous or they are not. It’s nonsensical to assess the “degree” (e.g., high or low) of homology. I presume he means sequence identity, which is really not at all the same thing.
Well, thank you for the correction. I am a medical doctor, and not a biologist, so some imprecision in terminology on my part can be reasonably expected. I will be happy to acknowledge any well explained correction. Yes, I have been using the word “homology” to mean the degree of sequence similarity between protein sequences, as measured for example by the Blast bitscore. I was not aware that the word should be reserved to the binary condition of being or not a homologue. So I will, in the future, use the form “conserved sequence similarity” instead of “conserved homology”. I would not use “sequence identity”, because identities are not the only component of the bitscore, even if certainly the major component. However, I can see that in a later comment John_Harshman comes to my (partial) rescue, stating: “As for the confusion of homology with similarity, that seems common among molecular biologists, not just Gpuccio.” Thank you. Being human, I can find some small consolation in not being alone in my errors! gpuccio
To all at PS (and here): First of all I would like to thank Joshua Swamidass and all the kind interlocutors at Peaceful Science for taking the time to consider my writing. I have read the first few comments, and they are very interesting and stimulating. As agreed with Bill Cole, who is the stimulator of this discussion and whom I thank sincerely, I will answer here to the main arguments raised at PS, because I do want that my answers be well visible to the people here at UD. I hope this "parallel" discussion (already realized in the past with TSZ) may be comfortable for the guys at PS too. Bill can of course reference my comments at PS, if he wants to. So, let's start. gpuccio
Pw at #363: Good questions! Here are my answers: 1) "Could it be that smaller FI jumps that your method underestimated have accumulated through time leading to the larger jumps that your method is sensitive to? Could those mini-jumps be explained by RV combined with NS (either negative or positive)?" I work with what can be detected. Of course I have to reason about classes of organisms. When I blast against all deuterostomes that are not vertebrates, and then against cartilaginous fishes, as explained, I am dealing with classes that are rather near in evolutionary history. The pertinent splits occurred, reasonably, in a rather short evolutionary time. There is no trace of evolutionary intermediates of the specific sequences that appear in vertebrates. Moreover, as explained many times, there is no reason in the world that a very complex function, involving at least 1200+ bits of FI in the form we can observe, can be deconstructed into many simpler steps, each of them naturally selectable, and of which there is no trace at all. As said, science is made with facts, not with imaginary possibilities. 2) Yes, I definitely have objections. I can see no credible "non design mechanism" that can generate complex FI. And RV + NS remains the only one that really deserves to be falsified. Those people are only trying to combine good insights about the limits of current theories with the dogmatic need to avoid any reference to design. 3) "Do you consider those vaguely alluded regulatory mechanisms also part of the design paradigm proposed by ID, in addition to your method to quantify large FI jumps in proteins between species or through natural history?" Of course. FI in protein sequences is only part of the general scenario. There is a lot of FI, probably a lot more, at other levels, especially regulation networks. However, FI in protein sequences is easier to measure,and that's why I usually stick to it. gpuccio
@346 update Alexa Global Internet Traffic Ranking for UD peers: UD 643,052 647,842 PT 1,693,214 1,734,036 TSZ 3,205,404 3,220,427 PS 7,057,312 7,058,378 All these blogs dropped their ranking positions since posted @346 but UD and PS dropped relatively less. :) jawa
It’s past 11am in most parts of Europe. Why is the time stamp so different? Where is that time in? Just curious. jawa
Does somebody understand how could such a boringly dry and colorless biology/stats-related discussion like this attract so many visits and comments compared to the more entertaining philosophically colorful discussions that run parallel to this? Are there so many science nerds out there? :) Also, since Bill Cole has become GP’s ambassador in other not-so-friendly websites, more curious folks could be looking at this discussion here, although perhaps the real beneficiaries are the other blogs that appear to have less traffic than UD, according to Alexa data. :) jawa
GPuccio, I appreciate your detailed clear explanations and agree that your arguments are strong. However, I have some questions that would like to ask you: 1. Could it be that smaller FI jumps that your method underestimated have accumulated through time leading to the larger jumps that your method is sensitive to? Could those mini-jumps be explained by RV combined with NS (either negative or positive)? 2. Some scientists like J. Shapiro (U Chicago), D. Noble (U Oxford), E. Koonin (NIH), claim that there are other non-design natural mechanisms besides RV and positive NS that could explain those big jumps of FI that your method detects. Do you have any objection to their ideas? 3. Early in your current OP you point to another very interesting OP you wrote a year ago, where you vaguely allude to various possible regulatory mechanisms that could explain the specificity of transcribing a particular set of genes instead of others at a given time. I could quote that part of your text if you want me to. Do you consider those vaguely alluded regulatory mechanisms also part of the design paradigm proposed by ID, in addition to your method to quantify large FI jumps in proteins between species or through natural history? Could you elaborate more on those transcriptional regulation issues? pw
ET: “gpuccio, do you assume “long evolutionary periods” just to see if their scenario is plausible? The same for the alleged “transition to vertebrates”- do you just assume it happened to test your hypothesis under their scenario?” Well, I would simply say that I assume that part of “their scenario” which is also “my scenario”. IOWs, as you know, I do believe: a) That there is some physical descent of the protein sequences, even if practically all the new functional information is introduced by design, at specific times and places b) That neutral variation occurs all the time c) That negative (purifying) selection is a powerful and ubiquitous mechanism that helps preserve functional sequences, contrasting the effects of neutral variation, while non functional sequences are constantly changed, at a very low pace, by neutral variation itself. These points I do accept. What I do not accept is: 1) That the appearance of new functional information can be explained by RV and positive NS 2) That descent with non designed modifications can explain biological functional information 3) That positive NS has any non trivial role in the general scenario (as you know, I do believe that it plays some occasional trivial role in adaptation, as well explained by Behe in his last book) That said, I use the term “long evolutionary periods” just in the sense of “long periods in natural history”. The word “evolution” does not mean anything to me, if we do not specify how that evolution happens. Evolution is the simple, undeniable fact that new species appear in natural history, and that there is some relation (for example, of increasing complexity) between them. The problem is to explain how that “evolution” happens, and I have no doubts that it happens by design. However, as explained many times, the concepts of physical continuity of the protein sequences, of neutral variation and of negative selection are central to my reasonings: I use those assumptions to show that homologies conserved through long periods in natural history are a good estimator of functional information. “Have you ever taken your “old friend, the beta chain in ATP synthase” and calculated the functional information it contains, comparing it to the most basic polypeptides nature is known to have produced? That would be to see how steep Mt. Improbable is for that necessary beta chain.” Of course, but a direct calculation is extremely difficult, probably impossible at present. That’s why I, like Durston, use an indirect method based on conserved homologies. But the purpose is always the same: to measure FI, IOWs how steep Mt. Improbable is for some sequence. My procedure is aimed exactly at that, and I do believe that the measures of FI (and therefore improbability) that I get are good and reliable, even if certainly biased in the sense of a severe underestimation of the true FI, as I have explained many times. gpuccio
Just mentioning PS here in this discussion thread, which is currently the most visited* in UD, which is stratospherically more visited** than PS, should help PS to get more exposure to potential visitors. :) The exchange between GP and PS through Bill Cole, is a tremendous click magnet for PS. :) (*) check this out Popular Posts (Last 30 Days)
Controlling the waves of dynamic, far from… (1,623) Darwinist Jeffrey Shallit asks, why can’t… (1,410) Are extinctions evidence of a divine purpose in life? (1,303) Apes and humans: How did science get so detached… (976) “Descartes’ mind-body problem” makes nonsense of materialism (963)
(**) see post @346 jawa
Gpuccio Can you contact me at colewd@aol.com to discuss the Peaceful Science exchange? Thanks bill cole
Just a note about the people you will be dealing with @ PS- They don't understand why ID isn't a mechanistic theory and they say it prevents ID from being science. They can't grasp the simple facts that the science of ID pertains to identifying and studying design in nature. And that we don't even ask about the specific mechanisms used until we do so. All we can say is that intelligent agency volition is the mechanism for ID. ET
gpuccio, do you assume "long evolutionary periods" just to see if their scenario is plausible? The same for the alleged "transition to vertebrates"- do you just assume it happened to test your hypothesis under their scenario? Have you ever taken your "old friend, the beta chain in ATP synthase" and calculated the functional information it contains, comparing it to the most basic polypeptides nature is known to have produced? That would be to see how steep Mt. Improbable is for that necessary beta chain. ET
Thanks Gpuccio. I will send it to Josh. bill cole
Bill Cole (and all): OK, so here is a brief primer about my methodology to measure Functional Information in proteins: a) I use Blast to measure sequence homology between proteins, in bits. I take the bitscore from the Blast algorithm as it is, with some consideration of the number of identities and similarities, too. b) I am interested in homologies that are conserved throughout long evolutionary periods. I consider that kind of homology as a very good estimator of FI. The reason is very simple: a specific sequence can be conserved for those long time windows only if it is under very strong functional constraint, and is therefore preserved by negative (purifying) selection. In all other cases, sequence homologies are practically cancelled after a long evolutionary time because of the constant effect of neutral variation. c) How long must the time window be so that sequence homology may be considered a good estimator of FI? I would say at least 200 – 400 million years. Better if 400. Why? Because that’s more or less the time window that is usually associated with “saturation” of synonymous sites, IOWs with the more or less complete loss of any detectable homology in neutral sequences. d) In particular, I am interested in “information jumps” in proteins at specific evolutionary times, especially at the transition to vertebrates. e) That transition is supposed to have happened more than 400 million years ago, providing therefore a good time window for our reasonings. Moreover, the split between pre-vertebrates and the first vertebrates, and the following split between cartilaginous fishes and bony fishes, happened reasonably in a relatively short time more than 400 million years ago. As we will see, this is a very good context to measure information jumps in proteins. f) So, let’s be practical. I take some protein in the human form. IOWs, I use the human sequence as my initial “probe”. g) Then I measure, for the specific task of studying the transition to vertebrates, the sequence homology between the human protein and all pre-vertebrates, in particular deuterostomes and chordates non vertebrates. I take the bitscore value of the best hit. This value represents the best assessment of sequence homologies existing before the appearance of vertebrates. The value can be very low, or medium, or high. Whatever it is, that sequence information was already there. h) Then I measure the sequence homology between the human protein and the proteins in cartilaginous fishes. I take the bitscore valure of the best hit. Again, it can be low, medium or high. This is the sequence homology that is present at the beginning of vertebrate history, before the split between cartilaginous fishes and bony fishes. That, again, is supposed to have happened 400+ million years ago. This value is important, because humans derive from bony fishes. Therefore, any homology found between cartilaginous fishes and humans predates the split between cartilaginous fishes and bony fishes. IOWs, any such homology was alredy present in the common ancestor of fishes, and therefore it has been conserved for 400+ million years. i) Finally, I make the difference, in bits, between the bitscore from h) and the bitscore from g). This is the “information jump”, IOWs the sequence homology (to the human form) that has been “added” in the transition to vertebrates. And it is also a very good estimator of the functional information jump (more precisely, of the jump in human conserved FI), IOWs of the human conserved FI that was added to that protein in the transition to vertebrates, because both the homology measured from h) and the homology measured from g) are sequence homologies that have been conserved from 400+ million years. This is the general idea. Now, an example. Let’s consider for a moment an old friend, the beta chain in ATP synthase. We know it is a very conserved proteins, with a very high homology between the human sequence and the sequence in bacteria. Certainly a lot of FI there. Now, this is a 529 AA long protein. Let’s say that we want to apply our methodology to see what happens to that protein at the vertebrate transition. OK, blasting the human sequence (P06576) against non vertebrate Deuterostomia and Chordates, the best hit is 866 bits (Acanthaster planci), a starfish. Now, let’s blast it against cartilaginous fishes. The best hit is 929 bits (Callorhincus milii). So, the information jump at the transition to vertebrates, for that protein, is 929 – 866 = 63 bits. 0.12 bits per AA (baa). Very low indeed. That simply means that the protein was already almost identical to the human form in pre-vertebrates (87% identities). IOWs, the FI was already there, and no big information jump takes place at the vertebrate transition. Now, let’s do that again with the protein CARMA1/CARD11, of which we have discussed in this thread. This is a 1554 AA long sequence, in the human form. Again, let’s blast it against Deuterostomia and Chordates that are not vertebrates. The best hit is 234 bits (Saccoglossus kowalevskii). 0.15 baa. A very low score, for such a long protein. That means that the specific sequence found in humans was almost completely absent in pre-vertebrates. Now, let’s blast it against cartilaginous fishes. The best hit is 1514 bits (Callorhincus milii). That means that, even if the shark protein is still different from the human form, about half of the potential sequence information is already there. More than 400 million years ago. So, how big is the jump in human conserved FI? Easy. 1514 – 234 = 1280. IOWs, about 1280 bits of FI have been added “de novo” in vertebrates, and then conserved for more than 400 million years. That’s a very big information jump. IOWs, this protein was highly and specifically engineered during the transition to vertebrates, and that precious FI has then been preserved up to now. OK, I hope that helps. gpuccio
Bill Cole- Why even bother? You should be asking the people of Peaceful Science for the evidence that supports blind watchmaker evolution along with the means to test its claims. If they don't like the design inference with respect to any bacterial flagellum ask them for a non-telic mechanism that can produce such a structure. Then ask them how to test it. ET
OK, for the moment I will do as follows. I will give you, in next post, some simple example of my methodology. You can post it there, if you like, and see what happens. Let’s start with that. Thanks gpuccio. I think this works. We will work it like we did at TSZ. bill cole
Jawa- "They" will not come here. "They" are comforted by their willful ignorance. It's a shame, really. But if "they" are so unwilling to understand ID I say let them wallow in their ignorance. ET
Gpuccio, Your comment @351 is very helpful and timely. Thanks! As I said @345, it seems like Bill Cole’s suggestion @343 is an interesting idea. However. I fully agree with your reasoning @351 and support your suggestion, which makes more sense. Also thank you for your time and effort to write OPs and extensive follow up commentaries, which have taught me and perhaps others -including many anonymous readers- the basis of your clever approach to quantifying and detecting large jumps of complex functionally specified information within proteins. This seems like a very strong argument for ID. Well done! PeterA
Bill Cole at #343: Well, I don’t know. Of course it’s easy to explain my methodology, and I have done that many times here. The problem is not with the explaining, It is with the follow-up discussion. As you know, I have been posting and discussing on “hostile” blogs in the past, both directly (at TSZ, and before that at Mark Frank’s blog), and indirectly (at TSZ, with some long parallel discussions, a couple of times, I believe). Indeed I like intellectual confrontation, even fight (good fight of course). But the problem is, those discussions are always long and very exacting, in terms of my time and resources. And I want that my arguments, and my defenses of them, may be followed here, at UD, where the majority of ID friends come. Because, in the end, it’s for them that I write, mainly. I have scarce hope of convincing anyone on the other side, even of some tiny bit of my arguments, and of the credibility and weight of ID. It’s not that I am a cynic, I am just speaking from experience. Those people do not seem to be really available to recognize good arguments from the other side, whatever they may be. Even the best of those people. So, it is always a rather unilateral discussion. The role of my interlocutors seems to be of just arising objections, some good, some definitely not, and allow me to clarify. That’s good, of course, but in the end the real target of my efforts are always the ID people. And the ID people are here. Again, the problem is not so much with moderation. I am not worried of those interlocutors who just offend and make sarcastic jokes of no relevance. Those I can simply ignore. My concern is with the good interlocutors, those who offer “good” objections, more or less, and in good faith. Those people, of course, deserve answers, and all my attention. I have always tried never to abandon a discussion that is going on in good faith, and with some reasonable respect between the interlocutors. So, answering the good objections is the real work, and it is hard work in terms of time and resources. I do that willingly, happily I would say, here, because I know that here there are people really interested in ID. But I have not the time and resources to do that both here and elsewhere. OK, for the moment I will do as follows. I will give you, in next post, some simple example of my methodology. You can post it there, if you like, and see what happens. Let’s start with that. gpuccio
Jawa: “Sven Mill posted a few hostile comments against GP in this discussion, but eventually ran for the exit doors and hasn’t come back to respond GP’s comments @270 and @272.” That’s speculation. You don’t know why the guy hasn’t come back. Be more respectful of others. Obviously his incoherent arguments didn’t make a scratch on GP’s clear explanations, but the guy’s conspicuous absence may be unrelated to his lack of valid arguments. Let’s leave those objectors alone. Some of us would not mind seeing them in the discussion, but their presence isn’t require here. Get over it and move on. There’s much work to do reviewing the increasing number of biology research papers that are revealing very interesting facts that obviously point more and more to conscious purposeful design. GP is doing this very well. PeterA
ET, Whoever has an objection to what GP is saying here may post their arguments right here and engage in a politely serious discussion with GP. Sven Mill posted a few hostile comments against GP in this discussion, but eventually ran for the exit doors and hasn’t come back to respond GP’s comments @270 and @272. BTW, PS is doing poorly according to Alexa numbers posted @346. They seem to struggle trying to attract readers. Note that the veteran PT is far below UD in the global ranking. TSZ is even farther below PT. But PS is definitely the lowest of them. Maybe the climate change has affected them all so dramatically? :) They should wake up and smell the flowers. :) jawa
Bill Cole- Send them here. They won't come because they need their willful ignorance and to control the dialog. It's clear they do not want to understand what ID says and what it is arguing against. Sad, really. ET
Off topic: Emerging Role of Eukaryote Ribosomes in Translational Control Nicole Dalla Venezia, Anne Vincent, Virginie Marcel, Frédéric Catez and Jean-Jacques Diaz * Int J Mol Sci. 2019 Mar; 20(5): 1226. doi: 10.3390/ijms20051226 https://www.mdpi.com/1422-0067/20/5/1226 OLV
Bill Cole: Is there anybody looking at PS out there? :) Alexa Global Internet Traffic Ranking for UD peers: UD 643,052 https://www.alexa.com/siteinfo/uncommondescent.com PT 1,693,214 https://www.alexa.com/siteinfo/pandasthumb.org TSZ 3,205,404 https://www.alexa.com/siteinfo/theskepticalzone.com PS 7,057,312 https://www.alexa.com/siteinfo/peacefulscience.org jawa
Bill Cole @343: That seems like a very interesting idea. PeterA
This is intriguing, isn't it? 1.5 month since posted, this OP remains in the hit parade. :) Even after Sven Mill ran away from here, making this thread an echo chamber. :) Even without distinguished objectors like Art Hunt and Larry Moran heating up the discussion. :)
Popular Posts (Last 30 Days)
Controlling the waves of dynamic, far from… (1,436) Darwinist Jeffrey Shallit asks, why can’t… (1,400) Are extinctions evidence of a divine purpose in life? (1,297) Apes and humans: How did science get so detached… (975) “Descartes’ mind-body problem” makes nonsense of materialism (960)
jawa
Gpuccio Would you be willing to describe your methods of measuring FI on peaceful science? I could try to negotiate a highly moderated thread. Bill bill cole
"semiotic polymorphism"? No wonder the "3rd Way of evolution" folks are looking desperately for an quick extension to the neo-Darwinian theory. At the pace you keep citing more papers revealing complex functionality and functional complexity within biological systems, soon they won't know how to explain anything in biology, therefore will be forced to stick to the old established doctrine: RV+NS did it. :) OLV
To all: Speaking of semiotic polymorphism, this CBM signalosome, and the amazing CARDs involved in it, are a really good example of that concept. Look, for example, at this article (one of the 15 menttioned at #332, and already mentioned by OLV at #335): CARMA3: Scaffold Protein Involved in NF-?B Signaling https://www.frontiersin.org/articles/10.3389/fimmu.2019.00176/full Now, don’t be confused. We humans are simply adding some unnecessary complexity to the topic by calling these fascinating proteins with two different names: CARD proteins (Caspase recruitment domain proteins) or CARMA proteins (Caspase recruitment domain and membrane-associated guanylate kinase-like proteins) They are, however, the same thing. Always to clarify, here is the corresponding nomenclature for the four proteins discussed in the Research topic quoted at #332: CARMA1 = CARD11 (1154 AAs) CARMA2 = CARD14 (1004 AAs) CARMA3 = CARD10 (1032 AAs) CARD9 seems to be, simply, CARD 9 (536 AAs) OK, the paper mentioned here deals with CARMA3 in particular, but gives some interesting background information about the first three proteins:
Although CARMA family proteins share a high degree of sequence and structural homology, they are transcribed by different genes and expressed in different tissues. Specifically, CARMA1 is primarily expressed in the hematopoietic system, including the spleen, thymus and peripheral blood leukocytes; CARMA2 is expressed in the mucosal tissues and skin; and CARMA3 is expressed in a broad range of tissues, particularly at high levels in lung, kidney, liver and heart, but not in hematopoietic cells
So, let’s try to understand, maybe giving a look at Fig. 1 in the paper. a) These 3 proteins are similar, and share similar domains. True, but beware: these are very different proteins, with individual sequences. A simple Blast shows that they share, at most, 400 bits of homology (in the human form), and that is really very little, for one thousand AA long proteins. b) They are expressed from different genes and in different tissues. We already know that CARMA1/CARD11 is expressed in the lymphoid tissue, both B and T lymphocytes. CARMA2/CARD14 is expressed, instead, in the skin and mucosa. CARMA3/CARD10, finally, is expressed in the lungs, heart and liver. Such a strict compartmentalization is, in itself, very interesting. c) Different tissue expression, of course, is connected to different roles. We already know that CARMA1/CARD11 is essential in the transmission of the specific immune signal in B and T lymphocytes, from BCR and TCR to the NF-Kb system. Its defects are involved in extremely serious hereditary immune definciencies. CARMA2/CARD14, on the other hand, “plays a critical role mediating IL-17RA signaling in keratinocytes”, and an anomaly of ots working seems to be connected to psoriasis. Finally, CARMA3/CARD10 “functions as an indispensable adaptor protein in modulating NF-?B signaling downstream of some GPCRs (G protein-coupled receptors), including angiotensin II receptor and lysophosphatidic acid receptor, as well as receptor tyrosine kinases (RTKs), such as epidermal growth factor (EGF) receptor and insulin-like growth factor (IGF) receptor (12–14). Recent studies indicate that besides NF-?B signaling, CARMA3 also serves as a modulator in antiviral RLR signaling, providing a new understanding of CARMA3." d) However, the other two components of the CBM signalosome, the two proteins BCL10 and MALT1, are ubiquitously expressed in all tissues, and they work in a similar way with all the different CARMA/CARD proteins. e) Moreover, all these three different pathways, starting at completely different membrane receptors in completely different cells, and activated by each of the three mentioned CARMA/CARD proteins, converge, through BCL10 and MALT1, on the same basic pathway destined to transmit the signal to the nucleus: our well known NF-kB system, with all its complexity and flexibility. f) Finally, of course, in the nucleus each different signal that was at the start of the activation, in some way, is translated into a completely different transcription pattern, involving maybe hundreds, or thousands, of different genes. So, B and T lymphocytes respond in a very specific way, and so keratynocytes, and so heart cells or liver cells. That’s what I call semiotic polymorphism. At its best! :) gpuccio
After this OP, now NF-kB seems to pop up everywhere. :) The NF-kB Signaling Pathway https://www.creative-diagnostics.com/The-NF-kB-Signaling-Pathway.htm Fight Inflammation by Inhibiting NF-kB? https://www.lifeextension.com/magazine/2019/7/fighting-inflammation-by-inhibiting-nf-kb/page-01 PeterA
Guess what interesting topic will GP write about in his next OP? :) jawa
This discussion remains among the top 5 most popular the last 30 days according to number of visits. PS. Please, note that the comment @337 is related to the one @328. jawa
Has anybody heard of Sven Mil lately? When GP politely deflated Sven Mil’s hostile pseudo arguments, the guy simply disappeared from the scene. Did he run for the doors in complete panic? Or did he go to consult professors Art Hunt and Larry Moran, who had embarrassing experiences in this website? :) jawa
“increased activation of NF-kB and MAPK via NFKB1 deletion enhance macrophage and myofibroblast content at the repair, driving increased collagen deposition and biomechanical properties.” Sci Rep. 2019; 9: 10926. doi: 10.1038/s41598-019-47461-5 PMCID: PMC6662789 PMID: 31358843 OLV
CARMA3: Scaffold Protein Involved in NF-kB Signaling
During the past decade, much progress has been made in the understanding of CARMA3 functions in the NF-kB signaling pathways. [...] investigating the function of CARMA3 has revealed that it forms a complex with BCL10 and MALT1 to mediate different receptors-dependent signaling, including GPCR and EGFR, leading to activation of the transcription factor NF-kB. CARMA3-dependent IKK activation is involved in GPCR-, RTK-, ATM-, and RLR-induced NF-kB activation. However, further investigation is still needed to reveal the mechanism by which CARMA3 are linked to the different receptors. it is important to determine whether CARMA3 is associated with other proteins following different stimuli. Identifications of such proteins will provide the molecular basis of how CARMA3-containing complexes are involved in mediating NF-kB activation induced by different stimuli. more studies are required to define the regulation of CARMA3-mediated signaling transduction CARMA3 plays a positive role in RIG-I-induced NF-?B activation
And this is just one of the 15 articles in the given collection. Very interesting indeed. OLV
GP @332: That collection of articles is a biology research treasure trove. OLV
GP @332:
Our good friend, the CBM signalosome, discussed in some detail in the OP and in the thread, has recently been the object of a very interesting “Research Topic” in Frontiers in Immunology.
This is very interesting indeed. Thanks! OLV
To all: Our good friend, the CBM signalosome, discussed in some detail in the OP and in the thread, has recently been the object of a very interesting “Research Topic” in Frontiers in Immunology. Here is the link to the 15 articles: Research Topic: CARMA Proteins: Playing a Hand of Four CARDs https://www.frontiersin.org/research-topics/6853/carma-proteins-playing-a-hand-of-four-cards#articles And here is the Editorial: Editorial: CARMA Proteins: Playing a Hand of Four CARDs https://www.frontiersin.org/articles/10.3389/fimmu.2019.01217/full A few thoughts:
Over the past 20 years, enhanced analyses of tumor-specific genomic alterations coupled with elegant biochemical approaches have helped to map essential signaling pathways in healthy and malignant cells. For example, the B cell lymphoma/leukemia 10 (BCL10) gene was identified in 1999 from a recurrent chromosomal translocation noted in non-Hodgkin lymphomas that arise in mucosa-associated lymphoid tissue (MALT). These studies demonstrated that BCL10 could oligomerize via its caspase recruitment domain (CARD) and induce robust activation of nuclear factor of kappaB (NF-?B), a critical family of transcription factors first implicated in lymphocyte differentiation. In <3 years, multiple groups discovered that BCL10 must partner with one of four CARD-containing scaffold proteins (CARD9, CARD10, CARD11, and CARD14) and the MALT1 paracaspase (also discovered from a MALT lymphoma-derived chromosomal translocation) to drive NF-?B signaling in response to various receptor-mediated inputs. … Although we now appreciate that tissue-specific expression of CARMA/CARD proteins dictates function and underlying pathology of associated diseases, this collection also underscores the ubiquity and significance of the CBM signalosome as a central governor of receptor-mediated signaling to NF-?B and additional outputs important for cell proliferation, survival, differentiation, and function.
Very interesting. gpuccio
OLV at #325: Interesting paper. Indeed, communication between cells, often very distant and different cells in multicellular organisms, requires at least three different levels of coding: 1) The message: a specific requirement to be sent to the target cells from the cells that originate the signals. This is a symbolic coding, because of course the "messenger" molecules, be they hormones, cytokines, or anything else, have really nothing to do with the response they are destined to evoke in the end. They are symbolic messengers, and nothing else. Moreover, the coding implies not only the type of messenger molecules, but also their concentration, distribution in the organism, and possibly modifications or interactions with other structures, as we have seen for example in the OP dedicated to the extracellular fluid. 2) First decoding step and transmission to the nucleus. This is usually a very complex step, where multiple levels of decoding interact in an extremely articulated way, often implying a lot of control of random noise and chaotic components, as seen in this thread. Moreover, many layers are superimposed here, starting from membrane receptors, their modulations, their immediate translation systems, and then the more complex pathways that translate the partially decoded message to the nuclear environment. Please note that at this level the message has already been partially decoded, but is still transmitted in rather symbolic form, usually as chains of molecular interactions that can assume multiple configurations and forms. The NF-kB system described in the OP is a very good example, with its many semiotic polymorphisms. 3) Finally, the ultimate decoding takes place in the nucleus, where the complex codes and subcodes of TFs, with their manifold interactions and tweakings, must in some way transform the initial message into an effective modulation, in space and time and intensity, of the transcription of multiple specific genes (often hundreds, or even thousands of them). The final result in cell behaviour modifications will be the controlled, and usually very efficient, consequence of the original message started by the activity of many distant cells in the organism. All that is certainly beautiful and fascinating. But also amazing. Very much indeed. gpuccio
Jawa at #329: Thank you for the statistics! It's good to see that the thread is still going rather well, even if I have been rather busy with other things. :) gpuccio
This is interesting, isn’t it? Popular Posts (Last 30 Days) Controlling the waves of dynamic, far from… (1,304) (Visited 2,758 times, 250 visits today) Jul 10, 329 replies Are extinctions evidence of a divine purpose in life? (1,272) (Visited 1,272 times, 37 visits today). Aug 4, 11 replies Chemist James Tour calls time out on implausible… (1,140) (Visited 1,238 times, 9 visits today) Aug 19, 16 replies Apes and humans: How did science get so detached… (959) “Descartes’ mind-body problem” makes nonsense of materialism (947) jawa
Hey! Has anybody seen Sven Mill lately? Will he ever come back to respond GP’s comments @270 and @272? Did he run out of objections? Maybe Dr Art Hunt or Dr Larry Moran could assist him with writing a coherent counterargument ? :) jawa
This is interesting: Popular Posts (Last 30 Days) 1. Now Steve Pinker is getting #MeToo’d, at Inside… (2,534) Visited 2,679 times, 32 visits today Posted on July 17 1 comment 2. Controlling the waves of dynamic, far from… (1,292) Visited 2,543 times, 79 visits today Posted on July 10 326 comments jawa
Deletion of NFKB1 enhances canonical NF-kB signaling and increases macrophage and myofibroblast content during tendon healing https://www.nature.com/articles/s41598-019-47461-5
better understanding the downstream processes mediated by NF-kB signaling could reveal candidate pathways that could be viably targeted by therapeutics.
OLV
Communication codes in developmental signaling pathways Pulin Li, Michael B. Elowitz Development 2019 146: dev170977 doi: 10.1242/dev.170977
A handful of core intercellular signaling pathways play pivotal roles in a broad variety of developmental processes. It has remained puzzling how so few pathways can provide the precision and specificity of cell-cell communication required for multicellular development. Solving this requires us to quantitatively understand how developmentally relevant signaling information is actively sensed, transformed and spatially distributed by signaling pathways. Recently, single cell analysis and cell-based reconstitution, among other approaches, have begun to reveal the ‘communication codes’ through which information is represented in the identities, concentrations, combinations and dynamics of extracellular ligands. They have also revealed how signaling pathways decipher these features and control the spatial distribution of signaling in multicellular contexts. Here, we review recent work reporting the discovery and analysis of communication codes and discuss their implications for diverse developmental processes.
“communication codes” ? OLV
Structures of autoinhibited and polymerized forms of CARD9 reveal mechanisms of CARD9 and CARD11 activation Nat Commun. 2019; 10: 3070. doi: 10.1038/s41467-019-10953-z https://www.nature.com/articles/s41467-019-10953-z
While significant questions remain, in particular in understanding the regulatory role of the larger coiled-coil domain, our study provides a number of critical steps towards a full structural description of regulation within the protein family.
OLV
That’s a very interesting paper that GP posted @319. Here’s the link again for those who don’t want to scroll up to GP’s original comment: http://m.jbc.org/content/early/2019/08/07/jbc.RA119.009551.long?view=long&pmid=31391255 OLV
GP @319:
Coordinated regulation. Actively coordinate scaffold opening and the induction of enzymatic activity.
All that resulted from RV+NS+T? :) OLV
GP @318: Excellent explanation, as usual. Thanks. OLV
GP @319:
please note the use of the words “has evolved to” in the end, just to mean “is able to”.
:) OLV
To all: At #118 I have mentioned the complexity of the CBM signalosome, whose role in T-cell receptor (TCR)-mediated T-cell activation is fundamental. I have also mentioned how CARD11 is a wonderful example of a very big and complex protein exhibiting a huge information jump in vertebrates (see also the additional figure at the end of the OP). Well, here is another very recent paper about CARD11: Coordinated regulation of scaffold opening and enzymatic activity during CARD11 signaling. https://www.ncbi.nlm.nih.gov/pubmed/31391255
Abstract: The activation of key signaling pathways downstream of antigen receptor engagement is critically required for normal lymphocyte activation during the adaptive immune response. CARD11 is a multidomain signaling scaffold protein required for antigen receptor signaling to NF-?B, c-Jun N-terminal Kinase (JNK), and mTOR. Germline mutations in the CARD11 gene result in at least four types of primary immunodeficiency, and somatic CARD11 gain-of-function mutations drive constitutive NF-?B activity in Diffuse Large B Cell Lymphoma and other lymphoid cancers. In response to antigen receptor triggering, CARD11 transitions from a closed, inactive state to an open, active scaffold that recruits multiple signaling partners into a complex to relay downstream signaling. However, how this signal-induced CARD11 conversion occurs remains poorly understood. Here we investigate the role of IE1, a short regulatory element in the CARD11 Inhibitory Domain, in the CARD11 signaling cycle. We find that IE1 controls the signal-dependent Opening Step that makes CARD11 accessible to the binding of cofactors, including Bcl10, MALT1, and the HOIP catalytic subunit of the Linear Ubiquitin Chain Assembly Complex. Surprisingly, we find that IE1 is also required at an independent step for the maximal activation of HOIP and MALT1 enzymatic activity after cofactor recruitment to CARD11. This role of IE1 reveals that there is an Enzymatic Activation Step in the CARD11 signaling cycle that is distinct from the Cofactor Association Step. Our results indicate that CARD11 has evolved to actively coordinate scaffold opening and the induction of enzymatic activity among recruited cofactors during antigen receptor signaling.
Interesting. Coordinated regulation. Actively coordinate scaffold opening and the induction of enzymatic activity. See also the very interesting Fig. 6. Oh, and please note the use of the words "has evolved to" in the end, just to mean "is able to". :) gpuccio
OLV at #316-317: I had a look at the paper. I am nor sure what the problem is. It is not surprising, IMO, that some master TFs have an important role in the spacial definition of limbs and appendages, in different types of animals. What's the problem there? Establishing three dimensional axes seems to be a very basic engineering tool, I am not suprised at all that it is basicallty implemented by the same TF families in different beings. Of course, that does not explain at all the differences between limbs: those must be explained by other types of information, other genes or other epigenetic networks. The establishment of axes and simmetries is one thing. The morphological definition of limbs is another thing. It is rather clear that biological engineering takes place at different, well ordered levels. Some functions remain similar and are conserved, others neeed completely new implementations to generate diversity of function. I am not sure of the supposed role of "cooption" in all this. gpuccio
GP, Here’s a major hint to answer the question @316:
According to the ‘co-option hypothesis’, a developmental program evolved in the common ancestor of the bilaterians – a group that includes most animals except for primitive forms like sponges – to shape an appendage that later disappeared during evolution
I’m beginning to like that cool co-option idea. :) Have you seen any coherent explanation of how it all works ? Could this be a potential topic for a future OP? OLV
GP, Please, help me with this: A lesson in homology ? https://doi.org/10.7554/eLife.48335 https://elifesciences.org/articles/48335#x339477c1
The same genes and signaling pathways control the formation of limbs in vertebrates, arthropods and cuttlefish.
Does that mean that something else is involved in this complex process besides the genes and the signaling pathways ? Thanks. OLV
In the papers cited @314 note the following topics associated with the current OP: NF-kB down-regulation NF-kB up-regulation NF-kB activation NF-kB inactivation Inhibition of NF-kB Signaling Pathway OLV
Back to the OP topic: Inhibition of LPS-Induced Oxidative Damages and Potential Anti-Inflammatory Effects of Phyllanthus emblica Extract via Down-Regulating NF-kB, COX-2, and iNOS in RAW 264.7 Cells Loss of BAP1 Is Associated with Upregulation of the NFkB Pathway and Increased HLA Class I Expression in Uveal Melanoma Decursinol angelate ameliorates 12-O-tetradecanoyl phorbol-13-acetate (TPA) -induced NF-kB activation on mice ears by inhibiting exaggerated inflammatory cell infiltration, oxidative stress and pro-inflammatory cytokine production A novel curcumin analog inhibits canonical and non-canonical functions of telomerase through STAT3 and NF-kB inactivation in colorectal cancer cells    Azithromycin Polarizes Macrophages to an M2 Phenotype via Inhibition of the STAT1 and NF-kB Signaling Pathways OLV
But I digress- This at least seems to produce another catch-22. Lipid bubbles can't survive without amino acids and the molecules of life cannot survive without some environmental barrier. And lipid bubbles present such a barrier. So a cytoplasm filled with amino acids- to some extent- would create a stable barrier along with the raw materials needed to produce proteins. But not just any protein will do. And the method of producing them will be too slow to be effective- if it's even capable. barriers pores pumps and gates
A simple, completely exclusive barrier between the intracellular and extracellular environments is not by itself much use in homeostasis. The barrier with the outside world must allow in those things necessary or growth and development whilst excluding everything else. In short it must be selectively permeable. [b]Pure lipid would be impermeable to most water-soluble substances so a plasma membrane contains channels and pores built from protein molecules to enable selected substances to enter (or leave) the cell.[/b] A pore or channel is a protein with a hydrophobic (water hating, lipid loving) exterior which can sit happily in the membrane and a hydrophilic (water loving) centre through which water and small water soluble molecules can pass. If such a molecule is inserted into a plasma membrane so that one end sticks out of the cell and the other end sticks out into the cell interior water soluble compounds can cross the membrane without ever coming into contact with the lipid. A plasma membrane almost always includes water, K+, Cl- and Ca2+ channels; many cell types also have Na+ channels. Ion channels are also (usually) gated i.e. they may be open or closed.
From a design standpoint this all makes sense- this foundational requirement- the ready made selectively permeable membrane. It has a cytoplasm teeming with amino acids for structural support of that membrane. And they are also raw materials for making proteins. The proteins used in creating the pores and channels. ET
Wow, PavelU- I had just finished reading that article about a half hour ago. You do realize that not just any membrane will do, right? You have to get nutrients in and waste out. You also have to be able to communicate with the different compartments. But most of all, without some internal replication mechanism, nothing will ever come from lipid bubbles with amino acids. But yes, it is all interesting stuff an shows how desperate some people are. ET
Last part of PavelU's cited article:
She’s now looking into what happens after the protocells assemble. Sure, there’s a compartment that contains the building blocks for making proteins and RNA. “But how do those individual building blocks bond to form the larger molecules?” she says. “It’s a very hard question.”
To wit:
"We have no idea how to put this structure (a simple cell) together.,, So, not only do we not know how to make the basic components, we do not know how to build the structure even if we were given the basic components. So the gedanken (thought) experiment is this. Even if I gave you all the components. Even if I gave you all the amino acids. All the protein structures from those amino acids that you wanted. All the lipids in the purity that you wanted. The DNA. The RNA. Even in the sequence you wanted. I've even given you the code. And all the nucleic acids. So now I say, "Can you now assemble a cell, not in a prebiotic cesspool but in your nice laboratory?". And the answer is a resounding NO! And if anybody claims otherwise they do not know this area (of research).” - James Tour: The Origin of Life Has Not Been Explained - 4:20 minute mark (The more we know, the worse the problem gets for materialists) https://youtu.be/r4sP1E1Jd_Y?t=255 Origin of Life: An Inside Story - Professor James Tour – May 1, 2016 Excerpt: “All right, now let’s assemble the Dream Team. We’ve got good professors here, so let’s assemble the Dream Team. Let’s further assume that the world’s top 100 synthetic chemists, top 100 biochemists and top 100 evolutionary biologists combined forces into a limitlessly funded Dream Team. The Dream Team has all the carbohydrates, lipids, amino acids and nucleic acids stored in freezers in their laboratories… All of them are in 100% enantiomer purity. [Let’s] even give the team all the reagents they wish, the most advanced laboratories, and the analytical facilities, and complete scientific literature, and synthetic and natural non-living coupling agents. Mobilize the Dream Team to assemble the building blocks into a living system – nothing complex, just a single cell. The members scratch their heads and walk away, frustrated… So let’s help the Dream Team out by providing the polymerized forms: polypeptides, all the enzymes they desire, the polysaccharides, DNA and RNA in any sequence they desire, cleanly assembled. The level of sophistication in even the simplest of possible living cells is so chemically complex that we are even more clueless now than with anything discussed regarding prebiotic chemistry or macroevolution. The Dream Team will not know where to start. Moving all this off Earth does not solve the problem, because our physical laws are universal. You see the problem for the chemists? Welcome to my world. This is what I’m confronted with, every day.“ James Tour – leading Chemist https://uncommondesc.wpengine.com/intelligent-design/origin-of-life-professor-james-tour-points-the-way-forward-for-intelligent-design/ August 2019- Evidence from Quantum Biology for God being behind life https://uncommondesc.wpengine.com/intelligent-design/if-you-can-reproduce-how-life-got-started-10-million-is-yours/#comment-681958
bornagain77
ET, Here’s something for you and your friends to learn from before you write your next comment: A New Clue to How Life Originated A long-standing mystery about early cells has a solution—and it’s a rather magical one. https://www.theatlantic.com/science/archive/2019/08/interlocking-puzzle-allowed-life-emerge/595945/ PavelU
Jawa- "Their" argument is and always has been "that X exists and we know (wink, wink) it wasn't via intelligent design. It's just a matter of time before we figure it all out." It does make for a nice narrative, though. I was impressed when I went to the Smithsonian and saw the short movie on how life's diversity arose. But it all seemed so Lamarckian, though, as it still does. They always talk about physical transformations without any discussion of the mechanisms capable of carrying them out. There is never any genetic link. And that is very telling ET
ET, I like your comment. Good point. But I doubt PavelU will understand it, because the poor guy seems oblivious. He should wake up and smell the flowers in the garden. :) jawa
PavelU- There isn't any literature supporting the claim that NS, which includes RV, can do anything beyond merely changing allele frequency over time, within a population. Speculation based on the assumption abounds in textbooks. But no one knows how to test the claim that NS, drift or any other blind and mindless process can actually do as advertised. That is why probability arguments exist. There isn't any actual data, observations or experiments to support it. If the textbooks claim otherwise then they are promoting lies, falsehoods and misrepresentations. ET
ET, That's what is written in the textbooks. Are you implying that the textbooks are incorrect? Really? There's abundant literature supporting RV+NS. PavelU
PavelU:
Tell them that it’s widely accepted that it all resulted from long evolutionary processes, mainly RV+NS.
Why tell them a lie? ET
PeterA: “If my children and grandchildren ask me why and how that phenomenal jump occurred, what should I tell them?” Tell them that it’s widely accepted that it all resulted from long evolutionary processes, mainly RV+NS. PavelU
GP @2: “It should be rather obvious that, if the true purpose of biological beings were to achieve the highest survival and fitness, as neo-darwinists believe, life should have easily stopped at prokaryotes.” That’s an interesting observation indeed. Far from stopping at the comfortable fitness level of prokaryotes, evolution produced a mind boggling information jump to eukaryotes! How come? How can one explain that? If my children and grandchildren ask me why and how that phenomenal jump occurred, what should I tell them? PeterA
Deletion of NFKB1 enhances canonical NF-?B signaling and increases macrophage and myofibroblast content during tendon healing OLV
NF-kB graphical illustrations (links): NF-kB Signaling NF-kB mechanism of action (OP)        B NF-kB Activation in Lymphoid Malignancies NF-kB Pathway NF-kB Signalling NF-kB more images       OLV
PeterA, Perhaps Sven Mill will answer all those questions next time he comes back to respond GP's comments @270 and @272. :) Maybe Dr Art Hunt or Dr Larry Moran could assist Sven Mil to write a coherent objection to your comment @299. :) Just wait... be patient. :) jawa
I like the poetic way this OP ends:
Is this the working of a machine? Of course it is! Because the results are purposeful, reasonably robust and reliable, and govern a lot of complex processes with remarkable elegance and efficiency. But certainly, it is not a traditional machine. It is a lot more complex. It is a lot more beautiful and flexible. It works with biological realities and not with transistors and switches. And biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario. It is more similar to a set of extremely clever surfers who succeed in performing elegant and functional figures and motions in spite of the huge contrasting waves. It is, from all points of view, amazing. Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design. And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer it must be for a system like NF-kB? Or, for that, like any cellular complex system? Do you still have any doubts?
I would add another question: any objection? PeterA
NF-kB is all over the map. :) It’s funny that before this OP I didn’t notice this NF-kB, but now it seems to pop up in many papers. OLV
Popular Posts (Last 30 Days) Now Steve Pinker is getting #MeToo’d, at Inside… (2,614) Controlling the waves of dynamic, far from… (2,330) Atheism’s problem of warrant (–>… (1,850) Chemist James Tour calls time out on implausible… (1,209) Are extinctions evidence of a divine purpose in life? (1,196) jawa
Another NF-kB paper OLV
The NF-kB Signaling is quite simple. :) OLV
Bill Cole, Thanks. OLV
Graphic OLV
Olv It is a cleaving enzyme so it is transcribed at some interval. If it does not work properly it can be potentially responsible for certain diseases. As far as I can tell regulation is from transcription rates. Gpuccio, do you agree? bill cole
NF-kB Hepatoprotective Effects of Morchella esculenta against Alcohol-Induced Acute Liver Injury in the C57BL/6 Mouse Related to Nrf-2 and NF-kB Signaling A novel curcumin analog inhibits canonical and non-canonical functions of telomerase through STAT3 and NF-kB inactivation in colorectal cancer cells Validation of the prognostic value of NF-kB p65 in prostate cancer: A retrospective study using a large multi-institutional cohort of the Canadian Prostate Cancer Biomarker Network OLV
Bill Cole @288: Any idea how is that cleaving mechanism established and activated? OLV
GP @287: My pleasure. OLV
Also, as far as I understand sumo tags must be cleaved (removed) prior to ubiquitin guided protein degradation. bill cole
OLV: Yes, SUMO is a very interesting "side actor" in the already extremely complex ubiquitin system! :) By the way, thank you for the very detailed summaries, my friend. :) gpuccio
GP, Off topic: the plot thickens... Another ubiquitin-related stuff? :) How Does SUMO Participate in Spindle Organization? https://www.mdpi.com/2073-4409/8/8/801
The ubiquitin-like protein SUMO is a regulator involved in most cellular mechanisms. Recent studies have discovered new modes of function for this protein. Of particular interest is the ability of SUMO to organize proteins in larger assemblies, as well as the role of SUMO-dependent ubiquitylation in their disassembly. These mechanisms have been largely described in the context of DNA repair, transcriptional regulation, or signaling, while much less is known on how SUMO facilitates organization of microtubule-dependent processes during mitosis. Remarkably however, SUMO has been known for a long time to modify kinetochore proteins, while more recently, extensive proteomic screens have identified a large number of microtubule- and spindle-associated proteins that are SUMOylated. The aim of this review is to focus on the possible role of SUMOylation in organization of the spindle and kinetochore complexes. We summarize mitotic and microtubule/spindle-associated proteins that have been identified as SUMO conjugates and present examples regarding their regulation by SUMO. Moreover, we discuss the possible contribution of SUMOylation in organization of larger protein assemblies on the spindle, as well as the role of SUMO-targeted ubiquitylation in control of kinetochore assembly and function. Finally, we propose future directions regarding the study of SUMOylation in regulation of spindle organization and examine the potential of SUMO and SUMO-mediated degradation as target for antimitotic-based therapies.
OLV
The Human Transcription Factors Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme Selectivity of the NF-?B Response 30 years of NF-?B: a blossoming of relevance to human pathobiology Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle NF-kB oscillations translate into functionally related patterns of gene expression NF-?B Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration 30 years of NF-?B: a blossoming of relevance to human pathobiology +++++++ @3: Two of the papers I quote in the OP: Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle and: NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration are really part of a research topic: Understanding Immunobiology Through The Specificity of NF-kB including 8 very interesting and very recent papers about NF-kB, at Frontiers in Immunology. Here are the titles: Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF-kB An NF-kB Activity Calculator to Delineate Signaling Crosstalk: Type I and II Interferons Enhance NF?B via Distinct Mechanisms Cellular Specificity of NF-kB Function in the Nervous System Immune Differentiation Regulator p100 Tunes NF-kB Responses to TNF Techniques for Studying Decoding of Single Cell Dynamics NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration (quoted in the OP) Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle (quoted in the OP) Signal Distortion: How Intracellular Pathogens Alter Host Cell Fate by Modulating NF-kB Dynamics +++++++ @13: Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF?B @15: Transcription factor oscillations in neural stem cells: Implications for accurate control of gene expression @17: Introduction to the Thematic Minireview Series: Chromatin and transcription @20: Cellular Specificity of NF-?B Function in the Nervous System @21: Transcriptional Control of Synaptic Plasticity by Transcription Factor NF-kB @29: Single-molecule dynamics and genome-wide transcriptomics reveal that NF-kB (p65)-DNA binding times can be decoupled from transcriptional activation @52: Lnc-ing inflammation to disease @67 Long non-coding RNA: a versatile regulator of the nuclear factor-kB signalling circuit @96 The Impact of Transposable Elements in Genome Evolution and Genetic Instability and Their Implications in Various Diseases @131 Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears @133 Genetic diversity of CHC22 clathrin impacts its function in glucose metabolism Environmental contaminants modulate the transcriptional activity of polar bear (Ursus maritimus) and human peroxisome proliferator-activated receptor alpha (PPARA) Evolutionary history and palaeoecology of brown bear in North-East Siberia re-examined using ancient DNA and stable isotopes from skeletal remains @139 Polar bear evolution is marked by rapid changes in gene copy number in response to dietary shift @192 Crosstalk between NF-kB and Nucleoli in the Regulation of Cellular Homeostasis The Crosstalk of Endoplasmic Reticulum (ER) Stress Pathways with NF-kB: Complex Mechanisms Relevant for Cancer, Inflammation and Infection. @193 Transcription factor NF-kB in a basal metazoan, the sponge, has conserved and unique sequences, activities, and regulation @194 On chaotic dynamics in transcription factors and the associated effects in differential gene regulation @209 Conservation and divergence of p53 oscillation dynamics across species @220 The functional analysis of MicroRNAs involved in NF-kB signaling. @222 gga-miR-146c Activates TLR6/MyD88/NF-?B Pathway through Targeting MMP16 to Prevent Mycoplasma Gallisepticum (HS Strain) Infection in Chickens Temporal characteristics of NF-?B inhibition in blocking bile-induced oncogenic molecular events in hypopharyngeal cells @223 The Regulation of NF-?B Subunits by Phosphorylation The Ubiquitination of NF-?B Subunits in the Control of Transcription A Role for NF-?B in Organ Specific Cancer and Cancer Stem Cells @226 Veins and Arteries Build Hierarchical Branching Patterns Differently: Bottom-Up versus Top-Down @236 Genome-wide enhancer annotations differ significantly in genomic distribution, evolution, and function @237 Computational Biology Solutions to Identify Enhancers-target Gene Pairs @245 Displacement of the transcription factor B reader domain during transcription initiation @246 Design Principles Of Mammalian Transcriptional Regulation @247 Dynamic interplay between enhancer–promoter topology and gene activity @248 A genome disconnect Highly rearranged chromosomes reveal uncoupling between genome topology and gene expression Does rearranging chromosomes affect their function? @256 The Met1-Linked Ubiquitin Machinery: Emerging Themes of (De)regulations @257 Nuclear Transcription Factors in the Mitochondria: A New Paradigm in Fine-Tuning Mitochondrial Metabolism. @258 Regulation of NF-kB signalling cascade by immunophilins Biological Actions of the Hsp90-binding Immunophilins FKBP51 and FKBP52 @264 Transcription-driven chromatin repression of Intragenic transcription start sites Genome-wide enhancer annotations differ significantly in genomic distribution, evolution, and function Computational Biology Solutions to Identify Enhancers-target Gene Pairs Detection of condition-specific marker genes from RNA-seq data with MGFR Enhancer RNAs: Insights Into Their Biological Role Epigenetic control of early dendritic cell lineage specification by the transcription factor IRF8 in mice Competitive endogenous RNA is an intrinsic component of EMT regulatory circuits and modulates EMT Delta Like-1 Gene Mutation: A Novel Cause of Congenital Vertebral Malformation @266 Widespread roles of enhancer-like transposable elements in cell identity and long-range genomic interactions @271 Antisense Transcription in Loci Associated to Hereditary Neurodegenerative Diseases @282 Recent Progress on the Localization of the Spindle Assembly Checkpoint Machinery to Kinetochores OLV
OLV @279: You should be patient. Sven Mil is probably a very busy scientist, hence he can’t comment here on demand. You have to wait. It’s possible he’s related to professors Art Hunt or Larry Moran. :) jawa
In response to the comment @279 we hear only the sound of silence. :) OLV
Off topic: When I read a paper that mentions proteins automatically GP’s quantitative method comes to mind. :) In the following text several proteins are mentioned. The given article claims that most of them are very conserved through numerous biological systems. I wonder what are the genetic regulatory mechanisms associated with the mentioned proteins. Here’s the text: In essence, SAC is a cellular signaling pathway. Multiple mitotic kinases and their substrates are involved in this signaling. Therefore, the correct position of specific kinases to its substrates is of great importance for the functional integrity of the SAC. We envision the kinetochore localization of SAC factors may serve several functions. First, the kinetochore localization of Mps1 kinase (and Bub1, Plk1 kinase and CDK1-Cyclin B) positions the kinase close to their substrates (i.e., Knl1). Second, the kinetochore localization of Bub1 serves as a scaffold to recruit its downstream factors such as BubR1, Mad1/Mad2 and RZZ. Last, the kinetochore localization of Mps1 and Bub1 may facilitate their own activation due to the higher local concentration at kinetochore. The hierarchical recruitment pathway of SAC is becoming elucidated gradually. In brief, Aurora B activity boosts the kinetochore recruitment and activation of Mps1. Then, Mps1 phosphorylates Knl1, and in turn, phosphorylated Knl1 recruits Bub1/Bub3. Bub1 works as a scaffold to recruit BubR1/Bub3, Mad1/Mad2, RZZ and Cdc20. Despite important progress, many outstanding questions remain. For example, an exact molecular delineation of how Aurora B activity and ARHGEF17 promote Mps1 kinetochore recruitment remains elusive. Future studies to address these questions will definitely deepen our understanding on SAC signaling. Advanced protein structural analyses, protein-protein interaction interface delineation and protein localization dynamics analyses using super-resolution imaging tool combination with optogenetic operation will pave our way in future. Recent Progress on the Localization of the Spindle Assembly Checkpoint Machinery to Kinetochores Cells. 2019 Mar; 8(3): 278. doi: 10.3390/cells8030278 OLV
Some papers cited in the comments: @3: Two of the papers I quote in the OP: Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle and: NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration are really part of a research topic: Understanding Immunobiology Through The Specificity of NF-kB including 8 very interesting and very recent papers about NF-kB, at Frontiers in Immunology. Here are the titles: Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF-kB An NF-kB Activity Calculator to Delineate Signaling Crosstalk: Type I and II Interferons Enhance NF?B via Distinct Mechanisms Cellular Specificity of NF-kB Function in the Nervous System Immune Differentiation Regulator p100 Tunes NF-kB Responses to TNF Techniques for Studying Decoding of Single Cell Dynamics NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration (quoted in the OP) Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle (quoted in the OP) Signal Distortion: How Intracellular Pathogens Alter Host Cell Fate by Modulating NF-kB Dynamics +++++++ @13: Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF?B @15: Transcription factor oscillations in neural stem cells: Implications for accurate control of gene expression @17: Introduction to the Thematic Minireview Series: Chromatin and transcription @20: Cellular Specificity of NF-?B Function in the Nervous System @21: Transcriptional Control of Synaptic Plasticity by Transcription Factor NF-kB @29: Single-molecule dynamics and genome-wide transcriptomics reveal that NF-kB (p65)-DNA binding times can be decoupled from transcriptional activation @52: Lnc-ing inflammation to disease @67 Long non-coding RNA: a versatile regulator of the nuclear factor-kB signalling circuit @96 The Impact of Transposable Elements in Genome Evolution and Genetic Instability and Their Implications in Various Diseases @131 Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears @133 Genetic diversity of CHC22 clathrin impacts its function in glucose metabolism Environmental contaminants modulate the transcriptional activity of polar bear (Ursus maritimus) and human peroxisome proliferator-activated receptor alpha (PPARA) Evolutionary history and palaeoecology of brown bear in North-East Siberia re-examined using ancient DNA and stable isotopes from skeletal remains @139 Polar bear evolution is marked by rapid changes in gene copy number in response to dietary shift @192 Crosstalk between NF-kB and Nucleoli in the Regulation of Cellular Homeostasis The Crosstalk of Endoplasmic Reticulum (ER) Stress Pathways with NF-kB: Complex Mechanisms Relevant for Cancer, Inflammation and Infection. @193 Transcription factor NF-kB in a basal metazoan, the sponge, has conserved and unique sequences, activities, and regulation @194 On chaotic dynamics in transcription factors and the associated effects in differential gene regulation OLV
Here’re some interesting papers cited in this OP: The Human Transcription Factors Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme Selectivity of the NF-?B Response 30 years of NF-?B: a blossoming of relevance to human pathobiology Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle NF-kB oscillations translate into functionally related patterns of gene expression NF-?B Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration OLV
So far only a confused commenter (Sven Mil) has attempted unsuccessfully to present a counter argument. The last comment by Sven Mil (@267) was clearly responded @270 and @272. Is there another reader that would like to present contrarian arguments? [crickets] OLV
This OP reminds us of this fact: “biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.” “It is, from all points of view, amazing.” “Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.” “And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer it must be for a system like NF-kB? Or, for that, like any cellular complex system?” “Do you still have any doubts?” OLV
Regarding the confused criticism presented by Sven Mil, I feel sorry for the guy. It must feel bizarre to try so hard and get nothing out of it. Wasted effort, unless he finally understands GP’s idea. Let’s hope so. :) OLV
August 4, 2019 at 1:41 am Antisense Transcription in Loci Associated to Hereditary Neurodegenerative Diseases https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6614138/ Natural antisense (AS) transcripts are RNA molecules that are transcribed from the opposite DNA strand to sense (S) transcripts, partially or fully overlapping to form S/AS pairs. It is now well documented that AS transcription is a common feature of genomes from bacteria to mammals Thousands of lncRNA genes have been identified in mammalian genomes, with their number increasing steadily It is now clear that lncRNAs can regulate several biological processes, including those that underlie human diseases and yet their detailed functional characterization remains limited. Altogether, our results highlight the enormous complexity of gene regulation by antisense lncRNAs at any given locus OLV
Thank you, gpuccio. "Sequence homology" is not functional similarity. Sven Mil seems to have the two confused. ET
PeterA @269: I’m biology-challenged too, but regarding your third question, I think most proteins are produced through gene expression: transcription, post-transcriptional modifications, translation, post-translational modifications. OLV
GP @270 & 272: Clear concise explanations. Thanks. Let’s hope Sven Mil gets it this time. Sometimes the penny doesn’t drop right away. :) OLV
Sven Mil (and all): What you really don't understand (or simply pretend not to understand) is that I am in no way trying to "trace the evolution of proteins back", as you seem to believe. I am trying to detect and locate in space and time the appearance of new complex functional information during the evolution of proteins, whatever their distant origin may be. That's why I look for information jumps, in the form of new specific sequences that appear at some evolutionary time and are then conserved for hundreds of million years. I have explained the rationale for that many times. You say that I "will always find those jumps if I go back far enough. That's simply not true. Take, for example, the case of the alpha and beta chains of ATP synthase, that I often use as an example. There is no jump there. Exact probably when those proteins first appeared, but we simply don't know, because those proteins are present in bacteria and in all living eukaryotes. So, no jumps here: only thousands of bits of information conserved for billions of years. You just have to explain how that functional information came into existence. Instead, I have described a lot of functional jumps in the transitions to vertebrates: functional proteins that usually already existed, or sometimes appear de novo, and whose sequence specificity is then conserved for the next 400 million years. So, I detect those jumps for the simple reason that they are there. Those proteins, even if they already existed in previous deuterostomia and chordates, have been highly re-engineered in vertebrates. Do I "always find jumps"? Absolutely not. For a lot of proteins, there is no jump at the transition to vertebrates. They remain almost the same, or you can observe those weak and gradual differences that are compatible with neutral evolution. The simple reason for that is that those proteins have not been re-engineered in vertebrates, they have just kept their old function. The alpha and beta chains of ATP synthase are good examples. But a lot of other proteins do show big jumps at the transition to vertebrates. Those are the jumps that I have discussed in my OPs. IOWs, I detect jumps if they are there, and i do not detect them if they are not there. As it should be. IOWs, Blast as I use it is a very good tool to detect and measure sequence homology between proteins. As serious scientists all over the world know very well. gpuccio
Antisense Transcription in Loci Associated to Hereditary Neurodegenerative Diseases https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6614138/ Natural antisense transcripts are common features of mammalian genes providing additional regulatory layers of gene expression. A comprehensive description of antisense transcription in loci associated to familial neurodegenerative diseases may identify key players in gene regulation and provide tools for manipulating gene expression. This work provides evidence for the existence of additional regulatory mechanisms of the expression of neurodegenerative disease-causing genes by previously not-annotated and/or not-validated antisense long noncoding RNAs. OLV
Sven Mil at #267: You really don't understand, do you? My method (blast alignment by the default algorithm) is simply the method used routinely by almost all researchers to detect sequence homology. So, I am not doing anything particular, as you seem to believe. Those who are interested in detecting weak and distant homologies, of course, can use other methods, like more sensitive algorithms of alignment and structural similarities, if they like. That will give higher sensitivity and lower specificity in detecting if two proteins are homologues. IOWs, more false positives. As I have said many times, I am not trying to detect if two proteins are distant homologues, because that has nothing to do with my reasoning. The researchers you quote say that sigma factor and human TFIIB are homologues? Maybe. Maybe not. Anyway, I have no problems with that statement. If they are, they are. That makes no difference in my reasoning. More in next post. gpuccio
GP, Sven Mil, ET, et al, I’m ignorant of basic biology. I’ve tried to understand what you’re discussing but can’t figure it out. Please, explain this to me in easy to understand terms: 1. Are you comparing two proteins P1 and P2 which work for prokaryotes (P1) and eukaryotes (P2) respectively? 2. Could P1 work for eukaryotes too? 2.1. If YES then why wasn’t it kept in eukaryotes rather than being replaced by P2? 3. Any idea how P1 and P2 could have appeared? I may have more questions, but these are fine to start. Note that I would like to read the answers from all of you and from other readers of this discussion. Thanks. PeterA
sven mil:
The fact is Gpucc, you are unable to detect homology between two proteins that perform the same function and that have been shown to be homologs by other methods.
How are using the word "homology"? Convergence explains two different proteins having the same function, as does a common design.
You can detect high conservation (i.e. when a protein’s functional niche has become well-defined and locked into place evolutionarily speaking) but you are completely unable to detect the actual evolution of a protein
Is there any evidence that blind and mindless processes can produce proteins? I would think that gpuccio is open to the concept of proteins evolving by means of intelligent design. ET
The fact is Gpucc, you are unable to detect homology between two proteins that perform the same function and that have been shown to be homologs by other methods. This means your method is simply not sensitive enough (as you have already admitted) to trace the evolution of proteins back in the way that you are attempting to. You can detect high conservation (i.e. when a protein's functional niche has become well-defined and locked into place evolutionarily speaking) but you are completely unable to detect the actual evolution of a protein And that's why you will always find your "jumps" if you go back far enough. You seem smart enough that I'd bet you knew that from the start... Guess I shouldn't really be surprised Sven Mil
Widespread roles of enhancer-like transposable elements in cell identity and long-range genomic interactions https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6314169/ OLV
GP, The increasing number of research papers on this OP topic definitely point to complex functional information processing systems with multiple control levels that can only result from conscious design. Please, I would like to read your comments on any of the papers linked @264 that you haven’t cited before. Thanks. OLV
GP, There’s so much literature on transcription regulation that it’s difficult to look at them all. Here’s just a small sample: (Note that you have cited some of these papers) Transcription-driven chromatin repression of Intragenic transcription start sites https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6373976/ Genome-wide enhancer annotations differ significantly in genomic distribution, evolution, and function https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6585034/ Computational Biology Solutions to Identify Enhancers-target Gene Pairs https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6611831/ Detection of condition-specific marker genes from RNA-seq data with MGFR https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6542349/ Enhancer RNAs: Insights Into Their Biological Role https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6505235/ Epigenetic control of early dendritic cell lineage specification by the transcription factor IRF8 in mice http://www.bloodjournal.org/content/133/17/1803.long?sso-checked=true Competitive endogenous RNA is an intrinsic component of EMT regulatory circuits and modulates EMT https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6456586/ Delta Like-1 Gene Mutation: A Novel Cause of Congenital Vertebral Malformation https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6593294/ OLV
jawa, That discussion is over. GP took care of it appropriately and wisely continued to provide very interesting information on the current topic. This thread has already exceeded my expectations. PeterA
OLV @261: Maybe Sven Mil can help to answer your questions, after he responds the GP’s comments addressed to him after his last comment @240? :) jawa
GP @256: “It seems perfectly natural that a polymorphic semiotic system like NF-kB is strictly regulated by another universal semiotic system, the ubiquitin system. And the regulation is not simple at all, but deeply and semiotically complex:” Is it also natural that those semiotic systems resulted from natural selection operating on random variations over gazillion years? I’m looking for the literature where this is explained. For example, what did those systems evolve from? What were their ancestors? OLV
GP @257: “orchestrate and fine-tune cellular metabolism at various levels of operation.” “A new paradigm in fine tuning? We are becoming accustomed to that kind of thing,” Agree. OLV
GPuccio, Definitely you’re on a roll! You’ve referenced several very interesting papers in a row. OLV
To all: Another rather exotic level of regulation of the NF-kB system: immunophilins. Regulation of NF-kB signalling cascade by immunophilins http://www.eurekaselect.com/131456/article
Abstract: The fine regulation of signalling cascades is a key event required to maintain the appropriate functional properties of a cell when a given stimulus triggers specific biological responses. In this sense, cumulative experimental evidence during the last years has shown that high molecular weight immunophilins possess a fundamental importance in the regulation of many of these processes. It was first discovered that TPR-domain immunophilins such as FKBP51 and FKBP52 play a cardinal role, usually in an antagonistic fashion, in the regulation of several members of the steroid receptor family via its interaction with the heat-shock protein of 90-kDa, Hsp90. These Hsp90-associated cochaperones form a functional unit with the molecular chaperone influencing ligand binding capacity, receptor trafficking, and hormone-dependent transcriptional activity. Recently, it was demonstrated that the same immunophilins are also able to regulate the NF-kB signalling cascade in an Hsp90 independent manner. In this article we analize these properties and discuss the relevance of this novel regulatory pathway in the context of the pleiotropic actions managed by NF-kB in several cell types and tissues.
Emphasis mine. You may rightfully ask: what are immunophilins? Let's take a simple answer from Wikipedia: "immunophilins are endogenous cytosolic peptidyl-prolyl isomerases (PPI) that catalyze the interconversion between the cis and trans isomers of peptide bonds containing the amino acid proline (Pro). They are chaperon molecules that generally assist in the proper folding of diverse "client" proteins". Here is a recent review about them: Biological Actions of the Hsp90-binding Immunophilins FKBP51 and FKBP52 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6406450/
Abstract: Immunophilins are a family of proteins whose signature domain is the peptidylprolyl-isomerase domain. High molecular weight immunophilins are characterized by the additional presence of tetratricopeptide-repeats (TPR) through which they bind to the 90-kDa heat-shock protein (Hsp90), and via this chaperone, immunophilins contribute to the regulation of the biological functions of several client-proteins. Among these Hsp90-binding immunophilins, there are two highly homologous members named FKBP51 and FKBP52 (FK506-binding protein of 51-kDa and 52-kDa, respectively) that were first characterized as components of the Hsp90-based heterocomplex associated to steroid receptors. Afterwards, they emerged as likely contributors to a variety of other hormone-dependent diseases, stress-related pathologies, psychiatric disorders, cancer, and other syndromes characterized by misfolded proteins. The differential biological actions of these immunophilins have been assigned to the structurally similar, but functionally divergent enzymatic domain. Nonetheless, they also require the complementary input of the TPR domain, most likely due to their dependence with the association to Hsp90 as a functional unit. FKBP51 and FKBP52 regulate a variety of biological processes such as steroid receptor action, transcriptional activity, protein conformation, protein trafficking, cell differentiation, apoptosis, cancer progression, telomerase activity, cytoskeleton architecture, etc. In this article we discuss the biology of these events and some mechanistic aspects.
In particular, section 6: "Immunophilins Regulate NF-kB Activity" gpuccio
To all: Oh, this is really new. Did you know that TFs seem to have a key role not only in nuclear transcription regulation, but also in the regulation of those other strange genome-bearing organelles, the mitochondria? Of course, NF-kB is one of the TF systems involved there, too: Nuclear Transcription Factors in the Mitochondria: A New Paradigm in Fine-Tuning Mitochondrial Metabolism. https://www.ncbi.nlm.nih.gov/pubmed/27417432
Abstract: Noncanonical functions of several nuclear transcription factors in the mitochondria have been gaining exceptional traction over the years. These transcription factors include nuclear hormone receptors like estrogen, glucocorticoid, and thyroid hormone receptors: p53, IRF3, STAT3, STAT5, CREB, NF-kB, and MEF-2D. Mitochondria-localized nuclear transcription factors regulate mitochondrial processes like apoptosis, respiration and mitochondrial transcription albeit being nuclear in origin and having nuclear functions. Hence, the cell permits these multi-stationed transcription factors to orchestrate and fine-tune cellular metabolism at various levels of operation. Despite their ubiquitous distribution in different subcompartments of mitochondria, their targeting mechanism is poorly understood. Here, we review the current status of mitochondria-localized transcription factors and discuss the possible targeting mechanism besides the functional interplay between these factors.
Emphasis mine. A new paradigm in fine tuning? We are becoming accustomed to that kind of thing, I suppose! :) gpuccio
To all: It seems perfectly natural that a polymorphic semiotic system like NF-kB is strictly regulated by another universal semiotic system, the ubiquitin system. And the regulation is not simple at all, but deeply and semiotically complex: The Met1-Linked Ubiquitin Machinery: Emerging Themes of (De)regulation https://www.cell.com/molecular-cell/fulltext/S1097-2765(17)30649-4?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1097276517306494%3Fshowall%3Dtrue
The linear ubiquitin chain assembly complex, LUBAC, is the only known mammalian ubiquitin ligase that makes methionine 1 (Met1)-linked polyubiquitin (also referred to as linear ubiquitin). A decade after LUBAC was discovered as a cellular activity of unknown function, there are now many lines of evidence connecting Met1-linked polyubiquitin to NF-?B signaling, cell death, inflammation, immunity, and cancer. We now know that Met1-linked polyubiquitin has potent signaling functions and that its deregulation is connected to disease. Indeed, mutations and deficiencies in several factors involved in conjugation and deconjugation of Met1-linked polyubiquitin have been implicated in immune-related disorders. Here, we discuss current knowledge and recent insights into the role and regulation of Met1-linked polyubiquitin, with an emphasis on the mechanisms controlling the function of LUBAC. Main Text: Transcription factors in the nuclear factor-kB (NF-kB) family orchestrate inflammatory responses and their activation by immune receptors, such as pattern recognition receptors (PRRs), cytokine receptors, and antigen receptors, is important for innate and adaptive immune function. A unifying feature of the signaling processes triggered by these receptors is that they rely on formation of ubiquitin (Ub) chains to transmit the signal from the activated receptor to the nucleus for stimulation of NF-kB-mediated transcription (Figure 1). The discovery that Ub chains are required for NF-kB activation was reported more than 20 years ago with the finding that Inhibitor of NF-kB alpha (IkBalpha, also termed NFKBIA) is modified with Ub chains (linked via Lys48; Lys48-Ub) in response to receptor activation, leading to its rapid degradation via the proteasome (Chen et al., 1995, Palombella et al., 1994, Traenckner et al., 1994). Subsequently, a series of studies by Zhijian “James” Chen and colleagues showed that Ub chains linked via Lys63 (Lys63-Ub) play a non-degradative role in kinase signaling and NF-kB activation by facilitating the activation of transforming growth factor (TGF)-beta-activated kinase 1 (TAK1) (Deng et al., 2000, Wang et al., 2001). In 2006, Kazuhiro Iwai and colleagues identified a Ub E3 ligase complex that only assembles Ub chains through the N-terminal methionine (Met1-Ub); they called this the linear Ub chain assembly complex (LUBAC) and subsequently discovered that LUBAC stimulates NF-kB activity by conjugating Met1-Ub (Tokunaga et al., 2009, Kirisako et al., 2006). Now, after 10 years of research into LUBAC and Met1-Ub biology, it is clear that Met1-Ub harbors potent signaling properties and, together with Lys63-Ub and Lys48-Ub, plays a central role in NF-kB activation and immune function (Figure 2). Met1-Ub is also implicated in signaling by viral nucleotide-sensing receptors, leading to interferon response factor (IRF)-mediated transcription (Figure 1) and other signaling processes (reviewed in Sasaki and Iwai, 2015). In this review, we primarily discuss its role in NF-kB signaling.
gpuccio
This discussion is the third most visited the last 30 days! Definitely a fascinating topic. Congratulations to GP! jawa
GPuccio @253: Excellent explanation. Thanks! OLV
Sven Mil, OLV and all: Some more facts: 1) The archaeal TFB shows definite and highly significant sequence homology with human TFIIB. These are the results of the usual Blast alignment, always using the defaul algorithm and nothing else: Proteins: Human general TFIIB (Q00403) vs archaeal TFB (A0A2D6Q6B7): Identities: 93; Positives: 154; Bitscore: 172 bits; E value: 2e-56 2) No significant sequence homology can be detected instead, using the same identical methodology, between bacterial sigma factor 70 and the archeal TFB: Proteins: Sigma factor 70 E. coli (P00579) vs archaeal TFB (A0A2D6Q6B7): Identities: 25; Positives: 44; Bitscore: 16.2 bits; E value: 2.0 3) And, of course, as already said, no significant sequence homology can be detected, using the same identical methodology, between bacterial sigma factor 70 and human TFIIB: Proteins: Sigma factor 70 E. coli (P00579) vs Human general TFIIB (Q00403): Identities: 32; Positives: 49; Bitscore: 16.9 bits; E value: 1.4 (plus three more non significant short alignments, with E values of 2.5, 2.7, 3.6) These are simple facts, that can be verified by all. At sequence level, there is a definite homology (anyway only partial) between the archaeal protein and the human protein. That corresponds to the well known concept that transcitpion initiation in archaea is much more similar to transcription initiation in eukaryotes, while in bacteria it is very different. Indeed, no significant sequence homology can be detected, always using that same methodology, between the human and the bacterial protein, or between the bacterial and the archaeal protein. These simple facts are undeniable. Check what I have written in my comment #202, to John_a_designer: "Now, in eukaryotes there are six general TFs. Archea have 3. In bacteria sigma factors have the role of general TFs. Sigma factors, archaeal general TFs and eukaryotic general TFs seem to share some homology. I think that the archaeal system, however, is much more similar to the eukaryotic system, and that includes RNA polymerases. ... While archaea are more similar to eukaryotes in the system of general TF, the regulation of transcription by one or two suppressor or activator seems to be similar to what described for bacteria. Finally, there is another important aspect where archaea are more similar to eukarya. Their chromatin structure is based on histones and nucleosome, like in eukaryotes, but the system is rather different from the corresponding eukaryotic system. Instead, bacteria have their form of DNA compression, but it is not based on histones and nucleosomes." gpuccio
GPuccio, It’s my pleasure to post links to interesting papers that sometimes I find in different journals. In some cases they may shed more light on the discussed topics. OLV
OLV: Thank you for the very interesting links. Yes, we are certainly not even near to a real understanding of how transcription is regulated. More on these fascinating topics as soon as I can use again a true keyboard! :) gpuccio
Biology research seems like a never-ending story: The more we know, more is there for us to learn from. Really fascinating, isn’t it?
Shaking the dogma “These results question the generality of a current dogma in the field, that chromatin domains (TADs) are essential to constrain and restrict enhancer function,” says Eileen Furlong, the EMBL group leader who led the study. “We were able to show that major changes in the 3D organisation of the genome had surprisingly little effect on the expression of most genes, at least in this biological context. The results indicate that while some genes are affected, many appear resistant to rearrangements in their chromatin domain, and that only a small fraction of genes are sensitive to such changes in their topology.” Enhancers are not that promiscuous This raises many interesting questions in the field of chromatin topology, for example: what are these other mechanisms that control the interactions between enhancers and their target genes? Many enhancers do not appear to be promiscuous: they do not link to just any target gene, but rather have preferred partners. The team will continue to dissect this by using genetics, optogenetics (a technique to control protein activity with laser light) and single-cell approaches. This will allow them to study the impact of many more perturbations to chromatin topology in both cis and trans.
OLV
GP, The plot thickens... “changes in chromatin domains were not predictive of changes in gene expression. This means that besides domains, there must be other mechanisms in place that control the specificity of interactions between enhancers and their target genes.” More control mechanisms? Don’t we have enough control mechanisms to keep track of already? :) OLV
A genome disconnect
Chromatin loops and domains are major organizational hallmarks of chromosomes. New work suggests, however, that these topological features of the genome are poor global predictors of gene activity, raising questions about their function.
  Highly rearranged chromosomes reveal uncoupling between genome topology and gene expression  
Chromatin topology is intricately linked to gene expression, yet its functional requirement remains unclear. Here, we comprehensively assessed the interplay between genome topology and gene expression using highly rearranged chromosomes (balancers) spanning ~75% of the Drosophila genome. Using transheterozyte (balancer/wild-type) embryos, we measured allele-specific changes in topology and gene expression in cis, while minimizing trans effects. Through genome sequencing, we resolved eight large nested inversions, smaller inversions, duplications and thousands of deletions. These extensive rearrangements caused many changes to chromatin topology, disrupting long-range loops, topologically associating domains (TADs) and promoter interactions, yet these are not predictive of changes in expression. Gene expression is generally not altered around inversion breakpoints, indicating that mis-appropriate enhancer–promoter activation is a rare event. Similarly, shuffling or fusing TADs, changing intra-TAD connections and disrupting long-range inter-TAD loops does not alter expression for the majority of genes. Our results suggest that properties other than chromatin topology ensure productive enhancer–promoter interactions.
The plot thickens:   Does rearranging chromosomes affect their function?   OLV
Dynamic interplay between enhancer–promoter topology and gene activity  
A long-standing question in gene regulation is how remote enhancers communicate with their target promoters, and specifically how chromatin topology dynamically relates to gene activation. Here, we combine genome editing and multi-color live imaging to simultaneously visualize physical enhancer–promoter interaction and transcription at the single-cell level in Drosophila embryos. By examining transcriptional activation of a reporter by the endogenous even-skippedenhancers, which are located 150?kb away, we identify three distinct topological conformation states and measure their transition kinetics. We show that sustained proximity of the enhancer to its target is required for activation. Transcription in turn affects the three-dimensional topology as it enhances the temporal stability of the proximal conformation and is associated with further spatial compaction. Furthermore, the facilitated long-range activation results in transcriptional competition at the locus, causing corresponding developmental defects. Our approach offers quantitative insight into the spatial and temporal determinants of long-range gene regulation and their implications for cellular fates.
  OLV
Design Principles Of Mammalian Transcriptional Regulation
Transcriptional regulation occurs via changes to different biochemical steps of transcription, but it remains unclear which steps are subject to change upon biological perturbation. Single cell studies have revealed that transcription occurs in discontinuous bursts, suggesting that features of such bursts like burst fraction (what fraction of time a gene spends transcribing RNA) and burst intensity could be points of transcriptional regulation. Both how such features might be regulated and the prevalence of such modes of regulation are unclear. I first used a synthetic transcription factor to increase enhancer-promoter contact at the ?-globin locus. Increasing promoter- enhancer contact specifically modulated the burst fraction of ?-globin in both immortalized mouse and primary human erythroid cells. This finding raised the question of how generally important the phenomenon of burst fraction regulation might be, compared to other modes of regulation. For example, biochemical studies have suggested that stimuli predominantly affect the rate of RNA polymerase II (Pol II) binding and the rate of Pol II release from promoter-proximal pausing, but the prevalence of these modes of regulation compared to changes in bursting had not been examined. I combined Pol II ChIP-seq and single cell transcriptional measurements to reveal that an independently regulated burst initiation step is required before polymerase binding can occur, and that the change in burst fraction produced by increased enhancer-promoter contact was caused by an increased burst initiation rate. Using a number of global and targeted transcriptional regulatory perturbations, I showed that biological perturbations regulated both burst initiation and polymerase pause release rates, but seemed not to regulate polymerase binding rate. Our results suggest that transcriptional regulation primarily acts by changing the rates of burst initiation and polymerase pause release.
The cells of a eukaryotic organism all share the same genome; however, they differentiate from a single zygote into many different cell types that carry out different functions mediated by the expression of cell-type-specific suites of proteins. A major focus of biological  science has been to understand how cells with the same genome can induce and maintain such divergent functional states. Relatedly, eukaryotic cells must be able to respond quickly to certain stimuli by changing protein expression: canonical examples of such stimuli include heat shock or inflammatory signals. Both cell-type identity and functional responses to signaling are chiefly governed at the level of DNA transcription into RNA, though other processes like protein posttranslational modification and degradation also play important roles.
Many unanswered questions related to the regulation of transcriptional bursting persist.
    OLV
Displacement of the transcription factor B reader domain during transcription initiation Stefan Dexl, Robert Reichelt, Katharina Kraatz, Sarah Schulz, Dina Grohmann, Michael Bartlett, Michael Thomm Nucleic Acids Research, Volume 46, Issue 19, Pages 10066–10081 DOI: 10.1093/nar/gky699
Transcription initiation by archaeal RNA polymerase (RNAP) and eukaryotic RNAP II requires the general transcription factor (TF) B/ IIB. Structural analyses of eukaryotic transcription initiation complexes locate the B-reader domain of TFIIB in close proximity to the active site of RNAP II. Here, we present the first crosslinking mapping data that describe the dynamic transitions of an archaeal TFB to provide evidence for structural rearrangements within the transcription complex during transition from initiation to early elongation phase of transcription. Using a highly specific UV-inducible crosslinking system based on the unnatural amino acid para-benzoyl-phenylalanine allowed us to analyze contacts of the Pyrococcus furiosus TFB B-reader domain with site-specific radiolabeled DNA templates in preinitiation and initially transcribing complexes. Crosslink reactions at different initiation steps demonstrate interactions of TFB with DNA at registers +6 to +14, and reduced contacts at +15, with structural transitions of the B-reader domain detected at register +10. Our data suggest that the B-reader domain of TFB interacts with nascent RNA at register +6 and +8 and it is displaced from the transcribed-strand during the transition from +9 to +10, followed by the collapse of the transcription bubble and release of TFB from register +15 onwards.
OLV
Sven Mil: As you have tried to make your non arguments more detailed, you certainly deserve a more detailed answer. As at present I can only answer from my phone, I will be brief for the moment (I am very bad at typing on the phone). Tomorrow I should be able to answer in greater length. Your biggest errors (but not the only ones) are: a) Thinking that I am denying that the two proteins are homologues, or evolutionary related. That is completely false. I have simply blasted the two proteins, and found no detectable homology. That is a simple fact. You can blast them too, and you will have the same result. That means that there is no obvious sequence homology using the default blast algorithm. Again, that is a very simple fact. I have also said that the authors of the paper I linked had used a different method, using structural considerations and different alignment algorithms, because they were interested in detecting a weak relationship to find a possible evolutionary relationship. That's perfectly fine, but I have no interest in affirming or denying a possible evolutionary relationship. If the two proteins are evolutionarily related, that's no problem for me. As you know, I believe in Common Descent by design. b) Thinking that two similar functions are identical. I have already discussed that. Just to add a point, of course all proteins that bind DNA, and that includes all TFs, have a DBD. I don't think that makes their functions identical, virtually or not. c) Thinking that I have problems with the idea that two proteins with highly different sequence can have a similar function. I have no problems with that. But the simple fact remains that in most cases proteins that retain a highly similar, maybe almost identical function through billions of years, like the alpha and beta chains of ATP synthase, show high sequence conservation. Look also at histones and ubiquitin, and thousands and thousands of other examples. Nobody who really believes in the basics of modern biology can deny that sequence conservation through long evolutionary periods is a measure of functional constraint. d) Thinking that I can detect only high sequence homologies. That is completely false. I use the default blast algorithm to have always the same tool in measuring sequence homology. And the default blast algorithms detects very well most sequence homologies, both low and high, and gives a definite measure of the relevance of those homologies in statistical terms, the E value. So, when I say that I could find no detectable homology, I mean a very precise fact: that blasting those two sequences, that I have clearly indicated, with the default blast algorithm, no homology is detected that reaches a significant E value. Again, you can blast the two sequences yourself. This is the method commonly used to detect homology between sequences. e) So, my procedure detects sequence homologies, both weak and strong. I am interested in jumps not because I can only detect jumps, as you foolishly seem to suggest, buy because jumps are clear indicators of design. I find a lot of jumps, some of them really big, and I find a lot of non jumps. As my graphics clearly show. For example, as I have argued in this same thread, TFs usually do not show big jumps, for example at the vertebrate level, for two interesting reasons: 1) Their DBDs are highly conserved and very old, older usually than the vertebrate appearances, usually already well detectable in single celled eukaryotes. 2) Their other domains or sequences are usually poorly conserved during the evolutionary history of metazoa. However, there are strong indications that such a sequence diversification is functional, and not simply a case of neutral variation in non functional sequences. I have made this argument here for RelA, at post #29. Well, that is enough for the moment. gpuccio
– how could two proteins, vastly different in sequence (according to your BLASTing) carry out the same function?
Different binding partners in the function. The different binding partners can change the rate of transcription. You may be comparing a light switch to a light dimmer and not know it. Gpuccio's method measures protein sequence divergence over time showing resistance to change based on purifying selection. This allows you to demonstrate substitutability and therefor genetic information. You first need to understand his method before trying to make an argument. So far you are talking over him. When you compare a eukaryotic cell to a prokaryotic cell you are using apples and oranges for your comparison and your argument fails. bill cole
This discussion seems interesting, but flies high above my head. What are the main differences between prokaryotic and eukaryotic cells? I tried to search for it but got gazillion results and don't know where to start from. Here are some abbreviations used in this discussion: BRE TFB/TFIIB recognition element CLR/HTH cyclin-like repeat/helix-turn-helix domain DPBB double psi beta barrel DDRP DNA-dependent RNA polymerase GTF general transcription factor LECA last eukaryotic common ancestor LUCA last universal common ancestor Ms Methanocaldococcus sp. FS406-22, PIF primordial initiation factor RDRP RNA-dependent RNA polymerase RNAP RNA polymerase Sc Saccharomyces cerevisiae TFB transcription factor B TFIIB transcription factor for RNAP II factor B Tt, Thermus thermophilus pw
Sven Mil:
how could two proteins, vastly different in sequence (according to your BLASTing) carry out the same function?
Easily- how can two sentences with vastly different letter sequences, carry the same message? Better yet, what is the evidence that blind and mindless processes produced either of the proteins? How can such a concept be tested? ET
Oh brother Gpuccio, let me spell it out so that even you can understand. These two proteins occupy the same space at the same time in their respective systems. Just skimming the paper you cited yourself: "In RNAP complexes with an open transcription bubble, sigma factors and TFIIB both closely approach catalytic sites indicating direct and similar roles in initiation." "Furthermore, sigma factors and TFIIB each have multiple DNA binding helix-turn-helix (HTH) motifs"... which contain the "recognition helix" which is "most important for sequence recognition within the DNA major groove" "Here, 2-HTH motifs of bacterial sigma factors and eukaryotic TFIIB are shown to occupy homologous environments within initiating RNAP and RNAP II complexes" "Based on extensive apparent structural homology, amino acid sequence alignments were generated, supporting the conclusion that sigma factors, TFB and TFIIB are homologs." They detect homology, why can't you Gpucc? =) When modeling the structure of the entire RNA polymerase complex: "The two C-terminal sigma and TFIIB HTH motifs appear to occupy homologous positions in the structures." "Remarkably, sigma CLR/HTH3.0-3.1 and TFIIB CLR/HTH1 occupy homologous positions, and sigma CLR/HTH4.1-4.2 and TFIIB CLR/HTH2 also appear to occupy homologous positions." "The B-reader region approaches the RNAP II active site and, although not homologous by orientation (N?C) or sequence to sigma-Linker3.1-3.2, appears to have convergent functions in initiation and promoter escape." "TFB/TFIIB CLR/HTH2 binding to BREup anchors the initiating complex on ds DNA and establishes the direction of transcription analogously to the anchoring of sigma CLR/HTH4.1-4.2 binding to the ds -35 region of the bacterial promoter" There's tons more, I have filtered out most of the technical/jargony stuff for your benefit. A quick look at that paper makes it clear that these two proteins perform the same function. I can't make it any clearer than that. Now either you haven't looked at the paper, or you are clinging to your denial for the sake of your method. As for your method: You have, "blasted sigma 70 from E. coli with human TFIIB and found no detectable homology" So, to reiterate, you are unable to detect the relationship between these two proteins which perform the same function. This raises many questions and issues with respect to your analyses. - how much bias are you introducing into your analyses by only being able to detect high homology? (you probably have no idea) - how much are you missing? (I bet it's a whole lot and also that you probably have no idea) - if you can only detect high homology (as you have already admitted that's what your method does) wouldn't you always have a jump in information? (the jump is due to your method being unable to detect low-mid homology; not sudden inputs of information from a designer as you love to imply) - how could two proteins, vastly different in sequence (according to your BLASTing) carry out the same function? (either your method is just not good at assessing structure/function relationships, or your assumptions about protein function in sequence space are wrong) (probably both) Hopefully that was in simple enough English for you. Can you understand it? Sven Mil
To Whom This May Concern: GP's method for quantifying relatively sudden appearances of significant amounts of complex functional information within protein groups, has been extensively explained many times in this website and it's obviously very well supported both theoretically and empirically. GP's detailed explanations are freely available to anyone interested in reading and understanding them. PeterA
Sven Mil: First of all, I don't need to cling to anything to defend my procedure, because you have made no real argument against it. If and when you do, I will defend it. I just noticed that the idea that the two functions are virtually identical, that you stated to add some apparent poison to your rethorical non argument, is simply wrong. The two functions are similar, but certainly not identical, either virtually or in any other way. Similar is a very simple English word. Can you understand it? If you had said that the two functions are similar, I would have agreed with you. I have said the same thing from the beginning. But the two proteins are very different, even if they are distant homologues and probably evolutionarily related. One is specifically engineered to help initiate transcription in prokaryotes. The other one is specifically engineered to help initiate transcription in eukaryotes. And, as everybody knows, transcription in prokaryotes and in eukaryotes is very different. gpuccio
Computational Biology Solutions to Identify Enhancers-target Gene Pairs Judith Mary Hariprakash, Francesco Ferrari DOI: 10.1016/j.csbj.2019.06.012
Enhancers are non-coding regulatory elements that are distant from their target gene. Their characterization still remains elusive especially due to challenges in achieving a comprehensive pairing of enhancers and target genes.
We expect this field will keep evolving in the coming years due to the ever-growing availability of data and increasing insights into enhancers crucial role in regulating genome functionality.
Enhancers are distal regulatory elements with a crucial role in controlling the expression of genes. From many point of views they are analogous to promoters, but they are located at a larger distance from the transcription start site (TSS) of the gene they regulate. Enhancers act through the binding of transcription factors just like promoters. However, elucidating the function of enhancers remains more elusive for multiple reasons.
  OLV
Genome-wide enhancer annotations differ significantly in genomic distribution, evolution, and function Mary Lauren Benton, Sai Charan Talipineni, Dennis Kostka & John A. Capra  BMC Genomics    volume 20, Article number: 511 (2019) DOI: 10.1186/s12864-019-5779-x
Finally, we believe that ignoring enhancer diversity impedes research progress and replication, since “what we talk about when we talk about enhancers” includes diverse sequence elements across an incompletely understood spectrum, all of which are likely important for proper gene expression. Efforts to stratify enhancers into different classes, such as poised and latent, are steps in the right direction, but are likely too coarse given our incomplete current knowledge. We suspect that a more flexible model of distal regulatory regions is appropriate, with some displaying promoter-like sequence architectures and modifications and others with distinct regulatory properties in multiple, potentially uncharacterized, dimensions. Consistent and specific definitions of the spectrum of regulatory activity and architecture are necessary for further progress in enhancer identification, successful replication, and accurate genome annotation. In the interim, we must remember that genome-wide enhancer sets generated by current approaches should be treated as what they are—incomplete snapshots of a dynamic process.
  OLV
PavelU
Sven Mil seems to have an interesting argument here.
What argument do you think he is making? bill cole
Sven Mil seems to have an interesting argument here. PavelU
So, Gpuccio, you have to cling to this denial in order to support your design-of-the-gaps-BLASTing. Got it. The fact is, they don't just "both help start transcription". They perform the same function within the process of initiation, in fact, they both "closely approach catalytic sites indicating direct and similar roles in initiation" according to the paper you cited. There is only a handful of proteins that approach the RNA polymerase catalytic site in general (nevermind at the same time), and these are all associated with very specific functions (e.g. TFIIH). For the two proteins we are talking about (sigma and TFIIB) to both be approaching the catalytic site at the same time (during polymerase initiation), it can safely be said that their function is virtually identical. Sven Mil
Bill Cole, “making a rhetorical argument only with no real scientific value” That’s what it looks like. jawa
Sven
Gpuccio, if you can’t grasp the simple fact that these proteins perform virtually identical functions, how can you expect people to believe your attempted evaluations of protein function and homology?
How would you support the claim of virtually identical functions? Maybe start by defining virtually identical. If you pass on this then I have to assume you are making a rhetorical argument only with no real scientific value. bill cole
Virtual reality = reality? :) jawa
Sven Mil: "Virtually identical"? Funny indeed. Of course they both help starting transcription. That's why they are "equivalent", or have a "possible functional analogy and/or evolutionary relatedness", or "similar roles". In completely different organisms, having a very different transcription system, different proteins involved, different regulations. They have almost no sequence homology, as clearly shown by Blast, and some generic structure similarity in the DNA binding site. For you, that means that they have "virtually identical" functions. OK, everybody can judge what "virtually identical" means. For me, it's not identical at all. Maybe very much virtual. And you know, I expect nothing from people, they can evaluate my facts and ideas and believe what they like. And, certainly, I expect nothing from you. Have a good day. gpuccio
Gpuccio, if you can't grasp the simple fact that these proteins perform virtually identical functions, how can you expect people to believe your attempted evaluations of protein function and homology? Or maybe you refuse to admit this simple fact because you know that it means your "analyses" are garbage? Sven Mil
Sven Mil: "Sounds to me like the functions of these proteins are virtually identical." Not to me. Not at all. Nothing in the things you quote justifies your conclusion. However, if you like to think that way, it's fine. This is a free world. gpuccio
Off topic but interestingly related to the concept of complex functional specified information: Veins and Arteries Build Hierarchical Branching Patterns Differently: Bottom-Up versus Top-Down Kristy Red-Horse, Arndt F. Siekmann DOI: 10.1002/bies.201800198 Article Full text
A tree?like hierarchical branching structure is present in many biological systems, such as the kidney, lung, mammary gland, and blood vessels. Most of these organs form through branching morphogenesis, where outward growth results in smaller and smaller branches. However, the blood vasculature is unique in that it exists as two trees (arterial and venous) connected at their tips. Obtaining this organization might therefore require unique developmental mechanisms. arterial trees often form in reverse order. initial arterial endothelial cell differentiation occurs outside of arterial vessels. These pre-artery cells then build trees by following a migratory path from smaller into larger arteries, a process guided by the forces imparted by blood flow. Thus, in comparison to other branched organs, arteries can obtain their structure through inward growth and coalescence.
How hierarchical patterned trees form in diverse tubular organs, such as the kidney, lung, and vasculature, has been of scientific interest for several centuries. establishment of the vasculature follows unique developmental processes, guided by distinct mechanisms important for obtaining proper hierarchical structure and optimal organ function.
The vasculature consists of two interconnected trees. Schematized drawing of arterial and venous blood vessel trees. Note that the two trees are interconnected at their tips, allowing blood to flow from the arteries to the veins.
Although all hierarchical in nature, arteries in different organs and different organisms exhibit slightly different structures.
Advances in our understanding of how arteries are constructed has revealed that arterial trees can form in a unique manner with respect to other hierarchically branched structures—via inward growth rather than outward branching morphogenesis
distinct mechanisms can be responsible for the establishment of hierarchically patterned organs.
live imaging,lineage tracing, and single cell transcriptional analyses indicate that the processes of sprouting, cell fate reacquisition, and cell migration are heavily inter-twined, and are revealing general and organ-specific mechanisms.
it might be necessary to target EC proliferation and migration in ways that were previously not appreciated when enhancing blood flow as a therapeutic aim to diseased or regenerating tissue; and that manipulating these parameters within ECs must be done with caution, because they might affect the formation of venous and arterial trees in opposite ways. Furthermore, it is now clear that genetic and hemodynamic factors interact during artery formation. However, it still needs to be determined exactly how these behaviors result in the exquisitely defined hierarchical branching of the final structure of mature arteries. These new insights are sure to be the subject of exciting studies in the near future.
    OLV
Hmmm, Gpuccio, where to begin. You say (about sigma70 and TFIIB) 'How can you even think, least of all state so boldy, that those two proteins are “virtually identical with respect to function”' But previously you have even admitted "Sigma factors are in some way the equivalent of generic TFs" (TFIIB is a generic TF) And wikipedia apparently says 'sigma factor “is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB”' So both sigma and TFIIB's main function is to catalyze RNA polymerase initiation. And the paper you have cited above says "several reports have indicated the possible functional analogy and/or evolutionary relatedness of bacterial ? factors and eukaryotic TFIIB" "sigma factors and TFIIB both closely approach catalytic sites indicating direct and similar roles in initiation" "Furthermore, sigma factors and TFIIB each have multiple DNA binding helix-turn-helix (HTH) motifs, which typically include three crossing helices and two turns: H1-T1-H2-T2-H3. H3 is referred to as the “recognition helix” because sequences within T2 and toward the N-terminal end of H3 are most important for sequence recognition within the DNA major groove." Sounds to me like the functions of these proteins are virtually identical. Sven Mil
OLV: Thank you for the interesting links. The first two papers quoted at #223 are specially intriguing, in the light of all that we have discussed: The Regulation of NF-kB Subunits by Phosphorylation https://www.mdpi.com/2073-4409/5/1/12/htm
Abstract: The NF-kB transcription factor is the master regulator of the inflammatory response and is essential for the homeostasis of the immune system. NF-kB regulates the transcription of genes that control inflammation, immune cell development, cell cycle, proliferation, and cell death. The fundamental role that NF-kB plays in key physiological processes makes it an important factor in determining health and disease. The importance of NF-kB in tissue homeostasis and immunity has frustrated therapeutic approaches aimed at inhibiting NF-kB activation. However, significant research efforts have revealed the crucial contribution of NF-kB phosphorylation to controlling NF-?B directed transactivation. Importantly, NF-kB phosphorylation controls transcription in a gene-specific manner, offering new opportunities to selectively target NF-kB for therapeutic benefit. This review will focus on the phosphorylation of the NF-?B subunits and the impact on NF-kB function.
And: The Ubiquitination of NF-kB Subunits in the Control of Transcription https://www.mdpi.com/2073-4409/5/2/23/htm
Abstract: Nuclear factor (NF)-kB has evolved as a latent, inducible family of transcription factors fundamental in the control of the inflammatory response. The transcription of hundreds of genes involved in inflammation and immune homeostasis require NF-kB, necessitating the need for its strict control. The inducible ubiquitination and proteasomal degradation of the cytoplasmic inhibitor of kB (IkB) proteins promotes the nuclear translocation and transcriptional activity of NF-kB. More recently, an additional role for ubiquitination in the regulation of NF-kB activity has been identified. In this case, the ubiquitination and degradation of the NF-kB subunits themselves plays a critical role in the termination of NF-kB activity and the associated transcriptional response. While there is still much to discover, a number of NF-kB ubiquitin ligases and deubiquitinases have now been identified which coordinate to regulate the NF-kB transcriptional response. This review will focus the regulation of NF-kB subunits by ubiquitination, the key regulatory components and their impact on NF-kB directed transcription.
Phosphorylation and ubiquitination are certainly two very basic levels of regulation of almost all biological processes. They are really everywhere. gpuccio
GP, you may have opened a can of worms with this OP. :) This NF-kB seems to be all over the map. Another NF-kB paper One more NF-kB paper and another one OLV
GP, You're keeping this discussion very interesting. Thanks. Here's a NF-kB article. Here's another NF-kB article. OLV
Figure 1. Panoramic view of the NF-?B miRNA target genes and target genes of miRNAs. Wow! PeterA
To all: Of course, it's not only lncRNAs. Let's not forget miRNAs! The functional analysis of MicroRNAs involved in NF-kB signaling. https://www.europeanreview.org/article/10746 For those who love fancy diagrams, have a look at Fig. 1. :) gpuccio
gpuccio, Thanks for the detailed explanation. Now I understand what you meant. PeterA
PeterA: "Is this because you may ignore functional information if the number of bits is less than certain threshold value for the number of bits that perhaps is very high?" No. That has nothing to do with the "bias" I have mentioned at #215. That is more or less what Sven Mil "suggested" (to say that he "argued" would really be inappropriate). The simple point is: with my method I detect sudden appearances of new functional information at the sequence level. The sequence is what is measured: the blast measures homologies in sequence. The procedure is meant to detect differences in huma conserved functional information. Those specific sequences that: a) Did not exist before they appear b) Are conserved for uindreds of million years after their appearance So, if I say that a protein shows an information jump in vertebrates of, say, 1280 bits, like CARD11 (see post #118), I mean that those 1280 bits of homology to the human protein are added in vertebrates to whatever homology to the human form already existed before. IOWs, in deuterostomia that are not vertebrates, including the first chordates, there may be any weak homology with the human protein. In the case of CARD11, it is really low, but detectable. Branchiostoma belcherii, e cephalochordate, exhibits 192 bits of homology between its form of CARD11 and the human form. The E value is 6e-37, and therefore the homology is certainly significant. IOWs, the protein already existed in chordates that are not vertebrates. In a form that was, however, very different from the human form, even if detectable as homologous. But in cartilaginous fishes, more than 1000 new bits of homology to the human protein are added to what already existed. Callorhincus milii exhibits 786 identities, and 1514 bits of homology to the human form. That is an amazing information jump, and it has nothing to do with minor homologies that are not considered or emphasized, as "suggested" by Sven Mil. That increment in sequence homology to the human form is very real, very sudden, and completely conserved. There is no way to explain it, except design. The "bias" that I mentioned at #215 consists in the fact that the blast algorithms underestimates the informational value of homologies. It assign about 2 bits to identities, while we know that the potential informational value of an AA identity is about 4.3 bits. Even correcting for many factors, that is a big underestimation, considering that we are dealing with a logatithmic scale. Another reason for the underestimation bias is that part of the sequence that is not conserved os often functional too, as I have argued many times, and here too at #29 with the very good example of RelA. I quote my conclusions there: "IOWs, my measure of functional information based on conserved homologies through long evolutionary times does measure functional information, but usually underestimates it. For example, in this case the value of 404 bits would measure only the conserved function in the DBD, but it would miss completely the undeniable functional information in the TAD domains, because that information, while certainly present, is not conserved among species. This is, IMO, a very important point." So, my procedure to evaluate functional information in proteins is certainly precise enough and reliable, but certainly biased in the sense of underestimation, for at least two important reasons: a) The blast algorithm is a good but biased estimator of functional information: it certainly underestimates it. b) The functional information in non conserved parts of the sequence is not detected by the procedure. So, the simple conclusion is: my values of functional information are certainly a reliable indicator of true functional information in proteins, but the true value of functional information and of information jumps is certainly higher than the value I get from my procedure. IOWs, we can be sure tha the real value of functional information in that protein or in that jump is at least the value given by my procedure. gpuccio
GP, “my method to measure funtional information by homology conservation for long evolutionary times as shown by the Blast algorithm is, in one important sense, biased: it certainly underestimates the true functional information, as I have shown many times here.” Is this because you may ignore functional information if the number of bits is less than certain threshold value for the number of bits that perhaps is very high? IOW, your method is very rigorous? PeterA
I agree that any argument against GP’s method for quantification of complex functional information in proteins, should clearly present a “credible pathway that can explain the appearance of thousands of bits of new functional information”. PeterA
Sven Mil: The "virtually identical with respect to function" seems to be your imagination, certainly in relation to the case we were discussing (the supposed homologies between sigma factor 700 and human TFIIB). How can you even think, least of all state so boldy, that those two proteins are "virtually identical with respect to function"?. That is a very telling indication of how serious your attitude is. Moreover, your "argument" seems to be that as I am not trying to detect very weak homologies, the extremely strong jumps in human conserved information that I do detect in short evoluyionary times are explained. By what? By weak homologies that have nothing to do with those strong specific sequences that appear suddenly, that are conserved for hundreds of million of years, and that anyone can easily detect? Is that even the start of an argument? No. It is just false reasoning, of the worst kind. So, if you have anything interesting to say, please say it. If you can point to any credible pathway that can explain the appearance of thousands of bits of new functional information, through anything that you can detetct in the genomes and proteomes, pleaso do it. If you have any hint of the functional intermediates that are nowhere to be seem at the molecular level for that well detectable information, please show that to us. On one point you are certainly right: my method to measure funtional information by homology conservation for long evolutionary times as shown by the Blast algorithm is, in one important sense, biased: it certainly underestimates the true functional information, as I have shown many times here. Have a good time. gpuccio
Gpuccio, it worries me that your method is unable to detect homology between proteins that are similar with respect to structure and virtually identical with respect to function. "I have no interest in denying possible weak homologies. I just ignore them, because they are not relevant to my argument." Not relevant? Just ignore them? Your argument consists of pointing to these "large jumps in homology", but isn't that what we'd expect to see if you can't detect low homology? If you could only see things 2 miles above sea level would you assume that planes never land and that birds don't actually exist? How much are you actually missing? A whole lot I'd bet. It seems like your method is extremely biased and capable only of detecting "steady-state", not the actual evolutionary steps we're interested in. Sven Mil
John_a_designer at #212: The questions you ask are very good, and the subject is not so intuitive as it could seem. I will try to express how I understand it, but of course I am ready to consider any contribution about this important point. The first important thing is that we must not confound randomness and chaos. I have quoted at #204 a paragraph from the paper which tries to explain the difference between the two. However, I must say that I am not completely happy wuth what is said there. My first point is that we are dealing here with sustems that are, in essence, deterministic. Both chaotic systems and random systems are deterministic, in the sense that waht happens in those system is in the end governed by necessity laws, in particular the laws of physics or chemistry. I ahve said many times that the only field of science that probably implies a true randomness, what we could call intrinsic randomness, is quantum mechanics. In quantum mechanics, the wave function, if and when it collapses, collapses according to probabilitstic distributions that are, probably (it dependd on the intepretations), intrinsically random. In all other nonquantum systems, we assume that the laws ofn physics are the real laws that govern the evolution of the system, and those laws, if not at quamtum level, are deterministic laws. Indeed, even quantum mechanics is mainly deterministic: the wave function evolves in a completely deterministic way, unless and until it collapses. So, both random systems and chaotic systems, if we are not considering quantum effects, are completely deterministic systems. So, what is the difference between what we call a deterministic system and what we call a random system? As I have said many times, the difference is only in how we can describe the system and its evolution. Let's consider a simple deterministic system. Let's say that we have a gear with two kinds of teeth, one kind shorter and one kind longer. Let's say that the gear os rotating at a constant rate, and it interacts with another gear so that the long teeth evoke one type of output, and the shorter teeh evoke another type of output. So, we have a cyclic output with two states, which can be well predicted knowing the configuration of the first gear. This is, very simply, a deterministic system, in the sense that we can fully describe it in terms of its initial configuration, and know with reasonable precision how the system will behave. Not, let's take instead a simple random system: the classic tossing of a fair coin. Here, too, the system is in essence deterministic: each coin tossing completely obeys the laws of classical mechanics. If we can know all the initial conditions of the tossing, we can, maybe with complex computations, know exactly if the result will be a head or a tail. But that is not the case. There is no way we can know all the variables involved. Because there are too many of them, and we cannot measure or control all of them. The consequence is that we cannot ever know for certain if one specific tossing will give a head or a tail. So, are we completely powerless in front of such a system? can we say nothing that helps us describe it? No. If the coin is fair, we know that, on a big number of tossings, the percentage of heads and tails will be similar. Not exactly the same, but very much similar, and ever more similar if we increase the number of tossings. This is a probabilistic description. We are applying a mathematical object, an uniform probability distribution where only two events are possible, and each has a probability of 0.5, to describe with some efficiency a simple system that we cannot describe in any other way. This is randomness. The impossibility to compute a single event, but the possibility to describe with some precision a general distribution. Now, there is no need that the probability dostribution is uniform. And there is no need that no necessity effect is detectable. In most real systems, including biological systems, random noise is mixed with necessity effects. If the random noise is strong enough, so that it cannot be ignored, the system is still random. Let's consider an unfair coin, where an uneven distribution of weight (a necessity effect) is strong enough that it modifies the neutral probability distribution, so that heads have a probability of 0.6 and tails a probability of 0.4. Is the system still random? Of course it is. We have no way to know in advance what the result of our next tossing will be. The system is still random, because we can describe it only probabilistically. Still, the uneven distribution tells us that there is some necessity effect that favors heads. OK, so this is randomness. Many different variables, that we cannot really measure or control, interact indipendently to generate a configuration that can be described only using a probability distribution. In no case can we know deterministically how the system wiill evolve. It is interesting that many random systems in nature are not described well by a uniform distribution, even if loaded, but rather by other probability distributions, first of all the normal distribution. In the normal distribution, the system is random, but certain events are much more likely than others. Chaos is another thing. Chaotic systems are deterministic systems, sometimes simple enough, where some special form of the mathematics that describes the system makes the evolution of the system extremely sensitive to small initial variations in the starting conditions. In the example of the model described in the paper, oscillations in the external signal determine the period and amplitude of the oscillations in the NF-kB system. If the amplitude in the external signal is low, the two systems are simply syncronized. That is a deterministic system. But, if the amplitude of the external signal increases, if it is very big, then the mathematics governing the interaction between the two systems becomes chaotic: while the oscillations of the external signal remain regular, the oscillations in the NF-kB system become completely unpredictable in amplitude. That is chaos. The system is still simple, two systems are essentially interacting. The scenario seems not different from the scenario where the two system are simply synchronized. But, suddenly, a simple increase in the amplitude of the external system changes the mathematical realtionships, and the response of the NF-kB system becomes chaotic. Now, let's go back to your example of traffic. I am not completely sure, but I would say that that is a random system, not necessarily a chaotic system. Here the lack of order is due to the many variables involved, that interact independently. In a sense, it is like the tossing of the coin. It is true that "every vehicle has a destination and a purpose for its travel", as it is true that the coin obeys precise laws when it is tossed. But there are too many vehicles, and their destinations are unrelated and independent. That generates a random configuration that we cannot anticipate with precision, because we should know in advance all the destinations and purposes, and even the driving style or mood of each driver, and so on. We can't. So, at best, we can describe the system by some probability distribution: maybe there is more probability of having traffic in one direction at certain times, and so on. The important point in the paper quote is not so much that two systems can interact in a chaotic way: that happens sometimes in physical systems. The amazing point is that such interaction can be generated by specific biological stimuli (for example by regulating the amplitude of the oscillation in the TNF system), so that chaos is generated in the NK-kB system, and that such chaotic response can change in a robust way the pattern of genes that are activated (for example favoring low affinity genes), and that this whole system is functional. IOWs, as I have said, a specific signal is semiotically connected to the correct, complex response, involving hundreds of different genes, by a translation system that uses (among other tools) the induction of a chaotic state to link the two things. gpuccio
Gpuccio, Here’s a question I have: Is what we perceive here as chaos just the result of overwhelming complexity and complex interactions of interacting, overlapping and numerous dynamic systems? Since my background is not in the biological sciences but in mechanical engineering-- specifically machine design-- I try to find analogies from the world of machines and machine systems to help me understand what is happening or maybe happening with biochemical “molecular machines” and biological systems. The analogy I started to think of from reading over the paper (cited @ 194) was urban traffic flow which from a time-lapse birds-eye-view can appear to be chaotic and even at times without rhyme or reason. Here, for example, are several time lapse video clips of street and highway traffic in Atlanta, Georgia in the U.S. https://www.youtube.com/watch?v=zOu-f-GdfhU While there is a continuous dynamic flow of traffic, it also at times appears to be chaotic as cars and trucks appear to more or less at random change or merge from one lane of traffic to another, or stop at a street intersection to make a turn etc. If, however by analogy, we take a “microscopic” view of what each car or truck is doing we find that every vehicle has a destination and a purpose for its travel. What make the scene appear to be chaotic is that the individual travelers have different destinations and different purposes for their travel. For example, there may be some of travelers who are going to work or out to dinner or out to a sporting event or out shopping or going back home. There may be trucks delivering supplies and merchandise to businesses… or, there may be fire and rescue vehicles speeding to an accident or a fire or police responding to a crime. It appears to me that there is something like that going on in individual cells which is just compounded astronomically when we consider the complexity of higher organisms as a whole. Of course as with all analogies the analogy breaks down. At present, at least until self-driving cars and trucks become widely available and viable, each car or truck is under the control of an intelligent agent. Biological systems are more analogous a world full of robots with the robots maintaining and propagating the system. To paraphrase Abraham Lincoln the robots would be of the system, by the system and for the system. Nevertheless, I still think on some level such a system would appear to be very chaotic but that would be due to its overwhelming complexity. If such systems were truly chaotic they would cease to function correctly and eventually cease to function at all. The overwhelming complexity, of course, is evidence of design. john_a_designer
I still didn't quite understand how old is the NK-kB, what it evolved from, how that could happen. pw
GP @208: "the NF-kB system is not only a wonderful polymorphic semiotic system (see post #181), but also a semiotic system that uses, as a tool to connect the right stimulus to the right response, not only the usual biochemical configuration patterns, but also a very peculiar physical and mathematical effect of the type of oscillations involved in two different and separate systems." I think in this case "Wow!" is an understatement. :) PeterA
To all: NK-kB is not the ony TF system that presents oscillations in concentration and nuclear occupancy. Another important example is p53: Conservation and divergence of p53 oscillation dynamics across species https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5687840/
Summary The tumor suppressing transcription factor p53 is highly conserved at the protein level and plays a key role in the DNA damage response. One important aspect of p53 regulation is its dynamics in response to DNA damage, which include oscillations. Here, we observe that while the qualitative oscillatory nature of p53 dynamics is conserved across cell lines derived from human, monkey, dog, mouse and rat, the oscillation period is variable. Specifically, rodent cells exhibit rapid p53 oscillations, whereas dog, monkey and human cells show slower oscillations. Computational modeling and experiments identify stronger negative feedback between p53 and MDM2 as the driver of faster oscillations in rodents, suggesting that an oscillation's specific period is a network-level property. In total, our study shows that despite highly conserved signaling, the quantitative features of p53 oscillations can diverge across evolution. We caution that strong amino acid conservation of proteins and transcriptional network similarity do not necessarily imply conservation of time dynamics.
p53 is a very important tumor suppressor gene, involved mainly in the response to DNA damage. gpuccio
To all: OK, back to the paper about chaos. So, the general idea is that, in the simplified (but very precise) model used by the authors, fixed priod (50 minites) oscillations in TNF concentration can act as an "external signal", so that the NK-kB oscillation "locks on to the external signal’s frequency and phase" (Fig. 1c, bottom line). But, according to the amplitude of the TNF oscillations, the effect changes. While for low amplitudes there is the "locking" effect, intermediate amplitudes transalte into some regular variation of the NF-kB amplitude, IOWs generate "multi stable cycles" of different amplitude in the NK-kB oscillations (Fig. 1c, intermediate line). Finally, if the amplitude of the external signal furtherly increases, the system becomes chaotic, and the amplitude of the NF-kB system oscillations becomes completely unpredictable. OK, that's very interesting. But the important point is: according to the authors, these variation of pattern in the NF-kB signal, induced vy variation in the amplitude of the external signal (TNF), will have definite effects on downstream transcription patterns in the nucleus. Indeed, the point made by the authors is that the chaotic pattern induces by high amplitudes in the external signall will have a definite and robuts effect: it will enhance transcription of genes with low affinity for the NK-kB TFs, (LAGs). In the other scenarios, instead, transcription of high affinity genes (HAGs) or medium affinity genes (MAGs) will prevail. And the idea is that such a robust effect of an unpredictable pattern may well have a specifi role in transcription regulation, IOWs it can be a supplementary "tool" in the functional regulation of what genes are transcribed, and therefore of the type and level of the response. Well, isn't that interesting? Of course, to do that in a functional way, there is the basic need that the high amplitude of the external signal and the chaotic pattern be correctly associated, so that the right signal generates the correct response. And that association is obviously semiotic. So, is all this is true, the NF-kB system is not only a wonderful polymorphic semiotic system (see post #181), but also a semiotic system that uses, as a tool to connect the right stimulus to the right response, not only the usual biochemical configuration patterns, but also a very peculiar physical and mathematical effect of the type of oscillations involved in two different and separate systems. Wow! :) gpuccio
Pw: I don't think we have any idea about that. gpuccio
GP, at what point it is believed that the oscillations in the NF-kB system appeared for the first time in biological history? pw
To all: The paper is about a simplified model of interaction between two different oscillating systems, NF-kB and TNF. The interaction between the two can generate, in some circumstancases, a chaotic system.
Our investigation starts with a model of the transcription factor NF-kB that is known to exhibit oscillatory dynamics3,9,22. A schematic version of this is found in Fig. 1a and a full description is presented in the Supplementary Note 1. In this deliberately simplified model, the oscillations arise from a single negative feedback loop between NF-kB and its inhibitor IkB?, and can be triggered by TNF via the activation of the IkB kinase (IKK). We then allow TNF to oscillate.
Indeed, the main cause of the oscillations in the NF-kB system seems to be due to the alternating degradation of the IkB alpha (the inhibitor), IOWs the activation of the dimer, and the re-synthesis of IkB alpha, a form of negative feedback. gpuccio
To all: From the paper above mentioned, a paragraph about the difference between random noise and chaos.
What is chaos? When we speak of chaos, we refer to deterministic chaos. Deterministic means that if one knows the initial state of the system exactly, then the dynamical trajectory will be the same every time it is initiated in that state. However, any two initial conditions infinitesimally apart will have exponentially diverging trajectories as time proceeds making it practically impossible to predict the future dynamics—hence chaos28–31. It is important to note that the unpredictability of chaos does not arise from stochasticity—the latter refers to a non-deterministic system with noise. Noise is observed in most real-world systems and can often result in very different dynamics than the deterministic version of the same system. For example, noise can cause transitions between different states which would never occur if the system were deterministic. Thus, both deterministically chaotic and noisy systems exhibit unpredictability of their future trajectories, but for very different underlying reasons.
gpuccio
Thank you Gp, The link you provided cleared up some misunderstanding on my part (operons are not TF’s but are grouping of genes that TF’s help activate) and clarified a number of other things. john_a_designer
John_a_designer: OK, that's how I see it. In eukaryotes we must distinguish between general TFs, which act in much the same way in all genes and are required to initiate transcription by helping recruit RNA polymerase at the promoter site, and specific TFs, that bind at enhancer sites and activate or repress transcription of specific genes. The NF-kB system described in the OP is a system of specific TFs. Now, in eukaryotes there are six general TFs. Archea have 3. In bacteria sigma factors have the role of general TFs. Sigma factors, archaeal general TFs and eukaryotic general TFs seem to share some homology. I think that the archaeal system, however, is much more similar to the eukaryotic system, and that includes RNA polymerases. Then bacteria have a rather simple system of repressors or activators, specific for specific genes, or better operones. Those repressors and activators bind DNA near the promoter of the specific operon. They are in some way the equivalent of eukaryotic specific TAs, but the system in by far simpler. You can find some good information about bacteria here: https://bio.libretexts.org/Bookshelves/Cell_and_Molecular_Biology/Book%3A_Cells_-_Molecules_and_Mechanisms_(Wong)/9%3A_Gene_Regulation/9.1%3A_Prokaryotic_Transcriptional_Regulation The operon is simply a collection of genes that are physically near, are transcribed together from one single promoter, and are functionally connected. So, the lac operon is formed by three genes, lacZ, lacY, lacA, sharing one promoter. A sigma factor binds at the promoter, together with RNA polymerase. A repressor and an activator may bind DNA near the promoter to regulate operon transcription. While archaea are more similar to eukaryotes in the system of general TF, the regulation of transcription by one or two suppressor or activator seems to be similar to what described for bacteria. Finally, there is another important aspect where archaea are more similar to eukarya. Their chromatin structure is based on histones and nucleosome, like in eukaryotes, but the system is rather different from the corresponding eukaryotic system. Instead, bacteria have their form of DNA compression, but it is not based on histones and nucleosomes. This, as far as I can understand. gpuccio
Gp, I am still trying to define precisely what a transcription factor is. Earlier @ 91, I asked ”are there transcription factors for prokaryotes?” According to Google, no.
Eukaryotes have three types of RNA polymerases, I, II, and III, and prokaryotes only have one type. Eukaryotes form and initiation complex with the various transcription factors that dissociate after initiation is completed. There is no such structure seen in prokaryotes.
https://uncommondesc.wpengine.com/intelligent-design/controlling-the-waves-of-dynamic-far-from-equilibrium-states-the-nf-kb-system-of-transcription-regulation/#comment-680819 (But maybe what I am not understanding is the result of a difference of semantics, context or nuance.) Recently, I ran across another source which seemed to suggested that prokaryotes do have TF’s.
What has to happen for a gene to be transcribed? The enzyme RNA polymerase, which makes a new RNA molecule from a DNA template, must attach to the DNA of the gene. It attaches at a spot called the promoter. In bacteria, RNA polymerase attaches right to the DNA of the promoter. You can see how this process works, and how it can be regulated by transcription factors, in the lac operon and trp operon videos. In humans and other eukaryotes, there is an extra step. RNA polymerase can attach to the promoter only with the help of proteins called basal (general) transcription factors. They are part of the cell's core transcription toolkit, needed for the transcription of any gene.
https://www.khanacademy.org/science/biology/gene-regulation/gene-regulation-in-eukaryotes/a/eukaryotic-transcription-factors This article seems to suggest that the lac operon is a transcription factor but then in the next paragraph it states: “In humans and other eukaryotes, there is an extra step. RNA polymerase can attach to the promoter only with the help of proteins called basal (general) transcription factors.” So is the lac operon a transcription factor? Is the term operon synonymous with transcription factor or is there a difference? In other words, do “operons” have same role in transcription as TF’s? Is there a strong homology between the lac operon which turns on the gene for lactose metabolism in e coli and the TF/lactose metabolism gene in eukaryotes, including humans? Does this have anything to do with lactose intolerance? john_a_designer
EugeneS: That is an important point. The question is: can life be reduce to the designed information that sustains it? If that is the case, then design explains everything, both at OOL and later. If the anwer is no, all is different. As we still don't understand what life is, from a scientific point of view, we have no final scientific answer. My personal opinion is that the second option is true, and that would explain why in our experience life comes only from life. If life cannot be reduced to the designed information that sustains it, then certainly OOL is a case where both a lot of designed functional information appears and life is started, whatever that implies. For what happens after OOL, all depends on the model one accepts. I don't know if you have followed the discussion here between BA and me. In particular, the three possible models I have discussed at #43. In my model (model b in that post) after OOL things happen by descent with added design. So, in that model, it is true after OOL that life always comes from life (if the descent is universal), and only OOL would be a special event in that sense. The new functional information, in all cases, is the product of design interventions. In model c, instead, each new "kind" (to use BA's term) is designed from scratch at some time. So, the appearance of each new kind has the same status as an OOL event. Model a is just the neo-darwinian model, where everything, at all times, happens by RV + NS, and no design takes place, least of all a special, information independent start of life. gpuccio
EugeneS
However, it does not apply equally either to creation (design, for the purposes of this discussion) or to imagined abiogenesis. It is clear that the vitalistic rule is violated in the case of abiogenesis, but it is also violated in the case of design because the relation between the designer and the designed is not that of birth/descent. It can be more likened to the relation between the painter and the painting. Fundamentally, the painting is of a different nature from the painter, whereas descent implies the same nature between the ancestor and the progeny.
That is a great point and analogy. Yes, I think where there is design then there is a purposeful, creative act and what follows from that cannot be considered descent for the reason you give. Silver Asiatic
GP #129, Thanks very much. I will give it a read. Life comes from life, once it has been started, that is for sure. However, it does not apply equally either to creation (design, for the purposes of this discussion) or to imagined abiogenesis. It is clear that the vitalistic rule is violated in the case of abiogenesis, but it is also violated in the case of design because the relation between the designer and the designed is not that of birth/descent. It can be more likened to the relation between the painter and the painting. Fundamentally, the painting is of a different nature from the painter, whereas descent implies the same nature between the ancestor and the progeny. As an aside, a grumpy remark, I do not like the new GUI on this blog ;) The old one was way better. This one feels like one of .gov British sites for the plain English campaign. It is less convenient when accessed with a mobile phone. But it does not matter... EugeneS
OLV at #193: Interesting paper. Indeed I blasted the human p100 protein agains sponges, and there is a good homology (total bitscore 523 bits). So yes, the system is rather old in metazoa. Consider that the same protein, blasted against single celled eukaryotes, gives only a low homology (about 100 bits), limited to the central ANK repeats. No trace of the DNA binding domain. So, the system seems really to arise in Metazoa, and very early. gpuccio
To all: Indeed, I have not been really precise at #194, I realize. I said: "OK, this very recent paper (published online 2019 Jan 8) seems to be exactly about what I discuss in the OP:" But that is not really true. This paper indeed adds a new concept to what I have discussed in the OP. In fact the paper, while briefly discussing also random noise, is mainly about the effects of a chaotic system, something that I had not considered in any detail in my OP. My focus there has been on random noise and far from equilibrium dynamics. Chaos systems certainly add a lot of interesting perspective to our scenario. gpuccio
To all: The paper linked at #194 is really fascinating. I given it a first look, but I will certainly go back to digets better some aspects (probably not the differential equations! :) ). Two of the authors are from the Niels Bohr Institue in Copenaghen, a really interesting institution. The third author is from Bangalore, India. For the moment, let's start with the final conclusion (I have never been a tidy person! :) ):
Chaotic dynamics has thus far been underestimated as a means for controlling genes, perhaps because of its unpredictability. Our work shows that deterministic chaos potentially expands the toolbox available for single cells to control gene expression dynamically and specifically. We hope this will inspire theoretical and experimental exploration of the presence and utility of chaos in living cells.
The emphasis on "toolbox" is mine, and the reason I have added it should be rather self-evident. :) Let's think about that. gpuccio
To all: OK, this very recent paper (published online 2019 Jan 8) seems to be exactly about what I discuss in the OP: On chaotic dynamics in transcription factors and the associated effects in differential gene regulation https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6325146/ The abstract:
Abstract The control of proteins by a transcription factor with periodically varying concentration exhibits intriguing dynamical behaviour. Even though it is accepted that transcription factors vary their dynamics in response to different situations, insight into how this affects downstream genes is lacking. Here, we investigate how oscillations and chaotic dynamics in the transcription factor NF-kB can affect downstream protein production. We describe how it is possible to control the effective dynamics of the transcription factor by stimulating it with an oscillating ligand. We find that chaotic dynamics modulates gene expression and up-regulates certain families of low-affinity genes, even in the presence of extrinsic and intrinsic noise. Furthermore, this leads to an increase in the production of protein complexes and the efficiency of their assembly. Finally, we show how chaotic dynamics creates a heterogeneous population of cell states, and describe how this can be beneficial in multi-toxic environments.
I think I will read it carefully and come back about it later. :) gpuccio
GP, the topic you chose for this OP is fascinating indeed. Here's a related paper: Transcription factor NF-kB in a basal metazoan, the sponge, has conserved and unique sequences, activities, and regulation Leah M. Williams, Melissa M. Inge, Katelyn M. Mansfield, Anna Rasmussen, Jamie Afghani, Mikhail Agrba, Colleen Albert, Cecilia Andersson, Milad Babaei, Mohammad Babaei, Abigail Bagdasaryants, Arianna Bonilla, Amanda Browne, Sheldon Carpenter, Tiffany Chen, Blake Christie, Andrew Cyr, Katie Dam, Nicholas Dulock, Galbadrakh Erdene, Lindsie Esau, Stephanie Esonwune, Anvita Hanchate, Xinli Huang, Timothy Jennings, Aarti Kasabwala, Leanne Kehoe, Ryan Kobayashi, Migi Lee, Andre LeVan, Yuekun Liu, Emily Murphy, Avanti Nambiar, Meagan Olive, Devansh Patel, Flaminio Pavesi, Christopher A. Petty, Yelena Samofalova, Selma Sanchez, Camilla Stejskal, Yinian Tang, Alia Yapo, John P. Cleary, Sarah A. Yunes, Trevor Siggers, Thomas D. Gilmore doi: 10.1101/691097  
Biological and biochemical functions of immunity transcription factor NF-?B in basal metazoans are largely unknown. Herein, we characterize transcription factor NF-?B from the demosponge Amphimedon queenslandica (Aq), in the phylum Porifera. Structurally and phylogenetically, the Aq-NF-?B protein is most similar to NF-?B p100 and p105 among vertebrate proteins, with an N-terminal DNA-binding/dimerization domain, a C-terminal Ankyrin (ANK) repeat domain, and a DNA binding-site profile more similar to human NF-?B proteins than Rel proteins. Aq-NF-?B also resembles the mammalian NF-?B protein p100 in that C-terminal truncation results in translocation of Aq-NF-?B to the nucleus and increases its transcriptional activation activity. Overexpression of a human or sea anemone I?B kinase (IKK) can induce C-terminal processing of Aq-NF-?B in vivo, and this processing requires C-terminal serine residues in Aq-NF-?B. Unlike human NF-?B p100, however, the C-terminal sequences of Aq-NF-?B do not effectively inhibit its DNA-binding activity when expressed in human cells. Tissue of another demosponge, a black encrusting sponge, contains NF-?B site DNA-binding activity and an NF-?B protein that appears mostly processed and in the nucleus of cells. NF-?B DNA-binding activity and processing is increased by treatment of sponge tissue with LPS. By transcriptomic analysis of A. queenslandica we identified likely homologs to many upstream NF-?B pathway components. These results present a functional characterization of the most ancient metazoan NF-?B protein to date, and show that many characteristics of mammalian NF-?B are conserved in sponge NF-?B, but the mechanism by which NF-?B functions and is regulated in the sponge may be somewhat different.
OLV
To all: Again about crosstalk. It seems that our NF-kB system is continuously involved in crosstalks of all types. This is about crosstalk with the system of nucleoli: Crosstalk between NF-kB and Nucleoli in the Regulation of Cellular Homeostasis https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6210184/
Abstract Nucleoli are emerging as key sensors of cellular stress and regulators of the downstream consequences on proliferation, metabolism, senescence, and apoptosis. NF-kB signalling is activated in response to a similar plethora of stresses, which leads to modulation of cell growth and death programs. While nucleolar and NF-kB pathways are distinct, it is increasingly apparent that they converge at multiple levels. Exposure of cells to certain insults causes a specific type of nucleolar stress that is characterised by degradation of the PolI complex component, TIF-IA, and increased nucleolar size. Recent studies have shown that this atypical nucleolar stress lies upstream of cytosolic I?B degradation and NF-kB nuclear translocation. Under these stress conditions, the RelA component of NF-kB accumulates within functionally altered nucleoli to trigger a nucleophosmin dependent, apoptotic pathway. In this review, we will discuss these points of crosstalk and their relevance to anti-tumour mechanism of aspirin and small molecule CDK4 inhibitors. We will also briefly the discuss how crosstalk between nucleoli and NF-kB signalling may be more broadly relevant to the regulation of cellular homeostasis and how it may be exploited for therapeutic purpose.
Emphasis mine. And this is about crosstalk with Endoplasmic Reticulum: The Crosstalk of Endoplasmic Reticulum (ER) Stress Pathways with NF-kB: Complex Mechanisms Relevant for Cancer, Inflammation and Infection. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6027367/
Abstract Stressful conditions occuring during cancer, inflammation or infection activate adaptive responses that are controlled by the unfolded protein response (UPR) and the nuclear factor of kappa light polypeptide gene enhancer in B-cells (NF-kB) signaling pathway. These systems can be triggered by chemical compounds but also by cytokines, toll-like receptor ligands, nucleic acids, lipids, bacteria and viruses. Despite representing unique signaling cascades, new data indicate that the UPR and NF-kB pathways converge within the nucleus through ten major transcription factors (TFs), namely activating transcription factor (ATF)4, ATF3, CCAAT/enhancer-binding protein (CEBP) homologous protein (CHOP), X-box-binding protein (XBP)1, ATF6? and the five NF-kB subunits. The combinatorial occupancy of numerous genomic regions (enhancers and promoters) coordinates the transcriptional activation or repression of hundreds of genes that collectively determine the balance between metabolic and inflammatory phenotypes and the extent of apoptosis and autophagy or repair of cell damage and survival. Here, we also discuss results from genetic experiments and chemical activators of endoplasmic reticulum (ER) stress that suggest a link to the cytosolic inhibitor of NF-kB (IkB)alpha degradation pathway. These data show that the UPR affects this major control point of NF-kB activation through several mechanisms. Taken together, available evidence indicates that the UPR and NF-kB interact at multiple levels. This crosstalk provides ample opportunities to fine-tune cellular stress responses and could also be exploited therapeutically in the future.
Emphasis mine. Another word that seems to recur often is "combinatorial". And have you read? These two signaling pathways "converge within the nucleus through ten major transcription factors (TFs)". Wow! :) gpuccio
Silver Asiatic:
I always thought that’s what ID was trying to do. Use Dawkins’ own worldview and his own claims – all the things he already accepts — and show that ID is the most reasonable conclusion. It would all be based on his (or mainstream) science.
That's definitely what ID is trying to do. That's certainly what I am trying to do.
I am leaning more and more to the idea that it is not worth the effort to adopt “Dawkins philosophy/science” for the sake of trying to convince people, and that it may be more effective to start with the clash of philosophies and world-views rather than start with science (ID). Not sure, but I am leaning that way. Putting philosophy and theology first, and then using ID inferences to support that might work better.
Maybe. But I think the two thinks can and should work in parallel. There is no conflict at all, as far as each activity is guided by its good and pertinent philosophy! :) And, at least for me, the purpose is not to convince anyone, but to offer good idea to those who may be interested in them. In the end, I very much believe in free will, and free will is central not only in the moral, but also in the cognitive sphere. gpuccio
GP
We can freely pursue what is true or what is wrong. What is good or what is bad. We can freely lie, or fight for what we believe to be true. We can freely love or hate. And so on. I think I give the idea.
I realize that this may seem irritating, but I even caught myself with that. There are people, perhaps, who think that all of our actions are determined by some cause. It's the whole question of free-will. My point here is that I think a coherent philosophy, beginning with first principles, has to be in place. After that, the people that we talk with have to either understand, or better, accept our philosophy. If they have a bad philosophy, then I think the problem is to help them fix that. I think that has to happen before we can even get into the science. My philosophy is rooted in classical Western theism and is linked to my theological views. I am leaning more and more to the idea that it is not worth the effort to adopt "Dawkins philosophy/science" for the sake of trying to convince people, and that it may be more effective to start with the clash of philosophies and world-views rather than start with science (ID). Not sure, but I am leaning that way. Putting philosophy and theology first, and then using ID inferences to support that might work better. Silver Asiatic
GP
For me (because I assume full responsibility for that statement), but not in the sense that I consider that a subjective aspect. For me, that is an objective requirement of a good philosophy of science.
Right. Based on your philosophy and worldview it is objective. That is consistent and makes sense. Philosophically, you call some things "facts" and then you use those in your scientific reasoning. You have an overall understanding of reality. I'll suggest that you cannot really separate "everything else" of your philosophy from your scientific view. As I see it, they're all connected. This is especially true when you seek to talk about a designer or things like randomness or immaterial, natural, entities -- all of these things. This is where I agree that "ID is science" as long as "ID lines up with my philosophy of science". To me, that is consistent and reasonable (although whether the philosophy and definitions should be aligned could be debated). Someone like Dawkins will say "ID is not science" because he thinks that ID does not line-up with his philosophy of science. He has just defined ID out of the question. Dawkins will fail if he says: "My philosophy is consistent and rational and my science follows this". But then later he indicates that he will not accept conclusions that his own scientific philosophy will support. Then he's got a problem. I always thought that's what ID was trying to do. Use Dawkins' own worldview and his own claims - all the things he already accepts -- and show that ID is the most reasonable conclusion. It would all be based on his (or mainstream) science. I know some creationists who say ID is "dishonest" because the worldview is concealed, but I think ID is just trying to play by the rules of the game (consensus view) and show that there is evidence for Design even using mainstream evolutionary views. Silver Asiatic
Silver Asiatic:
At the risk of irritating you, I feel the need to repeat something continually through my response – and that is, almost everything you said was a philosophical discourse. I have been discussing on one of KF’s threads the objective foundation of philosophy, but after that (which is minimal) philosophy is almost entirely subjective. We can freely choose among options.
I disagree. My discourses here are rarely philosophical. Well, sometimes. But all my reasonings about ID detection, functional information, biology, functional information in biology, homologies, common descent, and so on, in practice most of what I discuss here is perfectly scientific, and in no way philosophical. Of course, as said, my science is always guided by my philosophy of science. I take full responsibility for both. And I fully disagree that "philosophy is almost entirely subjective". That's not true. There is much subjectivity in all human activities, including philosophy, science, art, and so on. But there is also a lot of objectivity in all those things. One thing is certainly true: "We can freely choose among options." Of course. In everything. We can freely pursue what is true or what is wrong. What is good or what is bad. We can freely lie, or fight for what we believe to be true. We can freely love or hate. And so on. I think I give the idea. Does that mean that truth, good, lies, love, are in no way objective? I don't believe that. But of course you can freely choose what to believe. And yes, this is a philosophical statement. gpuccio
Silver Asiatic: I have not the time now to answer all, but I want to clarify one point that is important, and that was not clear probably for my imprecision. When I say: "That’s why, as I have explained, I strive a lot to avoid, as much as humanly possible, any influence of my philosophy or theology on my scientific reasonings." I am not including in that statement philosophy of science. My mistake, I apologize, I should have specified it, but you cannot think of everything. Of course I believe that our philosophy of science can and must guide the way we do science. Probably, it seemed so obvious to me that I did not think of specifying it. What I meant was that our philosophy about everything else must not influence, as far as that is possible, our scientific reasoning. As I have said, there is good science and bad science, good philosophy of science and bad philosophy of science. One is responsible both for his science and for is philosophy of science. But of course we have a duty to do science according to our philosophy of science. What else should we do? However, even if of course there can be very different philosophies of science, some basic points should be very clear. I think that almost all who do good science would agree about the basic importance of facts in scientific reasoning. So, any philosophy of science, and related science, that does not put facts at the very center of scientific reasoning is a bad philosophy of science. For me (because I assume full responsibility for that statement), but not in the sense that I consider that a subjective aspect. For me, that is an objective requirement of a good philosophy of science. OK, more later. gpuccio
GP I think you're being inconsistent. That's one thing I'm trying to point out. You agree that your statement about science (and therefore your foundation for ID) is a philosophical position. However, you often state something like this:
That’s why, as I have explained, I strive a lot to avoid, as much as humanly possible, any influence of my philosophy or theology on my scientific reasonings.
But it is simply not possible to avoid your philosophical view since that view is the basis of all your understanding of science and your scientific reasoning. In fact, I would say it's unreasonable to insist that you're trying to avoid your philosophical view. Why would you do that? Your philosophy is the most important aspect of your science. Why conceal it as if you could do science without a philosophical starting point? At the risk of irritating you, I feel the need to repeat something continually through my response - and that is, almost everything you said was a philosophical discourse. I have been discussing on one of KF's threads the objective foundation of philosophy, but after that (which is minimal) philosophy is almost entirely subjective. We can freely choose among options.
Philosophy is not subjective, as science is not objective. They are different, but both are rather objective, with many subjective aspects.
I disagree here and I offered a long explanation in debating with atheists on KF's most recent thread. The only objective thing about philosophy is the starting point - that truth has a greater value than falsehood. We cannot affirm a value for falsehood. But after that, even the first principles of reason are not entirely objective. They must be chosen, for a reason. A person must decide to think rationally. For reasons of virtue which are inherent in the understanding of truth, we have an obligation to use reason. But this obligation is a matter of choice.
It is true that nobody can stop you, but it is equally true that it can be bad science, and everyone has a right to judge for himself if it is good science or bad science.
My repeated phrase here: That's a philosophical view. Secondly, you are appealing to consensus "everyone can judge". There are some cultures that forbid a Western approach to science. Their consensus will say that "mainstream science" is bad science. They have different goals and purposes in life. I think of indigenous cultures, for example, or some religions where they approach science differently.
Good science and good philosophy of science must be judged in the sacred intimacy of our own cosnciousness. we are fully responsible for our judgement, and we have the privilege and the duty to defend that judgement and share it with others, whether they agree or not, whether there is consensus about it or it is shared only by a minority. Because in the end truth is the measure of good science and of good philosophy. Nothing else.
In this case, truth follows from first principles. Science is not an arbiter of truth, it is only a method that follows from philosophy in order to gain understanding, for a reason. If a science follows logically from its first principles, then it is good science. I gave an example of a different kind of science where I could say that God is a cause. Or we could talk about Creation Science where the Bible establishes rules for science. Those are different first principles - different philosophical starting points. Creationism is perfectly legitimate philosophy and if science follows from it logically, then the science is "good science". We may have a reason to reject Creationist philosophy but that cannot be done on an entirely objective basis. We decide based on the priority we give to certain values. We want something, so we want a science that supports what we want. But people can want different things.
ID requires nothing like that. ID infers a process of design. A process of design requires a designer. There is nothing non natural in that. Therefore ID is science.
Again, you offer your philosophical view. In your view, a process of design requires a designer. That is philosophy. If a person accepts your philosophy, then they can accept your ID science. I think the more usual statement of ID is that "we can observe evidence of intelligence" in various things. What I have not seen is that "all intelligent outputs require a designer". That is a philosophical statement, not a scientific one. Science cannot establish that all intelligence necessarily comes from "a designer" or even what the term "a designer" means in this context. All science can do is say that something "looks like it came from a source that we have already classified as 'intelligence'". If that source is "a designer", we do not know.
Consensus is only an hisotrical accident. Sometimes there is consensus for good things many times for bad things. Consensus for bad science does not make it good science. Ever.
Again, these are philosophical concepts. Even to judge good science versus bad science requires a correlation with philosophical starting points. Again, there is no such thing as "good science" as if "science" exists as an independent agent. Science is a function of philosophical principles. If the science aligns with the principles, then it is coherent and rational (but even that is not required). But it is impossible to judge if science is good or bad without first accepting a philosophical basis. The idea that only material causes can be accepted in science is a perfectly valid limitation. To disagree with it and prefer another definition is a philosophical debate and it will come down to "what do we want to achieve with science"? There is nothing objective about that. Science is a tool used for a purpose and there is nothing that says "science must only have this purpose and no other". People choose one philosophy of science or another. There is no good or bad. There can be contradictory or irrational application of science -- where science conflicts with the stated philosophy. For example, if Dawkins said "science can only accept material causes" and then said later that "science has indicated that a multiverse exists outside of time, space and matter" - that would be contradictory. We could call that "bad science" because it is irrational. But even there, a person is not required, necessarily, to be entirely rational in all aspects of life. We are required to be honest and to tell the truth. But if Dawkins said, that he makes an exception for a multiverse, his science remains just as "good" as any. Science is not absolute truth. It's a collection of rules used for measurement, classification, experiment to arrive at understanding within a certain context.
Moreover, I could show, as I have done many times, the the word “natural” is wholly misleading. In the end, it just mean “what we accept according to our present worldview”. In that sense, any form of naturalism is the end of true science. Naturalism is a good example of bad philosophy of science.
Again, this is entirely a philosophical view. There is nothing wrong with a science that says "we only accept what accords with our worldview". That's a philosophical starting point. People may have a very good reason for believing that. Or not. So, all of their science will be "natural" in that sense. Again, there is no such thing as "true science". You are not the arbiter of such a thing. Even to say that "all science must follow strictly logical processes" is a philosophical bias. There can be scientific philosophies that accept non-logical conclusions and various paradoxical understandings.
And I know, that is not the consensus. I know that very well. But it is not “my own rule”. It is a strong philosophical belief, that I am ready to defend and share with anyone, and to which I will remain loyal to the end of my days, unlessof course I do not find some day some principle that is even better.
When it say that it is "your own rule" I mean it is a rule that you have chosen to accept. You could have chosen another, like the consensus view. That is what I would prefer for ID, that it accept the consensus view on what "natural" means and basically all the consensus rules of science. I would not like to have to say that "ID requires a different understanding of terms and of science, than the consensus does". But even if not, ID researchers are free to have their own philosophical starting points and defend them, as you would do. But as I said, I think the only aspect of philosophy that we are compelled to accept is the proto-first principles. Even there, a person must accept that thinking rationally is a duty. As I said, there can be philosophical systems that do not hold logic, analysis, and rational thought as the highest virtue. There can be other values more important to human life which would leave rational thought as a secondary value, and therefore not absolutely required in all cases. So, a contradictory scientific result would not be a problem in that philosophical view.
True of false that it may be, the hypothesis that cosnciousness is not always linked to a physical body like ours is a reasonable idea.
Yes, exactly. Science can tell us nothing about this. Your view would be reasonable as matched against your philosophy. Again, it depends if a person has a philosophical view that could accept such a notion. If the belief is that everything that exists is physical, then your point here would not be rational. The science would have nothing to do with it except to be consistent with one view or another.
For example, science can define if sometning has mass or not. Some entities in reality have mass, others don’t. This is a scientific statement.
I wouldn't call that a "definition". It is more like a classification. Science cannot define what "mass" is. There is no observation in nature that we can make to tell us that "this is the correct definition of mass". In fact, there could be a philosophical view that does not recognize mass as an independent thing that could be classified. But there is a consensus view that has defined mass as a characteristic. Then science observes things and classifies them to see if they share what that thing (mass) is or not. Silver Asiatic
Silver Asiatic: "For me if ID can be fully compatible with the science that Dawkins uses, then that’s powerful." But ID is fully compatible with the science that Dawkins uses. It's Dawkins who uses that science badly and defends wrong theories. It's Dawkins who refutes the good theories of ID because of ideological prejudices. We cannot do nothing about that. It's his presonal choice, and he is a free individual. But there is no reason at all to be influenced or conditioned by his bad scientific and philosophical behaviour. gpuccio
Silver Asiatic: OK, I disagree with you about many things. Not all. Let's see if I can explain my position. You quote my statement: "Science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview." And then you say that this is a philosophical view. And I absolutely agree. That was clearly a statement of my position about philosophy of science. Philosophy of science is phisolophy. I usually don't discuss my philosophy here, except of course my philosophy of science, which is absolutely pertinent to any scientific discussion. So yes, when I say that science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview, I am making a statement about philosophy of science. I also absolutely agree that "science cannot define itself or create its own limits". It's philosophy of science that must do that. Where I absolutely discgree with you is in the apparemt idea that philosophy of science is a completely subjective thing, and that everyone can "make his own rules". That is completely untrue. Philosophy is not subjective, as science is not objective. They are different, but both are rather objective, with many subjective aspects. There is good philosophy and bad philosophy, as there is good science and bad science. And, of course, there is bad philosophy of science. You say: "So, for example, if I wanted to do “my own science”, I could establish rules that I want. Nobody can stop me from that." It is true that nobody can stop you, but it is equally true that it can be bad science, and everyone has a right to judge for himself if it is good science or bad science. The same is true for philosophy of science. The really unbearable part of you discourse is when you equate science to consensus. This is a good example of bad philosophy of science. For me, of course. And for all those who waht to agree. There is no need that we are the majority. There is no nedd for consensus. Good science and good philosophy of science must be judged in the sacred intimacy of our own cosnciousness. we are fully responsible for our judgement, and we have the privilege and the duty to defend that judgement and share it with others, whether they agree or not, whether there is consensus about it or it is shared only by a minority. Because in the end truth is the measure of good science and of good philosophy. Nothing else. Consensus is only an hisotrical accident. Sometimes there is consensus for good thingsm many times for bad things. Consensus for bad science does not make it good science. Ever. Then you insist: "If ID requires a specific kind of science that allows for non-natural causes, for example, then I would not call ID a scientific project." ID requires nothing like that. ID infers a process of design. A process of design requires a designer. There is nothing non natural in that. Therefore ID is science. Moreover, I could show, as I have done many times, the the word "natural" is wholly misleading. In the end, it just mean "what we accept according to our present worldview". In that sense, any form of naturalism is the end of true science. Naturalism is a good example of bad philosophy of science. And I know, that is not the consensus. I know that very well. But it is not "my own rule". It is a strong philosophical belief, that I am ready to defend and share with anyone, and to which I will remain loyal to the end of my days, unlessof course I do not find some day some principle that is even better. Just a final note. You say: " Science cannot even tell us what “matter” is or what it means for something to be “immaterial”. Those are philosophical concepts." Correct. And I don't think that even philosophy has good answers, at present, about those things. Indeed, I think that "matter" and "immaterial" are vague concepts. But science can be more precise. For example, science can define if sometning has mass or not. Some entities in reality have mass, others don't. This is a scientific statement. in our discussion, I did not use the word "immaterial". That word was introduced by you. I just stated, answering your question, that it seemed reasonable that the biological designer(s) did not have a physical body like us, because otherwise there should be some observable trace of that fact. This implies no sophisticated philosophical theory about what matter is. I suggested that, as we know that matter exists but we don't know what it is, it is not unreasonable to think that it can exist without a physical body like ours. Not only it is not unreasonable, but indeed most people have believed exactly that for millennia, and even today, probably, most people believe that. I could add that observable facts like the reports of NDEs strongly suggest that hypothesis. True of false that it may be, the hypothesis that cosnciousness is not always linked to a physical body like ours is a reasonable idea. There is no reason at all to consider that idea "not natural" or to ban it a priori from any scientific theory or scenario. To do that is to do bad science and bad philosophy of science, driven by a personal philosophical committment that has no right to be imposed on others. gpuccio
GP & JAD Here is JAD's comment on the topic of ID as science:
Science itself rests on a number of empirically unprovable or metaphysical (philosophical) assumptions. For example: That we exist in a real special-temporal world– that the world (the cosmos) is not an illusion and we are not “brains in a vat” in some kind of Matrix like virtual reality.
That is right. All science requires an a priori metaphysical commitment. "Mainstream science" has accepted one particular view. But nobody can say that view, or any view is "true science". It comes down to the philosophical view of "what is reality"? Are there real distinctions between things or are those distinctions arbitrary? Western philosophy tells us one thing, but there are other philosophical views. Again, if ID is saying that "Dawkins is using the wrong kind of science", then that's a philosophical debate about what science should be. For me if ID can be fully compatible with the science that Dawkins uses, then that's powerful. In that case, I think it would be more reasonable to say that "ID is science" since it is using the exact same understanding of science that people like Dawkins use. Silver Asiatic
GP
Science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview.
As I was discussing with JAD, I have always argued that ID is a scientific project. But I am tending now to see it as a philosophical proposition. Your statement above is a philosophical view. You are giving a framework for what you think science should be. But science cannot define itself or create its own limits. Science cannot even tell us what "matter" is or what it means for something to be "immaterial". Those are philosophical concepts. Science also cannot tell us what causes are acceptable. Science cannot tell us that it should not have a commitment to a worldview. So, for example, if I wanted to do "my own science", I could establish rules that I want. Nobody can stop me from that. I could have a rule: "For any observation that cannot be explained by known natural causes, we must conclude that God directly created what we observed". There is nothing wrong with that if that is "my science". Of course, if I want to communicate I would have to convince people to believe in my philosophy of science. But that would have nothing to do with science itself, but rather my efforts to convince people of my philosophical view. Now, we could have what we call "Dawkins Science". I believe that's what a majority of biologists accept today. Again, it is perfectly legitimate. Dawkins and all others like him will claim "science can only accept natural causes, or material causes". So, they establish rules. Science cannot tell us if those rules are correct or not. It is only philosophy that says it. Then ID comes along, and IDists will say "ID is science". Here is where I disagree. Whenever we make a sweeping statement about "science" we are talking about "the consensus". If Dawkins is the consensus, then to claim "ID is science" means that it is perfectly compatible with Dawkins' science. If, however, the claim "ID is science" means "you have to accept our version of science to accept ID", then that's a mistake. Again, to claim something "is science" usually means it is the consensus definition of science. To redefine science in any way one wants to, is not a scientific project. It is a philosophical project. If ID requires a specific kind of science that allows for non-natural causes, for example, then I would not call ID a scientific project. With that, even if science accepted non-natural causes, I would still consider ID to be philosophical. ID uses scientific data, but the conclusions drawn are non-scientific. Only if ID stopped by stating "this is evidence of intelligence" - that would be science. But once the conversation moves to the idea that "where there is intelligence, there must be an intelligent designer" - that is philosophical. Science cannot even define what intelligence is. Those definitions are part of the rules of science that come from a philosophical view. For example, there could be a pantheistic view that believes that all intelligence emerges from a universal mind which is present in all of reality. So, evidence of intelligence would not mean that there is an Intelligent Designer. It would only mean that the intelligence came from the spirit of the universe which is an impersonal spiritual force and is not a "designer" in that sense. Silver Asiatic
To all (specially UB): One interesting aspect of the NF-kB system discussed here is that, IMO, it can be seen as a polymorphic semiotic system. Let's consider the core of the system: the NF-kB dimers in the cytoplasm, their inhibition by IkB proteins, and their activation by either the canonical or non canonical pathway, with the cooperation of the ubiquitin system. IOWs the central part of the system. This part is certainly not simple, and has its articulations, for example the different kinds of dimers that can be activated. However, when looking at the whole system, this part is relatively simple, and it uses a limited number of proteins. In a sense, we can say that there is a basic mechanism that works here, with some important variations. Well, like in all the many pathways that carry a signal from the membrane to the nucleus, even in this case we can consider the intermediate pathway (the central core just described) as a semiotic structure: indeed, it connects symbolically a signal to a response. The signal and the response have no direct biochemical association: they are separated, they do not interact directly, there is no direct biochemical law that derives the response from the signal. It's the specific configuration of the central core of the pathway that translates the signal, semiotically coupling it to the response. So, that core can be considered as a semiotic operator that given the operand (the signal) produces the result (the response at nuclear level). But in this specific case there is something more: the operator is able to connect multiple operands to multiple specific results, using an essential bulk of tools. IOWs, the NF-kB system behaves as a multiple semiotic operator, or if we want as a polymorphic semiotic operator. Now, that is not an exclusive property of this system. Many membrane-nucleus pathways behave, in some measure, in the same way. Biological signals and their associations are never simple and clear-cut. But I would say that in the NF-kB system this polymorphic attitude reaches its apotheosis. There are many reasons for that: a) The system is practically universal: it works in almos all types of cells in the organism b) There is a real multitude of signals and receptors, of very different types. Suffice it to mention cytokine stimuli (TNF, IL1), bacterial or viral components (LPS), specific antigen recognition (BCR, TCR). Moreover, each of these stimuli is connected to the central core by a specific, often very complex, pathway (see the CBM signalosome, for example). c) There is a real multitude of responses, in different cells and in the same cell type in different contexts. Even if most of them are in some way related to inflammation, innate immune response or adaptive immune response, there are also responses about cell differentiation (neurons). In B and T cells, for example, the system is invlolved both in the differentiation of B and T cells and in the immune response of mature B and T cells after antigen recognition. This is a really amazing flexibility and polymorphism. A complex semiotic system that implements, with remarkable efficiency, a lot of different functions. This is engineering and programming of the highest quality. gpuccio
Sven Mil: "Is there an explanation for this disagreement?" Thank you for the comment and welcome to the discussion. Thank you also for addressing an interesting and specific technical point. It is not really a disagreement, probably only a different perspective. Reseacrhers interested in possible homologies (IOWs, in finding orthologs or paralogs for some gene) often use very sensitive algorithms. They find homologies that are often very weak, or maybe not real. Or they may look at structural homologies, that are not evident at the sequence level. My point of view is different. In order to debate ID in biology, I am only interested in definite homologies, possibly very high homologies conserved for a long evolutionary time. My aim is specificity, not sensitivity. Moreover, as I accept CD (as discussed in detail in this thread) I have no interest in denying possible weak homologies. I just ignore them, because they are not relevant to my argument. That's why I always measure homology differences, not absolute homologies. I want to find information jumps at definite evolutionari times. Another possibility for the different result is that I have not blasted the right protein form. For brevity (it was nort really an important aspect of my discussion) I have not blaste all possible forms of sigma factors against eukaryotic factor TFIIB. I have just blasted sigma 70 from E. coli. Maybe a more complete search could detect some higher homology. OK, as you have raised the question, I have just checked the literature reference in the Wikipedia page: The sigma enigma: Bacterial sigma factors, archaeal TFB and eukaryotic TFIIB are homologs https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4581349/
Abstract Structural comparisons of initiating RNA polymerase complexes and structure-based amino acid sequence alignments of general transcription initiation factors (eukaryotic TFIIB, archaeal TFB and bacterial sigma factors) show that these proteins are homologs. TFIIB and TFB each have two-five-helix cyclin-like repeats (CLRs) that include a C-terminal helix-turn-helix (HTH) motif (CLR/HTH domains). Four homologous HTH motifs are present in bacterial sigma factors that are relics of CLR/HTH domains. Sequence similarities clarify models for sigma factor and TFB/TFIIB evolution and function and suggest models for promoter evolution. Commitment to alternate modes for transcription initiation appears to be a major driver of the divergence of bacteria and archaea.
As you can see from the abstract, they took into consideration structure similarities, not only sequence alignments. Maybe you can have a look at the whole article. Now I don't think I have the time. gpuccio
Interesting conversation here, 'sigma factor “is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB”.' "I have blasted sigma 70 from E. coli with human TFIIB and found no detectable homology (E value 1.4). So, there seems to be little conservation here." Is there an explanation for this disagreement? Sven Mil
John_a_designer at #177: Exactly! That's why I say that ID is fully scientific. Science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview. That reality must behave according to our religious convictions is an a priori wordview. That's why, as I have explained, I strive a lot to avoid, as much as humanly possible, any influence of my philosophy or theology on my scientific reasonings. That realy must behave according to our atheistic or materialistic convictions is an a priori wordview. That's why our knid interlocutors should strive a lot to avoid, as much as humanly possible, any influence of their philosophy or atheology on their scientific reasonings. The simple fact is that ID theory, reasoning from facts in a perfectly scientific way, infers a process of design for the origin of biological objects. Now, our interlocutors can debate if our arguments are right or wrong from a scientific point of view. That's part of the scientific debate. But the simple idea that we have no other evidence of the existence of conscious agent, for example, at the time of OOL is not enough. Because we have no evidence of the contrary, too. The simple idea that non physical conscious agents cannot exist is not enough, because it is only a specific philosophical conviction. Od course non physical conscious agents can exist. We don't even know what consciousness is, least of all how it works and what is necessary for its existence. My point is: the design inference is real and perfectly scientific. All arguments about things that we don't know are no reason to ignore that scientific inference. They are certainly valod reasons to pursue any further scientific investigation to increase our knowledge about those things. That's perfectly legitimate. For example, I am convinced that our rapidly growing understanding of biology will certainly help to understand how the design was implemented at various times. And, even if ID is not a theory of consciousness, there is no doubt that future theories of consciousness can integrate ID and its results. For example, much can be done to understand better if a quantum interface between conscious representations and physical events is working in us humans, as many have proposed and as I believe. That same model could be applied to biological design in natural history. And of course, philosophy, physics, biophysics and what else can certainly contribute to a better understanding of consciousness, and of its role in reality. A better study of common events like NDEs can certainly contribute to understand what consciousness is. I would like to repeat hear a statment that I have made in the discussion with Silver Asiatic, that sums up well my position about science: Science can, and must, investigate, everything that can be observed in reality. And, from observed things, infer ever better models of reality. Given that very broad definition, there are no other limitations. gpuccio
A few years ago here at UD one of our regular interlocutors who was arguing with me about the ID explanation for origin of life pointed out:
the inference from that evidence to intelligence being involved is really indirect. You don’t have any other evidence for the existence of an intelligence during the times it would need to be around.
I responded, “We have absolutely no evidence as to how first self-replicating living cell originated abiogenetically (from non-life). So following your arbitrarily made-up standard that’s not a logical possibility, so we shouldn’t even consider it... As the saying goes, ‘sauce for the goose is sauce for the gander.’” When you argue that life originated by some “mindless natural process,” that is not an explanation how. Life is not presently coming into existence by abiogenetically, so if such process existed in the past it no longer exists in the present. Therefore you are committing the same error which you accuse ID’ists of committing. That’s a double standard, is it not? This kind of reasoning on the part of materialists also reveals that they don’t really have any strong arguments based on reason, logic and the evidence. If they do, why are they holding back? john_a_designer
OLV @139: The paper you cited doesn't seem to support Behe's polar bear argument. PavelU
JAD @173, Yes, that makes much sense. PeterA
Just to clarify, it’s not my view that ID doesn’t raise some very legitimate scientific questions. Behe’s discovery of irreducible complexity (IC) raises some important questions. For example, in his book Darwin’s Black Box, Michael Behe asks,
“Might there be an as yet undiscovered natural process that would explain biochemical complexity? No one would be foolish enough to categorically deny the possibility. Nonetheless we can say that if there is such a process, no one has a clue how it would work. Further it would go against all human experience, like postulating that a natural process might explain computers… In the face of the massive evidence we do have for biochemical design, ignoring the evidence in the name of a phantom process would be to play the role of detective who ignore the elephant.” (p. 203-204)
Basically Behe is asking, if biochemical complexity (irreducible complexity) evolved by some natural process x, how did it evolve? That is a perfectly legitimate scientific question. Notice that even though in DBB Behe was criticizing Neo-Darwinism he is not ruling out a priori some other mindless natural evolutionary process, “x”, might be able to explain IC. Behe is simply claiming that at the present there is no known natural process that can explain how irreducibly complex mechanisms and processes originated. If he and other ID’ist are categorically wrong then our critics need to provide the step-by-step-by-step empirical explanation of how they originated, not just speculation and wishful thinking. Unfortunately our regular interlocutors seem to only be able to provide the latter not the former. Behe made another point which is worth keeping in mind.
“In the abstract, it might be tempting to imagine that irreducible complexity simply requires multiple simultaneous mutations - that evolution might be far chancier than we thought, but still possible. Such an appeal to brute luck can never be refuted... Luck is metaphysical speculation; scientific explanations invoke causes.”
In other words, a strongly held metaphysical belief is not a scientific explanation. So why does Neo-Darwinism persist? I believe it is because of its a-priori ideological or philosophical fit with naturalistic or materialistic world views. Human being are hard wired to believe in something-- anything to explain or make some sense of our existence. Unfortunately we also have a strong tendency to believe in a lot of untrue things. On the other hand, if IC is the result of design, it has to answer the question of how was the design instantiated. If ID wants to have a place at the table it has to find a way to answer questions like that. Once again, one of the primary things science is about is answering the “how” questions. Or as another example, ID’ists argue that the so-called Cambrian explosion can be better explained by an infusion of design. Okay that is possible. (Of course, I whole heartedly agree because I am very sympathetic to the concept of ID.) But how was the design infused to cause a sudden diversification of body plans? Did the “designer” tinker with the genomes of simpler life forms or were they specially created as some creationists would argue? (The so-called interventionist view.) Or were the new body plans somehow pre-programmed into their progenitors genomes (so-called front loading.) How do you begin to answer such questions that have happened in the distant past? At least the Neo-Darwinists have the pretense of an explanation. Can we get them to abandon their theory by declaring it impossible? Isn’t it at least possible, as Behe acknowledges, that there could be some other unknown natural explanation “x.” Is saying something is metaphysically possible a scientific explanation? The goal of science is to find some kind of provisional proof or compelling evidence. Why for example was the Large Hadron Collider built at the cost of billions of dollars (how much was it in euros?) Obviously it was because in science mere possibility is not the end of the line. The ultimate quest of science is truth and knowledge. Of course, we need to concede that science will never be able to explain everything. john_a_designer
Peter A Final edit: "As far as philosophical naturalism goes, here is a summary of reasons why I do not think philosophical naturalism is sufficient as a world view." That is what I meant to say and luckily corrected before the edit function timed out. Hopefully that makes sense now. john_a_designer
John_a_designer at #169: I agree with almost everything that you say, except of course that ID is not science. For me, it is science without any doubt. It has, of course, important philosophical implications, like many other important scientific theories (Big Bang, Quantum mechanics, Relativity, Dark energy, and so on). gpuccio
John_a_designer at #166: I agree with what you say about Dawkins. He is probably honest enougheven if completely wrong, but he is really obsessed by his antireligious crusade. The book you mention is "Signature in the Cell" by Stephen Meyer. gpuccio
JAD @169: “As far as philosophical naturalism goes, here is a summary of reasons why I do not think philosophical naturalism is insufficient as a world view” “do not think” “is insufficient” Is that the combination you wanted to express? I’m not sure if I understood it. PeterA
SA, [The following is something I posted on UD before which defines my position about I.D. Please note, however, I see it nothing more than just a personal opinion and I am not stating it in an attempt to change anyone’s mind. Indeed it remains tentative and subject to change but over the years I have seen no reason to change it.] Even though I think I.D. provokes some interesting questions I am actually not an I.D. proponent in the same sense that several other commenters here are. I don’t think I.D. is “science” (the empirical study of the natural world) any more than naturalism/materialism is science. So questions from materialists, like “who designed the designer,” are not scientific questions; they are philosophical and/or theological questions. However, many of the questions have philosophical/theological answers. For example, the theist would answer the question, “who designed the designer,” by arguing that the designer (God) always existed. The materialist can’t honestly reject that explanation because historically materialism has believed that the universe has always existed. Presently they are trying to shoehorn the multiverse into the discussion to get around the problem of the Big-Bang. Of course, this is a problem because there is absolutely no scientific evidence for the existence of a multiverse. In other words, it is just an arbitrary ad hoc explanation used in an attempt to try to wiggle out of a legitimate philosophical question. However, this is not to say that science can’t provoke some important philosophical and theological questions-- questions which at present can’t be answered scientifically. For example: Scientifically it appears the universe is about 14.5 billion years old. Who or what caused the universe to come into existence? If it was “a what”-- just natural causes-- how do we know that? Why does the universe appear to exhibit teleology, or design and purpose? In other words, what is the explanation for the universes so-called fine tuning? How did chemistry create the code in DNA or RNA? How dose mindless matter “create” consciousness and mind? If consciousness and mind are “just an appearance” how do we know that? These are questions that arise out of science which are philosophical and/or theological questions. Is it possible that they could have scientific explanations? Possibly. But even if someday some of them could be answered scientifically that doesn’t make them at present illegitimate philosophical/theological questions, because we don’t know if they have, or ever could have, scientific answers. As far as philosophical naturalism goes, here is a summary of reasons why I do not think philosophical naturalism is sufficient as a world view. Naturalism (or materialism) cannot provide:
*1. An ultimate explanation for existence. Why does anything at all exist? *2. An explanation for the nature of existence. Why does the universe appear to exhibit teleology, or Design and Purpose? *3. A sufficient foundation for truth, knowledge and meaning. *4. A sufficient foundation for moral values and obligations. *5. An explanation for what Aristotle called form and what we call information. Specifically how did chemistry create the code in DNA or RNA? *6. An explanation for mind and consciousness. How dose mindless matter “create” consciousness and mind? If consciousness and mind are just an appearance how do we know that? *7. An explanation for the apparently innate belief in the spiritual-- a belief in God or gods, and the desire for immortality and transcendence.
Of course the atheistic naturalist will dismiss numbers 6 or 7 as illusions and make up a just-so story to explain them away. But how do they know they are illusions? The truth is they really don’t know and they certainly cannot prove that they are. They just believe. How ironic to be an atheist/naturalist/ materialist you must believe a lot-- well actually everything-- on the basis of faith. john_a_designer
JAD
Bonus question: Ben Stein was made famous by one word. Does anyone know what that one word was? Anyone?
The kid in the movie - can't remember his name. Travis?
As whether or not ID is science. I am skeptical of the claim that Darwinism in the macro-evolutionary sense is science or that SETI is science (what empirical evidence is there that ETI’s exist?) How does NS + RV cause macro-evolutionary change? Science, needs to answer the question of how. Just saying “oh somehow it could” with any airy wave of the hand is not a sufficient explanation. But that applies for people on both sides of the debate.
It's a great point. I have argued for many years that ID is science. By that, I mean "the same science as Dawkins uses". It is my belief that 90% of the scientists agree with Dawkins' view of science - it's the mainstream view. I also believed that ID was a subterfuge - an apologetic for the existence of God. I don't see anything wrong with that. ID was going to use the exact same science that Dawkins uses, and then show that there is evidence of intelligent design. The method for doing that is to show that proposed natural mechanisms (RM + NS) cannot produce the observed effects. Intelligence can produce them, so Intelligence is the best, most probable inference. However, what I learned from many IDists over the years (GP pointed it out to me just previously) is that to accept ID, one needs a different science than what Dawkins uses. I find that to be a big problem. If, in order to accept ID, a person first needs "a different kind of science" than the normal, mainstream science of Dawkins, then there's no reason to start talking about ID first. Instead, one should start to convince everyone that a different kind of science should be used throughout the world. Because for me, Dawkins' version of science is fine. He just does what mainstream science does. They look at observations, collect data, propose causes. The first problem is that Dawkins' mechanisms cannot produce the observed effects. So, even on his own terms, the science fails. However, when Dawkins says that science can only accept material causes, that doesn't make a lot of sense - as you have pointed out. Additionally, he's talking about a philosophical view. In that case, it is one philosophy versus another. The philosophy of ID vs Dawkins' philosophical view. We can't speak about science at that point. So, I hate to admit it because so many of my opponents over the years said this and I disagreed, but I do now accept that ID has always been a game to introduce God into the closed world of materialistic science. The difference in my view now is that I don't see anything wrong with that game. Why not try to put God in science? What's wrong with that? If the only way to do this is to trick materialist scientists using their own words, concepts and reasoning, again - what's wrong with that? Dishonest? I don't think so. The motive for using a certain methodology (ID in this case) has no bearing on what the methodology shows. In the same way, it doesn't matter what belief an evolutionist has, they have to show that the observations can be explained from their theory. If, however, ID requires an entirely different science and philosophical view (that is possible also), then I don't really see much need for the discussion on whether ID is a science or not. Why not just start with the idea that God exists, and then use ID observations to support that view? I don't see why that is a problem. If IDists are saying "we don't accept mainstream science", then why appeal to mainstream science for credibility? Just create your own ID-science. But for me, I'm a religious believer with philosophical reasons for believing in God (as the best inference from facts and far more rational than atheism) so instead of trying to prove to everyone that we need a new science, I'd just start with God and then do science from that basis. That's the way it would be if ID is not science. If, however, ID is science, for me that means "ID is the same science that Dawkins and all mainstream scientists use". The inferences from ID can be shown using exactly the same data and observations that Dawkins uses. For me, that would give ID a lot more value. Silver Asiatic
GP
But it is true that ID is the first scientifc way to detect something that only conciousness can do: generate complex functional information.
What I have been doing is questioning what ID can or cannot do and even questioning scientific assumptions along the lines of the ideas you've posted. You have explained your views on design and how consciousness is involved and even on whether the actions of conscious mind can be considered "creative acts", as well as how we evaluate immaterial entities. I have always argued that ID is a scientific project but I could reconsider that. ID does not need to be scientific to have value. I'll respond to JAD in the next post with some thoughts that I question myself on and just respond to his feedback, but your definitions of science and ID will also be included in my considerations. Silver Asiatic
Gp @ #156,
To be fair to Dawkins, I don’t think that he assumes that “design is impossible”. On the contrary, he is one of the few who admit that design could be a scientific explanation. He just does not accept it as a valid scientific explanation. That is epistemologically correct, even if of course completely wrong in the essence.
Indeed, here is another stunning admission by Richard Dawkins: https://www.youtube.com/watch?v=BoncJBrrdQ8 Dawkins concedes that (because nobody knows) first life on earth could have been intelligently designed-- as long as it was an ET intelligence not an eternally existing transcendent Mind (God.) Of course other atheists have admitted the same thing. See the following article which refers to a paper written by Francis Crick and British chemist Leslie Orgel. https://blogs.scientificamerican.com/guest-blog/the-origins-of-directed-panspermia/ I believe it was Crick and Orgel who coined the term directed panspermia. To be fair I think Dawkins later tried to walk back his position. Maybe Crick and Orgel did as well. But the point remains, until you prove how life first originated by mindless, purposeless “natural causes” intelligent design is a logical possibility-- a very viable possibility. Ironically, in the Ben Stein interview Dawkins said that if life were intelligently designed (by space aliens) the scientific research may be able to discover their signature. Didn’t someone write a book about the origin of life with the word signature in the title? Who was that? I wonder if he picked up the idea from Dawkins. Does anyone know? Bonus question: Ben Stein was made famous by one word. Does anyone know what that one word was? Anyone? john_a_designer
Silver Asiatic: Theory of consciousness is a fascinating issue. A philosophical issue which, like all philosophical issues, can certainly use some scientific findings. I have my ideas about theory of consciousness, and sometimes I have discussed some of them here. But ID is not a theory of consciousness. But it is true that ID is the first scientifc way to detect something that only conciousness can do: generate complex functional information. In this sense, the results of ID are certainly important to any theory of consciousness. The simple fact that there is something that only consciousness can do, and that there is a scientific way to detect it, is certainly important. It also tells us that consciousness can do things that no non comscious algorithm, however intelligent or complex, can do. I usually say that some properties of conscious experiences, like the experience of understanding meaning and of feeling purposes, are the best rationale to explain why conscious agents can generate complex functional information while non conscious systems cannot. But again, ID is not a theory of consciousness. All spheres of human cognition are interrelated: religion, philosophy, science, art, everything. But each of those things a specificity. ID theory will probably be, in the future, part of a theory of consciousness, if and when we can develop a scientific approach to it. But at present it is only a theory about how to detect a specific product of consciousness, complex functional information, in material objects. Jeffrey Schwartz and Mario Beauregard are neuroscientists who have dealed brilliantly with the problem of consciousness. the spiritual brain is a very good book. Chalmers is a philosopher who has given us a precious intuition with his concept of the hard problem of consciousness. None of those approaches, however, is even near to understand anything about the "origin" of cosnciousness. Least of all ID. I am absolutely certain that consciousness is in essence immaterial. But that is my philosophical conviction. the best scienctific evidence that I can imagine about that are NDEs, and they are not related to ID theory. gpuccio
JAD
Science itself rests on a number of empirically unprovable or metaphysical (philosophical) assumptions.
Agreed. Science does not stand alone as a self-evident process. It is dependent upon philosophical assumptions. Dawkins has his own assumptions. If he said, for example, that science can only accept material causes for all of reality, that is just his philosophical view. If ID says that science can accept immaterial causes, then it is different science. A person might also say that science must accept that God exists. That's a philosophical starting point. In the end, people who do science are carrying out a philosophical project. If a person is willing to do enough philosophy to carry out the project of science, I believe they have the responsibility to carry the philosophy farther than science. The philosophical questions go beyond simply what causes we can accept. But people like Dawkins and others do not accept this. They think that science simply has one set of rules, and they claim to be the ones following the true scientific rules, as if those rules always existed. Some IDists have tried to convince the world that ID is just following the normal, accepted rules of science and that people do not need to accept a new kind of science in order to accept ID conclusions. Others will say that mainstream science itself is incorrect and that people need a different kind of science in order to understand ID. I think ID will even work with Dawkins' version of science. He may say that "only material causes" can be considered. So, we observe intelligence and so some material cause created the intelligent output? The question for Dawkins would be what material cause creates intelligent outputs? Silver Asiatic
"CSI is a reliable indicator of design" --- William Dembski "it is CSI on which David Chalmers hopes to base a comprehensive theory of human consciousness." -- William Dembski https://www.asa3.org/ASA/PSCF/1997/PSCF9-97Dembski.html Silver Asiatic
SA: I have read ID researchers who have spoken about the irreducible quality of consciousness as evidence of design. GP: Who? Where? As far as I know, complex specified information (or complex functional information) in objects has always been considered the mark of design. Dembski, Behe, Abel, Meyer, Berlinski, and so on.
ID and Neuroscience https://uncommondesc.wpengine.com/intelligent-design/id-and-neuroscience/ My good friend and colleague Jeffrey Schwartz (along with Mario Beauregard and Henry Stapp) has just published a paper in the Philosophical Transactions of the Royal Society that challenges the materialism endemic to so much of contemporary neuroscience. By contrast, it argues for the irreducibility of mind (and therefore intelligence) to material mechanisms. William Dembski
Silver Asiatic
SA, Science itself rests on a number of empirically unprovable or metaphysical (philosophical) assumptions. For example: That we exist in a real special-temporal world-- that the world (the cosmos) is not an illusion and we are not “brains in a vat” in some kind of Matrix like virtual reality. That the laws of nature are universal throughout time and space. Or that there are really causal connections between things and things, people and things. David Hume famously argued that that wasn’t self-evidently true. Indeed, in some cases it isn’t. Sometime there is correlation without causation or “just coincidence.” Again, notice the logic Dawkins wants us to accept. He wants us to implicitly accept his premise that that living things only have the appearance of being designed. But how do we know that premise is true? Is it self-evidently true? I think not. Why can’t it be true that living things appear to be designed for a purpose because they really have been designed for a purpose? Is that logically impossible? Metaphysically impossible? Scientifically impossible? If one cannot answer those questions then design cannot be eliminated from consideration or the discussion. Therefore, it is a legitimate inference from the empirical (scientific) evidence. I have said this here before, the burden of proof is on those who believe that some mindless, purposeless process can “create” a planned and purposeful (teleological) self-replicating system capable of evolving further though purposeless mindless process (at least until it “creates” something purposeful, because, according to Dawkins, living things appear to be purposeful.) Frankly, this is something our regular interlocutors consistently and persistently fail to do. As a theist I do not claim I can prove (at least in an absolute sense) that my world view is true. Can naturalists/ materialists prove that their world view is true? Personally I believe that all worldviews rest on unprovable assumptions. No one can prove that their world view is true. Is that true of naturalism/ materialism? If it can someone with that world view needs to step forward and provide the proof. As whether or not ID is science. I am skeptical of the claim that Darwinism in the macro-evolutionary sense is science or that SETI is science (what empirical evidence is there that ETI’s exist?) How does NS + RV cause macro-evolutionary change? Science, needs to answer the question of how. Just saying “oh somehow it could” with any airy wave of the hand is not a sufficient explanation. But that applies for people on both sides of the debate. john_a_designer
Silver Asiatic at #157: OK, I apologize too. Multiple question marks are not intended as an offense, only as an expression of true amazement. Some other statements may have been a little more "heated", as you say. Let's try to be more detached. :) I have just finished commenting on your statements. Please, forgive any possible question marks or tones. My purpose is always, however, to clarify. I am afraid that Egnor and BA are not exactly my main reference for ID theory. I always quote my main references: Dembski (with whom, however, I have sometimes a few problems, but whose genius and importance for ID theory cannot be overestimated) Behe, with whom I agree (almost) always. Abel, who has given a few precious intuitions, at least to me. Berlinsky, who has entertained me a lot with creative and funny thoughts. Meyer, who has done very good work about OOL and the Cambrian explosion. And, of course, others. Including many friends here. Let me quote at least KF and UB for the many precious contributions, but of course there are a lot more, and I hope nobody feels excluded: it would be a big work to give a coherent list. gpuccio
Silver Asiatic at #152: I wasn’t “playing” with it. I was helping you clarify your statement. Well, I hope I have clarified it. Thank you for the help.
Again, normally IDists would not say that science can Directly investigate, evaluate, analyze, measure or describe immaterial entities. You seem to disagree with that.
Well, it seems that I have not clarified enough. Please, read again what I have written. Here are some more clues: 1) "investigate, evaluate, analyze, measure or describe" are probably too many different words. I quote myself: "But science tries to explain facts building theories (maps of reality). Those theories need not include only what is observable. They just need to explain observed facts. For example, most scientific theories are based on mathematics, which is not something observable. Another example. Most theories in empirical science are about possible relationships of cause and effect. But the relarionship of cause and effect is not something that can be observed. My error was probably to use the word “investigate”, which was ambiguous enought to allow you to play with it." So, again. Science starts with facts: what can be observed. "Measures" are only made on what can be observed. I suppose that all your fancy words can apply to our interaction with facts: - When we gather facts and observe their properties, it can be said, I suppose, that we are "investigating" facts, and "analyzing" them. And "eveluating" them or "describing" them. And of course taking measures is part of observing facts. - When we build theories to explain observed facts, not all those terms apply. For example, let's say that we hypothesize a cause and effect relationship. That is part of our theory, but we don't take measures of the cause-effect relationship. At most, we infer it from the measures we have taken of facts. But in a wide sense building a theory can be considered an evaluation, certainly it is a form of investigation. I have said clearly that we can use any possible concept in our theories, provided that the purpose is to explain facts. We use the cause-effect relationship, we use complex numbers in quantum mechanics, we can in principle use the concept of God, if useful. Or of immaterial entities. That does not mean that we can measure those things, or have further information about them except for what can be reasonably inferred from facts. That should be clear, but I don't know why I will not be suprised if again you don't understand.
Evaluation is not the gathering of facts. Collecting facts comes from observation, measurement, or investigation. Evaluation can create some facts (such as logical conclusions) but in science it all must start with observation. After that, we can evaluate. To infer is to draw a logical conclusion from observations and evaluation.
As you like. As said, it's not a problem about words. You want to limit "evaluation" in some, not very clear to me, way, be my guest. I will simply avoid the word with you. But please, note that logical conclusions are not facts. If you insist on that kind of epistemology, we cannot really communicate.
As I have heard other ID theorists state, ID cannot observe anything about an immaterial designer or designers. I think you disagree with this.
No. Why should I? Of course if a thing is immaterial it cannot be "observed". The only exception is our personal consciousness, that each of us observes directly, intuitively. I have only said that we can use the concept of immaterial entoities in our theories, and that we can make inferences about the designer from observed facts, be he material or immaterial.
The only thing ID attempts to do is show that there is evidence of Intelligence at work.
Of intelligent designers.
The effects that we observe in nature could have been produced by millions of designers, each one of which has less intelligence than a human being, but collectively create design in nature.
I absolutely disagree. ATP synthase could never have been designed by a crowd of stupid designers. It's the first time I hear such a silly idea.
If you are speaking about a designer that exists outside of space and time, then we do not have any experience with that.
I have never said that. I have said many times that the designer acts in space and time. Where he exists, I really don't know. Have you some information about that?
We can observe various effects, but not the entity itself.
That's right. Like dark energy or dark matter. As for that, we cannot even observe conscious representations in anyone else except us, but still we very much base our science and map of reality on their effects and the inference tha thy exist.
It seemed that you disagree with this and believe instead that science can directly observe an immaterial designer (or any immaterial entity) that produces effects in reality.
This is only your unwarranted misinterpretation. I have said many times that science can directly observe some effects and infer a designer, maybe immaterial. It's exactly the other way round. gpuccio
Richard Dawkins’ books should be in the “cheap philosophy” section of bookstores. But instead they have them in the Science section. Specially after Professor Denis Noble has discredited them. Bizarre. jawa
GP
??? Again, I can’t follow you. Who is “we”? I am not aware that ID, especially in its biological form, but probably also in the cosmological form, is inferring anything about “the generation of consciousness”. Why do you say that?
Your use of multiple question-marks and the personal digs ("even you can understand") indicate to me that this conversation is getting too heated. You apologized previously, so thank you. I'll also apologize for the tone of my remarks. You asked about ID and consciousness:
Yet the adequacy of matter to generate agency (or apparent agency) is fundamental to both the problem of consciousness and the problem of the origins of biological complexity. If immaterial explanations are necessary to explain the agency inherent to the mind, then the view that immaterial explanations are necessary to explain the agency apparent in living things gains considerable traction. https://evolutionnews.org/2008/12/consciousness_and_intelligent/
Michael Egnor writes about consciousness as evidence supporting ID. I think here, BornAgain77 often posts resources that support this concept. I understand that your interest is in biological ID, and therefore limited to biological designer or designers. You answered my questions adequately. Again, I appreciate your comments and I apologize for any misunderstandings that may have arisen in this conversation. Silver Asiatic
John_a_designer at #150: I agree with what you say. I just want to clarify that: 1) IMO Dawkin's biological arguments are very bad, but at least they are a good incarnation of true neo-darwinism, thereofre easy to confute. In that sense, he is better than many post-post-neo-darwinists, whose thoughts are so ethereal that you cannot even catch them! :) 2) On the contrary, Dawkin's philosohical arguments are arrogant, superficail and ignorant. Unbearable. He should stick to being a bad thinker about biology. 3) To be fair to Dawkins, I don't think that he assumes that "design is impossible". On the contrary, he is one of the few who admit that design could be a scientific explanation. He just does not accept it as a valid scientific explanation. That is epistemologically correct, even if of course completely wrong in the essence. gpuccio
JAD
ID, on the other hand, is at the very least a philosophical inference from the study of nature itself.
It's a complicated issue and I can see where you are going with this. At the same time, I think many prominent IDists will say that ID is not a philosophical inference. It's a scientific inference from what science already knows about the power of intelligence. So, something is observed that appears to be the product of intelligent design, then science evaluates the probability that it came from natural causes. If that probability is too remote, intelligent design becomes the best answer since we know that intelligence can design things like that which has been observed. On the other hand, with your view, there are different philosophical starting points for both ID and Dawkins. So, depending on what we mean it may be correct to say that ID is really a philosophical inference. It's a different philosophy of science than that of Dawkins. I think Dembski and Meyer would disagree with this. They have attempted to show that ID uses exactly the same science as Dawkins does. Silver Asiatic
Jawa at #149: Maybe translucent OPs. :) gpuccio
Silver Asiatic at #138: Let's see your last statements.
As above, the designer we refer to in ID is the designer of the universe, not merely of biological information.
That's not correct. As said, the inference of a designer for the universe, and the inference of a biological designer are both part of ID, but they are different and use completely different observed facts. Therefore, even if both are correct (which I do believe), there is no need that the designer of the universe is the same designer as the designer of biological information. I don't follow your logic.
We infer something about the generation of consciousness.
??? Again, I can't follow you. Who is "we"? I am not aware that ID, especially in its biological form, but probably also in the cosmological form, is inferring anything about "the generation of consciousness". Why do you say that?
In fact, the immaterial quality of consciousness is evidence in support of ID.
No. Big epistemological errors here. Consciousness is a fact, because we can directly observe it. Being a fact, anyone can use its existence as evidence for what one likes. But "the immaterial quality of consciousness" is a theory, not a fact. It's a theory that I accept in my worldview and philosophy, but I would not say that we have incontrovertible scientific evidence for it. Maybe strong scientific evidence, at best. But the important point is: a theory is not a fact. It is never evidence of anything. A theory, however good, needs the support of facts as evidence. it is not evidence for other theories. At most, it is more or less compatible with them.
We look for the origin of that which we can observe.
Correct, and as consciousness can be observed, it is perfectly reasonable to look for some scientific theory that explains its origin. But that theory is not ID. As I have said, ID is not a theory about the origin of consciousness. It is a theory that says that conscious agents are the orign of designed objects. I believe that you can see the difference.
Mainstream evolution already assumes that consciousness is an evolutionary development.
Mainstream evolution assumes a lot of things. Most of them are wrong. And so?
I have read ID researchers who have spoken about the irreducible quality of consciousness as evidence of design.
Who? Where? As far as I know, complex specified information (or complex functional information) in objects has always been considered the mark of design. Dembski, Behe, Abel, Meyer, Berlinski, and so on.
Consciousness separates humans from non-human animals.
??? Why do you say that? I believe that a cat or a dog are conscious. And I think that most ID thinkers would agree. Ask ET about nears! :)
Evolutionary theory offers an explanation, and ID (not your version of ID but others) offers an opposing one.
An explanation for what? For the origin of consciousness? But what ID sources have you been perusing? One of the most famous ID icons is the bacterial flagellum, since Behe used it to explain the concept of irreducible complexity (a concept linked to functional complexity). Is that an explanation of human consciousness? I can't see how. Meyer has written a whole book about OOL and a whole book about the Cambrian explosion. Are those theories about the origin of human consciousness? Of course ID thinkers certainly believe that some special human functions, like reason, are linked to the specific design of humans. But it is equally true that the special functions of bacteria (like the CRISPR system) are certainly linked to the specific design of bacteria. The desing inference is perfectly valid in both cases. But consiousness is not "a function". It is much more. It is a component of reality that we cannot in any way explain by objective configurations of external things. ID is not a theory of consciousness. gpuccio
GP
My error was probably to use the word “investigate”, which was ambiguous enought to allow you to play with it.
I wasn't "playing" with it. I was helping you clarify your statement. I'm not trying to say gotcha. I sincerely thought you believed that science could investigate (directly evaluate, measure, analyze) anything (like God) that produces observable facts. I kept in mind that you said that science is not limited by matter. I'd conclude from that a belief that science can investigate (evaluate, analyze, measure, observe, describe) immaterial entities. You cited a philosophy of science to support that view. How am I supposed to know what you are thinking of? I asked you if science could "investigate" God, but you didn't want to answer that. Again, normally IDists would not say that science can Directly investigate, evaluate, analyze, measure or describe immaterial entities. You seem to disagree with that.
Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.
Evaluation is not the gathering of facts. Collecting facts comes from observation, measurement, or investigation. Evaluation can create some facts (such as logical conclusions) but in science it all must start with observation. After that, we can evaluate. To infer is to draw a logical conclusion from observations and evaluation. As I have heard other ID theorists state, ID cannot observe anything about an immaterial designer or designers. I think you disagree with this. The only thing ID attempts to do is show that there is evidence of Intelligence at work. The effects that we observe in nature could have been produced by millions of designers, each one of which has less intelligence than a human being, but collectively create design in nature. If you are speaking about a designer that exists outside of space and time, then we do not have any experience with that. We can observe various effects, but not the entity itself. It seemed that you disagree with this and believe instead that science can directly observe an immaterial designer (or any immaterial entity) that produces effects in reality. Silver Asiatic
Silver Asiatic at #138:
ID science is not limited to the study of biology. ID also looks at the origin of the universe. In that case, ID is making a claim about the origin of time, space and matter. It is not limited to reconfigurations of existing matter.
That's correct. The cosmological argument, especially in the form of fine tuning, is certainly part of the ID debate. But here I have never discussed the cosmological argument in detail. I think it is a very good argument, but many times I have said that it is different from the biological argument, because it has, inevitably, a more philosophical aspect and implication. I have always discussed the biological argument of ID here, and it is also the main object of discussion, I belieev, since the ID movement started. Dembski, Behe, Meyer, Abel, Berlinski and others usually refer mainly to the biological argument. So I apologize if that created some confusion: all that I say about ID is referred to the biological argument. And biological design always happens in space and time.
You’re trying to blame me for something here, but what you quoted did not answer the question. You avoided answering it when I asked about God also. You say that science can investigate anything that produces observable facts. You explain that by saying science can only make inferences from observable effects. As I said before, those two ideas contradict. In the first (bolded) you say that science can investigate “the producer” of the facts. You then shame me for asking why ID cannot investigate the designer by saying that ID can investigate the observable effects. As I said above, you corrected your first statement with the second – but you should not have blamed me for something that merely pointed to the conflict here.
As I have explained, there is no conflict at all. Of course the word "investigate" refers both to the analysis of facts and to the building of hypotheses. Every action of the mind in relation to science is an "investigation" and an "evaluation", IOWs a cognitive activity in search of some truth about reality. I think I have been clear enough at #128: "The correct answer is always the same: science can, and must, investigate, everything that can be observed in reality. And, from observed things, infer ever better models of reality. Given that very broad definition, there are no other limitations." That should be clear, even to you. There are no limitations. If a concept of god were necessary to build a better scientific model of reality that explains observed things, there is no problem: god can be included in that model. But I refuse, and always will refuse, in a scientific discussion, to start from some philosophical or religious idea of God and allow, without any conscious resistance on my part, that such idea influence my scientific reasoning. Science should work, or try to work, independently from any pre-conceived worldview. If scientific reasonings brings to the inclusion, or to the exclusion, of God in a good map of reality, scientific reasoning should follow that line of thought and impartially test it. The opposite is not good, IMO. I hope that's clear enough.
I’m not trying to trick or trap you or win anything. You make a statement that contradicts everything I had known about ID, as well as what contradicts science itself (that science can investigate anything that produces observations). I’m not really worried about your personal views on these things, I was just interested in what seemed to be a confused approach to the issue.
Neither am I. I am trying to clarify. When I don't understand well what my interlocutor is saying, I ask. When they ask me, I answer. That's the way. It's strange that my statements contradict everything you have known of ID. My application of the ID procedure for design inference is very standard, maybe with some more explicit definition. About God, an issue that I never discuss here for the reasons I have given, it is rather clear that all the official ID movement unanimously states that the design inference from biology tells nothing about God. Indeed, ID defenders are usually reluctant to tell anything about the biological designer. I want to clarify well my position about that, even if I have been explicit many times here. 1) I absolutely agree with the idea that there is no need to say anything about the designer to make a valid design inference. This is a pillar of the ID thoughtm and it is perfectly correct. I ofetn say that the designer can only be describet as some conscious, intelligent and purposeful agent. But that is implicit in the definition of design, it is not in any way something we infer about any specific designer. 2) That said, I have always been available here, maybe more than other ID defenders, to make reasonable hypotheses about the biological designer in the measure that those hypotheses can be reasonably driven from known facts. That's what I have done at #100 and #101, trying to answer a number of questions that you had asked. I know very well that trying to reason scientifically about those issues is always a sensitive matter, both for those in my filed and for those in the other. Or maybe just in.between. But I do believe that science must pursue all possible avenues of thought, provided that we always start form observable facts and are honest in building our theories. Knowing that, I have also added, at the end of post #101: "That’s the best I can do to answer your questions. Just a note: my answers here are highly tentative, but they are strictly empirical. They do not derive from any theological agenda. These are the ideas that, in my opinion, are more reasonable to explain the facts as we know them. Right or wrong that they may be, this is the spirit in which I express them." I can only repeat my statement: That’s the best I can do to answer your questions. More in next post. gpuccio
Gpuccio and Silver Asiatic, A few of my thoughts about the relationship between science, philosophy, theology and religion. Creationism is based on a religious text-- the Jewish-Christian scriptures. ID, on the other hand, is at the very least a philosophical inference from the study of nature itself. Even materialists recognize the possibility that nature is designed. Richard Dawkins, for example, has argued that “Biology is the study of complicated things that give the appearance of having been designed for a purpose.” He then goes on to argue that it is not designed. So what is Dawkins argument? Let’s try out his quote as the main premise in a basic logical argument. Premise 1: “Biology is the study of complicated things that give the appearance of having been designed for a purpose.” Premise 2: Dawkins (a trained zoologist) believes that “design” is only an appearance. Conclusion: Therefore, nothing we study in the biosphere is designed. The conclusion is based on what? Are Dawkin’s beliefs and opinions self-evidently true? Is the science settled as he suggests? If the answer for those two questions is no (Dawkin’s arguments BTW are by no means conclusive) then what is the reason for not looking at living systems that have “the appearance of having been designed for a purpose?” Couldn't they really have been designed for a purpose? That is a basic justification for ID. It begins from a philosophical neutral position (that some things could really be designed) whereas a committed Darwinian like Dawkins, along with other "committed" materialists, begins with the logically fallacious assumption that design is impossible. john_a_designer
GP @141: But even in the case where you would develop translucent fur, I hope you’ll keep writing OPs for us here, right? :) jawa
Silver Asiatic at #138:
I responded to your statement: Science can investigate anything that produces observable facts. You then said: The correct answer is always the same: science can, and must, investigate, everything that can be observed in reality. Those two statements actually conflict with each other. You ask me to assume your meaning of various terms (as if the meaning is obvious) but in this case, I assume that your first statement is incorrect and you corrected it with the second.
Oh, good heavens! That's what happens when someone (you) discusses not to understand and be understood, but just to generate confusion. You are of course equivocating on the word "investigate". Maybe the second from is more precise, but the meaning is the same. However, let's clarify, for those who can be confused by your playing with words. Science always starts from facts: what can be observed. But science tries to explain facts building theories (maps of reality). Those theories need not include only what is observable. They just need to explain observed facts. For example, most scientific theories are based on mathematics, which is not something observable. Another example. Most theories in empirical science are about possible relationships of cause and effect. But the relarionship of cause and effect is not something that can be observed. My error was probably to use the word "investigate", which was ambiguous enought to allow you to play with it. OK, let's say that science can build hypotheses only to explain observed facts, but of course those hypotheses, those maps of reality, can include any cognitive content, if it is appropriate to the explanation. The word "evaluate" can refer of course both to the gathering of facts and to the building of theories. My original statement was. " Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions." Wasn't it clear enough for you?
I was using the general and ordinary meaning of the term “design”. Whatever is designed, even if using previously existing material, is an act of creation. If that which at one moment was inanimate matter, suddenly, by an act of an intelligent agent becomes a living organism – that is a creation. The designer created something that did not exist before. You limited the term creation to only those acts which are ex nihilo but that’s an artificial limit.
The problem here is not the meaning of the word design, but the meaning of the word creation. The word creation here, in this blog and I would say in the whole debate about ID and more, is used in the sense of "creation ex nihilo", something that only God can do. Why do you think that our adversaries (maybe you too) call us "creationists" and not "designists"? It's strange that one like you, that has been coming here for some time, is not aware of that, and suddenly inteprets "creation" in this debate as a statement about a movie or a book. However, the problem is not the meaning of words, For that, it's enough to clarify what we mean. Clearly, and without word plays. More in next post. gpuccio
ET, The Massachusetts bears may be cool animals, but didn’t get hired for Coca-Cola TV ads like their polar cousins. :) jawa
ET: Thanks to you! :) I suspected you had some special connection with bears! I am more a cat guy, but I do understand love and interest for all animals. :) gpuccio
Thank you, gpuccio. We have a little impasse as I think it is the number of specific mutations and the functions are all the physiological changes afforded by them. In his book "Human Errors", Nathan Lents tells us that it is highly unlikely that one locus will receive another mutation after already getting mutated. And yet it has the same probability for change as any other site. So it looks like evolutionists are talking about the probability of a specific mutation happening regardless of function. As for bears- living in Massachusetts I run into black bears all of the time. They come up on my deck at night. I have photos of them in my yard. And being a dog-person I have a keen interest. That's all- I think they are really cool animals. ET
ET at #142: Thank you for the further clarifications about bears. You are really an expert! :) However, it is not really the numser of specific mutations that counts. It is the number of coordinated mutations necessaty to get a function, none of which has any functional effect alone. There is a big difference. I have tried to explain that at #140. gpuccio
Pw at #137: "Why is there a drop in the black line in the last graphic in your OP? What does that mean? Loss of function?" You mean the small drop in amphibians in the blue line (BCL10)? Yes, that kind of pattern can be observed often enough, usually in one or two classes. The strict meaning is that the best homology hit in that class was lower than in the older class. Here the effect is small, but sometimes we can see a whole unexpected drop in one class of organisms, while the general pattern is completelt consistent in all the other ones. Technically, we are speaking of human conserved information. That's what is measured here. Probably, it is a loss of function in relation to that protein in that class. That is perfectly compatible with Behe's concept of devolution. That form of the protein seems somtemise to be completely lacking in one class. In some cases, it could also be a technical error in the databases, or in the blast algorithm. We can expect that, it happens. Some of the classes I have considered are more represented in the databases, some less. However, if one proteins lacks any relevant homology in one class in my graphic, that means that none of the organisms in that class showed any relevant homology, because I always consider the best hit among all the proteins of all the organisms os that class included in the Ncbi databases. gpuccio
1- Bears with actual white fur exist 2- There are grizzly (brown) bears with actual white fur. They are not polar bears. 3- I am looking at the number of specific mutations it would take to get a polar bear from a common ancestor with brown bears. That would tell me if blind and mindless processes are up to the task. The paper gpuccio provided gives us a hint and it already goes against blind and mindless processes. ET
Jawa at #134: "Is it possible that the polar bears were affected by drinking so much Coca-Cola in TV commercials?" Absolutely! Let's wait: if I develop translucent fur in the next few years, that will be a strong argument in favour of your hypothesis! :) gpuccio
ET: "Again- polar bears do NOT have white fur. That is elementary school level knowledge in Massachusetts." OK, we have no polar bears here in Italy, so I cannot share your expertise! :) So, I read a little about the issue. Polar bear's fur is hollow and lacks any pigment. Indeed, it is rather transparent. The white color is due to optical effects. And the skin is black, as you say. Brown bears has a fur that is solid and pigmented. OK, what does that mean? First of all, let's say that the fact that the fur is not really white is not important in relation to the supposed selection of white in polar animals, because indeed polar bears appear white, so to the purpose of the supposed positive selcetion there is no real difference. But that is not the real point, I would say. The real point is: what is the mechanism of the divergence between brown bears and polar bears? The paper I mentioned puts the split at about 500000 years ago, that is not much. Some give a few million years. Whatever, it is certainly a rather recent event in evolutionary history. So, can the divergence be explained by neo-darwinian mechanisms, or is it the result of design? Or of some biological algorithm embedded in the common ancestor? The paper I mentioned of course has a neo-darwinian answe, but that could hardly be different. Behe thinks that this can be a case of darwinian "devolution": differentiation through loss of function which goves some environmental advantage. You are definitely in favor of design (or an adaptation algorithm, I am not sure). Who is right? I think this is a case that shows clearly how ID theory is necessary to give good answers to that kind of problems. IOWs, we can answer only if we can evaluate the functional complexity of the divergence. The problem is that I cannot find any appropriate data in all the source that have been mentioned, or that I could find in my brief search, to do that. Why? Because nobody seems to know the molecular basis for the difference in fur structure and pigmentation. And it is not completely clear how functionally important the polar bear fur structure is, even if it is generally believe that it is under positive selection, therefor somehow functional in the appropriate environment. If you have some better data, please let me know. Of course, fur is not the only difference, but for the moment let's focus on that. So, from an ID point of view, we have different possible scenarios, if we could measure the functional information behind the difference in fur structure and pigmentation. To safely infer design according to the classic procedure, we need some function that implies more than 500 bits of functional information. However, as we are dealing here with a population (bears) rather limited in number and slow-reproducing, and with a rather short time window, I would be more than happy with 150 bits of functional information to infer design in this case. The genomic differences highlighted in the paper I quoted seem to be rather simple. Most of them can be interpreted as one aminoacid mutations with loss of function, perfectly in the range of neo-darwinism and of Behe's model. But I have no idea if those simple genetic differences are enough to explain what we observe. The lack of pigmentation is probably easier to explain. For the hollow structure, I have no ideas. The problem is: we have to know the molecular basis, otherwise no computation of functional information can be made. Because, as we know, there are sometimes big morphological differences that have a vert simple biological explanation, and vice versa. So again, I must ask: have you any data about the molecular foundation of the differences? In the meantime, I would say that yhe scenarios are: 1) The differences can be explained by one or more independent mutations affecting functions already present. Or, at most, 2 or 3 coordinated mutations where each one affects the same function in a relevant way, so that NS could intervene at each step (IOWs a simple tweaking pathway of the loss of function, as we see for example in antibiotic resistance). These scenarios are in the range of what RV + NS could in principle do, maybe even in a population like bears. In this case, I would accept a neo-darwinian mechanism as a reasonable explanation, until different data are discovered. 2) The differences imply a gain in functional information of 150+ bits. We can safely infer design. Polar bears were designed, some time about 400000 years ago, or a little more. 3) The differences imply something between 12 bits (3 AAs) and 150 bits. In this case, It would be wise to remain cautious. It is not the best scenario to infer design, even if it is rather unlikely for a neo-darwinian mechanism in that kind of population. Maybe some simple active adaptation algorithm embedded in brown bears could be considered. But such an algorithm should be in some way detailed and shown to be there, not only imagined. IMO, this is how ID theory works. Through facts, and objective measurements of functional information. There is no other way. Just a final note about the "waiting for two mutations" paper. That is of course a very interesting article. But it is about two coordinated mutations needed to generate a new function, none of which individually confers any advantage. IOWs, this is more or less the scenario of chloroquine resistance, again linked to Behe. I agree that such a scenario, even if possible, is extremely unlikely in a population like bears. But the simple fact is that almost all the variations considered by Behe in his reasonings about devolution are very simple. One mutation is often enough to lose a function. One frameshift mutation can inactivate a whole protein, losing maybe thousands of bits of functional information. And we can have a lot of such individual independent mutations in a population like bears in 400000 years. So, unless we have better data on the functional information involved in the transition to polar bears, I suspend any judgement. gpuccio
More on the cute polar bears: Polar bear evolution is marked by rapid changes in gene copy number in response to dietary shift David C. Rinker, Natalya K. Specian, Shu Zhao, and John G. Gibbons PNAS July 2, 2019 116 (27) 13446-13451;   DOI: 10.1073/pnas.1901093116
  Copy number variation describes the degree to which contiguous genomic regions differ in their number of copies among individuals. Copy number variable regions can drive ecological adaptation, particularly when they contain genes. Here, we compare differences in gene copy numbers among 17 polar bear and 9 brown bear individuals to evaluate the impact of copy number variation on polar bear evolution. Polar bears and brown bears are ideal species for such an analysis as they are closely related, yet ecologically distinct. Our analysis identified variation in copy number for genes linked to dietary and ecological requirements of the bear species. These results suggest that genic copy number variation has played an important role in polar bear adaptation to the Arctic.
Polar bear (Ursus maritimus) and brown bear (Ursus arctos) are recently diverged species that inhabit vastly differing habitats. Thus, analysis of the polar bear and brown bear genomes represents a unique opportunity to investigate the evolutionary mechanisms and genetic underpinnings of rapid ecological adaptation in mammals. Copy number (CN) differences in genomic regions between closely related species can underlie adaptive phenotypes and this form of genetic variation has not been explored in the context of polar bear evolution. Here, we analyzed the CN profiles of 17 polar bears, 9 brown bears, and 2 black bears (Ursus americanus). We identified an average of 318 genes per individual that showed evidence of CN variation (CNV). Nearly 200 genes displayed species-specific CN differences between polar bear and brown bear species. Principal component analysis of gene CN provides strong evidence that CNV evolved rapidly in the polar bear lineage and mainly resulted in CN loss. Olfactory receptors composed 47% of CN differentiated genes, with the majority of these genes being at lower CN in the polar bear. Additionally, we found significantly fewer copies of several genes involved in fatty acid metabolism as well as AMY1B, the salivary amylase-encoding gene in the polar bear. These results suggest that natural selection shaped patterns of CNV in response to the transition from an omnivorous to primarily carnivorous diet during polar bear evolution. Our analyses of CNV shed light on the genomic underpinnings of ecological adaptation during polar bear evolution.
OLV
Gpuccio I responded to your statement:
Science can investigate anything that produces observable facts.
You then said:
The correct answer is always the same: science can, and must, investigate, everything that can be observed in reality.
Those two statements actually conflict with each other. You ask me to assume your meaning of various terms (as if the meaning is obvious) but in this case, I assume that your first statement is incorrect and you corrected it with the second.
You are equivocating on the meaning of “creation”. Of course all acts of design are “creative” in a very general sense.
I was using the general and ordinary meaning of the term "design". Whatever is designed, even if using previously existing material, is an act of creation. If that which at one moment was inanimate matter, suddenly, by an act of an intelligent agent becomes a living organism - that is a creation. The designer created something that did not exist before. You limited the term creation to only those acts which are ex nihilo but that's an artificial limit. ID science is not limited to the study of biology. ID also looks at the origin of the universe. In that case, ID is making a claim about the origin of time, space and matter. It is not limited to reconfigurations of existing matter.
You quote me saying: “Indeed, ID is not evaluating anything about the designer…” and then you comment: As I quoted you above ” Science can investigate anything that produces observable facts”, why is not ID evaluating the designer? This is quote mining of the worst kind. The original statement was: ” Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.” Shame on you.
You're trying to blame me for something here, but what you quoted did not answer the question. You avoided answering it when I asked about God also. You say that science can investigate anything that produces observable facts. You explain that by saying science can only make inferences from observable effects. As I said before, those two ideas contradict. In the first (bolded) you say that science can investigate "the producer" of the facts. You then shame me for asking why ID cannot investigate the designer by saying that ID can investigate the observable effects. As I said above, you corrected your first statement with the second - but you should not have blamed me for something that merely pointed to the conflict here. I'm not trying to trick or trap you or win anything. You make a statement that contradicts everything I had known about ID, as well as what contradicts science itself (that science can investigate anything that produces observations). I'm not really worried about your personal views on these things, I was just interested in what seemed to be a confused approach to the issue.
The designer that we infer in ID is the designer of biological information.
As above, the designer we refer to in ID is the designer of the universe, not merely of biological information. We infer something about the generation of consciousness. In fact, the immaterial quality of consciousness is evidence in support of ID. We look for the origin of that which we can observe.
We infer nothing about the generation of consciousness (I don’t use the term design, because as I have explained I speak of design only for materila objects). As said, nobody here is trying to build a theory of consciousness.
Mainstream evolution already assumes that consciousness is an evolutionary development. I have read ID researchers who have spoken about the irreducible quality of consciousness as evidence of design. Consciousness separates humans from non-human animals. Evolutionary theory offers an explanation, and ID (not your version of ID but others) offers an opposing one. Silver Asiatic
GP, I appreciate your answers at 107. Please, let me ask you another question: Why is there a drop in the black line in the last graphic in your OP? What does that mean? Loss of function? pw
If all of king's horses and all of king's men couldn't put Humpty together again, who else can do it? :) jawa
GP @129: Thanks for referencing the discussion about the Humpty Dumpty argument. Very interesting indeed. PeterA
Is it possible that the polar bears were affected by drinking so much Coca-Cola in TV commercials? :) jawa
GP @131:
About polar bears, and in support of Behe’s ideas: Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears
Here's another article also mentioning the cute polar bears: Genetic diversity of CHC22 clathrin impacts its function in glucose metabolism Matteo Fumagalli, Stephane M Camus, Yoan Diekmann, Alice Burke, Marine D Camus, Paul J Norman, Agnel Joseph, Laurent Abi-Rached, Andrea Benazzo, Rita Rasteiro, Iain Mathieson, Maya Topf, Peter Parham, Mark G Thomas, Frances M Brodsky eLife 2019;8:e41517 DOI: 10.7554/eLife.41517
CHC22 clathrin plays a key role in intracellular membrane traffic of the insulin-responsive glucose transporter GLUT4 in humans. We performed population genetic and phylogenetic analyses of the CHC22-encoding CLTCL1 gene, revealing independent gene loss in at least two vertebrate lineages, after arising from gene duplication. All vertebrates retained the paralogous CLTC gene encoding CHC17 clathrin, which mediates endocytosis. For vertebrates retaining CLTCL1, strong evidence for purifying selection supports CHC22 functionality. All human populations maintained two high frequency CLTCL1 allelic variants, encoding either methionine or valine at position 1316. Functional studies indicated that CHC22-V1316, which is more frequent in farming populations than in hunter-gatherers, has different cellular dynamics than M1316-CHC22 and is less effective at controlling GLUT4 membrane traffic, altering its insulin-regulated response. These analyses suggest that ancestral human dietary change influenced selection of allotypes that affect CHC22’s role in metabolism and have potential to differentially influence the human insulin response.
 It is also possible that some forms of polar bear CHC22 are super-active at GLUT4 sequestration, providing a route to maintain high blood glucose, as occurs through other mutations in the cave fish (Riddle et al., 2018).
Regulators of fundamental membrane traffic pathways have diversified through gene duplication in many species over the timespan of eukaryotic evolution. Retention and loss can, in some cases, be correlated with special requirements resulting from species differentiation
The genetic diversity that we report here may reflect evolution towards reversing a human tendency to insulin resistance and have relevance to coping with increased carbohydrate in modern diets.
  And here's another one; Environmental contaminants modulate the transcriptional activity of polar bear (Ursus maritimus) and human peroxisome proliferator-activated receptor alpha (PPARA) Heli Routti, Mari K. Berg, Roger Lille-Langøy, Lene Øygarden, Mikael Harju, Rune Dietz, Christian Sonne & Anders Goksøyr  Scientific Reports   volume 9, Article number: 6918 (2019) DOI: 10.1038/s41598-019-43337-w
Peroxisome proliferator-activated receptor alfa (PPARA/NR1C1) is a ligand activated nuclear receptor that is a key regulator of lipid metabolism in tissues with high fatty acid catabolism such as the liver. Here, we cloned PPARA from polar bear liver tissue and studied in vitrotransactivation of polar bear and human PPARA by environmental contaminants using a luciferase reporter assay. Six hinge and ligand-binding domain amino acids have been substituted in polar bear PPARA compared to human PPARA. Perfluorocarboxylic acids (PFCA) and perfluorosulfonic acids induced the transcriptional activity of both human and polar bear PPARA. The most abundant PFCA in polar bear tissue, perfluorononanoate, increased polar bear PPARA-mediated luciferase activity to a level comparable to that of the potent PPARA agonist WY-14643 (~8-fold, 25??M). Several brominated flame retardants were weak agonists of human and polar bear PPARA. While single exposures to polychlorinated biphenyls did not, or only slightly, increase the transcriptional activity of PPARA, a technical mixture of PCBs (Aroclor 1254) strongly induced the transcriptional activity of human (~8-fold) and polar bear PPARA (~22-fold). Polar bear PPARA was both quantitatively and qualitatively more susceptible than human PPARA to transactivation by less lipophilic compounds.
it should be kept in mind that polar bear metabolism is highly adapted to cold climate and feeding and fasting cycles, and direct comparison of physiological functions between polar bears and humans is thus challenging.
  Here's an article about the brown bears that mentions the polar bear cousins too: Evolutionary history and palaeoecology of brown bear in North-East Siberia re-examined using ancient DNA and stable isotopes from skeletal remains Alba Rey-Iglesia, Ana García-Vázquez, Eve C. Treadaway, Johannes van der Plicht, Gennady F. Baryshnikov, Paul Szpak, Hervé Bocherens, Gennady G. Boeskorov & Eline D. Lorenzen  Scientific Reports   volume 9, Article number: 4462 (2019) DOI: 10.1038/s41598-019-40168-7
The mtDNA of extant polar bears (Ursus maritimus), clade 2b, is embedded within brown bears and is most closely related to clade 2a, the ABC brown bears18.
  OLV
Again- polar bears do NOT have white fur. That is elementary school level knowledge in Massachusetts. "Lack of pigmentation"? It's a translucent hollow tube! Luminescence- when sunlight shines on it there is a reaction we call luminescence (another great word for sobriety check points). The skin is black. To claim that differential accumulation of genetic accidents, errors and mistakes just happened upon luminescence for polar bears, is extraordinary and without a means to test it. Count the number of specific changes already discussed and compare that to waiting for TWO mutations. You will see there isn't enough time in the universe for Darwinian processes to pull it off. ET
For all interested: About polar bears, and in support of Behe's ideas: Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears https://www.cell.com/cell/fulltext/S0092-8674(14)00488-7
Genes Associated with White Fur A white phenotype is usually selected against in natural environments, but is common in the Arctic (e.g., beluga whale, arctic hare, and arctic fox), where it likely confers a selective advantage. A key question in the evolution of polar bears is which gene(s) cause the white coat color phenotype. The white fur is one of the most distinctive features of the species and is caused by a lack of pigment in the hair. We find evidence of strong positive selection in two candidate genes associated with pigmentation, LYST and AIM1 (Table 1). LYST encodes the lysosomal trafficking regulator Lyst. Melanosomes, where melanin production occurs, are lysosome-related organelles and have been implicated in the progression of disease associated with Lyst mutation in mice (Trantow et al., 2010). The types and positions of mutations identified in LYST vary widely, but Lyst mutant phenotypes in cattle, mice, rats, and mink are characterized by hypopigmentation, a melanosome defect characterized by light coat color (Kunieda et al., 1999, Runkel et al., 2006, Gutiérrez-Gil et al., 2007). LYST contains seven polar bear-specific missense substitutions, in contrast to only one in brown bear. One of these, a glutamine to histidine change within a conserved WD40-repeat containing domain, is predicted to significantly affect protein function (Figure 5B, Table S7). Three polar bear changes in LYST are located in proximity to the N-terminal structural domain and map close to human mutations associated with Chediak-Higashi syndrome, a hair and eyes depigmentation disease (Figure 5C). We predict that all these protein-coding changes, possibly aided by regulatory mutations or interactions with other genes, dramatically suppress melanin production and transport, causing the lack of pigment in polar bear fur. Variation in expression of the other color-associated gene, AIM1 (absent in melanoma 1), has been associated with tumor suppression in human melanoma (Trent et al., 1990), a malignant tumor of melanocytes that affects melanin pigment production.
See also comments #75 and #112. gpuccio
Silver Asiatic:
I mentioned Mozart’s symphonies which were designed in his conscious mind. They weren’t designed on paper or by musical instruments.
No. According to the definitions I have given, and that I always use when discussing ID. Mozart's symphonies were designed when he put them on paper. Before that, they were conscious representations, and not designed objects. As said, we are not discussing how conscious representations take form in consciousness. In ID we are interested only in the design of objects.
Also, if an immaterial entity created other immaterial entities, you would say “that is not an act of purposeful design”?
Again, that would not be design in the sense I have given, Indeed, that problem has nothing to do with ID theory. Immaterial entities do not have a configuration that can be observed, and therefore no functional information can be measured for them. ID theory is not appropriate for immaterial entities. It is about designed objects. gpuccio
EugeneS: I remember the argument mentioned by Sal Cordova, but it seems that the original argument was made by Jonathan Wells (or maybe someone else before him). Here is an OP by V. J. Torley (the old VJT :) ), defending the argument. It gives a transcript of the argument bt Wells. https://uncommondesc.wpengine.com/intelligent-design/putting-humpty-dumpty-back-together-again-why-is-this-a-bad-argument-for-design/ IMO. the argument is extremely strong. OOL theories imagine that in some way some of the molecules necessary for life originated. That some life was produced. The simple fact is: we cannot produce life in any way, even using all the available molecules and structures that are associated to life on our whole planet. The old fact is still a fact: life comes only from life. Even when Venter engineers his modified genomes, he must put them in a living cell to make them part of a living being. When scientists clone organisms, they must use living cells. You cannot make a living cell from inanimate matter, however biologically structured it is. And yet these people really belive that natural events did generate living cells, from completely unstructured inanimate matter! It is simply folly. I will tell you this: if it were not for the simple ideological necessity that "it must have happened without design, because ours is the only game in town", no serious scientist would ever consider for a moment any of the current theories for OOL. As I have said, they are not even bad scentific theories, They are mere imagination. gpuccio
Silver Asiatic:
Do you think that science can investigate God?
As said many times, I don't discuss God in a scientific context. The correct answer is always the same: science can, and must, investigate, everything that can be observed in reality. And, from observed things, infer ever better models of reality. Given that very broad definition, there are no other limitations.
I believe that design is the ultimate creative act. Design is an action of creation with and for a purpose. It begins as a creative act in a conscious mind – a thought which did not exist before is created for a purpose. This thought is then implemented through various means. But how can there be design without creation? How can a purposeful act occur without it having been created by a mind?
You are equivocating on the meaning of "creation". Of course all actd of design are "creative" in a very general sense. But of course, as everyone can understand, that was not the sense I was using. I was clearly speaking of "creation" in the specific philosophical/religious meaning: generating some reality from nothing. Design is not that. In materila objects, design gives specific configurations to existing matter. I always speak of design according to that definition, that I have given explicitly here: https://uncommondesc.wpengine.com/intelligent-design/defining-design/ This definition is the only one that is necessary in ID, because ID infer design from the material object. You speak of a "creative act in a conscious mind". Maybe, maybe not. We have no idea of how thoughts arise in a conscious mind. Moreover, as we are not trying to build a theory of the mind, or of consciousness, we are not interested in that. The process of design begins when some form, already existing in the consciousness of the designer as a representation, is outputted to a material object. That is the process of design. That is what we want to infer from the material object. It is not creation, only the input of a functional configuration to an object.
How are immaterial objects constrained by space and time? What measurements can be performed on immaterial entities?
Energy is not material, yet it exists in space and time. Dark energy is probably not material: indeed, we don't know what it is. Can you say that it cannot exist in relation to space and time? Strange, because it apparently accelerates the expansion of the universe, and that seems to be in relation, very strongly, with space and time. If we can or cannot measure something has nothing to do with the properties of that something. Things don't wait for our measures to be what they are. Our ability to measure things evolves with our understanding of what things are. You quote me saying: "Indeed, ID is not evaluating anything about the designer…" and then you comment:
As I quoted you above ” Science can investigate anything that produces observable facts”, why is not ID evaluating the designer?
This is quote mining of the worst kind. The original statement was: " Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions." Shame on you.
What scientific evidence do you have to show that the designer did not design human consciousness? Where do you think human consciousness comes from?
Again, misinterpretation, maybe intentional. Of course I am speaking of what we can infer according to ID theory, The designer that we infer in ID is the designer of biological information. We infer nothing about the generation of consciousness (I don't use the term design, because as I have explained I speak of design only for materila objects). As said, nobody here is trying to build a theory of consciousness. I have alredy stated clearly that IMO science has no real understanding of what consciousness is, least of all of how it originates. We can treat consciousness as a fact, because it can be directly observed, but we don't understand what it is. Could the designer of ciological objects be also the originator of human consciousness? Maybe. Maybe not. I have nothing to infer an answer. Certainly not in ID theory. Which is what we are discussing here. And certainly I have no duty to show that the designer did not originate human consciousness, or that he did, because I have made absolutely no inferences about the origin of human consciousness. I have only said that we infer a designer for biological objects, not for human cosnciousness.
Again, an algorithm is a process or set of rules used for calculation or programmatic purposes. A designer can create an immaterial algorithm in an agent that acts on biological entities. There could be no direct evidence of such a thing, but the effects of it can be seen in the development of biological organisms.
Again, everything is possible. I am not interested in what is possible, but in what is supported by facts. You use the word "algorithm" to indicate mental contents. I have nothing against that, but it is not the way I use it, and it is of no interest for ID theory. Again, ID theory is about inferring a design origin for some material objects. To do that, we are not interested in what happens in the consciousness of the designer, those are issues for a theory of the mind. We only need to know that the form we oberve in the object originated from some conscious, intelligent and purposeful agent who inputted that form to the object starting from some conscious representation. If the configuration comes directly from a conscious being, design is proved. All thhis discussion about algorithms is because some people here believe that the designer does not design biological objects directly, but rather designs some other object, probably biological, which then after some time, deisgne the new biological objects by aòlgorithmic computation programmed originally by the designer. IOWs, this model assumes that the designer designs, let's call it so, a "biological computer" which then designs (computes) new biological beings. I have said many times that I don't believe in this strange theory, and I have given my reasons to confute it. However, in this theory the algorithm is not a conscious agent who designs: it is a biological machine, IOWs an object. That's why in this discussion I use algorithm to indicate an object that can compute. Again, the algorithm is designed, because it is a configuration given to a biological machine by the designer, a configuration that can make computations. If you want to know if a mental algorithm in a mind is designed, I cannot answer, because I am not discussion a theory of the mind here. Certainly, it is not designed according to my definition, because it is not a material object. ID theory is simple, when people don't try to pretend that it is complicated. We observe some object. We observe the configuration of the object. We ask ourselves if the object is designed, IOWs is the configuration we observe originated as a conscious representation in a conscious agent, and was then inputted purposefully into the object. We define an objective property, functional information, linked to some function that can be implemented using the object and that can be measured. We measure it. If the complexity of the function that can be implemented by the object is great enough, we infer a design origin for the object. That's all. gpuccio
GP Thanks very much. Could you point to the 'humpty dumpty' OP you mentioned? EugeneS
GP @106: Regarding Fig. 1 in the OP: "the figure is there just to give a first general idea of the system" I agree. And it does it very well, specially within the context of the fascinating topic of your OP. Even without the missing information that you listed:
Only two kinds of generic signals and receptors are shown. As we have seen, there are a lot of different specific receptors. The pathways that connect each specific type of receptor to IKK are not shown (they are shown as simple arrows). But they are very complex and specific. I have given some limited information in the OP and in the discussion. Only the canonical pathway is shown. Only the most common type of dimer is shown. Coactivators and interactions with other pathways are not shown or barely mentioned. Of course, lncRNAs are not shown.
the figure has many details that give a convincing idea of functional complexity. Thus, after carefully studying the figure to understand the flow of functional information, you reveal how much is still missing, one can only wonder how would anyone believe that such a system could arise through unguided physio-chemical events. PeterA
GP
design is the configuration of material objects
I mentioned Mozart's symphonies which were designed in his conscious mind. They weren't designed on paper or by musical instruments. Also, if an immaterial entity created other immaterial entities, you would say "that is not an act of purposeful design"? Silver Asiatic
GPuccio Again, thank you for clarifications and even repeating things you stated before. It has been very helpful. I am not fully understanding several of your points which I will illustrate below:
GP Science can investigate anything that produces observable facts. In no way it is limited to “matter”.
Do you think that science can investigate God?
And the designer needs not have “created” anything. Design is not creation.
I believe that design is the ultimate creative act. Design is an action of creation with and for a purpose. It begins as a creative act in a conscious mind - a thought which did not exist before is created for a purpose. This thought is then implemented through various means. But how can there be design without creation? How can a purposeful act occur without it having been created by a mind?
Not having a physical body does not necessarily mean that an entity is not subject to space and time.
How are immaterial objects constrained by space and time? What measurements can be performed on immaterial entities?
Indeed, ID is not evaluating anything about the designer...
As I quoted you above " Science can investigate anything that produces observable facts", why is not ID evaluating the designer?
The designer designs biological information. Not human consciousness, or any other consciousness,
What scientific evidence do you have to show that the designer did not design human consciousness? Where do you think human consciousness comes from?
Not “immaterial algorithms”. design is the configuration of material objects, starting from cosncious representations of the designer.
Again, an algorithm is a process or set of rules used for calculation or programmatic purposes. A designer can create an immaterial algorithm in an agent that acts on biological entities. There could be no direct evidence of such a thing, but the effects of it can be seen in the development of biological organisms. Silver Asiatic
EugeneS: Of course they would never succeed, in an autoclave or elsewhere. I suppose that Darwin's argument was that, in the absence of existing life, the first organic molecules generated (by magic, probably) would have been more stable than what we can expect today. Indeed, today simple organic molecules have very short life in any environment because of existing forms of life. The argument is however irrelevant. The simple truth is that simple organic molecules (Darwin was probably thinking of proteins, today they should be RNA to be fashionable) are completely useless to build life of any form. Let's be serious: even if we take all components, membrane, genome, and so on, for example by disrupting bacteria, and put them together in a test tube, we can never build a living cell. This is the classic humpty dumpty argument, made here time ago, if I remember well, by Sal Cordova. It remains a formidable argument. All reasonings about OOL from inanimate matter are, really, nothing more than fairy tales, They don't even reach the status of bad scientific theories. gpuccio
GP Yes, of course. I agree. I have missed out 'physical'. Maybe, it is a distraction from the thread but anyway. I recall one conversation with a biologist. I had posted something against Darwin's explanation of why we can't see another sort of life emerging. Correct me if I am wrong but my understanding is that, basically, Darwin claimed that organic compounds that would have easily become life are immediately consumed by the already existing life forms. I was saying that this is a rubbishy argument. But according to my interlocutor, it actually wasn't. My friend said it wss extremely difficult to get rid of life in an experimental setting for abiogenesis. In relation to what we are discussing here, this claim effectively means that the existing life allegedly devours any signs of emerging life as soon as they appear. My answer at the time was, why don't they put their test tubes in an autoclave? He said that this was not so easy as I thought as getting rid of existing life also destroys the organic chemicals, and defeats the purpose. Today, I still strongly believe it is a bad argument but for a different reason, i.e. due to the impossibility of the translation apparatus that relies on a symbolic memory and semiotic closure self-organizing. There is no empirical warrant to back the claim that such self-organization is possible. What do you think about Darwin's argument and, in particular, about the difficulty of creating the right conditions for a clean abiogenesis experiment? EugeneS
GP @108: "Maybe you can look at this more detailed figure for the different stimuli, receptors and receptor connections to the activation pathway: https://rockland-inc.com/nfkb-signaling-pathway.aspx" Oh, no! Wow! OK, you have persuaded me. I'm convinced now. Thanks! PeterA
EugeneS: The statement was: "“Is the designer a biological organism? Is the designer a physical entity?” I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us." What I mean is that the continuing presence of one or more physical designers, with some physical body, should have left some trace, reasonably. A physical designer has to be physically present at all design intervertions. And physical agents, usually, leave some trace of themselves. I mean, betond the design itself. Of course the design itself is evidence of a designer. But in the case of a non physical designer, we don't expect to find further physical evidence, beyond the design itself. In the case of a physical designer, I would expect something, especially considering the many acts of design in natural history. This is what I meant. gpuccio
GP (101) "...we should have some evidence of that. But there is none." This is where you lost me. Isn't what you so painstakingly analyse here and in other OPs something that constitutes the said evidence? Maybe I am wrong and I have missed out part of the conversation. But it is exactly what we observe that strongly suggests design. It is precisely that. All the rest is immaterial. Consequently, it must be the evidence that you are saying does not exist. I hope I am just misinterpreting what you said there. EugeneS
PeterA and all: An interesting example of complexity is the CBM signalosome. As said briefly in the OP, it is a protein complex made of three proteins: CARD11 (Q9BXL7): 1154 AAs in the human form. Also known as CARMA1. BCL10 (O95999): 233 AAs in the human form. MALT1 (Q9UDY8): 824 AAs in the human form. These three proteins have the central role in transferring the signal from the specific immune receptors in B cells (BCR) and T cells (TCR) to the NF-kB activation system (see Fig. 3 in the OP). IOWs, they signal the recognition of an antigen by the specific receptors on B or T cells, and start the adaptive immune response. A very big task. The interesting part is that those proteins practically appear in vertebrates, because the adaptive immune system starts in jawed fishes. So, I have made the usual analysis for the information jump in vertebrates of these three proteins. Here are the results, that are rather impressing, especially for CARD11: CARD11: absolute jump in bits: 1280; in bits per aminoacid (bpa): 1.109185 BCL10: absolute jump in bits: 165.1; in bits per aminoacid (bpa): 0.7085837 MALT1: absolute jump in bits: 554; in bits per aminoacid (bpa): 0.6723301 I am adding to the OP a graphic that shows the evolutionary history of those three proteins, in terms of human conserved information. gpuccio
ET at #113:
I don’t see any issues with it.
Well, I do. Let's say that we have different ideas about that.
And for every genetic disease there are probably thousands of changes that do not cause one.
Of course. And they are called neutral or quasi neutral random mutations. When they are present in more than 1% of the whole population, they are called polymorphisms. gpuccio
Silver Asiatic at #111
I think we have to say that we do not know. As previously, you stated that we do not know how complex the designer is. An algorithm is method of calculation which would be resident in the mind of the designer. The level of complexity of that algorithm, for a designer capable of creating life on earth, does not seem to be a problem.
I disagree. Algorithms, as I have already explained, are configurations of material objects. We were discussing algorithms on our planet, not imaginary algorithms in the mind of a conscious agent of whom we know almost nothing. My statement was about a real alforithm really implemented in material objects. To compute ATP synthase, that algorithm would certainly be much more complex than ATP stnthase itself. But all these reasonings are silly. We have no example of algorithms in nature, even in the biological world, which do compute new complex functional objects. Must we still waste our time with fairy tales?
The algorithm could be computed by an immaterial entity. The designer, I think you’re saying, created immaterial consciousnesses (human) so could create immaterial algorithms that programmed life from the beginning. So, there would be one single Design act, and then everything after that is an output.
OK, I hope it's clear that this is the theory I am criticizing. Certainly not mine. And I have never said, or discussed, that "The designer created immaterial consciousnesses (human)". As said, ID can say nothing about the nature of consciousness. ID just says that functional information derives from consciousness. And the designer needs not have "created" anything. Design is not creation. The designer designs biological information. Not human consciousness, or any other consciousness, Not "immaterial algorithms". design is the configuration of material objects, starting from cosncious representations of the designer. As said so many times.
If the computing agent is immaterial then you could have no scientific evidence of it.
Not true, as said. Immaterial realities that cause observable facts can be inferred from those facts. Instead, a physical algorithm existing on our planet should leave some trace of its physical existence. This was my simple point.
You propose an immaterial designer — it is subject to conditions of space and time? In any case, that proposal can have no scientific value. Science cannot directly investigate immaterial entities. Science can look at effects of entities, but cannot evaluate them.
Not having a physical body does not necessarily mean that an entity is not subject to space and time. The interventions of the designer on matter are certainly subject to those things. About science, I have already answered. Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions. gpuccio
Gp states, " I think that at present universality seems more likely, but I am not really sure. I think the question remains open." Thank you very much for at least admitting that degree of humility on your part. bornagain77
Silver Asiatic at #110:
I think what science can show is that 1.7 million bits of FI appear. What is not shown is how they appeared there.
Well, when you have facts science has to propose hygpotheses to explain them. Neo-darwinism is one hypothesis, and it does not explain what it should explain. Design is another hypothesis. You can't just say: it happened, and not try to explain it. That's not science.
Regarding a complex algorithm, the designing-mind itself had to have immense capabilities. Algorithms are programmatic functions which start in a mind. Organisms could have been programmed to trigger innovations over time.
Everything is possible. But my points are: a) There is no trace of those algorithms. They are just figments of the imagination. b) There are severe limits to what an algorithm can do. An algorithm cannot find solutions to problems for which it has not been programmed to find solutions. An algorithm just computes. Only consciousness has cognitive representations, understanding and purpose. Regarding innovations, I am afraid they are limited to what Behe describes, plus maybe some limited cases of simple computational adaptation. Innovations exist, but they are always simple.
Here is where it starts to get difficult. On the same basis that we say that there is no evidence of physical designers, we have to say there is no evidence if immaterial designers. Science cannot evaluate immaterial entities. So, our speculations here take us outside of science. I don’t think we can say that we have empirical evidence of immaterial entities or beings. The absence of evidence (in this case of physical designers) does not mean that we have direct evidence of immaterial designers.
I strongly disagree. Here you are indeed assuming methodologic naturalism, something that I consider truly bad philosophy of science (even if I have been recently accused of doing exactly that). Science can investigate anything that produces observable facts. In no way it is limited to "matter". Indeed, many of the most important concepts in science have nothing to do with matter. Abd science does debate ideas and realities about which we still have no clear understanding, see dark matter and especially dark energy. Why, Because those things, whatever they may be, seem to have detectable effects, to generate facts. Moreover, consciousness is in itself a fact. It is subjectively perceived by each of us (you too, I suppose). Therefore is can and must be investigated by science, Even if, at present, science has no clear theory about what consciousness is. Design is an effect of consciousness. There is no evidence that consciousness need to be physical. Indeed, there are good evidences of the constrary, but I will not discuss them now. However, design, functional information and consciousness are certainly facts that need to be investigated by science. Even if the best explanation, maybe the only one, is the intervention of some non physical conscious agent.
That is good. We do not know if there is one or multiple designers, or if the designer of life is the same as the one who developed and changed life. But some designing intelligence existed before life on earth did.
Correct.
That designer would not be a terrestrial, biological entity.
Not physical, therefore not biological. Terrestrial? I don't know. a non physical entity could well, in principle be specially connected to out planet. Or not, of course. If we don't know we don't know.
I don’t think we have any direct, scientific experience with an immaterial, pre-biological conscious entity. Additionally, we do not see that human consciousness can create life, for example, or that it could mutate bacteria to create plants, birds, fish, animals and other humans. We don’t see that human consciousness can intervene and create other consciousnesses. We might say that the entire population of human beings has affected the earth – would this suggest that there is a huge population of designers affecting mutations?
You seem to make some confusion about three different concepts: functional information, life and consciousness. ID is about the origin of functional information, in particular the functional information we observe in living organisms. It can say nothing about what life and consciousness are, least of all about how to generate those things. Functional information is a configuration of material objects to implement some function in the world we observe. Nothing else. Complex functional information originates only from conscious agents (we know that empirically), but it tells us nothing about what consciousness is or how it is generated. And life itself cannot easily be defined, and it is probably more than the information it needs to exist. As humans, we can design functional information. We can also design biological functional information, even rather complex. OK, we are not really very good. We cannot design anything like ATP synthase. But, in time, we can improve. Designers can design complex functional information. More or less complex, good or bad. But they can do it. But human designers, at present, cannot generate life. Indeed. we don't even know what life is. Even more that is true of consciousness. And again, I don't think we can say how many designers have contributed to biological design. Period.
Even if it is only cells where there were innovations that seems to be quite a lot of intervention.
It is a lot of intervention. And so?
I think this cuts against your concern about complex algorithms. The designer may be very complex. Algorithms created by the designer may be complex also.
He could also be very simple.
I do not think that science has established that human consciousness is a non-physical entity, or that human consciousness can exist separated from a body.
Science has established practically nothing about the nature of consciousness. But there is time. Certainly, it has not established that consciousness derives fron the physical body.
The options I see for this introduction of information are: 1. Direct creation of vertibrates 2. Guided or tweaked mutations 3. Pre-programmed innovations that were triggered by various criteria 4. Mutation rates are not constant but can be accelerated at times 5. We don’t know
5 is true enough, but after that 2 is the only reasonable hypothesis. Intelligent selection can have a role too, of course, like in human protein engineering. But I think that transposons act as a form of guided mutation. gpuccio
gpuccio:
Do you really believe that any generic algorithm would design such a thing, if the designer does not code the idea and a lot of details in the algorithm itsef?
I don't see any issues with it. There is a Scientific America article from over a decade ago titled "Evolving Inventions". One invention had a transistor in it that did not have its output connected to anything. The point being is the only details required are what is needed to get the job done, ie connecting a "P" to ADP.
But there are a lot of facts that tell us that most mutations are random, neutral or deleterious. I have mentioned the many human diseases causes by mutations that follow no specific pattern, botn normal mutations and transposon associated.
And for every genetic disease there are probably thousands of changes that do not cause one. ET
ET at #109:
The Designer is never seen.
Correct. But, as I have said, the designer needs not be physical. I believe that consciousness can exist without being necessarily connected to a physical body. I have explained at #101 (to SilverAsiatic). I quote myself: “Is the designer a biological organism? Is the designer a physical entity?” I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us. An algorithm, instead, needs to be physically instantiated. An algorithm is not a conscious agent. It works like a machine. It need a physical "body" to exist and work.
The way ATP synthase works, by squeezing the added “P” onto ADP and not by some chemical reaction, is a clue- for me, anyway. It just seems like something an algorithm would tease out- and that comes from knowledge of many GA’s that have created human inventions.
ATP synthase squeezes the P using mechanical force from a proton gradient. It works like a water mill. Do you really believe that any generic algorithm would design such a thing, if the designer does not code the idea and a lot of details in the algorithm itsef? Algorithms compute, and do nothing else. They are sophisticated abacuses, nothing more. The amazing things that they do are simply due to the specific cponfigurations designed for them by conscious intelligent beings. Maybe the designer needed some algorithm to do the computations, if his computing ability is limited, like ours. Maybe not. But, if he used some algorithm, it seems not to have happened on this planet, ot he accurately destroyed any trace of it. Don't you think that these are just ad hoc reasonings?
I would love to see how you made that determination, especially in the light of the following:
I am not aware that waht Spetner says is true by default. Again, I don't know his thought in detail, and I don't want to judge. But there are a lot of facts that tell us that most mutations are random, neutral or deleterious. I have mentioned the many human diseases causes by mutations that follow no specific pattern, botn normal mutations and transposon associated. See comments #64 and #96. The always precious Behe has clearly shown that differentiation at low level (let's say inside families) is just a matter of adaptation thorugh loss of information, never a generation of new functional information. To be clear, the loss of information is random, due to deleterious mutations, and the adaptation id favoured by an occasionl advantage gained in specific environments, therefore to NS. This is the level where the neo-darwinian model works. But without generating any new functional information. Just by losing part of it. This is Behe's model (see polar bears). And it is mine, too. For the rest, actual design is always needed. gpuccio
GP
So, I think we can agree that any algorithm that can compute the sequences for ATP synthase would be, by far, more complex than ATP synthase itself.
I think we have to say that we do not know. As previously, you stated that we do not know how complex the designer is. An algorithm is method of calculation which would be resident in the mind of the designer. The level of complexity of that algorithm, for a designer capable of creating life on earth, does not seem to be a problem.
So, he designs the algorithm. But, of course, he must implement it is a material object. A material object that can do the computations and then build the compute outcome (IOWs, ATP synthase).
The algorithm could be computed by an immaterial entity. The designer, I think you're saying, created immaterial consciousnesses (human) so could create immaterial algorithms that programmed life from the beginning. So, there would be one single Design act, and then everything after that is an output.
OK, so my simple question is: where is, or was, that object? The computing object? I am aware of nothing like that in the known universe.
If the computing agent is immaterial then you could have no scientific evidence of it.
So, what are we hypothesizing, that 4 billion years ago the designer designed and implemented some extremely complex organism capable of computing ATP synthase, only to compute ATP syntase for bacteria, and that such a complex organism then disappeared without leaving any trace of itself?
I think we are saying that science cannot know this. Additionally, you refer to "the designer" but there could be millions of designers. Again, science cannot make a statement on that.
What’s the sense of such a scenario? What scientific value has it? The answer is simple: none.
You propose an immaterial designer -- it is subject to conditions of space and time? In any case, that proposal can have no scientific value. Science cannot directly investigate immaterial entities. Science can look at effects of entities, but cannot evaluate them.
Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information.
I don't think that conclusion is obvious. Why did the design have to occur when needed and not before. And again the algorithm could have been administered by an immaterial agent, which we never could observe scientifically. There's no way for science to know this. Silver Asiatic
Gpuccio Thank you for your detailed replies on some complex questions. You explained your thoughts very clearly and well.
Of course, there is instead no evidence at all that the organisms that existed before included any complex algorithm capable of computing those 1.7 million bits of functional information.
I think what science can show is that 1.7 million bits of FI appear. What is not shown is how they appeared there. Regarding a complex algorithm, the designing-mind itself had to have immense capabilities. Algorithms are programmatic functions which start in a mind. Organisms could have been programmed to trigger innovations over time.
While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.
Here is where it starts to get difficult. On the same basis that we say that there is no evidence of physical designers, we have to say there is no evidence if immaterial designers. Science cannot evaluate immaterial entities. So, our speculations here take us outside of science. I don't think we can say that we have empirical evidence of immaterial entities or beings. The absence of evidence (in this case of physical designers) does not mean that we have direct evidence of immaterial designers.
This is easy. A designer was certainly responsible for OOL on our planet. OOL is of course one of the events that scream design with the highest strength. So the answer is yes: the designer, or at least the designer who designed life on our planet, certainly existed before.
That is good. We do not know if there is one or multiple designers, or if the designer of life is the same as the one who developed and changed life. But some designing intelligence existed before life on earth did. That designer would not be a terrestrial, biological entity.
Well, we humans, as conscious beings, are entities capable of intervening inside the functions of most cells in our brain or nervous system, adn amny in our bodies. That’s how our consiousness is interfaced to our body. Why shouldn’t some other conscious entity be able to do something similar with biological organisms?
I don't think we have any direct, scientific experience with an immaterial, pre-biological conscious entity. Additionally, we do not see that human consciousness can create life, for example, or that it could mutate bacteria to create plants, birds, fish, animals and other humans. We don't see that human consciousness can intervene and create other consciousnesses. We might say that the entire population of human beings has affected the earth - would this suggest that there is a huge population of designers affecting mutations?
And again, there is no need that the interface reach all cells of all organisms. The strict requirement is for those organisms where the design takes place.
I'd think that the activity of mutations within organisms is such that a continual monitoring would be required in order to achieve designed effects, but perhaps not. Even if it is only cells where there were innovations that seems to be quite a lot of intervention.
We don’t know. How complex is our consciousness, if separated from our body? We don’t know how complex non physical entities need to be. Maybe the designer is very simple. Or not.
I think this cuts against your concern about complex algorithms. The designer may be very complex. Algorithms created by the designer may be complex also. Additionally, I do not think that science has established that human consciousness is a non-physical entity, or that human consciousness can exist separated from a body.
I don’t know when or how the designer decides things. But I know when he does things. For example, he introduced the functional information for vertebrates, all those 1.7 million bits, in some pre-existing organism (probably the first chordates), approximately in those 20 million years when vertebrates appear on earth.
The options I see for this introduction of information are: 1. Direct creation of vertibrates 2. Guided or tweaked mutations 3. Pre-programmed innovations that were triggered by various criteria 4. Mutation rates are not constant but can be accelerated at times 5. We don't know Silver Asiatic
gpuccio:
Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information.
The Designer is never seen. The point of the algorithm was to address the "how" the Intelligent Designer designed living organisms and their complex parts and systems. The way ATP synthase works, by squeezing the added "P" onto ADP and not by some chemical reaction, is a clue- for me, anyway. It just seems like something an algorithm would tease out- and that comes from knowledge of many GA's that have created human inventions.
That would still leave 99.7% of all mutations that could be random. Indeed, that are random.
I would love to see how you made that determination, especially in the light of the following:
He [the Designer] indeed seems to have “carefully crafted” information in His species giving them the ability to respond to environmental stimuli to alter their own genome to adapt to new environments. He then evidently let them wander where they will with the ability to adapt.- Dr. Lee Spetner “the Evolution Revolution” p 108
ET
PeterA: Maybe you can look at this more detailed figure for the different stimuli, receptors and receptor connections to the activation pathway: https://rockland-inc.com/nfkb-signaling-pathway.aspx gpuccio
Pw: "Could the answer include the following issues?" Yes, of course. "You have pointed to the intentional insertion of transposable elements into the genetic code asanother empirical evidence. I think you’ve also mentioned the splicing mechanisms. Perhaps any of the complex functional mechanisms that appeared at some points could be attributed to conscious intentional design?" All of them, if they are functionally complex. That's the theory. That's ID. The procedure, if correctly applied, should have no false positives. "Does CD stand for common design or common descent with designed modifications?" CD stands just for "common descent". I suppose that each person can add his personal connotations. Possibly making them explicit in the discussion. I have explained that for me common descent just means a physical continuity between organisms, but that all new complex functional information is certainly designed. Without exceptions. So, I suppose that "common descent with designed modifications" is a good way to put it. Just a note about the universality. Facts are very strong in supporting common descent (in the sense I have specified). It remains open, IMO, if it is really universal: IOWs, if all forms of life have some continuity with a single original event of OOL, or if more than one event of OOL took place. I think that at present universality seems more likely, but I am not really sure. I think the question remains open. For example, some differences between bacteria and archea are rather amazing. "Does “common” relate to the observed similarities ?" Common, in my version of CD, refers to the physical derivation (for existing information) from one common ancestor. So, let's say that at some time there was in the ocean a common ancestor of vertebrates: maybe some form of chordate. And at some time, vertebrates are already split into cartilaginous fish and bony fish. If both cartilaginous fish and bony fish physically reuse the same old information from a common ancestor, that is common descent, even of course all the new information is added by specific design. I really don't understand how that could be explained without any form of physical descent. Do they really believe that cartilaginous fish were designed from scratch, from inanimate matter, and that bony fish were too designed from scratch, from inanimate matter, but separately? And that the supposed ancestor, the first chordates, were also designed from scratch? And the first eukaryotes? And so on? gpuccio
PeterA: "Is it possible that such an explicit graphic illustration, which includes so many details, is a simplification of the real thing?" Of course it is. A gross simplification. many important details are missing. For example: Only two kinds of generic signals and receptors are shown. As we have seen, there are a lot of different specific receptors. The pathways that connect each specific type of receptor to IKK are not shown (they are shown as simple arrows). But they are very complex and specific. I have given some limited information in the OP and in the discussion. Only the canonical pathway is shown. Only the most common type of dimer is shown. Coactivators and interactions with other pathways are not shown or barely mentioned. Of course, lncRNAs are not shown. And so on. Of course, the figure is there just to give a first general idea of the system. gpuccio
Upright BiPed: "Illuminating thread otherwise." Thank you! :) gpuccio
GP, Fascinating topic and interesting discussion, though sometimes unnecessarily personal. Scientific discussions should remain calm, focused in on details, unbiased. At the end we want to understand more. Undoubtedly biology today is not easy to understand well in all details and it doesn’t look like it could get easier anytime soon. Someone asked: “What evidence do we have of a designer directly intervening into biology?” Could the answer include the following issues? OOL, prokaryotes, eukaryote, and according to Dr Behe, who said that at one point he would point to the class level, now he would focus it on at least at the family level, where the Darwinian paradigm lacks explanatory power for the physiological differences between cats and dogs allegedly proceeding from a common ancestor. You have pointed to the intentional insertion of transposable elements into the genetic code asanother empirical evidence. I think you’ve also mentioned the splicing mechanisms. Perhaps any of the complex functional mechanisms that appeared at some points could be attributed to conscious intentional design? Does CD stand for common design or common descent with designed modifications? Does “common” relate to the observed similarities ? For example, in the case of cats and dogs, “common” relates to their observed anatomical and/or physiological similarities, which were mostly designed too? pw
.
Luckily, some friends are ready to be fiercely antagonistic!
Yes, I see that. Illuminating thread otherwise. Upright BiPed
GP, The first graphic illustration shows the mechanism of NF-kB action, which you associated with the canonical activation pathway "summarized" in figure 1. The figure 1 -without breaking it into more details- could qualify as a complex mechanism. Is it possible that such an explicit graphic illustration, which includes so many details, is a simplification of the real thing? Aren't all the control procedures associated with this mechanism shown in the figure? Are any important details missing, or just irrelevant details? Well, you answered those question when you elaborated on those details in the OP. In this particular example, we first see the "signals" shown in figure 1 under the OP section "The stimuli". Thus, what in the figure 1 appears as a few colored objects and arrows is described in more details, showing the tremendous complexity of each step of the graphic, specially the receptors in the cell membrane. Can the same be said about every step within the figure? PeterA
Silver Asiatic:: You also say: "What evidence do we have of a designer directly intervening into biology?" That's rather simple. The many examples, well known, of sudden appearance in natural history of new biological objects full of tons of new complex functional imformation, information that did not exist at all before. For example, I have analyzed quantitatively the transition to vertebrates, which happened more than 400 million years ago, in a time window of probably 20 million years, and which involved the appearance, for the first time in natural history, of about 1.7 million bits of new functiona information. Information that, after that time, has been conserved up to now. This is the evidence of a design intevrentio, specifically localized in time. Of course, there is instead no ecidence at all that the organisms that existed before included any complex algorithm capable of computing those 1.7 million bits of functional information. You say: "Well, I think we could try to infer more than that – or not? Is the designer a biological organism? Or did the designer exist before life on earth existed? Is the designer a physical entity? What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth? How complex is the designer? More complex than the algorithms you mentioned? Does the designer move from one cell present in Southern California, USA and then travel to intervene in another cell in Tanzania? Or does the designer do such interventions simultaneously? In either answer, are there some facts that show what the designer does in these cases? If simultaneously, how big is the designer and what mechanisms are used to intervene simultaneously into billions of cells at various points of time? Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned. Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?" These are good questions. To many of them, we cannot at present give answers. But not all. "Is the designer a biological organism? Is the designer a physical entity?" I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my "confutation" of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us. "Did the designer exist before life on earth existed?" This is easy. A designer was certainly responsible for OOL on our planet. OOL is of course one of the events that scream design with the highest strength. So the answer is yes: the designer, or at least the designer who designed life on our planet, certainly existed before. "What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth?" Well, we humans, as conscious beings, are entities capable of intervening inside the functions of most cells in our brain or nervous system, adn amny in our bodies. That's how our consiousness is interfaced to our body. Why shouldn't some other conscious entity be able to do something similar with biological organisms? And again, there is no need that the interface reach all cells of all organisms. The strict requirement is for those organisms where the design takes place. "How complex is the designer? " We don't know. How complex is our consciousness, if separated from our body? We don't know how complex non physical entities need to be. Maybe the designer is very simple. Or not. This answer is valid for many other questions: we don't understand, at present, how consciousness can work outside of a physical body. Maybe we will understand more in the future. "Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned." I don't know when or how the designer decides things. But I know when he does things. For example, he introduced the functional information for vertebrates, all those 1.7 million bits, in some pre-existing organism (probably the first chordates), approximately in those 20 million years when vertebrates appear on earth. "Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?" Most likely he uses tools. Of course the designer's consciousness needs to interface with matter, otherwise no design could be possible. That is exactly what we do when our consciousness interfaces with our brain. So, no big problem here. The interface is probably at quantum level, as it is probably in our brains. There are many events in cells that could be more easily tweaked at quantum level in a consciousness related way. Penrose believes that a strict relationship exists in our brain between consciousness and microtubules in neurons. Maybe. I think, as I have said many times, that the most likely tool of design that we can identify at present are transposons. The insertions of transposons, usually random (see my previous posts), could be easily tweaked at quantum level by some conscious intervention. And there is some good evidence that transposons are involved in the generation of new functional genes, even in primates. That's the best I can do to answer your questions. Just a note: my answers here are highly tentative, but they are strictly empirical. They do not derive from any theological agenda. These are the ideas that, in my opinion, are more reasonable to explain the facts as we know them. Right or wrong that they may be, this is the spirit in which I express them. gpuccio
Silver Asiatic: It's not really question of knowing what is in the mind of the designer. The problem is: what is in material objects? Let's go back to ATP synthase. Please, read my comment #74. So, I think we can agree that any algorithm that can compute the sequences for ATP synthase would be, by far, more complex than ATP synthase itself. So, let's say, just for as moment, that the designer does not design ATP synthase directly. Let's say that the designer designs the algorithm. After all, he is clever enough. So, he designs the algorithm. But, of course, he must implement it is a material object. A material object that can do the computations and then build the compute outcome (IOWs, ATP synthase). OK, so my simple question is: where is, or was, that object? The computing object? I am aware of nothing like that in the known universe. Maybe it existed 4 billion years ago, and now it is lost? Well, everything is possible, but what facts support such an idea? None at all. Have we traces of that algorithm, indications of how it worked? Have we any idea of the object where it was implemented? It seems reasonable that it was some biologcial object, probably an organism. So, what are we hypothesizing, that 4 billion years ago the designer designed and implemented some extremely complex organism capable of computing ATP synthase, only to compute ATP syntase for bacteria, and that such a complex organism then disappeared without leaving any trace of itself? What's the sense of such a scenario? What scientific value has it? The answer is simple: none. Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information. And there is more: such a complex algorithm, made to compute ATP synthase, could not certainly compute another, completely different, protein system, like for example the spliceosome. Because that's another function, another plan. A completely different computation would be needed, a different purpose, a different context. So, what do we believe? That the designer designed, later, another complex organism with another comples algorithm to compute and realize the spliceosome? And the immune system? And our brain? Or that, in the beginning, there wa one organism so complex that it could compute the sequence of all future necessary proteins, protein systems, lncRNAs, and so on? A monster of which no trace has remained? OK, I hope that's enough. gpuccio
Gp states, "That would still leave 99.7% of all mutations that could be random. Indeed, that are random." LOL, just can't accept the obvious can he? Bigger men than you have gone to their deaths defending their false theories Gp. :)
“It is difficult (if not impossible) to find a genome change operator that is truly random in its action within the DNA of the cell where it works. All careful studies of mutagenesis find statistically significant non-random patterns” James Shapiro – Evolution: A View From The 21st Century – (Page 82)
To presuppose that the intricate molecular machinery in the cell is just willy nilly moving stuff around on the genome is absurd on its face. And yet that is ultimately what Gp is trying to argue for. Of note: It is not on me to prove a system is completely deterministic in order to falsify Gp's model. I only have to prove that it is not completely random in order to falsify his model. And that threshold has been met. Perhaps Gp would also now like to still defend the notion that most (+90%) of the genome is junk? bornagain77
GP Good answer, thank you.
But maybe the software makes computation whose result was not previously known to the designer. that deos not change anything, The computation process has been designed anyway. And computations are algorithmic, they do not increase the Kolmogorov complexity of the system. And that complexity is the functional complexity.
Yes, but I think this answers your question about a Designer who created algorithms. In a software output, it can be programmed to create information that was not known to the designer. That information actually causes other things to happen. I would think that it is the definition of complex, specified, functional information. We observe the software creating that information, and rightly infer that the information network (process) was designed. But do we, or can we know that the designer was unaware of what the software produced? I don't think so. We do not have access to the designer's mind. We only see the software and what it produces. We know it is the product of design. But we do not know if the functional information was designed for any specific instance, or if it is the output of a previous design farther back, invisible to us. This, I think, is the case in biology. I believe you are saying that the design occurs at various discrete moments where a designer intervenes, and not that the design occurred at some distant time in the past and is merely being worked out by "software". What we observe shows functional information, but this information may either be created directly by the designer at the moment, or it may be an output of a designed system. I do not see how we could distinguish between the two options. With software, we can observe the inputs and calculations and we can determine that the software created something "new"'. It is all the output of design, but we can trace what the software is doing and therefore infer where the "design implementation" took place. It's that term that is the issue here, really. It is "design implementation". Where and when was the design (in the mind of the designer) put into biology? I do not believe that is a question that ID proposes an answer for, and I also do not believe it is a scientific question. Silver Asiatic
Silver Asiatic: "I don’t quite follow that. We create software that evaluates data and then produces functional information (visualizations). So, the design of that software happened when the visualization occurred? I think we normally say that the design occurred first in the mind of the software designer – it’s an idea (Mozart wrote symphonies entirely in his mind before putting on paper). Then, the designer creates algorithms that produce functional information. But the software is not the designer. It is the output of a designed process." I perfectly agree. The designed object here is the software. The design happens when the designer writes the software, from his mind. I see your problem. let's be clear. The software never designs anything, because it is not conscious. Design, by definition, is the output of form from consiousness to a materila object. But you seem to believe that the siftware creates new functional information. Well, it does in a measure, but it is not new complex functional information. this is a point that is often misunderstood. Let's say that the software produces visualizations exactly as programmed to do. In that case, it is easy. All the functional information that we get has been designed when the software was designed. But maybe the software makes computation whose result was not previously known to the designer. that deos not change anything, The computation process has been designed anyway. And computations are algorithmic, they do not increase the Kolmogorov complexity of the system. And that complexity is the functional complexity. Finally, maybe the software uses new information from the environment. In that case, there will be some increse in functional information, but it will be very low, if the environment does not contain complex functional information. IOWs, the environment cannot teach a system how to build ATP synthase, except when the sequence of ATP syntghase (or, for that, of a Shakespeare sonnet in the case of language) is provided externally to the system. Now I must go. More in next post. gpuccio
To all: OK, now let's talk briefly of transposons. It's really strange that transposons have been mentioned here as a confutation of my ideas. But life is strange, as we all know. The simple fact is: I have been arguing here for years that transposons are probably the most important tool of intelligent design in biology. I remember that an interlocutor, some time ago, even accused me of inventing the "God of transposons". The simple fact is: there are many facts that do suggest that transposon activity is responsible for generating new functional genes, new functional proteins. And I think that the best intepretation is that transposon activity can be intelligently directed, in some cases. IOWs, if biological design is, at least in part, implemented by guided mutations, those guided mutations are probably the result of guided transposon activity. We have no certainty of that, but it is a very reasonable scenario, according to known facts. OK, but let's put that into perspective, especially in relation to the confused and confounding statements that have been made or reported here about "random mutations". I will refer to the following interesting article: The Impact of Transposable Elements in Genome Evolution and Genetic Instability and Their Implications in Various Diseases https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4196381/ So, the first question that we need to answer is: a) How frequent are transposon-dependent mutations in relation to all other mutations? There is an answer to that in the paper:
Recent studies have revealed the implications of TEs in genomic instability and human genome evolution [44]. Mutations associated with TE insertions are well studied, and approximately 0.3% of all mutations are caused by retrotransposon insertions [27
0.3% of all mutations. So, let's admit for a moment that transposon derived mutations are not random, as it has been siggested in this thread. That would still leave 99.7% of all mutations that could be random. Indeed, that are random. But let's go on. I have already stated that I believe that transposons are an important tool of design. Therefore, at least some of transposon activity must be intelligently guided. But does that mean that all transposon activity is guided? Of course, absolutely not. I do believ that most transposon activity is random, and is not guided. Let's read again from the paper:
Such insertions can be deleterious by disrupting the regulatory sequences of a gene. When a TE inserts within an exon, it may change the ORF, such that it codes for an aberrant peptide, or it may even cause missense or nonsense mutations. On the other hand, if it is inserted into an intronic region, it may cause an alternative splicing event by introducing novel splice sites, disrupting the canonical splice site, or introducing a polyadenylation signal [8, 9, 10, 11, 42, 43]. In some instances, TE insertion into intronic regions can cause mRNA destabilization, thereby reducing gene expression [45]. Similarly, some studies have suggested that TE insertion into the 5' or 3' region of a gene may alter its expression [46, 47, 48]. Thus, such a change in gene expression may, in turn, change the equilibrium of regulatory networks and result in disease conditions (reviewed in Konkel and Batzer [43]). The currently active non-LTR transposons, L1, SVA, and Alu, are reported to be the causative factors of many genetic disorders, such as hemophilia, Apert syndrome, familial hypercholesterolemia, and colon and breast cancer (Table 1) [8, 10, 11, 27]. Among the reported TE-mediated genetic disorders, X-linked diseases are more abundant than autosomal diseases [11, 27, 45], most of which are caused by L1 insertions. However, the phenomenon behind L1 and X-linked genetic disorders has not yet been revealed. The breast cancer 2 (BRCA2) gene, associated with breast and ovarian cancers, has been reported to be disrupted by multiple non-LTR TE insertions [9, 18, 49]. There are some reports that the same location of a gene may undergo multiple insertions (e.g., Alu and L1 insertions in the adenomatous polyposis coli gene) (Table 1).
And so on. Have we any reason to believe that that kind of transposon activity is guided? Not at all. It just behaves like all other random mutations, that are oftne cause of genetic diseases. Moreover, we know that deleterious mutations are only a fraction of all mutations. Most mutations, indeed, are neutral or quasi neitral. Therefore, it is absolutely reasonable that most transposon induced mutations are neutral too. And the design? The important point, that can be connected to Abel's important ideas, is that functional design happens when an intelligent agnet acts to give a functional (and absolutely unlikely) form to a number of "configurable switches". Now, the key idea here is that the switches must be configurable. IOWs, if they are not set by the designer, their individual configuration is in some measure indifferent, and the global configuration can therefore be described as random. The important point here is that functional sequences are more similar to random sequences than to ordered sequences. Ordered sequences cannot convey the functional information for complex function, because they are constrained by their order. Functional sequences, instead, are pseudo-random (not completely, of course: some order can be detected, as we know well). That relative freedom of variation is a very good foundation to use them in a designed way. So, the idea is: transposon activity is probably random in most cases. In some cases, it is guided. Pribably through some qunatum interface. That's also the reason why a quantum interface is usually considered (by me too) as the best interface between mind and matter: because quantum phenomena are, at one level, probabilistic, random, and that's exactly the reason why they can be used to implement free intelligent choices. To conclude, I will repeat, for the nth time, that a system is a random system when we cannot describe it deterministically, but we can proved some relatively efficient and useful description of it using a probability distribution. There is no such a thing as "complete randomness". If we use a probability distributuon to describe a system, we are treating that system as a random system. Randomness is not an intrinsic property of evens (except maybe at quantum level). A random syste, like the tossing of a coin, is completely deterministic in essence. But we are not able to describe it deterministically. In the same way, random systems that do not follow an uniform distribution are random just the same. A loaded dice is as rando as a fair dice. But, if the laoding is so extreme that only one event can take place, that becomes a necessity system, that can very well be described deterministically. In the same way, there is nothing strange in the fact that some factrs, acting as necessity causes, can modify a probability distribution. As a random system is in reality deterministic in essence, of course is some of the variables that act in it is strong enought to be detected, that variable wil modify the probability dostribution in a detectable way. There is nothing strange in that. The system is stil random (we use a probabiliti dostribution to describe it), but we can detect one specific variable that modifies the probability distribution (what has been called here, not so precisely IMO, a bias). That's the case, for example, of radiations increasing the rate and modifying the type of random mutations, as in the great incread of leukemia cases at Hiroshima after the bomb. That has always been well known, even is some people seem to discover it only now. In all those cases, we are still dealing with random systems: systems where each single event cannot be anticipated, but a probability distribution can rather efficiently describe the system. Mutations are a random system, except maybe for the rare cases of guded mutations in the coruse of biological design. Finally, let me say that, of all the things of which I have been accused, "assuming Methodologcal Naturalism as a starting assumption" is probably the funniest. Next time, they will probably accuse me of being a convinced compatibilist! :) Life is strange. gpuccio
GP
design happens when the functional information is inputted into the material object we observe
I don't quite follow that. We create software that evaluates data and then produces functional information (visualizations). So, the design of that software happened when the visualization occurred? I think we normally say that the design occurred first in the mind of the software designer - it's an idea (Mozart wrote symphonies entirely in his mind before putting on paper). Then, the designer creates algorithms that produce functional information. But the software is not the designer. It is the output of a designed process.
what facts support the existence of such an independent physical algorithm in physical reality?
Again, with Mozart. The orchestra plays the symphony. Does this mean that the symphony could only be created as an independent physical text in physical reality? The facts say no - he had it in his mind. I believe you are saying that a Designer enters into the world at various specific points of time, and intervenes in the life of organisms and creates mutations or functions at those moments. What facts support the existence of those interventions in time, versus the idea that the organism was designed with the capability and plan for various changes from the beginning of the universe? What evidence do we have of a designer directly intervening into biology?
I only know that [the designer] designs biological things, and must be conscious, intelligent and purposeful.
Well, I think we could try to infer more than that - or not? Is the designer a biological organism? Or did the designer exist before life on earth existed? Is the designer a physical entity? What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth? How complex is the designer? More complex than the algorithms you mentioned? Does the designer move from one cell present in Southern California, USA and then travel to intervene in another cell in Tanzania? Or does the designer do such interventions simultaneously? In either answer, are there some facts that show what the designer does in these cases? If simultaneously, how big is the designer and what mechanisms are used to intervene simultaneously into billions of cells at various points of time? Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned. Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)? Silver Asiatic
Bill Cole: Great to hear from you! :) And let's not forget lncRNAs (see comments #52, #67 and #68 here). gpuccio
John_a_designer: Thank you for your very thoughtful comment. Yes, in this OP and in others I have dealed mainly with eukaryotes. But of course you are right, prokaryotes are equally fascinating, maybe only a little bit simpler, and, as you say: "If you can’t explain the natural selection + random variation evolution in prokaryotes it’s game over for Neo-Darwinism. There has to be another explanation". And game over it is, because the functional complexity in prokaryotes ia already overwhelming, and can never be explained by RV + NS. It's not a case that the example I use probably most frequently is ATP synthase. And that is a bacterial protein. You describe very correctly the transcription system in prokaryotes. It's certainly much simpler than in eukaryotes, but still ots complexity is mind-boggling. I think the system of TFs is essentially eukaryotic, but of course a strict regulation is present in prokaryotes too. You mention sigma factors and rho, of course, and there is the system of activators and repressors. But there are big differences, starting form the very different organization of the bacterial chromosome (histone independent supercoiling, and so on). Sigma factors are in some way the equivalent of generic TFs. According to Wikipedia, sigma factor "is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB". Maybe. I have blasted sigma 70 from E. coli with human TFIIB and found no detectable homology (E value 1.4). So, there seems to be little conservation here. I have blasted the same E. coli sigma 70 against all bacteria, excluding proteobacteria (the phylum of E. coli). I would say that there is good conservarion in different types of bacteria, such as up to 1251 bits in firmicutes, 786 bits in actinobacteria, 533 bits in cyanobacteria, and so on. So, this molecule seems to be rather conserved in bacteria. I think that eukaryogenesis is one of the most astounding designed jumps in natural history. I do accept that mithocondria and plastids are derived from bacteria, and that some important eukaryotic features are mainly derived from archae, but even those partial derivations require tons of designed adjustments. And that is only the tip of the iceberg. Most eukaryotic features (the nuclear membrane and nuclear pore, chromatin organization, the system of TFs, the spliceosome, the ubiquitin system, and so on) are essentially eikaryotic, even if of course some vague precursor can be detected, in many cases, in prokaryotes. And each of these system is a marvel of original design. gpuccio
Hi gpuccio Thanks for the interesting post. From my study cell control comes from the availability of transcription acting molecules in the nucleus. They can be either proteins or small molecules that are not transcribed but obtained by other sources like enzyme chains. Testosterone and estrogen are examples of non transcribed small molecules. How this is all coordinated so that a living organism can reliably operate is fascinating and I am thrilled to see you start this discussion. Great to have you back :-) bill cole
Once again (along with others) thank you for a very interesting and evocative OP. On the other hand, as a mild criticism, I am just an uneducated layman when it comes to bio-chemistry so I am continuously trying to get up to speed on the topic. I think I get the gist of what you are saying but I imagine someone stumbling onto this site for the first time are going to find this topic way over their heads. Maybe something of a basic summary which briefly explains transcription, the role of RNA polymerase and the difference between prokaryotic and eukaryotic transcription would be helpful (or a link to such a summary if you done that somewhere else.) As for myself I think I get the gist of what you are saying but I am a little confused by differences between prokaryotic and eukaryotic transcription. (Most of my study and research has been centered on the prokaryote. If you can’t explain the natural selection + random variation evolution in prokaryotes it’s game over for Neo-Darwinism. There has to be another explanation.) For example, one question I have is, are there transcription factors for prokaryotes? According to Google, no.
Eukaryotes have three types of RNA polymerases, I, II, and III, and prokaryotes only have one type. Eukaryotes form and initiation complex with the various transcription factors that dissociate after initiation is completed. There is no such structure seen in prokaryotes.
Is that true? What about the Sigma factor which initiates transcription in prokaryotes and the Rho factor which terminates it? Isn’t that essentially what transcription factors which come in two forms, promoters and repressors, do in eukaryotic transcription? Are Sigma factors and Rho factors the same in all prokaryotes or is there a species difference? As far a termination in eukaryotes, one educational video I ran across recently (it’s dated to 2013) said that it is still unclear how termination occurs in eukaryotes. Is that true? In prokaryote there are two ways transcription is terminated: there is Rho dependent, where the Rho factor is utilized, and Rho independent, where it isn’t. Do we know anymore six years later? Hopefully answering those kinds of questions can help me and others. (Of course, they’re going to have to do some homework on their own.) john_a_designer
To all: Of course, I will make the clarifications about transposons as soon as possible. gpuccio
Bornagain77: OK, I think I will leave it at that with you. Even if you don't. gpuccio
Gp has, in a couple of instances now, tried to imply that I (and others) do not understand randomness. In regards to Shapiro Gp states,
Moreover, the reference to “statistically significant non-random patterns” could simply point to some necessity effect that modifies the probability distribution, like in the case of the loaded dice. As explained, that does not make the system “non-random”. And that has nothing to do with guidance, design or creation.
Might I suggest that it is Gp himself that does not understand randomness. As far as I can tell, Gp presupposes complete randomness within his model, (completely free from 'loaded dice'), and is one of the main reasons that he states that he can think of no "other possible explanation" to explain the sequence data.. Yet, if 'loaded dice' are producing “statistically significant non-random patterns” within genomes then that, of course, falsifies Gp's assumption of complete randomness in his model. Like I stated before 'directed' mutations, (and/or 'loaded dice' to use Gp's term), are 'another possible explanation' that I can think of. bornagain77
Gp 77 and 85 disingenuously claims that he is the one being 'scientific' while trying, as best he can, to keep God out of his science. Hogwash! His model specifically makes claims as to what he believes the designer, i.e. God, is and is not doing. i.e. Johnny Cash's 'One Piece at a Time". Perhaps Gp falsely believes that if he compromises his theology enough that he is somehow being more scientific than I am? Again Hogwash. As I have pointed out many times, assuming Methodologcal Naturalism as a starting assumption, (as Gp seems bent on doing in his model as far as he can do it without invoking God), results in the catastrophic epistemological failure of science itself. (See bottom of post for refutation of methodological naturalism) Bottom line, Gp, instead of being more scientific than I, as he is falsely trying to imply (much like Darwinists constantly try to falsely imply), has instead produced a compromised, bizarre, and convoluted, model. A model that IMHO does not stand up to even minimal scrutiny. And a model that no self respecting Theist or even Darwinist would ever accept as being true. A model that, as far as I can tell, apparently only Gp himself accepts as being undeniably true..
As I have pointed out several times now, assuming Naturalism instead of Theism as the worldview on which all of science is based leads to the catastrophic epistemological failure of science itself. Basically, because of reductive materialism (and/or methodological naturalism), the atheistic materialist is forced to claim that he is merely a ‘neuronal illusion’ (Coyne, Dennett, etc..), who has the illusion of free will (Harris), who has unreliable beliefs about reality (Plantinga), who has illusory perceptions of reality (Hoffman), who, since he has no real time empirical evidence substantiating his grandiose claims, must make up illusory “just so stories” with the illusory, and impotent, ‘designer substitute’ of natural selection (Behe, Gould, Sternberg), so as to ‘explain away’ the appearance (i.e. illusion) of design (Crick, Dawkins), and who must make up illusory meanings and purposes for his life since the reality of the nihilism inherent in his atheistic worldview is too much for him to bear (Weikart), and who must also hold morality to be subjective and illusory since he has rejected God (Craig, Kreeft). Bottom line, nothing is real in the atheist’s worldview, least of all, morality, meaning and purposes for life.,,, – Darwin’s Theory vs Falsification – video – 39:45 minute mark https://youtu.be/8rzw0JkuKuQ?t=2387 Thus, although the Darwinist may firmly believes he is on the terra firma of science (in his appeal, even demand, for methodological naturalism), the fact of the matter is that, when examining the details of his materialistic/naturalistic worldview, it is found that Darwinists/Atheists are adrift in an ocean of fantasy and imagination with no discernible anchor for reality to grab on to. It would be hard to fathom a worldview more antagonistic to modern science than Atheistic materialism and/or methodological naturalism have turned out to be. 2 Corinthians 10:5 Casting down imaginations, and every high thing that exalteth itself against the knowledge of God, and bringing into captivity every thought to the obedience of Christ;
bornagain77
ET at #83: "Yes, the algorithm would be more complex than the structure. " OK. "So what? Where is the algorithm? With the Intelligent Designer. " ??? What do you mean? I really don't understand. "A trace of it is in the structure itself." The structure aloows us to infer design. I don't see what in the structure points to some specific algorithm. Can you help? "The algorithm attempts to answer the question of how ATP synthase was intelligently designed. " OK, I am not saying that the designer did not use any algorithm. Maybe the designer is there in his lab, and has a lot of computers working fot him in the process. But: a) He probably designed the computers too b) His conscious cognition is absolutely necessary to reach the results. Computers do the computations, but it's consciousness that defines puproses, and finds strategies. And however, design happens when the functional information is inputted into the material object we observe. So, if the designer inputs information after having computed it in his lab. that is not really relevant. I though that your mention of an algorithm meant something different. I thought you meant that the designer designs an algorithm and put it in some existing organism (or place), and tha such algorithm them compute ATP synthase or what else. So, if that is your idea, again I ask: what facts support the existence of such an independent physical algorithm in physical reality? The answer is simple enough: none at all. " Of course an omnipotent intelligent designer wouldn’t require that and could just design one from its mind." I have no idea if the biological designer is omnipotent, or if he designs things from his mind alone, or if he uses computers or watches or anything else in the process. I only know that he designs biological things, and must be conscious, intelligent and purposeful. gpuccio
Bornagain77 at #82:
Gp in 77 tried to imply he was completely theologically neutral. That is impossible.
Emphasis mine. That's unfair and not true. I quote myself at #77: "One of my strong choices is that my philosophy of science (and my theology, too) tell me that my scientific reasonings must not (as far as it is humanly possible) be influenced by my theology. In any way. So, I really strive to achieve that (and it’s not easy)." No comments. You see, the difference between your position and my position is that you are very happy to derive your scientific ideas from your theology. I try as much as possible not to do that. As said, both are strong choices. And I respect choices. But that's probably one of the reasons why we cannot really communicate constructively about scientific things. gpuccio
Bornagain77 at #69 and #76 (and to all): OK, so some people apparently disagree with me. I will try to survive. But I would insist on the "apparently", because again, IMO, you make some confusion in your quotes and their intepretation. Let's see. At #69, you make 6 quote (excluding the internal reference to ET): 1. Shapiro. I don't think I can comment on this one. The quote is too short, and I have not the book to check the context. However, the reference to "genome change operator" is not very clear. Moreover, the reference to "statistically significant non-random patterns" could simply point to some necessity effect that modifies the probability distribution, like in the case of the loaded dice. As explained, that does not make the system "non-random". And that has nothing to do with guidance, design or creation. 2. Noble. That "genetic change is far from random and often not gradual" is obvious. It is not random because it is designed, and it is well known that it is not gradual. I perfectly agree. That has nothing to do with random mutations, because design is of course not implemented by random mutations. This is simply a criticism of model a. Another point is that some epigenetic modification can be inherited. Again, I have nothing against that. But of course I don't believe that such a mechanism can create complex functional information and body plans. Neither do you, I believe. You say you believe in the "creation of kinds". 3. and 4. Stermberg and the PLOS paper. These are about transposons. I will address this topic specifically at the end of this post. 5. The other PLOS paper. Here is the abstract:
Abstract Mutations drive evolution and were assumed to occur by chance: constantly, gradually, roughly uniformly in genomes, and without regard to environmental inputs, but this view is being revised by discoveries of molecular mechanisms of mutation in bacteria, now translated across the tree of life. These mechanisms reveal a picture of highly regulated mutagenesis, up-regulated temporally by stress responses and activated when cells/organisms are maladapted to their environments—when stressed—potentially accelerating adaptation. Mutation is also nonrandom in genomic space, with multiple simultaneous mutations falling in local clusters, which may allow concerted evolution—the multiple changes needed to adapt protein functions and protein machines encoded by linked genes. Molecular mechanisms of stress-inducible mutation change ideas about evolution and suggest different ways to model and address cancer development, infectious disease, and evolution generally.
This is simple. The paper, again, uses the term "random" and "not random" incorrectly. It is obvious in the first phrase. The authors complain that mutations do not occur "roughly uniformly" in the genome, and that would make them not random. But, as explained, the uniform distribution is only one of the many probability distributions that describe well natural phenomena. For example, many natural systems are well described, as well known, by a normal distribution, which has nothing to do with an uniform distribution. That does not mean that they are not random systems. The criticism to graduality I have already discussed: I obviously agree, but the only reason for non gradual variation is design. Indeed, neutral mutations are instead gradual, because they are not designed. And what's the problem with "environmental inputs"? We know very well that environmental inputs change the rate, and often the type, of mutation. Radiations, for example, do that. We have known that for decades. That is no reason to say that mutations are not random. They are random, and environmental inputs do modify the probability distribution. A lot. Are these authors really discovering, in 2019, that a lor of leukemias were caused by the bomb in Hiroshima? 6. Wells. He is discussing the interesting concept of somatic genomic variation. Here is the abstract of the paper to which he refers:
Genetic variation between individuals has been extensively investigated, but differences between tissues within individuals are far less understood. It is commonly assumed that all healthy cells that arise from the same zygote possess the same genomic content, with a few known exceptions in the immune system and germ line. However, a growing body of evidence shows that genomic variation exists between differentiated tissues. We investigated the scope of somatic genomic variation between tissues within humans. Analysis of copy number variation by high-resolution array-comparative genomic hybridization in diverse tissues from six unrelated subjects reveals a significant number of intra-individual genomic changes between tissues. Many (79%) of these events affect genes. Our results have important consequences for understanding normal genetic and phenotypic variation within individuals, and they have significant implications for both the etiology of genetic diseases such as cancer and for immortalized cell lines that might be used in research and therapeutics.
As you can see (if you can read that abstract impartially) the paper does not mention in any way anything that supports Wells'final (and rather gratuitous) statement: "From what I now know as an embryologist I would say that the truth is the opposite: Tissues and cells, as they differentiate, modify their DNA to suit their needs. It’s the organism controlling the DNA, not the DNA controlling the organism." Indeed, the paper says the opposit: that somatic genomic variations are important to better understand "the etiology of genetic diseases such as cancer". Why? The reason is simple: because they are random mutations, often deleterious. Ah, and by the way: of course somatic mutattions cannot be inherited, and therefore have no role in building the functional inforamtion in organisms. So, as you can see (but will not see) you are making a lot of confusion with your quotations. The only interesting topic is transposons. But it's late, so I will discuss that topic later, in next post. gpuccio
gpuccio:
Because, of course, the algorithm would be by far more complex than the result. And where is that algorithm? there is absolutely no trace of it.
Yes, the algorithm would be more complex than the structure. So what? Where is the algorithm? With the Intelligent Designer. A trace of it is in the structure itself. The algorithm attempts to answer the question of how ATP synthase was intelligently designed. Of course an omnipotent intelligent designer wouldn't require that and could just design one from its mind. ET
Gp in 77 tried to imply he was completely theologically neutral. That is impossible. Besides science itself being impossible without basic Theological presuppositions (about the rational intelligibility of the universe and of our minds to comprehend it), any discussion of origins necessarily entails Theological overtones. It simply can't be avoided. Gp is trying to play politics instead of being honest. Perhaps next GP will try to claim that he is completely neutral in regards to breathing air. :) bornagain77
Basically I believe one of Gp's main flaws in his model is that he believes that the genome is basically static and most all the changes to the genome that do occur are the result of randomness (save for when God intervenes at the family level to introduce ''some' new information whilst saving parts of the genome that have accumulated changes due to randomness). Yet the genome is now known to be dynamic and not to be basically static.
Neurons constantly rewrite their DNA - Apr. 27, 2015 Excerpt: They (neurons) use minor "DNA surgeries" to toggle their activity levels all day, every day.,,, "We used to think that once a cell reaches full maturation, its DNA is totally stable, including the molecular tags attached to it to control its genes and maintain the cell's identity," says Hongjun Song, Ph.D.,, "This research shows that some cells actually alter their DNA all the time, just to perform everyday functions.",,, ,,, recent studies had turned up evidence that mammals' brains exhibit highly dynamic DNA modification activity—more than in any other area of the body,,, http://medicalxpress.com/news/2015-04-neurons-constantly-rewrite-dna.html A Key Evidence for Evolution Involving Mobile Genetic Elements Continues to Crumble - Cornelius Hunter - July 13, 2014 Excerpt: The biological roles of these place-jumping, repetitive elements are mysterious. They are largely viewed (by Darwinists) as “genomic parasites,” but in this study, researchers found the mobile DNA can provide genetic novelties recruited as certain population-unique, functional enrichments that are nonrandom and purposeful. “The first shocker was the sheer volume of genetic variation due to the dynamics of mobile elements, including coding and regulatory genomic regions, and the second was amount of population-specific insertions of transposable DNA elements,” Michalak said. “Roughly 50 percent of the insertions were population unique.” http://darwins-god.blogspot.com/2014/07/a-key-evidence-for-evolution-involving.html Contrary to expectations, genes are constantly rearranged by cells - July 7, 2017 Excerpt: Contrary to expectations, this latest study reveals that each gene doesn’t have an ideal location in the cell nucleus. Instead, genes are always on the move. Published in the journal Nature, researchers examined the organisation of genes in stem cells from mice. They revealed that these cells continually remix their genes, changing their positions as they progress though different stages. https://uncommondesc.wpengine.com/intelligent-design/researchers-contrary-to-expectations-genes-are-constantly-rearranged-by-cells/
And again, DNA is now, contrary to what is termed to be 'the central dogma', far more passive than it was originally thought to be. As Denis Noble stated, “The genome is an ‘organ of the cell’, not its dictator”
“The genome is an ‘organ of the cell’, not its dictator” - Denis Noble – President of the International Union of Physiological Sciences
Another main flaw in Gp's 'Johnny Cash model', and as has been pointed out already, is that he assumes 'randomness' to be a defining notion for changes to the genome. This is the same assumption that Darwinists make. In fact, Darwinists. on top of that, also falsely assume 'random thermodynamic jostling' to be a defining attribute of the actions within a cell. Yet, advances in quantum biology have now overturned that foundational assumption of Darwinists, The first part of the following video recalls an incident where 'Harvard Biovisions' tried to invoke 'random thermodynamic jostling' within the cell to undermine the design inference. (i.e. the actions of the cell, due to advances in quantum biology, are now known to be far more resistant to 'random background noise' than Darwinists had originally presupposed:)
Darwinian Materialism vs. Quantum Biology – Part II – video https://www.youtube.com/watch?v=oSig2CsjKbg
Of supplemental note:
How Quantum Mechanics and Consciousness Correlate – video (how quantum information theory and molecular biology correlate – 27 minute mark) https://youtu.be/4f0hL3Nrdas?t=1634
bornagain77
EugeneS: Hi, Eugene, Welcome anyway to the discussion, even for an off.topic! :) gpuccio
Upright Biped, An off-topic. You have mail as of a long time ago :) I apologise for my long silence. I have changed jobs twice and have been quite under stress. Because of this I was not checking my non-business emails regularly. Hoping to get back to normal. EugeneS
Bornagain77 at #76: For "God reusing stuff", see my previous post. For the rest, mutations and similar, see my next post (I need a little time to write it). gpuccio
Bornagain77: "I note that my model is Theologically modest in that I hold to traditional concepts of the omniscience of God and God creating ‘kinds’ that reproduce after themselves, whereas, humorously, your model is all over the place Theologically speaking." "And as ET pointed out, Gp’s presupposition also makes no sense theologically speaking" I have ignored this kind of objection, but as you (and ET) insist, I will say just a few words. I believe that you are theologically committed in your discussions about science. This is not a big statement, I suppose, because it is rather obvious in all that you say. And it is not a criticism, believe me. It is your strong choice, and I appreciate people who make strong choices. But, of course, I don't feel obliged to share those choices. You see, I too make my strong choices, and I like to remain loyal to them. One of my strong choices is that my philosophy of science (and my theology, too) tell me that my scientific reasonings must not (as far as it is humanly possible) be influenced by my theology. In any way. So, I really strive to achieve that (and it's not easy). This is, for me, an important question of principle. So, I will not answer any argument that makes any reference to theology, or even simply to God, in a scientific discussion. Never. So, excuse me if I will go on ignoring that kind of remarks from you or others. It's not out of discourtesy. It's to remain loyal to my principles. gpuccio
Gp claims:
neutral signatures accumulate as differences as time goes on, between there is physical continuity. Creation or design form scratch for each organism cannot explain that. This is the argument that BA seems not to understand.
To be clear, Gp is arguing for a very peculiar. even bizarre. form of UCD where God reuses stuff and does not create families de novo (which is where Behe now puts the edge of evolution). Hence my reference to Johnny Cask's song "One Piece at a Time" Earlier, Gp also claimed that he could think of no other possible explanation to explain the data. I pointed out that 'directed' mutations are another possible explanation. Gp then falsely claimed that there are no such thing as directed mutations. Specifically he claimed, "Most mutations that we observe, maybe all, are random." Gp, whether he accepts it or not, is wrong in his claim that "maybe all mutations are random". Thus, Gp's "Johnny Cash" model is far weaker than he imagines it to be.
JOHNNY CASH – ONE PIECE AT A TIME – CADILLAC VIDEO https://www.youtube.com/watch?v=Hb9F2DT8iEQ
bornagain77
ET at #71: As far as I can understand, the divergence of polar bears is probably simple enough to be explained as adaptation under environmental constraints. This is not ATP synthase. Not at all. I don't know the topic well, so mine is just an opinion. However, bears are part of the family Ursidae, so brown bears and polar bears are part of the same family. So, is we stick to Behe's very reasonable idea that family is probably the level which still requres design, this is an inside family divergence. gpuccio
ET at #70:
Evolution by means of intelligent design is active design.
Yes, it is.
Genetic changes don’t have to produce some perceived advantage in order to be directed.
Of course. That's exactly my point. See my post #43, this statement about my model (modeol b): "There is no need for functional intermediates in the fossil record or in the genomes. What happens in the lab does not leave traces. We do not need big intermediate populations to be expanded by positive NS, to gain new huge probabilistic resources (as in model a). We just need a few samples, a few intermediates, in a limited time and space. There is no reason to expect any relevant trace from that process." Emphasis added.
And if genetic entropy has interfered with the directed mutation function then that could also explain what you observe.
In my model, it does. You see, for anything to explain the differences created in time by neutral variation (my point 1 at post #43, what I call "signatures of neutral variation in the conserved sequences, grossly proportional to the evolutionary time split"), you definitely need physical continuity between different organisms. Otherwise, nothing can be explained. IOWs, neutral signatures accumulate as differences as time goes on, between there is physical continuity. Creation or design form scratch for each organism cannot explain that. This is the argument that BA seems not to understand.
And yes, ATP synthase was definitely intelligently designed.
Definitely.
Why can’t it be that it was intelligently designed via some sort of real genetic algorithm?
Because, of course, the algorithm would be by far more complex than the result. And where is that algorithm? there is absolutely no trace of it. It is no good to explain things with mere imagination, We need facts. Look, we are dealing with functional information here, not with some kind of pseudo-order that can be generated by some simple necessity laws coupled to random components. IOWs, this is not something that self-organization can even start to do. Of course, an algorithm could do it. If I had a super-computer already programmed with all possible knowledge about ciochemistry, and the computing ability to anticipate top down how protein sequences will fold and what biochemical activity they will have, and with a definite plan to look for some outcome that can transform a proton gradient into ATP, possibly with at least a strong starting plan that it should be something like a water mill, then yes, maybe that super-computer could be, in time, elaborate some relatively efiicient project on that basis. Of course, that whole apparatus would be much more complex than what we want to obtain. After all, ATP synthase has only a few thousand bits of functional information. Here we are discussing probably many gigabytes for the algorithm. That's the problem, in the end. Functional information can be generated only in two was: a) Direct design by a consious, intelligen, purposeful agent. Of course that agent may have to use previous data or knowledge, but the point is that its cognitive abilities and its ability to have purposes will create those shortcuts that no non design system can generate. b) Indirect design through some designed system complex enough to include a good programming of how to obtain some results. As said, that can work, but it has severe limitaitons. The designed system is already very complex, and the further functional information that can be obtained is usually very limited and simple. Why? Because the system, not being open to a further intervention of conaciousness and intelligence, can only do what it has been progarmmed to do. Nothing else. The purposes are only those purposes that have already been embedded at the beginning. Nothing else. The computations, all the apparently "intelligent" activities, are merely passive executions of intelligent programs already designed. They can do what they have been programmed to do, but nothing else. So, let's say that I want to program a system that can find a good solution for ATP-synthase. OK, I can do that (not me, of course, let's say some very intelligent designer). But I must already be conscious that I will need ATP.synthase, ir something like that. I must put that purpose in my system. And of course all the knowledge and power needed to do what I want it to do. Or, of course, I can just design ATP synthase and introduce that design in the system (that I have already designed myself soem time ago) if and when it is needed. Which is more probably true? Again, facts and only facts must guide us. ATP synthase, in a form very similar to what we observe today, was alredy present billion of years ago, when reasonably only prokaryotes were living on our planet. Was a complex algorithm capable of that kind of knowledge and computations present on our planet before the appearance of ATP synthase? In what form? What fatcs have we that support such an idea The truth is very simple. For all that we can know and reasonably infer, at some time, very early after our plane became compatible with any form of life, ATP synthase appeared, very much similar to what it is today, in some bacterial like form of life. There is nothing to suggest, or support, or even mak credible or reasonable, that any complex algorithm capable of computing the necessary information for it was present at that time. No such algorithm, or any trace of it, exists today. If we wanted to compute ATP synthase today, we would not have the palest idea of how to do it. These are the simple facts. Then, anyone is free to believe as he likes. As for me, I stick to my model, and am very happy with it. gpuccio
Upright BiPed: Hi UB, nice to hear from you! :) "Once again, where are your anti-ID critics?" As usual, they seem to have other interests. :) Luckily, some friends are ready to be fiercely antagonistic! :) Which is good, I suppose... gpuccio
. Another excellent post GP, thank you for writing it. Reading thru it now. Once again, where are your anti-ID critics? Upright BiPed
And those polar bears. The change in the structure of the fur didn't happen by chance. So either the original population(s) of bears already had that variation or the information required to produce it. With that information being teased out due to the environmental changes and built-in responses to environmental cues. ET
Evolution by means of intelligent design is active design. Genetic changes don't have to produce some perceived advantage in order to be directed. And if genetic entropy has interfered with the directed mutation function then that could also explain what you observe. And yes, ATP synthase was definitely intelligently designed. Why can't it be that it was intelligently designed via some sort of real genetic algorithm? ET
Gp adamantly states,
I beg to differ. Most mutations that we observe, maybe all, are random.
And yet Shapiro adamantly begs to differ,,,
"It is difficult (if not impossible) to find a genome change operator that is truly random in its action within the DNA of the cell where it works. All careful studies of mutagenesis find statistically significant non-random patterns” James Shapiro - Evolution: A View From The 21st Century - (Page 82)
Noble also begs to differ
Physiology is rocking the foundations of evolutionary biology - Denis Noble - 17 MAY 2013 Excerpt: The ‘Modern Synthesis’ (Neo-Darwinism) is a mid-20th century gene-centric view of evolution, based on random mutations accumulating to produce gradual change through natural selection.,,, We now know that genetic change is far from random and often not gradual.,,, http://onlinelibrary.wiley.com/doi/10.1113/expphysiol.2012.071134/abstract - Denis Noble – President of the International Union of Physiological Sciences
Richard Sternberg also begs to differ
Discovering Signs in the Genome by Thinking Outside the BioLogos Box - Richard Sternberg - March 17, 2010 Excerpt: The scale on the x-axis is the same as that of the previous graph--it is the same 110,000,000 genetic letters of rat chromosome 10. The scale on the y-axis is different, with the red line in this figure corresponding to the distribution of rat-specific SINEs in the rat genome (i.e., ID sequences). The green line in this figure, however, corresponds to the pattern of B1s, B2s, and B4s in the mouse genome.... *The strongest correlation between mouse and rat genomes is SINE linear patterning. *Though these SINE families have no sequence similarities, their placements are conserved. *And they are concentrated in protein-coding genes.,,, ,,, instead of finding nothing but disorder along our chromosomes, we are finding instead a high degree of order. Is this an anomaly? No. As I'll discuss later, we see a similar pattern when we compare the linear positioning of human Alus with mouse SINEs. Is there an explanation? Yes. But to discover it, you have to think outside the BioLogos box. http://www.evolutionnews.org/2010/03/signs_in_the_genome_part_2032961.html Beginning to Decipher the SINE Signal - Richard Sternberg - March 18, 2010 Excerpt: So for a pure neutralist model to account for the graphs we have seen, ~300,000 random mutation events in the mouse have to match, somehow, the ~300,000 random mutation events in the rat. What are the odds of that? http://www.evolutionnews.org/2010/03/beginning_to_decipher_the_sine032981.html
Another paper along that line,
Recent comprehensive sequence analysis of the maize genome now permits detailed discovery and description of all transposable elements (TEs) in this complex nuclear environment. . . . The majority, perhaps all, of the investigated retroelement families exhibited non-random dispersal across the maize genome, with LINEs, SINEs, and many low-copy-number LTR retrotransposons exhibiting a bias for accumulation in gene-rich regions. http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1000732
and another paper
PLOS Paper Admits To Nonrandom Mutation In Evolution - May 31, 2019 Abstract: “Mutations drive evolution and were assumed to occur by chance: constantly, gradually, roughly uniformly in genomes, and without regard to environmental inputs, but this view is being revised by discoveries of molecular mechanisms of mutation in bacteria, now translated across the tree of life. These mechanisms reveal a picture of highly regulated mutagenesis, up-regulated temporally by stress responses and activated when cells/organisms are maladapted to their environments—when stressed—potentially accelerating adaptation. Mutation is also nonrandom in genomic space, with multiple simultaneous mutations falling in local clusters, which may allow concerted evolution—the multiple changes needed to adapt protein functions and protein machines encoded by linked genes. Molecular mechanisms of stress-inducible mutation change ideas about evolution and suggest different ways to model and address cancer development, infectious disease, and evolution generally.” (open access) – Fitzgerald DM, Rosenberg SM (2019) What is mutation? A chapter in the series: How microbes “jeopardize”the modern synthesis. PloS Genet 15(4): e1007995. https://uncommondesc.wpengine.com/evolution/plos-paper-admits-to-nonrandom-mutation-in-evolution/
And as Jonathan Wells noted, "I now know as an embryologist,,,Tissues and cells, as they differentiate, modify their DNA to suit their needs. It's the organism controlling the DNA, not the DNA controlling the organism."
Ask an Embryologist: Genomic Mosaicism - Jonathan Wells - February 23, 2015 Excerpt: humans have a "few thousand" different cell types. Here is my simple question: Does the DNA sequence in one cell type differ from the sequence in another cell type in the same person?,,, The simple answer is: We now know that there is considerable variation in DNA sequences among tissues, and even among cells in the same tissue. It's called genomic mosaicism. In the early days of developmental genetics, some people thought that parts of the embryo became different from each other because they acquired different pieces of the DNA from the fertilized egg. That theory was abandoned,,, ,,,(then) "genomic equivalence" -- the idea that all the cells of an organism (with a few exceptions, such as cells of the immune system) contain the same DNA -- became the accepted view. I taught genomic equivalence for many years. A few years ago, however, everything changed. With the development of more sophisticated techniques and the sampling of more tissues and cells, it became clear that genetic mosaicism is common. I now know as an embryologist,,,Tissues and cells, as they differentiate, modify their DNA to suit their needs. It's the organism controlling the DNA, not the DNA controlling the organism. http://www.evolutionnews.org/2015/02/ask_an_embryolo093851.html
And as ET pointed out, Gp's presupposition also makes no sense theologically speaking
Just think about it- a Designer went through all of the trouble to produce various living organisms and place them on a changing planet in a changing universe. But the Designer is then going to leave it mostly to chance how those organisms cope with the changes? It just makes more sense that organisms were intelligently designed with the ability to adapt and evolve, albeit with genetic entropy creeping in.
bornagain77
OLV and all: Here is a database of known human lncRNAs: https://lncipedia.org/ It includes, at present, data for 127,802 transcripts and 56,946 genes. A joy for the fans of junk DNA! :) Let's look at one of these strange objects. MALAT-1 is one of the lncRNAs described in the paper at the previous post. Here is what the paper says:
MALAT1 Metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) is a highly conserved lncRNA whose abnormal expression is considered to correlate with the development, progression and metastasis of multiple cancer types. Recently we reported the role of MALAT1 in regulating the production of cytokines in macrophages. Using PMA-differentiated macrophages derived from the human THP1 monocyte cell line, we showed that following stimulation with LPS, a ligand for the innate pattern recognition receptor TLR4, MALAT1 expression is increased in an NF-kB-dependent manner. In the nucleus, MALAT1 interacts with both p65 and p50 to suppress their DNA binding activity and consequently attenuates the expression of two NF-kB-responsive genes, TNF-a and IL-6. This finding is in agreement with a report based on in silico analysis predicting that MALAT1 could influence NF-kB/RelA activity in the context of epithelial–mesenchymal transition. Therefore, in LPS-activated macrophages MALAT1 is engaged in the tight control of the inflammatory response through interacting with NF-kB, demonstrating for the first time its role in regulating innate immunity-mediated inflammation. As MALAT1 is capable of binding hundreds of active chromatin sites throughout the human genome, the function and mechanism of action so far uncovered for this evolutionarily conserved lncRNA may be just the tip of an iceberg.
Emphasis mine, as usual. Now, if we look for MALAT-1 in the database above linked, we find 52 transcripts. The first one, MALAT1:1, has a size of 12819 nucleotides. Not bad! :) 342 papers quoted about this one transcript. gpuccio
OLV and all: This is another paper about lncRNAs and NF-kB: Long non-coding RNA: a versatile regulator of the nuclear factor-kB signalling circuit https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5343356/ This is open access.
SUMMARY The nuclear factor-kB (NF-kB) family of transcription factors play an essential role for the regulation of inflammatory responses, immune function and malignant transformation. Aberrant activity of this signalling pathway may lead to inflammation, autoimmune diseases and oncogenesis. Over the last two decades great progress has been made in the understanding of NF-kB activation and how the response is counteracted for maintaining tissue homeostasis. Therapeutic targeting of this pathway has largely remained ineffective due to the widespread role of this vital pathway and the lack of specificity of the therapies currently available. Besides regulatory proteins and microRNAs, long non?coding RNA (lncRNA) is emerging as another critical layer of the intricate modulatory architecture for the control of the NF-kB signalling circuit. In this paper we focus on recent progress concerning lncRNA-mediated modulation of the NF-kB pathway, and evaluate the potential therapeutic uses and challenges of using lncRNAs that regulate NF-kB activity.
gpuccio
OLV: "Are you surprised?" No. :) But, of course, self-organization can easily explain all that! :) gpuccio
GP @52: " the levels of regulation and crosstalk of this NF-kB system grow each time I make a Pubmed search" Are you surprised? :) This crosstalk concept is very interesting indeed. OLV
ET: "I doubt it. I would say most are directed and only some are happenstance occurrences". I beg to differ. Most mutations that we observe, maybe all, are random. Of course, if the functional information we observe in organisms was generated by mutations, those mutations were probably guided. But we cannot observe that process directly, or at least I am not aware that it has been observed. Instead, we observe a lot of more or less spontaneous mutations that are really random. Many of them generate diseases, often in real time. Radiation and toxic substances dramatically increase the rate of random mutations, and the frequency of certain diseases or malformations. We know that very well. And yet, no law can anticipate when and how those mutations will happen. We just know that they are more common. The system is still probabilistic, even if we can detect the effect of specific causes. I don't know Spetner in detail, but it seems that he believes that most functional information derives from some intelligent adaptation of existing organisms. Again, I beg to differ. It is certainly true that "all the evolution that has been actually observed and which is not accounted for by modern evolutionary theory" needs some explanation, but the explanation is active design, not adaptation. I am not saying that adaptation does not exist, or does not have some important role. We can see good examples, for example in bacteria (the plasmid system, just to mention one instance). Of course a complex algorithm can generate some new information by computing new data that come from the environment. but the ability to adapt depends on the specific functional information that is already in the system, and has therefore very strict limitations. Adaptation can never generate a lot of new original functional information. Let's make a simple example. ATP synthase, again. There is no adaptation system in bacteria that could have found the specific sequences of tha many complex components of the system. It is completely out of discussion. And yet, ATP synthase exists in bacteria from billion of years, and is still by far similar in humans. This is of course the result of design, not adaptation. The same can be said for body plans, all complex protein networks, and I agree with Behe that families of organisms are already levels of complexity that scream design. Adaptation, even for an already complex organism, cannot in any way explain those things. It is true that the mutations we observe are practically always random. It is true that they are often deleterious, or neutral. More often neutral or quasi neutral. We know that. We see those mutations happen all the time. Achondroplasia, for example, which is the most common cause of dwarfism, is a genetic disease that (I quote from Wikipedia for simplicity): "is due to a mutation in the fibroblast growth factor receptor 3 (FGFR3) gene.[3] In about 80% of cases this occurs as a new mutation during early development.[3] In the other cases it is inherited from one's parents in an autosomal dominant manner." IOWs, in 80% of cases the disease is due to a new mutation, one that was not present in the parents. If you look at the Exac site: http://exac.broadinstitute.org/ you will find the biggest database of variations in the human genome. Random mutations that generate neutral variation are facts. They can be observed, their rate can be measured with some precision. There is absolutely no scientific reason to deny that. So, to sum up: a) The mutations we observe every day are random, often neutral, sometimes deleterious. b) The few cases where those mutations generate some advantage, as well argued by Behe, are cases of loss of information in complex structures that, by chance, confers some advantage in specific environments. see antibiotic resistance. All those variations are simple. None of them generates any complex functional information. c) The few cases of adaptation by some active mechanism that are in some way documented are very simple too. Nylonase, for example, could be one of them. The ability of viruses to change at very high rates could be another one. d) None of those reasonings can help explain the appearance, throughout natural history, of new complex functional information, in the form of new functional proteins and protein networks, new body plans, new functions, new regulations. None of those reasonings can explain OOL, or eukaryogenesis, or the transition to vertebrates. None of them can even start to explain ATP synthase, ot the immune system, or the nervous system in mammals. And so on, and so on. e) All these things can only be explained by active design. This is my position. This is what I firmly believe. That said, if you want, we can leave it to that. gpuccio
Hazel: In a strict sense, random is a system where the events cannot be anticipated by a definite law, but can be reasonably described by a probability distribution. Of course, it is absolutely true that in that case "there is no causal connection between the mutation and whatever eventual effects and possible benefits it might have for the organism". I would describe that aspect saying that the system, as a whole, is blind to those results. Randomness is a concept linked to our way of describing the system. Random systems, like the tossing of a coin, are in essence deterministic, but we have no way to describe them in a deterministic way. The only exception could be the intrinsic randomness of the wave function collapse in quantum mechanics. In the interpretations where it is really considered intrinsic. gpuccio
Thank you, bornagain77. And yes- the non-random evolutionary hypothesis featuring built-in responses to environmental cues. ET
Excellent point at 59 ET. Isn't Spetner's model called the "Non-Random' Evolutionary hypothesis?
Spetner goes through many examples of non-random evolutionary changes that cannot be explained in a Darwinian framework. https://evolutionnews.org/2014/10/the_evolution_r/ Gloves Off — Responding to David Levin on the Nonrandom Evolutionary Hypothesis Lee M. Spetner September 26, 2016 In the book, I present my nonrandom evolutionary hypothesis (NREH) that accounts for all the evolution that has been actually observed and which is not accounted for by modern evolutionary theory (the Modern Synthesis, or MS). Levin ridicules the NREH but does not refute it. There is too much evidence for it. A lot of evidence is cited in the book, and there is considerably more that I could add. He ridicules what he cannot refute. Levin calls the NREH Lamarckian. But it differs significantly from Lamarkism. Lamarck taught that an animal acquired a new capability — either an organ or a modification thereof — if it had a need for it. He offered, however, no mechanism for that capability. Because Lamarck’s theory lacked a mechanism, the scientific community did not accept it. The NREH, on the other hand, teaches that the organism has an endogenous mechanism that responds to environmental stress with the activation of a transposable genetic element and often leads to an adaptive response. How this mechanism arose is obscure at present, but its operation has been verified in many species.,,, https://evolutionnews.org/2016/09/gloves_off_-_r/
bornagain77
"Mutations are random" means they are accidents, errors and mistakes. They were not planned and just happened to happen due to the nature of the process. Yes, x-rays may have caused the damage that produced the errors but the changes were spontaneous and unpredictable as to which DNA sequences, if any, would have been affected. ET
gpuccio:
Most mutations are random. There can be no doubt about that.
I doubt it. I would say most are directed and only some are happenstance occurrences. See Spetner, "Not By Chance",1997. Also Shapiro, "Evolution: a view from the 21st Century". And:
He [the Designer] indeed seems to have “carefully crafted” information in His species giving them the ability to respond to environmental stimuli to alter their own genome to adapt to new environments. He then evidently let them wander where they will with the ability to adapt.- Dr. Lee Spetner “the Evolution Revolution” p 108
Just think about it- a Designer went through all of the trouble to produce various living organisms and place them on a changing planet in a changing universe. But the Designer is then going to leave it mostly to chance how those organisms cope with the changes? It just makes more sense that organisms were intelligently designed with the ability to adapt and evolve, albeit with genetic entropy creeping in. ET
Good post at 56, gp. Also, it is my understanding that when someone says "mutations are random" they mean there is no causal connection between the mutation and whatever eventual effects and possible benefits it might have for the organism. "Mutations are random" doesn't refer to the causes of the mutations, I don't think. hazel
I, of course, disagree with you. The third article,,, "According to the researchers, mutations of genes are not randomly distributed between the parental chromosomes. They found that 60 percent of mutations affect the same chromosome set and 40 percent both sets.,,, “It’s amazing how precisely the 60:40 ratio is maintained. It occurs in the genome of every individual – almost like a magic formula,” says Hoehe." That is fairly straightforward. And again, Directed mutations are ‘another possible explanation’. Your 'convoluted' model is not nearly as robust as you have presupposed. bornagain77
Bornagain77: Most mutations are random. There can be no doubt about that. Of course, that does not exclude that some are directed. A directed mutation is an act of design. I perfectly agree with Behe that the level of necessary design intervention is at least at the family level. The three quotes you give have nothing to do with directed mutations and design. In particular, the author if the second one is frankly confused. He writes:
Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it. On the contrary, there's much evidence that genetic mutation vary in patterns. For instance it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism's predators and competition, and as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.
This is simple ignorance. The existence of patterns does not mean that a system is not probabilistic. It just means that there are also necessity effects. He makes his error clear saying: "Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern." Now, "a higher chance" is of course a probabilistic statement. A random distribution is not a distribution where all events have the same probability to happen. That is called a uniform probability distribution. If some events (like mutations near a place where mutations have already occurred) have a higher probability to occur, that is still a random distribution, one where the probability of the events is not uniform. Things become even worse. He writes: "While we can't say mutations are random, we can say there is a large chaotic component, just as there is in the throw of a loaded dice. But loaded dice should not be confused with randomness because over the long run—which is the time frame of evolution—the weighted bias will have noticeable consequences." But of course a loaded dice is a random system. Let's say that the dice is loaded so that 1 has a higher probability to occur. So the probabilities of the six possible events, instead of being all 1/6 (uniform distribution), are, for example, 0.2 for 1 and 0.16 for all the other outcomes. So, the dice is loaded. And so? Isn't that a random system? Of course it is. Each event is completely probabilitstic: we cannot anticipate it with a necessity rule. But the event one is more probable than the others. That article is simply a pile of errors and confusion. Whoever understands something about probability can easily see that. Unfortunately you tend to quote a lot of things, but it seems that not always you evaluate them critically. Again, I propose: let's leave it at that, This discussion does not seem to lead anywhere. gpuccio
Gp states
I think that most mutations are random,
And yet the vast majority of mutations are now known to be 'directed'
:How life changes itself: the Read-Write (RW) genome. – 2013 Excerpt: Research dating back to the 1930s has shown that genetic change is the result of cell-mediated processes, not simply accidents or damage to the DNA. This cell-active view of genome change applies to all scales of DNA sequence variation, from point mutations to large-scale genome rearrangements and whole genome duplications (WGDs). This conceptual change to active cell inscriptions controlling RW genome functions has profound implications for all areas of the life sciences. http://www.ncbi.nlm.nih.gov/pubmed/23876611 WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? Fully Random Mutations – Kevin Kelly – 2014 Excerpt: What is commonly called “random mutation” does not in fact occur in a mathematically random pattern. The process of genetic mutation is extremely complex, with multiple pathways, involving more than one system. Current research suggests most spontaneous mutations occur as errors in the repair process for damaged DNA. Neither the damage nor the errors in repair have been shown to be random in where they occur, how they occur, or when they occur. Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it. On the contrary, there’s much evidence that genetic mutation vary in patterns. For instance it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism’s predators and competition, and as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern. http://edge.org/response-detail/25264 Duality in the human genome – November 28, 2014 Excerpt: According to the researchers, mutations of genes are not randomly distributed between the parental chromosomes. They found that 60 percent of mutations affect the same chromosome set and 40 percent both sets. Scientists refer to these as cis and trans mutations, respectively. Evidently, an organism must have more cis mutations, where the second gene form remains intact. “It’s amazing how precisely the 60:40 ratio is maintained. It occurs in the genome of every individual – almost like a magic formula,” says Hoehe. http://medicalxpress.com/news/2014-11-duality-human-genome.html
i.e. Directed mutations are 'another possible explanation'. As to, "do you believe that all relevant functional information is generated when “kinds” are created? And when would that happen?" I believe in 'top down' creation of 'kinds' with genetic entropy, as outlined by Sanford and Behe, following afterwards. As to exactly where that line should be, Behe has recently revised his estimate:
"I now believe it, (the edge of evolution), is much deeper than the level of class. I think is actually goes down to the level of family" Michael Behe: Darwin Devolves - video - 2019 https://www.youtube.com/watch?v=zTtLEJABbTw In this bonus footage from Science Uprising, biochemist Michael Behe discusses his views on the limits of Darwinian explanations and the evidence for intelligent design in biology.
I note that my model is Theologically modest in that I hold to traditional concepts of the omniscience of God and God creating 'kinds' that reproduce after themselves, whereas, humorously, your model is all over the place Theologically speaking. Your model, Theologically speaking, humorously reminds me of this old Johnny Cash song:
JOHNNY CASH - ONE PIECE AT A TIME - CADILLAC VIDEO https://www.youtube.com/watch?v=Hb9F2DT8iEQ
bornagain77
Bornagain77: Moreover, the mechanisms described by Behe in Darwin devolves are the known mechanisms of NS. They can certainly create some diversification, but essentially they give limited advantages in very special contextx, and they are essentially very simple forms of variation, They cannot certainly explain the emergence of new species, least of all explain the emergence of new comples functional information, like new functional proteins. So, do you believe that all relevant functional information is generated when "kinds" are created? And when would that happen? Just to understand. gpuccio
Bornagain77: OK, it's too easy to be right in criticizing Swamidass! :) (Just joking, just joking... but not too much) Just to answer you observations about randomness: I think that most mutations are random, unless they are guided by design. I am not sure that I understand what your point is. Do you believe they are guided? I also believe that some mutations are guided, but that is a form of design. If they are not guided, how can you describe the system? If you cannot describe it in terms of necessity (and I don't think you can), some probability distribution is the only remaining option. Again, I don't understand what you really mean. But of course the mutations (if they are mutations) that generate new functional information are not random at all. they must be guided, or intelligently selected. As you know, I cannot debate God in this context. I can only do what ID theory allows is to do: recognize events where a design inference is absolutely (if you allow the word) warranted. gpuccio
To all: As usual, the levels of regulation and crosstalk of this NF-kB system grow each time I make a Pubmed search. We are all interested, of course, in long non coding RNAs. Well, this paper is about their role in NF-kB signaling: Lnc-ing inflammation to disease https://www.ncbi.nlm.nih.gov/pubmed/28687714
Abstract Termed 'master gene regulators' long ncRNAs (lncRNAs) have emerged as the true vanguard of the 'noncoding revolution'. Functioning at a molecular level, in most if not all cellular processes, lncRNAs exert their effects systemically. Thus, it is not surprising that lncRNAs have emerged as important players in human pathophysiology. As our body's first line of defense upon infection or injury, inflammation has been implicated in the etiology of several human diseases. At the center of the acute inflammatory response, as well as several pathologies, is the pleiotropic transcription factor NF-??. In this review, we attempt to capture a summary of lncRNAs directly involved in regulating innate immunity at various arms of the NF-?? pathway that have also been validated in human disease. We also highlight the fundamental concepts required as lncRNAs enter a new era of diagnostic and therapeutic significance.
The paper, unfortunately, is not open access. It is interesting, however, than lncRNAs are now considered "'master gene regulators". gpuccio
No, I do not think Dr. Cornelius Hunter is ALWAYS right. But I certainly think he is right in his critique of Swamidass. Whereas I don't think you are always wrong. I just think you are, in this instance, severely mistaken in one or more of your assumptions behind your belief in common descent. Your model is, from what I can tell, severely convoluted. If you presuppose randomness in your model at any instance prior to the design input from God to create a new family of species.,, that is one false assumption that would undermine your claim. I can provide references if need be. bornagain77
Bornagain77: I disagree with Cornelius Hunter when I think he is wrong. In that sense, I treat him like anyone else. You seem to believe that he is always right. I don't. Many times I have found that he is wrong in what he says. And no, my argument about neutral variation has nothing to do wiith the argument of shared errors. And with the idea that "lightning doesn’t strike twice". My argument is about differences, not similarities, I think you don't understand it. But that's not a problem. gpuccio
"I would be cautious in accepting everything that Cornelius Hunter says, as you seem to do. I agree with him many times. But many other times I fully disagree." Like when he contradicts you? :) Though you tried to downplay it, your argument from supposedly 'neutral variations' is VERY similar to the shared error argument. As such, for reasons listed above, it is not nearly as strong as you seem to presuppose. It is apparent that you believe the variations were randomly generated and therefore you are basically claiming that “lightning doesn’t strike twice”, which is exactly the argument that Dr. Hunter critiqued. Moreover, If anything we now have far more evidence of mutations being 'directed' than we do of them being truly random. You said you could think of no other possible explanation, I hold that directed mutations are a 'other possible explanation' that is far more parsimonious to the overall body of evidence than your explanation of a Designer, i.e. God, creating a brand new species without bothering to correct supposed neutral variations and/or supposed shared errors. bornagain77
Bornagain77: My argument is not about shared errors. It is about neutral mutations at neutral sites, grossly proportional to evolutionary split times. It is about the ka/ks ratio and the saturation of neutral sites after a few hundred million years. I have made the argument in great detail in the past, with examples, but I have no intention to repeat all the work now. By the way, I would be cautious in accepting everything that Cornelius Hunter says, as you seem to do. I agree with him many times. But many other times I fully disagree. gpuccio
as to:
1) There are signatures of neutral variation in the conserved sequences, grossly proportional to the evolutionary time split, that demonstrate that they are physically passed on. This is the single most important argument in favour of descent, and I am aware of no possible explanation of this fact outside of physical descent of those sequences.,,, OK, I believe there are many arguments against that model, but I will just state here the simplest: it does not explain fact 1)
Again, the argument is not nearly as strong as you seem to think it is: Particularly You could say that the heart of this “shared error” argument is the idea that “lightning doesn’t strike twice.” The identical, harmful mutations, in different species, could not have arisen independently. Instead they must have arisen only once, and then were inherited from a common ancestor. The problem, of course, there is no reason to make this assumption. The logic made sense for written documents, but the species are not ancient manuscripts or homework assignments. They are species, and species are different. In fact repeated designs found in otherwise distant species are ubiquitous in biology. Listening to evolutionists one would think the species fall into an evolutionary pattern with a few minor exceptions here and there. But that is overwhelmingly false. From the morphological to the molecular level, repeated designs are everywhere, and they take on many different forms.
Shared Errors: An Open Letter to BioLogos on the Genetic Evidence, Cont. Cornelius Hunter - June 1, 2016 In recent articles (here, here and here) I have reviewed BioLogos Fellow Dennis Venema’s articles (here, here and here) which claimed that (1) the genomes of different species are what we would expect if they evolved, and (2) in particular the human genome is compelling evidence for evolution. Venema makes several confident claims that the scientific evidence strongly supports evolution. But as I pointed out Venema did not reckon with an enormous body of contradictory evidence. It was difficult to see how Venema could make those claims. Fortunately, however, we were able to appeal to the science. Now, as we move on to Venema’s next article, that will all change. In this article, Venema introduces a new kind of genetic evidence for evolution. Again, Venema’s focus is on, but not limited to, human evolution. Venema’s argument is that harmful mutations shared amongst different species, such as the human and chimpanzee, are powerful and compelling evidence for evolution. These harmful mutations disable a useful gene and, importantly, the mutations are identical. Are not such harmful, shared mutations analogous to identical typos in the term papers handed in by different students, or in historical manuscripts? Such typos are telltale indicators of a common source, for it is unlikely that the same typo would have occurred independently, by chance, in the same place, in different documents. Instead, the documents share a common source. Now imagine not one, but several such typos, all identical, in the two manuscripts. Surely the evidence is now overwhelming that the documents are related and share a common source. And just as a shared, identical, typos are a telltale indicator of a common source, so too must shared harmful mutations be proofs of a common ancestor. It is powerful and compelling evidence for common descent. It is, explains Venema, “one of the strongest pieces of evidence in favor of common ancestry between humans and chimpanzees (and other organisms).” There is only one problem. As we have explained so many times, the argument is powerful because the argument is religious. This isn’t about science. The Evidence Does Not Support the Theory The first hint of a problem should be obvious: harmful mutations are what evolution is supposed to kill off. The whole idea behind evolution is that improved designs make their way into the population via natural selection, and by the same logic natural selection (or purifying selection in this case) filters out the harmful changes. Therefore finding genetic sequence data that must be interpreted as harmful mutations weighs against evolutionary theory. Also, there is the problem that any talk of how a gene proves evolutionary theory is avoiding the problem that evolution fails to explain how genes arose in the first place. Evolution claiming proof in the details of gene sequences seems to be putting the cart before the horse. No Independent Changes You could say that the heart of this “shared error” argument is the idea that “lightning doesn’t strike twice.” The identical, harmful mutations, in different species, could not have arisen independently. Instead they must have arisen only once, and then were inherited from a common ancestor. The problem, of course, there is no reason to make this assumption. The logic made sense for written documents, but the species are not ancient manuscripts or homework assignments. They are species, and species are different. In fact repeated designs found in otherwise distant species are ubiquitous in biology. Listening to evolutionists one would think the species fall into an evolutionary pattern with a few minor exceptions here and there. But that is overwhelmingly false. From the morphological to the molecular level, repeated designs are everywhere, and they take on many different forms. The problem is that these repeated designs appear in species so distant that, according to evolutionary theory, their common ancestor could not have had that design. The human and squid have similar vision systems, but their purported common ancestor, a much simpler and more ancient organism, would have had no such vision system. Evolutionists are forced to say that incredibly complex designs must have arisen, yes, repeatedly and independently. And this must have occurred over and over in biology. It would be a challenge simply to document all of the instances in which evolutionists agreed to an independent origins. For evolutionists then to insist that similar designs in allied species can only be explained by common descent amounts to having it both ways. Bad Designs This “shared error” argument also relies on the premise that the structures in question are bad designs. In this case, the mutations are “harmful,” and so the genes are “broken.” And while that may well be true, it is a premise with a very bad track record. The history of evolutionary thought is full of claims of bad, inefficient, useless designs which, upon further research were found to be, in fact, quite useful. Simply from a history of science perspective, this is a dangerous argument to be making. Epicureanism The “shared error” argument is bad science and bad history, but it remains a very strong argument. This is because its strength does not come from science or history, but rather from religion. As I have explained many times, evolution is a religious theory, and the “shared error” argument is no different. This is why the scientific and historical problems don’t matter. Venema explains: The fact that different mammalian species, including humans, have many pseudogenes with multiple identical abnormalities (mutations) shared between them is a problem for any sort of non-evolutionary, special independent creation model. This is a religious argument, evolution as a referendum on a “special independent creation model.” It is not that the species look like they arose by random chance, it is that they do not look like they were created. Venema and the evolutionists are certain that God wouldn’t have directly created this world. There must be something between the Creator and creation — a Plastik Nature if you will. And if Venema and the evolutionists are correct in their belief then, yes, evolution must be true. Somehow, some way, the species must have arisen naturalistically. This argument is very old. In antiquity it drove the Epicureans to conclude the world must have arisen on its own by random motion. Today evolutionists say the same thing, using random mutations as their mechanism. Needed: An Audit Darwin’s book was loaded with religious arguments. They were the strength of his otherwise weak thesis, and they have always been the strength behind evolutionary thought. No longer can we appeal to the science, for it is religion that is doing the heavy lifting. Yet evolutionists claim the high ground of objective, empirical reasoning. Venema admits that some other geneticists do not agree with this “shared error” argument but, he warns, they do so “for religious reasons.” We have also seen this many times. Evolutionists make religious claims and literally in the next moment lay the blame on the other guy. This is the world according to the Warfare Thesis. We need an audit of our thinking. https://evolutionnews.org/2016/06/shared_errors_a/
and
In Arguments for Common Ancestry, Scientific Errors Compound Theoretical Problems Evolution News | @DiscoveryCSC May 16, 2016 (6) Swamidass points to pseudogenes as evidence for common ancestry, even though many pseudogenes show evidence of function, including the vitellogenin pseudogene that Swamidass cites. Swamidass repeatedly cites Dennis Venema’s arguments for common ancestry based upon pseudogenes. However, as we’ve discussed here in the past, quite a few pseudogenes have turned out to be functional, and we’re discovering more all the time. It’s only recently that we’ve had the technology to study the functions of pseudogenes, so we are just at the beginning of doing so. While it’s true that there’s a lot about pseudogenes we still don’t know, an RNA Biology paper observes, “The study of functional pseudogenes is just at the beginning.” And it predicts that “more and more functional pseudogenes will be discovered as novel biological technologies are developed in the future.” The paper concludes that functional pseudogenes are “widespread.” Indeed, when we carefully study pseudogenes, we often do find function. One paper in Annual Review of Genetics tellingly observed: “Pseudogenes that have been suitably investigated often exhibit functional roles.” One of Swamidass’s central examples mirrors Dennis Venema’s argument that the vitellogenin pseudogene in humans demonstrates we’re related to egg-laying vertebrates like fish or reptiles. But a Darwin-doubting scientist was willing to dig deeper. Good genetic evidence now indicates that what Dennis Venema calls the “human vitellogenin pseudogene” is really part of a functional gene, as one technical paper by an ID-friendly creationist biologist has shown. https://evolutionnews.org/2016/05/in_arguments_fo/
bornagain77
Bornagain77: I quote myself: "That said, I am rather sure that you will stick to your model, model c). That’s fine for me. But I wanted to clarify as much as possible." The only thing in my model that explains biological form is design. Maybe it is not enough, but it is certainly necessary. I want to be clear: I agree with you about the importance of consciousness and of quantum mechanics. But what has that to do with my argument? Do you believe that functional information is designed? I do. Design comes from consciousness. Consciousness interacts with matter through some quantum interface. That's exactly what I believe. My model is not parsimonious and requires gargantuan jumps? Is it worse than the initial creation of kinds? However, for me we can leave it at that. As explained, I was not even implying CD in my initial discussion here. gpuccio
correct time mark is 27 minute mark
How Quantum Mechanics and Consciousness Correlate – video (how quantum information theory and molecular biology correlate – 27 minute mark) https://youtu.be/4f0hL3Nrdas?t=1634
bornagain77
What is the falsification criteria of your model? It seems you are lacking a rigid criteria. Not to mention lacking experimental warrant that what you propose is even possible. "No descent at all. This is, I believe, your model." I do not believe in UCD, but I do believe in diversification from an initially created "kind" by devolutionary processes. i.e. Behe "Darwin Devolves" and Sanford "Genetic Entropy". I note, especially in the Cambrian, we are talking gargantuan jumps in the fossil record. Your model is not parsimonious to such gargantuan jumps. Moreover, your genetic evidence is not nearly as strong as you seem to think it is. And even if it were, it is not nearly enough to explain 'biological form'. For that you need to incorporate recent finding from quantum biology:
How Quantum Mechanics and Consciousness Correlate - video (how quantum information theory and molecular biology corelate - 23 minute mark) https://www.youtube.com/watch?v=4f0hL3Nrdas Darwinian Materialism vs. Quantum Biology – Part II - video https://www.youtube.com/watch?v=oSig2CsjKbg
bornagain77
Bornagain77 at #42: "All new information is ‘designed in the process”???? Please elaborate on exactly what process you are talking about." It should be clear. However, let's try again. Let's say that there are 3 main models for how functional information comes into existence in biological beings. a) Descent with modifications generated by RV + NS: this is the neo-darwinian model. I absoluetly (if you allow the word) reject it. So do you, I suppose. b) Descent with designed modifications: this is my model. This is the process I refer to: a process of design, of engineering, which derives new species from what already exists. The important point, that justifies the term "descent", is that, as I have said, the old information that is appropriate is physically passed on from the ancestor to the new species. All the rest, the new functional information, is engineered in the design process. So, to be more clear, let's say that species B appears in natural history at time T. Before it, there exists another species, A, which has some strong similarities to species B. Let's say that, according to my model, species B derives physically from the already existing species A. How doe it happen? Let's say that, just as an imaginary example, A and B share about 50% of protein coding genes. The proteins coded by these genes are very similar in the two species, almost identical, at least at the beginning. The reason for that is that the function implemented by those proteins in the two species are extremely similar. But that is only part of the game. Of course, B has a lot of new proteins, or parts of proteins, or simply regulatory parts of the genome, that are not the same as A at all. Those sequences are absolutely funtional, but they do things that are specific to B, and do not exist in A, In the same way, many specific functions of A are not needed in B, and so they are not implemented there. Now, losing some proteins or some functions is not so difficult. We know that losing information is a very easy task, and requires no special ability. But how does all that new functional information arise in B? It did not exist in A, or in any other living organism that existed before to time T. It arises in B for the first time, and approximately at time T. The obvious answer, in my model, is: it is newly designed functional information. If I did not believe that, I would be in the other field, and not here in ID. But the old information, the sequence information that retains its function from A to B? Well, in my model, very simply, it is physically passed on from A to B.That is the meaning of descent in my model. That's what makes A an ancestor of B, even if a completely new process of design and engineering is necessary to derive B from A. Now, you may ask: how does that happen? Of course, we don't know the details, but we know three important facts: 1) There are signatures of neutral variation in the conserved sequences, grossly proportional to the evolutionary time split, that demonstrate that they are physically passed on. This is the single most important argument in favour of descent, and I am aware of no possible explanation of this fact outside of physical descent of those sequences. 2) The new functional information arises often in big jumps, and is almost always very complex. For the origin of vertebrates, I have computed about 1.7 million bits of new functional information, arising is at most 20 million years. RV + NS couild never do that, because it totally lacks the necessary probabilistic resources. 3) The fossil record and the existing genomes and proteomes show no trace of the many functional intermediates that would be necessary for RV + NS to even try something. Therefore, RV + NS did not do it, because there is no trace of what should absolutely be there. So, how did design do it, with physical descent? Let's say that we can imagine us doing it. If we were able. What would we do? It's very simple: we would take a few specimens of A, bring them to some lab of ours, and work on them to engineer the new species with our powerful means of genetic engineering. Adding the new functional information to what already exists, and can still be functional in the new project. Where? And in what time? These are good questions. They are good questions in any case, even if you stick to your (I think) model, model c, soon to be described. Because species B does appear at time T. And that must happen somewhere. And that must happen in some time window. But the details are still to be understood. We know too little. But one thing is certain: both space and time are somehow restricted. Space is restricted, because of course the new species must appear somewhere. It does not appear at once all over the globe. But there is more. Model a, the neo-darwinian model, needs a process that takes place almost everywhere. Why? Because it badly needa as many probabilistic resources as possible. IOWs, it badly needs big numbers. Of course, we know very well that no reasonable big number will do. The probabilstic resources simply are not there. Even for bacteria crowding the whole planet for 5 billion years. But with small populations, any thought if RV and NS is blatantly doomed from the beginning. But design does not work that way. Design does not need big numbers, big populations. Especially if it is mainly top down engineering. So, we could very well engineer B working on a relativel small sample of A. In our lab. In what time? I really don't know, but certainly not too much. As you well know, those information jumps are rather sudden in natural history, This is a fact. So? I minute? 1 year? 1 million year? Interesting questions, but in the end it is not much anyway. Not instantaneously, I would say. Not in model b, anyway. If it is an engineering process, it needs time, anyway. So, what is important about this model? Simply that it is the best model that explains facts. 1) The signatures of neutral variation in conserved sequences are perfectly explained. As those sequences have been passed on as they are fron A to B, they keep those signatures. IOWs, if A has existed for 100 million years from some previous split, in those 100 milllion years neutral variation happens in the sequence, and differentiates that sequence in A from some homologous sequence in A1 (the organsim derived from that old split). So, B inherits those changes from A, and if we compare B and A1, we find those differences, as we find them if we compare A and A1. The differences in B are inherited from A as it was 100 million years after the split from A1. 2) The big jumps in functional information are, of course, explained by the design process, the only type of process that can do those things. 3) There is no need for functional intermediates in the fossil record or in the genomes. What happens in the lab does not leave traces. We do not need big intermediate populations to be expanded by positive NS, to gain new huge probabilistic resources (as in model a). We just need a few samples, a few intermediates, in a limited time and space. There is no reason to expect any relevant trace from that process. Of course, the new engineered species, when it is ready and working, is released into the generla environment. IOWs, it is "published". That's what we observe in the fossil record, and in the genomes: the release of the new engineered species. Nothing else. So, model b, my model, explains all three types of observed facts. c) No descent at all. This is, I believe, your model. What does that mean? Well, it can mean sudden "creation" (if the new species appears of of thin air, from nothing), or, more reasonably, engineering from scratch. I will not discuss the "creation" aspect. I would not know what to say, from a scientific point of view. But I will discuss the "engineering from scratch" model. However it is conceived (quick or slow, sudden or gradual), it implies one simple thing: each time, everything is re-engineered from scratch. Even what had already been engineered in previously existing species. From what? It's simple. If it is not creation ex nihilo, "scratch" here can mean only one thing: from inanimated matter. IOWs, it means re-doing OOl each time a new species originates. OK, I believe there are many arguments against that model, but I will just state here the simplest: it does not explain fact 1) Moreover, I would definitely say that all your arguments against descent, however good (IMO, some are good, some are not) are always arguments agains model a). They have no relevance at all against model b), my model. Once and for all, I absolutely (if you allow the word) reject model a). That said, I am rather sure that you will stick to your model, model c). That's fine for me. But I wanted to clarify as much as possible. gpuccio
"For the last time: CD, for me, just means that there is very strong evidence that the molecular information in DNA and proteins of already existing species is physically passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on. As proven by the differences in neutral sites, between species. I hope this is the last time I have to tell you that." To this in particular,,, "passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on." All new information is 'designed in the process"???? Please elaborate on exactly what process you are talking about. As to examples that falsify the common descent model: Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model.
New Paper by Winston Ewert Demonstrates Superiority of Design Model - Cornelius Hunter - July 20, 2018 Excerpt: Ewert’s three types of data are: (i) sample computer software, (ii) simulated species data generated from evolutionary/common descent computer algorithms, and (iii) actual, real species data. Ewert’s three models are: (i) a null model which entails no relationships between any species, (ii) an evolutionary/common descent model, and (iii) a dependency graph model. Ewert’s results are a Copernican Revolution moment. First, for the sample computer software data, not surprisingly the null model performed poorly. Computer software is highly organized, and there are relationships between different computer programs, and how they draw from foundational software libraries. But comparing the common descent and dependency graph models, the latter performs far better at modeling the software “species.” In other words, the design and development of computer software is far better described and modeled by a dependency graph than by a common descent tree. Second, for the simulated species data generated with a common descent algorithm, it is not surprising that the common descent model was far superior to the dependency graph. That would be true by definition, and serves to validate Ewert’s approach. Common descent is the best model for the data generated by a common descent process. Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model. Where It Counts Let me repeat that in case the point did not sink in. Where it counted, common descent failed compared to the dependency graph model. The other data types served as useful checks, but for the data that mattered — the actual, real, biological species data — the results were unambiguous. Ewert amassed a total of nine massive genetic databases. In every single one, without exception, the dependency graph model surpassed common descent. Darwin could never have even dreamt of a test on such a massive scale. Darwin also could never have dreamt of the sheer magnitude of the failure of his theory. Because you see, Ewert’s results do not reveal two competitive models with one model edging out the other. We are not talking about a few decimal points difference. For one of the data sets (HomoloGene), the dependency graph model was superior to common descent by a factor of 10,064. The comparison of the two models yielded a preference for the dependency graph model of greater than ten thousand. Ten thousand is a big number. But it gets worse, much worse. Ewert used Bayesian model selection which compares the probability of the data set given the hypothetical models. In other words, given the model (dependency graph or common descent), what is the probability of this particular data set? Bayesian model selection compares the two models by dividing these two conditional probabilities. The so-called Bayes factor is the quotient yielded by this division. The problem is that the common descent model is so incredibly inferior to the dependency graph model that the Bayes factor cannot be typed out. In other words, the probability of the data set, given the dependency graph model, is so much greater than the probability of the data set given the common descent model, that we cannot type the quotient of their division. Instead, Ewert reports the logarithm of the number. Remember logarithms? Remember how 2 really means 100, 3 means 1,000, and so forth? Unbelievably, the 10,064 value is the logarithm (base value of 2) of the quotient! In other words, the probability of the data on the dependency graph model is so much greater than that given the common descent model, we need logarithms even to type it out. If you tried to type out the plain number, you would have to type a 1 followed by more than 3,000 zeros. That’s the ratio of how probable the data are on these two models! By using a base value of 2 in the logarithm we express the Bayes factor in bits. So the conditional probability for the dependency graph model has a 10,064 advantage over that of common descent. 10,064 bits is far, far from the range in which one might actually consider the lesser model. See, for example, the Bayes factor Wikipedia page, which explains that a Bayes factor of 3.3 bits provides “substantial” evidence for a model, 5.0 bits provides “strong” evidence, and 6.6 bits provides “decisive” evidence. This is ridiculous. 6.6 bits is considered to provide “decisive” evidence, and when the dependency graph model case is compared to comment descent case, we get 10,064 bits. But It Gets Worse The problem with all of this is that the Bayes factor of 10,064 bits for the HomoloGene data set is the very best case for common descent. For the other eight data sets, the Bayes factors range from 40,967 to 515,450. In other words, while 6.6 bits would be considered to provide “decisive” evidence for the dependency graph model, the actual, real, biological data provide Bayes factors of 10,064 on up to 515,450. We have known for a long time that common descent has failed hard. In Ewert’s new paper, we now have detailed, quantitative results demonstrating this. And Ewert provides a new model, with a far superior fit to the data. https://evolutionnews.org/2018/07/new-paper-by-winston-ewert-demonstrates-superiority-of-design-model/ Response to a Critic: But What About Undirected Graphs? - Andrew Jones - July 24, 2018 Excerpt: The thing is, Ewert specifically chose Metazoan species because “horizontal gene transfer is held to be rare amongst this clade.” Likewise, in Metazoa, hybridization is generally restricted to the lower taxonomic groupings such as species and genera — the twigs and leaves of the tree of life. In a realistic evolutionary model for Metazoa, we can expect to get lots of “reticulation” at lower twigs and branches, but the main trunk and branches ought to have a pretty clear tree-like form. In other words, a realistic undirected graph of Metazoa should look mostly like a regular tree. https://evolutionnews.org/2018/07/response-to-a-critic-but-what-about-undirected-graphs/ This Could Be One of the Most Important Scientific Papers of the Decade - July 23, 2018 Excerpt: Now we come to Dr. Ewert’s main test. He looked at nine different databases that group genes into families and then indicate which animals in the database have which gene families. For example, one of the nine databases (Uni-Ref-50) contains more than 1.8 million gene families and 242 animal species that each possess some of those gene families. In each case, a dependency graph fit the data better than an evolutionary tree. This is a very significant result. Using simulated genetic datasets, a comparison between dependency graphs and evolutionary trees was able to distinguish between multiple evolutionary scenarios and a design scenario. When that comparison was done with nine different real genetic datasets, the result in each case indicated design, not evolution. Please understand that the decision as to which model fit each scenario wasn’t based on any kind of subjective judgement call. Dr. Ewert used Bayesian model selection, which is an unbiased, mathematical interpretation of the quality of a model’s fit to the data. In all cases Dr. Ewert analyzed, Bayesian model selection indicated that the fit was decisive. An evolutionary tree decisively fit the simulated evolutionary scenarios, and a dependency graph decisively fit the computer programs as well as the nine real biological datasets. http://blog.drwile.com/this-could-be-one-of-the-most-important-scientific-papers-of-the-decade/ Why should mitochondria define species? - 2018 Excerpt: The particular mitochondrial sequence that has become the most widely used, the 648 base pair (bp) segment of the gene encoding mitochondrial cytochrome c oxidase subunit I (COI),,,, The pattern of life seen in barcodes is a commensurable whole made from thousands of individual studies that together yield a generalization. The clustering of barcodes has two equally important features: 1) the variance within clusters is low, and 2) the sequence gap among clusters is empty, i.e., intermediates are not found.,,, Excerpt conclusion: , ,The simple hypothesis is that the same explanation offered for the sequence variation found among modern humans applies equally to the modern populations of essentially all other animal species. Namely that the extant population, no matter what its current size or similarity to fossils of any age, has expanded from mitochondrial uniformity within the past 200,000 years.,,, https://phe.rockefeller.edu/news/wp-content/uploads/2018/05/Stoeckle-Thaler-Final-reduced.pdf Sweeping gene survey reveals new facets of evolution – May 28, 2018 Excerpt: Darwin perplexed,,, And yet—another unexpected finding from the study—species have very clear genetic boundaries, and there’s nothing much in between. “If individuals are stars, then species are galaxies,” said Thaler. “They are compact clusters in the vastness of empty sequence space.” The absence of “in-between” species is something that also perplexed Darwin, he said. https://phys.org/news/2018-05-gene-survey-reveals-facets-evolution.html
bornagain77
Bornagain77: It's amazing how much you misunderstand me, even if I have repeatedly tried to explain my views to you. 1) "Interesting claim of absolute certainty from you given the discontinuous nature of the fossil record, the discontinuous nature of the genetic evidence, and the fact that no one has ever changed the basic body plan of an organism into another body plan." Interesting claims, that have nothing to do with my belief in CD, and about which I can absolutely agree with you. I absolutely believe that the fossil record is discontinuous, that genetic evidence is discontinuous, and that no one has ever changed the basic body plan of an organism into another body plan. And so? 2) "Perhaps, given your poverty of empirical warrant, a bit more modest measure of certainty would be wise on your part?" I don't believe that scientific certanty is ever absolute. I use "absolutely" to express my strength of certainty that there is empirical warrant of CD. And I have explained why, many times, even to you. As I have explained many times to you what I mean by CD. But I am not sure that you really listen to me. That's OK, I believe in free will, as you probably know. 3) "For instance, it seems you are holding somewhat to a reductive materialistic framework in your ‘absolute’ certainty about CD, and yet, the failure of reductive materialism to be able to explain the basic form and/or body plan of any particular organism occurs at a very low level. Much lower than DNA itself." I am in no way a reductionist, least of all a materialist. My certainty about CD only derives from scientific facts, and from what I believe to be the most reasonable way to interpret them. As I have tried to explain many times. Essentially, the reasons why I believe in CD (again, the type of CD that I believe in, and that I have tried to explain to you many times) are essentially of the same type for which I believe in Intelligent Design. There is nothing reductionist or materialist in them. Only my respect for facts. For example, I do believe that we do not understand at all how body plans are implemented. You seem to know more. I am happy for you. 4) "In other words, even with a complete microscopic description of an organism, it is impossible for you to have ‘absolute’ certainty about the macroscopic behavior of that organism much less to have ‘absolute’ certainty about CD." I have just stated that IMO we don't understand at all how body plans are implemented. Moreover, I don't believe at all that we have any complete microscopic description of any living organsism. We are absolutely (if you allow the word) distant from that. OK. But I still don't understand what that has to do with CD. For the last time: CD, for me, just means that there is very strong evidence that the molecular information in DNA and proteins of already existing species is physically passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on. As proven by the differences in neutral sites, between species. I hope this is the last time I have to tell you that. gpuccio
"I absolutely believe in CD" Interesting claim of absolute certainty from you given the discontinuous nature of the fossil record, the discontinuous nature of the genetic evidence, and the fact that no one has ever changed the basic body plan of an organism into another body plan. Perhaps, given your poverty of empirical warrant, a bit more modest measure of certainty would be wise on your part? For instance, it seems you are holding somewhat to a reductive materialistic framework in your 'absolute' certainty about CD, and yet, the failure of reductive materialism to be able to explain the basic form and/or body plan of any particular organism occurs at a very low level. Much lower than DNA itself. In the following article entitled 'Quantum physics problem proved unsolvable: Gödel and Turing enter quantum physics', which studied the derivation of macroscopic properties from a complete microscopic description, the researchers remark that even a perfect and complete description of the microscopic properties of a material is not enough to predict its macroscopic behaviour.,,, The researchers further commented that their findings challenge the reductionists' point of view, as the insurmountable difficulty lies precisely in the derivation of macroscopic properties from a microscopic description."
Quantum physics problem proved unsolvable: Gödel and Turing enter quantum physics - December 9, 2015 Excerpt: A mathematical problem underlying fundamental questions in particle and quantum physics is provably unsolvable,,, It is the first major problem in physics for which such a fundamental limitation could be proven. The findings are important because they show that even a perfect and complete description of the microscopic properties of a material is not enough to predict its macroscopic behaviour.,,, "We knew about the possibility of problems that are undecidable in principle since the works of Turing and Gödel in the 1930s," added Co-author Professor Michael Wolf from Technical University of Munich. "So far, however, this only concerned the very abstract corners of theoretical computer science and mathematical logic. No one had seriously contemplated this as a possibility right in the heart of theoretical physics before. But our results change this picture. From a more philosophical perspective, they also challenge the reductionists' point of view, as the insurmountable difficulty lies precisely in the derivation of macroscopic properties from a microscopic description." http://phys.org/news/2015-12-quantum-physics-problem-unsolvable-godel.html
In other words, even with a complete microscopic description of an organism, it is impossible for you to have 'absolute' certainty about the macroscopic behavior of that organism much less to have 'absolute' certainty about CD. bornagain77
Bornagain77; No. As you know, I absolutely believe in CD, but that is not the issue here. Homology is homology, and divergence is divergence, whatever the model we use to explain them. I just wanted to show an example of a protein (RelA), indeed a TF, where both homology (in the DBD) and divergence (in the TADs) are certainly linked to function. When I want to "push" for CD, I know how to do that. gpuccio
Silver Asiatic and all: OK, a few words about the myth of "self organization". You say: "But we don’t have enough data on how he (and others) believe self-organization really works as a substitute for Darwinian mechanisms, and that weakens support for Design in some ways." It is perfectly true that we "don't have enough data" about that. We don't have them because there is none: "self organization" simply does not work as a substitute for Darwinian mechanisms. IOWs, it explain absolutely nothing about functional complexity (not that Darwinian mechanisms do, but at least they try). Let's see. I would say that there is a correct concept of self-organization, and a completely mythological expansion of it to realities that have nothing to do with it. The correct concept of self-organization comes from physics and chemistry, essentially. It is the science behind systems that present some unexpected "order"deriving from the interaction of random components and physical laws. Examples: a) Physics: Heat applied evenly to the bottom of a tray filled with a thin sheet of viscous oil transforms the smooth surface of the oil into an array of hexagonal cells of moving fluid called Bénard convection cells b) Chenistry: A Belousov–Zhabotinsky reaction, or BZ reaction, is a nonlinear chemical oscillator, including bromine and an acid. These reactions are far from equilibrium and remain so for a significant length of time and evolve chaotically, being characterized by a noise-induced order. And so on. Now, the concept of self-organization has been artificially expanded to almost everything, including biology. But the phemomenon is essentially derived from this type of physical models. In general, in these examples, some stochastic system tends to achieve some more or less ordered stabilization towards what is called an attractor. Now, to make things simple, I will just mention a few important points that show how the application of those principles to biology is completely wrong. 1) In all those well known physical systems, the system obeys the laws of physics, and the pattern that "emerges" can very well be explained as an interaction between those laws and some random component. Snowflakes are another example. 2) The property we observe in these systems is some form of order. That is very important. It is the most important reason why self-organization has nothing to do with functional complexity. 3) Functional complexity is the number of specific bits that are necessary to implement a function. It has nothing to do with a generic "order". Take a protein that has an enzymatic activity, for example, and compare it to a snowflake. The snowflake has order, but no complex function. Its order can be explained by simple laws, and the differences between snowflakes can be explained by random differences in the conditions of the system. Instead, the function of a protein strictly depends on the sequence of AAs. It has nothing to do with random components, and it follows a very specific "recipe" coming from outside the system: the specific sequence in the protein, which in turn depends on the specific sequence of nucleotides in the protein coding gene. There is no way that such a specific sequence can be the result of "self-organization". To believe that it is the result of Natural Selection is foolish, but at least it has some superficial rationale. But to believe that it can be the result of self-organization, of physical and chemical laws acting on random components, is total folly. 4) The simple truth is that the sequence of AAs generates function according to chemical rules, but to find what sequence among all possible sequences will have the function requires deep understanding of the rules of chemistry, and extreme computational power. We still are not able to build functional proteins by a top down process. Bottom up processes are more efficient, but still require a lot of knowledge, computation power, and usually strictly guided artificial selection. Even so, we are completely unable to engineer anything like ATP synthase, as I have discussed in detail many times. Nor could ever RV + NS do that. But, certainly, no amount of "self-organization" in the whole reality could even begin to do such a thing. 5) Complex networks like the one I have discussed here certainly elude our understanding in many ways. But one thing is certain: they do require tons of functional information at the level of the sequences in proteins and other parts of the genome to wortk correctly. As we have seen in the OP, mutations in different parts of the system are connected to extremely serious diseases. Of course, no self-organization of any kind can ever correct those small errors in digital functional information. 6) The function of a protein is not an "emerging" quality of the protein any more than the function of a watch is an emerging quality of the gears. The function of a protein depends on a very precise correspondence between the digital sequence of AAs and the laws of biochemistry, which determines the folding and the final structure and status (or statuses) of the protein. This is information. The same information that makes the code for Excel a functional reality. Do we see codes for software emerging from self-organization? We should maybe inform video game programmers of that, they could spare a lot of work and time. In the end, all these debates about self-organizarion, emerging properties and snowflakes have nothing to do with functional information. The only objects that exhibit functional information beyond 500 bits are, still, human artifacts and biological objects. Nothing else. Not snowflakes, not viscous oil, not the game of life. Only human artifacts and biological objects. Those are the only objects in the whole known universe that exhibit thousands, millions, maybe billions of bits strictly aimed at implementing complex and obvious functions. The only existing instances of complex functional information. gpuccio
"What is the problem? What am I missing?" Could be me missing something. I thought you might, with your emphasis on conservation, be pushing for CD again. bornagain77
Bornagain77 at #35: I am not sure that I understand what you mean. My theory? Falsification? Counterexamples? At #12 you quote a paper that says: "Similarity regression inherently quantifies TF motif evolution, and shows that previous claims of near-complete conservation of motifs between human and Drosophila are inflated, with nearly half of the motifs in each species absent from the other, largely due to extensive divergence in C2H2 zinc finger proteins." OK? At #14 I agree with the paper, and add a comment: Indeed, divergence in TF sequences and motifs is certainly one of the main tools of specific transcription control in different organisms. A lot of superficial ideas about TFs is probably due to the rather strong conservation of known DNA binding domanis (DBDs). However, DBDs are only part of the story. The most interesting part of TF sequences is certainly to be found in the less conserved sequences and domains, even in intrinsically disordered sequences." OK? At #29 I reference a paper about RelA, one of the TFs discussed in this OP, that shows a clear example of what I said at #14: homology of the DBD and divergence of the functional TADs between humans and cartilaginous fishes. Which is exactly what was stated in the paper you quoted. What is the problem? What am I missing? gpuccio
Per Gp 32, it is not enough, per falsification, to find examples that support your theory. In other words, I can find plenty of counterexamples. bornagain77
Silver Asiatic at #31: Very good points. Yes, my argument is exactly that as the cell is more than a machine, and yet it implements the same type of functions as traditional machines do, only with much higher flexibility and complexity, it does require a lot more of intelligent design and engineering to be able to work. So, it is absolutely true that the researcher in that paper has made a greater point for Intelligent Design. But, of course, he (or they) will never admit such a thing! And we know very well why. So, the call to "self-organization", or to "stochastic systems". Of course, that's simply mystification. And not even a good one. I will comment on the famous concept of "self-organization" in my next post. gpuccio
Silver Asiatic at #30: I absolutely agree with what you say here! :) gpuccio
Bornagain77 at #12: I believe that my comment at #29 is strictly connected to you observations. It also expands, with a real example, the simple ideas I had already expressed at #14. So, you could like to have a look at it! :) gpuccio
GP
What do you think? Do my arguments in this OP, about harnessing stochastic change to get strict funtion, favour the design hypothesis? Or are they perfectly compatible with a neo-darwinian view of reality?
I think you did a great job, but just a thought … You responded to the notion that supported our view - the researcher says that the cell is not merely engineering but is more dynamic. So, we support that and you showed that the cell is far more than a machine. However, in supporting that researcher's view, has the discussion changed? In this case, the researcher is actually saying that deterministic processes cannot explain these cellular functions. He says it's all about self-organization, etc. Now, what you have done is amplified his statement very wonderfully. However … What remains open are a few things: 1. Why didn't the researcher, stating what you (and we) would and did - just conclude Design? 2. The researcher is attacking Darwinism (subtly) accepting some of it:
This familiar understanding grounds the conviction that a cell's organization can be explained reductionistically, as well as the idea that its molecular pathways can be construed as deterministic circuits. The machine conception of the cell owes a great deal of its success to the methods traditionally used in molecular biology. However, the recent introduction of novel experimental techniques capable of tracking individual molecules within cells in real time is leading to the rapid accumulation of data that are inconsistent with an engineering view of the cell.
… so, hasn't he already conceded the game to us on that point? Could we now show how self-organization is not a strong enough answer for this type of system? I believe we could simply use Nicholson's paper to discredit Darwinism (as he does himself), and our amplification of his work does "favor a design view". But we don't have enough data on how he (and others) believe self-organization really works as a substitute for Darwinian mechanisms, and that weakens support for Design in some ways. Silver Asiatic
GP Agreed. You've done a great job to expose the reality of those systems. The functional relationships are indication of purpose and design, yes. I think what happens also is that evolutionists find some safety in the complexity that you reveal. They assume that nobody will actually go that far "down into the weeds" so they can always claim there's something going on that is far too sophisticated for the average IDist to understand. So, they hide in the details. You've called their bluff and show what is really going on, and it is inexplicable from their mechanisms. They look for an escape but there is none. I agree also that it's not merely a defeat of RM + NS that is indicated, but evidence of design in the actual operation of complex systems. Another tactic we see is that an extremely minor point is attacked and they attempt to show that it could have resulted from a mutation or HGT or drift. If they can make it half-way plausible then their entire claim will stand unrefuted, supposedly. It's a game of hide-and-seek, whack-a-mole. We have to deal with 50 years of story-telling that just continued to build one assumption upon another, without any evidence, and having gained unquestioning support from academia simply on the idea that "evolution is right and every educated and intelligent person believes in it". But even in papers citing evolution they never (or rarely) give the probabilistic outlooks on how it could have happened. Silver Asiatic
To all: This is interesting: Single-molecule dynamics and genome-wide transcriptomics reveal that NF-kB (p65)-DNA binding times can be decoupled from transcriptional activation https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6353211/
Abstract Transcription factors (TFs) regulate gene expression in both prokaryotes and eukaryotes by recognizing and binding to specific DNA promoter sequences. In higher eukaryotes, it remains unclear how the duration of TF binding to DNA relates to downstream transcriptional output. Here, we address this question for the transcriptional activator NF-?B (p65), by live-cell single molecule imaging of TF-DNA binding kinetics and genome-wide quantification of p65-mediated transcription. We used mutants of p65, perturbing either the DNA binding domain (DBD) or the protein-protein transactivation domain (TAD). We found that p65-DNA binding time was predominantly determined by its DBD and directly correlated with its transcriptional output as long as the TAD is intact. Surprisingly, mutation or deletion of the TAD did not modify p65-DNA binding stability, suggesting that the p65 TAD generally contributes neither to the assembly of an "enhanceosome," nor to the active removal of p65 from putative specific binding sites. However, TAD removal did reduce p65-mediated transcriptional activation, indicating that protein-protein interactions act to translate the long-lived p65-DNA binding into productive transcription.
Now, let's try to understand what this means. First of all, just to avoid confusion, p65 is just another name for RelA, the most common among the 5 proteins that contribute to NF-kB dimers. The paper here studied the behavour of the p65(RelA)-p50 dimer, with special focus on the RelA interaction with DNA. Now, we know that RelA, like all TFs, has a DNA binding domain (DBD) which binds specific DNA sites. We also know that the DBD is usually strongly conserved, and is supposed to be the most functional part in the TF. The paper here shows, in brief, that the DBD is really responsible for the DNA binding and for its stability (the duration of the binding), and the duration is connected to transcription. However, it is not the DBD itself that works on transcription, but rather the two protein-protein transactivation domains (TADs). While DNA binding is necessary to activate transcription, mere DNA binding does not work: mutations in the TADs will reduce transcription, even if the DNA binding remains stable. IOWs, it's the TADs that really affect transcription, even if the DBD is necessary. OK, why is that interesting? Let's see. The DBD is located, in the RelA molecule, in the first 300 AAs (the human protein is 551 AAs long). The two TADs are located, instead, in the last part of the molecule, more or less the last 100 - 200 AAs. So, I have blasted the human protein against our old friends, cartilaginous fishes. Is the protein conserved across our usual 400+ million years? The answer is the same as for most TFs: moderately so. In Rhincodon typus, we have about 404 bits of homology, less than 1 bit per aminoacid (bpa). Enough, but not too much. But is it true that the DBD is highly conserved? It certainly is. The 404 bits of homology, indeed, are completely contained in the first 300 AAs or so. IOWs, the homology is practically completely due to the DBD. So yes, the DBD is highly conserved. The rest of the sequence, not at all. In particular, the last 100 - 200 AAs at the C terminal, where the TAD domains are localized, show almost no homology bewteen humans and cartilaginous fishes. But... we know that those TAD domains are essential for the function. It's them that really activate the transcription cascade. We can have no doubt about that! And so? So, this is a clear example of a concept that I have tried to defend many times here. There is function which remains the same through natural history. Therefore, the corresponding sequences are highly conserved. And there is function which changes. Which must change from species to species. Which is more specific to the individual species. That second type of function is not highly conserved at sequence level. Not because it is less essential, but because it is different in different species, and therefore has to change to remain functional. So, in RelA we can distinguish (at least) two different functions: a) The DNA binding: this function is implemented by the DBD (firts 300 AAs). It happens very much in the same way in humans and cartilaginous fishes, and thereofre the corresponding sequences remain highly homologous after 400+ years of evolutionary separation. b) The protein-protein interaction which really actovates the specific transcription: this function is implemented by the TADs (last 200 AAs). It is completely different in cartilaginous fishes and humans, because probably different genes are activated by the same signal, and therefore the corresponding sequence is not conserved. But it is highly functional just the same. In different ways, in the two different species. IOWs, my measure of functional information based on conserved homologies through long evolutionary times does measure functional information, but usually underestimates it. For example, in this case the value of 404 bits would measure only the conserved function in the DBD, but it would miss completely the undeniable functional information in the TAD domains, because that information, while certainly present, is not conserved among species. This is, IMO, a very important point. gpuccio
Silver Asiatic at #24: I think that the amazing complexity of newtork functional configurations in these complex regulation systems is direct evidence of intelligence and purpose. It is, of course, also an obvious falsification of the neo-darwinist paradigm, which cannot even start to try to explain that kind of facts. You are right that post-post-neo-darwinists are trying as well as they can to build new and more fashionable religions, such as self-organization, emerging properties, magical stochastic systems, and any other intangible, imaginary principle that is supposed to help. But believe me, that will not do. That simply does not work. When really pressured, they always go back to the old good fairy tale: RV + NS. In the end, it's the only lie that retains some superficial credibility. The only game in town. Except, of course, design. :) gpuccio
Jawa at #23: Of course Arthur Hunt would be very welcome here. Indeed, any competent defender of the neo-darwinian paradigm would be very welcome here. gpuccio
Jawa and others: Or maybe they don't believe that there is anything in my arguments tha really favours design. Some have made that objection in the past, I believe. good arguments, but what have they to do with design? Well. I believe that they have a lot to do with design. What do you think? Do my arguments in this OP, about harnessing stochastic change to get strict funtion, favour the design hypothesis? Or are they perfectly compatible with a neo-darwinian view of reality? Just to know... gpuccio
Jawa at #22: Frankly, I don't think they are interested in my arguments. They are probably too bad! gpuccio
jawa
Can’t understand why the anti-ID folks allow GP to discredit neo-Darwinism so boldly in his OPs and commentaries.
Discrediting Neo-Darwinism is one phase that we go through. Probably there is enough dissent within evolutionary science that they will back off from the more extreme proclamations of the greatness of Darwin. Mainstream science mags are openly saying things like "it overturns Darwinian ideas". They don't mind the idea of revolution. They're building a defense for the next phase. It won't be Neo-Darwinism but a collection of ad hoc observations and speculations. They explain that things happen. Self-organizing chemical determination caused it. They don't need mutations or selection. Any mindless actions will do. It's not about Darwin, and it's not even about evolution. It's not even about science. It's all just a program to explain the world according to a pre-existing belief system. Even materialism is expendable when it is shown to be ridiculous. They will sell-out and jettison all previous claims and everything they use and just grab another (that's how science works, we hear) - it's all about protecting their inner belief. That's the one thing that drives all of it. We know what that inner belief is, and ID is an attempt to chip away at it from the edges - indirectly and carefully, using their own terminology and doctrines. We've done well. But defeating Darwin is only a small part. Behe has been doing it for years and they'll eventually accept his findings. The evolution story line will just adjust itself. Proving that there is actually Intelligent Design is much more difficult and without a knock-down argument, our best efforts remain ignored. Silver Asiatic
Sorry, someone called my attention to my misspelling of UKY Professor Art Hunt’s name in my previous post. Mea culpa. :( I was referring to this distinguished professor who has posted interesting comments here before: https://pss.ca.uky.edu/person/arthur-hunt http://www.uky.edu/~aghunt00/agh.html It would be interesting to have him back here debating GP. jawa
Can’t understand why the anti-ID folks allow GP to discredit neo-Darwinism so boldly in his OPs and commentaries. Are there objectors left out there? Have they missed GP’s arguments? Where are professors Larry Moran, Art Hunter, and other distinguished academic personalities that openly oppose ID? Did they give up? Do they lack solid arguments to debate GP? Are they afraid of experiencing public embarrassment? jawa
To all: This paper deals in more detail with the role of NF-kB system in synaptic plasticity, memory and learning: Transcriptional Control of Synaptic Plasticity by Transcription Factor NF-kB https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4736603/
Abstract Activation of nuclear factor kappa B (NF-kB) transcription factors is required for the induction of synaptic plasticity and memory formation. All components of this signaling pathway are localized at synapses, and transcriptionally active NF-kB dimers move to the nucleus to translate synaptic signals into altered gene expression. Neuron-specific inhibition results in altered connectivity of excitatory and inhibitory synapses and functionally in selective learning deficits. Recent research on transgenic mice with impaired or hyperactivated NF-kB gave important insights into plasticity-related target gene expression that is regulated by NF-kB. In this minireview, we update the available data on the role of this transcription factor for learning and memory formation and comment on cross-sectional activation of NF-kB in the aged and diseased brain that may directly or indirectly affect kB-dependent transcription of synaptic genes. 1. Introduction Acquisition and consolidation of new information by neuronal networks often referred to as learning and memory formation depend on the instant alterations of electrophysiological parameters of synaptic connections (long-term potentiation, long-term depression), on the generation of new neurons (neuroneogenesis), on the outgrowth of axons and dendrites (neuritogenesis), and on the formation/remodulation of dendritic spines (synaptogenesis). The transmission of active synapses becomes potentiated by additional opening of calcium channels and incorporation of preexisting channel proteins, that is, during the induction of long-term potentiation. In contrast, long-term structural reorganization of the neuronal network depends on the induction of specific gene expression programs [1]. The transcription factor NF-kB has been shown to be involved in all of the aforementioned processes of learning-associated neuronal plasticity, that is, long-term potentiation, neuroneogenesis, neuritogenesis, and synaptogenesis (for review, see [2]).
A few concepts: a) All NF-kB Pathway Proteins Are Present at the Synapse. b) NF-kB Becomes Activated at Active Synapses c) NF-kB Induces Expression of Target Genes for Synaptic Plasticity d) Activation of NF-kB Is Required for Learning and Memory Formation gpuccio
To all: We have said that NF-kB is an ubiquitously expressed transcription factor. It really is! So, while its more understood functions are mainly related to the immune system and inflammation, it does implement competely different functions in other types of cells. This very interesting paper, which is part of the research topic quoted at #3, is about the increasing evidennces of the important role of the NK-kB system in the Central Nervous System: Cellular Specificity of NF-?B Function in the Nervous System https://www.frontiersin.org/articles/10.3389/fimmu.2019.01043/full And, again, it focuses on the cellular specificity of the NF-kB response. Here is the introduction:
Nuclear Factor Kappa B (NF-kB) is a ubiquitously expressed transcription factor with key functions in a wide array of biological systems. While the role of NF-kB in processes, such as host immunity and oncogenesis has been more clearly defined, an understanding of the basic functions of NF-kB in the nervous system has lagged behind. The vast cell-type heterogeneity within the central nervous system (CNS) and the interplay between cell-type specific roles of NF-kB contributes to the complexity of understanding NF-kB functions in the brain. In this review, we will focus on the emerging understanding of cell-autonomous regulation of NF-?B signaling as well as the non-cell-autonomous functional impacts of NF-?B activation in the mammalian nervous system. We will focus on recent work which is unlocking the pleiotropic roles of NF-kB in neurons and glial cells (including astrocytes and microglia). Normal physiology as well as disorders of the CNS in which NF-kB signaling has been implicated will be discussed with reference to the lens of cell-type specific responses.
Table 1 in the paper lists the following functions for NF-kB in neurons: -Synaptic plasticity -Learning and memory -Synapse to nuclear communication -Developmental growth and survival in response to trophic cues And, for glia: -Immune response -Injury response -Glutamate clearance -Central control of metabolism As can be seen, while the roles in glia cells are more similar to what we would expect from the more common roles in the immune system, the roles in neurons are much more specific and refined. The Table also mentions the following: "The pleiotropic functions of the NF-kB signaling pathway coupled with the cellular diversity of the nervous system mean that this table reflects generalizations, while more specific details are in the text of this review." So, while I certainly invite all interested to look at the "more specific details", I am really left with the strange feeling that, for the same reasons mentioned there (pleiotropic functions, cellular diversity, and probably many other things), everything we know about the NF-kB system, and probably all similar biological systems, really "reflects generalizations". And that should really give us a deep sense of awe. gpuccio
OLV at #17: "Why “at least”? Could there be more?" Yes. There can always be more, in biology. Indeed, strangely, there always is more. :) By the way, nice mini-review about chromatin and transctiption you found! I will certainly read it with great attention. gpuccio
Eugen at #16:
We have classical, celestial and quantum mechanics but this article describes the process of what we should call chemical mechanics. Why not?
Yes, why not? Chemical mechanics? That is a brilliant way to put it! :) gpuccio
GP @11: (Regarding my questions @7) "It requires the binding of general TFs at the promoter and the formation of the pre-initiation complex (which is the same for all genes), plus the binding of specific TFs at one or more enhancer sites, with specific modifications of the chromatin structures. At least." thanks for the explanation. Why "at least"? Could there be more? With the information you provided, I found this: Introduction to the Thematic Minireview Series: Chromatin and transcription OLV
We have classical, celestial and quantum mechanics but this article describes the process of what we should call chemical mechanics. Why not? :) Eugen
To all: This is a more general paper about oscillations in TF nuclear occupancy as a way to regulate transcription: Transcription factor oscillations in neural stem cells: Implications for accurate control of gene expression https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5345753/ The abstract:
Naturally occurring oscillations in glucocorticoids induce a cyclic activation of the glucocorticoid receptor (GR), a well-characterized ligand-activated transcription factor. These cycles of GR activation/deactivation result in rapid GR exchange at genomic response elements and GR recycling through the chaperone machinery, ultimately generating pulses of GR-mediated transcriptional activity of target genes. In a recent article we have discussed the implications of circadian and high-frequency (ultradian) glucocorticoid oscillations for the dynamic control of gene expression in hippocampal neural stem/progenitor cells (NSPCs) (Fitzsimons et al., Front. Neuroendocrinol., 2016). Interestingly, this oscillatory transcriptional activity is common to other transcription factors, many of which regulate key biological functions in NSPCs, such as NF-kB, p53, Wnt and Notch. Here, we discuss the oscillatory behavior of these transcription factors, their role in a biologically accurate target regulation and the potential importance for a dynamic control of transcription activity and gene expression in NSPCs.
And here is the part about NF-kB:
The NF-kB pathway is composed of a group of transcription factors that bind to form homo- or hetero-dimers. Once formed, these protein complexes control several cellular functions such as the response to stress and the regulation of growth, cell cycle, survival, apoptosis and differentiation in NSPCs.14-16 Oscillations in NF-kB were first observed in embryonic fibroblasts, this observation suggested that temporal control of NF-kB activation is coordinated by the sequential degradation and synthesis of inhibitor kappa B (IkB) proteins.3 More recently, oscillations in the relative nuclear/cytosolic concentration of NF-kB transcription factors have been observed in single cells in vivo, indicating this may be an additional regulatory mechanism to control NF-kB-dependent transcriptional activity. Importantly, the frequency and amplitude of these oscillations changed in a cell-type dependent fashion and differentially affected the dynamics of gene expression,5 indicating that NF-kB transcription factors may use changes in the frequency and amplitude of their oscillatory dynamics to regulate the transcription of target genes.1,17 Thus, the NF-kB pathway provides a well-characterized example of how oscillatory transcription factor activity may encode additional, biologically relevant, information for an accurate control of gene expression.
So, these "waves" of nuclear occupancy by TFs, regulating transcription according to their frequency/period and amplitude, seem to be a pattern that is not isolated at all. Maybe more important and common than we can at present imagine. gpuccio
Bornagain77: Yes, I have looked at that paper. Interesting. Indeed, divergence in TF sequences and motifs is certainly one of the main tools of specific transcription control in different organisms. A lot of superficial ideas about TFs is probably due to the rather strong conservation of known DNA binding domanis (DBDs). However, DBDs are only part of the story. The most interesting part of TF sequences is certainly to be found in the less conserved sequences and domains, even in intrinsically disordered sequences. gpuccio
To all: Well, the first paper in the "reasearch topic" I mentioned at #3 is: Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF?B It immediately brings us back to an old and recurring concept: crosstalk Now, if there is one concept that screams design, that is certainly "crosstalk". Because, to have crosstalk, you need at least two intelligent systems, each of them with its own "language", interacting in intelligent ways. Or, of course, at least two intelligent people! :) This paper is about one specific aspect of the NF-kB system: transcription regulation in response to non specific stimuli from infecting agents, the so called innate immune response. You may remember from the OP that the specific receptors for bacterial or viral components (for example bacterial lipopolysaccharide , LPS) are called Toll like receptors (TLRs), and that their activation converges, through its own complex pathways, into the canonical pathway of activation of the NF-kB system. This is a generic way to respond to infections, and is called "innate immune response", to distinguish it from the adaptive immune response, where T and B lymphocytes resognize specific patterns (epitopes) in specific antigens and react to them by a complex memory and amplification process. As we know, the NF-kB system has a very central role in adaptive immunity too, but it is completely different. But let's go back to innate immunity. The response, in this case, is an inflammatory response. This response, of course, is more generic than the refined adaptive immune response, involving antibodies, killer cells and so on. However, even is simpler, the quality and quantity of the inflammatory response must be strictly fine tuned, because otherwise it becomes really dangerous for the tissues. This paragraph sums up the main concepts in the paper:
To ensure effective host defense against pathogens and to maintain tissue integrity, immune cells must integrate multiple signals to produce appropriate responses (14). Cells of the innate immune system are equipped with pattern recognition-receptors (PRRs) that detect pathogen-derived molecules, such as lipopolysaccharides and dsRNA (3). Once activated, PRRs initiate series of intracellular biochemical events that converge on transcription factors that regulate powerful inflammatory gene expression programs (15). To tune inflammatory responses, pathways that do not trigger inflammatory responses themselves may modulate signal transduction from PRRs to transcription factors through crosstalk mechanisms (Figure 1). Crosstalk allows cells to shape the inflammatory response to the context of their microenvironment and history (16). Crosstalk between two signaling pathways may emerge due shared signaling components, direct interactions between pathway-specific components, and regulation of the expression level of a pathway-specific component by the other pathway (1, 17). Since toll-like receptors (TLRs) are the best characterized PRRs, they provide the most salient examples of crosstalk at the receptor module. Key determinants of tissue microenvironments are type I and II interferons (IFNs), which do not activate NF?B, but regulate NF?B-dependent gene expression (18–21). As such, this review focuses on the cross-regulation of the TLR-NF?B signaling axis by type I and II IFNs.
So, a few interesting points: a) TLRs, already a rather complex class of receptors, are part of a wider class of receptors, the pattern recognition-receptors (PRRs). Complexity never stops! b) The interferon system is another, different system implied in innate immunity, especially in viral infections. We all know its importance. Interferons are a complex set of cytokines with its own complex set of receptors and responses. c) Howerver, the interferon system does not directly activate the NF-kB system. In a sense, they are two "parallel" signaling systems, both implied in innate immune responses. d) But, as the paper well outlines, there is a lot of "crosstalk" between the two systems. One interferes with the other at multiple levels. And that crosstalk is very important for a strict fine tuning of the innate immune response and of imflammatory processes. Interesting, isn't it? I quote here the conclusions:
Concluding Remarks Maintaining a delicate balance