
I have recently commented on another thread about a paper that (very correctly) describes cells as dynamic, far from equilibrium systems, rather than as “traditional” machines.
That is true. But, of course, the cell implements the same functions as complex machines do, and much more. My simple point is that, to do that, you need much greater functional complexity than you need to realize a conventional machine.
IOWs, dynamic, far from equilibrium systems that can be as successful as a conventional machine, or more, must certainly be incredibly complex and amazing systems, systems that defy everything else that we already know and that we can conceive. They must not only implement their functional purposes, but they must do that by “harnessing” the constantly changing waves of change, of random noise, of improbability. I have commented on those ideas in the mentioned thread, at posts #5 and #8, and I have quoted at posts #11 and #12 a couple of interesting and pertinent papers, introducing the important concept of robustness: the ability to achieve reliable functional results in spite of random noise and disturbing variation.
In this OP, I would like to present in some detail a very interesting system that shows very well what we can understand, at present, of that kind of amazing systems.
The system I will discuss here is an old friend: it is the NF-kB system of transcription factors (nuclear factor kappa-light-chain-enhancer of activated B cells). We are speaking, therefore, of transcription regulation, a very complex topic that I have already discussed in some depth here:
I will briefly recall here that transcription regulation is the very complex process that allows cells to be completely different while using the same genomic information: IOWs, each type of cell “reads” the genes in the common genome differently, and that allows for the different types of cell differentiation and the different cell responses within the same cell type.
Transcription regulation relies on many different levels of control, which are summarized in the OP quoted above, but a key role is certainly played by Transcription Factors (TFs), proteins that bind DNA and act as activators or inhibitors of transcription at specific sites.
TFs are a fascinating class of proteins. There are a lot of them (1600 – 2000 in humans, almost 10% of all proteins), and they are usually medium sized proteins, about 500 AA long, containing at least one highly conserved domain, the DNA binding domain (DBD), and other, often less understood, functional components.
I quote again here a recent review about human TFs:
The Human Transcription Factors
The NF-kB system is a system of TFs. I have discussed it in some detail in the discussion following the Ubiquitin thread, but I will describe it in a more systematic way here.
In general, I will refer a lot to this very recent paper about it:
Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle
The NF-kB system relies essentially on 5 different TFs (see Fig. 1 A in the paper):
- RelA (551 AAs)
- RelB (579 AAs)
- c-Rel (619 AAs)
- p105/p50 (968 AAs)
- p100/p52 (900 AAs)
Those 5 TFs work by forming dimers, homodimers or heterodimers, for a total of 15 possible combinations, all of which have been found to work in the cell, even if some of them are much more common than others.
Then there are at least 4 inhibitor proteins, collectively called IkBs.
The mechanism is apparently simple enough. The dimers are inhibited by IkBs, and therefore they remain in the cytoplasm in inactive form.
When an appropriate signal arrives at the cell and is received by a membrane receptor, the inhibitor (the IkB molecule) is phosphorylated and then ubiquitinated and detached from the complex. This is done by a protein complex called IKK. The free dimer can then migrate to the nucleus and localize there, where it can act as a TF, binding DNA.
This is the canonical activation pathway, summarized in Fig. 1. There is also a non canonical activation pathway, that we will not discuss for the moment.
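The chain of canonical-pathway steps just described can be sketched as a toy state transition. This is purely illustrative pseudologic in Python (the names are my own, not a real signaling library), just to make the sequence of events explicit:

```python
# A toy state sketch of the canonical activation steps described above.
# Names are illustrative only; this is not a model of real kinetics.
def canonical_activation(signal_received):
    """Return the state of one NF-kB dimer after an (optional) stimulus."""
    state = {"IkB_bound": True, "dimer_location": "cytoplasm"}
    if signal_received:
        # IKK phosphorylates IkB; IkB is then ubiquitinated and degraded
        state["IkB_bound"] = False
        # the freed dimer translocates to the nucleus, where it can bind DNA
        state["dimer_location"] = "nucleus"
    return state

print(canonical_activation(True))
print(canonical_activation(False))
```

Of course, as the rest of the OP argues, the real system is anything but this kind of fixed switch; the sketch only captures the “textbook” outline.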

Mechanism of NF-κB action. In this figure, the NF-κB heterodimer consisting of Rel and p50 proteins is used as an example. While in an inactivated state, NF-κB is located in the cytosol complexed with the inhibitory protein IκBα. Through the intermediacy of integral membrane receptors, a variety of extracellular signals can activate the enzyme IκB kinase (IKK). IKK, in turn, phosphorylates the IκBα protein, which results in ubiquitination, dissociation of IκBα from NF-κB, and eventual degradation of IκBα by the proteasome. The activated NF-κB is then translocated into the nucleus where it binds to specific sequences of DNA called response elements (RE). The DNA/NF-κB complex then recruits other proteins such as coactivators and RNA polymerase, which transcribe downstream DNA into mRNA. In turn, mRNA is translated into protein, resulting in a change of cell function.
Attribution: Boghog2 at English Wikipedia [Public domain]
Now, the purpose of this OP is to show, in greater detail, how this mechanism, apparently moderately simple, is indeed extremely complex and dynamic. Let’s see.
The stimuli.
First of all, we must understand what stimuli, arriving at the cell membrane, are capable of activating the NF-kB system. IOWs, what signals work as inputs.
The main concept is: the NF-kB system is a central pathway activated by many stimuli:
- Inflammation
- Stress
- Free radicals
- Infections
- Radiation
- Immune stimulation
IOWs, a wide variety of aggressive stimuli can activate the system.
The extracellular signal usually arrives at the cell through specific cytokines, for example TNF or IL1, or through pathogen-associated molecules, like bacterial lipopolysaccharides (LPS). Of course there are different and specific membrane receptors, in particular IL-1R (for IL1), TNF-R (for TNF), and many TLRs (Toll-like receptors, for pathogen-associated structures). A special kind of activation is implemented, in B and T lymphocytes, by the immune activation of the specific receptors for antigen epitopes (the B cell receptor, BCR, and the T cell receptor, TCR).
The process through which the activated receptor can activate the NF-kB dimer is rather complex: in the canonical pathway, it involves a macromolecular complex called the IKK (IkB kinase) complex, comprising two catalytic kinase subunits (IKKα and IKKβ) and a regulatory protein (IKKγ/NEMO), and it involves the ubiquitin system in multiple and complex ways. The non canonical pathway is a variation of that. Finally, a specific protein complex (the CBM complex, or CBM signalosome) mediates the transmission from the immune BCR or TCR to the canonical pathway. See Fig. 2:

Figure 3 – NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy
available via license: Creative Commons Attribution 4.0 International
I will not go into further details about this part, but those interested can have a look at this very good paper:
TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme
In particular, Figs. 1, 2, and 3.
In the end, as a result of the activation process, the IkB inhibitor is degraded by the ubiquitin system, and the NF-kB dimer is free to migrate to the nucleus.
An important concept is that this is a “rapid-acting” response system: because the dimers are already present, in inactive form, in the cytoplasm, they do not need to be synthesized de novo, so the system is ready to respond to the activating signal.
The response.
But what is the cellular response?
Again, there are multiple and complex possible responses.
Essentially, this system is a major regulator of innate and adaptive immune responses. As such, it has a central role in the regulation of inflammation, in immunity, in autoimmune processes, and in cancer.
Moreover, the NF-kB system is rather ubiquitous, and is present and active in many different cell types. And, as we have seen, it can be activated by different stimuli, in different ways.
So, the important point is that the response to activation must be (at least):
- Lineage-specific
- Stimulus-specific
IOWs, different cells must be able to respond differently, and each cell type must respond differently to different stimuli. That gives a wide range of possible gene expression patterns at the transcription level.
The following paper is a good review of the topic:
Selectivity of the NF-κB Response
For example, IL2 is induced by NF-kB activation in T cells, but not in B cells (a lineage-specific response). Moreover, specific cell types can undergo specific, and often different, cell destinies after NF-kB activation: for example, NF-kB is strongly involved in the control and regulation of T and B cell development.
From:
30 years of NF-κB: a blossoming of relevance to human pathobiology
“B and T lymphocytes induce NF-κB in adaptive immune responses through the CARD11:Bcl10:MALT1 (CBM) complex (Hayden and Ghosh, 2008). Newly expressed genes promote lymphocyte proliferation and specific immune functions including antibody production by B cells and the generation of cytokines and other anti-pathogen responses by T cells.”
And, in the same cell type, certain promoters regulated by NF-kB require additional signaling (for example, in human dendritic cells the promoters for Il6, Il12b, and MCP-1 require additional p38-dependent histone phosphorylation to be activated), while others can be activated directly (a stimulus-specific response).
So, to sum up:
- A variety of stimuli can activate the system in different ways
- The system itself has its complexity (different dimers)
- The response can be widely different, according to the cell type where it happens, and to the type of stimuli that have activated the system, and probably according to other complex variables.
- The possible responses include a wide range of regulations of inflammation, of the immune system, of cell specifications or modifications, and so on.
How does it work?
So, what do we know about the working of such a system?
I will ignore, for the moment, the many complexities of the activation pathways, both canonical and non canonical, the role of cytokines and receptors and IKK complexes, and the many facets of NEMO and of the involvement of the ubiquitin system.
For simplicity, we will start with the activated system: the IkB inhibitor has been released from the inactive complex in the cytoplasm, and some form of NF-kB dimer is ready to migrate to the nucleus.
Let’s remember that the purpose of this OP is to show that the system works as a dynamic, far from equilibrium system, rather than as a “traditional” machine. And that such a way to work is an even more amazing example of design and functional complexity.
To do that, I will rely mainly on the recent paper quoted at the beginning:
Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle
The paper is essentially about the NF-kB Target Selection Puzzle. IOWs, it tries to analyze what we know about the specificity of the response. How are specific patterns of transcription achieved after the activation of the system? What mechanisms allow the selection of the right genes to be transcribed (the targets) to implement the specific patterns according to cell type, context, and type of stimuli?
A “traditional” view of the system as a machine would try to establish rather fixed connections. For example, some type of dimer is connected to specific stimuli and evokes specific gene patterns. Or some other components modulate the effect of NF-kB, generating diversification and specificity of the response.
Well, those ideas are not completely wrong. In a sense, the system does work also that way. Dimer specificity has a role. Other components have a role. In a sense, but only in a sense, the system works as though it were a traditional machine, and uses some of the mechanisms that we find in the concept of a traditional biological machine.
But that is only a tiny part of the real thing.
The real thing is that the system really works as a dynamic, far from equilibrium system, harnessing huge random/stochastic components to achieve robustness and complexity and flexibility of behavior in spite of all those non finalistic parts.
Let’s see how that happens, at least for the limited understanding we have of it. It is important to consider that this is a system that has been studied a lot, for decades, because of its central role in so many physiological and pathological contexts, and so we know many things. But still, our understanding is very limited, as you will see.
So, let’s go back to the paper. I will try to summarize as simply as possible the main concepts. Anyone who is really interested can refer to the paper itself.
Essentially, the paper analyzes three important and different aspects that contribute to the selection of targets at the genomic level by our TFs (IOWs, our NF-kB dimers, ready to migrate to the nucleus). As the title itself summarizes, they are:
- Abundance
- Affinity
- Binding site availability
1. Abundance
Abundance here refers to two different variables: the abundance of NF-kB binding sites in the genome, and the abundance of nucleus-localized NF-kB dimers. Let’s consider them separately.
1a) Abundance of NF-kB Binding Sites in the genome:
It is well known that TFs bind specific sites in the genome. For NF-kB TFs, the following consensus kB site pattern has been found:
5′-GGGRNWYYCC-3′
where R, W, Y, and N, respectively denote purine, adenine or thymine, pyrimidine, and any nucleotide.
That simply means that any sequence corresponding to that pattern in the genome can, in principle, bind NF-kB dimers.
So the problem is: how many such sequences do exist in the human genome?
Well, a study based on RelA has estimated about 10^4 consensus sequences in the whole genome; but, as NF-kB dimers seem to bind even incomplete consensus sites, the total number of potential binding sites could be nearer to 10^6.
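As a sketch of how such counting could work in principle, the degenerate consensus can be translated into a regular expression and scanned against a sequence. This is a toy Python illustration under my own assumptions, not the method used in the studies cited:

```python
import re

# IUPAC degenerate codes appearing in the kB consensus 5'-GGGRNWYYCC-3'
IUPAC = {"R": "[AG]", "W": "[AT]", "Y": "[CT]", "N": "[ACGT]"}

def consensus_to_regex(consensus):
    """Translate a degenerate consensus string into a regular expression."""
    return "".join(IUPAC.get(base, base) for base in consensus)

KB_SITE = re.compile(consensus_to_regex("GGGRNWYYCC"))

def count_kb_sites(sequence):
    """Count (possibly overlapping) kB consensus matches on one strand."""
    # a lookahead is used so that overlapping matches are not missed
    return len(re.findall(f"(?=({KB_SITE.pattern}))", sequence))

# Toy example: a short sequence containing one canonical kB site (GGGACTTTCC)
print(count_kb_sites("ATGGGACTTTCCTTAGC"))  # -> 1
```

A real genome-wide count would of course also scan the reverse strand and, as the paper notes, would have to decide how to treat incomplete consensus sites, which is exactly where the estimate balloons from ~10^4 toward ~10^6.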
1b) Abundance of Nucleus-Localized NF-kB Dimers:
An estimate of the abundance of dimers in the nucleus after activation of the system is about 1.5 × 10^5 molecules, but again that is derived from studies of RelA only. Moreover, the number of molecules and the type of dimer can probably vary a great deal according to cell type.
So the crucial variable, the ratio between binding sites and available dimers, which could help us understand the degree of site saturation in the nucleus, remains rather undetermined, and it seems very likely that it can vary a lot in different circumstances.
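Just to make the uncertainty concrete, here is the back-of-the-envelope arithmetic implied by the order-of-magnitude figures quoted above (both derived from RelA-based studies):

```python
# Order-of-magnitude ratio of nucleus-localized dimers to potential
# binding sites, using the figures quoted in the text.
dimers = 1.5e5                    # nucleus-localized dimers after activation
sites_low, sites_high = 1e4, 1e6  # strict consensus vs. including partial sites

ratio_if_few_sites = dimers / sites_low    # 15 dimers per site
ratio_if_many_sites = dimers / sites_high  # 0.15 dimers per site
print(ratio_if_few_sites, ratio_if_many_sites)
```

Depending on which site count is closer to reality, the system sits anywhere from a 15-fold excess of dimers over sites to a nearly 7-fold excess of sites over dimers, which is exactly why the saturation question remains open.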
But there is another very interesting aspect of the concentration of dimers in the nucleus. According to some studies, NF-kB seems to generate oscillations of its nuclear content in some cell types, and those oscillations can be a way to generate specific transcription patterns:
NF-kB oscillations translate into functionally related patterns of gene expression
For example, this very recent paper:
NF-κB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration
shows at Fig. 3 the occupancy curve of binding sites at nuclear level after NF-kB activation in two different cell types.
In fibroblasts, the curve is a periodic oscillation, with a frequency that varies according to various factors, and translates into different transcription scenarios accordingly:
Gene expression dynamics scale with the period (g1) and amplitude (g2) of these oscillations, which are influenced by variables such as signal strength, duration, and receptor identity.
In macrophages, instead, the curve is rather:
a single, strong nuclear translocation event which persists for as long as the stimulus remains and tends to remain above baseline for an extended period of time.
In this case, the type of transcription will probably be regulated by the area under the curve, rather than by the period and amplitude of the oscillations, as happens in fibroblasts.
Interestingly, while in previous studies it seemed that the concentration of nuclear dimers could be sufficient to saturate most or all binding sites, that has been found not to be the case in more recent studies. Again from the paper about abundance:
in fact, this lack of saturation of the system is necessary to generate stimulus- and cell-type specific gene expression profiles
Moreover, the binding itself seems to be rather short-lived:
Interestingly, it is now thought that most functional NF-kB interactions with chromatin—interactions that lead to a change in transcription—are fleeting… a subsequent study using FRAP in live cells expressing RelA-GFP showed that most RelA-DNA interactions are actually quite dynamic, with half-lives of a few seconds… Indeed, a recent study used single-molecule tracking of individual Halo-tagged RelA molecules in live cells to show that the majority (∼96%) of RelA undergoes short-lived interactions lasting on average ∼0.5 s, while just ∼4% of RelA molecules form more stable complexes with a lifetime of ∼4 s.
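The single-molecule figures quoted above imply a strikingly short average residence time; a line of illustrative arithmetic makes that explicit:

```python
# Illustrative arithmetic from the single-molecule figures quoted above:
# ~96% of RelA molecules bind for ~0.5 s, ~4% for ~4 s.
frac_short, t_short = 0.96, 0.5
frac_stable, t_stable = 0.04, 4.0

mean_residence = frac_short * t_short + frac_stable * t_stable
print(round(mean_residence, 2))  # about 0.64 s on average: fleeting indeed
```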
2. Affinity
Affinity of dimers for DNA sequences is not a clear cut matter. From the paper:
Biochemical DNA binding studies of a wide variety of 9–12 base-pair sequences have revealed that different NF-kB dimers bind far more sequences than previously thought, with different dimer species exhibiting specific but overlapping affinities for consensus and non-consensus kB site sequences.
IOWs, we have different dimers (15 different types) binding with varying affinity different DNA sequences (starting from the classical consensus sequence, but including also incomplete sequences). Remember that those sequences are rather short (the consensus sequence is 10 nucleotides long), and that there are thousands of such sequences in the genome.
Moreover, different bindings can affect transcription differently. Again, from the paper:
How might different consensus kB sites modulate the activity of the NF-kB dimers? Structure-function studies have shown that binding to different consensus kB sites can alter the conformation of the bound NF-kB dimers, thus dictating dimer function. When an NF-kB dimer interacts with a DNA sequence, side chains of the amino acids located in the DNA-binding domains of dimers contact the bases exposed in the groove of the DNA. For different consensus kB site sequences different bases are exposed in this groove, and NF-kB seems to alter its conformation to maximize interactions with the DNA and maintain high binding affinity. Changes in conformation may in turn impact NF-kB binding to co-regulators of transcription, whether these are activating or inhibitory, to specify the strength and dynamics of the transcriptional response. These findings again highlight how the huge array of kB binding site sequences must play a key role in modulating the transcription of target genes.
Quite a complex scenario, I would say!
But there is more:
Finally, as an additional layer of dimer and sequence-specific regulation, each of the subunits can be phosphorylated at multiple sites with, depending on the site, effects on nearly every step of NF-kB activation.
IOWs, the 15 dimers we have mentioned can be phosphorylated in many different ways, and that changes their binding affinities and their effects on transcription.
This section of the paper ends with a very interesting statement:
Overall, when considering the various ways in which NF-kB dimer abundances and their affinity for DNA can be modulated, it becomes clear that with these multiple cascading effects, small differences in consensus kB site sequences and small a priori differences in interaction affinities can ultimately have a large impact on the transcriptional response to NF-kB pathway activation.
Emphasis mine.
This is interesting, because in some way it seems to suggest that the whole system acts like a chaotic system, at least at some basic level. IOWs, small initial differences, maybe even random noise, can potentially have a deep effect on the general working of the whole system.
Unless, of course, there is some higher, powerful level of control.
3. Availability of high affinity kB binding sequences
We have seen that there is a great abundance and variety of binding sequences for NF-kB dimers in the human genome. But, of course, those sequences are not necessarily available. Different cell types will have a different scenario of binding sites availability.
Why?
Because, as we know, the genome and chromatin form a very dynamic system, which can exist in many different states, continuously changing in different cell types and, in the same cell type, in different conditions.
We know rather well the many levels of control that affect DNA and chromatin state. In brief, they are essentially:
- DNA methylation
- Histone modifications (methylation, acetylation, etc)
- Chromatin modifications
- Higher levels of organization, including nuclear localization and TADs (Topologically Associating Domains)
For example, from the paper:
The promoter regions of early response genes have abundant histone acetylation or trimethylation prior to stimulation [e.g., H3K27ac, (67) and H4K20me3, (66)], a chromatin state “poised” for immediate activation… In contrast, promoters of late genes often have hypo-acetylated histones, requiring conformational changes to the chromatin to become accessible. They are therefore unable to recruit NF-kB for up to several hours after stimulation (68), due to the slow process of chromatin remodeling.
We must remember that each wave of NF-kB activation translates into the modified transcription of a lot of different genes at the genome level. It is therefore extremely important to consider which genes are available (IOWs, which genes have promoters that can be reached by the NF-kB signal) in each cell type and cell state.
The paper concludes:
Taken together, chromatin state and chromatin organization strongly influence the selection of DNA binding sites by NF-kB dimers and, most likely, the selection of the target genes that are regulated by these protein-DNA interaction events. Analyses that consider binding events in the context of three-dimensional nuclear organization and chromatin composition will be required to generate a more accurate view of the ways in which NF-kB-DNA binding affects gene transcription.
This is the main scenario. But there are other components that I have not considered in detail, for the sake of brevity: for example, competition between NF-kB dimers, and the complex role and intervention of other co-regulators of transcription.
Does the system work?
But does the system work?
Of course it does. It is a central regulator, as we have said, of many extremely important biological processes, above all immunity. This is the system that decides how immune cells, T and B lymphocytes, have to behave, in terms of cell destiny and cell state. It is of huge relevance in all inflammatory responses, and in our defense against infections. It works, it works very well.
And what happens if it does not work properly?
Of course, like all very complex systems, errors can happen. Those interested can have a look at this recent paper:
30 years of NF-κB: a blossoming of relevance to human pathobiology
First of all, many serious genetic diseases have been linked to mutations in genes involved in the system. You can find a list in Table 1 of the above paper. Among them, for example, some forms of SCID, Severe combined immunodeficiency, one of the most severe genetic diseases of the immune system.
But, of course, a dysfunction of the NF-kB system has a very important role also in autoimmune diseases and in cancer.
Conclusions.
So, let’s try to sum up what we have seen here in the light of the original statement about biological systems that “are not machines”.
The NF-kB system is a perfect example. Even if we still understand very little of how it works, it is rather obvious that it is not a traditional machine.
A traditional machine would work differently. The signal would be transmitted from the membrane to the nucleus in the simplest possible way, without ambiguities and diversions. The Transcription Factor, once activated, would bind, at the level of the genome, very specific sites, each of them corresponding to a definite cascade of specific genes. The result would be clear cut, almost mechanical. Like a watch.

But that’s not the way things happen. There are myriads of variations, of ambiguities, of stochastic components.
The signal arrives at the membrane in multiple ways, very different from one another: IL1, IL17, TNF, bacterial LPS, and immune activation of the B cell receptor (BCR) or the T cell receptor (TCR) are all possible signals.
The signal is translated to the NF-kB proteins in very different ways: canonical or non canonical activation, involving complex protein structures such as:
The CBM signalosome, intermediate between immune activation of BCR or TCR and canonical activation of the NF-kB. This complex is made of at least three proteins, CARD11, Bcl10 and MALT1.
The IKK complex in canonical activation: this is made of three proteins, IKK alpha, IKK beta, and NEMO. Its purpose is to phosphorylate IkB, the inhibitor of the dimers, so that it can be ubiquitinated and released from the dimer. Then the dimer can relocate to the nucleus.
Non canonical pathway: it involves the following phosphorylation cascade: NIK -> IKK alpha dimer -> RelB-p100 dimer -> RelB-p52 dimer (the final TF). It operates during the development of lymphoid organs and is responsible for the generation of B and T lymphocytes.
Different kinds of activated dimers relocate to the nucleus.
Different dimers, in varying abundance, interact with many different binding sites: complete or incomplete consensus sites, and probably others. The interaction is usually brief, and it can generate an oscillating pattern, or a more stable one.
Completely different sets of genes are transcribed in different cell types and in different contexts, because of the interaction of NF-kB TFs with their promoters.
Many other factors and systems contribute to the final result.
The chromatin state of the cell at the moment of the NF-kB activation is essential to determine the accessibility of different binding sites, and therefore the final transcription pattern.
All these events and interactions are quick, unstable, far from equilibrium. A lot of possible random noise is involved.
In spite of that amazing complexity and the potentially stochastic nature of the system, reliable transcription regulation and results are obtained in most cases. Those results are essential to immune cell differentiation, immune response, both innate and adaptive, inflammation, apoptosis, and many other crucial cellular processes.
So, let’s go back to our initial question.
Is this the working of a machine?
Of course it is! Because the results are purposeful, reasonably robust and reliable, and govern a lot of complex processes with remarkable elegance and efficiency.
But certainly, it is not a traditional machine. It is a lot more complex. It is a lot more beautiful and flexible.
It works with biological realities and not with transistors and switches. And biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.
It is more similar to a set of extremely clever surfers who succeed in performing elegant and functional figures and motions in spite of the huge contrasting waves.

It is, from all points of view, amazing.
Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.
And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer it must be for a system like NF-kB? Or, for that, like any cellular complex system?
Do you still have any doubts?

On the y axis, homologies with the human protein as bits per amino acid (bpa). On the x axis, approximate time of appearance in millions of years.
The graphic shows the big information jump in vertebrates for all three proteins, especially CARD11.

Added graphic: two very different proteins and their functional history

My biggest concern is not even about evolution vs. ID. It is about the technology used for the machinery of life being orders of magnitude more complex than what our brains seem to be capable of understanding or analyzing. In other words, we’re already way more complex than any machinery we can realistically hope to create. And we already exist (or are being simulated, it doesn’t matter). What purpose do we serve, then, to whoever is in possession of the technology we’re made with?
Eugene:
Thank you for the comment.
Yes, that’s exactly the point I was trying to make.
Well, that’s certainly a much bigger question. And, in many respects, a philosophical one.
However, we can certainly try to get some clues from the design as we see it. For example, I have said very often that the main driving purpose of biological design, far from being mere survival and fitness, as neo-darwinists believe, seems to be the desire to express ever growingly complex life and, through life, ever growingly complex functions.
It should be rather obvious that, if the true purpose of biological beings were to achieve the highest survival and fitness, as neo-darwinists believe, life should easily have stopped at prokaryotes.
To all:
Two of the papers I quote in the OP:
Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle
https://www.frontiersin.org/articles/10.3389/fimmu.2019.00705/full
and:
NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration
https://www.frontiersin.org/articles/10.3389/fimmu.2019.00705/full
are really part of a research topic:
Understanding Immunobiology Through The Specificity of NF-kB
https://www.frontiersin.org/research-topics/7955/understanding-immunobiology-through-the-specificity-of-nf-b#articles
including 8 very interesting and very recent papers about NF-kB, at Frontiers in Immunology.
Here are the titles:
Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF-kB
An NF-kB Activity Calculator to Delineate Signaling Crosstalk: Type I and II Interferons Enhance NF-κB via Distinct Mechanisms
Cellular Specificity of NF-kB Function in the Nervous System
Immune Differentiation Regulator p100 Tunes NF-kB Responses to TNF
Techniques for Studying Decoding of Single Cell Dynamics
NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration (quoted in the OP)
Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle (quoted in the OP)
Signal Distortion: How Intracellular Pathogens Alter Host Cell Fate by Modulating NF-kB Dynamics
You can access all of them from the linked page.
Those papers, as a whole, certainly add a lot to the ideas I have expressed in the OP.
I will have a look at all of them, and discuss here the most interesting things.
Right from the start, GP graciously warns us (curious readers) to fasten our seat belts and get ready for a thrilling ride that should be filled with very insightful but provocative explanations (perhaps a little too technical for some folks):
Please, note that almost a year ago GP wrote this excellent article:
Transcription Regulation: A Miracle Of Engineering
(visited 3,545 times and commented 334 times)
following another very interesting discussion started by PaV a month earlier:
Chromatin Topology: The New (And Latest) Functional Complexity
(visited 3,338 times and commented 241 times)
Before this discussion goes further, I want to share my delight in seeing this excellent article here today and express my deep gratitude to GP for taking time to write it and for leading the discussion that I expect this fascinating (often mind boggling) topic should provoke.
Another GP thought-treat! Yay!!!! KF
I second KF @5.
It’s a pleasure to see a new OP by GP.
However, as usual, it’s so dense that it requires some chewing before it can be digested, at least partially. 🙂
Perhaps this time I see some loud anti-ID folks like the professors from Toronto and Kentucky will dare to present some valid arguments? However, I won’t hold my breath. 🙂
I agree with PeterA @ 4 and join Jawa @6 to second KF@5.
However, before embarking in a careful reading of what GP has written, let me publicly confess here that I still don’t understand certain basic things associated with transcription:
1. are there many DNA segments that can get transcribed by the RNA polymerase to a pre-mRNA that later can be spliced to form the mRNA that goes to translation?
2. what mechanisms determine which of those multiple potential segments is transcribed at a given moment? Don’t they all have starting and ending points? Then why will the RNA-polymerase transcribe one segment and not another? Are the starting marks different for every DNA segment?
3. is this an epigenetic issue or something else?
Perhaps these (most probably dumb) questions have been answered many times in the literature I have read, but I still don’t quite get it. I would fail to answer those questions if I had to pass a test on this subject right now.
Any help with this?
Thanks.
PS. the papers GP has linked in this OP are very interesting.
PeterA:
Thank you. 🙂
Indeed, the topic is fascinating. We really need to go beyond our conventional ideas about biology, armed by the powerful weapons of design inference and functional complexity.
KF:
Thank you! 🙂
Appreciate your enthusiasm! 🙂
Jawa:
Thank you! 🙂
I really hope there will be some interesting discussion.
OLV:
Thank you! 🙂
As you ask questions, here arer my answers:
1. Essentially, all protein coding genes, about 20000 in the human genome.
2. It requires the binding of general TFs at the promoter and the formation of the pre-initiation complex (which is the same for all genes), plus the binding of specific TFs at one or more enhancer sites, with specific modifications of the chromatin structures. At least.
3. Yes. It is an epigenetic process.
Did you see this recent paper Gp?, particularly this, “Even between closely related species there’s a non-negligible portion of TFs that are likely to bind new sequences,”?
To all:
Well, the first paper in the “reasearch topic” I mentioned at #3 is:
Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF?B
It immediately brings us back to an old and recurring concept:
crosstalk
Now, if there is one concept that screams design, that is certainly “crosstalk”.
Because, to have crosstalk, you need at least two intelligent systems, each of them with its own “language”, interacting in intelligent ways. Or, of course, at least two intelligent people! 🙂
This paper is about one specific aspect of the NF-kB system: transcription regulation in response to non specific stimuli from infecting agents, the so called innate immune response.
You may remember from the OP that the specific receptors for bacterial or viral components (for example bacterial lipopolysaccharide , LPS) are called Toll like receptors (TLRs), and that their activation converges, through its own complex pathways, into the canonical pathway of activation of the NF-kB system.
This is a generic way to respond to infections, and is called “innate immune response”, to distinguish it from the adaptive immune response, where T and B lymphocytes resognize specific patterns (epitopes) in specific antigens and react to them by a complex memory and amplification process. As we know, the NF-kB system has a very central role in adaptive immunity too, but it is completely different.
But let’s go back to innate immunity. The response, in this case, is an inflammatory response. This response, of course, is more generic than the refined adaptive immune response, involving antibodies, killer cells and so on. However, even is simpler, the quality and quantity of the inflammatory response must be strictly fine tuned, because otherwise it becomes really dangerous for the tissues.
This paragraph sums up the main concepts in the paper:
So, a few interesting points:
a) TLRs, already a rather complex class of receptors, are part of a wider class of receptors, the pattern recognition-receptors (PRRs). Complexity never stops!
b) The interferon system is another, different system implied in innate immunity, especially in viral infections. We all know its importance. Interferons are a complex set of cytokines with its own complex set of receptors and responses.
c) Howerver, the interferon system does not directly activate the NF-kB system. In a sense, they are two “parallel” signaling systems, both implied in innate immune responses.
d) But, as the paper well outlines, there is a lot of “crosstalk” between the two systems. One interferes with the other at multiple levels. And that crosstalk is very important for a strict fine tuning of the innate immune response and of imflammatory processes.
Interesting, isn’t it?
I quote here the conclusions:
As usual, emphasis is mine.
Please note the “have evolved” at the beginning, practically used by default instead of a simple “do exist” or “can be observed”. 🙂
Bornagain77:
Yes, I have looked at that paper. Interesting.
Indeed, divergence in TF sequences and motifs is certainly one of the main tools of specific transcription control in different organisms.
A lot of superficial ideas about TFs is probably due to the rather strong conservation of known DNA binding domanis (DBDs). However, DBDs are only part of the story. The most interesting part of TF sequences is certainly to be found in the less conserved sequences and domains, even in intrinsically disordered sequences.
To all:
This is a more general paper about oscillations in TF nuclear occupancy as a way to regulate transcription:
Transcription factor oscillations in neural stem cells: Implications for accurate control of gene expression
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5345753/
The abstract:
And here is the part about NF-kB:
So, these “waves” of nuclear occupancy by TFs, regulating transcription according to their frequency/period and amplitude, seem to be a pattern that is not isolated at all. Maybe more important and common than we can at present imagine.
We have classical, celestial and quantum mechanics but this article describes the process of what we should call chemical mechanics. Why not? 🙂
GP @11:
(Regarding my questions @7)
“It requires the binding of general TFs at the promoter and the formation of the pre-initiation complex (which is the same for all genes), plus the binding of specific TFs at one or more enhancer sites, with specific modifications of the chromatin structures. At least.”
thanks for the explanation.
Why “at least”? Could there be more?
With the information you provided, I found this:
Introduction to the Thematic Minireview Series: Chromatin and transcription
Eugen at #16:
Yes, why not?
Chemical mechanics? That is a brilliant way to put it! 🙂
OLV at #17:
“Why “at least”? Could there be more?”
Yes. There can always be more, in biology. Indeed, strangely, there always is more. 🙂
By the way, nice mini-review about chromatin and transctiption you found! I will certainly read it with great attention.
To all:
We have said that NF-kB is an ubiquitously expressed transcription factor. It really is!
So, while its more understood functions are mainly related to the immune system and inflammation, it does implement competely different functions in other types of cells.
This very interesting paper, which is part of the research topic quoted at #3, is about the increasing evidennces of the important role of the NK-kB system in the Central Nervous System:
Cellular Specificity of NF-?B Function in the Nervous System
https://www.frontiersin.org/articles/10.3389/fimmu.2019.01043/full
And, again, it focuses on the cellular specificity of the NF-kB response.
Here is the introduction:
Table 1 in the paper lists the following functions for NF-kB in neurons:
-Synaptic plasticity
-Learning and memory
-Synapse to nuclear communication
-Developmental growth and survival in response to trophic cues
And, for glia:
-Immune response
-Injury response
-Glutamate clearance
-Central control of metabolism
As can be seen, while the roles in glia cells are more similar to what we would expect from the more common roles in the immune system, the roles in neurons are much more specific and refined.
The Table also mentions the following:
“The pleiotropic functions of the NF-kB signaling pathway coupled with the cellular diversity of the nervous system mean that this table reflects generalizations, while more specific details are in the text of this review.”
So, while I certainly invite all interested to look at the “more specific details”, I am really left with the strange feeling that, for the same reasons mentioned there (pleiotropic functions, cellular diversity, and probably many other things), everything we know about the NF-kB system, and probably all similar biological systems, really “reflects generalizations”.
And that should really give us a deep sense of awe.
To all:
This paper deals in more detail with the role of NF-kB system in synaptic plasticity, memory and learning:
Transcriptional Control of Synaptic Plasticity by Transcription Factor NF-kB
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4736603/
A few concepts:
a) All NF-kB Pathway Proteins Are Present at the Synapse.
b) NF-kB Becomes Activated at Active Synapses
c) NF-kB Induces Expression of Target Genes for Synaptic Plasticity
d) Activation of NF-kB Is Required for Learning and Memory Formation
Can’t understand why the anti-ID folks allow GP to discredit neo-Darwinism so boldly in his OPs and commentaries. Are there objectors left out there? Have they missed GP’s arguments?
Where are professors Larry Moran, Art Hunter, and other distinguished academic personalities that openly oppose ID?
Did they give up? Do they lack solid arguments to debate GP?
Are they afraid of experiencing public embarrassment?
Sorry, someone called my attention to my misspelling of UKY Professor Art Hunt’s name in my previous post. Mea culpa. 🙁
I was referring to this distinguished professor who has posted interesting comments here before:
https://pss.ca.uky.edu/person/arthur-hunt
http://www.uky.edu/~aghunt00/agh.html
It would be interesting to have him back here debating GP.
jawa
Discrediting Neo-Darwinism is one phase that we go through. Probably there is enough dissent within evolutionary science that they will back off from the more extreme proclamations of the greatness of Darwin. Mainstream science mags are openly saying things like “it overturns Darwinian ideas”. They don’t mind the idea of revolution. They’re building a defense for the next phase. It won’t be Neo-Darwinism but a collection of ad hoc observations and speculations. They explain that things happen. Self-organizing chemical determination caused it. They don’t need mutations or selection. Any mindless actions will do. It’s not about Darwin, and it’s not even about evolution. It’s not even about science. It’s all just a program to explain the world according to a pre-existing belief system. Even materialism is expendable when it is shown to be ridiculous. They will sell-out and jettison all previous claims and everything they use and just grab another (that’s how science works, we hear) – it’s all about protecting their inner belief. That’s the one thing that drives all of it. We know what that inner belief is, and ID is an attempt to chip away at it from the edges – indirectly and carefully, using their own terminology and doctrines. We’ve done well.
But defeating Darwin is only a small part. Behe has been doing it for years and they’ll eventually accept his findings. The evolution story line will just adjust itself.
Proving that there is actually Intelligent Design is much more difficult and without a knock-down argument, our best efforts remain ignored.
Jawa at #22:
Frankly, I don’t think they are interested in my arguments. They are probably too bad!
Jawa and others:
Or maybe they don’t believe that there is anything in my arguments tha really favours design. Some have made that objection in the past, I believe. good arguments, but what have they to do with design?
Well. I believe that they have a lot to do with design.
What do you think? Do my arguments in this OP, about harnessing stochastic change to get strict funtion, favour the design hypothesis? Or are they perfectly compatible with a neo-darwinian view of reality?
Just to know…
Jawa at #23:
Of course Arthur Hunt would be very welcome here. Indeed, any competent defender of the neo-darwinian paradigm would be very welcome here.
Silver Asiatic at #24:
I think that the amazing complexity of newtork functional configurations in these complex regulation systems is direct evidence of intelligence and purpose. It is, of course, also an obvious falsification of the neo-darwinist paradigm, which cannot even start to try to explain that kind of facts.
You are right that post-post-neo-darwinists are trying as well as they can to build new and more fashionable religions, such as self-organization, emerging properties, magical stochastic systems, and any other intangible, imaginary principle that is supposed to help.
But believe me, that will not do. That simply does not work.
When really pressured, they always go back to the old good fairy tale: RV + NS. In the end, it’s the only lie that retains some superficial credibility. The only game in town.
Except, of course, design. 🙂
To all:
This is interesting:
Single-molecule dynamics and genome-wide transcriptomics reveal that NF-kB (p65)-DNA binding times can be decoupled from transcriptional activation
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6353211/
Now, let’s try to understand what this means.
First of all, just to avoid confusion, p65 is just another name for RelA, the most common among the 5 proteins that contribute to NF-kB dimers. The paper here studied the behavour of the p65(RelA)-p50 dimer, with special focus on the RelA interaction with DNA.
Now, we know that RelA, like all TFs, has a DNA binding domain (DBD) which binds specific DNA sites. We also know that the DBD is usually strongly conserved, and is supposed to be the most functional part in the TF.
The paper here shows, in brief, that the DBD is really responsible for the DNA binding and for its stability (the duration of the binding), and the duration is connected to transcription. However, it is not the DBD itself that works on transcription, but rather the two protein-protein transactivation domains (TADs). While DNA binding is necessary to activate transcription, mere DNA binding does not work: mutations in the TADs will reduce transcription, even if the DNA binding remains stable. IOWs, it’s the TADs that really affect transcription, even if the DBD is necessary.
OK, why is that interesting?
Let’s see. The DBD is located, in the RelA molecule, in the first 300 AAs (the human protein is 551 AAs long). The two TADs are located, instead, in the last part of the molecule, more or less the last 100 – 200 AAs.
So, I have blasted the human protein against our old friends, cartilaginous fishes.
Is the protein conserved across our usual 400+ million years?
The answer is the same as for most TFs: moderately so. In Rhincodon typus, we have about 404 bits of homology, less than 1 bit per aminoacid (bpa). Enough, but not too much.
But is it true that the DBD is highly conserved?
It certainly is. The 404 bits of homology, indeed, are completely contained in the first 300 AAs or so. IOWs, the homology is practically completely due to the DBD.
So yes, the DBD is highly conserved.
The rest of the sequence, not at all.
In particular, the last 100 – 200 AAs at the C terminal, where the TAD domains are localized, show almost no homology bewteen humans and cartilaginous fishes.
But… we know that those TAD domains are essential for the function. It’s them that really activate the transcription cascade. We can have no doubt about that!
And so?
So, this is a clear example of a concept that I have tried to defend many times here.
There is function which remains the same through natural history. Therefore, the corresponding sequences are highly conserved.
And there is function which changes. Which must change from species to species. Which is more specific to the individual species.
That second type of function is not highly conserved at sequence level. Not because it is less essential, but because it is different in different species, and therefore has to change to remain functional.
So, in RelA we can distinguish (at least) two different functions:
a) The DNA binding: this function is implemented by the DBD (firts 300 AAs). It happens very much in the same way in humans and cartilaginous fishes, and thereofre the corresponding sequences remain highly homologous after 400+ years of evolutionary separation.
b) The protein-protein interaction which really actovates the specific transcription: this function is implemented by the TADs (last 200 AAs). It is completely different in cartilaginous fishes and humans, because probably different genes are activated by the same signal, and therefore the corresponding sequence is not conserved.
But it is highly functional just the same. In different ways, in the two different species.
IOWs, my measure of functional information based on conserved homologies through long evolutionary times does measure functional information, but usually underestimates it. For example, in this case the value of 404 bits would measure only the conserved function in the DBD, but it would miss completely the undeniable functional information in the TAD domains, because that information, while certainly present, is not conserved among species.
This is, IMO, a very important point.
GP
Agreed. You’ve done a great job to expose the reality of those systems. The functional relationships are indication of purpose and design, yes. I think what happens also is that evolutionists find some safety in the complexity that you reveal. They assume that nobody will actually go that far “down into the weeds” so they can always claim there’s something going on that is far too sophisticated for the average IDist to understand. So, they hide in the details.
You’ve called their bluff and show what is really going on, and it is inexplicable from their mechanisms. They look for an escape but there is none. I agree also that it’s not merely a defeat of RM + NS that is indicated, but evidence of design in the actual operation of complex systems.
Another tactic we see is that an extremely minor point is attacked and they attempt to show that it could have resulted from a mutation or HGT or drift. If they can make it half-way plausible then their entire claim will stand unrefuted, supposedly.
It’s a game of hide-and-seek, whack-a-mole. We have to deal with 50 years of story-telling that just continued to build one assumption upon another, without any evidence, and having gained unquestioning support from academia simply on the idea that “evolution is right and every educated and intelligent person believes in it”. But even in papers citing evolution they never (or rarely) give the probabilistic outlooks on how it could have happened.
GP
I think you did a great job, but just a thought …
You responded to the notion that supported our view – the researcher says that the cell is not merely engineering but is more dynamic. So, we support that and you showed that the cell is far more than a machine.
However, in supporting that researcher’s view, has the discussion changed?
In this case, the researcher is actually saying that deterministic processes cannot explain these cellular functions. He says it’s all about self-organization, etc.
Now, what you have done is amplified his statement very wonderfully. However …
What remains open are a few things:
1. Why didn’t the researcher, stating what you (and we) would and did – just conclude Design?
2. The researcher is attacking Darwinism (subtly) accepting some of it:
… so, hasn’t he already conceded the game to us on that point?
Could we now show how self-organization is not a strong enough answer for this type of system?
I believe we could simply use Nicholson’s paper to discredit Darwinism (as he does himself), and our amplification of his work does “favor a design view”. But we don’t have enough data on how he (and others) believe self-organization really works as a substitute for Darwinian mechanisms, and that weakens support for Design in some ways.
Bornagain77 at #12:
I believe that my comment at #29 is strictly connected to you observations. It also expands, with a real example, the simple ideas I had already expressed at #14.
So, you could like to have a look at it! 🙂
Silver Asiatic at #30:
I absolutely agree with what you say here! 🙂
Silver Asiatic at #31:
Very good points.
Yes, my argument is exactly that as the cell is more than a machine, and yet it implements the same type of functions as traditional machines do, only with much higher flexibility and complexity, it does require a lot more of intelligent design and engineering to be able to work.
So, it is absolutely true that the researcher in that paper has made a greater point for Intelligent Design.
But, of course, he (or they) will never admit such a thing! And we know very well why.
So, the call to “self-organization”, or to “stochastic systems”.
Of course, that’s simply mystification. And not even a good one.
I will comment on the famous concept of “self-organization” in my next post.
Per Gp 32, it is not enough, per falsification, to find examples that support your theory. In other words, I can find plenty of counterexamples.
Bornagain77 at #35:
I am not sure that I understand what you mean.
My theory? Falsification? Counterexamples?
At #12 you quote a paper that says:
“Similarity regression inherently quantifies TF motif evolution, and shows that previous claims of near-complete conservation of motifs between human and Drosophila are inflated, with nearly half of the motifs in each species absent from the other, largely due to extensive divergence in C2H2 zinc finger proteins.”
OK?
At #14 I agree with the paper, and add a comment:
Indeed, divergence in TF sequences and motifs is certainly one of the main tools of specific transcription control in different organisms.
A lot of superficial ideas about TFs is probably due to the rather strong conservation of known DNA binding domanis (DBDs). However, DBDs are only part of the story. The most interesting part of TF sequences is certainly to be found in the less conserved sequences and domains, even in intrinsically disordered sequences.”
OK?
At #29 I reference a paper about RelA, one of the TFs discussed in this OP, that shows a clear example of what I said at #14: homology of the DBD and divergence of the functional TADs between humans and cartilaginous fishes. Which is exactly what was stated in the paper you quoted.
What is the problem? What am I missing?
“What is the problem? What am I missing?”
Could be me missing something. I thought you might, with your emphasis on conservation, be pushing for CD again.
Silver Asiatic and all:
OK, a few words about the myth of “self organization”.
You say:
“But we don’t have enough data on how he (and others) believe self-organization really works as a substitute for Darwinian mechanisms, and that weakens support for Design in some ways.”
It is perfectly true that we “don’t have enough data” about that. We don’t have them because there is none: “self organization” simply does not work as a substitute for Darwinian mechanisms. IOWs, it explain absolutely nothing about functional complexity (not that Darwinian mechanisms do, but at least they try).
Let’s see. I would say that there is a correct concept of self-organization, and a completely mythological expansion of it to realities that have nothing to do with it.
The correct concept of self-organization comes from physics and chemistry, essentially. It is the science behind systems that present some unexpected “order”deriving from the interaction of random components and physical laws.
Examples:
a) Physics: Heat applied evenly to the bottom of a tray filled with a thin sheet of viscous oil transforms the smooth surface of the oil into an array of hexagonal cells of moving fluid called Bénard convection cells
b) Chenistry: A Belousov–Zhabotinsky reaction, or BZ reaction, is a nonlinear chemical oscillator, including bromine and an acid. These reactions are far from equilibrium and remain so for a significant length of time and evolve chaotically, being characterized by a noise-induced order.
And so on.
Now, the concept of self-organization has been artificially expanded to almost everything, including biology. But the phemomenon is essentially derived from this type of physical models.
In general, in these examples, some stochastic system tends to achieve some more or less ordered stabilization towards what is called an attractor.
Now, to make things simple, I will just mention a few important points that show how the application of those principles to biology is completely wrong.
1) In all those well known physical systems, the system obeys the laws of physics, and the pattern that “emerges” can very well be explained as an interaction between those laws and some random component. Snowflakes are another example.
2) The property we observe in these systems is some form of order. That is very important. It is the most important reason why self-organization has nothing to do with functional complexity.
3) Functional complexity is the number of specific bits that are necessary to implement a function. It has nothing to do with a generic “order”. Take a protein that has an enzymatic activity, for example, and compare it to a snowflake. The snowflake has order, but no complex function. Its order can be explained by simple laws, and the differences between snowflakes can be explained by random differences in the conditions of the system. Instead, the function of a protein strictly depends on the sequence of AAs. It has nothing to do with random components, and it follows a very specific “recipe” coming from outside the system: the specific sequence in the protein, which in turn depends on the specific sequence of nucleotides in the protein coding gene. There is no way that such a specific sequence can be the result of “self-organization”. To believe that it is the result of Natural Selection is foolish, but at least it has some superficial rationale. But to believe that it can be the result of self-organization, of physical and chemical laws acting on random components, is total folly.
4) The simple truth is that the sequence of AAs generates function according to chemical rules, but to find what sequence among all possible sequences will have the function requires deep understanding of the rules of chemistry, and extreme computational power. We still are not able to build functional proteins by a top down process. Bottom up processes are more efficient, but still require a lot of knowledge, computation power, and usually strictly guided artificial selection. Even so, we are completely unable to engineer anything like ATP synthase, as I have discussed in detail many times. Nor could ever RV + NS do that.
But, certainly, no amount of “self-organization” in the whole reality could even begin to do such a thing.
5) Complex networks like the one I have discussed here certainly elude our understanding in many ways. But one thing is certain: they do require tons of functional information at the level of the sequences in proteins and other parts of the genome to wortk correctly. As we have seen in the OP, mutations in different parts of the system are connected to extremely serious diseases. Of course, no self-organization of any kind can ever correct those small errors in digital functional information.
6) The function of a protein is not an “emerging” quality of the protein any more than the function of a watch is an emerging quality of the gears. The function of a protein depends on a very precise correspondence between the digital sequence of AAs and the laws of biochemistry, which determines the folding and the final structure and status (or statuses) of the protein. This is information. The same information that makes the code for Excel a functional reality. Do we see codes for software emerging from self-organization? We should maybe inform video game programmers of that, they could spare a lot of work and time.
In the end, all these debates about self-organizarion, emerging properties and snowflakes have nothing to do with functional information. The only objects that exhibit functional information beyond 500 bits are, still, human artifacts and biological objects. Nothing else. Not snowflakes, not viscous oil, not the game of life. Only human artifacts and biological objects.
Those are the only objects in the whole known universe that exhibit thousands, millions, maybe billions of bits strictly aimed at implementing complex and obvious functions. The only existing instances of complex functional information.
Bornagain77;
No. As you know, I absolutely believe in CD, but that is not the issue here. Homology is homology, and divergence is divergence, whatever the model we use to explain them.
I just wanted to show an example of a protein (RelA), indeed a TF, where both homology (in the DBD) and divergence (in the TADs) are certainly linked to function.
When I want to “push” for CD, I know how to do that.
“I absolutely believe in CD”
Interesting claim of absolute certainty from you given the discontinuous nature of the fossil record, the discontinuous nature of the genetic evidence, and the fact that no one has ever changed the basic body plan of an organism into another body plan.
Perhaps, given your poverty of empirical warrant, a bit more modest measure of certainty would be wise on your part?
For instance, it seems you are holding somewhat to a reductive materialistic framework in your ‘absolute’ certainty about CD, and yet, the failure of reductive materialism to be able to explain the basic form and/or body plan of any particular organism occurs at a very low level. Much lower than DNA itself.
In the following article entitled ‘Quantum physics problem proved unsolvable: Gödel and Turing enter quantum physics’, which studied the derivation of macroscopic properties from a complete microscopic description, the researchers remark that even a perfect and complete description of the microscopic properties of a material is not enough to predict its macroscopic behaviour.,,, The researchers further commented that their findings challenge the reductionists’ point of view, as the insurmountable difficulty lies precisely in the derivation of macroscopic properties from a microscopic description.”
In other words, even with a complete microscopic description of an organism, it is impossible for you to have ‘absolute’ certainty about the macroscopic behavior of that organism much less to have ‘absolute’ certainty about CD.
Bornagain77:
It’s amazing how much you misunderstand me, even if I have repeatedly tried to explain my views to you.
1) “Interesting claim of absolute certainty from you given the discontinuous nature of the fossil record, the discontinuous nature of the genetic evidence, and the fact that no one has ever changed the basic body plan of an organism into another body plan.”
Interesting claims, that have nothing to do with my belief in CD, and about which I can absolutely agree with you. I absolutely believe that the fossil record is discontinuous, that genetic evidence is discontinuous, and that no one has ever changed the basic body plan of an organism into another body plan. And so?
2) “Perhaps, given your poverty of empirical warrant, a bit more modest measure of certainty would be wise on your part?”
I don’t believe that scientific certainty is ever absolute. I use “absolutely” to express the strength of my conviction that there is empirical warrant for CD. And I have explained why, many times, even to you. As I have explained many times what I mean by CD. But I am not sure that you really listen to me. That’s OK, I believe in free will, as you probably know.
3) “For instance, it seems you are holding somewhat to a reductive materialistic framework in your ‘absolute’ certainty about CD, and yet, the failure of reductive materialism to be able to explain the basic form and/or body plan of any particular organism occurs at a very low level. Much lower than DNA itself.”
I am in no way a reductionist, least of all a materialist. My certainty about CD only derives from scientific facts, and from what I believe to be the most reasonable way to interpret them. As I have tried to explain many times.
Essentially, the reasons why I believe in CD (again, the type of CD that I believe in, and that I have tried to explain to you many times) are essentially of the same type for which I believe in Intelligent Design. There is nothing reductionist or materialist in them. Only my respect for facts.
For example, I do believe that we do not understand at all how body plans are implemented. You seem to know more. I am happy for you.
4) “In other words, even with a complete microscopic description of an organism, it is impossible for you to have ‘absolute’ certainty about the macroscopic behavior of that organism much less to have ‘absolute’ certainty about CD.”
I have just stated that IMO we don’t understand at all how body plans are implemented. Moreover, I don’t believe at all that we have any complete microscopic description of any living organism. We are absolutely (if you allow the word) distant from that. OK. But I still don’t understand what that has to do with CD.
For the last time: CD, for me, just means that there is very strong evidence that the molecular information in DNA and proteins of already existing species is physically passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on. As proven by the differences in neutral sites, between species.
I hope this is the last time I have to tell you that.
“For the last time: CD, for me, just means that there is very strong evidence that the molecular information in DNA and proteins of already existing species is physically passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on. As proven by the differences in neutral sites, between species.
I hope this is the last time I have to tell you that.”
To this in particular,,, “passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on.”
All new information is ‘designed in the process”???? Please elaborate on exactly what process you are talking about.
As to examples that falsify the common descent model:
Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model.
Bornagain77 at #42:
“All new information is ‘designed in the process”???? Please elaborate on exactly what process you are talking about.”
It should be clear. However, let’s try again.
Let’s say that there are 3 main models for how functional information comes into existence in biological beings.
a) Descent with modifications generated by RV + NS: this is the neo-darwinian model. I absolutely (if you allow the word) reject it. So do you, I suppose.
b) Descent with designed modifications: this is my model. This is the process I refer to: a process of design, of engineering, which derives new species from what already exists.
The important point, that justifies the term “descent”, is that, as I have said, the old information that is appropriate is physically passed on from the ancestor to the new species. All the rest, the new functional information, is engineered in the design process.
So, to be more clear, let’s say that species B appears in natural history at time T. Before it, there exists another species, A, which has some strong similarities to species B.
Let’s say that, according to my model, species B derives physically from the already existing species A. How does it happen?
Let’s say that, just as an imaginary example, A and B share about 50% of protein coding genes. The proteins coded by these genes are very similar in the two species, almost identical, at least at the beginning. The reason for that is that the functions implemented by those proteins in the two species are extremely similar.
But that is only part of the game. Of course, B has a lot of new proteins, or parts of proteins, or simply regulatory parts of the genome, that are not the same as in A at all. Those sequences are absolutely functional, but they do things that are specific to B and do not exist in A. In the same way, many specific functions of A are not needed in B, and so they are not implemented there.
Now, losing some proteins or some functions is not so difficult. We know that losing information is a very easy task, and requires no special ability.
But how does all that new functional information arise in B? It did not exist in A, or in any other living organism that existed before time T. It arises in B for the first time, and approximately at time T.
The obvious answer, in my model, is: it is newly designed functional information. If I did not believe that, I would be in the other field, and not here in ID.
But the old information, the sequence information that retains its function from A to B? Well, in my model, very simply, it is physically passed on from A to B. That is the meaning of descent in my model. That’s what makes A an ancestor of B, even if a completely new process of design and engineering is necessary to derive B from A.
Now, you may ask: how does that happen? Of course, we don’t know the details, but we know three important facts:
1) There are signatures of neutral variation in the conserved sequences, grossly proportional to the evolutionary time split, that demonstrate that they are physically passed on. This is the single most important argument in favour of descent, and I am aware of no possible explanation of this fact outside of physical descent of those sequences.
2) The new functional information often arises in big jumps, and is almost always very complex. For the origin of vertebrates, I have computed about 1.7 million bits of new functional information, arising in at most 20 million years. RV + NS could never do that, because it totally lacks the necessary probabilistic resources.
3) The fossil record and the existing genomes and proteomes show no trace of the many functional intermediates that would be necessary for RV + NS to even try something. Therefore, RV + NS did not do it, because there is no trace of what should absolutely be there.
So, how did design do it, with physical descent?
Let’s imagine ourselves doing it, if we were able. What would we do?
It’s very simple: we would take a few specimens of A, bring them to some lab of ours, and work on them to engineer the new species with our powerful means of genetic engineering. Adding the new functional information to what already exists, and can still be functional in the new project.
Where? And in what time?
These are good questions. They are good questions in any case, even if you stick to your (I think) model, model c, soon to be described.
Because species B does appear at time T. And that must happen somewhere. And that must happen in some time window.
But the details are still to be understood. We know too little.
But one thing is certain: both space and time are somehow restricted.
Space is restricted, because of course the new species must appear somewhere. It does not appear at once all over the globe.
But there is more. Model a, the neo-darwinian model, needs a process that takes place almost everywhere. Why? Because it badly needs as many probabilistic resources as possible. IOWs, it badly needs big numbers.
Of course, we know very well that no reasonable big number will do. The probabilistic resources simply are not there. Even for bacteria crowding the whole planet for 5 billion years.
But with small populations, any thought of RV and NS is blatantly doomed from the beginning.
But design does not work that way. Design does not need big numbers, big populations. Especially if it is mainly top down engineering.
So, we could very well engineer B working on a relatively small sample of A. In our lab.
In what time? I really don’t know, but certainly not too much. As you well know, those information jumps are rather sudden in natural history. This is a fact.
So? 1 minute? 1 year? 1 million years? Interesting questions, but in the end it does not matter much anyway.
Not instantaneously, I would say. Not in model b, anyway. If it is an engineering process, it needs time, anyway.
So, what is important about this model?
Simply that it is the best model that explains facts.
1) The signatures of neutral variation in conserved sequences are perfectly explained. As those sequences have been passed on as they are from A to B, they keep those signatures. IOWs, if A has existed for 100 million years from some previous split, in those 100 million years neutral variation happens in the sequence, and differentiates that sequence in A from some homologous sequence in A1 (the organism derived from that old split). So, B inherits those changes from A, and if we compare B and A1, we find those differences, as we find them if we compare A and A1. The differences in B are inherited from A as it was 100 million years after the split from A1.
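The inheritance pattern just described can be sketched as a toy simulation. This is purely illustrative: the sequence length, mutation rate, and generation counts are invented, and a real substitution process is far more complex. The point it shows is only the logic of the argument: if B physically inherits A’s conserved sequence, then the neutral differences between B and A1 are essentially the same differences already accumulated between A and A1 since their split.

```python
import random

random.seed(1)
ALPHABET = "ACGT"
MU = 0.001  # per-site, per-generation neutral mutation rate (made up for the toy model)

def evolve(seq, generations):
    """Apply neutral point mutations to a sequence for some generations.
    Resampling may pick the same base; that is fine for a toy model."""
    seq = list(seq)
    for _ in range(generations):
        for i in range(len(seq)):
            if random.random() < MU:
                seq[i] = random.choice(ALPHABET)
    return "".join(seq)

def diff(a, b):
    """Count differing sites between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

ancestor = "".join(random.choice(ALPHABET) for _ in range(2000))

# Split: A and A1 diverge and each accumulates neutral changes independently.
A = evolve(ancestor, 100)
A1 = evolve(ancestor, 100)

# "Design event": B physically inherits A's conserved sequence as it stands,
# then drifts a little further on its own.
B = evolve(A, 10)

print("A vs A1 differences:", diff(A, A1))
print("B vs A1 differences:", diff(B, A1))  # close to A vs A1: the inherited signature
```

If instead B were built from scratch (a fresh random sequence, or a fresh copy of the ancestor), diff(B, A1) would carry no trace of the 100 generations of drift that separate A from A1, which is exactly the observation the descent model is invoked to explain.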
2) The big jumps in functional information are, of course, explained by the design process, the only type of process that can do those things.
3) There is no need for functional intermediates in the fossil record or in the genomes. What happens in the lab does not leave traces. We do not need big intermediate populations to be expanded by positive NS, to gain new huge probabilistic resources (as in model a). We just need a few samples, a few intermediates, in a limited time and space. There is no reason to expect any relevant trace from that process.
Of course, the new engineered species, when it is ready and working, is released into the general environment. IOWs, it is “published”. That’s what we observe in the fossil record, and in the genomes: the release of the new engineered species. Nothing else.
So, model b, my model, explains all three types of observed facts.
c) No descent at all. This is, I believe, your model.
What does that mean?
Well, it can mean sudden “creation” (if the new species appears out of thin air, from nothing), or, more reasonably, engineering from scratch.
I will not discuss the “creation” aspect. I would not know what to say, from a scientific point of view.
But I will discuss the “engineering from scratch” model.
However it is conceived (quick or slow, sudden or gradual), it implies one simple thing: each time, everything is re-engineered from scratch. Even what had already been engineered in previously existing species.
From what? It’s simple. If it is not creation ex nihilo, “scratch” here can mean only one thing: from inanimate matter.
IOWs, it means re-doing OOL each time a new species originates.
OK, I believe there are many arguments against that model, but I will just state here the simplest: it does not explain fact 1)
Moreover, I would definitely say that all your arguments against descent, however good (IMO, some are good, some are not), are always arguments against model a). They have no relevance at all against model b), my model.
Once and for all, I absolutely (if you allow the word) reject model a).
That said, I am rather sure that you will stick to your model, model c). That’s fine for me. But I wanted to clarify as much as possible.
What are the falsification criteria of your model? It seems you are lacking a rigid criterion. Not to mention lacking experimental warrant that what you propose is even possible.
“No descent at all. This is, I believe, your model.”
I do not believe in UCD, but I do believe in diversification from an initially created “kind” by devolutionary processes. i.e. Behe “Darwin Devolves” and Sanford “Genetic Entropy”.
I note, especially in the Cambrian, we are talking gargantuan jumps in the fossil record. Your model is not parsimonious to such gargantuan jumps.
Moreover, your genetic evidence is not nearly as strong as you seem to think it is. And even if it were, it is not nearly enough to explain ‘biological form’. For that you need to incorporate recent finding from quantum biology:
correct time mark is 27 minute mark
Bornagain77:
I quote myself:
“That said, I am rather sure that you will stick to your model, model c). That’s fine for me. But I wanted to clarify as much as possible.”
The only thing in my model that explains biological form is design. Maybe it is not enough, but it is certainly necessary.
I want to be clear: I agree with you about the importance of consciousness and of quantum mechanics. But what has that to do with my argument?
Do you believe that functional information is designed? I do. Design comes from consciousness. Consciousness interacts with matter through some quantum interface. That’s exactly what I believe.
My model is not parsimonious and requires gargantuan jumps? Is it worse than the initial creation of kinds?
However, for me we can leave it at that. As explained, I was not even implying CD in my initial discussion here.
as to:
Again, the argument is not nearly as strong as you seem to think it is. In particular: You could say that the heart of this “shared error” argument is the idea that “lightning doesn’t strike twice.” The identical, harmful mutations, in different species, could not have arisen independently. Instead they must have arisen only once, and then were inherited from a common ancestor.
The problem, of course, is that there is no reason to make this assumption. The logic made sense for written documents, but the species are not ancient manuscripts or homework assignments. They are species, and species are different.
In fact repeated designs found in otherwise distant species are ubiquitous in biology. Listening to evolutionists one would think the species fall into an evolutionary pattern with a few minor exceptions here and there. But that is overwhelmingly false. From the morphological to the molecular level, repeated designs are everywhere, and they take on many different forms.
and
Bornagain77:
My argument is not about shared errors. It is about neutral mutations at neutral sites, grossly proportional to evolutionary split times. It is about the ka/ks ratio and the saturation of neutral sites after a few hundred million years. I have made the argument in great detail in the past, with examples, but I have no intention to repeat all the work now.
By the way, I would be cautious in accepting everything that Cornelius Hunter says, as you seem to do. I agree with him many times. But many other times I fully disagree.
“I would be cautious in accepting everything that Cornelius Hunter says, as you seem to do. I agree with him many times. But many other times I fully disagree.”
Like when he contradicts you? 🙂
Though you tried to downplay it, your argument from supposedly ‘neutral variations’ is VERY similar to the shared error argument. As such, for reasons listed above, it is not nearly as strong as you seem to presuppose.
It is apparent that you believe the variations were randomly generated and therefore you are basically claiming that “lightning doesn’t strike twice”, which is exactly the argument that Dr. Hunter critiqued.
Moreover, If anything we now have far more evidence of mutations being ‘directed’ than we do of them being truly random.
You said you could think of no other possible explanation, I hold that directed mutations are a ‘other possible explanation’ that is far more parsimonious to the overall body of evidence than your explanation of a Designer, i.e. God, creating a brand new species without bothering to correct supposed neutral variations and/or supposed shared errors.
Bornagain77:
I disagree with Cornelius Hunter when I think he is wrong. In that sense, I treat him like anyone else. You seem to believe that he is always right. I don’t. Many times I have found that he is wrong in what he says.
And no, my argument about neutral variation has nothing to do with the argument of shared errors. Nor with the idea that “lightning doesn’t strike twice”. My argument is about differences, not similarities. I think you don’t understand it. But that’s not a problem.
No, I do not think Dr. Cornelius Hunter is ALWAYS right. But I certainly think he is right in his critique of Swamidass. Whereas I don’t think you are always wrong. I just think you are, in this instance, severely mistaken in one or more of your assumptions behind your belief in common descent.
Your model is, from what I can tell, severely convoluted. If you presuppose randomness in your model at any instance prior to the design input from God to create a new family of species.,, that is one false assumption that would undermine your claim. I can provide references if need be.
To all:
As usual, the levels of regulation and crosstalk of this NF-kB system grow each time I make a Pubmed search.
We are all interested, of course, in long non-coding RNAs. Well, this paper is about their role in NF-kB signaling:
Lnc-ing inflammation to disease
https://www.ncbi.nlm.nih.gov/pubmed/28687714
The paper, unfortunately, is not open access. It is interesting, however, that lncRNAs are now considered “master gene regulators”.
Bornagain77:
OK, it’s too easy to be right in criticizing Swamidass! 🙂 (Just joking, just joking… but not too much)
Just to answer your observations about randomness: I think that most mutations are random, unless they are guided by design. I am not sure that I understand what your point is. Do you believe they are guided? I also believe that some mutations are guided, but that is a form of design.
If they are not guided, how can you describe the system? If you cannot describe it in terms of necessity (and I don’t think you can), some probability distribution is the only remaining option. Again, I don’t understand what you really mean.
But of course the mutations (if they are mutations) that generate new functional information are not random at all. They must be guided, or intelligently selected.
As you know, I cannot debate God in this context. I can only do what ID theory allows us to do: recognize events where a design inference is absolutely (if you allow the word) warranted.
Bornagain77:
Moreover, the mechanisms described by Behe in Darwin Devolves are the known mechanisms of NS. They can certainly create some diversification, but essentially they give limited advantages in very special contexts, and they are essentially very simple forms of variation. They certainly cannot explain the emergence of new species, least of all the emergence of new complex functional information, like new functional proteins.
So, do you believe that all relevant functional information is generated when “kinds” are created? And when would that happen?
Just to understand.
Gp states
And yet the vast majority of mutations are now known to be ‘directed’
i.e. Directed mutations are ‘another possible explanation’.
As to, “do you believe that all relevant functional information is generated when “kinds” are created? And when would that happen?”
I believe in ‘top down’ creation of ‘kinds’ with genetic entropy, as outlined by Sanford and Behe, following afterwards. As to exactly where that line should be, Behe has recently revised his estimate:
I note that my model is Theologically modest in that I hold to traditional concepts of the omniscience of God and God creating ‘kinds’ that reproduce after themselves, whereas, humorously, your model is all over the place Theologically speaking.
Your model, Theologically speaking, humorously reminds me of this old Johnny Cash song:
Bornagain77:
Most mutations are random. There can be no doubt about that. Of course, that does not exclude that some are directed. A directed mutation is an act of design.
I perfectly agree with Behe that the level of necessary design intervention is at least at the family level.
The three quotes you give have nothing to do with directed mutations and design. In particular, the author of the second one is frankly confused. He writes:
This is simple ignorance. The existence of patterns does not mean that a system is not probabilistic. It just means that there are also necessity effects.
He makes his error clear saying:
“Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.”
Now, “a higher chance” is of course a probabilistic statement. A random distribution is not necessarily a distribution where all events have the same probability of happening; that special case is called a uniform probability distribution. If some events (like mutations near a place where mutations have already occurred) have a higher probability of occurring, that is still a random distribution, one where the probability of the events is not uniform.
Things become even worse. He writes:
“While we can’t say mutations are random, we can say there is a large chaotic component, just as there is in the throw of a loaded dice. But loaded dice should not be confused with randomness because over the long run—which is the time frame of evolution—the weighted bias will have noticeable consequences.”
But of course a loaded dice is a random system. Let’s say that the dice is loaded so that 1 has a higher probability to occur. So the probabilities of the six possible events, instead of being all 1/6 (uniform distribution), are, for example, 0.2 for 1 and 0.16 for all the other outcomes.
So, the dice is loaded. And so? Isn’t that a random system?
Of course it is. Each event is completely probabilistic: we cannot anticipate it with a necessity rule. But the outcome 1 is more probable than the others.
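The loaded-die point can be checked numerically. A minimal sketch (the probabilities are just the ones from the example above, 0.2 for the face 1 and 0.16 for the rest): no single throw can be predicted by a necessity rule, yet the long-run frequencies reveal the bias. Bias and randomness are not opposites; the system is random with a non-uniform distribution.

```python
import random
from collections import Counter

random.seed(42)

# Loaded die: outcome 1 has probability 0.2, outcomes 2-6 have 0.16 each.
OUTCOMES = [1, 2, 3, 4, 5, 6]
WEIGHTS = [0.20, 0.16, 0.16, 0.16, 0.16, 0.16]

throws = random.choices(OUTCOMES, weights=WEIGHTS, k=100_000)
freq = Counter(throws)

for face in OUTCOMES:
    print(face, round(freq[face] / len(throws), 3))

# Each individual throw remains unpredictable, but over many throws the
# empirical frequency of face 1 converges toward 0.2: a random system
# described by a non-uniform probability distribution.
```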
That article is simply a pile of errors and confusion. Whoever understands something about probability can easily see that.
Unfortunately you tend to quote a lot of things, but it seems that you do not always evaluate them critically.
Again, I propose: let’s leave it at that. This discussion does not seem to lead anywhere.
I, of course, disagree with you.
The third article,,, “According to the researchers, mutations of genes are not randomly distributed between the parental chromosomes. They found that 60 percent of mutations affect the same chromosome set and 40 percent both sets.,,, “It’s amazing how precisely the 60:40 ratio is maintained. It occurs in the genome of every individual – almost like a magic formula,” says Hoehe.”
That is fairly straightforward. And again, Directed mutations are ‘another possible explanation’. Your ‘convoluted’ model is not nearly as robust as you have presupposed.
Good post at 56, gp.
Also, it is my understanding that when someone says “mutations are random” they mean there is no causal connection between the mutation and whatever eventual effects and possible benefits it might have for the organism. “Mutations are random” doesn’t refer to the causes of the mutations, I don’t think.
gpuccio:
I doubt it. I would say most are directed and only some are happenstance occurrences. See Spetner, “Not By Chance”,1997. Also Shapiro, “Evolution: a view from the 21st Century”. And:
Just think about it- a Designer went through all of the trouble to produce various living organisms and place them on a changing planet in a changing universe. But the Designer is then going to leave it mostly to chance how those organisms cope with the changes?
It just makes more sense that organisms were intelligently designed with the ability to adapt and evolve, albeit with genetic entropy creeping in.
“Mutations are random” means they are accidents, errors and mistakes. They were not planned and just happened to happen due to the nature of the process. Yes, x-rays may have caused the damage that produced the errors but the changes were spontaneous and unpredictable as to which DNA sequences, if any, would have been affected.
Excellent point at 59 ET. Isn’t Spetner’s model called the “Non-Random’ Evolutionary hypothesis?
Thank you, bornagain77. And yes- the non-random evolutionary hypothesis featuring built-in responses to environmental cues.
Hazel:
In a strict sense, random is a system where the events cannot be anticipated by a definite law, but can be reasonably described by a probability distribution.
Of course, it is absolutely true that in that case “there is no causal connection between the mutation and whatever eventual effects and possible benefits it might have for the organism”. I would describe that aspect saying that the system, as a whole, is blind to those results.
Randomness is a concept linked to our way of describing the system. Random systems, like the tossing of a coin, are in essence deterministic, but we have no way to describe them in a deterministic way.
The only exception could be the intrinsic randomness of the wave function collapse in quantum mechanics. In the interpretations where it is really considered intrinsic.
ET:
“I doubt it. I would say most are directed and only some are happenstance occurrences”.
I beg to differ. Most mutations that we observe, maybe all, are random.
Of course, if the functional information we observe in organisms was generated by mutations, those mutations were probably guided. But we cannot observe that process directly, or at least I am not aware that it has been observed.
Instead, we observe a lot of more or less spontaneous mutations that are really random. Many of them generate diseases, often in real time.
Radiation and toxic substances dramatically increase the rate of random mutations, and the frequency of certain diseases or malformations. We know that very well. And yet, no law can anticipate when and how those mutations will happen. We just know that they are more common. The system is still probabilistic, even if we can detect the effect of specific causes.
I don’t know Spetner in detail, but it seems that he believes that most functional information derives from some intelligent adaptation of existing organisms.
Again, I beg to differ. It is certainly true that “all the evolution that has been actually observed and which is not accounted for by modern evolutionary theory” needs some explanation, but the explanation is active design, not adaptation.
I am not saying that adaptation does not exist, or does not have some important role. We can see good examples, for example in bacteria (the plasmid system, just to mention one instance).
Of course a complex algorithm can generate some new information by computing new data that come from the environment. But the ability to adapt depends on the specific functional information that is already in the system, and has therefore very strict limitations.
Adaptation can never generate a lot of new original functional information.
Let’s make a simple example. ATP synthase, again.
There is no adaptation system in bacteria that could have found the specific sequences of the many complex components of the system. It is completely out of the question.
And yet, ATP synthase has existed in bacteria for billions of years, and is still largely similar in humans.
This is of course the result of design, not adaptation. The same can be said for body plans, all complex protein networks, and I agree with Behe that families of organisms are already levels of complexity that scream design. Adaptation, even for an already complex organism, cannot in any way explain those things.
It is true that the mutations we observe are practically always random. It is true that they are often deleterious, or neutral. More often neutral or quasi neutral. We know that. We see those mutations happen all the time.
Achondroplasia, for example, which is the most common cause of dwarfism, is a genetic disease that (I quote from Wikipedia for simplicity):
“is due to a mutation in the fibroblast growth factor receptor 3 (FGFR3) gene.[3] In about 80% of cases this occurs as a new mutation during early development.[3] In the other cases it is inherited from one’s parents in an autosomal dominant manner.”
IOWs, in 80% of cases the disease is due to a new mutation, one that was not present in the parents.
If you look at the Exac site:
http://exac.broadinstitute.org/
you will find the biggest database of variations in the human genome.
Random mutations that generate neutral variation are facts. They can be observed, their rate can be measured with some precision. There is absolutely no scientific reason to deny that.
So, to sum up:
a) The mutations we observe every day are random, often neutral, sometimes deleterious.
b) The few cases where those mutations generate some advantage, as well argued by Behe, are cases of loss of information in complex structures that, by chance, confers some advantage in specific environments. See antibiotic resistance. All those variations are simple. None of them generates any complex functional information.
c) The few cases of adaptation by some active mechanism that are in some way documented are very simple too. Nylonase, for example, could be one of them. The ability of viruses to change at very high rates could be another one.
d) None of those reasonings can help explain the appearance, throughout natural history, of new complex functional information, in the form of new functional proteins and protein networks, new body plans, new functions, new regulations. None of those reasonings can explain OOL, or eukaryogenesis, or the transition to vertebrates. None of them can even start to explain ATP synthase, or the immune system, or the nervous system in mammals. And so on, and so on.
e) All these things can only be explained by active design.
This is my position. This is what I firmly believe.
That said, if you want, we can leave it at that.
GP @52:
” the levels of regulation and crosstalk of this NF-kB system grow each time I make a Pubmed search”
Are you surprised? 🙂
This crosstalk concept is very interesting indeed.
OLV:
“Are you surprised?”
No. 🙂
But, of course, self-organization can easily explain all that! 🙂
OLV and all:
This is another paper about lncRNAs and NF-kB:
Long non-coding RNA: a versatile regulator of the nuclear factor-kB signalling circuit
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5343356/
This is open access.
OLV and all:
Here is a database of known human lncRNAs:
https://lncipedia.org/
It includes, at present, data for 127,802 transcripts and 56,946 genes. A joy for the fans of junk DNA! 🙂
Let’s look at one of these strange objects.
MALAT-1 is one of the lncRNAs described in the paper at the previous post. Here is what the paper says:
Emphasis mine, as usual.
Now, if we look for MALAT-1 in the database above linked, we find 52 transcripts. The first one, MALAT1:1, has a size of 12819 nucleotides. Not bad! 🙂
342 papers quoted about this one transcript.
Gp adamantly states,
And yet Shapiro adamantly begs to differ,,,
Noble also begs to differ
Richard Sternberg also begs to differ
Another paper along that line,
and another paper
And as Jonathan Wells noted, “I now know as an embryologist,,,Tissues and cells, as they differentiate, modify their DNA to suit their needs. It’s the organism controlling the DNA, not the DNA controlling the organism.”
And as ET pointed out, Gp’s presupposition also makes no sense theologically speaking
Evolution by means of intelligent design is active design. Genetic changes don’t have to produce some perceived advantage in order to be directed. And if genetic entropy has interfered with the directed mutation function then that could also explain what you observe.
And yes, ATP synthase was definitely intelligently designed. Why can’t it be that it was intelligently designed via some sort of real genetic algorithm?
And those polar bears: the change in the structure of the fur didn’t happen by chance. So either the original population(s) of bears already had that variation, or the information required to produce it, with that information being teased out by the environmental changes and built-in responses to environmental cues.
Another excellent post GP, thank you for writing it. Reading thru it now.
Once again, where are your anti-ID critics?
Upright BiPed:
Hi UB, nice to hear from you! 🙂
“Once again, where are your anti-ID critics?”
As usual, they seem to have other interests. 🙂
Luckily, some friends are ready to be fiercely antagonistic! 🙂 Which is good, I suppose…
ET at #70:
Yes, it is.
Of course. That’s exactly my point. See my post #43, this statement about my model (model b):
“There is no need for functional intermediates in the fossil record or in the genomes. What happens in the lab does not leave traces. We do not need big intermediate populations to be expanded by positive NS, to gain new huge probabilistic resources (as in model a). We just need a few samples, a few intermediates, in a limited time and space. There is no reason to expect any relevant trace from that process.”
Emphasis added.
In my model, it does. You see, for anything to explain the differences created in time by neutral variation (my point 1 at post #43, what I call “signatures of neutral variation in the conserved sequences, grossly proportional to the evolutionary time split”), you definitely need physical continuity between different organisms. Otherwise, nothing can be explained. IOWs, neutral signatures accumulate as differences as time goes on, wherever there is physical continuity. Creation or design from scratch for each organism cannot explain that. This is the argument that BA seems not to understand.
Definitely.
Because, of course, the algorithm would be by far more complex than the result. And where is that algorithm? There is absolutely no trace of it.
It is no good to explain things with mere imagination. We need facts.
Look, we are dealing with functional information here, not with some kind of pseudo-order that can be generated by some simple necessity laws coupled to random components. IOWs, this is not something that self-organization can even start to do.
Of course, an algorithm could do it. If I had a super-computer already programmed with all possible knowledge about biochemistry, and the computing ability to anticipate top down how protein sequences will fold and what biochemical activity they will have, and with a definite plan to look for some outcome that can transform a proton gradient into ATP, possibly with at least a strong starting plan that it should be something like a water mill, then yes, maybe that super-computer could, in time, elaborate some relatively efficient project on that basis. Of course, that whole apparatus would be much more complex than what we want to obtain. After all, ATP synthase has only a few thousand bits of functional information. Here we are discussing probably many gigabytes for the algorithm.
That’s the problem, in the end. Functional information can be generated only in two ways:
a) Direct design by a conscious, intelligent, purposeful agent. Of course that agent may have to use previous data or knowledge, but the point is that its cognitive abilities and its ability to have purposes will create those shortcuts that no non-design system can generate.
b) Indirect design through some designed system complex enough to include a good programming of how to obtain some results. As said, that can work, but it has severe limitations. The designed system is already very complex, and the further functional information that can be obtained is usually very limited and simple. Why? Because the system, not being open to a further intervention of consciousness and intelligence, can only do what it has been programmed to do. Nothing else. The purposes are only those purposes that have already been embedded at the beginning. Nothing else.
The computations, all the apparently “intelligent” activities, are merely passive executions of intelligent programs already designed. They can do what they have been programmed to do, but nothing else.
So, let’s say that I want to program a system that can find a good solution for ATP synthase. OK, I can do that (not me, of course, let’s say some very intelligent designer). But I must already be conscious that I will need ATP synthase, or something like that. I must put that purpose in my system. And of course all the knowledge and power needed to do what I want it to do.
Or, of course, I can just design ATP synthase and introduce that design in the system (that I have already designed myself some time ago) if and when it is needed.
Which is more probably true?
Again, facts and only facts must guide us.
ATP synthase, in a form very similar to what we observe today, was already present billions of years ago, when reasonably only prokaryotes were living on our planet.
Was a complex algorithm capable of that kind of knowledge and computations present on our planet before the appearance of ATP synthase? In what form? What facts do we have to support such an idea?
The truth is very simple. For all that we can know and reasonably infer, at some time, very early after our planet became compatible with any form of life, ATP synthase appeared, very much similar to what it is today, in some bacteria-like form of life. There is nothing to suggest, or support, or even make credible or reasonable, that any complex algorithm capable of computing the necessary information for it was present at that time. No such algorithm, or any trace of it, exists today. If we wanted to compute ATP synthase today, we would not have the faintest idea of how to do it.
These are the simple facts. Then, anyone is free to believe as he likes. As for me, I stick to my model, and am very happy with it.
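As a side note, the “few thousand bits of functional information” mentioned above for ATP synthase comes from the usual functional information metric: minus the log2 of the ratio of functional sequences to the whole search space. Here is a minimal sketch of that computation, with purely illustrative numbers (the protein length and the count of strictly conserved positions below are made up for scale, not taken from any real alignment):

```python
import math

def functional_bits(functional_targets: int, search_space: int) -> float:
    """Functional information in bits: -log2(targets / space).
    Computed as log2(space) - log2(targets), so that the astronomically
    large integers never have to be squeezed into a float ratio."""
    return math.log2(search_space) - math.log2(functional_targets)

# Illustrative only: a 500-aa protein where 300 positions are assumed
# strictly conserved (fixed) and the remaining 200 are free to vary.
length, conserved = 500, 300
search_space = 20 ** length                       # all possible 500-aa sequences
functional_targets = 20 ** (length - conserved)   # conserved positions held fixed

bits = functional_bits(functional_targets, search_space)
print(round(bits, 1))  # 300 * log2(20) ≈ 1296.6 bits
```

On this toy metric, each strictly conserved amino acid contributes log2(20) ≈ 4.32 bits, which is roughly how estimates in the thousands of bits arise for large, highly conserved proteins.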
ET at #71:
As far as I can understand, the divergence of polar bears is probably simple enough to be explained as adaptation under environmental constraints. This is not ATP synthase. Not at all.
I don’t know the topic well, so mine is just an opinion. However, bears are part of the family Ursidae, so brown bears and polar bears are part of the same family. So, if we stick to Behe’s very reasonable idea that family is probably the level which still requires design, this is an inside-family divergence.
Gp claims:
To be clear, Gp is arguing for a very peculiar, even bizarre, form of UCD where God reuses stuff and does not create families de novo (which is where Behe now puts the edge of evolution). Hence my reference to Johnny Cash’s song “One Piece at a Time”.
Earlier, Gp also claimed that he could think of no other possible explanation to explain the data. I pointed out that ‘directed’ mutations are another possible explanation. Gp then falsely claimed that there is no such thing as directed mutations. Specifically he claimed, “Most mutations that we observe, maybe all, are random.”
Gp, whether he accepts it or not, is wrong in his claim that “maybe all mutations are random”. Thus, Gp’s “Johnny Cash” model is far weaker than he imagines it to be.
Bornagain77:
“I note that my model is Theologically modest in that I hold to traditional concepts of the omniscience of God and God creating ‘kinds’ that reproduce after themselves, whereas, humorously, your model is all over the place Theologically speaking.”
“And as ET pointed out, Gp’s presupposition also makes no sense theologically speaking”
I have ignored this kind of objection, but as you (and ET) insist, I will say just a few words.
I believe that you are theologically committed in your discussions about science. This is not a big statement, I suppose, because it is rather obvious in all that you say. And it is not a criticism, believe me. It is your strong choice, and I appreciate people who make strong choices.
But, of course, I don’t feel obliged to share those choices. You see, I too make my strong choices, and I like to remain loyal to them.
One of my strong choices is that my philosophy of science (and my theology, too) tell me that my scientific reasonings must not (as far as it is humanly possible) be influenced by my theology. In any way. So, I really strive to achieve that (and it’s not easy).
This is, for me, an important question of principle. So, I will not answer any argument that makes any reference to theology, or even simply to God, in a scientific discussion. Never.
So, excuse me if I will go on ignoring that kind of remarks from you or others. It’s not out of discourtesy. It’s to remain loyal to my principles.
Bornagain77 at #76:
For “God reusing stuff”, see my previous post.
For the rest, mutations and similar, see my next post (I need a little time to write it).
Upright Biped,
An off-topic note: you have mail from quite a while ago 🙂 I apologise for my long silence. I have changed jobs twice and have been under quite a lot of stress. Because of this I was not checking my non-business emails regularly. Hoping to get back to normal.
EugeneS:
Hi, Eugene,
Welcome anyway to the discussion, even for an off-topic! 🙂
Basically I believe one of Gp’s main flaws in his model is that he believes that the genome is basically static and that almost all the changes to the genome that do occur are the result of randomness (save for when God intervenes at the family level to introduce ‘some’ new information whilst saving parts of the genome that have accumulated changes due to randomness).
Yet the genome is now known to be dynamic and not to be basically static.
And again, DNA is now, contrary to what is termed to be ‘the central dogma’, far more passive than it was originally thought to be. As Denis Noble stated, “The genome is an ‘organ of the cell’, not its dictator”
Another main flaw in Gp’s ‘Johnny Cash model’, and as has been pointed out already, is that he assumes ‘randomness’ to be a defining notion for changes to the genome. This is the same assumption that Darwinists make. In fact, Darwinists, on top of that, also falsely assume ‘random thermodynamic jostling’ to be a defining attribute of the actions within a cell.
Yet, advances in quantum biology have now overturned that foundational assumption of Darwinists. The first part of the following video recalls an incident where ‘Harvard Biovisions’ tried to invoke ‘random thermodynamic jostling’ within the cell to undermine the design inference. (i.e. the actions of the cell, due to advances in quantum biology, are now known to be far more resistant to ‘random background noise’ than Darwinists had originally presupposed.)
Of supplemental note:
Gp in 77 tried to imply he was completely theologically neutral. That is impossible. Besides science itself being impossible without basic Theological presuppositions (about the rational intelligibility of the universe and of our minds to comprehend it), any discussion of origins necessarily entails Theological overtones. It simply can’t be avoided. Gp is trying to play politics instead of being honest. Perhaps next GP will try to claim that he is completely neutral in regards to breathing air. 🙂
gpuccio:
Yes, the algorithm would be more complex than the structure. So what? Where is the algorithm? With the Intelligent Designer. A trace of it is in the structure itself.
The algorithm attempts to answer the question of how ATP synthase was intelligently designed. Of course an omnipotent intelligent designer wouldn’t require that and could just design one from its mind.
Bornagain77 at #69 and #76 (and to all):
OK, so some people apparently disagree with me. I will try to survive.
But I would insist on the “apparently”, because again, IMO, you make some confusion in your quotes and their interpretation.
Let’s see. At #69, you make six quotes (excluding the internal reference to ET):
1. Shapiro.
I don’t think I can comment on this one. The quote is too short, and I do not have the book to check the context. However, the reference to “genome change operator” is not very clear. Moreover, the reference to “statistically significant non-random patterns” could simply point to some necessity effect that modifies the probability distribution, as in the case of the loaded dice. As explained, that does not make the system “non-random”. And that has nothing to do with guidance, design or creation.
2. Noble.
That “genetic change is far from random and often not gradual” is obvious. It is not random because it is designed, and it is well known that it is not gradual. I perfectly agree. That has nothing to do with random mutations, because design is of course not implemented by random mutations. This is simply a criticism of model a.
Another point is that some epigenetic modification can be inherited. Again, I have nothing against that. But of course I don’t believe that such a mechanism can create complex functional information and body plans. Neither do you, I believe. You say you believe in the “creation of kinds”.
3. and 4. Sternberg and the PLOS paper.
These are about transposons. I will address this topic specifically at the end of this post.
5. The other PLOS paper.
Here is the abstract:
This is simple. The paper, again, uses the terms “random” and “not random” incorrectly. It is obvious in the first sentence. The authors complain that mutations do not occur “roughly uniformly” in the genome, and that this would make them not random. But, as explained, the uniform distribution is only one of the many probability distributions that describe natural phenomena well. For example, many natural systems are well described, as is well known, by a normal distribution, which has nothing to do with a uniform distribution. That does not mean that they are not random systems.
The criticism of gradualism I have already discussed: I obviously agree, but the only reason for non-gradual variation is design. Indeed, neutral mutations are instead gradual, because they are not designed.
And what’s the problem with “environmental inputs”? We know very well that environmental inputs change the rate, and often the type, of mutation. Radiation, for example, does that. We have known that for decades. That is no reason to say that mutations are not random. They are random, and environmental inputs do modify the probability distribution. A lot. Are these authors really discovering, in 2019, that a lot of leukemias were caused by the bomb in Hiroshima?
6. Wells.
He is discussing the interesting concept of somatic genomic variation.
Here is the abstract of the paper to which he refers:
As you can see (if you can read that abstract impartially), the paper does not mention in any way anything that supports Wells’ final (and rather gratuitous) statement:
“From what I now know as an embryologist I would say that the truth is the opposite: Tissues and cells, as they differentiate, modify their DNA to suit their needs. It’s the organism controlling the DNA, not the DNA controlling the organism.”
Indeed, the paper says the opposite: that somatic genomic variations are important to better understand “the etiology of genetic diseases such as cancer”. Why? The reason is simple: because they are random mutations, often deleterious.
Ah, and by the way: of course somatic mutations cannot be inherited, and therefore have no role in building the functional information in organisms.
So, as you can see (but will not see) you are making a lot of confusion with your quotations.
The only interesting topic is transposons. But it’s late, so I will discuss that topic later, in the next post.
Bornagain77 at #82:
Emphasis mine.
That’s unfair and not true.
I quote myself at #77:
“One of my strong choices is that my philosophy of science (and my theology, too) tell me that my scientific reasonings must not (as far as it is humanly possible) be influenced by my theology. In any way. So, I really strive to achieve that (and it’s not easy).”
No comments.
You see, the difference between your position and my position is that you are very happy to derive your scientific ideas from your theology. I try as much as possible not to do that.
As said, both are strong choices. And I respect choices. But that’s probably one of the reasons why we cannot really communicate constructively about scientific things.
ET at #83:
“Yes, the algorithm would be more complex than the structure. ”
OK.
“So what? Where is the algorithm? With the Intelligent Designer. ”
??? What do you mean? I really don’t understand.
“A trace of it is in the structure itself.”
The structure allows us to infer design. I don’t see what in the structure points to some specific algorithm. Can you help?
“The algorithm attempts to answer the question of how ATP synthase was intelligently designed. ”
OK, I am not saying that the designer did not use any algorithm. Maybe the designer is there in his lab, and has a lot of computers working for him in the process. But:
a) He probably designed the computers too.
b) His conscious cognition is absolutely necessary to reach the results. Computers do the computations, but it’s consciousness that defines purposes, and finds strategies.
In any case, design happens when the functional information is inputted into the material object we observe. So, if the designer inputs information after having computed it in his lab, that is not really relevant.
I thought that your mention of an algorithm meant something different. I thought you meant that the designer designs an algorithm and puts it in some existing organism (or place), and that such an algorithm then computes ATP synthase or whatever else. So, if that is your idea, again I ask: what facts support the existence of such an independent physical algorithm in physical reality?
The answer is simple enough: none at all.
” Of course an omnipotent intelligent designer wouldn’t require that and could just design one from its mind.”
I have no idea if the biological designer is omnipotent, or if he designs things from his mind alone, or if he uses computers or watches or anything else in the process. I only know that he designs biological things, and must be conscious, intelligent and purposeful.
Gp 77 and 85 disingenuously claims that he is the one being ‘scientific’ while trying, as best he can, to keep God out of his science. Hogwash! His model specifically makes claims as to what he believes the designer, i.e. God, is and is not doing, i.e. Johnny Cash’s “One Piece at a Time”.
Perhaps Gp falsely believes that if he compromises his theology enough then he is somehow being more scientific than I am? Again, hogwash. As I have pointed out many times, assuming Methodological Naturalism as a starting assumption (as Gp seems bent on doing in his model, as far as he can do it without invoking God) results in the catastrophic epistemological failure of science itself. (See bottom of post for refutation of methodological naturalism.)
Bottom line: Gp, instead of being more scientific than I, as he is falsely trying to imply (much like Darwinists constantly try to falsely imply), has instead produced a compromised, bizarre, and convoluted model. A model that IMHO does not stand up to even minimal scrutiny. And a model that no self-respecting Theist or even Darwinist would ever accept as being true. A model that, as far as I can tell, apparently only Gp himself accepts as being undeniably true.
Gp has, in a couple of instances now, tried to imply that I (and others) do not understand randomness. In regards to Shapiro Gp states,
Might I suggest that it is Gp himself who does not understand randomness. As far as I can tell, Gp presupposes complete randomness within his model (completely free from ‘loaded dice’), and that is one of the main reasons that he states that he can think of no “other possible explanation” to explain the sequence data. Yet, if ‘loaded dice’ are producing “statistically significant non-random patterns” within genomes, then that, of course, falsifies Gp’s assumption of complete randomness in his model. Like I stated before, ‘directed’ mutations (and/or ‘loaded dice’, to use Gp’s term) are ‘another possible explanation’ that I can think of.
Bornagain77:
OK, I think I will leave it at that with you. Even if you don’t.
To all:
Of course, I will make the clarifications about transposons as soon as possible.
Once again (along with others) thank you for a very interesting and evocative OP. On the other hand, as a mild criticism: I am just an uneducated layman when it comes to biochemistry, so I am continuously trying to get up to speed on the topic. I think I get the gist of what you are saying, but I imagine someone stumbling onto this site for the first time is going to find this topic way over their head. Maybe something of a basic summary which briefly explains transcription, the role of RNA polymerase and the difference between prokaryotic and eukaryotic transcription would be helpful (or a link to such a summary if you have done that somewhere else).
As for myself I think I get the gist of what you are saying but I am a little confused by differences between prokaryotic and eukaryotic transcription. (Most of my study and research has been centered on the prokaryote. If you can’t explain the natural selection + random variation evolution in prokaryotes it’s game over for Neo-Darwinism. There has to be another explanation.) For example, one question I have is, are there transcription factors for prokaryotes? According to Google, no.
Is that true? What about the Sigma factor which initiates transcription in prokaryotes and the Rho factor which terminates it? Isn’t that essentially what transcription factors, which come in two forms, activators and repressors, do in eukaryotic transcription? Are Sigma factors and Rho factors the same in all prokaryotes or is there a species difference?
As far as termination in eukaryotes goes, one educational video I ran across recently (it’s dated to 2013) said that it is still unclear how termination occurs in eukaryotes. Is that true? In prokaryotes there are two ways transcription is terminated: there is Rho-dependent, where the Rho factor is utilized, and Rho-independent, where it isn’t. Do we know any more six years later?
Hopefully answering those kinds of questions can help me and others. (Of course, they’re going to have to do some homework on their own.)
Hi gpuccio
Thanks for the interesting post. From my study, cell control comes from the availability of transcription-acting molecules in the nucleus. They can be either proteins or small molecules that are not transcribed but obtained from other sources like enzyme chains. Testosterone and estrogen are examples of non-transcribed small molecules. How this is all coordinated so that a living organism can reliably operate is fascinating, and I am thrilled to see you start this discussion. Great to have you back 🙂
John_a_designer:
Thank you for your very thoughtful comment.
Yes, in this OP and in others I have dealt mainly with eukaryotes. But of course you are right, prokaryotes are equally fascinating, maybe only a little bit simpler, and, as you say:
“If you can’t explain the natural selection + random variation evolution in prokaryotes it’s game over for Neo-Darwinism. There has to be another explanation”.
And game over it is, because the functional complexity in prokaryotes is already overwhelming, and can never be explained by RV + NS.
It is no coincidence that the example I use probably most frequently is ATP synthase. And that is a bacterial protein.
You describe the transcription system in prokaryotes very correctly. It’s certainly much simpler than in eukaryotes, but still its complexity is mind-boggling.
I think the system of TFs is essentially eukaryotic, but of course a strict regulation is present in prokaryotes too. You mention sigma factors and rho, of course, and there is the system of activators and repressors. But there are big differences, starting from the very different organization of the bacterial chromosome (histone-independent supercoiling, and so on).
Sigma factors are in some way the equivalent of generic TFs. According to Wikipedia, sigma factor “is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB”.
Maybe. I have blasted sigma 70 from E. coli against human TFIIB and found no detectable homology (E value 1.4). So, there seems to be little conservation here.
I have blasted the same E. coli sigma 70 against all bacteria, excluding proteobacteria (the phylum of E. coli). I would say that there is good conservation in different types of bacteria: up to 1251 bits in firmicutes, 786 bits in actinobacteria, 533 bits in cyanobacteria, and so on. So, this molecule seems to be rather conserved in bacteria.
I think that eukaryogenesis is one of the most astounding designed jumps in natural history. I do accept that mitochondria and plastids are derived from bacteria, and that some important eukaryotic features are mainly derived from archaea, but even those partial derivations require tons of designed adjustments. And that is only the tip of the iceberg. Most eukaryotic features (the nuclear membrane and nuclear pore, chromatin organization, the system of TFs, the spliceosome, the ubiquitin system, and so on) are essentially eukaryotic, even if of course some vague precursor can be detected, in many cases, in prokaryotes. And each of these systems is a marvel of original design.
Bill Cole:
Great to hear from you! 🙂
And let’s not forget lncRNAs (see comments #52, #67 and #68 here).
GP
I don’t quite follow that. We create software that evaluates data and then produces functional information (visualizations). So, the design of that software happened when the visualization occurred? I think we normally say that the design occurred first in the mind of the software designer – it’s an idea (Mozart wrote symphonies entirely in his mind before putting on paper). Then, the designer creates algorithms that produce functional information. But the software is not the designer. It is the output of a designed process.
Again, with Mozart. The orchestra plays the symphony. Does this mean that the symphony could only be created as an independent physical text in physical reality? The facts say no – he had it in his mind.
I believe you are saying that a Designer enters into the world at various specific points of time, and intervenes in the life of organisms and creates mutations or functions at those moments. What facts support the existence of those interventions in time, versus the idea that the organism was designed with the capability and plan for various changes from the beginning of the universe? What evidence do we have of a designer directly intervening into biology?
Well, I think we could try to infer more than that – or not? Is the designer a biological organism? Or did the designer exist before life on earth existed? Is the designer a physical entity? What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth? How complex is the designer? More complex than the algorithms you mentioned? Does the designer move from one cell present in Southern California, USA and then travel to intervene in another cell in Tanzania? Or does the designer do such interventions simultaneously? In either answer, are there some facts that show what the designer does in these cases? If simultaneously, how big is the designer and what mechanisms are used to intervene simultaneously into billions of cells at various points of time? Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned. Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?
To all:
OK, now let’s talk briefly of transposons.
It’s really strange that transposons have been mentioned here as a refutation of my ideas. But life is strange, as we all know.
The simple fact is: I have been arguing here for years that transposons are probably the most important tool of intelligent design in biology. I remember that an interlocutor, some time ago, even accused me of inventing the “God of transposons”.
The simple fact is: there are many facts that do suggest that transposon activity is responsible for generating new functional genes, new functional proteins. And I think that the best interpretation is that transposon activity can be intelligently directed, in some cases.
IOWs, if biological design is, at least in part, implemented by guided mutations, those guided mutations are probably the result of guided transposon activity. We have no certainty of that, but it is a very reasonable scenario, according to known facts.
OK, but let’s put that into perspective, especially in relation to the confused and confounding statements that have been made or reported here about “random mutations”.
I will refer to the following interesting article:
The Impact of Transposable Elements in Genome Evolution and Genetic Instability and Their Implications in Various Diseases
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4196381/
So, the first question that we need to answer is:
a) How frequent are transposon-dependent mutations in relation to all other mutations?
There is an answer to that in the paper:
0.3% of all mutations. So, let’s admit for a moment that transposon-derived mutations are not random, as has been suggested in this thread. That would still leave 99.7% of all mutations that could be random. Indeed, that are random.
But let’s go on. I have already stated that I believe that transposons are an important tool of design. Therefore, at least some of transposon activity must be intelligently guided.
But does that mean that all transposon activity is guided? Of course, absolutely not.
I do believe that most transposon activity is random, and is not guided. Let’s read again from the paper:
And so on.
Have we any reason to believe that that kind of transposon activity is guided? Not at all. It just behaves like all other random mutations, which are often the cause of genetic diseases.
Moreover, we know that deleterious mutations are only a fraction of all mutations. Most mutations, indeed, are neutral or quasi-neutral. Therefore, it is absolutely reasonable that most transposon-induced mutations are neutral too.
And the design?
The important point, which can be connected to Abel’s important ideas, is that functional design happens when an intelligent agent acts to give a functional (and absolutely unlikely) form to a number of “configurable switches”.
Now, the key idea here is that the switches must be configurable. IOWs, if they are not set by the designer, their individual configuration is in some measure indifferent, and the global configuration can therefore be described as random.
The important point here is that functional sequences are more similar to random sequences than to ordered sequences. Ordered sequences cannot convey the functional information for complex function, because they are constrained by their order. Functional sequences, instead, are pseudo-random (not completely, of course: some order can be detected, as we know well). That relative freedom of variation is a very good foundation to use them in a designed way.
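One rough way to see why functional sequences sit closer to random sequences than to ordered ones is compressibility: an ordered sequence is fully determined by a short rule and compresses drastically, while a pseudo-random one resists compression. A toy sketch (zlib as a crude stand-in for algorithmic order; the sequences below are made up for illustration):

```python
import random
import zlib

def compressed_len(s: str) -> int:
    """Length of the zlib-compressed string: a crude proxy for how
    'ordered' (algorithmically simple) the sequence is."""
    return len(zlib.compress(s.encode()))

random.seed(0)
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino acid letters

ordered = "AC" * 500                                                 # pure repetition
random_seq = "".join(random.choice(ALPHABET) for _ in range(1000))   # no pattern

# The ordered sequence collapses to a few bytes; the random-like one
# stays near its original length, as functional protein sequences
# (being pseudo-random) largely would.
print(compressed_len(ordered) < compressed_len(random_seq))  # True
```

The point is only qualitative: order constrains a sequence, so it cannot carry much specification, while the near-incompressibility of pseudo-random sequences leaves room for functional information.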
So, the idea is: transposon activity is probably random in most cases. In some cases, it is guided. Probably through some quantum interface.
That’s also the reason why a quantum interface is usually considered (by me too) as the best interface between mind and matter: because quantum phenomena are, at one level, probabilistic, random, and that’s exactly the reason why they can be used to implement free intelligent choices.
To conclude, I will repeat, for the nth time, that a system is a random system when we cannot describe it deterministically, but we can proved some relatively efficient and useful description of it using a probability distribution.
There is no such thing as “complete randomness”. If we use a probability distribution to describe a system, we are treating that system as a random system.
Randomness is not an intrinsic property of events (except maybe at the quantum level). A random system, like the tossing of a coin, is completely deterministic in essence. But we are not able to describe it deterministically.
In the same way, random systems that do not follow a uniform distribution are random just the same. A loaded die is as random as a fair die. But, if the loading is so extreme that only one event can take place, it becomes a necessity system, which can very well be described deterministically.
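A minimal simulation can make the fair/loaded/degenerate distinction concrete; the face weights below are, of course, arbitrary illustrative values:

```python
import random
from collections import Counter

random.seed(1)

def roll(weights, n=10000):
    """Count n rolls of a six-sided die with the given face weights."""
    return Counter(random.choices([1, 2, 3, 4, 5, 6], weights=weights, k=n))

fair    = roll([1, 1, 1, 1, 1, 1])  # uniform: the textbook random system
loaded  = roll([5, 1, 1, 1, 1, 1])  # biased, but still random: all faces occur
extreme = roll([1, 0, 0, 0, 0, 0])  # degenerate: only one outcome is possible

print(fair)     # all six faces, roughly equal counts
print(loaded)   # face 1 dominates, yet every face still appears
print(extreme)  # face 1 only: a necessity system, describable deterministically
```

The loaded die still requires a probability distribution to be described; only the degenerate case can be handled as pure necessity.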
In the same way, there is nothing strange in the fact that some factors, acting as necessity causes, can modify a probability distribution. As a random system is in reality deterministic in essence, of course, if one of the variables acting in it is strong enough to be detected, that variable will modify the probability distribution in a detectable way. There is nothing strange in that. The system is still random (we use a probability distribution to describe it), but we can detect one specific variable that modifies the probability distribution (what has been called here, not so precisely IMO, a bias). That’s the case, for example, of radiation increasing the rate and modifying the type of random mutations, as in the great increase in leukemia cases at Hiroshima after the bomb. That has always been well known, even if some people seem to discover it only now.
In all those cases, we are still dealing with random systems: systems where each single event cannot be anticipated, but a probability distribution can rather efficiently describe the system. Mutations are a random system, except maybe for the rare cases of guided mutations in the course of biological design.
Finally, let me say that, of all the things of which I have been accused, “assuming Methodological Naturalism as a starting assumption” is probably the funniest. Next time, they will probably accuse me of being a convinced compatibilist! 🙂
Life is strange.
Silver Asiatic:
“I don’t quite follow that. We create software that evaluates data and then produces functional information (visualizations). So, the design of that software happened when the visualization occurred? I think we normally say that the design occurred first in the mind of the software designer – it’s an idea (Mozart wrote symphonies entirely in his mind before putting on paper). Then, the designer creates algorithms that produce functional information. But the software is not the designer. It is the output of a designed process.”
I perfectly agree. The designed object here is the software. The design happens when the designer writes the software, from his mind.
I see your problem. Let’s be clear. The software never designs anything, because it is not conscious. Design, by definition, is the output of form from consciousness to a material object.
But you seem to believe that the software creates new functional information. Well, it does in some measure, but it is not new complex functional information. This is a point that is often misunderstood.
Let’s say that the software produces visualizations exactly as programmed to do. In that case, it is easy. All the functional information that we get has been designed when the software was designed.
But maybe the software makes computations whose results were not previously known to the designer. That does not change anything: the computation process has been designed anyway. And computations are algorithmic; they do not increase the Kolmogorov complexity of the system. And that complexity is the functional complexity.
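The Kolmogorov point can be sketched as follows: however long or “random-looking” the output of a deterministic program is, its algorithmic complexity is bounded by the length of the program plus its inputs. A hypothetical illustration:

```python
import random

def generate(seed: int, n: int) -> bytes:
    """A short deterministic program that emits n pseudo-random bytes."""
    rng = random.Random(seed)
    return bytes(rng.getrandbits(8) for _ in range(n))

# 100,000 "random-looking" bytes, fully recoverable from a description
# only a few dozen bytes long: ("generate", seed=7, n=100_000).
out = generate(seed=7, n=100_000)

# Re-running the short description reproduces the long output exactly,
# so the output adds no complexity beyond the program and its inputs.
print(out == generate(seed=7, n=100_000))  # → True
print(len(out))                            # → 100000
```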
Finally, maybe the software uses new information from the environment. In that case, there will be some increase in functional information, but it will be very low, if the environment does not contain complex functional information. IOWs, the environment cannot teach a system how to build ATP synthase, except when the sequence of ATP synthase (or, for that matter, of a Shakespeare sonnet in the case of language) is provided externally to the system.
Now I must go. More in next post.
GP
Good answer, thank you.
Yes, but I think this answers your question about a Designer who created algorithms. In a software output, it can be programmed to create information that was not known to the designer. That information actually causes other things to happen. I would think that it is the definition of complex, specified, functional information. We observe the software creating that information, and rightly infer that the information network (process) was designed. But do we, or can we know that the designer was unaware of what the software produced?
I don’t think so. We do not have access to the designer’s mind. We only see the software and what it produces. We know it is the product of design. But we do not know if the functional information was designed for any specific instance, or if it is the output of a previous design farther back, invisible to us.
This, I think, is the case in biology.
I believe you are saying that the design occurs at various discrete moments where a designer intervenes, and not that the design occurred at some distant time in the past and is merely being worked out by “software”. What we observe shows functional information, but this information may either be created directly by the designer at the moment, or it may be an output of a designed system.
I do not see how we could distinguish between the two options.
With software, we can observe the inputs and calculations and we can determine that the software created something “new”‘. It is all the output of design, but we can trace what the software is doing and therefore infer where the “design implementation” took place.
It’s that term that is the issue here, really.
It is “design implementation”. Where and when was the design (in the mind of the designer) put into biology?
I do not believe that is a question that ID proposes an answer for, and I also do not believe it is a scientific question.
Gp states,
“That would still leave 99.7% of all mutations that could be random. Indeed, that are random.”
LOL, just can’t accept the obvious can he? Bigger men than you have gone to their deaths defending their false theories Gp. 🙂
To presuppose that the intricate molecular machinery in the cell is just willy nilly moving stuff around on the genome is absurd on its face. And yet that is ultimately what Gp is trying to argue for.
Of note: It is not on me to prove a system is completely deterministic in order to falsify Gp’s model. I only have to prove that it is not completely random in order to falsify his model. And that threshold has been met.
Perhaps Gp would also now like to still defend the notion that most (90%+) of the genome is junk?
Silver Asiatic:
It’s not really question of knowing what is in the mind of the designer. The problem is: what is in material objects?
Let’s go back to ATP synthase. Please, read my comment #74.
So, I think we can agree that any algorithm that can compute the sequences for ATP synthase would be, by far, more complex than ATP synthase itself.
So, let’s say, just for a moment, that the designer does not design ATP synthase directly. Let’s say that the designer designs the algorithm. After all, he is clever enough.
So, he designs the algorithm. But, of course, he must implement it in a material object. A material object that can do the computations and then build the computed outcome (IOWs, ATP synthase).
OK, so my simple question is: where is, or was, that object? The computing object?
I am aware of nothing like that in the known universe.
Maybe it existed 4 billion years ago, and now it is lost?
Well, everything is possible, but what facts support such an idea?
None at all. Have we traces of that algorithm, indications of how it worked? Have we any idea of the object where it was implemented? It seems reasonable that it was some biological object, probably an organism. So, what are we hypothesizing? That 4 billion years ago the designer designed and implemented some extremely complex organism capable of computing ATP synthase, only to compute ATP synthase for bacteria, and that such a complex organism then disappeared without leaving any trace of itself?
What’s the sense of such a scenario? What scientific value has it? The answer is simple: none.
Of course, the designer designed ATP synthase when it was needed, and not some mysterious algorithm, never seen, to compute its information.
And there is more: such a complex algorithm, made to compute ATP synthase, certainly could not compute another, completely different, protein system, like for example the spliceosome. Because that’s another function, another plan. A completely different computation would be needed, a different purpose, a different context.
So, what do we believe? That the designer designed, later, another complex organism with another complex algorithm to compute and realize the spliceosome? And the immune system? And our brain?
Or that, in the beginning, there was one organism so complex that it could compute the sequences of all future necessary proteins, protein systems, lncRNAs, and so on? A monster of which no trace has remained?
OK, I hope that’s enough.
Silver Asiatic:
You also say:
“What evidence do we have of a designer directly intervening into biology?”
That’s rather simple. The many well-known examples of sudden appearance in natural history of new biological objects full of tons of new complex functional information, information that did not exist at all before.
For example, I have analyzed quantitatively the transition to vertebrates, which happened more than 400 million years ago, in a time window of probably 20 million years, and which involved the appearance, for the first time in natural history, of about 1.7 million bits of new functional information. Information that, after that time, has been conserved up to now.
This is the evidence of a design intervention, specifically localized in time.
Of course, there is instead no evidence at all that the organisms that existed before included any complex algorithm capable of computing those 1.7 million bits of functional information.
You say:
“Well, I think we could try to infer more than that – or not? Is the designer a biological organism? Or did the designer exist before life on earth existed? Is the designer a physical entity? What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth? How complex is the designer? More complex than the algorithms you mentioned? Does the designer move from one cell present in Southern California, USA and then travel to intervene in another cell in Tanzania? Or does the designer do such interventions simultaneously? In either answer, are there some facts that show what the designer does in these cases? If simultaneously, how big is the designer and what mechanisms are used to intervene simultaneously into billions of cells at various points of time? Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned. Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?”
These are good questions. To many of them, we cannot at present give answers. But not all.
“Is the designer a biological organism? Is the designer a physical entity?”
I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.
“Did the designer exist before life on earth existed?”
This is easy. A designer was certainly responsible for OOL on our planet. OOL is of course one of the events that scream design with the highest strength. So the answer is yes: the designer, or at least the designer who designed life on our planet, certainly existed before.
“What facts show that an entity is capable of intervening physically inside of the functions of every cell of every living being on earth?”
Well, we humans, as conscious beings, are entities capable of intervening inside the functions of most cells in our brain or nervous system, and many in our bodies. That’s how our consciousness is interfaced to our body.
Why shouldn’t some other conscious entity be able to do something similar with biological organisms? And again, there is no need for the interface to reach all cells of all organisms. The strict requirement is only for those organisms where the design takes place.
“How complex is the designer?”
We don’t know. How complex is our consciousness, if separated from our body? We don’t know how complex non physical entities need to be. Maybe the designer is very simple. Or not.
This answer is valid for many other questions: we don’t understand, at present, how consciousness can work outside of a physical body. Maybe we will understand more in the future.
“Does the designer decide to intervene minute-by-minute based on various criteria? Or are the interventions pre-planned.”
I don’t know when or how the designer decides things. But I know when he does things. For example, he introduced the functional information for vertebrates, all those 1.7 million bits, in some pre-existing organism (probably the first chordates), approximately in those 20 million years when vertebrates appear on earth.
“Does the designer use tools to carry out interventions? Or does he have appendages that enable him to tweak mutations (like with his fingers)?”
Most likely he uses tools. Of course the designer’s consciousness needs to interface with matter, otherwise no design could be possible. That is exactly what we do when our consciousness interfaces with our brain. So, no big problem here.
The interface is probably at quantum level, as it is probably in our brains. There are many events in cells that could be more easily tweaked at quantum level in a consciousness related way. Penrose believes that a strict relationship exists in our brain between consciousness and microtubules in neurons. Maybe.
I think, as I have said many times, that the most likely tool of design that we can identify at present are transposons. The insertions of transposons, usually random (see my previous posts), could be easily tweaked at quantum level by some conscious intervention. And there is some good evidence that transposons are involved in the generation of new functional genes, even in primates.
That’s the best I can do to answer your questions. Just a note: my answers here are highly tentative, but they are strictly empirical. They do not derive from any theological agenda. These are the ideas that, in my opinion, are more reasonable to explain the facts as we know them. Right or wrong as they may be, this is the spirit in which I express them.
GP,
The first graphic illustration shows the mechanism of NF-kB action, which you associated with the canonical activation pathway “summarized” in figure 1.
Figure 1, without breaking it into more detail, could qualify as a complex mechanism.
Is it possible that such an explicit graphic illustration, which includes so many details, is a simplification of the real thing? Aren’t all the control procedures associated with this mechanism shown in the figure? Are any important details missing, or just irrelevant details?
Well, you answered those questions when you elaborated on those details in the OP.
In this particular example, we first see the “signals” shown in figure 1 under the OP section “The stimuli”.
Thus, what in figure 1 appears as a few colored objects and arrows is described in more detail, showing the tremendous complexity of each step of the graphic, especially the receptors in the cell membrane.
Can the same be said about every step within the figure?
Yes, I see that.
Illuminating thread otherwise.
GP,
Fascinating topic and interesting discussion, though sometimes unnecessarily personal. Scientific discussions should remain calm, focused on details, unbiased. At the end we want to understand more. Undoubtedly, biology today is not easy to understand well in all details, and it doesn’t look like it could get easier anytime soon.
Someone asked:
“What evidence do we have of a designer directly intervening into biology?”
Could the answer include the following issues?
OOL, prokaryotes, eukaryotes, and, according to Dr Behe (who said that at one point he would point to the class level, but now would focus at least on the family level), the physiological differences between cats and dogs allegedly proceeding from a common ancestor, for which the Darwinian paradigm lacks explanatory power.
You have pointed to the intentional insertion of transposable elements into the genetic code as another piece of empirical evidence. I think you’ve also mentioned the splicing mechanisms. Perhaps any of the complex functional mechanisms that appeared at some points could be attributed to conscious intentional design?
Does CD stand for common design or common descent with designed modifications?
Does “common” relate to the observed similarities?
For example, in the case of cats and dogs, “common” relates to their observed anatomical and/or physiological similarities, which were mostly designed too?
Upright BiPed:
“Illuminating thread otherwise.”
Thank you! 🙂
PeterA:
“Is it possible that such an explicit graphic illustration, which includes so many details, is a simplification of the real thing?”
Of course it is. A gross simplification. Many important details are missing.
For example:
Only two kinds of generic signals and receptors are shown. As we have seen, there are a lot of different specific receptors.
The pathways that connect each specific type of receptor to IKK are not shown (they are shown as simple arrows). But they are very complex and specific. I have given some limited information in the OP and in the discussion.
Only the canonical pathway is shown.
Only the most common type of dimer is shown.
Coactivators and interactions with other pathways are not shown or barely mentioned.
Of course, lncRNAs are not shown.
And so on.
Of course, the figure is there just to give a first general idea of the system.
Pw:
“Could the answer include the following issues?”
Yes, of course.
“You have pointed to the intentional insertion of transposable elements into the genetic code asanother empirical evidence. I think you’ve also mentioned the splicing mechanisms. Perhaps any of the complex functional mechanisms that appeared at some points could be attributed to conscious intentional design?”
All of them, if they are functionally complex. That’s the theory. That’s ID. The procedure, if correctly applied, should have no false positives.
“Does CD stand for common design or common descent with designed modifications?”
CD stands just for “common descent”. I suppose that each person can add his personal connotations. Possibly making them explicit in the discussion.
I have explained that for me common descent just means a physical continuity between organisms, but that all new complex functional information is certainly designed. Without exceptions.
So, I suppose that “common descent with designed modifications” is a good way to put it.
Just a note about the universality. Facts are very strong in supporting common descent (in the sense I have specified). It remains open, IMO, whether it is really universal: IOWs, whether all forms of life have some continuity with a single original event of OOL, or whether more than one event of OOL took place. I think that at present universality seems more likely, but I am not really sure. I think the question remains open. For example, some differences between bacteria and archaea are rather amazing.
“Does “common” relate to the observed similarities ?”
Common, in my version of CD, refers to the physical derivation (for existing information) from one common ancestor. So, let’s say that at some time there was in the ocean a common ancestor of vertebrates: maybe some form of chordate. And at some time, vertebrates are already split into cartilaginous fish and bony fish. If both cartilaginous fish and bony fish physically reuse the same old information from a common ancestor, that is common descent, even if, of course, all the new information is added by specific design.
I really don’t understand how that could be explained without any form of physical descent. Do they really believe that cartilaginous fish were designed from scratch, from inanimate matter, and that bony fish were also designed from scratch, from inanimate matter, but separately? And that the supposed ancestors, the first chordates, were also designed from scratch? And the first eukaryotes? And so on?
PeterA:
Maybe you can look at this more detailed figure for the different stimuli, receptors and receptor connections to the activation pathway:
https://rockland-inc.com/nfkb-signaling-pathway.aspx
gpuccio:
The Designer is never seen.
The point of the algorithm was to address the “how” the Intelligent Designer designed living organisms and their complex parts and systems. The way ATP synthase works, by squeezing the added “P” onto ADP and not by some chemical reaction, is a clue- for me, anyway. It just seems like something an algorithm would tease out- and that comes from knowledge of many GA’s that have created human inventions.
I would love to see how you made that determination, especially in the light of the following:
Gpuccio
Thank you for your detailed replies on some complex questions. You explained your thoughts very clearly and well.
I think what science can show is that 1.7 million bits of FI appear. What is not shown is how they appeared there. Regarding a complex algorithm, the designing-mind itself had to have immense capabilities. Algorithms are programmatic functions which start in a mind. Organisms could have been programmed to trigger innovations over time.
Here is where it starts to get difficult. On the same basis that we say that there is no evidence of physical designers, we have to say there is no evidence of immaterial designers. Science cannot evaluate immaterial entities. So, our speculations here take us outside of science. I don’t think we can say that we have empirical evidence of immaterial entities or beings. The absence of evidence (in this case of physical designers) does not mean that we have direct evidence of immaterial designers.
That is good. We do not know if there is one or multiple designers, or if the designer of life is the same as the one who developed and changed life. But some designing intelligence existed before life on earth did. That designer would not be a terrestrial, biological entity.
I don’t think we have any direct, scientific experience with an immaterial, pre-biological conscious entity. Additionally, we do not see that human consciousness can create life, for example, or that it could mutate bacteria to create plants, birds, fish, animals and other humans. We don’t see that human consciousness can intervene and create other consciousnesses. We might say that the entire population of human beings has affected the earth – would this suggest that there is a huge population of designers affecting mutations?
I’d think that the activity of mutations within organisms is such that a continual monitoring would be required in order to achieve designed effects, but perhaps not. Even if it is only cells where there were innovations that seems to be quite a lot of intervention.
I think this cuts against your concern about complex algorithms. The designer may be very complex. Algorithms created by the designer may be complex also. Additionally, I do not think that science has established that human consciousness is a non-physical entity, or that human consciousness can exist separated from a body.
The options I see for this introduction of information are:
1. Direct creation of vertebrates
2. Guided or tweaked mutations
3. Pre-programmed innovations that were triggered by various criteria
4. Mutation rates are not constant but can be accelerated at times
5. We don’t know
GP
I think we have to say that we do not know. As previously, you stated that we do not know how complex the designer is. An algorithm is a method of calculation which would be resident in the mind of the designer. The level of complexity of that algorithm, for a designer capable of creating life on earth, does not seem to be a problem.
The algorithm could be computed by an immaterial entity. The designer, I think you’re saying, created immaterial consciousnesses (human) so could create immaterial algorithms that programmed life from the beginning. So, there would be one single Design act, and then everything after that is an output.
If the computing agent is immaterial then you could have no scientific evidence of it.
I think we are saying that science cannot know this. Additionally, you refer to “the designer” but there could be millions of designers. Again, science cannot make a statement on that.
You propose an immaterial designer: is it subject to conditions of space and time? In any case, that proposal can have no scientific value. Science cannot directly investigate immaterial entities. Science can look at the effects of entities, but cannot evaluate them.
I don’t think that conclusion is obvious. Why did the design have to occur when needed and not before? And again, the algorithm could have been administered by an immaterial agent, which we could never observe scientifically. There’s no way for science to know this.
ET at #109:
Correct. But, as I have said, the designer need not be physical. I believe that consciousness can exist without being necessarily connected to a physical body. I have explained this at #101 (to Silver Asiatic). I quote myself:
“Is the designer a biological organism? Is the designer a physical entity?”
I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.
An algorithm, instead, needs to be physically instantiated. An algorithm is not a conscious agent. It works like a machine. It needs a physical “body” to exist and work.
ATP synthase squeezes the P using mechanical force from a proton gradient. It works like a water mill. Do you really believe that any generic algorithm would design such a thing, if the designer does not code the idea and a lot of details into the algorithm itself?
Algorithms compute, and do nothing else. They are sophisticated abacuses, nothing more. The amazing things that they do are simply due to the specific configurations designed for them by conscious intelligent beings.
Maybe the designer needed some algorithm to do the computations, if his computing ability is limited, like ours. Maybe not. But, if he used some algorithm, it seems not to have happened on this planet, or he accurately destroyed any trace of it. Don’t you think that these are just ad hoc reasonings?
I am not aware that what Spetner says is true by default. Again, I don’t know his thought in detail, and I don’t want to judge.
But there are a lot of facts that tell us that most mutations are random, neutral or deleterious. I have mentioned the many human diseases caused by mutations that follow no specific pattern, both normal mutations and transposon-associated ones. See comments #64 and #96.
The always precious Behe has clearly shown that differentiation at a low level (let’s say inside families) is just a matter of adaptation through loss of information, never a generation of new functional information. To be clear, the loss of information is random, due to deleterious mutations, and the adaptation is favoured by an occasional advantage gained in specific environments, therefore by NS. This is the level where the neo-darwinian model works. But without generating any new functional information. Just by losing part of it. This is Behe’s model (see polar bears). And it is mine, too.
For the rest, actual design is always needed.
gpuccio:
I don’t see any issues with it. There is a Scientific American article from over a decade ago titled “Evolving Inventions”. One invention had a transistor in it that did not have its output connected to anything. The point being that the only details required are what is needed to get the job done, i.e. connecting a “P” to ADP.
And for every genetic disease there are probably thousands of changes that do not cause one.
Silver Asiatic at #110:
Well, when you have facts, science has to propose hypotheses to explain them. Neo-darwinism is one hypothesis, and it does not explain what it should explain. Design is another hypothesis. You can’t just say: it happened, and not try to explain it. That’s not science.
Everything is possible. But my points are:
a) There is no trace of those algorithms. They are just figments of the imagination.
b) There are severe limits to what an algorithm can do. An algorithm cannot find solutions to problems for which it has not been programmed to find solutions. An algorithm just computes. Only consciousness has cognitive representations, understanding and purpose.
Regarding innovations, I am afraid they are limited to what Behe describes, plus maybe some limited cases of simple computational adaptation. Innovations exist, but they are always simple.
I strongly disagree. Here you are indeed assuming methodological naturalism, something that I consider truly bad philosophy of science (even if I have been recently accused of doing exactly that).
Science can investigate anything that produces observable facts. In no way is it limited to “matter”. Indeed, many of the most important concepts in science have nothing to do with matter. And science does debate ideas and realities about which we still have no clear understanding; see dark matter and especially dark energy. Why? Because those things, whatever they may be, seem to have detectable effects, to generate facts.
Moreover, consciousness is in itself a fact. It is subjectively perceived by each of us (you too, I suppose). Therefore it can and must be investigated by science, even if, at present, science has no clear theory about what consciousness is.
Design is an effect of consciousness. There is no evidence that consciousness needs to be physical. Indeed, there is good evidence of the contrary, but I will not discuss it now.
However, design, functional information and consciousness are certainly facts that need to be investigated by science. Even if the best explanation, maybe the only one, is the intervention of some non physical conscious agent.
Correct.
Not physical, therefore not biological. Terrestrial? I don’t know. A non-physical entity could well, in principle, be specially connected to our planet. Or not, of course. If we don’t know, we don’t know.
You seem to confuse three different concepts: functional information, life and consciousness.
ID is about the origin of functional information, in particular the functional information we observe in living organisms. It can say nothing about what life and consciousness are, least of all about how to generate those things.
Functional information is a configuration of material objects to implement some function in the world we observe. Nothing else. Complex functional information originates only from conscious agents (we know that empirically), but it tells us nothing about what consciousness is or how it is generated. And life itself cannot easily be defined, and it is probably more than the information it needs to exist.
As humans, we can design functional information. We can also design biological functional information, even rather complex. OK, we are not really very good. We cannot design anything like ATP synthase. But, in time, we can improve.
Designers can design complex functional information. More or less complex, good or bad. But they can do it. But human designers, at present, cannot generate life. Indeed, we don’t even know what life is. That is even more true of consciousness.
And again, I don’t think we can say how many designers have contributed to biological design. Period.
It is a lot of intervention. And so?
He could also be very simple.
Science has established practically nothing about the nature of consciousness. But there is time. Certainly, it has not established that consciousness derives from the physical body.
5 is true enough, but after that 2 is the only reasonable hypothesis. Intelligent selection can have a role too, of course, like in human protein engineering. But I think that transposons act as a form of guided mutation.
Gp states, ” I think that at present universality seems more likely, but I am not really sure. I think the question remains open.”
Thank you very much for at least admitting that degree of humility on your part.
Silver Asiatic at #111
I disagree. Algorithms, as I have already explained, are configurations of material objects. We were discussing algorithms on our planet, not imaginary algorithms in the mind of a conscious agent of whom we know almost nothing.
My statement was about a real algorithm really implemented in material objects. To compute ATP synthase, that algorithm would certainly be much more complex than ATP synthase itself.
But all these reasonings are silly. We have no example of algorithms in nature, even in the biological world, which do compute new complex functional objects. Must we still waste our time with fairy tales?
OK, I hope it’s clear that this is the theory I am criticizing. Certainly not mine.
And I have never said, or discussed, that “The designer created immaterial consciousnesses (human)”. As said, ID can say nothing about the nature of consciousness. ID just says that functional information derives from consciousness. And the designer need not have “created” anything. Design is not creation.
The designer designs biological information. Not human consciousness, or any other consciousness. Not “immaterial algorithms”. Design is the configuration of material objects, starting from conscious representations of the designer. As said so many times.
Not true, as said. Immaterial realities that cause observable facts can be inferred from those facts.
Instead, a physical algorithm existing on our planet should leave some trace of its physical existence. This was my simple point.
Not having a physical body does not necessarily mean that an entity is not subject to space and time. The interventions of the designer on matter are certainly subject to those things.
About science, I have already answered. Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.
ET at #113:
Well, I do. Let’s say that we have different ideas about that.
Of course. And they are called neutral or quasi neutral random mutations. When they are present in more than 1% of the whole population, they are called polymorphisms.
PeterA and all:
An interesting example of complexity is the CBM signalosome. As said briefly in the OP, it is a protein complex made of three proteins:
CARD11 (Q9BXL7): 1154 AAs in the human form. Also known as CARMA1.
BCL10 (O95999): 233 AAs in the human form.
MALT1 (Q9UDY8): 824 AAs in the human form.
These three proteins have the central role in transferring the signal from the specific immune receptors in B cells (BCR) and T cells (TCR) to the NF-kB activation system (see Fig. 3 in the OP).
IOWs, they signal the recognition of an antigen by the specific receptors on B or T cells, and start the adaptive immune response. A very big task.
The interesting part is that those proteins appear practically only in vertebrates, because the adaptive immune system starts in jawed fishes.
So, I have made the usual analysis of the information jump in vertebrates for these three proteins. Here are the results, which are rather impressive, especially for CARD11:
CARD11: absolute jump in bits: 1280; in bits per aminoacid (bpa): 1.109185
BCL10: absolute jump in bits: 165.1; in bits per aminoacid (bpa): 0.7085837
MALT1: absolute jump in bits: 554; in bits per aminoacid (bpa): 0.6723301
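As a side note, the bits-per-aminoacid values above are simply each protein’s absolute information jump divided by the length of its human form. A minimal sketch of that arithmetic (using only the numbers just listed):

```python
# Bits per aminoacid (bpa) = absolute information jump / human protein length.
# The jumps and lengths are the CARD11, BCL10 and MALT1 figures quoted above.
proteins = {
    "CARD11": {"jump_bits": 1280.0, "length_aa": 1154},
    "BCL10":  {"jump_bits": 165.1,  "length_aa": 233},
    "MALT1":  {"jump_bits": 554.0,  "length_aa": 824},
}

for name, p in proteins.items():
    bpa = p["jump_bits"] / p["length_aa"]
    print(f"{name}: absolute jump = {p['jump_bits']} bits, bpa = {bpa:.7g}")
```

Running this reproduces the bpa figures quoted in the comment (1.109185, 0.7085837, 0.6723301).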
I am adding to the OP a graphic that shows the evolutionary history of those three proteins, in terms of human conserved information.
GP (101)
“…we should have some evidence of that. But there is none.”
This is where you lost me. Isn’t what you so painstakingly analyse here and in other OPs something that constitutes the said evidence? Maybe I am wrong and I have missed out part of the conversation. But it is exactly what we observe that strongly suggests design. It is precisely that. All the rest is immaterial. Consequently, it must be the evidence that you are saying does not exist. I hope I am just misinterpreting what you said there.
EugeneS:
The statement was:
““Is the designer a biological organism? Is the designer a physical entity?”
I will answer these two together. While we cannot say who or what the designer (or designers) is, I find very reasonable that he should not be a physical organism. The reason for that is, again, empirical, and is similar to my “confutation” of the imaginary algorithm: if one or more physical designers had been acting on our planet throughout natural history, we should have some evidence of that. But there is none. So the best hypothesis is that the designer or designers are not physical like us.”
What I mean is that the continuing presence of one or more physical designers, with some physical body, should reasonably have left some trace. A physical designer has to be physically present at all design interventions. And physical agents, usually, leave some trace of themselves. I mean, beyond the design itself.
Of course the design itself is evidence of a designer. But in the case of a non physical designer, we don’t expect to find further physical evidence, beyond the design itself. In the case of a physical designer, I would expect something, especially considering the many acts of design in natural history.
This is what I meant.
GP @108:
“Maybe you can look at this more detailed figure for the different stimuli, receptors and receptor connections to the activation pathway:
https://rockland-inc.com/nfkb-signaling-pathway.aspx”
Oh, no! Wow!
OK, you have persuaded me.
I’m convinced now.
Thanks!
GP
Yes, of course. I agree. I have missed out ‘physical’.
Maybe it is a distraction from the thread, but anyway. I recall one conversation with a biologist. I had posted something against Darwin’s explanation of why we can’t see another sort of life emerging. Correct me if I am wrong, but my understanding is that, basically, Darwin claimed that organic compounds that would have easily become life are immediately consumed by the already existing life forms. I was saying that this is a rubbishy argument. But according to my interlocutor, it actually wasn’t. My friend said it was extremely difficult to get rid of life in an experimental setting for abiogenesis. In relation to what we are discussing here, this claim effectively means that the existing life allegedly devours any signs of emerging life as soon as they appear. My answer at the time was, why don’t they put their test tubes in an autoclave? He said that this was not so easy as I thought, as getting rid of existing life also destroys the organic chemicals, and defeats the purpose.
Today, I still strongly believe it is a bad argument but for a different reason, i.e. due to the impossibility of the translation apparatus that relies on a symbolic memory and semiotic closure self-organizing. There is no empirical warrant to back the claim that such self-organization is possible.
What do you think about Darwin’s argument and, in particular, about the difficulty of creating the right conditions for a clean abiogenesis experiment?
EugeneS:
Of course they would never succeed, in an autoclave or elsewhere.
I suppose that Darwin’s argument was that, in the absence of existing life, the first organic molecules generated (by magic, probably) would have been more stable than what we can expect today. Indeed, today simple organic molecules have very short life in any environment because of existing forms of life.
The argument is however irrelevant. The simple truth is that simple organic molecules (Darwin was probably thinking of proteins, today they should be RNA to be fashionable) are completely useless to build life of any form.
Let’s be serious: even if we take all components, membrane, genome, and so on, for example by disrupting bacteria, and put them together in a test tube, we can never build a living cell.
This is the classic humpty dumpty argument, made here time ago, if I remember well, by Sal Cordova. It remains a formidable argument.
All reasonings about OOL from inanimate matter are, really, nothing more than fairy tales. They don’t even reach the status of bad scientific theories.
GPuccio
Again, thank you for clarifications and even repeating things you stated before. It has been very helpful.
I am not fully understanding several of your points which I will illustrate below:
Do you think that science can investigate God?
I believe that design is the ultimate creative act. Design is an action of creation with and for a purpose. It begins as a creative act in a conscious mind – a thought which did not exist before is created for a purpose. This thought is then implemented through various means. But how can there be design without creation? How can a purposeful act occur without it having been created by a mind?
How are immaterial objects constrained by space and time? What measurements can be performed on immaterial entities?
As I quoted you above ” Science can investigate anything that produces observable facts”, why is not ID evaluating the designer?
What scientific evidence do you have to show that the designer did not design human consciousness? Where do you think human consciousness comes from?
Again, an algorithm is a process or set of rules used for calculation or programmatic purposes. A designer can create an immaterial algorithm in an agent that acts on biological entities. There could be no direct evidence of such a thing, but the effects of it can be seen in the development of biological organisms.
GP
I mentioned Mozart’s symphonies which were designed in his conscious mind. They weren’t designed on paper or by musical instruments.
Also, if an immaterial entity created other immaterial entities, you would say “that is not an act of purposeful design”?
GP @106:
Regarding Fig. 1 in the OP:
“the figure is there just to give a first general idea of the system”
I agree. And it does it very well, specially within the context of the fascinating topic of your OP.
Even without the missing information that you listed:
the figure has many details that give a convincing idea of functional complexity.
Thus, after carefully studying the figure to understand the flow of functional information, and then learning how much is still missing, one can only wonder how anyone could believe that such a system arose through unguided physico-chemical events.
GP
Thanks very much. Could you point to the ‘humpty dumpty’ OP you mentioned?
Silver Asiatic:
As said many times, I don’t discuss God in a scientific context.
The correct answer is always the same: science can, and must, investigate everything that can be observed in reality. And, from observed things, infer ever better models of reality. Given that very broad definition, there are no other limitations.
You are equivocating on the meaning of “creation”. Of course all acts of design are “creative” in a very general sense. But of course, as everyone can understand, that was not the sense I was using. I was clearly speaking of “creation” in the specific philosophical/religious meaning: generating some reality from nothing. Design is not that. In material objects, design gives specific configurations to existing matter.
I always speak of design according to that definition, that I have given explicitly here:
https://uncommondescent.com/intelligent-design/defining-design/
This definition is the only one that is necessary in ID, because ID infers design from the material object.
You speak of a “creative act in a conscious mind”. Maybe, maybe not. We have no idea of how thoughts arise in a conscious mind. Moreover, as we are not trying to build a theory of the mind, or of consciousness, we are not interested in that.
The process of design begins when some form, already existing in the consciousness of the designer as a representation, is outputted to a material object. That is the process of design. That is what we want to infer from the material object. It is not creation, only the input of a functional configuration to an object.
Energy is not material, yet it exists in space and time. Dark energy is probably not material: indeed, we don’t know what it is. Can you say that it cannot exist in relation to space and time? Strange, because it apparently accelerates the expansion of the universe, and that seems to be in relation, very strongly, with space and time.
Whether we can or cannot measure something has nothing to do with the properties of that something. Things don’t wait for our measures to be what they are. Our ability to measure things evolves with our understanding of what things are.
You quote me saying: “Indeed, ID is not evaluating anything about the designer…” and then you comment:
This is quote mining of the worst kind. The original statement was:
” Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.”
Shame on you.
Again, misinterpretation, maybe intentional. Of course I am speaking of what we can infer according to ID theory. The designer that we infer in ID is the designer of biological information. We infer nothing about the generation of consciousness (I don’t use the term design, because as I have explained I speak of design only for material objects). As said, nobody here is trying to build a theory of consciousness. I have already stated clearly that IMO science has no real understanding of what consciousness is, least of all of how it originates. We can treat consciousness as a fact, because it can be directly observed, but we don’t understand what it is.
Could the designer of biological objects be also the originator of human consciousness? Maybe. Maybe not. I have nothing from which to infer an answer. Certainly not in ID theory. Which is what we are discussing here. And certainly I have no duty to show that the designer did not originate human consciousness, or that he did, because I have made absolutely no inferences about the origin of human consciousness. I have only said that we infer a designer for biological objects, not for human consciousness.
Again, everything is possible. I am not interested in what is possible, but in what is supported by facts.
You use the word “algorithm” to indicate mental contents. I have nothing against that, but it is not the way I use it, and it is of no interest for ID theory.
Again, ID theory is about inferring a design origin for some material objects. To do that, we are not interested in what happens in the consciousness of the designer; those are issues for a theory of the mind. We only need to know that the form we observe in the object originated from some conscious, intelligent and purposeful agent who inputted that form to the object starting from some conscious representation. If the configuration comes directly from a conscious being, design is proved.
All this discussion about algorithms is because some people here believe that the designer does not design biological objects directly, but rather designs some other object, probably biological, which then, after some time, designs the new biological objects by algorithmic computation programmed originally by the designer.
IOWs, this model assumes that the designer designs, let’s call it so, a “biological computer” which then designs (computes) new biological beings.
I have said many times that I don’t believe in this strange theory, and I have given my reasons to confute it.
However, in this theory the algorithm is not a conscious agent who designs: it is a biological machine, IOWs an object. That’s why in this discussion I use algorithm to indicate an object that can compute. Again, the algorithm is designed, because it is a configuration given to a biological machine by the designer, a configuration that can make computations.
If you want to know if a mental algorithm in a mind is designed, I cannot answer, because I am not discussing a theory of the mind here. Certainly, it is not designed according to my definition, because it is not a material object.
ID theory is simple, when people don’t try to pretend that it is complicated. We observe some object. We observe the configuration of the object. We ask ourselves if the object is designed: IOWs, did the configuration we observe originate as a conscious representation in a conscious agent, and was it then inputted purposefully into the object? We define an objective property, functional information, linked to some function that can be implemented using the object and that can be measured. We measure it. If the complexity of the function that can be implemented by the object is great enough, we infer a design origin for the object.
That’s all.
EugeneS:
I remember the argument mentioned by Sal Cordova, but it seems that the original argument was made by Jonathan Wells (or maybe someone else before him).
Here is an OP by V. J. Torley (the old VJT 🙂 ), defending the argument. It gives a transcript of the argument by Wells.
https://uncommondescent.com/intelligent-design/putting-humpty-dumpty-back-together-again-why-is-this-a-bad-argument-for-design/
IMO, the argument is extremely strong. OOL theories imagine that in some way some of the molecules necessary for life originated, and that some life was produced.
The simple fact is: we cannot produce life in any way, even using all the available molecules and structures that are associated to life on our whole planet.
The old fact is still a fact: life comes only from life.
Even when Venter engineers his modified genomes, he must put them in a living cell to make them part of a living being.
When scientists clone organisms, they must use living cells.
You cannot make a living cell from inanimate matter, however biologically structured it is.
And yet these people really believe that natural events did generate living cells, from completely unstructured inanimate matter!
It is simply folly. I will tell you this: if it were not for the simple ideological necessity that “it must have happened without design, because ours is the only game in town”, no serious scientist would ever consider for a moment any of the current theories for OOL. As I have said, they are not even bad scientific theories. They are mere imagination.
Silver Asiatic:
No. According to the definitions I have given, and that I always use when discussing ID, Mozart’s symphonies were designed when he put them on paper. Before that, they were conscious representations, and not designed objects. As said, we are not discussing how conscious representations take form in consciousness. In ID we are interested only in the design of objects.
Again, that would not be design in the sense I have given. Indeed, that problem has nothing to do with ID theory. Immaterial entities do not have a configuration that can be observed, and therefore no functional information can be measured for them. ID theory is not appropriate for immaterial entities. It is about designed objects.
For all interested:
About polar bears, and in support of Behe’s ideas:
Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears
https://www.cell.com/cell/fulltext/S0092-8674(14)00488-7
See also comments #75 and #112.
Again: polar bears do NOT have white fur. That is elementary school level knowledge in Massachusetts. “Lack of pigmentation”? It’s a translucent hollow tube! Luminescence: when sunlight shines on it, there is a reaction we call luminescence (another great word for sobriety checkpoints). The skin is black.
To claim that differential accumulation of genetic accidents, errors and mistakes just happened upon luminescence for polar bears, is extraordinary and without a means to test it. Count the number of specific changes already discussed and compare that to waiting for TWO mutations. You will see there isn’t enough time in the universe for Darwinian processes to pull it off.
GP @131:
Here’s another article also mentioning the cute polar bears:
Genetic diversity of CHC22 clathrin impacts its function in glucose metabolism
Matteo Fumagalli, Stephane M Camus, Yoan Diekmann, Alice Burke, Marine D Camus, Paul J Norman, Agnel Joseph, Laurent Abi-Rached, Andrea Benazzo, Rita Rasteiro, Iain Mathieson, Maya Topf, Peter Parham, Mark G Thomas, Frances M Brodsky
eLife 2019;8:e41517 DOI: 10.7554/eLife.41517
And here’s another one;
Environmental contaminants modulate the transcriptional activity of polar bear (Ursus maritimus) and human peroxisome proliferator-activated receptor alpha (PPARA)
Heli Routti, Mari K. Berg, Roger Lille-Langøy, Lene Øygarden, Mikael Harju, Rune Dietz, Christian Sonne & Anders Goksøyr
Scientific Reports volume 9, Article number: 6918 (2019)
DOI: 10.1038/s41598-019-43337-w
Here’s an article about the brown bears that mentions the polar bear cousins too:
Evolutionary history and palaeoecology of brown bear in North-East Siberia re-examined using ancient DNA and stable isotopes from skeletal remains
Alba Rey-Iglesia, Ana García-Vázquez, Eve C. Treadaway, Johannes van der Plicht, Gennady F. Baryshnikov, Paul Szpak, Hervé Bocherens, Gennady G. Boeskorov & Eline D. Lorenzen
Scientific Reports volume 9, Article number: 4462 (2019)
DOI: 10.1038/s41598-019-40168-7
Is it possible that the polar bears were affected by drinking so much Coca-Cola in TV commercials?
🙂
GP @129:
Thanks for referencing the discussion about the Humpty Dumpty argument. Very interesting indeed.
If all the king’s horses and all the king’s men couldn’t put Humpty together again, who else can do it?
🙂
GP,
I appreciate your answers at 107.
Please, let me ask you another question:
Why is there a drop in the black line in the last graphic in your OP? What does that mean? Loss of function?
Gpuccio
I responded to your statement:
You then said:
Those two statements actually conflict with each other. You ask me to assume your meaning of various terms (as if the meaning is obvious) but in this case, I assume that your first statement is incorrect and you corrected it with the second.
I was using the general and ordinary meaning of the term “design”. Whatever is designed, even if using previously existing material, is an act of creation. If that which at one moment was inanimate matter, suddenly, by an act of an intelligent agent becomes a living organism – that is a creation. The designer created something that did not exist before. You limited the term creation to only those acts which are ex nihilo but that’s an artificial limit.
ID science is not limited to the study of biology. ID also looks at the origin of the universe. In that case, ID is making a claim about the origin of time, space and matter. It is not limited to reconfigurations of existing matter.
You’re trying to blame me for something here, but what you quoted did not answer the question. You avoided answering it when I asked about God also. You say that science can investigate anything that produces observable facts. You explain that by saying science can only make inferences from observable effects. As I said before, those two ideas contradict. In the first (bolded) you say that science can investigate “the producer” of the facts. You then shame me for asking why ID cannot investigate the designer by saying that ID can investigate the observable effects. As I said above, you corrected your first statement with the second – but you should not have blamed me for something that merely pointed to the conflict here.
I’m not trying to trick or trap you or win anything. You make a statement that contradicts everything I had known about ID, as well as what contradicts science itself (that science can investigate anything that produces observations). I’m not really worried about your personal views on these things, I was just interested in what seemed to be a confused approach to the issue.
As above, the designer we refer to in ID is the designer of the universe, not merely of biological information. We infer something about the generation of consciousness. In fact, the immaterial quality of consciousness is evidence in support of ID. We look for the origin of that which we can observe.
Mainstream evolution already assumes that consciousness is an evolutionary development. I have read ID researchers who have spoken about the irreducible quality of consciousness as evidence of design. Consciousness separates humans from non-human animals. Evolutionary theory offers an explanation, and ID (not your version of ID but others) offers an opposing one.
More on the cute polar bears:
Polar bear evolution is marked by rapid changes in gene copy number in response to dietary shift
David C. Rinker, Natalya K. Specian, Shu Zhao, and John G. Gibbons
PNAS July 2, 2019 116 (27) 13446-13451;
DOI: 10.1073/pnas.1901093116
ET:
“Again- polar bears do NOT have white fur. That is elementary school level knowledge in Massachusetts.”
OK, we have no polar bears here in Italy, so I cannot share your expertise! 🙂
So, I read a little about the issue.
Polar bear’s fur is hollow and lacks any pigment. Indeed, it is rather transparent. The white color is due to optical effects. And the skin is black, as you say.
Brown bears have fur that is solid and pigmented.
OK, what does that mean?
First of all, let’s say that the fact that the fur is not really white is not important in relation to the supposed selection of white in polar animals: indeed, polar bears appear white, so for the purpose of the supposed positive selection there is no real difference.
But that is not the real point, I would say.
The real point is: what is the mechanism of the divergence between brown bears and polar bears? The paper I mentioned puts the split at about 500,000 years ago, which is not much. Some give a few million years. Whatever it is, it is certainly a rather recent event in evolutionary history.
So, can the divergence be explained by neo-darwinian mechanisms, or is it the result of design? Or of some biological algorithm embedded in the common ancestor?
The paper I mentioned of course has a neo-darwinian answer, but that could hardly be different.
Behe thinks that this can be a case of darwinian “devolution”: differentiation through loss of function which gives some environmental advantage.
You are definitely in favor of design (or an adaptation algorithm, I am not sure).
Who is right?
I think this is a case that shows clearly how ID theory is necessary to give good answers to that kind of problems.
IOWs, we can answer only if we can evaluate the functional complexity of the divergence.
The problem is that I cannot find any appropriate data, in all the sources that have been mentioned or that I could find in my brief search, to do that. Why? Because nobody seems to know the molecular basis for the difference in fur structure and pigmentation. And it is not completely clear how functionally important the polar bear fur structure is, even if it is generally believed that it is under positive selection, and therefore somehow functional in the appropriate environment.
If you have some better data, please let me know.
Of course, fur is not the only difference, but for the moment let’s focus on that.
So, from an ID point of view, we have different possible scenarios, if we could measure the functional information behind the difference in fur structure and pigmentation.
To safely infer design according to the classic procedure, we need some function that implies more than 500 bits of functional information.
However, as we are dealing here with a population (bears) rather limited in number and slow-reproducing, and with a rather short time window, I would be more than happy with 150 bits of functional information to infer design in this case.
The genomic differences highlighted in the paper I quoted seem to be rather simple. Most of them can be interpreted as single aminoacid mutations with loss of function, perfectly in the range of neo-darwinism and of Behe’s model. But I have no idea if those simple genetic differences are enough to explain what we observe. The lack of pigmentation is probably easier to explain. For the hollow structure, I have no idea.
The problem is: we have to know the molecular basis, otherwise no computation of functional information can be made. Because, as we know, there are sometimes big morphological differences that have a very simple biological explanation, and vice versa. So again, I must ask: have you any data about the molecular foundation of the differences?
In the meantime, I would say that the scenarios are:
1) The differences can be explained by one or more independent mutations affecting functions already present. Or, at most, 2 or 3 coordinated mutations where each one affects the same function in a relevant way, so that NS could intervene at each step (IOWs a simple tweaking pathway of the loss of function, as we see for example in antibiotic resistance). These scenarios are in the range of what RV + NS could in principle do, maybe even in a population like bears. In this case, I would accept a neo-darwinian mechanism as a reasonable explanation, until different data are discovered.
2) The differences imply a gain in functional information of 150+ bits. We can safely infer design. Polar bears were designed, some time about 400000 years ago, or a little more.
3) The differences imply something between 12 bits (3 AAs) and 150 bits. In this case, it would be wise to remain cautious. It is not the best scenario to infer design, even if it is rather unlikely for a neo-darwinian mechanism in that kind of population. Maybe some simple active adaptation algorithm embedded in brown bears could be considered. But such an algorithm should be in some way detailed and shown to be there, not only imagined.
IMO, this is how ID theory works. Through facts, and objective measurements of functional information. There is no other way.
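The three scenarios above amount to a simple decision rule on the measured functional information. Here is a purely illustrative toy sketch, assuming the thresholds stated in the comment (roughly 4.3 bits for one fully constrained aminoacid, the 150-bit contextual threshold proposed for this small, slow-reproducing population, and the classic 500-bit universal threshold; the function name is mine, not from the discussion):

```python
import math

BITS_PER_AA = math.log2(20)  # ~4.32 bits for one fully specified aminoacid

def classify(functional_bits: float) -> str:
    """Toy classifier for the three scenarios discussed in the comment."""
    if functional_bits >= 150:
        # Scenario 2: enough functional information, given this population
        # and time window, to infer design (500+ bits would be the
        # classic universal threshold).
        return "design inference"
    if functional_bits > 3 * BITS_PER_AA:
        # Scenario 3: between a few coordinated AAs and 150 bits:
        # remain cautious, consider an embedded adaptation algorithm
        # only if it can actually be shown to exist.
        return "cautious / undecided"
    # Scenario 1: one or a few simple mutations tweaking existing
    # functions, in the range of RV + NS.
    return "neo-darwinian mechanism plausible"

print(classify(10))   # a couple of aminoacids' worth of information
print(classify(60))   # intermediate amount
print(classify(600))  # well beyond even the 500-bit universal threshold
```

The point of the sketch is only that the inference is threshold-driven: without a measurement of the functional information behind the fur differences, none of the three branches can be chosen.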
Just a final note about the “waiting for two mutations” paper. That is of course a very interesting article. But it is about two coordinated mutations needed to generate a new function, none of which individually confers any advantage. IOWs, this is more or less the scenario of chloroquine resistance, again linked to Behe.
I agree that such a scenario, even if possible, is extremely unlikely in a population like bears. But the simple fact is that almost all the variations considered by Behe in his reasonings about devolution are very simple. One mutation is often enough to lose a function. One frameshift mutation can inactivate a whole protein, losing maybe thousands of bits of functional information. And we can have a lot of such individual independent mutations in a population like bears in 400000 years.
So, unless we have better data on the functional information involved in the transition to polar bears, I suspend any judgement.
Jawa at #134:
“Is it possible that the polar bears were affected by drinking so much Coca-Cola in TV commercials?”
Absolutely!
Let’s wait: if I develop translucent fur in the next few years, that will be a strong argument in favour of your hypothesis! 🙂
1- Bears with actual white fur exist
2- There are grizzly (brown) bears with actual white fur. They are not polar bears.
3- I am looking at the number of specific mutations it would take to get a polar bear from a common ancestor with brown bears. That would tell me if blind and mindless processes are up to the task. The paper gpuccio provided gives us a hint and it already goes against blind and mindless processes.
Pw at #137:
“Why is there a drop in the black line in the last graphic in your OP? What does that mean? Loss of function?”
You mean the small drop in amphibians in the blue line (BCL10)?
Yes, that kind of pattern can be observed often enough, usually in one or two classes.
The strict meaning is that the best homology hit in that class was lower than in the older class.
Here the effect is small, but sometimes we can see a whole unexpected drop in one class of organisms, while the general pattern is completely consistent in all the other ones.
Technically, we are speaking of human conserved information. That’s what is measured here.
Probably, it is a loss of function in relation to that protein in that class. That is perfectly compatible with Behe’s concept of devolution. That form of the protein sometimes seems to be completely lacking in one class.
In some cases, it could also be a technical error in the databases, or in the BLAST algorithm. We can expect that; it happens. Some of the classes I have considered are more represented in the databases, some less. However, if one protein lacks any relevant homology in one class in my graphic, that means that none of the organisms in that class showed any relevant homology, because I always consider the best hit among all the proteins of all the organisms of that class included in the NCBI databases.
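The measurement described above (the best homology hit among all organisms of a taxonomic class, against the human protein) can be sketched as follows; the species names, class labels and bitscores here are invented for illustration, not taken from the actual NCBI data:

```python
# Hypothetical BLAST best-hit data: (organism, taxonomic class, bitscore
# of its best protein hit against the human query). The value plotted
# per class is the maximum over all organisms of that class, so a "drop"
# in one class means no organism of that class scored any higher.
hits = [
    ("Danio rerio",     "Bony fish",  305.0),
    ("Xenopus laevis",  "Amphibians", 198.5),
    ("Rana temporaria", "Amphibians", 210.0),
    ("Gallus gallus",   "Birds",      340.0),
]

def best_hit_per_class(rows):
    """Return, for each class, the (organism, bitscore) with the highest score."""
    best = {}
    for organism, cls, score in rows:
        if cls not in best or score > best[cls][1]:
            best[cls] = (organism, score)
    return best

for cls, (organism, score) in best_hit_per_class(hits).items():
    print(f"{cls}: {score} ({organism})")
```

In this toy data the amphibian maximum (210.0) sits below the fish maximum (305.0), which is the kind of per-class dip the comment describes: human conserved information, measured as the best hit per class.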
ET at #142:
Thank you for the further clarifications about bears. You are really an expert! 🙂
However, it is not really the number of specific mutations that counts. It is the number of coordinated mutations necessary to get a function, none of which has any functional effect alone. There is a big difference. I have tried to explain that at #140.
Thank you, gpuccio. We have a little impasse as I think it is the number of specific mutations and the functions are all the physiological changes afforded by them.
In his book “Human Errors”, Nathan Lents tells us that it is highly unlikely that one locus will receive another mutation after already getting mutated. And yet it has the same probability for change as any other site. So it looks like evolutionists are talking about the probability of a specific mutation happening regardless of function.
As for bears- living in Massachusetts I run into black bears all of the time. They come up on my deck at night. I have photos of them in my yard. And being a dog-person I have a keen interest. That’s all- I think they are really cool animals.
ET:
Thanks to you! 🙂
I suspected you had some special connection with bears! I am more a cat guy, but I do understand love and interest for all animals. 🙂
ET,
The Massachusetts bears may be cool animals, but didn’t get hired for Coca-Cola TV ads like their polar cousins. 🙂
Silver Asiatic at #138:
Oh, good heavens! That’s what happens when someone (you) discusses not in order to understand and be understood, but just to generate confusion. You are of course equivocating on the word “investigate”.
Maybe the second form is more precise, but the meaning is the same.
However, let’s clarify, for those who can be confused by your playing with words.
Science always starts from facts: what can be observed.
But science tries to explain facts building theories (maps of reality). Those theories need not include only what is observable. They just need to explain observed facts. For example, most scientific theories are based on mathematics, which is not something observable.
Another example. Most theories in empirical science are about possible relationships of cause and effect. But the relationship of cause and effect is not something that can be observed.
My error was probably to use the word “investigate”, which was ambiguous enough to allow you to play with it.
OK, let’s say that science can build hypotheses only to explain observed facts, but of course those hypotheses, those maps of reality, can include any cognitive content, if it is appropriate to the explanation.
The word “evaluate” can refer of course both to the gathering of facts and to the building of theories.
My original statement was:
” Indeed, ID is not evaluating anything about the designer, except for what can be inferred by the observable effects of his interventions.”
Wasn’t it clear enough for you?
The problem here is not the meaning of the word design, but the meaning of the word creation. The word creation here, in this blog and I would say in the whole debate about ID and more, is used in the sense of “creation ex nihilo”, something that only God can do. Why do you think that our adversaries (maybe you too) call us “creationists” and not “designists”?
It’s strange that someone like you, who has been coming here for some time, is not aware of that, and suddenly interprets “creation” in this debate as a statement about a movie or a book.
However, the problem is not the meaning of words. For that, it’s enough to clarify what we mean. Clearly, and without word plays.
More in next post.
GP @141:
But even in the case where you would develop translucent fur, I hope you’ll keep writing OPs for us here, right?
🙂
Gpuccio and Silver Asiatic,
A few of my thoughts about the relationship between science, philosophy, theology and religion.
Creationism is based on a religious text– the Jewish-Christian scriptures. ID, on the other hand, is at the very least a philosophical inference from the study of nature itself.
Even materialists recognize the possibility that nature is designed. Richard Dawkins, for example, has argued that “Biology is the study of complicated things that give the appearance of having been designed for a purpose.”
He then goes on to argue that it is not designed.
So what is Dawkins’ argument? Let’s try out his quote as the main premise in a basic logical argument.
Premise 1: “Biology is the study of complicated things that give the appearance of having been designed for a purpose.”
Premise 2: Dawkins (a trained zoologist) believes that “design” is only an appearance.
Conclusion: Therefore, nothing we study in the biosphere is designed.
The conclusion is based on what? Are Dawkins’ beliefs and opinions self-evidently true? Is the science settled as he suggests? If the answer to those two questions is no (Dawkins’ arguments, BTW, are by no means conclusive), then what is the reason for not looking at living systems that have “the appearance of having been designed for a purpose”? Couldn’t they really have been designed for a purpose? That is a basic justification for ID. It begins from a philosophically neutral position (that some things could really be designed), whereas a committed Darwinian like Dawkins, along with other “committed” materialists, begins with the logically fallacious assumption that design is impossible.
Silver Asiatic at #138:
That’s correct. The cosmological argument, especially in the form of fine tuning, is certainly part of the ID debate.
But here I have never discussed the cosmological argument in detail. I think it is a very good argument, but many times I have said that it is different from the biological argument, because it has, inevitably, a more philosophical aspect and implication.
I have always discussed the biological argument of ID here, and it has also been the main object of discussion, I believe, since the ID movement started. Dembski, Behe, Meyer, Abel, Berlinski and others usually refer mainly to the biological argument. So I apologize if that created some confusion: all that I say about ID refers to the biological argument. And biological design always happens in space and time.
As I have explained, there is no conflict at all. Of course the word “investigate” refers both to the analysis of facts and to the building of hypotheses. Every action of the mind in relation to science is an “investigation” and an “evaluation”, IOWs a cognitive activity in search of some truth about reality.
I think I have been clear enough at #128:
“The correct answer is always the same: science can, and must, investigate, everything that can be observed in reality. And, from observed things, infer ever better models of reality. Given that very broad definition, there are no other limitations.”
That should be clear, even to you. There are no limitations. If a concept of god were necessary to build a better scientific model of reality that explains observed things, there is no problem: god can be included in that model.
But I refuse, and always will refuse, in a scientific discussion, to start from some philosophical or religious idea of God and allow, without any conscious resistance on my part, that idea to influence my scientific reasoning. Science should work, or try to work, independently of any pre-conceived worldview. If scientific reasoning leads to the inclusion, or the exclusion, of God in a good map of reality, scientific reasoning should follow that line of thought and impartially test it. The opposite is not good, IMO.
I hope that’s clear enough.
Neither am I. I am trying to clarify. When I don’t understand well what my interlocutor is saying, I ask. When they ask me, I answer. That’s the way.
It’s strange that my statements contradict everything you have known of ID. My application of the ID procedure for design inference is very standard, maybe with some more explicit definition. About God, an issue that I never discuss here for the reasons I have given, it is rather clear that all the official ID movement unanimously states that the design inference from biology tells nothing about God. Indeed, ID defenders are usually reluctant to tell anything about the biological designer.
I want to clarify well my position about that, even if I have been explicit many times here.
1) I absolutely agree with the idea that there is no need to say anything about the designer to make a valid design inference. This is a pillar of ID thought, and it is perfectly correct. I often say that the designer can only be described as some conscious, intelligent and purposeful agent. But that is implicit in the definition of design; it is not in any way something we infer about any specific designer.
2) That said, I have always been available here, maybe more than other ID defenders, to make reasonable hypotheses about the biological designer in the measure that those hypotheses can be reasonably derived from known facts. That’s what I have done at #100 and #101, trying to answer a number of questions that you had asked. I know very well that trying to reason scientifically about those issues is always a sensitive matter, both for those in my field and for those in the other. Or maybe just in-between. But I do believe that science must pursue all possible avenues of thought, provided that we always start from observable facts and are honest in building our theories.
Knowing that, I have also added, at the end of post #101:
“That’s the best I can do to answer your questions. Just a note: my answers here are highly tentative, but they are strictly empirical. They do not derive from any theological agenda. These are the ideas that, in my opinion, are more reasonable to explain the facts as we know them. Right or wrong that they may be, this is the spirit in which I express them.”
I can only repeat my statement: That’s the best I can do to answer your questions.
More in next post.
GP
I wasn’t “playing” with it. I was helping you clarify your statement. I’m not trying to say gotcha. I sincerely thought you believed that science could investigate (directly evaluate, measure, analyze) anything (like God) that produces observable facts.
I kept in mind that you said that science is not limited by matter. I’d conclude from that a belief that science can investigate (evaluate, analyze, measure, observe, describe) immaterial entities. You cited a philosophy of science to support that view. How am I supposed to know what you are thinking of? I asked you if science could “investigate” God, but you didn’t want to answer that.
Again, normally IDists would not say that science can Directly investigate, evaluate, analyze, measure or describe immaterial entities. You seem to disagree with that.
Evaluation is not the gathering of facts. Collecting facts comes from observation, measurement, or investigation. Evaluation can create some facts (such as logical conclusions) but in science it all must start with observation. After that, we can evaluate. To infer is to draw a logical conclusion from observations and evaluation.
As I have heard other ID theorists state, ID cannot observe anything about an immaterial designer or designers. I think you disagree with this. The only thing ID attempts to do is show that there is evidence of Intelligence at work. The effects that we observe in nature could have been produced by millions of designers, each one of which has less intelligence than a human being, but collectively create design in nature. If you are speaking about a designer that exists outside of space and time, then we do not have any experience with that.
We can observe various effects, but not the entity itself.
It seemed that you disagree with this and believe instead that science can directly observe an immaterial designer (or any immaterial entity) that produces effects in reality.
Silver Asiatic at #138:
Let’s see your last statements.
That’s not correct. As said, the inference of a designer for the universe, and the inference of a biological designer are both part of ID, but they are different and use completely different observed facts. Therefore, even if both are correct (which I do believe), there is no need that the designer of the universe is the same designer as the designer of biological information. I don’t follow your logic.
??? Again, I can’t follow you. Who is “we”? I am not aware that ID, especially in its biological form, but probably also in the cosmological form, is inferring anything about “the generation of consciousness”. Why do you say that?
No. Big epistemological errors here. Consciousness is a fact, because we can directly observe it. Being a fact, anyone can use its existence as evidence for what one likes.
But “the immaterial quality of consciousness” is a theory, not a fact. It’s a theory that I accept in my worldview and philosophy, but I would not say that we have incontrovertible scientific evidence for it. Maybe strong scientific evidence, at best. But the important point is: a theory is not a fact. It is never evidence of anything. A theory, however good, needs the support of facts as evidence. It is not evidence for other theories. At most, it is more or less compatible with them.
Correct, and as consciousness can be observed, it is perfectly reasonable to look for some scientific theory that explains its origin. But that theory is not ID. As I have said, ID is not a theory about the origin of consciousness. It is a theory that says that conscious agents are the origin of designed objects. I believe that you can see the difference.
Mainstream evolution assumes a lot of things. Most of them are wrong. And so?
Who? Where? As far as I know, complex specified information (or complex functional information) in objects has always been considered the mark of design. Dembski, Behe, Abel, Meyer, Berlinski, and so on.
??? Why do you say that? I believe that a cat or a dog are conscious. And I think that most ID thinkers would agree.
Ask ET about bears! 🙂
An explanation for what? For the origin of consciousness? But what ID sources have you been perusing?
One of the most famous ID icons is the bacterial flagellum, since Behe used it to explain the concept of irreducible complexity (a concept linked to functional complexity). Is that an explanation of human consciousness? I can’t see how.
Meyer has written a whole book about OOL and a whole book about the Cambrian explosion. Are those theories about the origin of human consciousness?
Of course ID thinkers certainly believe that some special human functions, like reason, are linked to the specific design of humans. But it is equally true that the special functions of bacteria (like the CRISPR system) are certainly linked to the specific design of bacteria. The design inference is perfectly valid in both cases.
But consciousness is not “a function”. It is much more. It is a component of reality that we cannot in any way explain by objective configurations of external things. ID is not a theory of consciousness.
Jawa at #149:
Maybe translucent OPs. 🙂
JAD
It’s a complicated issue and I can see where you are going with this. At the same time, I think many prominent IDists will say that ID is not a philosophical inference. It’s a scientific inference from what science already knows about the power of intelligence. So, something is observed that appears to be the product of intelligent design, then science evaluates the probability that it came from natural causes. If that probability is too remote, intelligent design becomes the best answer since we know that intelligence can design things like that which has been observed.
On the other hand, with your view, there are different philosophical starting points for both ID and Dawkins. So, depending on what we mean it may be correct to say that ID is really a philosophical inference. It’s a different philosophy of science than that of Dawkins. I think Dembski and Meyer would disagree with this. They have attempted to show that ID uses exactly the same science as Dawkins does.
John_a_designer at #150:
I agree with what you say. I just want to clarify that:
1) IMO Dawkins’ biological arguments are very bad, but at least they are a good incarnation of true neo-darwinism, therefore easy to confute. In that sense, he is better than many post-post-neo-darwinists, whose thoughts are so ethereal that you cannot even catch them! 🙂
2) On the contrary, Dawkins’ philosophical arguments are arrogant, superficial and ignorant. Unbearable. He should stick to being a bad thinker about biology.
3) To be fair to Dawkins, I don’t think that he assumes that “design is impossible”. On the contrary, he is one of the few who admit that design could be a scientific explanation. He just does not accept it as a valid scientific explanation. That is epistemologically correct, even if of course completely wrong in the essence.
GP
Your use of multiple question-marks and the personal digs (“even you can understand”) indicate to me that this conversation is getting too heated. You apologized previously, so thank you. I’ll also apologize for the tone of my remarks.
You asked about ID and consciousness:
Michael Egnor writes about consciousness as evidence supporting ID. I think here, BornAgain77 often posts resources that support this concept. I understand that your interest is in biological ID, and therefore limited to biological designer or designers.
You answered my questions adequately. Again, I appreciate your comments and I apologize for any misunderstandings that may have arisen in this conversation.
Richard Dawkins’ books should be in the “cheap philosophy” section of bookstores. But instead they have them in the Science section.
Especially after Professor Denis Noble has discredited them. Bizarre.
Silver Asiatic at #152:
I wasn’t “playing” with it. I was helping you clarify your statement.
Well, I hope I have clarified it. Thank you for the help.
Well, it seems that I have not clarified enough. Please, read again what I have written. Here are some more clues:
1) “investigate, evaluate, analyze, measure or describe” are probably too many different words. I quote myself:
“But science tries to explain facts building theories (maps of reality). Those theories need not include only what is observable. They just need to explain observed facts. For example, most scientific theories are based on mathematics, which is not something observable.
Another example. Most theories in empirical science are about possible relationships of cause and effect. But the relationship of cause and effect is not something that can be observed.
My error was probably to use the word “investigate”, which was ambiguous enough to allow you to play with it.”
So, again. Science starts with facts: what can be observed. “Measures” are only made on what can be observed. I suppose that all your fancy words can apply to our interaction with facts:
– When we gather facts and observe their properties, it can be said, I suppose, that we are “investigating” facts, and “analyzing” them. And “evaluating” them, or “describing” them. And of course taking measures is part of observing facts.
– When we build theories to explain observed facts, not all those terms apply. For example, let’s say that we hypothesize a cause and effect relationship. That is part of our theory, but we don’t take measures of the cause-effect relationship. At most, we infer it from the measures we have taken of facts. But in a wide sense building a theory can be considered an evaluation, certainly it is a form of investigation.
I have said clearly that we can use any possible concept in our theories, provided that the purpose is to explain facts. We use the cause-effect relationship, we use complex numbers in quantum mechanics, we can in principle use the concept of God, if useful. Or of immaterial entities. That does not mean that we can measure those things, or have further information about them except for what can be reasonably inferred from facts.
That should be clear, but I don’t know why I will not be surprised if, again, you don’t understand.
As you like. As said, it’s not a problem about words. If you want to limit “evaluation” in some way that is not very clear to me, be my guest. I will simply avoid the word with you.
But please, note that logical conclusions are not facts. If you insist on that kind of epistemology, we cannot really communicate.
No. Why should I? Of course if a thing is immaterial it cannot be “observed”. The only exception is our personal consciousness, that each of us observes directly, intuitively.
I have only said that we can use the concept of immaterial entoities in our theories, and that we can make inferences about the designer from observed facts, be he material or immaterial.
Of intelligent designers.
I absolutely disagree. ATP synthase could never have been designed by a crowd of stupid designers. It’s the first time I hear such a silly idea.
I have never said that. I have said many times that the designer acts in space and time. Where he exists, I really don’t know. Have you some information about that?
That’s right. Like dark energy or dark matter. As for that, we cannot even observe conscious representations in anyone else except ourselves, but still we very much base our science and map of reality on their effects and the inference that they exist.
This is only your unwarranted misinterpretation. I have said many times that science can directly observe some effects and infer a designer, maybe immaterial. It’s exactly the other way round.
Silver Asiatic at #157:
OK, I apologize too. Multiple question marks are not intended as an offense, only as an expression of true amazement. Some other statements may have been a little more “heated”, as you say. Let’s try to be more detached. 🙂
I have just finished commenting on your statements. Please, forgive any possible question marks or tones. My purpose is always, however, to clarify.
I am afraid that Egnor and BA are not exactly my main reference for ID theory. I always quote my main references:
Dembski (with whom, however, I have sometimes a few problems, but whose genius and importance for ID theory cannot be overestimated)
Behe, with whom I agree (almost) always.
Abel, who has given a few precious intuitions, at least to me.
Berlinski, who has entertained me a lot with creative and funny thoughts.
Meyer, who has done very good work about OOL and the Cambrian explosion.
And, of course, others. Including many friends here. Let me quote at least KF and UB for the many precious contributions, but of course there are a lot more, and I hope nobody feels excluded: it would be a big work to give a coherent list.
SA,
Science itself rests on a number of empirically unprovable or metaphysical (philosophical) assumptions. For example:
That we exist in a real spatial-temporal world – that the world (the cosmos) is not an illusion and we are not “brains in a vat” in some kind of Matrix-like virtual reality.
That the laws of nature are universal throughout time and space.
Or that there are really causal connections between things and things, people and things. David Hume famously argued that that wasn’t self-evidently true. Indeed, in some cases it isn’t. Sometimes there is correlation without causation, or “just coincidence.”
Again, notice the logic Dawkins wants us to accept. He wants us to implicitly accept his premise that living things only have the appearance of being designed. But how do we know that premise is true? Is it self-evidently true? I think not. Why can’t it be true that living things appear to be designed for a purpose because they really have been designed for a purpose? Is that logically impossible? Metaphysically impossible? Scientifically impossible? If one cannot answer those questions, then design cannot be eliminated from consideration or the discussion. Therefore, it is a legitimate inference from the empirical (scientific) evidence.
I have said this here before: the burden of proof is on those who believe that some mindless, purposeless process can “create” a planned and purposeful (teleological) self-replicating system capable of evolving further through purposeless, mindless processes (at least until it “creates” something purposeful, because, according to Dawkins, living things appear to be purposeful). Frankly, this is something our regular interlocutors consistently and persistently fail to do.
As a theist I do not claim I can prove (at least in an absolute sense) that my world view is true. Can naturalists/materialists prove that their world view is true? Personally I believe that all worldviews rest on unprovable assumptions. No one can prove that their world view is true. Is that true of naturalism/materialism? If it can be proved, someone with that world view needs to step forward and provide the proof.
As to whether or not ID is science: I am skeptical of the claim that Darwinism in the macro-evolutionary sense is science, or that SETI is science (what empirical evidence is there that ETIs exist?). How does NS + RV cause macro-evolutionary change? Science needs to answer the question of how. Just saying “oh, somehow it could” with an airy wave of the hand is not a sufficient explanation. But that applies to people on both sides of the debate.
SA: I have read ID researchers who have spoken about the irreducible quality of consciousness as evidence of design.
GP: Who? Where? As far as I know, complex specified information (or complex functional information) in objects has always been considered the mark of design. Dembski, Behe, Abel, Meyer, Berlinski, and so on.
“CSI is a reliable indicator of design” — William Dembski
“it is CSI on which David Chalmers hopes to base a comprehensive theory of human consciousness.” — William Dembski
https://www.asa3.org/ASA/PSCF/1997/PSCF9-97Dembski.html
JAD
Agreed. Science does not stand alone as a self-evident process. It is dependent upon philosophical assumptions. Dawkins has his own assumptions. If he said, for example, that science can only accept material causes for all of reality, that is just his philosophical view. If ID says that science can accept immaterial causes, then it is different science.
A person might also say that science must accept that God exists. That’s a philosophical starting point.
In the end, people who do science are carrying out a philosophical project.
If a person is willing to do enough philosophy to carry out the project of science, I believe they have the responsibility to carry the philosophy farther than science. The philosophical questions go beyond simply what causes we can accept.
But people like Dawkins and others do not accept this. They think that science simply has one set of rules, and they claim to be the ones following the true scientific rules, as if those rules always existed.
Some IDists have tried to convince the world that ID is just following the normal, accepted rules of science and that people do not need to accept a new kind of science in order to accept ID conclusions.
Others will say that mainstream science itself is incorrect and that people need a different kind of science in order to understand ID.
I think ID will even work with Dawkins’ version of science. He may say that “only material causes” can be considered. So, we observe intelligence and so some material cause created the intelligent output? The question for Dawkins would be what material cause creates intelligent outputs?
Silver Asiatic:
Theory of consciousness is a fascinating issue. A philosophical issue which, like all philosophical issues, can certainly use some scientific findings. I have my ideas about theory of consciousness, and sometimes I have discussed some of them here. But ID is not a theory of consciousness.
But it is true that ID is the first scientific way to detect something that only consciousness can do: generate complex functional information. In this sense, the results of ID are certainly important to any theory of consciousness. The simple fact that there is something that only consciousness can do, and that there is a scientific way to detect it, is certainly important. It also tells us that consciousness can do things that no non-conscious algorithm, however intelligent or complex, can do.
I usually say that some properties of conscious experiences, like the experience of understanding meaning and of feeling purposes, are the best rationale to explain why conscious agents can generate complex functional information while non conscious systems cannot. But again, ID is not a theory of consciousness.
All spheres of human cognition are interrelated: religion, philosophy, science, art, everything. But each of those things has its own specificity.
ID theory will probably be, in the future, part of a theory of consciousness, if and when we can develop a scientific approach to it. But at present it is only a theory about how to detect a specific product of consciousness, complex functional information, in material objects.
Jeffrey Schwartz and Mario Beauregard are neuroscientists who have dealt brilliantly with the problem of consciousness. The Spiritual Brain is a very good book. Chalmers is a philosopher who has given us a precious intuition with his concept of the hard problem of consciousness.
None of those approaches, however, is even near to understanding anything about the “origin” of consciousness. Least of all ID.
I am absolutely certain that consciousness is in essence immaterial. But that is my philosophical conviction. The best scientific evidence that I can imagine for that is NDEs, and they are not related to ID theory.
Gp @ #156,
Indeed, here is another stunning admission by Richard Dawkins:
https://www.youtube.com/watch?v=BoncJBrrdQ8
Dawkins concedes that (because nobody knows) first life on earth could have been intelligently designed– as long as it was an ET intelligence not an eternally existing transcendent Mind (God.)
Of course other atheists have admitted the same thing. See the following article which refers to a paper written by Francis Crick and British chemist Leslie Orgel.
https://blogs.scientificamerican.com/guest-blog/the-origins-of-directed-panspermia/
I believe it was Crick and Orgel who coined the term directed panspermia.
To be fair, I think Dawkins later tried to walk back his position. Maybe Crick and Orgel did as well. But the point remains: until you prove how life first originated by mindless, purposeless "natural causes," intelligent design is a logical possibility, indeed a very viable possibility.
Ironically, in the Ben Stein interview Dawkins said that if life were intelligently designed (by space aliens) the scientific research may be able to discover their signature. Didn’t someone write a book about the origin of life with the word signature in the title? Who was that? I wonder if he picked up the idea from Dawkins. Does anyone know?
Bonus question: Ben Stein was made famous by one word. Does anyone know what that one word was? Anyone?
GP
What I have been doing is questioning what ID can or cannot do and even questioning scientific assumptions along the lines of the ideas you’ve posted. You have explained your views on design and how consciousness is involved and even on whether the actions of conscious mind can be considered “creative acts”, as well as how we evaluate immaterial entities.
I have always argued that ID is a scientific project but I could reconsider that. ID does not need to be scientific to have value. I’ll respond to JAD in the next post with some thoughts that I question myself on and just respond to his feedback, but your definitions of science and ID will also be included in my considerations.
JAD
The kid in the movie – can’t remember his name. Travis?
It’s a great point.
I have argued for many years that ID is science. By that, I mean “the same science as Dawkins uses”. It is my belief that 90% of the scientists agree with Dawkins’ view of science – it’s the mainstream view.
I also believed that ID was a subterfuge – an apologetic for the existence of God. I don’t see anything wrong with that.
ID was going to use the exact same science that Dawkins uses, and then show that there is evidence of intelligent design. The method for doing that is to show that proposed natural mechanisms (RM + NS) cannot produce the observed effects. Intelligence can produce them, so Intelligence is the best, most probable inference.
However, what I learned from many IDists over the years (GP pointed it out to me just previously) is that to accept ID, one needs a different science than what Dawkins uses. I find that to be a big problem. If, in order to accept ID, a person first needs “a different kind of science” than the normal, mainstream science of Dawkins, then there’s no reason to start talking about ID first. Instead, one should start to convince everyone that a different kind of science should be used throughout the world.
Because for me, Dawkins’ version of science is fine. He just does what mainstream science does. They look at observations, collect data, propose causes. The first problem is that Dawkins’ mechanisms cannot produce the observed effects. So, even on his own terms, the science fails.
However, when Dawkins says that science can only accept material causes, that doesn’t make a lot of sense – as you have pointed out. Additionally, he’s talking about a philosophical view.
In that case, it is one philosophy versus another. The philosophy of ID vs Dawkins’ philosophical view. We can’t speak about science at that point.
So, I hate to admit it because so many of my opponents over the years said this and I disagreed, but I do now accept that ID has always been a game to introduce God into the closed world of materialistic science. The difference in my view now is that I don’t see anything wrong with that game. Why not try to put God in science? What’s wrong with that? If the only way to do this is to trick materialist scientists using their own words, concepts and reasoning, again – what’s wrong with that? Dishonest? I don’t think so. The motive for using a certain methodology (ID in this case) has no bearing on what the methodology shows. In the same way, it doesn’t matter what belief an evolutionist has, they have to show that the observations can be explained from their theory.
If, however, ID requires an entirely different science and philosophical view (that is possible also), then I don’t really see much need for the discussion on whether ID is a science or not. Why not just start with the idea that God exists, and then use ID observations to support that view? I don’t see why that is a problem. If IDists are saying “we don’t accept mainstream science”, then why appeal to mainstream science for credibility? Just create your own ID-science. But for me, I’m a religious believer with philosophical reasons for believing in God (as the best inference from facts and far more rational than atheism) so instead of trying to prove to everyone that we need a new science, I’d just start with God and then do science from that basis.
That’s the way it would be if ID is not science.
If, however, ID is science, for me that means “ID is the same science that Dawkins and all mainstream scientists use”. The inferences from ID can be shown using exactly the same data and observations that Dawkins uses.
For me, that would give ID a lot more value.
SA,
[The following is something I posted on UD before which defines my position about I.D. Please note, however, I see it nothing more than just a personal opinion and I am not stating it in an attempt to change anyone’s mind. Indeed it remains tentative and subject to change but over the years I have seen no reason to change it.]
Even though I think I.D. provokes some interesting questions I am actually not an I.D. proponent in the same sense that several other commenters here are. I don’t think I.D. is “science” (the empirical study of the natural world) any more than naturalism/materialism is science. So questions from materialists, like “who designed the designer,” are not scientific questions; they are philosophical and/or theological questions. However, many of the questions have philosophical/theological answers. For example, the theist would answer the question, “who designed the designer,” by arguing that the designer (God) always existed. The materialist can’t honestly reject that explanation because historically materialism has believed that the universe has always existed. Presently they are trying to shoehorn the multiverse into the discussion to get around the problem of the Big-Bang. Of course, this is a problem because there is absolutely no scientific evidence for the existence of a multiverse. In other words, it is just an arbitrary ad hoc explanation used in an attempt to try to wiggle out of a legitimate philosophical question.
However, this is not to say that science can’t provoke some important philosophical and theological questions– questions which at present can’t be answered scientifically.
For example:
Scientifically it appears the universe is about 13.8 billion years old. Who or what caused the universe to come into existence? If it was "a what"– just natural causes– how do we know that?
Why does the universe appear to exhibit teleology, or design and purpose? In other words, what is the explanation for the universe's so-called fine-tuning?
How did chemistry create the code in DNA or RNA?
How does mindless matter "create" consciousness and mind? If consciousness and mind are "just an appearance," how do we know that?
These are questions that arise out of science which are philosophical and/or theological questions. Is it possible that they could have scientific explanations? Possibly. But even if someday some of them could be answered scientifically that doesn’t make them at present illegitimate philosophical/theological questions, because we don’t know if they have, or ever could have, scientific answers.
As far as philosophical naturalism goes, here is a summary of reasons why I do not think philosophical naturalism is sufficient as a world view.
Naturalism (or materialism) cannot provide:
Of course the atheistic naturalist will dismiss numbers 6 or 7 as illusions and make up a just-so story to explain them away. But how do they know they are illusions? The truth is they really don't know, and they certainly cannot prove that they are. They just believe. How ironic: to be an atheist/naturalist/materialist you must believe a lot (well, actually everything) on the basis of faith.
JAD @169:
“As far as philosophical naturalism goes, here is a summary of reasons why I do not think philosophical naturalism is insufficient as a world view”
“do not think” “is insufficient”
Is that the combination you wanted to express?
I’m not sure if I understood it.
John_a_designer at #166:
I agree with what you say about Dawkins. He is probably honest enough, even if completely wrong, but he is really obsessed with his antireligious crusade.
The book you mention is “Signature in the Cell” by Stephen Meyer.
John_a_designer at #169:
I agree with almost everything that you say, except of course that ID is not science. For me, it is science without any doubt. It has, of course, important philosophical implications, like many other important scientific theories (Big Bang, Quantum mechanics, Relativity, Dark energy, and so on).
Peter A
Final edit:
“As far as philosophical naturalism goes, here is a summary of reasons why I do not think philosophical naturalism is sufficient as a world view.”
That is what I meant to say and luckily corrected before the edit function timed out. Hopefully that makes sense now.
Just to clarify, it’s not my view that ID doesn’t raise some very legitimate scientific questions. Behe’s discovery of irreducible complexity (IC) raises some important questions.
For example, in his book Darwin’s Black Box, Michael Behe asks,
Basically Behe is asking: if biochemical complexity (irreducible complexity) evolved by some natural process x, how did it evolve? That is a perfectly legitimate scientific question. Notice that even though in DBB Behe was criticizing Neo-Darwinism, he is not ruling out a priori that some other mindless natural evolutionary process, "x", might be able to explain IC.
Behe is simply claiming that at present there is no known natural process that can explain how irreducibly complex mechanisms and processes originated. If he and other ID'ists are categorically wrong, then our critics need to provide the step-by-step empirical explanation of how they originated, not just speculation and wishful thinking. Unfortunately, our regular interlocutors seem to be able to provide only the latter, not the former.
Behe made another point which is worth keeping in mind.
In other words, a strongly held metaphysical belief is not a scientific explanation.
So why does Neo-Darwinism persist? I believe it is because of its a priori ideological or philosophical fit with naturalistic or materialistic worldviews. Human beings are hard-wired to believe in something, anything, to explain or make some sense of our existence. Unfortunately, we also have a strong tendency to believe a lot of untrue things.
On the other hand, if IC is the result of design, ID has to answer the question of how the design was instantiated. If ID wants to have a place at the table, it has to find a way to answer questions like that. Once again, one of the primary things science is about is answering the "how" questions.
Or, as another example, ID'ists argue that the so-called Cambrian explosion can be better explained by an infusion of design. Okay, that is possible. (Of course, I wholeheartedly agree, because I am very sympathetic to the concept of ID.) But how was the design infused to cause a sudden diversification of body plans? Did the "designer" tinker with the genomes of simpler life forms, or were they specially created, as some creationists would argue (the so-called interventionist view)? Or were the new body plans somehow pre-programmed into their progenitors' genomes (so-called front-loading)? How do you begin to answer such questions about events in the distant past? At least the Neo-Darwinists have the pretense of an explanation. Can we get them to abandon their theory by declaring it impossible? Isn't it at least possible, as Behe acknowledges, that there could be some other unknown natural explanation "x"?
Is saying something is metaphysically possible a scientific explanation? The goal of science is to find some kind of provisional proof or compelling evidence. Why, for example, was the Large Hadron Collider built at a cost of billions of dollars (how much was it in euros?)? Obviously, because in science mere possibility is not the end of the line. The ultimate quest of science is truth and knowledge. Of course, we need to concede that science will never be able to explain everything.
JAD @173,
Yes, that makes much sense.
OLV @139:
The paper you cited doesn’t seem to support Behe’s polar bear argument.
A few years ago here at UD one of our regular interlocutors who was arguing with me about the ID explanation for origin of life pointed out:
I responded,
“We have absolutely no evidence as to how first self-replicating living cell originated abiogenetically (from non-life). So following your arbitrarily made-up standard that’s not a logical possibility, so we shouldn’t even consider it… As the saying goes, ‘sauce for the goose is sauce for the gander.’”
When you argue that life originated by some "mindless natural process," that is not an explanation of how. Life is not presently coming into existence abiogenetically, so if such a process existed in the past, it no longer exists in the present. Therefore you are committing the same error which you accuse ID'ists of committing. That's a double standard, is it not?
This kind of reasoning on the part of materialists also reveals that they don’t really have any strong arguments based on reason, logic and the evidence. If they do, why are they holding back?
John_a_designer at #177:
Exactly!
That’s why I say that ID is fully scientific.
Science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview.
That reality must behave according to our religious convictions is an a priori worldview. That's why, as I have explained, I strive a lot to avoid, as much as humanly possible, any influence of my philosophy or theology on my scientific reasoning.
That reality must behave according to our atheistic or materialistic convictions is equally an a priori worldview. That's why our kind interlocutors should strive a lot to avoid, as much as humanly possible, any influence of their philosophy or atheology on their scientific reasoning.
The simple fact is that ID theory, reasoning from facts in a perfectly scientific way, infers a process of design for the origin of biological objects.
Now, our interlocutors can debate if our arguments are right or wrong from a scientific point of view. That’s part of the scientific debate.
But the simple idea that we have no other evidence of the existence of a conscious agent, for example, at the time of OOL is not enough. Because we have no evidence of the contrary, either.
The simple idea that non-physical conscious agents cannot exist is not enough, because it is only a specific philosophical conviction. Of course non-physical conscious agents can exist. We don't even know what consciousness is, least of all how it works and what is necessary for its existence.
My point is: the design inference is real and perfectly scientific. All arguments about things that we don't know are no reason to ignore that scientific inference. They are certainly valid reasons to pursue further scientific investigation to increase our knowledge about those things. That's perfectly legitimate.
For example, I am convinced that our rapidly growing understanding of biology will certainly help to understand how the design was implemented at various times.
And, even if ID is not a theory of consciousness, there is no doubt that future theories of consciousness can integrate ID and its results. For example, much can be done to understand better if a quantum interface between conscious representations and physical events is working in us humans, as many have proposed and as I believe. That same model could be applied to biological design in natural history.
And of course, philosophy, physics, biophysics and what else can certainly contribute to a better understanding of consciousness, and of its role in reality.
A better study of common events like NDEs can certainly contribute to understand what consciousness is.
I would like to repeat here a statement that I made in the discussion with Silver Asiatic, which sums up my position about science well:
Science can, and must, investigate everything that can be observed in reality. And, from observed things, infer ever better models of reality. Given that very broad definition, there are no other limitations.
Interesting conversation here,
‘sigma factor “is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB”.’
“I have blasted sigma 70 from E. coli with human TFIIB and found no detectable homology (E value 1.4). So, there seems to be little conservation here.”
Is there an explanation for this disagreement?
Sven Mil:
“Is there an explanation for this disagreement?”
Thank you for the comment and welcome to the discussion.
Thank you also for addressing an interesting and specific technical point.
It is not really a disagreement, probably only a different perspective.
Researchers interested in possible homologies (IOWs, in finding orthologs or paralogs for some gene) often use very sensitive algorithms. They find homologies that are often very weak, or maybe not real. Or they may look at structural homologies, which are not evident at the sequence level.
My point of view is different. In order to debate ID in biology, I am only interested in definite homologies, possibly very high homologies conserved for a long evolutionary time. My aim is specificity, not sensitivity. Moreover, as I accept CD (as discussed in detail in this thread), I have no interest in denying possible weak homologies. I just ignore them, because they are not relevant to my argument.
That's why I always measure homology differences, not absolute homologies. I want to find information jumps at definite evolutionary times.
Another possibility for the different result is that I have not blasted the right protein form. For brevity (it was not really an important aspect of my discussion) I have not blasted all possible forms of sigma factors against eukaryotic factor TFIIB. I have just blasted sigma 70 from E. coli. Maybe a more complete search could detect some higher homology.
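The "specificity, not sensitivity" filtering described above can be sketched in a few lines of Python. This is only a conceptual illustration: the threshold values and the example hit records are hypothetical assumptions, not the actual settings or results of any particular BLAST search (only the sigma 70 vs TFIIB E value of 1.4 comes from the discussion above).

```python
# Hypothetical sketch of filtering alignment hits for specificity over
# sensitivity: keep only strong, unambiguous homologies, ignore weak ones.
# Thresholds and example records are illustrative assumptions.

def is_definite_homology(hit, max_evalue=1e-10, min_identity=30.0):
    """Keep a hit only if both its E-value and percent identity are strong."""
    return hit["evalue"] <= max_evalue and hit["identity_pct"] >= min_identity

hits = [
    # A strong, well-conserved match (hypothetical numbers)
    {"query": "protein_A", "subject": "ortholog_A",
     "evalue": 3e-45, "identity_pct": 62.0},
    # A weak match, like sigma 70 vs TFIIB (E value 1.4, as quoted above)
    {"query": "sigma70", "subject": "TFIIB",
     "evalue": 1.4, "identity_pct": 18.0},
]

definite = [h for h in hits if is_definite_homology(h)]
print([h["subject"] for h in definite])  # only the strong hit survives
```

With a strict cutoff like this, the sigma 70 / TFIIB alignment is simply ignored rather than denied, which is exactly the point: weak or structural homologies may well be real, but they do not enter the specificity-oriented measurement.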
OK, as you have raised the question, I have just checked the literature reference in the Wikipedia page:
The sigma enigma: Bacterial sigma factors, archaeal TFB and eukaryotic TFIIB are homologs
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4581349/
As you can see from the abstract, they took into consideration structure similarities, not only sequence alignments.
Maybe you can have a look at the whole article. Right now I don't think I have the time.
To all (specially UB):
One interesting aspect of the NF-kB system discussed here is that, IMO, it can be seen as a polymorphic semiotic system.
Let’s consider the core of the system: the NF-kB dimers in the cytoplasm, their inhibition by IkB proteins, and their activation by either the canonical or the non-canonical pathway, with the cooperation of the ubiquitin system. IOWs, the central part of the system.
This part is certainly not simple, and has its articulations, for example the different kinds of dimers that can be activated. However, when looking at the whole system, this part is relatively simple, and it uses a limited number of proteins. In a sense, we can say that there is a basic mechanism that works here, with some important variations.
Well, like in all the many pathways that carry a signal from the membrane to the nucleus, even in this case we can consider the intermediate pathway (the central core just described) as a semiotic structure: indeed, it connects symbolically a signal to a response. The signal and the response have no direct biochemical association: they are separated, they do not interact directly, there is no direct biochemical law that derives the response from the signal.
It’s the specific configuration of the central core of the pathway that translates the signal, semiotically coupling it to the response. So, that core can be considered as a semiotic operator that given the operand (the signal) produces the result (the response at nuclear level).
But in this specific case there is something more: the operator is able to connect multiple operands to multiple specific results, using essentially the same set of tools. IOWs, the NF-kB system behaves as a multiple semiotic operator, or, if we want, as a polymorphic semiotic operator.
Now, that is not an exclusive property of this system. Many membrane-nucleus pathways behave, in some measure, in the same way. Biological signals and their associations are never simple and clear-cut.
But I would say that in the NF-kB system this polymorphic attitude reaches its apotheosis.
There are many reasons for that:
a) The system is practically universal: it works in almost all types of cells in the organism.
b) There is a real multitude of signals and receptors, of very different types. Suffice it to mention cytokine stimuli (TNF, IL1), bacterial or viral components (LPS), and specific antigen recognition (BCR, TCR). Moreover, each of these stimuli is connected to the central core by a specific, often very complex, pathway (see the CBM signalosome, for example).
c) There is a real multitude of responses, in different cells and in the same cell type in different contexts. Even if most of them are in some way related to inflammation, innate immune response or adaptive immune response, there are also responses involving cell differentiation (neurons). In B and T cells, for example, the system is involved both in the differentiation of B and T cells and in the immune response of mature B and T cells after antigen recognition.
This is a really amazing flexibility and polymorphism. A complex semiotic system that implements, with remarkable efficiency, a lot of different functions. This is engineering and programming of the highest quality.
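The idea of a polymorphic semiotic operator described above can be caricatured in code: the coupling between signal and response is fixed entirely by the operator's internal configuration, not by any direct biochemical law linking the two. This is a purely conceptual toy; the function name, the configuration table, and the response labels are all hypothetical simplifications, not a model of the actual pathway.

```python
# Toy illustration of a "polymorphic semiotic operator": one core machinery,
# many context-dependent signal -> response couplings. All entries here are
# hypothetical simplifications for illustration only.

CORE_CONFIG = {
    # (signal, cell context) -> nuclear response
    ("TNF", "fibroblast"): "inflammatory gene program",
    ("LPS", "macrophage"): "innate immune gene program",
    ("antigen_BCR", "B cell"): "B cell activation program",
    ("antigen_TCR", "T cell"): "T cell activation program",
}

def nfkb_core(signal: str, context: str) -> str:
    """The 'operator': the mapping lives in the configuration, not in any
    direct interaction between signal and response."""
    return CORE_CONFIG.get((signal, context), "no response")

print(nfkb_core("LPS", "macrophage"))  # innate immune gene program
print(nfkb_core("LPS", "neuron"))      # no response
```

The point of the sketch is only semiotic: the same "signal" word produces different "response" words depending on context, and nothing in the signal itself determines the response; the configuration of the central core does.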
GP
As I was discussing with JAD, I have always argued that ID is a scientific project. But I am tending now to see it as a philosophical proposition. Your statement above is a philosophical view. You are giving a framework for what you think science should be.
But science cannot define itself or create its own limits. Science cannot even tell us what “matter” is or what it means for something to be “immaterial”. Those are philosophical concepts. Science also cannot tell us what causes are acceptable. Science cannot tell us that it should not have a commitment to a worldview.
So, for example, if I wanted to do “my own science”, I could establish rules that I want. Nobody can stop me from that.
I could have a rule: “For any observation that cannot be explained by known natural causes, we must conclude that God directly created what we observed”.
There is nothing wrong with that if that is “my science”. Of course, if I want to communicate I would have to convince people to believe in my philosophy of science. But that would have nothing to do with science itself, but rather my efforts to convince people of my philosophical view.
Now, we could have what we call “Dawkins Science”. I believe that’s what a majority of biologists accept today. Again, it is perfectly legitimate. Dawkins and all others like him will claim “science can only accept natural causes, or material causes”.
So, they establish rules. Science cannot tell us if those rules are correct or not. It is only philosophy that says it.
Then ID comes along, and IDists will say “ID is science”.
Here is where I disagree.
Whenever we make a sweeping statement about “science” we are talking about “the consensus”.
If Dawkins is the consensus, then to claim “ID is science” means that it is perfectly compatible with Dawkins’ science.
If, however, the claim “ID is science” means “you have to accept our version of science to accept ID”, then that’s a mistake.
Again, to claim something “is science” usually means it is the consensus definition of science.
To redefine science in any way one wants to, is not a scientific project. It is a philosophical project.
If ID requires a specific kind of science that allows for non-natural causes, for example, then I would not call ID a scientific project.
With that, even if science accepted non-natural causes, I would still consider ID to be philosophical. ID uses scientific data, but the conclusions drawn are non-scientific. Only if ID stopped by stating “this is evidence of intelligence” – that would be science. But once the conversation moves to the idea that “where there is intelligence, there must be an intelligent designer” – that is philosophical. Science cannot even define what intelligence is. Those definitions are part of the rules of science that come from a philosophical view.
For example, there could be a pantheistic view that believes that all intelligence emerges from a universal mind which is present in all of reality. So, evidence of intelligence would not mean that there is an Intelligent Designer. It would only mean that the intelligence came from the spirit of the universe which is an impersonal spiritual force and is not a “designer” in that sense.
GP & JAD
Here is JAD’s comment on the topic of ID as science:
That is right. All science requires an a priori metaphysical commitment. “Mainstream science” has accepted one particular view. But nobody can say that view, or any view is “true science”. It comes down to the philosophical view of “what is reality”? Are there real distinctions between things or are those distinctions arbitrary? Western philosophy tells us one thing, but there are other philosophical views.
Again, if ID is saying that “Dawkins is using the wrong kind of science”, then that’s a philosophical debate about what science should be.
For me if ID can be fully compatible with the science that Dawkins uses, then that’s powerful. In that case, I think it would be more reasonable to say that “ID is science” since it is using the exact same understanding of science that people like Dawkins use.
Silver Asiatic:
OK, I disagree with you about many things. Not all.
Let’s see if I can explain my position.
You quote my statement:
“Science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview.”
And then you say that this is a philosophical view. And I absolutely agree.
That was clearly a statement of my position about philosophy of science. Philosophy of science is philosophy.
I usually don’t discuss my philosophy here, except of course my philosophy of science, which is absolutely pertinent to any scientific discussion. So yes, when I say that science has the duty to make good inferences from facts, without any a priori commitment to any specific worldview, I am making a statement about philosophy of science.
I also absolutely agree that “science cannot define itself or create its own limits”. It’s philosophy of science that must do that.
Where I absolutely disagree with you is in the apparent idea that philosophy of science is a completely subjective thing, and that everyone can "make his own rules". That is completely untrue. Philosophy is not purely subjective, just as science is not purely objective. They are different, but both are rather objective, with many subjective aspects.
There is good philosophy and bad philosophy, as there is good science and bad science. And, of course, there is bad philosophy of science.
You say: “So, for example, if I wanted to do “my own science”, I could establish rules that I want. Nobody can stop me from that.”
It is true that nobody can stop you, but it is equally true that it can be bad science, and everyone has a right to judge for himself if it is good science or bad science.
The same is true for philosophy of science.
The really unbearable part of your discourse is when you equate science to consensus. This is a good example of bad philosophy of science. For me, of course. And for all those who want to agree. There is no need for us to be the majority. There is no need for consensus.
Good science and good philosophy of science must be judged in the sacred intimacy of our own consciousness. We are fully responsible for our judgement, and we have the privilege and the duty to defend that judgement and share it with others, whether they agree or not, whether there is consensus about it or it is shared only by a minority.
Because in the end truth is the measure of good science and of good philosophy. Nothing else.
Consensus is only a historical accident. Sometimes there is consensus for good things, many times for bad things. Consensus for bad science does not make it good science. Ever.
Then you insist:
“If ID requires a specific kind of science that allows for non-natural causes, for example, then I would not call ID a scientific project.”
ID requires nothing like that. ID infers a process of design. A process of design requires a designer. There is nothing non natural in that. Therefore ID is science.
Moreover, I could show, as I have done many times, that the word "natural" is wholly misleading. In the end, it just means "what we accept according to our present worldview". In that sense, any form of naturalism is the end of true science. Naturalism is a good example of bad philosophy of science.
And I know, that is not the consensus. I know that very well. But it is not "my own rule". It is a strong philosophical belief, which I am ready to defend and share with anyone, and to which I will remain loyal to the end of my days, unless of course I someday find some principle that is even better.
Just a final note. You say: ” Science cannot even tell us what “matter” is or what it means for something to be “immaterial”. Those are philosophical concepts.”
Correct. And I don’t think that even philosophy has good answers, at present, about those things. Indeed, I think that “matter” and “immaterial” are vague concepts.
But science can be more precise. For example, science can determine whether something has mass or not. Some entities in reality have mass, others don't. This is a scientific statement.
In our discussion, I did not use the word "immaterial". That word was introduced by you. I just stated, answering your question, that it seemed reasonable that the biological designer(s) did not have a physical body like ours, because otherwise there should be some observable trace of that fact. This implies no sophisticated philosophical theory about what matter is. I suggested that, as we know that consciousness exists but we don't know what it is, it is not unreasonable to think that it can exist without a physical body like ours. Not only is it not unreasonable, but indeed most people have believed exactly that for millennia, and even today, probably, most people believe it.
I could add that observable facts like the reports of NDEs strongly suggest that hypothesis.
True or false as it may be, the hypothesis that consciousness is not always linked to a physical body like ours is a reasonable idea. There is no reason at all to consider that idea "not natural" or to ban it a priori from any scientific theory or scenario. To do that is to do bad science and bad philosophy of science, driven by a personal philosophical commitment that has no right to be imposed on others.
Silver Asiatic:
“For me if ID can be fully compatible with the science that Dawkins uses, then that’s powerful.”
But ID is fully compatible with the science that Dawkins uses. It's Dawkins who uses that science badly and defends wrong theories. It's Dawkins who rejects the good theories of ID because of ideological prejudices. We can do nothing about that. It's his personal choice, and he is a free individual. But there is no reason at all to be influenced or conditioned by his bad scientific and philosophical behaviour.
GP
I think you’re being inconsistent. That’s one thing I’m trying to point out. You agree that your statement about science (and therefore your foundation for ID) is a philosophical position. However, you often state something like this:
But it is simply not possible to avoid your philosophical view since that view is the basis of all your understanding of science and your scientific reasoning. In fact, I would say it’s unreasonable to insist that you’re trying to avoid your philosophical view. Why would you do that? Your philosophy is the most important aspect of your science. Why conceal it as if you could do science without a philosophical starting point?
At the risk of irritating you, I feel the need to repeat something continually through my response – and that is, almost everything you said was a philosophical discourse. I have been discussing on one of KF’s threads the objective foundation of philosophy, but after that (which is minimal) philosophy is almost entirely subjective. We can freely choose among options.
I disagree here and I offered a long explanation in debating with atheists on KF’s most recent thread. The only objective thing about philosophy is the starting point – that truth has a greater value than falsehood. We cannot affirm a value for falsehood. But after that, even the first principles of reason are not entirely objective. They must be chosen, for a reason. A person must decide to think rationally. For reasons of virtue which are inherent in the understanding of truth, we have an obligation to use reason. But this obligation is a matter of choice.
My repeated phrase here: That’s a philosophical view. Secondly, you are appealing to consensus “everyone can judge”. There are some cultures that forbid a Western approach to science. Their consensus will say that “mainstream science” is bad science. They have different goals and purposes in life. I think of indigenous cultures, for example, or some religions where they approach science differently.
In this case, truth follows from first principles. Science is not an arbiter of truth, it is only a method that follows from philosophy in order to gain understanding, for a reason. If a science follows logically from its first principles, then it is good science. I gave an example of a different kind of science where I could say that God is a cause. Or we could talk about Creation Science where the Bible establishes rules for science. Those are different first principles – different philosophical starting points. Creationism is perfectly legitimate philosophy and if science follows from it logically, then the science is “good science”. We may have a reason to reject Creationist philosophy but that cannot be done on an entirely objective basis. We decide based on the priority we give to certain values. We want something, so we want a science that supports what we want. But people can want different things.
Again, you offer your philosophical view. In your view, a process of design requires a designer. That is philosophy. If a person accepts your philosophy, then they can accept your ID science. I think the more usual statement of ID is that “we can observe evidence of intelligence” in various things. What I have not seen is that “all intelligent outputs require a designer”. That is a philosophical statement, not a scientific one. Science cannot establish that all intelligence necessarily comes from “a designer” or even what the term “a designer” means in this context. All science can do is say that something “looks like it came from a source that we have already classified as ‘intelligence'”. If that source is “a designer”, we do not know.
Again, these are philosophical concepts. Even to judge good science versus bad science requires a correlation with philosophical starting points. Again, there is no such thing as “good science” as if “science” exists as an independent agent. Science is a function of philosophical principles. If the science aligns with the principles, then it is coherent and rational (but even that is not required). But it is impossible to judge if science is good or bad without first accepting a philosophical basis. The idea that only material causes can be accepted in science is a perfectly valid limitation. To disagree with it and prefer another definition is a philosophical debate and it will come down to “what do we want to achieve with science”? There is nothing objective about that. Science is a tool used for a purpose and there is nothing that says “science must only have this purpose and no other”. People choose one philosophy of science or another. There is no good or bad. There can be contradictory or irrational application of science — where science conflicts with the stated philosophy. For example, if Dawkins said “science can only accept material causes” and then said later that “science has indicated that a multiverse exists outside of time, space and matter” – that would be contradictory. We could call that “bad science” because it is irrational. But even there, a person is not required, necessarily, to be entirely rational in all aspects of life. We are required to be honest and to tell the truth. But if Dawkins said, that he makes an exception for a multiverse, his science remains just as “good” as any. Science is not absolute truth. It’s a collection of rules used for measurement, classification, experiment to arrive at understanding within a certain context.
Again, this is entirely a philosophical view. There is nothing wrong with a science that says “we only accept what accords with our worldview”. That’s a philosophical starting point. People may have a very good reason for believing that. Or not. So, all of their science will be “natural” in that sense. Again, there is no such thing as “true science”. You are not the arbiter of such a thing. Even to say that “all science must follow strictly logical processes” is a philosophical bias. There can be scientific philosophies that accept non-logical conclusions and various paradoxical understandings.
When I say that it is “your own rule” I mean it is a rule that you have chosen to accept. You could have chosen another, like the consensus view. That is what I would prefer for ID, that it accept the consensus view on what “natural” means and basically all the consensus rules of science. I would not like to have to say that “ID requires a different understanding of terms and of science, than the consensus does”. But even if not, ID researchers are free to have their own philosophical starting points and defend them, as you would do. But as I said, I think the only aspect of philosophy that we are compelled to accept is the proto-first principles. Even there, a person must accept that thinking rationally is a duty. As I said, there can be philosophical systems that do not hold logic, analysis, and rational thought as the highest virtue. There can be other values more important to human life which would leave rational thought as a secondary value, and therefore not absolutely required in all cases. So, a contradictory scientific result would not be a problem in that philosophical view.
Yes, exactly. Science can tell us nothing about this. Your view would be reasonable as matched against your philosophy. Again, it depends if a person has a philosophical view that could accept such a notion. If the belief is that everything that exists is physical, then your point here would not be rational. The science would have nothing to do with it except to be consistent with one view or another.
I wouldn’t call that a “definition”. It is more like a classification. Science cannot define what “mass” is. There is no observation in nature that we can make to tell us that “this is the correct definition of mass”. In fact, there could be a philosophical view that does not recognize mass as an independent thing that could be classified. But there is a consensus view that has defined mass as a characteristic. Then science observes things and classifies them to see if they share what that thing (mass) is or not.
Silver Asiatic:
I have not the time now to answer all, but I want to clarify one point that is important, and that was not clear probably for my imprecision.
When I say:
“That’s why, as I have explained, I strive a lot to avoid, as much as humanly possible, any influence of my philosophy or theology on my scientific reasonings.”
I am not including in that statement philosophy of science. My mistake, I apologize, I should have specified it, but you cannot think of everything.
Of course I believe that our philosophy of science can and must guide the way we do science. Probably, it seemed so obvious to me that I did not think of specifying it.
What I meant was that our philosophy about everything else must not influence, as far as that is possible, our scientific reasoning.
As I have said, there is good science and bad science, good philosophy of science and bad philosophy of science. One is responsible both for his science and for his philosophy of science. But of course we have a duty to do science according to our philosophy of science. What else should we do?
However, even if of course there can be very different philosophies of science, some basic points should be very clear. I think that almost all who do good science would agree about the basic importance of facts in scientific reasoning. So, any philosophy of science, and related science, that does not put facts at the very center of scientific reasoning is a bad philosophy of science. I say “for me” because I assume full responsibility for that statement, not because I consider it a subjective aspect. For me, that is an objective requirement of a good philosophy of science.
OK, more later.
Silver Asiatic:
I disagree. My discourses here are rarely philosophical. Well, sometimes. But all my reasonings about ID detection, functional information, biology, functional information in biology, homologies, common descent, and so on, in practice most of what I discuss here is perfectly scientific, and in no way philosophical.
Of course, as said, my science is always guided by my philosophy of science. I take full responsibility for both.
And I fully disagree that “philosophy is almost entirely subjective”. That’s not true. There is much subjectivity in all human activities, including philosophy, science, art, and so on. But there is also a lot of objectivity in all those things.
One thing is certainly true: “We can freely choose among options.” Of course. In everything.
We can freely pursue what is true or what is wrong. What is good or what is bad. We can freely lie, or fight for what we believe to be true. We can freely love or hate. And so on. I think I give the idea.
Does that mean that truth, good, lies, love, are in no way objective?
I don’t believe that. But of course you can freely choose what to believe.
And yes, this is a philosophical statement.
GP
Right. Based on your philosophy and worldview it is objective. That is consistent and makes sense. Philosophically, you call some things “facts” and then you use those in your scientific reasoning. You have an overall understanding of reality. I’ll suggest that you cannot really separate “everything else” of your philosophy from your scientific view. As I see it, they’re all connected. This is especially true when you seek to talk about a designer or things like randomness or immaterial, natural, entities — all of these things.
This is where I agree that “ID is science” as long as “ID lines up with my philosophy of science”. To me, that is consistent and reasonable (although whether the philosophy and definitions should be aligned could be debated).
Someone like Dawkins will say “ID is not science” because he thinks that ID does not line-up with his philosophy of science. He has just defined ID out of the question. Dawkins will fail if he says: “My philosophy is consistent and rational and my science follows this”. But then later he indicates that he will not accept conclusions that his own scientific philosophy will support. Then he’s got a problem.
I always thought that’s what ID was trying to do. Use Dawkins’ own worldview and his own claims – all the things he already accepts — and show that ID is the most reasonable conclusion. It would all be based on his (or mainstream) science.
I know some creationists who say ID is “dishonest” because the worldview is concealed, but I think ID is just trying to play by the rules of the game (consensus view) and show that there is evidence for Design even using mainstream evolutionary views.
GP
I realize that this may seem irritating, but I even caught myself with that. There are people, perhaps, who think that all of our actions are determined by some cause. It’s the whole question of free-will.
My point here is that I think a coherent philosophy, beginning with first principles, has to be in place. After that, the people that we talk with have to either understand, or better, accept our philosophy.
If they have a bad philosophy, then I think the problem is to help them fix that. I think that has to happen before we can even get into the science.
My philosophy is rooted in classical Western theism and is linked to my theological views. I am leaning more and more to the idea that it is not worth the effort to adopt “Dawkins philosophy/science” for the sake of trying to convince people, and that it may be more effective to start with the clash of philosophies and world-views rather than start with science (ID). Not sure, but I am leaning that way. Putting philosophy and theology first, and then using ID inferences to support that might work better.
Silver Asiatic:
That’s definitely what ID is trying to do. That’s certainly what I am trying to do.
Maybe. But I think the two things can and should work in parallel. There is no conflict at all, as far as each activity is guided by its good and pertinent philosophy! 🙂
And, at least for me, the purpose is not to convince anyone, but to offer good ideas to those who may be interested in them. In the end, I very much believe in free will, and free will is central not only in the moral, but also in the cognitive sphere.
To all:
Again about crosstalk.
It seems that our NF-kB system is continuously involved in crosstalks of all types.
This is about crosstalk with the system of nucleoli:
Crosstalk between NF-kB and Nucleoli in the Regulation of Cellular Homeostasis
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6210184/
Emphasis mine.
And this is about crosstalk with Endoplasmic Reticulum:
The Crosstalk of Endoplasmic Reticulum (ER) Stress Pathways with NF-kB: Complex Mechanisms Relevant for Cancer, Inflammation and Infection.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6027367/
Emphasis mine.
Another word that seems to recur often is “combinatorial”.
And did you read this? These two signaling pathways “converge within the nucleus through ten major transcription factors (TFs)”. Wow! 🙂
GP,
the topic you chose for this OP is fascinating indeed.
Here’s a related paper:
Transcription factor NF-kB in a basal metazoan, the sponge, has conserved and unique sequences, activities, and regulation
Leah M. Williams, Melissa M. Inge, Katelyn M. Mansfield, Anna Rasmussen, Jamie Afghani, Mikhail Agrba, Colleen Albert, Cecilia Andersson, Milad Babaei, Mohammad Babaei, Abigail Bagdasaryants, Arianna Bonilla, Amanda Browne, Sheldon Carpenter, Tiffany Chen, Blake Christie, Andrew Cyr, Katie Dam, Nicholas Dulock, Galbadrakh Erdene, Lindsie Esau, Stephanie Esonwune, Anvita Hanchate, Xinli Huang, Timothy Jennings, Aarti Kasabwala, Leanne Kehoe, Ryan Kobayashi, Migi Lee, Andre LeVan, Yuekun Liu, Emily Murphy, Avanti Nambiar, Meagan Olive, Devansh Patel, Flaminio Pavesi, Christopher A. Petty, Yelena Samofalova, Selma Sanchez, Camilla Stejskal, Yinian Tang, Alia Yapo, John P. Cleary, Sarah A. Yunes, Trevor Siggers, Thomas D. Gilmore
doi: 10.1101/691097
To all:
OK, this very recent paper (published online 2019 Jan 8) seems to be exactly about what I discuss in the OP:
On chaotic dynamics in transcription factors and the associated effects in differential gene regulation
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6325146/
The abstract:
I think I will read it carefully and come back about it later. 🙂
To all:
The paper linked at #194 is really fascinating. I have given it a first look, but I will certainly go back to digest some aspects better (probably not the differential equations! 🙂 ).
Two of the authors are from the Niels Bohr Institute in Copenhagen, a really interesting institution. The third author is from Bangalore, India.
For the moment, let’s start with the final conclusion (I have never been a tidy person! 🙂 ):
The emphasis on “toolbox” is mine, and the reason I have added it should be rather self-evident. 🙂
Let’s think about that.
To all:
Indeed, I have not been really precise at #194, I realize. I said:
“OK, this very recent paper (published online 2019 Jan 8) seems to be exactly about what I discuss in the OP:”
But that is not really true. This paper indeed adds a new concept to what I have discussed in the OP. In fact the paper, while briefly discussing also random noise, is mainly about the effects of a chaotic system, something that I had not considered in any detail in my OP. My focus there has been on random noise and far from equilibrium dynamics. Chaos systems certainly add a lot of interesting perspective to our scenario.
OLV at #193:
Interesting paper.
Indeed, I blasted the human p100 protein against sponges, and there is a good homology (total bitscore 523 bits).
So yes, the system is rather old in metazoa.
Consider that the same protein, blasted against single celled eukaryotes, gives only a low homology (about 100 bits), limited to the central ANK repeats. No trace of the DNA binding domain.
So, the system seems really to arise in Metazoa, and very early.
GP #129,
Thanks very much. I will give it a read.
Life comes from life, once it has been started, that is for sure. However, it does not apply equally either to creation (design, for the purposes of this discussion) or to imagined abiogenesis. It is clear that the vitalistic rule is violated in the case of abiogenesis, but it is also violated in the case of design because the relation between the designer and the designed is not that of birth/descent. It can be more likened to the relation between the painter and the painting. Fundamentally, the painting is of a different nature from the painter, whereas descent implies the same nature between the ancestor and the progeny.
As an aside, a grumpy remark, I do not like the new GUI on this blog 😉 The old one was way better. This one feels like one of .gov British sites for the plain English campaign. It is less convenient when accessed with a mobile phone. But it does not matter…
EugeneS
That is a great point and analogy. Yes, I think where there is design then there is a purposeful, creative act and what follows from that cannot be considered descent for the reason you give.
EugeneS:
That is an important point.
The question is: can life be reduced to the designed information that sustains it?
If that is the case, then design explains everything, both at OOL and later.
If the answer is no, all is different.
As we still don’t understand what life is, from a scientific point of view, we have no final scientific answer. My personal opinion is that the second option is true, and that would explain why in our experience life comes only from life.
If life cannot be reduced to the designed information that sustains it, then certainly OOL is a case where both a lot of designed functional information appears and life is started, whatever that implies.
For what happens after OOL, all depends on the model one accepts. I don’t know if you have followed the discussion here between BA and me. In particular, the three possible models I have discussed at #43.
In my model (model b in that post) after OOL things happen by descent with added design. So, in that model, it is true after OOL that life always comes from life (if the descent is universal), and only OOL would be a special event in that sense. The new functional information, in all cases, is the product of design interventions.
In model c, instead, each new “kind” (to use BA’s term) is designed from scratch at some time. So, the appearance of each new kind has the same status as an OOL event.
Model a is just the neo-darwinian model, where everything, at all times, happens by RV + NS, and no design takes place, least of all a special, information independent start of life.
Gp,
I am still trying to define precisely what a transcription factor is. Earlier @ 91, I asked “are there transcription factors for prokaryotes?” According to Google, no.
https://uncommondescent.com/intelligent-design/controlling-the-waves-of-dynamic-far-from-equilibrium-states-the-nf-kb-system-of-transcription-regulation/#comment-680819
(But maybe what I am not understanding is the result of a difference of semantics, context or nuance.)
Recently, I ran across another source which seemed to suggest that prokaryotes do have TF’s.
https://www.khanacademy.org/science/biology/gene-regulation/gene-regulation-in-eukaryotes/a/eukaryotic-transcription-factors
This article seems to suggest that the lac operon is a transcription factor but then in the next paragraph it states: “In humans and other eukaryotes, there is an extra step. RNA polymerase can attach to the promoter only with the help of proteins called basal (general) transcription factors.”
So is the lac operon a transcription factor? Is the term operon synonymous with transcription factor, or is there a difference? In other words, do “operons” have the same role in transcription as TF’s?
Is there a strong homology between the lac operon which turns on the gene for lactose metabolism in e coli and the TF/lactose metabolism gene in eukaryotes, including humans? Does this have anything to do with lactose intolerance?
John_a_designer:
OK, that’s how I see it.
In eukaryotes we must distinguish between general TFs, which act in much the same way in all genes and are required to initiate transcription by helping recruit RNA polymerase at the promoter site, and specific TFs, that bind at enhancer sites and activate or repress transcription of specific genes. The NF-kB system described in the OP is a system of specific TFs.
Now, in eukaryotes there are six general TFs. Archaea have three. In bacteria, sigma factors have the role of general TFs. Sigma factors, archaeal general TFs and eukaryotic general TFs seem to share some homology. I think that the archaeal system, however, is much more similar to the eukaryotic system, and that includes RNA polymerases.
Then bacteria have a rather simple system of repressors and activators, specific for particular genes, or better, operons. Those repressors and activators bind DNA near the promoter of the specific operon. They are in some way the equivalent of eukaryotic specific TFs, but the system is by far simpler.
You can find some good information about bacteria here:
https://bio.libretexts.org/Bookshelves/Cell_and_Molecular_Biology/Book%3A_Cells_-_Molecules_and_Mechanisms_(Wong)/9%3A_Gene_Regulation/9.1%3A_Prokaryotic_Transcriptional_Regulation
The operon is simply a collection of genes that are physically near, are transcribed together from one single promoter, and are functionally connected.
So, the lac operon is formed by three genes, lacZ, lacY, lacA, sharing one promoter. A sigma factor binds at the promoter, together with RNA polymerase. A repressor and an activator may bind DNA near the promoter to regulate operon transcription.
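Just to make the logic concrete, here is a little sketch of my own (a deliberately simplified toy, not taken from the linked page) of how the lac operon’s regulation can be described:

```python
# Toy logic of lac operon regulation (a deliberately simplified sketch,
# not taken from the linked page). Transcription of lacZ, lacY and lacA
# from the single shared promoter depends on two inputs:
#  - the LacI repressor blocks transcription unless lactose inactivates it;
#  - the CAP activator boosts transcription when glucose is low (cAMP high).

def lac_operon_transcription(lactose_present, glucose_present):
    """Return a qualitative transcription level for the lac operon."""
    repressor_bound = not lactose_present   # lactose (allolactose) frees the promoter
    cap_active = not glucose_present        # low glucose -> high cAMP -> CAP active

    if repressor_bound:
        return "off"    # the repressor blocks RNA polymerase regardless of CAP
    if cap_active:
        return "high"   # repressor off and CAP bound: strong transcription
    return "low"        # repressor off but no CAP help: weak transcription

# The three genes share this single decision because they share one promoter:
operon_genes = ("lacZ", "lacY", "lacA")
```

The point is simply that the three genes are governed by one shared decision at one promoter.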
While archaea are more similar to eukaryotes in the system of general TFs, the regulation of transcription by one or two repressors or activators seems to be similar to what is described for bacteria.
Finally, there is another important aspect where archaea are more similar to eukarya. Their chromatin structure is based on histones and nucleosomes, like in eukaryotes, but the system is rather different from the corresponding eukaryotic system.
Instead, bacteria have their own form of DNA compaction, but it is not based on histones and nucleosomes.
This, as far as I can understand.
Thank you Gp,
The link you provided cleared up some misunderstanding on my part (operons are not TF’s but are groupings of genes that TF’s help activate) and clarified a number of other things.
To all:
From the paper above mentioned, a paragraph about the difference between random noise and chaos.
To all:
The paper is about a simplified model of interaction between two different oscillating systems, NF-kB and TNF. The interaction between the two can generate, in some circumstances, a chaotic system.
Indeed, the main cause of the oscillations in the NF-kB system seems to be the alternating degradation of IkB alpha (the inhibitor), IOWs the activation of the dimer, and the re-synthesis of IkB alpha: a form of negative feedback.
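To see intuitively how that kind of negative feedback can generate oscillations, here is a very crude toy model of my own (just an illustrative caricature, not the real biochemistry and not the model in the paper):

```python
# A very crude caricature of a negative feedback loop (my own toy model,
# NOT the biochemistry or the model in the paper):
#   x = "active nuclear NF-kB", y = "IkB alpha level".
# Active NF-kB drives IkB synthesis; IkB, in turn, shuts NF-kB activity down.

def simulate(steps=2000, dt=0.01):
    x, y = 1.0, 0.0          # start with the dimer fully active, no inhibitor
    xs = []
    for _ in range(steps):
        dx = 1.0 / (1.0 + (5.0 * y) ** 4) - x   # IkB represses NF-kB (Hill-like term)
        dy = x - 0.5 * y                        # NF-kB induces IkB; IkB decays
        x += dt * dx
        y += dt * dy
        xs.append(x)
    return xs

# The trajectory of x overshoots and rebounds: the feedback first drives
# activity down, then, as the inhibitor decays, lets it rise again.
```

Even this two-variable caricature shows the basic logic: activity falls while the inhibitor accumulates, and recovers as the inhibitor is cleared.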
GP,
at what point is it believed that the oscillations in the NF-kB system appeared for the first time in biological history?
Pw:
I don’t think we have any idea about that.
To all:
OK, back to the paper about chaos.
So, the general idea is that, in the simplified (but very precise) model used by the authors, fixed-period (50 minutes) oscillations in TNF concentration can act as an “external signal”, so that the NF-kB oscillation “locks on to the external signal’s frequency and phase” (Fig. 1c, bottom line).
But, according to the amplitude of the TNF oscillations, the effect changes. While for low amplitudes there is the “locking” effect, intermediate amplitudes translate into some regular variation of the NF-kB amplitude, IOWs they generate “multi stable cycles” of different amplitude in the NF-kB oscillations (Fig. 1c, intermediate line).
Finally, if the amplitude of the external signal increases further, the system becomes chaotic, and the amplitude of the NF-kB system oscillations becomes completely unpredictable.
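This qualitative progression (regular locking, then multi-stable cycles, then chaos as a control parameter grows) is typical of non-linear systems. As a generic illustration of my own, using the textbook logistic map rather than the authors’ model:

```python
# Generic illustration with the textbook logistic map (NOT the paper's model):
#   x_{n+1} = r * x_n * (1 - x_n)
# As the control parameter r grows, the map goes from a stable fixed point,
# through period doubling, to full chaos: the same qualitative progression
# as the NF-kB/TNF model with growing TNF amplitude.

def iterate(r, x0, n):
    """Apply the logistic map n times starting from x0."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

# r = 2.5: everything converges to the fixed point x* = 1 - 1/r = 0.6.
# r = 3.9: deterministic chaos. Two starting points that differ by one
# billionth end up on completely unrelated trajectories, even though every
# single step is fully deterministic.
```

That is the hallmark of chaos: perfect determinism at each step, yet total sensitivity to initial conditions.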
OK, that’s very interesting.
But the important point is: according to the authors, these variations of pattern in the NF-kB signal, induced by variations in the amplitude of the external signal (TNF), will have definite effects on downstream transcription patterns in the nucleus. Indeed, the point made by the authors is that the chaotic pattern induced by high amplitudes in the external signal will have a definite and robust effect: it will enhance transcription of genes with low affinity for the NF-kB TFs (LAGs). In the other scenarios, instead, transcription of high affinity genes (HAGs) or medium affinity genes (MAGs) will prevail.
And the idea is that such a robust effect of an unpredictable pattern may well have a specific role in transcription regulation, IOWs it can be a supplementary “tool” in the functional regulation of which genes are transcribed, and therefore of the type and level of the response.
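To see intuitively why a spiky, irregular signal could favour low affinity genes, here is a toy occupancy model of my own (a crude sketch, not the authors’ analysis):

```python
# Toy occupancy model (my own crude sketch, NOT the authors' analysis).
# Promoter occupancy is a Hill function of the nuclear TF level; the
# constant K encodes binding affinity (small K = high affinity).

def occupancy(tf_level, K, n=2):
    """Fraction of time a promoter is bound at a given nuclear TF level."""
    return tf_level ** n / (K ** n + tf_level ** n)

def mean_expression(signal, K):
    """Average occupancy over a time series of nuclear TF levels."""
    return sum(occupancy(x, K) for x in signal) / len(signal)

# Two signals with the same average level but different shapes:
steady = [1.0] * 100                 # regular, moderate level
spiky = [0.2] * 90 + [8.2] * 10      # mostly low, with rare high peaks

# A low affinity gene (large K) profits from the rare high peaks of the
# spiky signal, while a high affinity gene (small K) does better with the
# steady one. That mirrors the LAG/HAG logic described above.
```

In this toy, a high affinity promoter is already near saturation at moderate TF levels, so spikes add little; a low affinity promoter responds appreciably only during the rare peaks that an irregular regime produces.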
Well, isn’t that interesting?
Of course, to do that in a functional way, there is the basic need that the high amplitude of the external signal and the chaotic pattern be correctly associated, so that the right signal generates the correct response. And that association is obviously semiotic.
So, if all this is true, the NF-kB system is not only a wonderful polymorphic semiotic system (see post #181), but also a semiotic system that uses, as a tool to connect the right stimulus to the right response, not only the usual biochemical configuration patterns, but also a very peculiar physical and mathematical effect of the type of oscillations involved in two different and separate systems.
Wow! 🙂
To all:
NF-kB is not the only TF system that presents oscillations in concentration and nuclear occupancy. Another important example is p53:
Conservation and divergence of p53 oscillation dynamics across species
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5687840/
p53 is a very important tumor suppressor gene, involved mainly in the response to DNA damage.
GP @208:
“the NF-kB system is not only a wonderful polymorphic semiotic system (see post #181), but also a semiotic system that uses, as a tool to connect the right stimulus to the right response, not only the usual biochemical configuration patterns, but also a very peculiar physical and mathematical effect of the type of oscillations involved in two different and separate systems.”
I think in this case “Wow!” is an understatement. 🙂
I still didn’t quite understand how old the NF-kB system is, what it evolved from, and how that could happen.
Gpuccio,
Here’s a question I have:
Is what we perceive here as chaos just the result of overwhelming complexity and complex interactions of interacting, overlapping and numerous dynamic systems?
Since my background is not in the biological sciences but in mechanical engineering– specifically machine design– I try to find analogies from the world of machines and machine systems to help me understand what is happening or maybe happening with biochemical “molecular machines” and biological systems.
The analogy I started to think of from reading over the paper (cited @ 194) was urban traffic flow which from a time-lapse birds-eye-view can appear to be chaotic and even at times without rhyme or reason.
Here, for example, are several time lapse video clips of street and highway traffic in Atlanta, Georgia in the U.S.
https://www.youtube.com/watch?v=zOu-f-GdfhU
While there is a continuous dynamic flow of traffic, it also at times appears to be chaotic, as cars and trucks appear, more or less at random, to change or merge from one lane of traffic to another, or stop at a street intersection to make a turn, etc. If, however, by analogy, we take a “microscopic” view of what each car or truck is doing, we find that every vehicle has a destination and a purpose for its travel. What makes the scene appear to be chaotic is that the individual travelers have different destinations and different purposes for their travel. For example, some travelers may be going to work or out to dinner or out to a sporting event or out shopping or going back home. There may be trucks delivering supplies and merchandise to businesses… or there may be fire and rescue vehicles speeding to an accident or a fire, or police responding to a crime. It appears to me that there is something like that going on in individual cells, which is just compounded astronomically when we consider the complexity of higher organisms as a whole.
Of course, as with all analogies, the analogy breaks down. At present, at least until self-driving cars and trucks become widely available and viable, each car or truck is under the control of an intelligent agent. Biological systems are more analogous to a world full of robots, with the robots maintaining and propagating the system. To paraphrase Abraham Lincoln, the robots would be of the system, by the system and for the system. Nevertheless, I still think on some level such a system would appear to be very chaotic, but that would be due to its overwhelming complexity.
If such systems were truly chaotic they would cease to function correctly and eventually cease to function at all. The overwhelming complexity, of course, is evidence of design.
John_a_designer at #212:
The questions you ask are very good, and the subject is not so intuitive as it could seem. I will try to express how I understand it, but of course I am ready to consider any contribution about this important point.
The first important thing is that we must not confound randomness and chaos. I have quoted at #204 a paragraph from the paper which tries to explain the difference between the two. However, I must say that I am not completely happy with what is said there.
My first point is that we are dealing here with systems that are, in essence, deterministic. Both chaotic systems and random systems are deterministic, in the sense that what happens in those systems is in the end governed by necessity laws, in particular the laws of physics or chemistry. I have said many times that the only field of science that probably implies a true randomness, what we could call intrinsic randomness, is quantum mechanics. In quantum mechanics, the wave function, if and when it collapses, collapses according to probabilistic distributions that are, probably (it depends on the interpretation), intrinsically random.
In all other, non-quantum systems, we assume that the laws of physics are the real laws that govern the evolution of the system, and those laws, if not at the quantum level, are deterministic laws. Indeed, even quantum mechanics is mainly deterministic: the wave function evolves in a completely deterministic way, unless and until it collapses.
So, both random systems and chaotic systems, if we are not considering quantum effects, are completely deterministic systems.
So, what is the difference between what we call a deterministic system and what we call a random system?
As I have said many times, the difference is only in how we can describe the system and its evolution.
Let’s consider a simple deterministic system. Let’s say that we have a gear with two kinds of teeth, one kind shorter and one kind longer. Let’s say that the gear is rotating at a constant rate, and it interacts with another gear so that the long teeth evoke one type of output and the shorter teeth evoke another type of output. So, we have a cyclic output with two states, which can be well predicted knowing the configuration of the first gear.
This is, very simply, a deterministic system, in the sense that we can fully describe it in terms of its initial configuration, and know with reasonable precision how the system will behave.
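As a toy illustration of this kind of determinism (the tooth pattern and names here are my own invention, purely for the sketch, not from any paper): given the initial configuration, every future output is exactly computable.

```python
# Toy model of the two-toothed gear described above.
# The particular tooth pattern is an arbitrary illustrative choice.
TEETH = ["long", "short", "long", "long", "short"]  # fixed configuration

def output_at(step):
    """Deterministic output: depends only on the configuration and the step."""
    tooth = TEETH[step % len(TEETH)]
    return "A" if tooth == "long" else "B"

# The whole future of the system is predictable from its initial state:
sequence = "".join(output_at(s) for s in range(10))
print(sequence)  # "ABAABABAAB": perfectly cyclic, fully predictable
```

Nothing probabilistic is needed to describe such a system: the configuration plus the necessity law gives the exact evolution.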
Now, let’s take instead a simple random system: the classic tossing of a fair coin. Here, too, the system is in essence deterministic: each coin toss completely obeys the laws of classical mechanics. If we could know all the initial conditions of the toss, we could, maybe with complex computations, know exactly whether the result will be a head or a tail.
But that is not the case. There is no way we can know all the variables involved, because there are too many of them, and we cannot measure or control all of them. The consequence is that we can never know for certain whether one specific toss will give a head or a tail.
So, are we completely powerless in front of such a system? Can we say nothing that helps us describe it?
No. If the coin is fair, we know that, over a big number of tossings, the percentages of heads and tails will be similar. Not exactly the same, but very similar, and ever more similar as we increase the number of tossings.
This is a probabilistic description. We are applying a mathematical object, a uniform probability distribution where only two events are possible, each with a probability of 0.5, to describe with some efficiency a simple system that we cannot describe in any other way.
This is randomness: the impossibility of predicting any single event, but the possibility of describing a general distribution with some precision.
Now, there is no need for the probability distribution to be uniform. And there is no need for necessity effects to be undetectable. In most real systems, including biological systems, random noise is mixed with necessity effects. If the random noise is strong enough that it cannot be ignored, the system is still random.
Let’s consider an unfair coin, where an uneven distribution of weight (a necessity effect) is strong enough that it modifies the neutral probability distribution, so that heads have a probability of 0.6 and tails a probability of 0.4. Is the system still random?
Of course it is. We have no way to know in advance what the result of our next tossing will be. The system is still random, because we can describe it only probabilistically. Still, the uneven distribution tells us that there is some necessity effect that favors heads.
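The contrast between single-event unpredictability and distribution-level predictability, for both the fair and the loaded coin, can be sketched numerically (a minimal simulation; the seed is arbitrary, for reproducibility only):

```python
import random

random.seed(42)  # arbitrary seed, only so the run is reproducible

def toss_frequency(n, p_heads=0.5):
    """Simulate n tosses. No single toss is predictable, but the
    long-run frequency of heads approaches p_heads as n grows."""
    heads = sum(random.random() < p_heads for _ in range(n))
    return heads / n

# Fair coin: the frequency converges toward 0.5 as n increases.
print(toss_frequency(100))      # noisy with few tosses
print(toss_frequency(100_000))  # very close to 0.5

# Loaded coin (a "necessity effect" deforming the distribution):
print(toss_frequency(100_000, p_heads=0.6))  # close to 0.6, still random
```

The loaded coin remains a random system: we still cannot predict any individual toss; we can only detect the necessity effect in the shape of the distribution.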
OK, so this is randomness. Many different variables, which we cannot really measure or control, interact independently to generate a configuration that can be described only by a probability distribution. In no case can we know deterministically how the system will evolve.
It is interesting that many random systems in nature are not well described by a uniform distribution, even a loaded one, but rather by other probability distributions, first of all the normal distribution. In the normal distribution, the system is random, but certain events are much more likely than others.
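The reason many independent variables tend to produce a normal, bell-shaped distribution rather than a uniform one is the central limit theorem. A quick sketch (purely illustrative; the "50 small effects" setup is my own toy construction):

```python
import random
import statistics

random.seed(0)  # arbitrary

# Each outcome is the sum of 50 small independent effects, like the
# many uncontrolled variables of a real physical or biological system.
samples = [sum(random.uniform(-1, 1) for _ in range(50))
           for _ in range(20_000)]

# Even though each single effect is uniform, the sum is bell-shaped:
# values near the mean are far more likely than values in the tails.
mean = statistics.mean(samples)
near = sum(abs(x - mean) < 2 for x in samples) / len(samples)
far = sum(abs(x - mean) > 10 for x in samples) / len(samples)
print(round(mean, 2), near, far)  # mean near 0; central values common, tails rare
```

So a normal distribution is exactly what we should expect when a configuration emerges from many independent small causes, which is the situation described above.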
Chaos is another thing. Chaotic systems are deterministic systems, sometimes quite simple ones, where some special form of the mathematics that describes the system makes its evolution extremely sensitive to small variations in the starting conditions. In the example of the model described in the paper, oscillations in the external signal determine the period and amplitude of the oscillations in the NF-kB system. If the amplitude of the external signal is low, the two systems are simply synchronized. That is a deterministic system.
But, if the amplitude of the external signal increases and becomes very big, then the mathematics governing the interaction between the two systems becomes chaotic: while the oscillations of the external signal remain regular, the oscillations in the NF-kB system become completely unpredictable in amplitude. That is chaos. The system is still simple: essentially two systems interacting. The scenario seems no different from the scenario where the two systems are simply synchronized. But, suddenly, a simple increase in the amplitude of the external signal changes the mathematical relationships, and the response of the NF-kB system becomes chaotic.
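The same qualitative behavior, where a small change in one parameter switches a simple deterministic rule from regular to chaotic, with extreme sensitivity to initial conditions, can be seen in the textbook logistic map. This is only an analogy, not the model from the paper:

```python
def max_divergence(x0, y0, r, n):
    """Run two trajectories of the logistic map x -> r*x*(1-x),
    a fully deterministic rule, and return the largest gap
    that ever opens between them."""
    x, y, gap = x0, y0, 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        gap = max(gap, abs(x - y))
    return gap

# Same tiny initial difference (one part in a million) in both regimes:
print(max_divergence(0.2, 0.200001, 3.2, 100))  # regular regime: gap stays tiny
print(max_divergence(0.2, 0.200001, 3.9, 100))  # chaotic regime: gap grows to order 1
```

In both runs the rule is identical and perfectly deterministic; only the parameter r changes. In the chaotic regime, long-term prediction becomes impossible in practice, because no measurement of the initial state is ever precise enough.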
Now, let’s go back to your example of traffic. I am not completely sure, but I would say that that is a random system, not necessarily a chaotic system. Here the lack of order is due to the many variables involved, that interact independently. In a sense, it is like the tossing of the coin.
It is true that “every vehicle has a destination and a purpose for its travel”, as it is true that the coin obeys precise laws when it is tossed. But there are too many vehicles, and their destinations are unrelated and independent. That generates a random configuration that we cannot anticipate with precision, because we should know in advance all the destinations and purposes, and even the driving style or mood of each driver, and so on. We can’t. So, at best, we can describe the system by some probability distribution: maybe there is more probability of having traffic in one direction at certain times, and so on.
The important point in the quoted paper is not so much that two systems can interact in a chaotic way: that happens sometimes in physical systems. The amazing point is that such an interaction can be generated by specific biological stimuli (for example, by regulating the amplitude of the oscillation in the TNF system), so that chaos is generated in the NF-kB system; that such a chaotic response can change in a robust way the pattern of genes that are activated (for example, favoring low affinity genes); and that this whole system is functional. IOWs, as I have said, a specific signal is semiotically connected to the correct, complex response, involving hundreds of different genes, by a translation system that uses (among other tools) the induction of a chaotic state to link the two things.
Gpuccio, it worries me that your method is unable to detect homology between proteins that are similar with respect to structure and virtually identical with respect to function.
“I have no interest in denying possible weak homologies. I just ignore them, because they are not relevant to my argument.”
Not relevant? Just ignore them?
Your argument consists of pointing to these “large jumps in homology”, but isn’t that what we’d expect to see if you can’t detect low homology?
If you could only see things 2 miles above sea level would you assume that planes never land and that birds don’t actually exist?
How much are you actually missing? A whole lot I’d bet.
It seems like your method is extremely biased and capable only of detecting “steady-state”, not the actual evolutionary steps we’re interested in.
Sven Mil:
The “virtually identical with respect to function” seems to be your imagination, certainly in relation to the case we were discussing (the supposed homologies between sigma factor 70 and human TFIIB). How can you even think, let alone state so boldly, that those two proteins are “virtually identical with respect to function”? That is a very telling indication of how serious your attitude is.
Moreover, your “argument” seems to be that, as I am not trying to detect very weak homologies, the extremely strong jumps in human-conserved information that I do detect in short evolutionary times are explained. By what? By weak homologies that have nothing to do with those strong specific sequences that appear suddenly, that are conserved for hundreds of millions of years, and that anyone can easily detect?
Is that even the start of an argument? No. It is just false reasoning, of the worst kind.
So, if you have anything interesting to say, please say it. If you can point to any credible pathway that can explain the appearance of thousands of bits of new functional information, through anything that you can detect in the genomes and proteomes, please do it. If you have any hint of the functional intermediates that are nowhere to be seen at the molecular level for that well detectable information, please show it to us.
On one point you are certainly right: my method to measure functional information by homology conservation over long evolutionary times, as shown by the Blast algorithm, is, in one important sense, biased: it certainly underestimates the true functional information, as I have shown many times here.
Have a good time.
I agree that any argument against GP’s method for quantification of complex functional information in proteins, should clearly present a “credible pathway that can explain the appearance of thousands of bits of new functional information”.
GP,
“my method to measure functional information by homology conservation over long evolutionary times, as shown by the Blast algorithm, is, in one important sense, biased: it certainly underestimates the true functional information, as I have shown many times here.”
Is this because you may ignore functional information if the number of bits is less than a certain threshold value, one that perhaps is very high?
IOW, your method is very rigorous?
PeterA:
“Is this because you may ignore functional information if the number of bits is less than a certain threshold value, one that perhaps is very high?”
No. That has nothing to do with the “bias” I have mentioned at #215. That is more or less what Sven Mil “suggested” (to say that he “argued” would really be inappropriate).
The simple point is: with my method I detect sudden appearances of new functional information at the sequence level. The sequence is what is measured: the blast measures homologies in sequence.
The procedure is meant to detect differences in human-conserved functional information, i.e. those specific sequences that:
a) Did not exist before they appeared
b) Are conserved for hundreds of millions of years after their appearance
So, if I say that a protein shows an information jump in vertebrates of, say, 1280 bits, like CARD11 (see post #118), I mean that those 1280 bits of homology to the human protein are added in vertebrates to whatever homology to the human form already existed before.
IOWs, in deuterostomia that are not vertebrates, including the first chordates, there may be some weak homology with the human protein. In the case of CARD11, it is really low, but detectable. Branchiostoma belcherii, a cephalochordate, exhibits 192 bits of homology between its form of CARD11 and the human form. The E value is 6e-37, and therefore the homology is certainly significant.
IOWs, the protein already existed in chordates that are not vertebrates. In a form that was, however, very different from the human form, even if detectable as homologous.
But in cartilaginous fishes, more than 1000 new bits of homology to the human protein are added to what already existed. Callorhincus milii exhibits 786 identities, and 1514 bits of homology to the human form. That is an amazing information jump, and it has nothing to do with minor homologies that are not considered or emphasized, as “suggested” by Sven Mil. That increment in sequence homology to the human form is very real, very sudden, and completely conserved. There is no way to explain it, except design.
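For concreteness, the size of the jump is simple arithmetic from the bitscores already quoted in this comment (these are the quoted figures, not new measurements):

```python
# Bitscores of homology to human CARD11, as quoted above.
branchiostoma_bits = 192    # Branchiostoma belcherii, pre-vertebrate chordate
callorhinchus_bits = 1514   # Callorhinchus milii, cartilaginous fish

jump = callorhinchus_bits - branchiostoma_bits
print(jump)  # 1322: the "more than 1000 new bits" added in vertebrates
```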
The “bias” that I mentioned at #215 consists in the fact that the blast algorithm underestimates the informational value of homologies. It assigns about 2 bits to identities, while we know that the potential informational value of an AA identity is about 4.3 bits. Even correcting for many factors, that is a big underestimation, considering that we are dealing with a logarithmic scale.
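The 4.3 bits figure is just log2(20), the information needed to specify one amino acid out of the twenty possibilities; a quick check:

```python
import math

# Maximum information carried by one amino acid identity:
# specifying one residue out of 20 possible.
bits_per_aa = math.log2(20)
print(round(bits_per_aa, 2))  # 4.32

# Blast credits roughly 2 bits per identity, so on this
# logarithmic scale each identity is undercounted by a factor of:
print(round(bits_per_aa / 2, 2))  # 2.16
```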
Another reason for the underestimation bias is that the part of the sequence that is not conserved is often functional too, as I have argued many times, and here too at #29 with the very good example of RelA. I quote my conclusions there:
“IOWs, my measure of functional information based on conserved homologies through long evolutionary times does measure functional information, but usually underestimates it. For example, in this case the value of 404 bits would measure only the conserved function in the DBD, but it would miss completely the undeniable functional information in the TAD domains, because that information, while certainly present, is not conserved among species.
This is, IMO, a very important point.”
So, my procedure to evaluate functional information in proteins is certainly precise enough and reliable, but certainly biased in the sense of underestimation, for at least two important reasons:
a) The blast algorithm is a good but biased estimator of functional information: it certainly underestimates it.
b) The functional information in non conserved parts of the sequence is not detected by the procedure.
So, the simple conclusion is: my values of functional information are certainly a reliable indicator of true functional information in proteins, but the true value of functional information and of information jumps is certainly higher than the value I get from my procedure. IOWs, we can be sure that the real value of functional information in that protein or in that jump is at least the value given by my procedure.
gpuccio,
Thanks for the detailed explanation. Now I understand what you meant.
To all:
Of course, it’s not only lncRNAs. Let’s not forget miRNAs!
The functional analysis of MicroRNAs involved in NF-kB signaling.
https://www.europeanreview.org/article/10746
For those who love fancy diagrams, have a look at Fig. 1. 🙂
Figure 1. Panoramic view of the NF-κB miRNA target genes and target genes of miRNAs.
Wow!
GP,
You’re keeping this discussion very interesting. Thanks.
Here’s a NF-kB article.
Here’s another NF-kB article.
GP,
you may have opened a can of worms with this OP. 🙂
This NF-kB seems to be all over the map.
Another NF-kB paper
One more NF-kB paper
and another one
OLV:
Thank you for the interesting links.
The first two papers quoted at #223 are especially intriguing, in the light of all that we have discussed:
The Regulation of NF-kB Subunits by Phosphorylation
https://www.mdpi.com/2073-4409/5/1/12/htm
And:
The Ubiquitination of NF-kB Subunits in the Control of Transcription
https://www.mdpi.com/2073-4409/5/2/23/htm
Phosphorylation and ubiquitination are certainly two very basic levels of regulation of almost all biological processes. They are really everywhere.
Hmmm, Gpuccio, where to begin.
You say (about sigma70 and TFIIB)
‘How can you even think, let alone state so boldly, that those two proteins are “virtually identical with respect to function”’
But previously you have even admitted “Sigma factors are in some way the equivalent of generic TFs”
(TFIIB is a generic TF)
And wikipedia apparently says
‘sigma factor “is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB”’
So both sigma and TFIIB’s main function is to catalyze RNA polymerase initiation.
And the paper you have cited above says
“several reports have indicated the possible functional analogy and/or evolutionary relatedness of bacterial σ factors and eukaryotic TFIIB”
“sigma factors and TFIIB both closely approach catalytic sites indicating direct and similar roles in initiation”
“Furthermore, sigma factors and TFIIB each have multiple DNA binding helix-turn-helix (HTH) motifs, which typically include three crossing helices and two turns: H1-T1-H2-T2-H3. H3 is referred to as the “recognition helix” because sequences within T2 and toward the N-terminal end of H3 are most important for sequence recognition within the DNA major groove.”
Sounds to me like the functions of these proteins are virtually identical.
Off topic but interestingly related to the concept of complex functional specified information:
Veins and Arteries Build Hierarchical Branching Patterns Differently: Bottom-Up versus Top-Down
Kristy Red-Horse, Arndt F. Siekmann
DOI: 10.1002/bies.201800198
Article
Full text
Sven Mil:
“Sounds to me like the functions of these proteins are virtually identical.”
Not to me. Not at all. Nothing in the things you quote justifies your conclusion.
However, if you like to think that way, it’s fine. This is a free world.
Gpuccio, if you can’t grasp the simple fact that these proteins perform virtually identical functions, how can you expect people to believe your attempted evaluations of protein function and homology?
Or maybe you refuse to admit this simple fact because you know that it means your “analyses” are garbage?
Sven Mil:
“Virtually identical”? Funny indeed.
Of course they both help starting transcription. That’s why they are “equivalent”, or have a “possible functional analogy and/or evolutionary relatedness”, or “similar roles”. In completely different organisms, having a very different transcription system, different proteins involved, different regulations.
They have almost no sequence homology, as clearly shown by Blast, and some generic structure similarity in the DNA binding site.
For you, that means that they have “virtually identical” functions. OK, everybody can judge what “virtually identical” means.
For me, it’s not identical at all. Maybe very much virtual.
And you know, I expect nothing from people, they can evaluate my facts and ideas and believe what they like.
And, certainly, I expect nothing from you.
Have a good day.
Virtual reality = reality?
🙂
Sven
How would you support the claim of virtually identical functions? Maybe start by defining virtually identical. If you pass on this then I have to assume you are making a rhetorical argument only with no real scientific value.
Bill Cole,
“making a rhetorical argument only with no real scientific value”
That’s what it looks like.
So, Gpuccio, you have to cling to this denial in order to support your design-of-the-gaps-BLASTing.
Got it.
The fact is, they don’t just “both help start transcription”.
They perform the same function within the process of initiation, in fact, they both “closely approach catalytic sites indicating direct and similar roles in initiation” according to the paper you cited.
There is only a handful of proteins that approach the RNA polymerase catalytic site in general (nevermind at the same time), and these are all associated with very specific functions (e.g. TFIIH).
For the two proteins we are talking about (sigma and TFIIB) to both be approaching the catalytic site at the same time (during polymerase initiation), it can safely be said that their function is virtually identical.
Sven Mil seems to have an interesting argument here.
PavelU
What argument do you think he is making?
Genome-wide enhancer annotations differ significantly in genomic distribution, evolution, and function
Mary Lauren Benton, Sai Charan Talipineni, Dennis Kostka & John A. Capra
BMC Genomics volume 20, Article number: 511 (2019)
DOI: 10.1186/s12864-019-5779-x
Computational Biology Solutions to Identify Enhancers-target Gene Pairs
Judith Mary Hariprakash, Francesco Ferrari
DOI: 10.1016/j.csbj.2019.06.012
Sven Mil:
First of all, I don’t need to cling to anything to defend my procedure, because you have made no real argument against it. If and when you do, I will defend it.
I just noticed that the idea that the two functions are virtually identical, which you stated to add some apparent poison to your rhetorical non-argument, is simply wrong.
The two functions are similar, but certainly not identical, either virtually or in any other way.
Similar is a very simple English word. Can you understand it?
If you had said that the two functions are similar, I would have agreed with you. I have said the same thing from the beginning.
But the two proteins are very different, even if they are distant homologues and probably evolutionarily related.
One is specifically engineered to help initiate transcription in prokaryotes. The other one is specifically engineered to help initiate transcription in eukaryotes.
And, as everybody knows, transcription in prokaryotes and in eukaryotes is very different.
To Whom This May Concern:
GP’s method for quantifying relatively sudden appearances of significant amounts of complex functional information within protein groups has been extensively explained many times on this website, and it is obviously very well supported, both theoretically and empirically.
GP’s detailed explanations are freely available to anyone interested in reading and understanding them.
Oh brother Gpuccio, let me spell it out so that even you can understand.
These two proteins occupy the same space at the same time in their respective systems.
Just skimming the paper you cited yourself:
“In RNAP complexes with an open transcription bubble, sigma factors and TFIIB both closely approach catalytic sites indicating direct and similar roles in initiation.”
“Furthermore, sigma factors and TFIIB each have multiple DNA binding helix-turn-helix (HTH) motifs”… which contain the
“recognition helix” which is “most important for sequence recognition within the DNA major groove”
“Here, 2-HTH motifs of bacterial sigma factors and eukaryotic TFIIB are shown to occupy homologous environments within initiating RNAP and RNAP II complexes”
“Based on extensive apparent structural homology, amino acid sequence alignments were generated, supporting the conclusion that sigma factors, TFB and TFIIB are homologs.”
They detect homology, why can’t you Gpucc? =)
When modeling the structure of the entire RNA polymerase complex: “The two C-terminal sigma and TFIIB HTH motifs appear to occupy homologous positions in the structures.”
“Remarkably, sigma CLR/HTH3.0-3.1 and TFIIB CLR/HTH1 occupy homologous positions, and sigma CLR/HTH4.1-4.2 and TFIIB CLR/HTH2 also appear to occupy homologous positions.”
“The B-reader region approaches the RNAP II active site and, although not homologous by orientation (N→C) or sequence to sigma-Linker3.1-3.2, appears to have convergent functions in initiation and promoter escape.”
“TFB/TFIIB CLR/HTH2 binding to BREup anchors the initiating complex on ds DNA and establishes the direction of transcription analogously to the anchoring of sigma CLR/HTH4.1-4.2 binding to the ds -35 region of the bacterial promoter”
There’s tons more, I have filtered out most of the technical/jargony stuff for your benefit.
A quick look at that paper makes it clear that these two proteins perform the same function.
I can’t make it any clearer than that.
Now either you haven’t looked at the paper, or you are clinging to your denial for the sake of your method.
As for your method:
You have, “blasted sigma 70 from E. coli with human TFIIB and found no detectable homology”
So, to reiterate, you are unable to detect the relationship between these two proteins which perform the same function.
This raises many questions and issues with respect to your analyses.
– how much bias are you introducing into your analyses by only being able to detect high homology? (you probably have no idea)
– how much are you missing? (I bet it’s a whole lot and also that you probably have no idea)
– if you can only detect high homology (as you have already admitted that’s what your method does) wouldn’t you always have a jump in information?
(the jump is due to your method being unable to detect low-mid homology; not sudden inputs of information from a designer as you love to imply)
– how could two proteins, vastly different in sequence (according to your BLASTing) carry out the same function?
(either your method is just not good at assessing structure/function relationships, or your assumptions about protein function in sequence space are wrong)
(probably both)
Hopefully that was in simple enough English for you. Can you understand it?
Sven Mil:
Easily- how can two sentences with vastly different letter sequences, carry the same message? Better yet, what is the evidence that blind and mindless processes produced either of the proteins? How can such a concept be tested?
This discussion seems interesting, but flies high above my head.
What are the main differences between prokaryotic and eukaryotic cells?
I tried to search for it but got gazillion results and don’t know where to start from.
Here are some abbreviations used in this discussion:
BRE TFB/TFIIB recognition element
CLR/HTH cyclin-like repeat/helix-turn-helix domain
DPBB double psi beta barrel
DDRP DNA-dependent RNA polymerase
GTF general transcription factor
LECA last eukaryotic common ancestor
LUCA last universal common ancestor
Ms Methanocaldococcus sp. FS406-22
PIF primordial initiation factor
RDRP RNA-dependent RNA polymerase
RNAP RNA polymerase
Sc Saccharomyces cerevisiae
TFB transcription factor B
TFIIB transcription factor for RNAP II, factor B
Tt Thermus thermophilus
Different binding partners in the function. The different binding partners can change the rate of transcription. You may be comparing a light switch to a light dimmer and not know it. Gpuccio’s method measures protein sequence divergence over time, showing resistance to change based on purifying selection. This allows you to demonstrate substitutability and therefore genetic information. You first need to understand his method before trying to make an argument. So far you are talking over him. When you compare a eukaryotic cell to a prokaryotic cell, you are using apples and oranges for your comparison, and your argument fails.
Sven Mil:
As you have tried to make your non arguments more detailed, you certainly deserve a more detailed answer. As at present I can only answer from my phone, I will be brief for the moment (I am very bad at typing on the phone). Tomorrow I should be able to answer in greater length.
Your biggest errors (but not the only ones) are:
a) Thinking that I am denying that the two proteins are homologues, or evolutionary related. That is completely false. I have simply blasted the two proteins, and found no detectable homology. That is a simple fact. You can blast them too, and you will have the same result. That means that there is no obvious sequence homology using the default blast algorithm. Again, that is a very simple fact. I have also said that the authors of the paper I linked had used a different method, using structural considerations and different alignment algorithms, because they were interested in detecting a weak relationship to find a possible evolutionary relationship. That’s perfectly fine, but I have no interest in affirming or denying a possible evolutionary relationship. If the two proteins are evolutionarily related, that’s no problem for me. As you know, I believe in Common Descent by design.
b) Thinking that two similar functions are identical. I have already discussed that. Just to add a point, of course all proteins that bind DNA, and that includes all TFs, have a DBD. I don’t think that makes their functions identical, virtually or not.
c) Thinking that I have problems with the idea that two proteins with highly different sequence can have a similar function. I have no problems with that. But the simple fact remains that in most cases proteins that retain a highly similar, maybe almost identical function through billions of years, like the alpha and beta chains of ATP synthase, show high sequence conservation. Look also at histones and ubiquitin, and thousands and thousands of other examples. Nobody who really believes in the basics of modern biology can deny that sequence conservation through long evolutionary periods is a measure of functional constraint.
d) Thinking that I can detect only high sequence homologies. That is completely false. I use the default blast algorithm so as to always have the same tool for measuring sequence homology. And the default blast algorithm detects very well most sequence homologies, both low and high, and gives a definite measure of the relevance of those homologies in statistical terms, the E value. So, when I say that I could find no detectable homology, I mean a very precise fact: that blasting those two sequences, which I have clearly indicated, with the default blast algorithm, no homology is detected that reaches a significant E value. Again, you can blast the two sequences yourself. This is the method commonly used to detect homology between sequences.
e) So, my procedure detects sequence homologies, both weak and strong. I am interested in jumps not because I can only detect jumps, as you foolishly seem to suggest, but because jumps are clear indicators of design. I find a lot of jumps, some of them really big, and I find a lot of non-jumps, as my graphics clearly show. For example, as I have argued in this same thread, TFs usually do not show big jumps, for example at the vertebrate level, for two interesting reasons:
1) Their DBDs are highly conserved and very old, older usually than the vertebrate appearances, usually already well detectable in single celled eukaryotes.
2) Their other domains or sequences are usually poorly conserved during the evolutionary history of metazoa. However, there are strong indications that such a sequence diversification is functional, and not simply a case of neutral variation in non functional sequences. I have made this argument here for RelA, at post #29.
Well, that is enough for the moment.
Displacement of the transcription factor B reader domain during transcription initiation
Stefan Dexl, Robert Reichelt, Katharina Kraatz, Sarah Schulz, Dina Grohmann, Michael Bartlett, Michael Thomm
Nucleic Acids Research, Volume 46, Issue 19, Pages 10066–10081
DOI: 10.1093/nar/gky699
Design Principles Of Mammalian Transcriptional Regulation
Dynamic interplay between enhancer–promoter topology and gene activity
A genome disconnect
Highly rearranged chromosomes reveal uncoupling between genome topology and gene expression
The plot thickens:
Does rearranging chromosomes affect their function?
GP,
The plot thickens…
“changes in chromatin domains were not predictive of changes in gene expression. This means that besides domains, there must be other mechanisms in place that control the specificity of interactions between enhancers and their target genes.”
More control mechanisms?
Don’t we have enough control mechanisms to keep track of already?
🙂
Biology research seems like a never-ending story:
The more we know, more is there for us to learn from.
Really fascinating, isn’t it?
OLV:
Thank you for the very interesting links.
Yes, we are certainly not even near to a real understanding of how transcription is regulated.
More on these fascinating topics as soon as I can use again a true keyboard! 🙂
GPuccio,
It’s my pleasure to post links to interesting papers that sometimes I find in different journals. In some cases they may shed more light on the discussed topics.
Sven Mil, OLV and all:
Some more facts:
1) The archaeal TFB shows definite and highly significant sequence homology with human TFIIB. These are the results of the usual Blast alignment, always using the default algorithm and nothing else:
Proteins: Human general TFIIB (Q00403) vs archaeal TFB (A0A2D6Q6B7):
Identities: 93;
Positives: 154;
Bitscore: 172 bits;
E value: 2e-56
2) No significant sequence homology can be detected, instead, using the same identical methodology, between bacterial sigma factor 70 and the archaeal TFB:
Proteins: Sigma factor 70 E. coli (P00579) vs archaeal TFB (A0A2D6Q6B7):
Identities: 25;
Positives: 44;
Bitscore: 16.2 bits;
E value: 2.0
3) And, of course, as already said, no significant sequence homology can be detected, using the same identical methodology, between bacterial sigma factor 70 and human TFIIB:
Proteins: Sigma factor 70 E. coli (P00579) vs Human general TFIIB (Q00403):
Identities: 32;
Positives: 49;
Bitscore: 16.9 bits;
E value: 1.4
(plus three more non-significant short alignments, with E values of 2.5, 2.7, 3.6)
These are simple facts that anyone can verify. At the sequence level, there is a definite (though only partial) homology between the archaeal protein and the human protein. That corresponds to the well known fact that transcription initiation in archaea is much more similar to transcription initiation in eukaryotes, while in bacteria it is very different. Indeed, no significant sequence homology can be detected, always using that same methodology, between the human and the bacterial protein, or between the bacterial and the archaeal protein. These simple facts are undeniable.
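For readers who want a feel for what these alignment scores measure, here is a minimal, self-contained sketch of local alignment scoring, the core idea underlying BLAST bitscores. It uses a toy match/mismatch/gap scheme (not BLOSUM62, and no E-value statistics), and the two peptide fragments are invented, so it illustrates the principle rather than reproducing BLAST:

```python
# Minimal Smith-Waterman local alignment score: a toy illustration of
# the kind of scoring that underlies BLAST bitscores. Uses a simple
# match/mismatch/gap scheme instead of a real substitution matrix.

def local_alignment_score(seq_a, seq_b, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score between two sequences."""
    rows, cols = len(seq_a) + 1, len(seq_b) + 1
    # dp[i][j] = best score of a local alignment ending at a[i-1], b[j-1]
    dp = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (
                match if seq_a[i - 1] == seq_b[j - 1] else mismatch
            )
            # Local alignment: scores never drop below zero
            dp[i][j] = max(0, diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
            best = max(best, dp[i][j])
    return best

# Two similar fragments score high; fragments with no shared residues score 0.
print(local_alignment_score("MKTAYIAKQR", "MKTAYLAKQR"))  # 17
print(local_alignment_score("MKTAYIAKQR", "GGGPLWCNNE"))  # 0
```

BLAST builds on this idea with a substitution matrix, heuristics for speed, and Karlin-Altschul statistics that convert raw scores into bits and E-values.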
Check what I have written in my comment #202, to John_a_designer:
“Now, in eukaryotes there are six general TFs. Archaea have 3. In bacteria, sigma factors have the role of general TFs. Sigma factors, archaeal general TFs and eukaryotic general TFs seem to share some homology. I think that the archaeal system, however, is much more similar to the eukaryotic system, and that includes RNA polymerases.
…
While archaea are more similar to eukaryotes in the system of general TFs, the regulation of transcription by one or two suppressors or activators seems to be similar to what is described for bacteria.
Finally, there is another important aspect where archaea are more similar to eukarya. Their chromatin structure is based on histones and nucleosomes, as in eukaryotes, but the system is rather different from the corresponding eukaryotic system.
Instead, bacteria have their form of DNA compression, but it is not based on histones and nucleosomes.”
GPuccio @253:
Excellent explanation. Thanks!
This discussion is the third most visited in the last 30 days!
Definitely a fascinating topic.
Congratulations to GP!
To all:
It seems perfectly natural that a polymorphic semiotic system like NF-kB is strictly regulated by another universal semiotic system, the ubiquitin system. And the regulation is not simple at all, but deeply and semiotically complex:
The Met1-Linked Ubiquitin Machinery: Emerging Themes of (De)regulation
https://www.cell.com/molecular-cell/fulltext/S1097-2765(17)30649-4?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1097276517306494%3Fshowall%3Dtrue
To all:
Oh, this is really new. Did you know that TFs seem to have a key role not only in nuclear transcription regulation, but also in the regulation of those other strange genome-bearing organelles, the mitochondria?
Of course, NF-kB is one of the TF systems involved there, too:
Nuclear Transcription Factors in the Mitochondria: A New Paradigm in Fine-Tuning Mitochondrial Metabolism.
https://www.ncbi.nlm.nih.gov/pubmed/27417432
Emphasis mine.
A new paradigm in fine tuning? We are becoming accustomed to that kind of thing, I suppose! 🙂
To all:
Another rather exotic level of regulation of the NF-kB system: immunophilins.
Regulation of NF-kB signalling cascade by immunophilins
http://www.eurekaselect.com/131456/article
Emphasis mine.
You may rightfully ask: what are immunophilins?
Let’s take a simple answer from Wikipedia:
“immunophilins are endogenous cytosolic peptidyl-prolyl isomerases (PPI) that catalyze the interconversion between the cis and trans isomers of peptide bonds containing the amino acid proline (Pro). They are chaperone molecules that generally assist in the proper folding of diverse “client” proteins”.
Here is a recent review about them:
Biological Actions of the Hsp90-binding Immunophilins FKBP51 and FKBP52
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6406450/
In particular, section 6: “Immunophilins Regulate NF-kB Activity”
GPuccio,
Definitely you’re on a roll!
You’ve referenced several very interesting papers in a row.
GP @257:
“orchestrate and fine-tune cellular metabolism at various levels of operation.”
“A new paradigm in fine tuning? We are becoming accustomed to that kind of thing,”
Agree.
GP @256:
“It seems perfectly natural that a polymorphic semiotic system like NF-kB is strictly regulated by another universal semiotic system, the ubiquitin system. And the regulation is not simple at all, but deeply and semiotically complex:”
Is it also natural that those semiotic systems resulted from natural selection operating on random variations over gazillions of years?
I’m looking for the literature where this is explained.
For example, what did those systems evolve from? What were their ancestors?
OLV @261:
Maybe Sven Mil can help answer your questions, after he responds to the comments GP addressed to him following his last comment @240?
🙂
jawa,
That discussion is over. GP took care of it appropriately and wisely continued to provide very interesting information on the current topic.
This thread has already exceeded my expectations.
GP,
There’s so much literature on transcription regulation that it’s difficult to review it all. Here’s just a small sample:
(Note that you have cited some of these papers)
Transcription-driven chromatin repression of Intragenic transcription start sites
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6373976/
Genome-wide enhancer annotations differ significantly in genomic distribution, evolution, and function
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6585034/
Computational Biology Solutions to Identify Enhancers-target Gene Pairs
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6611831/
Detection of condition-specific marker genes from RNA-seq data with MGFR
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6542349/
Enhancer RNAs: Insights Into Their Biological Role
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6505235/
Epigenetic control of early dendritic cell lineage specification by the transcription factor IRF8 in mice
http://www.bloodjournal.org/co.....ecked=true
Competitive endogenous RNA is an intrinsic component of EMT regulatory circuits and modulates EMT
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6456586/
Delta Like-1 Gene Mutation: A Novel Cause of Congenital Vertebral Malformation
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6593294/
GP,
The increasing number of research papers on this OP topic definitely points to complex functional information processing systems with multiple control levels that can only result from conscious design.
Please, I would like to read your comments on any of the papers linked @264 that you haven’t cited before. Thanks.
Widespread roles of enhancer-like transposable elements in cell identity and long-range genomic interactions
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6314169/
The fact is, Gpuccio, you are unable to detect homology between two proteins that perform the same function and that have been shown to be homologs by other methods.
This means your method is simply not sensitive enough (as you have already admitted) to trace the evolution of proteins back in the way that you are attempting to.
You can detect high conservation (i.e., when a protein’s functional niche has become well-defined and locked into place, evolutionarily speaking), but you are completely unable to detect the actual evolution of a protein.
And that’s why you will always find your “jumps” if you go back far enough.
You seem smart enough that I’d bet you knew that from the start… Guess I shouldn’t really be surprised.
sven mil:
How are you using the word “homology”? Convergence explains two different proteins having the same function, as does a common design.
Is there any evidence that blind and mindless processes can produce proteins? I would think that gpuccio is open to the concept of proteins evolving by means of intelligent design.
GP, Sven Mil, ET, et al,
I’m ignorant of basic biology.
I’ve tried to understand what you’re discussing but can’t figure it out.
Please, explain this to me in easy to understand terms:
1. Are you comparing two proteins P1 and P2 which work for prokaryotes (P1) and eukaryotes (P2) respectively?
2. Could P1 work for eukaryotes too?
2.1. If YES then why wasn’t it kept in eukaryotes rather than being replaced by P2?
3. Any idea how P1 and P2 could have appeared?
I may have more questions, but these are fine to start.
Note that I would like to read the answers from all of you and from other readers of this discussion.
Thanks.
Sven Mil at #267:
You really don’t understand, do you?
My method (BLAST alignment with the default algorithm) is simply the method used routinely by almost all researchers to detect sequence homology. So I am not doing anything unusual, as you seem to believe.
Those who are interested in detecting weak and distant homologies can, of course, use other methods, such as more sensitive alignment algorithms and structural similarity, if they like. That will give higher sensitivity and lower specificity in detecting whether two proteins are homologues. IOWs, more false positives.
As I have said many times, I am not trying to detect if two proteins are distant homologues, because that has nothing to do with my reasoning.
The researchers you quote say that sigma factor and human TFIIB are homologues? Maybe. Maybe not. Anyway, I have no problems with that statement. If they are, they are. That makes no difference in my reasoning.
More in next post.
Antisense Transcription in Loci Associated to Hereditary Neurodegenerative Diseases
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6614138/
Natural antisense transcripts are common features of mammalian genes providing additional regulatory layers of gene expression. A comprehensive description of antisense transcription in loci associated to familial neurodegenerative diseases may identify key players in gene regulation and provide tools for manipulating gene expression.
This work provides evidence for the existence of additional regulatory mechanisms of the expression of neurodegenerative disease-causing genes by previously not-annotated and/or not-validated antisense long noncoding RNAs.
Sven Mil (and all):
What you really don’t understand (or simply pretend not to understand) is that I am in no way trying to “trace the evolution of proteins back”, as you seem to believe. I am trying to detect and locate in space and time the appearance of new complex functional information during the evolution of proteins, whatever their distant origin may be. That’s why I look for information jumps, in the form of new specific sequences that appear at some evolutionary time and are then conserved for hundreds of million years. I have explained the rationale for that many times.
You say that I “will always find those jumps if I go back far enough”. That’s simply not true.
Take, for example, the case of the alpha and beta chains of ATP synthase, that I often use as an example. There is no jump there. We simply don’t know exactly when those proteins first appeared, because they are present in bacteria and in all living eukaryotes. So, no jumps here: only thousands of bits of information conserved for billions of years. You still have to explain how that functional information came into existence.
Instead, I have described a lot of functional jumps in the transitions to vertebrates: functional proteins that usually already existed, or sometimes appear de novo, and whose sequence specificity is then conserved for the next 400 million years.
So, I detect those jumps for the simple reason that they are there. Those proteins, even if they already existed in previous deuterostomia and chordates, have been highly re-engineered in vertebrates.
Do I “always find jumps”?
Absolutely not. For a lot of proteins, there is no jump at the transition to vertebrates. They remain almost the same, or you can observe those weak and gradual differences that are compatible with neutral evolution. The simple reason for that is that those proteins have not been re-engineered in vertebrates, they have just kept their old function. The alpha and beta chains of ATP synthase are good examples.
But a lot of other proteins do show big jumps at the transition to vertebrates. Those are the jumps that I have discussed in my OPs.
IOWs, I detect jumps if they are there, and I do not detect them if they are not there. As it should be.
IOWs, BLAST, as I use it, is a very good tool to detect and measure sequence homology between proteins.
As serious scientists all over the world know very well.
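To make the notion of a “jump” concrete, here is a toy sketch of the kind of comparison being described: the bitscore a protein shares with successive outgroups, and the per-transition increase. All the numbers below are invented for illustration; they are not measurements from this thread.

```python
# Hypothetical bitscores of one protein BLASTed against the proteomes of
# progressively closer relatives of humans. Invented numbers, for
# illustration only.

transitions = [
    ("cnidaria", 150.0),
    ("deuterostomia (non-chordate)", 180.0),
    ("chordata (non-vertebrate)", 210.0),
    ("vertebrata", 930.0),   # a large "jump" appears at this transition
    ("mammalia", 1020.0),
]

def information_jumps(series):
    """Bitscore increase at each step; big increases mark candidate jumps."""
    return [
        (name, score - prev_score)
        for (_, prev_score), (name, score) in zip(series, series[1:])
    ]

for name, delta in information_jumps(transitions):
    print(f"{name}: +{delta:.0f} bits")
```

A protein like the ATP synthase beta chain, conserved since bacteria, would show a flat series with no step standing out; the re-engineered vertebrate proteins discussed above would show a step like the +720 bits at "vertebrata" in this invented series.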
GP @270 & 272:
Clear concise explanations. Thanks.
Let’s hope Sven Mil gets it this time.
Sometimes the penny doesn’t drop right away.
🙂
PeterA @269:
I’m biology-challenged too, but regarding your third question, I think most proteins are produced through gene expression: transcription, post-transcriptional modifications, translation, post-translational modifications.
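As a purely didactic aside, the first and third of those steps (transcription and translation, leaving out the modifications) can be shown with a toy snippet. The codon table below is deliberately truncated to just the codons used, and the DNA string is invented:

```python
# Toy illustration of gene expression steps: transcription (DNA -> mRNA)
# and translation (mRNA -> protein). Real gene expression involves far
# more machinery (splicing, modifications); this shows only the coding.

CODON_TABLE = {
    "AUG": "M", "UUU": "F", "UUC": "F", "AAA": "K", "GGC": "G",
    "UGG": "W", "UAA": "*", "UAG": "*", "UGA": "*",  # * = stop codon
}

def transcribe(dna):
    """Coding-strand shorthand for transcription: T becomes U."""
    return dna.replace("T", "U")

def translate(mrna):
    """Read codons from the first AUG until a stop codon."""
    start = mrna.find("AUG")
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "?")
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

mrna = transcribe("ATGTTTAAAGGCTGA")
print(mrna, "->", translate(mrna))  # AUGUUUAAAGGCUGA -> MFKG
```

Transcription regulation, the subject of this OP, decides when and how often that first step runs for each gene in each cell type.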
Thank you, gpuccio. “Sequence homology” is not functional similarity. Sven Mil seems to have the two confused.
August 4, 2019 at 1:41 am
Antisense Transcription in Loci Associated to Hereditary Neurodegenerative Diseases
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6614138/
Natural antisense (AS) transcripts are RNA molecules that are transcribed from the opposite DNA strand to sense (S) transcripts, partially or fully overlapping to form S/AS pairs. It is now well documented that AS transcription is a common feature of genomes from bacteria to mammals.
Thousands of lncRNA genes have been identified in mammalian genomes, with their number increasing steadily.
It is now clear that lncRNAs can regulate several biological processes, including those that underlie human diseases, and yet their detailed functional characterization remains limited.
Altogether, our results highlight the enormous complexity of gene regulation by antisense lncRNAs at any given locus.
Regarding the confused criticism presented by Sven Mil, I feel sorry for the guy. It must feel bizarre to try so hard and get nothing out of it. Wasted effort, unless he finally understands GP’s idea. Let’s hope so. 🙂
This OP reminds us of this fact:
“biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.”
“It is, from all points of view, amazing.”
“Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.”
“And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer it must be for a system like NF-kB? Or, for that, like any cellular complex system?”
“Do you still have any doubts?”
So far only a confused commenter (Sven Mil) has attempted unsuccessfully to present a counter argument.
The last comment by Sven Mil (@267) was clearly responded to @270 and @272.
Is there another reader that would like to present contrarian arguments?
[crickets]
Here’re some interesting papers cited in this OP:
The Human Transcription Factors
Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle
TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme
Selectivity of the NF-κB Response
30 years of NF-κB: a blossoming of relevance to human pathobiology
Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle
NF-kB oscillations translate into functionally related patterns of gene expression
NF-κB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration
Some papers cited in the comments:
@3:
Two of the papers I quote in the OP:
Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle
and:
NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration
are really part of a research topic:
Understanding Immunobiology Through The Specificity of NF-kB
including 8 very interesting and very recent papers about NF-kB, at Frontiers in Immunology.
Here are the titles:
Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF-kB
An NF-kB Activity Calculator to Delineate Signaling Crosstalk: Type I and II Interferons Enhance NF-κB via Distinct Mechanisms
Cellular Specificity of NF-kB Function in the Nervous System
Immune Differentiation Regulator p100 Tunes NF-kB Responses to TNF
Techniques for Studying Decoding of Single Cell Dynamics
NF-kB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration (quoted in the OP)
Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle (quoted in the OP)
Signal Distortion: How Intracellular Pathogens Alter Host Cell Fate by Modulating NF-kB Dynamics
+++++++
@13:
Signaling Crosstalk Mechanisms That May Fine-Tune Pathogen-Responsive NF-κB
@15:
Transcription factor oscillations in neural stem cells: Implications for accurate control of gene expression
@17:
Introduction to the Thematic Minireview Series: Chromatin and transcription
@20:
Cellular Specificity of NF-κB Function in the Nervous System
@21:
Transcriptional Control of Synaptic Plasticity by Transcription Factor NF-kB
@29:
Single-molecule dynamics and genome-wide transcriptomics reveal that NF-kB (p65)-DNA binding times can be decoupled from transcriptional activation
@52:
Lnc-ing inflammation to disease
@67
Long non-coding RNA: a versatile regulator of the nuclear factor-kB signalling circuit
@96
The Impact of Transposable Elements in Genome Evolution and Genetic Instability and Their Implications in Various Diseases
@131
Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears
@133
Genetic diversity of CHC22 clathrin impacts its function in glucose metabolism
Environmental contaminants modulate the transcriptional activity of polar bear (Ursus maritimus) and human peroxisome proliferator-activated receptor alpha (PPARA)
Evolutionary history and palaeoecology of brown bear in North-East Siberia re-examined using ancient DNA and stable isotopes from skeletal remains
@139
Polar bear evolution is marked by rapid changes in gene copy number in response to dietary shift
@192
Crosstalk between NF-kB and Nucleoli in the Regulation of Cellular Homeostasis
The Crosstalk of Endoplasmic Reticulum (ER) Stress Pathways with NF-kB: Complex Mechanisms Relevant for Cancer, Inflammation and Infection.
@193
Transcription factor NF-kB in a basal metazoan, the sponge, has conserved and unique sequences, activities, and regulation
@194
On chaotic dynamics in transcription factors and the associated effects in differential gene regulation
Off topic:
When I read a paper that mentions proteins automatically GP’s quantitative method comes to mind. 🙂
In the following text several proteins are mentioned.
The given article claims that most of them are very conserved through numerous biological systems.
I wonder what are the genetic regulatory mechanisms associated with the mentioned proteins.
Here’s the text:
In essence, SAC is a cellular signaling pathway. Multiple mitotic kinases and their substrates are involved in this signaling. Therefore, the correct position of specific kinases to its substrates is of great importance for the functional integrity of the SAC. We envision the kinetochore localization of SAC factors may serve several functions. First, the kinetochore localization of Mps1 kinase (and Bub1, Plk1 kinase and CDK1-Cyclin B) positions the kinase close to their substrates (i.e., Knl1). Second, the kinetochore localization of Bub1 serves as a scaffold to recruit its downstream factors such as BubR1, Mad1/Mad2 and RZZ. Last, the kinetochore localization of Mps1 and Bub1 may facilitate their own activation due to the higher local concentration at kinetochore.
The hierarchical recruitment pathway of SAC is becoming elucidated gradually. In brief, Aurora B activity boosts the kinetochore recruitment and activation of Mps1. Then, Mps1 phosphorylates Knl1, and in turn, phosphorylated Knl1 recruits Bub1/Bub3. Bub1 works as a scaffold to recruit BubR1/Bub3, Mad1/Mad2, RZZ and Cdc20. Despite important progress, many outstanding questions remain. For example, an exact molecular delineation of how Aurora B activity and ARHGEF17 promote Mps1 kinetochore recruitment remains elusive. Future studies to address these questions will definitely deepen our understanding on SAC signaling. Advanced protein structural analyses, protein-protein interaction interface delineation and protein localization dynamics analyses using super-resolution imaging tool combination with optogenetic operation will pave our way in future.
Recent Progress on the Localization of the Spindle Assembly Checkpoint Machinery to Kinetochores
Cells. 2019 Mar; 8(3): 278.
doi: 10.3390/cells8030278
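The hierarchical recruitment pathway described in that excerpt can be sketched as a small dependency graph. The edges below encode only the relationships named in the quoted text, and the representation is mine, not the paper’s:

```python
# The SAC kinetochore recruitment hierarchy from the quoted excerpt, as a
# dependency graph: each factor maps to the upstream factor(s) that
# recruit or activate it, per the text.

RECRUITS = {
    "Mps1": ["Aurora B"],         # Aurora B boosts Mps1 recruitment/activation
    "Knl1-P": ["Mps1"],           # Mps1 phosphorylates Knl1
    "Bub1/Bub3": ["Knl1-P"],      # phosphorylated Knl1 recruits Bub1/Bub3
    "BubR1/Bub3": ["Bub1/Bub3"],  # Bub1 scaffolds its downstream factors
    "Mad1/Mad2": ["Bub1/Bub3"],
    "RZZ": ["Bub1/Bub3"],
    "Cdc20": ["Bub1/Bub3"],
}

def recruitment_order(deps):
    """Topologically sort the factors so upstream ones come first."""
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for upstream in deps.get(node, []):
            visit(upstream)
        order.append(node)
    for node in deps:
        visit(node)
    return order

print(recruitment_order(RECRUITS))
```

The topological order recovers the recruitment sequence the review describes: Aurora B before Mps1, Mps1 before phosphorylated Knl1, and Knl1 before the Bub1-scaffolded factors.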
In response to the comment @279 we hear only the sound of silence.
🙂
OLV @279:
You should be patient. Sven Mil is probably a very busy scientist, hence he can’t comment here on demand. You have to wait. It’s possible he’s related to professors Art Hunt or Larry Moran. 🙂
Some more papers cited in the comments:
@209
Conservation and divergence of p53 oscillation dynamics across species
@220
The functional analysis of MicroRNAs involved in NF-kB signaling.
@222
gga-miR-146c Activates TLR6/MyD88/NF-κB Pathway through Targeting MMP16 to Prevent Mycoplasma Gallisepticum (HS Strain) Infection in Chickens
Temporal characteristics of NF-κB inhibition in blocking bile-induced oncogenic molecular events in hypopharyngeal cells
@223
The Regulation of NF-κB Subunits by Phosphorylation
The Ubiquitination of NF-κB Subunits in the Control of Transcription
A Role for NF-κB in Organ Specific Cancer and Cancer Stem Cells
@226
Veins and Arteries Build Hierarchical Branching Patterns Differently: Bottom-Up versus Top-Down
@236
Genome-wide enhancer annotations differ significantly in genomic distribution, evolution, and function
@237
Computational Biology Solutions to Identify Enhancers-target Gene Pairs
@245
Displacement of the transcription factor B reader domain during transcription initiation
@246
Design Principles Of Mammalian Transcriptional Regulation
@247
Dynamic interplay between enhancer–promoter topology and gene activity
@248
A genome disconnect
Highly rearranged chromosomes reveal uncoupling between genome topology and gene expression
Does rearranging chromosomes affect their function?
@256
The Met1-Linked Ubiquitin Machinery: Emerging Themes of (De)regulation
@257
Nuclear Transcription Factors in the Mitochondria: A New Paradigm in Fine-Tuning Mitochondrial Metabolism.
@258
Regulation of NF-kB signalling cascade by immunophilins
Biological Actions of the Hsp90-binding Immunophilins FKBP51 and FKBP52
@264
Transcription-driven chromatin repression of Intragenic transcription start sites
Genome-wide enhancer annotations differ significantly in genomic distribution, evolution, and function
Computational Biology Solutions to Identify Enhancers-target Gene Pairs
Detection of condition-specific marker genes from RNA-seq data with MGFR
Enhancer RNAs: Insights Into Their Biological Role
Epigenetic control of early dendritic cell lineage specification by the transcription factor IRF8 in mice
Competitive endogenous RNA is an intrinsic component of EMT regulatory circuits and modulates EMT
Delta Like-1 Gene Mutation: A Novel Cause of Congenital Vertebral Malformation
@266
Widespread roles of enhancer-like transposable elements in cell identity and long-range genomic interactions
@271
Antisense Transcription in Loci Associated to Hereditary Neurodegenerative Diseases
@282
Recent Progress on the Localization of the Spindle Assembly Checkpoint Machinery to Kinetochores
GP,
Off topic: the plot thickens…
Another ubiquitin-related stuff? 🙂
How Does SUMO Participate in Spindle Organization?
https://www.mdpi.com/2073-4409/8/8/801
OLV:
Yes, SUMO is a very interesting “side actor” in the already extremely complex ubiquitin system! 🙂
By the way, thank you for the very detailed summaries, my friend. 🙂
Also, as far as I understand, SUMO tags must be cleaved (removed) prior to ubiquitin-guided protein degradation.
GP @287:
My pleasure.
Bill Cole @288:
Any idea how that cleaving mechanism is established and activated?
NF-kB
Hepatoprotective Effects of Morchella esculenta against Alcohol-Induced Acute Liver Injury in the C57BL/6 Mouse Related to Nrf-2 and NF-kB Signaling
A novel curcumin analog inhibits canonical and non-canonical functions of telomerase through STAT3 and NF-kB inactivation in colorectal cancer cells
Validation of the prognostic value of NF-kB p65 in prostate cancer: A retrospective study using a large multi-institutional cohort of the Canadian Prostate Cancer Biomarker Network
Olv
It is a cleaving enzyme, so it is transcribed at some interval. If it does not work properly, it can potentially be responsible for certain diseases. As far as I can tell, regulation comes from transcription rates. Gpuccio, do you agree?
Graphic
Bill Cole,
Thanks.
The NF-kB Signaling is quite simple. 🙂
Another NF-kB paper
Popular Posts (Last 30 Days)
Now Steve Pinker is getting #MeToo’d, at Inside… (2,614)
Controlling the waves of dynamic, far from… (2,330)
Atheism’s problem of warrant (–>… (1,850)
Chemist James Tour calls time out on implausible… (1,209)
Are extinctions evidence of a divine purpose in life? (1,196)
NF-kB is all over the map. 🙂
It’s funny that before this OP I didn’t notice this NF-kB, but now it seems to pop up in many papers.
I like the poetic way this OP ends:
I would add another question: any objection?
PeterA,
Perhaps Sven Mil will answer all those questions next time he comes back to respond to GP’s comments @270 and @272. 🙂
Maybe Dr Art Hunt or Dr Larry Moran could assist Sven Mil to write a coherent objection to your comment @299. 🙂
Just wait… be patient. 🙂
NF-kB graphical illustrations (links):
NF-kB Signaling
NF-kB mechanism of action (OP) B
NF-kB Activation in Lymphoid Malignancies
NF-kB Pathway
NF-kB Signalling
NF-kB more images
Deletion of NFKB1 enhances canonical NF-κB signaling and increases macrophage and myofibroblast content during tendon healing
GP @2:
“It should be rather obvious that, if the true purpose of biological beings were to achieve the highest survival and fitness, as neo-darwinists believe, life should have easily stopped at prokaryotes.”
That’s an interesting observation indeed.
Far from stopping at the comfortable fitness level of prokaryotes, evolution produced a mind-boggling information jump to eukaryotes!
How come? How can one explain that?
If my children and grandchildren ask me why and how that phenomenal jump occurred, what should I tell them?
PeterA:
“If my children and grandchildren ask me why and how that phenomenal jump occurred, what should I tell them?”
Tell them that it’s widely accepted that it all resulted from long evolutionary processes, mainly RV+NS.
PavelU:
Why tell them a lie?
ET,
That’s what is written in the textbooks. Are you implying that the textbooks are incorrect? Really?
There’s abundant literature supporting RV+NS.
PavelU- There isn’t any literature supporting the claim that NS, which includes RV, can do anything beyond merely changing allele frequency over time, within a population. Speculation based on the assumption abounds in textbooks. But no one knows how to test the claim that NS, drift or any other blind and mindless process can actually do as advertised.
That is why probability arguments exist. There isn’t any actual data, observations or experiments to support it.
If the textbooks claim otherwise then they are promoting lies, falsehoods and misrepresentations.
ET,
I like your comment. Good point. But I doubt PavelU will understand it, because the poor guy seems oblivious. He should wake up and smell the flowers in the garden. 🙂
Jawa- “Their” argument is, and always has been, “X exists and we know (wink, wink) it wasn’t via intelligent design. It’s just a matter of time before we figure it all out.” It does make for a nice narrative, though. I was impressed when I went to the Smithsonian and saw the short movie on how life’s diversity arose. But it all seemed so Lamarckian, as it still does. They always talk about physical transformations without any discussion of the mechanisms capable of carrying them out. There is never any genetic link.
And that is very telling
ET,
Here’s something for you and your friends to learn from before you write your next comment:
A New Clue to How Life Originated
A long-standing mystery about early cells has a solution—and it’s a rather magical one.
https://www.theatlantic.com/science/archive/2019/08/interlocking-puzzle-allowed-life-emerge/595945/
Last part of PavelU’s cited article:
To wit:
Wow, PavelU- I had just finished reading that article about a half hour ago. You do realize that not just any membrane will do, right? You have to get nutrients in and waste out. You also have to be able to communicate with the different compartments. But most of all, without some internal replication mechanism, nothing will ever come from lipid bubbles with amino acids.
But yes, it is all interesting stuff and shows how desperate some people are.
But I digress- This at least seems to produce another catch-22. Lipid bubbles can’t survive without amino acids and the molecules of life cannot survive without some environmental barrier. And lipid bubbles present such a barrier.
So a cytoplasm filled with amino acids- to some extent- would create a stable barrier along with the raw materials needed to produce proteins. But not just any protein will do. And the method of producing them will be too slow to be effective- if it’s even capable.
barriers, pores, pumps and gates
From a design standpoint this all makes sense- this foundational requirement- the ready made selectively permeable membrane. It has a cytoplasm teeming with amino acids for structural support of that membrane. And they are also raw materials for making proteins. The proteins used in creating the pores and channels.
Back to the OP topic:
Inhibition of LPS-Induced Oxidative Damages and Potential Anti-Inflammatory Effects of Phyllanthus emblica Extract via Down-Regulating NF-kB, COX-2, and iNOS in RAW 264.7 Cells
Loss of BAP1 Is Associated with Upregulation of the NFkB Pathway and Increased HLA Class I Expression in Uveal Melanoma
Decursinol angelate ameliorates 12-O-tetradecanoyl phorbol-13-acetate (TPA)-induced NF-kB activation on mice ears by inhibiting exaggerated inflammatory cell infiltration, oxidative stress and pro-inflammatory cytokine production
A novel curcumin analog inhibits canonical and non-canonical functions of telomerase through STAT3 and NF-kB inactivation in colorectal cancer cells
Azithromycin Polarizes Macrophages to an M2 Phenotype via Inhibition of the STAT1 and NF-kB Signaling Pathways
In the papers cited @314 note the following topics associated with the current OP:
NF-kB down-regulation
NF-kB up-regulation
NF-kB activation
NF-kB inactivation
Inhibition of NF-kB Signaling Pathway
GP,
Please, help me with this:
A lesson in homology ?
https://doi.org/10.7554/eLife.48335
https://elifesciences.org/articles/48335#x339477c1
Does that mean that something else is involved in this complex process besides the genes and the signaling pathways ?
Thanks.
GP,
Here’s a major hint to answer the question @316:
I’m beginning to like that cool co-option idea.
🙂
Have you seen any coherent explanation of how it all works ?
Could this be a potential topic for a future OP?
OLV at #316-317:
I had a look at the paper. I am not sure what the problem is.
It is not surprising, IMO, that some master TFs have an important role in the spatial definition of limbs and appendages in different types of animals. What’s the problem there? Establishing three-dimensional axes seems to be a very basic engineering tool; I am not surprised at all that it is basically implemented by the same TF families in different beings.
Of course, that does not explain at all the differences between limbs: those must be explained by other types of information, other genes or other epigenetic networks.
The establishment of axes and symmetries is one thing. The morphological definition of limbs is another thing.
It is rather clear that biological engineering takes place at different, well ordered levels. Some functions remain similar and are conserved; others need completely new implementations to generate diversity of function.
I am not sure of the supposed role of “cooption” in all this.
To all:
At #118 I have mentioned the complexity of the CBM signalosome, whose role in T-cell receptor (TCR)-mediated T-cell activation is fundamental. I have also mentioned how CARD11 is a wonderful example of a very big and complex protein exhibiting a huge information jump in vertebrates (see also the additional figure at the end of the OP).
Well, here is another very recent paper about CARD11:
Coordinated regulation of scaffold opening and enzymatic activity during CARD11 signaling.
https://www.ncbi.nlm.nih.gov/pubmed/31391255
Interesting. Coordinated regulation. Actively coordinate scaffold opening and the induction of enzymatic activity. See also the very interesting Fig. 6.
Oh, and please note the use of the words “has evolved to” in the end, just to mean “is able to”. 🙂
GP @319:
🙂
GP @318:
Excellent explanation, as usual. Thanks.
GP @319:
All that resulted from RV+NS+T?
🙂
That’s a very interesting paper that GP posted @319.
Here’s the link again for those who don’t want to scroll up to GP’s original comment:
http://m.jbc.org/content/early.....d=31391255
Structures of autoinhibited and polymerized forms of CARD9 reveal mechanisms of CARD9 and CARD11 activation
Nat Commun. 2019; 10: 3070.
doi: 10.1038/s41467-019-10953-z
https://www.nature.com/articles/s41467-019-10953-z
Communication codes in developmental signaling pathways
Pulin Li, Michael B. Elowitz
Development 2019 146: dev170977
doi: 10.1242/dev.170977
“communication codes” ?
Deletion of NFKB1 enhances canonical NF-kB signaling and increases macrophage and myofibroblast content during tendon healing
https://www.nature.com/articles/s41598-019-47461-5
This is interesting:
Popular Posts (Last 30 Days)
1. Now Steve Pinker is getting #MeToo’d, at Inside… (2,534): posted July 17, visited 2,679 times (32 visits today), 1 comment
2. Controlling the waves of dynamic, far from… (1,292): posted July 10, visited 2,543 times (79 visits today), 326 comments
Hey!
Has anybody seen Sven Mil lately?
Will he ever come back to respond to GP’s comments @270 and @272?
Did he run out of objections?
Maybe Dr Art Hunt or Dr Larry Moran could assist him with writing a coherent counterargument ?
🙂
This is interesting, isn’t it?
Popular Posts (Last 30 Days)
Controlling the waves of dynamic, far from… (1,304): Jul 10, visited 2,758 times (250 today), 329 replies
Are extinctions evidence of a divine purpose in life? (1,272): Aug 4, visited 1,272 times (37 today), 11 replies
Chemist James Tour calls time out on implausible… (1,140): Aug 19, visited 1,238 times (9 today), 16 replies
Apes and humans: How did science get so detached… (959)
“Descartes’ mind-body problem” makes nonsense of materialism (947)
Jawa at #329:
Thank you for the statistics!
It’s good to see that the thread is still going rather well, even if I have been rather busy with other things. 🙂
OLV at #325:
Interesting paper.
Indeed, communication between cells, often very distant and different cells in multicellular organisms, requires at least three different levels of coding:
1) The message: a specific requirement to be sent to the target cells from the cells that originate the signals. This is a symbolic coding, because of course the “messenger” molecules, be they hormones, cytokines, or anything else, have really nothing to do with the response they are destined to evoke in the end. They are symbolic messengers, and nothing else. Moreover, the coding implies not only the type of messenger molecules, but also their concentration, distribution in the organism, and possibly modifications or interactions with other structures, as we have seen for example in the OP dedicated to the extracellular fluid.
2) First decoding step and transmission to the nucleus. This is usually a very complex step, where multiple levels of decoding interact in an extremely articulated way, often implying a lot of control of random noise and chaotic components, as seen in this thread. Moreover, many layers are superimposed here, starting from membrane receptors, their modulations, their immediate translation systems, and then the more complex pathways that translate the partially decoded message to the nuclear environment. Please note that at this level the message has already been partially decoded, but is still transmitted in rather symbolic form, usually as chains of molecular interactions that can assume multiple configurations and forms. The NF-kB system described in the OP is a very good example, with its many semiotic polymorphisms.
3) Finally, the ultimate decoding takes place in the nucleus, where the complex codes and subcodes of TFs, with their manifold interactions and tweakings, must in some way transform the initial message into an effective modulation, in space and time and intensity, of the transcription of multiple specific genes (often hundreds, or even thousands of them). The final result in cell behaviour modifications will be the controlled, and usually very efficient, consequence of the original message started by the activity of many distant cells in the organism.
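The three levels just described form, in effect, a layered encode/decode pipeline: a symbolic message is emitted, partially decoded and relayed to the nucleus, and finally translated into a transcription program. As a purely illustrative sketch of that layering (all function names and strings are hypothetical placeholders, not biology):

```python
# Toy sketch of the three coding levels described above. The stage names
# follow the comment; the functions are purely illustrative placeholders.

def encode_message(requirement: str) -> str:
    """Level 1: originating cells emit a symbolic messenger molecule
    (hormone, cytokine, ...) that merely stands for the requirement."""
    return f"messenger({requirement})"

def transduce(messenger: str) -> str:
    """Level 2: membrane receptors and signaling pathways (e.g. the
    NF-kB system) partially decode the message and relay it to the
    nucleus, still in symbolic form."""
    return f"nuclear_signal({messenger})"

def transcribe(nuclear_signal: str) -> str:
    """Level 3: TF codes and subcodes in the nucleus translate the
    signal into a modulated transcription program over many genes."""
    return f"transcription_program({nuclear_signal})"

# The full pipeline, applied to a hypothetical requirement:
response = transcribe(transduce(encode_message("inflammatory response")))
print(response)
# transcription_program(nuclear_signal(messenger(inflammatory response)))
```

Each stage only sees the output of the previous one, which mirrors the point made above: the message stays symbolic until the final decoding step in the nucleus.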
All that is certainly beautiful and fascinating. But also amazing. Very much indeed.
To all:
Our good friend, the CBM signalosome, discussed in some detail in the OP and in the thread, has recently been the object of a very interesting “Research Topic” in Frontiers in Immunology.
Here is the link to the 15 articles:
Research Topic: CARMA Proteins: Playing a Hand of Four CARDs
https://www.frontiersin.org/research-topics/6853/carma-proteins-playing-a-hand-of-four-cards#articles
And here is the Editorial:
Editorial: CARMA Proteins: Playing a Hand of Four CARDs
https://www.frontiersin.org/articles/10.3389/fimmu.2019.01217/full
A few thoughts:
Very interesting.
GP @332:
This is very interesting indeed.
Thanks!
GP @332:
That collection of articles is a biology research treasure trove.
CARMA3: Scaffold Protein Involved in NF-kB Signaling
And this is just one of the 15 articles in the given collection.
Very interesting indeed.
“increased activation of NF-kB and MAPK via NFKB1 deletion enhance macrophage and myofibroblast content at the repair, driving increased collagen deposition and biomechanical properties.”
Sci Rep. 2019; 9: 10926.
doi: 10.1038/s41598-019-47461-5
PMCID: PMC6662789
PMID: 31358843
Has anybody heard of Sven Mil lately?
When GP politely deflated Sven Mil’s hostile pseudo arguments, the guy simply disappeared from the scene.
Did he run for the doors in complete panic?
Or did he go to consult professors Art Hunt and Larry Moran, who had embarrassing experiences in this website?
🙂
This discussion remains among the top 5 most popular the last 30 days according to number of visits.
PS. Please, note that the comment @337 is related to the one @328.
Guess what interesting topic will GP write about in his next OP?
🙂
After this OP, now NF-kB seems to pop up everywhere. 🙂
The NF-kB Signaling Pathway
https://www.creative-diagnostics.com/The-NF-kB-Signaling-Pathway.htm
Fight Inflammation by Inhibiting NF-kB?
https://www.lifeextension.com/magazine/2019/7/fighting-inflammation-by-inhibiting-nf-kb/page-01
To all:
Speaking of semiotic polymorphism, this CBM signalosome, and the amazing CARDs involved in it, are a really good example of that concept.
Look, for example, at this article (one of the 15 mentioned at #332, and already mentioned by OLV at #335):
CARMA3: Scaffold Protein Involved in NF-kB Signaling
https://www.frontiersin.org/articles/10.3389/fimmu.2019.00176/full
Now, don’t be confused. We humans are simply adding some unnecessary complexity to the topic by calling these fascinating proteins by two different names:
CARD proteins (Caspase recruitment domain proteins)
or
CARMA proteins (Caspase recruitment domain and membrane-associated guanylate kinase-like proteins)
They are, however, the same thing.
Always to clarify, here is the corresponding nomenclature for the four proteins discussed in the Research topic quoted at #332:
CARMA1 = CARD11 (1154 AAs)
CARMA2 = CARD14 (1004 AAs)
CARMA3 = CARD10 (1032 AAs)
CARD9 seems to be, simply, CARD9 (536 AAs)
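For quick reference, the dual nomenclature above can be expressed as a small lookup table. A minimal sketch (names and lengths are taken from the list above; the helper function itself is just an illustration):

```python
# Dual nomenclature for the CARMA/CARD scaffold proteins listed above.
# Lengths are the human protein lengths in amino acids, as given in the
# comment. CARD9 has no CARMA-style alias and is omitted from the map.
CARMA_TO_CARD = {
    "CARMA1": ("CARD11", 1154),
    "CARMA2": ("CARD14", 1004),
    "CARMA3": ("CARD10", 1032),
}

def card_name(carma: str) -> str:
    """Return the CARD-style name for a CARMA-style name."""
    return CARMA_TO_CARD[carma][0]

print(card_name("CARMA3"))  # CARD10
```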
OK, the paper mentioned here deals with CARMA3 in particular, but gives some interesting background information about the first three proteins:
So, let’s try to understand, maybe giving a look at Fig. 1 in the paper.
a) These 3 proteins are similar, and share similar domains. True, but beware: these are very different proteins, with individual sequences. A simple BLAST shows that they share, at most, 400 bits of homology (in the human form), and that is really very little for one-thousand-AA-long proteins.
b) They are expressed from different genes and in different tissues. We already know that CARMA1/CARD11 is expressed in the lymphoid tissue, both B and T lymphocytes. CARMA2/CARD14 is expressed, instead, in the skin and mucosa. CARMA3/CARD10, finally, is expressed in the lungs, heart and liver. Such a strict compartmentalization is, in itself, very interesting.
c) Different tissue expression, of course, is connected to different roles. We already know that CARMA1/CARD11 is essential in the transmission of the specific immune signal in B and T lymphocytes, from BCR and TCR to the NF-kB system. Its defects are involved in extremely serious hereditary immune deficiencies. CARMA2/CARD14, on the other hand, “plays a critical role mediating IL-17RA signaling in keratinocytes”, and an anomaly of its working seems to be connected to psoriasis. Finally, CARMA3/CARD10 “functions as an indispensable adaptor protein in modulating NF-kB signaling downstream of some GPCRs (G protein-coupled receptors), including angiotensin II receptor and lysophosphatidic acid receptor, as well as receptor tyrosine kinases (RTKs), such as epidermal growth factor (EGF) receptor and insulin-like growth factor (IGF) receptor (12–14). Recent studies indicate that besides NF-kB signaling, CARMA3 also serves as a modulator in antiviral RLR signaling, providing a new understanding of CARMA3.”
d) However, the other two components of the CBM signalosome, the two proteins BCL10 and MALT1, are ubiquitously expressed in all tissues, and they work in a similar way with all the different CARMA/CARD proteins.
e) Moreover, all these three different pathways, starting at completely different membrane receptors in completely different cells, and activated by each of the three mentioned CARMA/CARD proteins, converge, through BCL10 and MALT1, on the same basic pathway destined to transmit the signal to the nucleus: our well known NF-kB system, with all its complexity and flexibility.
f) Finally, of course, in the nucleus each different signal that was at the start of the activation is, in some way, translated into a completely different transcription pattern, involving maybe hundreds, or thousands, of different genes. So B and T lymphocytes respond in a very specific way, and so do keratinocytes, and so do heart cells or liver cells.
That’s what I call semiotic polymorphism. At its best! 🙂
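The convergence described in points b) to f) can be pictured as a routing table: different tissue-specific inputs funnel into the same shared BCL10/MALT1/NF-kB core, and then fan out again into tissue-specific transcription programs. A minimal illustrative sketch (protein and receptor names are those quoted from the paper; the code structure itself is purely hypothetical):

```python
# Toy illustration of the pathway convergence described in points b)-f):
# different receptors and CARMA/CARD scaffolds, in different tissues,
# all converge via the shared BCL10/MALT1 components on the NF-kB
# system, which then drives a tissue-specific transcription program.
PATHWAYS = {
    "B/T lymphocyte":   {"receptor": "BCR/TCR",    "scaffold": "CARMA1/CARD11"},
    "keratinocyte":     {"receptor": "IL-17RA",    "scaffold": "CARMA2/CARD14"},
    "lung/heart/liver": {"receptor": "GPCRs/RTKs", "scaffold": "CARMA3/CARD10"},
}

# The downstream core is the same for all three pathways:
SHARED_CORE = ["BCL10", "MALT1", "NF-kB"]

def signal(cell_type: str) -> list:
    """Trace a signal from receptor to transcription for a cell type."""
    p = PATHWAYS[cell_type]
    # Different upstream inputs and final outputs, same shared core:
    return ([p["receptor"], p["scaffold"]]
            + SHARED_CORE
            + [f"{cell_type}-specific transcription program"])

print(" -> ".join(signal("keratinocyte")))
```

Running this for each cell type makes the "semiotic polymorphism" visible: the middle of every trace is identical, while both ends differ.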
“semiotic polymorphism”?
No wonder the “3rd Way of evolution” folks are desperately looking for a quick extension to the neo-Darwinian theory.
At the pace you keep citing papers revealing complex functionality and functional complexity within biological systems, soon they won’t know how to explain anything in biology, and therefore will be forced to stick to the old established doctrine: RV+NS did it.
🙂