Transcription is certainly the essential node in the complex network of procedures and regulations that control the many activities of living cells. Understanding how it works is a fascinating adventure in the world of design and engineering. The issue is huge and complex, but I will try to give here a simple (but probably not too brief) outline of its main features, always from a design perspective.
Fig. 1 A simple and effective summary of a gene regulatory network
Introduction: where is the information?
One of the greatest mysteries in cell life is how the information stored in the cell itself can dynamically control the many changes that continuously take place in living cells and in living beings. So, the first question is: what is this information, and where is it stored?
Of course, the classical answer is that it is in DNA, and in particular in protein coding genes. But today we know that that answer is not enough.
Indeed, a cell is an ever changing reality. If we take a cell, any cell, at some specific time t, that cell is the repository of a lot of information, at that moment and in that state. That information can be roughly divided into (at least) two different compartments:
a) Genomic information, which is stored in the sequence of nucleotides in the genome. This information is relatively stable and, with a few important exceptions, is the same in all the cells of a multicellular being.
b) Non genomic information. This includes all the specific configurations which are present in that cell at time t, and in particular all epigenetic information (configurations that modify the state of the genomic information) and, more generally, all configurations in the cell. The main components of this dynamic information are the cell transcriptome and proteome at time t and the sum total of its chromatin configurations.
Now, let’s try to imagine the flow of dynamic information in the cell as a continuous interaction between these two big levels of organization:
- The transcriptome/proteome is the sum total of all proteins and RNAs (and maybe other functional molecules) that are present in the cell at time t, and which define what the cell is and does at that time.
- The chromatin configuration can be considered as a special “reading” of the genomic information, individualized by many levels of epigenetic control. IOWs, while the genomic information is more or less the same in all cells, it can be expressed in myriads of different ways, according to the chromatin organization at that moment, which determines what genes or parts of the genome are “available” at time t in the cell. In this way, one genomic sequence can be read in multiple different ways, with different functional meanings and effects. So, if we just stick to protein coding genes, the 20000 genes in the human genome are available only partially in each cell at each moment, and that allows for a myriad of combinatorial dynamic “readings” of the one stable genome.
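To give an idea of how vast this combinatorial space is, here is a tiny Python computation (my own illustration, not from any quoted paper): even with the crude simplification that each of the 20000 protein coding genes is simply “available” or “not available” in a given chromatin configuration, the number of possible readings is beyond astronomical.

```python
# Illustrative arithmetic only: if each of ~20,000 protein-coding genes can be
# either "available" or "unavailable" in a given chromatin configuration, the
# number of possible on/off combinations is 2**20000 -- far more than the
# estimated number of atoms in the observable universe (~10**80).
import math

n_genes = 20_000
log10_combinations = n_genes * math.log10(2)   # log10 of 2**20000
print(f"2^{n_genes} ~ 10^{log10_combinations:.0f}")  # 2^20000 ~ 10^6021
```

Of course, real configurations are not independent on/off switches, so this is only an upper-bound illustration of the combinatorial potential of one stable genome.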
Fig. 2 shows the general form of these concepts.
Fig. 2
Two important points:
- The interaction between transcriptome/proteome and chromatin configuration is, indeed, a two-way interaction. The transcriptome/proteome determines the chromatin configuration in many ways: for example, by changing the methylation of DNA (DNA methyltransferases); by modifying the post-translational modifications (methylation, acetylation, ubiquitination and others) of histones (covalent histone-modifying complexes); by creating new loops in chromatin (transcription factors); or by directly remodeling chromatin itself (ATP-dependent chromatin remodeling complexes). In the same way, any modification of the chromatin landscape immediately influences what the existing transcriptome/proteome is and can do, because it changes the availability of genes, promoters, enhancers and regulatory regions in general at the chromatin level, and therefore, through gene transcription, directly changes the transcriptome/proteome itself. That’s the meaning of the two big red arrows connecting, at each stage, the two levels of regulation. The same concept is evident in Fig. 1, which shows how the output of transcription has immediate, complex and constant feedback on transcription regulation itself.
- As a result of the continuous changes in the transcriptome/proteome and in chromatin configurations, cell states continuously change in time (yellow arrows). However, this continuous flow of different functional states in each cell can have two different meanings, as shown by the two alternative big brown arrows on the right:
- Cells can change dramatically, following a definite developmental pathway: that’s what happens in cell differentiation, for example from a haematopoietic stem cell to differentiated blood cells like lymphocytes, monocytes, neutrophils, and so on. The end point of differentiation is the final differentiated cell, which is in a sense more “stable”, having reached its final intended “form”.
- Those “stable” differentiated cells, however, are still in a continuous flow of informational change, which is still driven by continuous modifications in the transcriptome/proteome and in chromatin configurations. Even if these changes are less dramatic, and do not change the basic identity of the differentiated cell, they are still necessary to allow adaptation to different contexts: for example, varying messages from nearby cells or from the rest of the body (hormonal, neurologic or other), stimuli from the environment (metabolic conditions, stress, and so on), or even simply the adherence to circadian (or other) rhythms. IOWs, “stable” cells are not stable at all: they change continuously, while retaining their basic cell identity, and those changes are, again, driven by continuous modifications in the transcriptome/proteome and in the chromatin configurations of the cell.
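The two-way interaction, and the idea of differentiation as settling near a “stable” state, can be sketched with a toy numerical model (entirely my own illustration, with made-up dynamics and no biological detail): two coupled state variables keep updating each other until they approach an attractor, while never becoming literally static.

```python
# A toy sketch (NOT a biological model) of the two-way feedback described above:
# the "transcriptome/proteome" state and the "chromatin configuration" state
# each update as a function of the other, so the cell state keeps flowing even
# as differentiation settles it near an attractor.

def step(transcriptome: float, chromatin: float) -> tuple[float, float]:
    """One update: each level nudges the other toward itself (toy dynamics)."""
    new_chromatin = chromatin + 0.1 * (transcriptome - chromatin)
    new_transcriptome = transcriptome + 0.1 * (chromatin - transcriptome)
    return new_transcriptome, new_chromatin

t, c = 1.0, 0.0          # arbitrary starting states
for _ in range(50):
    t, c = step(t, c)
print(round(t, 3), round(c, 3))  # both converge toward 0.5 -- an "attractor"
```

The “attractor” here is just the fixed point of the toy dynamics; the real point is that the two levels never evolve independently of each other.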
Now, let’s have a look at the main components that make the whole process possible. I will mention only briefly the things that have been known for a long time, and will give more attention to the components for which there is some recent deeper understanding available.
We start with those components that are part of the DNA sequence itself, IOWs the genes themselves and those regions of DNA which are involved in their transcription regulation (cis-regulatory elements).
Cis elements
Genes and promoters.
Of course, genes are the oldest characters in this play. We have the 20000 protein coding genes in the human genome, which represent about 1.5% of the whole genomic sequence of 3 billion base pairs. But we must certainly add the genes that code for non coding RNAs: at present, about 15000 genes for long non coding RNAs, about 5000 genes for small non coding RNAs, and about 15000 pseudogenes. So, the concept of gene is now very different than in the past, and it includes many DNA sequences that have nothing to do with protein coding. Moreover, it is interesting to observe that many non protein coding genes, in particular those that code for lncRNAs, have a complex exon-intron structure, like protein coding genes, and undergo splicing, and even alternative splicing. For a good recent review about lncRNAs, see here:
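As a quick sanity check of the rounded figures above (my own arithmetic, using deliberately rough numbers):

```python
# Consistency check of the rounded figures quoted above (illustrative only):
genome_bp = 3_000_000_000        # ~3 billion base pairs
coding_fraction = 0.015          # ~1.5% protein coding
n_coding_genes = 20_000

coding_bp = genome_bp * coding_fraction                # ~45 Mb of coding sequence
mean_coding_bp_per_gene = coding_bp / n_coding_genes   # ~2,250 bp per gene
mean_protein_length = mean_coding_bp_per_gene / 3      # 3 bp per codon
print(int(mean_protein_length))  # 750
```

With these crude averages, ~45 Mb of coding sequence over 20000 genes gives roughly 750 AAs per protein, which is the right order of magnitude for human proteins; the rounded inputs make it an estimate, not a precise figure.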
Let’s go to promoters. This is the simple definition (from Wikipedia):
In genetics, a promoter is a region of DNA that initiates transcription of a particular gene. Promoters are located near the transcription start sites of genes, on the same strand and upstream on the DNA (towards the 5′ region of the sense strand). Promoters can be about 100–1000 base pairs long.
A promoter includes:
- The transcription start site (TSS), IOWs the point where transcription starts
- A binding site for RNA polymerase
- Binding sites for general transcription factors, such as the TATA box and the BRE in eukaryotes
- Other parts that can interact with different regulatory elements.
Promoters have been classified as ‘focused’ or ‘sharp’ promoters (those that have a single, well-defined TSS), and ‘dispersed’ or ‘broad’ promoters (those that have multiple closely spaced TSS that are used with similar frequency).
For a recent review of promoters and their features, see here:
Eukaryotic core promoters and the functional basis of transcription initiation
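The ‘focused’/‘dispersed’ distinction above can be made concrete with a minimal sketch (my own illustration; the 100 bp cutoff is an arbitrary choice for the example, not a standard threshold): given the observed TSS positions for a promoter, classify it by the spread of those positions.

```python
# Toy classifier for the 'focused' vs 'dispersed' promoter distinction:
# a promoter with one tight cluster of TSS positions is 'focused', one with
# TSSs spread over a wide window is 'dispersed'. The cutoff is illustrative.

def classify_promoter(tss_positions: list[int], spread_cutoff: int = 100) -> str:
    spread = max(tss_positions) - min(tss_positions)
    return "focused" if spread <= spread_cutoff else "dispersed"

print(classify_promoter([1_000_000]))                        # focused
print(classify_promoter([1_000_000, 1_000_450, 1_000_900]))  # dispersed
```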
Enhancers
Enhancers are a fascinating, and still poorly understood, issue. Again, here is the definition from Wikipedia:
In genetics, an enhancer is a short (50–1500 bp) region of DNA that can be bound by proteins (activators) to increase the likelihood that transcription of a particular gene will occur. These proteins are usually referred to as transcription factors. Enhancers are cis-acting. They can be located up to 1 Mbp (1,000,000 bp) away from the gene, upstream or downstream from the start site. There are hundreds of thousands of enhancers in the human genome. They are found in both prokaryotes and eukaryotes.
Enhancers are elusive things. The following paper:
Transcribed enhancers lead waves of coordinated transcription in transitioning mammalian cells
reports a total of 201,802 identified promoters and 65,423 identified enhancers in humans, and similar numbers in mouse (as of 2015). But there are probably many more than that number.
Working with specific TFs, enhancers are mainly responsible for the formation of dynamic chromatin loops, as we will see later.
Here is a recent paper about human enhancers in different tissues:
Genome-wide Identification and Characterization of Enhancers Across 10 Human Tissues.
Abstract:
Background: Enhancers can act as cis-regulatory elements (CREs) to control development and cellular function by regulating gene expression in a tissue-specific and ubiquitous manner. However, the regulatory network and characteristic of different types of enhancers (e.g., transcribed/non-transcribed enhancers, tissue-specific/ubiquitous enhancers) across multiple tissues are still unclear. Results: Here, a total of 53,924 active enhancers and 10,307 enhancer-associated RNAs (eRNAs) in 10 tissues (adrenal, brain, breast, heart, liver, lung, ovary, placenta, skeletal muscle and kidney) were identified through the integration of histone modifications (H3K4me1, H3K27ac and H3K4me3) and DNase I hypersensitive sites (DHSs) data. Moreover, 40,101 tissue-specific enhancers (TS-Enh), 1,241 ubiquitously expressed enhancers (UE-Enh) as well as transcribed enhancers (T-Enh), including 7,727 unidirectionally transcribed enhancers (1D-Enh) and 1,215 bidirectionally transcribed enhancers (2D-Enh) were defined in 10 tissues. The results show that enhancers exhibited high GC content, genomic variants and transcription factor binding sites (TFBS) enrichment in all tissues. These characteristics were significantly different between TS-Enh and UE-Enh, T-Enh and NT-Enh, 2D-Enh and 1D-Enh. Furthermore, the results showed that enhancers obviously upregulate the expression of adjacent target genes which were remarkably correlated with the functions of corresponding tissues. Finally, a free user-friendly tissue-specific enhancer database, TiED (http://lcbb.swjtu.edu.cn/TiED), has been built to store, visualize, and confer these results. Conclusion: Genome-wide analysis of the regulatory network and characteristic of various types of enhancers showed that enhancers associated with TFs, eRNAs and target genes appeared in tissue specificity and function across different tissues.
Promoter and enhancer associated RNAs
A very interesting point which has been recently clarified is that both promoters and enhancers, when active, are transcribed. IOWs, beyond their classical action as cis regulatory elements (DNA sequences that bind trans factors), they also generate specific non coding RNAs. They are called respectively Promoter-associated RNAs (PARs) and Enhancer RNAs (eRNAs). They can be short or long, and both types seem to be functional in transcription regulation.
Here is a recent paper that reviews what is known of PARs, and their “cousins”, terminus-associated RNAs (TARs):
Classification of Transcription Boundary-Associated RNAs (TBARs) in Animals and Plants
Here, instead, is a recent review about eRNAs:
Enhancer RNAs (eRNAs): New Insights into Gene Transcription and Disease Treatment
Abstract:
Enhancers are cis-acting elements that have the ability to increase the expression of target genes. Recent studies have shown that enhancers can act as transcriptional units for the production of enhancer RNAs (eRNAs), which are hallmarks of active enhancers and are involved in the regulation of gene transcription. The in-depth study of eRNAs is of great significance for us to better understand enhancer function and transcriptional regulation in various diseases. Therefore, eRNAs may be a potential therapeutic target for diseases. Here, we review the current knowledge of the characteristics of eRNAs, the molecular mechanisms of eRNAs action, as well as diseases related to dysregulation of eRNAs.
So, this is a brief description of the essential cis regulatory elements. Let’s go now to trans regulatory elements, IOWs those molecules that are not part of the DNA sequence, but work on it to regulate gene transcription.
Trans elements
The first group of trans acting tools includes those molecules that are the same for all transcriptions. They are “general” transcription tools.
I will start with a brief mention of RNA polymerase, which is not a regulatory element, but rather the true effector of transcription:
DNA-directed RNA polymerases
This is a family of enzymes found in all living organisms. They open the double-stranded DNA and carry out transcription, synthesizing RNA from the DNA template.
I don’t want to deal in detail with this complex subject: suffice it to say, for the moment, that RNA polymerases are very big and very complex proteins, with some basic information shared from prokaryotes to multicellular organisms. In humans, RNA polymerase II is the one responsible for the transcription of protein coding mRNAs, and of some non coding RNAs, including many lncRNAs. Just as an example, human RNA Pol II is a multiprotein complex of 12 subunits, for a sum total of more than 4500 AAs.
Now, let’s go to the general regulatory elements:
General TFs
General TFs are transcription factors that bind to the promoter to allow the start of transcription. They are called “general” because they are common to all transcriptions, while specific TFs act on specific genes.
In bacteria there is one general TF, the sigma factor, with different variants.
In archaea and in eukaryotes there are a few. In eukaryotes, there are six. The first that binds to the promoter is TFIID, a multiprotein factor which includes as its core the TBP (TATA binding protein, 339 AAs in humans), plus 14 additional subunits (TAFs), the biggest of which, TAF1, is 1872 AAs long in humans. Four more general TFs bind sequentially to the promoter. The sixth, TFIIA, is not required for basal transcription, but can stabilize the complex.
So, the initiation complex, bound to the promoter, is essentially made by RNA Pol II (or other) + the general TFs.
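As a schematic recap, here is a tiny sketch of the stepwise assembly (my own illustration, using the classical textbook order of recruitment; in vivo assembly is more dynamic and partly concurrent, and TFIIA is optional/stabilizing):

```python
# Sketch of stepwise pre-initiation complex (PIC) assembly at the promoter,
# in the classical textbook recruitment order. Simplified illustration only.

ASSEMBLY_ORDER = ["TFIID", "TFIIB", "RNA Pol II + TFIIF", "TFIIE", "TFIIH"]

def assemble_pic(promoter_accessible: bool) -> list[str]:
    """Return the factors recruited to the promoter, in order."""
    if not promoter_accessible:
        return []          # closed chromatin: no initiation complex can form
    return list(ASSEMBLY_ORDER)

print(" -> ".join(assemble_pic(promoter_accessible=True)))
```

Note how the accessibility check comes first: as we will see in the chromatin section, none of this can happen on a closed promoter.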
The following is a good review of the assembly of the initiation complex at the promoter:
Structural basis of transcription initiation by RNA polymerase II (paywall)
I quote here the conclusions of the paper:
Conclusions and perspectives
The initiation of transcription at Pol II promoters is a very complex process in which dozens of polypeptides cooperate to recognize and open promoter DNA, locate the TSS and initiate pre-mRNA synthesis. Because of its large size and transient nature, the study of the Pol II initiation complex will continue to be a challenge for structural biologists. The first decade of work, which started in the 1990s, provided structures for many of the factors involved and several of their DNA complexes. The second decade of research provided structural information on Pol II complexes and led to models for how general transcription factors function. Over the next decade, we hope that a combination of structural biology methods will resolve many remaining questions on transcription initiation, and elucidate the mechanism of promoter opening and initial RNA synthesis, the remodelling of the transient protein–DNA interactions occurring at various stages of initiation, and the conformational changes underlying the allosteric activation of initiation and the transition from initiation to elongation. Important next steps include more detailed structural characterizations of TFIIH and the 25-subunit coactivator complex Mediator, not only in their free forms but also as parts of initiation complexes.
For the Mediator, see next section.
And here is another good paper about that:
Zooming in on Transcription Preinitiation
which has a very good Figure summarizing it:
Fig. 3: From Kapil Gupta, Duygu Sari-Ak, Matthias Haffke, Simon Trowitzsch, Imre Berger: Zooming in on Transcription Preinitiation, https://doi.org/10.1016/j.jmb.2016.04.003 Creative Commons license
Transcription PIC. Class II gene transcription is brought about by (in humans) over a hundred polypeptides assembling on the core promoter of protein-encoding genes, which then give rise to messenger RNA. A PIC on a core promoter is shown in a schematic representation (adapted from Ref. [5]). PIC contains, in addition to promoter DNA, the GTFs TFIIA, B, D, E, F, and H, and RNA Pol II. PIC assembly is thought to occur in a highly regulated, stepwise fashion (top). TFIID is among the first GTFs to bind the core promoter via its TBP subunit. Nucleosomes at transcription start sites contribute to PIC assembly, mediated by signaling through epigenetic marks on histone tails. The Mediator (not shown) is a further central multiprotein complex identified as a global transcriptional regulator. TATA, TATA-box DNA; BREu, B recognition element upstream; BREd, B recognition element downstream; Inr, Initiator; DPE, Down-stream promoter element.
The Mediator complex
The Mediator complex is the third “general” component of transcription initiation, together with RNA Pol II and the general TFs. However, it is a really amazing structure for many specific reasons.
- First of all, it is really, really complex. It is a multiprotein structure which, in metazoans, is composed of about 25 different subunits, while it is slightly “simpler” in yeast (up to 21 subunits). Here is a very simplified scheme of the structure:
Fig. 4: Diagram of mediator with cyclin-dependent kinase module. By original figure: Tóth-Petróczy Á, Oldfield CJ, Simon I, Takagi Y, Dunker AK, Uversky VN, et al.editing: Dennis Pietras, Buffalo, NY, USA [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons https://commons.wikimedia.org/wiki/File:Mediator4TC.jpg
- Second, and most important, is the fact that, while it is certainly a “general” factor, because it is involved in the transcription of almost all genes, its functions remain still poorly understood, and it is very likely that it works as an “integration hub” which transmits and modulates many gene-specific signals (for example, those from specific TFs) to the initiation complex. In that sense, the name “mediator” could not be more appropriate: a structure which mediates between the general complex transcription mechanism and the even more complex regulatory signals coming from the enhancer-specific TFs network, and probably from other sources.
- Third, this seems to be an essentially eukaryotic structure: RNA Pol II, TFs, promoters and enhancers, while reaching their full complexity only in eukaryotes, are in part based on functions already present in prokaryotes, but the proteins that make up the Mediator structure seem to be absent in prokaryotes (as far as I can say; I have checked only a few of them). Moreover, many of them show a definite information jump in vertebrates, as we have seen in important regulatory proteins.
Fig. 5 shows, for example, the evolutionary history of 4 of the biggest proteins in the Mediator complex, in terms, as usual, of human conserved information. The big information jump in vertebrates is evident in all of them.
Fig. 5
Here is a paper (2010) about Mediator and its functions:
The metazoan Mediator co-activator complex as an integrative hub for transcriptional regulation
Abstract:
The Mediator is an evolutionarily conserved, multiprotein complex that is a key regulator of protein-coding genes. In metazoan cells, multiple pathways that are responsible for homeostasis, cell growth and differentiation converge on the Mediator through transcriptional activators and repressors that target one or more of the almost 30 subunits of this complex. Besides interacting directly with RNA polymerase II, Mediator has multiple functions and can interact with and coordinate the action of numerous other co-activators and co-repressors, including those acting at the level of chromatin. These interactions ultimately allow the Mediator to deliver outputs that range from maximal activation of genes to modulation of basal transcription to long-term epigenetic silencing.
Fig. 2 in the paper gives a more detailed idea of the general structure of the complex, with its typical section, head, middle, tail, and accessories.
This more recent paper (2015) is a good review of what is known about the Mediator complex, and strongly details the evidence in favor of its key role in integrating regulation signals (especially from enhancers and specific TFs) and delivering those signals to the initiation complex.
The Mediator complex: a central integrator of transcription
In Box 3 of that paper you can find a good illustration of the pre-initiation complex, including Mediator. Fig. 3 is a simple summary of the main actors in transcription, and it also introduces the looping created by the interaction between enhancers/specific TFs on one side, and promoter/initiation complex on the other, that we are going to discuss next. It also introduces another important actor, cohesin, which will also be discussed.
Finally, this very recent paper (2018) is an example of the functional relevance of Mediator, as shown by its involvement in human neurologic diseases:
Abstract:
MED12 is a member of the large Mediator complex that controls cell growth, development, and differentiation. Mutations in MED12 disrupt neuronal gene expression and lead to at least three distinct X-linked intellectual disability syndromes (FG, Lujan-Fryns, and Ohdo). Here, we describe six families with missense variants in MED12 (p.(Arg815Gln), p.(Val954Gly), p.(Glu1091Lys), p.(Arg1295Cys), p.(Pro1371Ser), and p.(Arg1148His), the latter being first reported in affected females) associated with a continuum of symptoms rather than distinct syndromes. The variants expanded the genetic architecture and phenotypic spectrum of MED12-related disorders. New clinical symptoms included brachycephaly, anteverted nares, bulbous nasal tip, prognathism, deep set eyes, and single palmar crease. We showed that MED12 variants, initially implicated in X-linked recessive disorders in males, may predict a potential risk for phenotypic expression in females, with no correlation of the X chromosome inactivation pattern in blood cells. Molecular modeling (Yasara Structure) performed to model the functional effects of the variants strongly supported the pathogenic character of the variants examined. We showed that molecular modeling is a useful method for in silico testing of the potential functional effects of MED12 variants and thus can be a valuable addition to the interpretation of the clinical and genetic findings.
By the way, Med12 is one of the 4 proteins shown in Fig. 5 in this OP: it is 2177 AAs long, and exhibits a huge information jump in vertebrates.
Specific TFs
OK, let’s abandon, for the moment, the promoter and its initiation complex, and consider what happens at the distant enhancer site. Here, in some apparently unrelated place in the genome, which can be even 1 Mbp away, sometimes even on other chromosomes, the enhancer/specific TFs interaction takes place.
Now, we have already seen the general TFs that work at the promoter site. However fascinating, they number only six in total (in metazoa).
But what about specific TFs?
Specific TFs are the molecules that are the true center of transcription regulation: they are the main regulators, even if of course they act together with all the other things we have described and are going to describe.
Here is a very recent review (2018):
The Human Transcription Factors
Abstract:
Transcription factors (TFs) recognize specific DNA sequences to control chromatin and transcription, forming a complex system that guides expression of the genome. Despite keen interest in understanding how TFs control gene expression, it remains challenging to determine how the precise genomic binding sites of TFs are specified and how TF binding ultimately relates to regulation of transcription. This review considers how TFs are identified and functionally characterized, principally through the lens of a catalog of over 1,600 likely human TFs and binding motifs for two-thirds of them. Major classes of human TFs differ markedly in their evolutionary trajectories and expression patterns, underscoring distinct functions. TFs likewise underlie many different aspects of human physiology, disease, and variation, highlighting the importance of continued effort to understand TF-mediated gene regulation.
The paper is paywalled, but for those who can access it, I would really recommend reading it.
TFs are a very deep subject, so I will just list a few points about them that seem particularly relevant here:
- TFs are medium sized molecules. Median length in humans, for a set of 1613 TFs derived from the paper quoted above, is 501 AAs, and 50% of those TFs are in the 365-665 AAs range.
- They are highly modular objects. In essence, almost all TFs are made of at least two components:
- A highly conserved, well recognizable domain, called the DNA binding domain (DBD), which interacts with specific, short DNA motifs (usually 6-12 nucleotides).
- DBDs can be rather easily recognized and classified in families. There are about 100 known eukaryotic DBD types. Almost all known TFs contain at least one DBD, sometimes more than one. The most represented DBD families in humans are C2H2 zinc finger (more than 700), homeodomain (almost 200) and bHLH (more than 100). DBD domains are often rather short AA sequences: zinc fingers, for example, are about 23 AAs long (but they are usually present in multiple copies in the TF), while bHLH is about 50 AAs long, and homeodomains are about 60 AAs long. As said, they are usually old and very conserved sequences.
- DNA motifs are short nucleotide sequences (6-12 nucleotides), spread all over the genome. In total, over 500 motif specificity groups are present in humans. However, motifs are not at all specific or sufficient in determining TF binding, and many other factors must cooperate to achieve and regulate the actual binding of a TF to a DNA motif.
- At least one other sequence, which is usually longer and does not contain recognizable domains. These sequences are often highly disordered, are less conserved, and may have important regulatory functions. In some cases, other specific domains are present: for example, in the family of nuclear receptors, the TF shows, together with the DBD and the non domain sequence, a ligand domain which interacts with the hormone/molecule that conveys the signal.
- There are a lot of them. The above quoted paper, probably the most recent about the issue, gives a total of 1639 proteins that are known or likely TFs in humans, but the list is almost certainly not complete. It is very likely that there are about 2000 TFs in humans, which is about 10% of protein coding genes. Of course, all these are specific TFs (except for the 6 general TFs mentioned earlier). So, this is probably the biggest regulatory network in the cell.
- The way they work is still poorly understood, except of course for the DNA binding. It is rather certain that they usually work in groups, combinatorially, and by recruiting also other (non TF) proteins or molecules. The above mentioned paper lists many possible mechanisms of action and regulation for TF activity:
- Cooperative binding: TFs often aid each other in binding to DNA: that can imply also forming homodimers or higher order structures.
- Interaction and competition with nucleosomes, in some cases by recruiting ATP-dependent chromatin remodelers and other TFs
- Recruiting of cofactors (‘‘coactivators’’ and ‘‘corepressors’’), which are frequently large multi-subunit protein complexes or multi-domain proteins that regulate transcription via several mechanisms. The ligand-binding domains of the nuclear hormone receptor subclass of TFs, already quoted before, are a special case of that.
- Exploiting unstructured regions and/or DBDs to interact with cofactors
- It is also wrong to classify individual TFs as “activators” or “repressors”. I quote from the paper: Because effects on transcription are so frequently context dependent, more precise terminology may be warranted, in general — for example, reflecting the biochemical activities of TFs and their cofactors. On a global level, however, there is no comprehensive catalog of cofactors recruited by TFs. Moreover, the biochemical functions required for gene activation or communication between enhancers and promoters remain largely unknown.
- As a class, their evolutionary history in terms of human conserved information is well comparable to the mean pattern of the whole human proteome. In particular, they do not exhibit, as a class, any special information jump in vertebrates (mean = 0.293 baa in TFs vs 0.288 baa in the whole human proteome).
Fig. 6 shows the mean evolutionary history of 1613 human TFs, in terms of baa of human conserved information, as compared to the mean values for the whole human proteome:
So, in brief: one or more specific TFs bind some specific enhancer in some part of the genome, and the specific big structure at the enhancer (enhancer + specific TFs + cofactors) in some way binds the general big structure at the promoter (promoter + RNA Pol II + general TFs + Mediator), and, probably acting on the Mediator complex, regulates the activity of the RNA polymerase and therefore the rate of transcription.
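To make the DBD/motif interaction described above concrete, here is a toy motif scan (my own illustration, using the E-box consensus CANNTG bound by many bHLH TFs, with N standing for any base). As stressed above, motif hits alone vastly over-predict real binding: chromatin context, cofactors and cooperativity decide where a TF actually binds.

```python
# Toy version of TF motif recognition: scan a DNA sequence for a short
# consensus motif (here the E-box "CANNTG", with N = any base).
import re

def find_motif(sequence: str, consensus: str = "CANNTG") -> list[int]:
    """Return 0-based start positions of every match of the consensus."""
    pattern = consensus.replace("N", "[ACGT]")
    # a lookahead so that overlapping matches are also reported
    return [m.start() for m in re.finditer(f"(?={pattern})", sequence)]

print(find_motif("GGCACGTGTTCAGCTGAA"))  # [2, 10]
```

The two hits here are CACGTG and CAGCTG, both valid instances of the consensus; a real genome of 3 Gbp would contain millions of such hits, only a tiny fraction of which are functional binding sites.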
The interaction between a distant enhancer and the promoter has one important and immediate consequence: the chromatin fiber bends, and forms a specific loop (Fig. 7):
Fig. 7: Diagram of gene transcription factors
By Kelvin13 [CC BY 3.0 (https://creativecommons.org/licenses/by/3.0)], from Wikimedia Commons https://commons.wikimedia.org/wiki/File:Transcription_Factors.svg
And, just as a final bonus about trans regulation of transcription, guess what is implied too? Of course, long non coding RNAs! See here:
Noncoding RNAs: Regulators of the Mammalian Transcription Machinery
Abstract
Transcription by RNA polymerase II (Pol II) is required to produce mRNAs and some noncoding RNAs (ncRNAs) within mammalian cells. This coordinated process is precisely regulated by multiple factors, including many recently discovered ncRNAs. In this perspective, we will discuss newly identified ncRNAs that facilitate DNA looping, regulate transcription factor binding, mediate promoter-proximal pausing of Pol II, and/or interact with Pol II to modulate transcription. Moreover, we will discuss new roles for ncRNAs, as well as a novel Pol II RNA-dependent RNA polymerase activity that regulates an ncRNA inhibitor of transcription. As the multifaceted nature of ncRNAs continues to be revealed, we believe that many more ncRNA species and functions will be discovered.
Finally, we have to consider the role of chromatin states.
Chromatin states and epigenetics
Chromatin accessibility
For all those things to happen, one condition must be satisfied: the DNA sequences involved, IOWs the gene, its promoter and the specific enhancers, must be reasonably accessible.
The point is that chromatin in interphase exists in different states, different 3D configurations and different spatial distributions in the nucleus, especially in relation to the nuclear lamina. In general, heterochromatin is the condensed, functionally inactive form, and is mainly associated with the nuclear lamina (the periphery), while euchromatin, the lightly packed and transcriptionally active form, with its transcriptional loops, lies more toward the center of the nucleus.
However, things are not so simple: chromatin states are not a binary condition (heterochromatin/euchromatin), and they are extremely dynamic: the general map of chromatin states is different from cell to cell, and in the same cell from time to time.
One way to measure chromatin accessibility (IOWs, to map which parts of the genome are accessible to transcription in a cell at a certain time) is to use a test that directly binds or marks the accessible regions in some way. There are many such tests; the most commonly used are DNase-seq (DNase I cuts only at the level of accessible chromatin) and ATAC-seq (insertions by the Tn5 transposase are restricted to accessible chromatin). ATAC-seq has also been applied at the single cell level, and the results are described in this wonderful paper:
A Single-Cell Atlas of In Vivo Mammalian Chromatin Accessibility
Summary: We applied a combinatorial indexing assay, sci-ATAC-seq, to profile genome-wide chromatin accessibility in ∼100,000 single cells from 13 adult mouse tissues. We identify 85 distinct patterns of chromatin accessibility, most of which can be assigned to cell types, and ∼400,000 differentially accessible elements. We use these data to link regulatory elements to their target genes, to define the transcription factor grammar specifying each cell type, and to discover in vivo correlates of heterogeneity in accessibility within cell types. We develop a technique for mapping single cell gene expression data to single-cell chromatin accessibility data, facilitating the comparison of atlases. By intersecting mouse chromatin accessibility with human genome-wide association summary statistics, we identify cell-type-specific enrichments of the heritability signal for hundreds of complex traits. These data define the in vivo landscape of the regulatory genome for common mammalian cell types at single-cell resolution.
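To make the idea of accessibility mapping a little more concrete, here is a toy Python sketch of the logic behind an ATAC-seq style readout: Tn5 insertion events pile up in accessible regions, so binning hypothetical insertion coordinates into windows and keeping the windows with enough events gives a crude "accessibility map". All coordinates and thresholds below are invented for illustration; real peak callers model background, duplicates, fragment sizes and much more.

```python
from collections import Counter

def accessible_windows(insertion_positions, window_size=500, min_insertions=3):
    """Toy accessibility caller: bin Tn5 insertion coordinates into
    fixed-size windows and call a window 'accessible' when it collects
    at least min_insertions events."""
    counts = Counter(pos // window_size for pos in insertion_positions)
    return {w: n for w, n in counts.items() if n >= min_insertions}

# Hypothetical insertion coordinates on one chromosome:
insertions = [120, 180, 240, 410, 9000, 10100, 10150, 10220, 10480]
print(accessible_windows(insertions))  # {0: 4, 20: 4}
```

The lone insertion near position 9000 is discarded as background; the two clusters are called accessible.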
Epigenetic states
Fig. 8 DNA methylation landscape By Mariuswalter [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons https://commons.wikimedia.org/wiki/File:DNAme_landscape.png
The paper uses 9 different histone marks, 5 methylations and 2 acetylations of histone H3, 1 methylation of histone H4 and the mapping of CTCF (see later), to map 30 different states in the genome of 3 different types of human cells. For example, you can see in Figure 2a the 30 states (N1-30) and the 14 known transcriptional states that they are related to (the color code on the left). So, for example, state N8, which corresponds to the brown color code of “poised enhancer“, is marked by high levels of H3K27me3 (IOWs trimethylation of lysine 27 on histone H3) and low levels of H4K20me1 (IOWs monomethylation of lysine 20 on histone H4). The first modification has a meaning of transcriptional repression, while the second is a marker of transcriptional activation. IOWs, these nucleosomes are pre-activated, but “poised”. A similar situation can be observed in state N7, corresponding to “bivalent promoter“, where the repressive mark of H3K27me3 is associated with mono-, di- and trimethylation of lysine 4, always on histone H3, which are activating signals. These bivalent conditions, both for promoters and enhancers, are usually found in stem cells, where many genes are in a “pre-activated state”, momentarily blocked by the repressive signal, but ready to be activated for differentiation.
This is just to give an idea. So, this kind of analysis can well predict some of the results of the already mentioned chromatin accessibility tests, and it also relates well to the investigation of chromatin 3D configurations, which we will discuss in the next section.
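The combinatorial logic of reading histone marks can be caricatured in a few lines of code. The rules below are purely illustrative (they are not the paper's actual 30-state model); only the mark names are real, and the labels loosely echo the N7/N8 states discussed above.

```python
# Illustrative rules: each entry maps a required set of "high" histone
# marks to a chromatin-state label. Order matters: more specific
# combinations are tested first, mimicking combinatorial reading.
RULES = [
    ({"H3K27me3", "H3K4me3"}, "bivalent promoter (cf. state N7)"),
    ({"H3K27me3"}, "repressed / poised (cf. state N8)"),
    ({"H3K4me3"}, "active promoter"),
]

def call_state(high_marks):
    """Return the first state whose required marks are all present."""
    high_marks = set(high_marks)
    for required, label in RULES:
        if required <= high_marks:
            return label
    return "unclassified"

print(call_state({"H3K27me3", "H3K4me3"}))  # bivalent promoter (cf. state N7)
print(call_state({"H3K4me3"}))              # active promoter
```

The real "readers" are, of course, protein complexes, not lookup tables, but the combinatorial principle is the same.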
The following video is a good and simple review of the main aspects of the histone code.
But how are these histone modifications achieved?
Again, each of them is the result of very complex pathways, many of them still poorly understood.
For example, H3K4me3, one of the main activating marks, is produced by a very complex multi-protein complex, involving at least 10 different proteins, some of them really big (for example, MLL2, 5537 AAs long). Moreover, the different pathways that implement different marks obviously exhibit complex crosstalk, creating intricate networks. Those pathways are not only writers of histone marks, but also readers of them: indeed, the modifications effected are always determined by the reading of already existing modifications. And, of course, there are also eraser proteins.
All these concepts are dealt with in some detail in the following paper:
The interplay of histone modifications – writers that read
A final and important question is: how do histone modifications implement their effects, IOWs the chromatin modifications that imply activation or repression of the genes? Unfortunately, this is not well understood. But:
- For some modifications, especially acetylation, part of the effect can probably be ascribed to the direct biochemical effect of the modification on the histone itself
- Most effects, however, are probably implemented through the recruitment by the histone modification, often in a combinatorial manner, of other “reader” proteins, which are responsible, directly or indirectly, for the activation or repression effect
The second modality is the foundation for the concept of histone code: in that sense, histone marks work as signals of a symbolic code, whose effects in most cases are mediated by complex networks of proteins which can write, read or erase the signals.
3D configuration of Chromatin
As said, one of the final effects of epigenetic markers, either DNA methylation or histone modifications, is the change in the 3D configuration of chromatin, which in turn is related to chromatin accessibility and therefore to transcription regulation.
This is, again, a very deep and complex issue. There are specific techniques to study chromatin configuration in space, independent from the mapping of chromatin accessibility and of epigenetic markers that we have already discussed. The most used are chromosome conformation capture (3C) and genome-wide 3C (Hi-C). Essentially, these techniques are based on specific procedures of fixation and digestion of chromatin that preserve chromatin loops and allow us to analyze them, and therefore the associations between distant genomic sites (IOWs, enhancer-promoter associations), in specific cells and in specific cell states.
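As a toy illustration of how structure can be read out of Hi-C-style data, here is a minimal "insulation score": for each genomic bin, average the contact frequencies that cross it; positions with few crossing contacts (low scores) behave like boundaries. The contact matrix and window size below are invented for the example, and real boundary callers are far more refined.

```python
def insulation_scores(contacts, w=2):
    """Toy insulation score: for each bin i, average the contact
    frequencies between the w bins upstream and the w bins downstream
    of i. Scores dip where few contacts cross a position, which is
    (in much more refined form) how TAD boundaries are called."""
    n = len(contacts)
    scores = []
    for i in range(w, n - w):
        window = [contacts[a][b]
                  for a in range(i - w, i)
                  for b in range(i + 1, i + 1 + w)]
        scores.append(sum(window) / len(window))
    return scores

# Invented 8-bin map: two 4-bin "domains" with high internal contacts (10)
# and low cross-boundary contacts (1).
HIGH, LOW = 10, 1
contacts = [[HIGH if (a < 4) == (b < 4) else LOW for b in range(8)]
            for a in range(8)]
print(insulation_scores(contacts))  # [5.5, 1.0, 1.0, 5.5]: dip at the boundary
```

The score dips exactly where the two invented domains meet, which is the intuition behind TAD boundary detection discussed below.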
Again to make it brief, chromatin topology depends essentially on at least two big factors:
- The generation of specific loops throughout the genome because of enhancer-promoter associations
- The interactions of chromatin with the nuclear lamina
As a result of those, and other, factors, chromatin generates different levels of topological organization, which can be described, in a very gross simplification, as follows, going from simpler to more complex structures:
- Local loops
- Topologically associating domains (TADs): these are bigger regions that delimit and isolate sets of specific interaction loops. They can correspond to the idea of isolated “transcription factories”. TADs are separated, at the genomic level, by specific insulators (see later)
- Lamina associated domains (LADs) and Nucleolus associated domains (NADs): these usually correspond to mainly inactive chromatin regions
- Chromosomal territories, which are regions of the nucleus preferentially occupied by particular chromosomes
- A and B nuclear compartments: at a higher level, chromatin in the nucleus seems to be divided into two gross compartments: the A compartment is mainly formed by active chromatin, the B compartment by repressed chromatin
Figure 9 shows a simple representation of some of these concepts.
Fig. 9 A graphical representation of an insulated neighborhood with one active enhancer and gene with corresponding enhancer-gene loop and CTCF/cohesin anchor loop. By Angg!ng [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons https://commons.wikimedia.org/wiki/File:InsulatedNeighborhood.svg
The concept of TAD is particularly interesting, because TADs are insulated units of transcription: many different enhancer-promoter interactions (and therefore loops) can take place inside a TAD, but not usually between one TAD and another one. This happens because TADs are separated by strong insulators.
A very good summary about TADs can be found in the following paper:
This is taken from Fig. 1 in that paper, and gives a good idea of what TADs are:
Fig. 10: Structural organization of chromatin
(A) Chromosomes within an interphase diploid eukaryotic nucleus are found to occupy specific nuclear spaces, termed chromosomal territories.
(B) Each chromosome is subdivided into topological associated domains (TAD) as found in Hi-C studies. TADs with repressed transcriptional activity tend to be associated with the nuclear lamina (dashed inner nuclear membrane and its associated structures), while active TADs tend to reside more in the nuclear interior. Each TAD is flanked by regions having low interaction frequencies, as determined by Hi-C, that are called TAD boundaries (purple hexagon).
(C) An example of an active TAD with several interactions between distal regulatory elements and genes within it. Source: Matharu, Navneet (2015-12-03). “Minor Loops in Major Folds: Enhancer–Promoter Looping, Chromatin Restructuring, and Their Association with Transcriptional Regulation and Disease“. PLOS Genetics 11 (12): e1005640. DOI:10.1371/journal.pgen.1005640. PMID 26632825. PMC: PMC4669122. ISSN 1553-7404.
Author: Navneet Matharu, Nadav Ahituv
By Navneet Matharu, Nadav Ahituv [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons
There is, of course, a good correlation between the three types of analysis and genomic mapping that we have described:
- Chromatin accessibility mapping
- Epigenetic marks
- Chromatin topology studies
However, these three approaches are different, even if strongly related. They are not measuring the same thing, but different things that contribute to the same final scenario.
CTCF and Cohesin
But what are these insulators, the boundaries that separate TADs one from another?
While the nature of insulators can be complex and varies somewhat from species to species, in mammals the main proteins responsible for that function are CTCF and cohesin.
CTCF is indeed a TF, a zinc finger protein with repressive functions. While it has other important roles, it is the major marker of TAD insulators in mammals. It is 727 AAs long in humans, and its evolutionary history shows a definite information jump in vertebrates (0.799 baa, 581 bits) as shown in Fig. 5, which is definitely uncommon for a TF.
We have already encountered CTCF as one of the epigenetic markers used in histone code mapping. Its importance in transcription regulation and in many other important cell functions cannot be overemphasized.
Cohesin is a multiprotein complex which forms a ring around the double stranded DNA, and contributes to a lot of important stabilizations of the DNA fiber in different situations, especially mitosis and meiosis. But we know now that it is also a major actor in insulating TADs, as can be seen in Fig 4, and in regulating chromatin topology. Cohesin and its interacting proteins, like MAU2 and NIPBL, are a fascinating and extremely complex issue of their own, so I just mention them here because otherwise this already too long post would become unacceptably long. However, I suggest here a final, very recent review about these issues, for those interested:
Forces driving the three‐dimensional folding of eukaryotic genomes
Abstract:
The last decade has radically renewed our understanding of higher order chromatin folding in the eukaryotic nucleus. As a result, most current models are in support of a mostly hierarchical and relatively stable folding of chromosomes dividing chromosomal territories into A‐ (active) and B‐ (inactive) compartments, which are then further partitioned into topologically associating domains (TADs), each of which is made up from multiple loops stabilized mainly by the CTCF and cohesin chromatin‐binding complexes. Nonetheless, the structure‐to‐function relationship of eukaryotic genomes is still not well understood. Here, we focus on recent work highlighting the biophysical and regulatory forces that contribute to the spatial organization of genomes, and we propose that the various conformations that chromatin assumes are not so much the result of a linear hierarchy, but rather of both converging and conflicting dynamic forces that act on it.
Summary and Conclusions
So this is the part where I should argue about how all the things discussed in this OP do point to design. Or maybe I should simply keep silent in this case. Because, really, there should be no need to say anything.
But I will. Because, you know, I can already hear our friends on the other side argue, debate, or just suggest, that there is nothing in all these things that neo-darwinism can’t explain. They will, they will. Or they will just keep silent.
So, I will briefly speak.
First of all, a summary of what has been said. I will give it as a list of what really happens, as far as we know, each time that a gene starts to be transcribed in the appropriate situation: maybe to contribute to the differentiation of a cell, maybe to adjust to a metabolic challenge, or to anything else.
- So, our gene was not transcribed, say, “half an hour ago”, and now it begins to be transcribed. What has happened to effect this change?
- As we know, first of all some specific parts of DNA that were not active “half an hour ago” had to become active. At the very least, the gene itself, its promoter, and one appropriate enhancer. Therefore, some specific condition of the DNA in those sites must have changed: maybe through changes in histone marks, maybe through chromatin remodeling proteins, maybe through some change in DNA methylation, maybe through the activity of some TF, or some multi-protein structure made by TFs or other proteins, maybe in other ways. What we know is that, whatever the change, in the end it has to change some aspects of the pre-existing chromatin state in that cell: chromatin accessibility, nucleosome distribution, 3D configuration, probably all of them. Maybe the change is small, but it must be there. In our Fig. 2 (at the beginning of this long post) the red arrows are therefore acting from left to right, to effect a transition from state 1 to state 2.
- So, the appropriate DNA sequences are now accessible. What happens then?
- At the promoter, we need at least that the multiprotein structure formed by our 6 general TFs and the multiprotein structure that is RNA Pol II bind the promoter. See Figure 3.
- Still at the promoter, the huge multiprotein structure which is the Mediator complex must join all the rest. See Figure 4.
- At the enhancer, one or more specific TFs must bind the appropriate motif through the appropriate DBD, interact with one another, and recruit possible co-factors.
- At this point, the structure bound at the enhancer must interact with the distant structure at the promoter, probably through the Mediator complex, generating a new chromatin loop, usually in the context of the same TAD. See Fig. 7.
- So, now the 3D configuration of chromatin has changed, and transcription can start.
- But as the new gene is transcribed, and the new protein then probably translated (through many further intermediate regulation steps, of course, like the Spliceosome and all the rest), the transcriptome/proteome is changing too. In many cases, that will imply changes in factors that can act on chromatin itself, for example if the new protein is a TF, or any other protein involved directly or indirectly in the above described processes, or even if it can in some way generate new signals that will in the end act on transcription regulation. Maybe the change is small, but it must be there. In our Fig. 2 (at the beginning of this long post) the red arrows are now probably acting from right to left, possibly initiating a transition from state 2 to state 3.
- After all, that is what must have happened at the beginning of this sequence, when some new condition in the transcriptome/proteome started the transcription of our new protein.
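The whole sequence above can be caricatured as a checklist: transcription starts only once every precondition is met. This is a mnemonic sketch only; the real process is neither strictly sequential nor anywhere near this simple, and the step names are my own shorthand for the items in the list.

```python
# A toy "checklist" rendering of the steps above.
STEPS = [
    "chromatin made accessible (marks / remodeling / methylation changes)",
    "general TFs + RNA Pol II assemble at the promoter",
    "Mediator complex joins the promoter machinery",
    "specific TFs (+ cofactors) bind the enhancer",
    "enhancer-promoter loop forms (usually within one TAD)",
    "transcription starts; the transcriptome/proteome shifts in turn",
]

def try_to_transcribe(completed):
    """Return the first unmet precondition, or None if all are met."""
    for step in STEPS:
        if step not in completed:
            return step
    return None

# Everything is in place except the loop: that is what blocks us.
blocker = try_to_transcribe(set(STEPS[:4]))
print(blocker)  # enhancer-promoter loop forms (usually within one TAD)
```

The point of the toy is simply that removing any one item stalls the whole chain, which is the sense of "irreducibly complex regulation" argued below.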
And now, a few considerations:
- This is just an essential outline: what really happens is much, much more complex
- As we have seen, the working of all this huge machinery requires a lot of complex and often very specific proteins. First of all the 2000 specific TFs, and then the dozens, maybe hundreds, of proteins that implement the different steps. Many of which are individually huge, often thousands of AAs long.
- The result of this machinery and of its workings is that thousands of proteins are transcribed and translated smoothly at different times and in different cells. The result is that a stem cell is a stem cell, a hepatocyte a hepatocyte and a lymphocyte a lymphocyte. IOWs, the miracle of differentiation. The result is also that liver cells, renal cells, blood cells, after having differentiated to their “stable” state, still perform new wonders all the time, changing their functional states and adapting to all sorts of necessities. The result is also that tissues and organs are held together, that 10^11 neurons are neatly arranged to perform amazing functions, and so on. All these things rely heavily on a correct, constant control of transcription in each individual cell.
- This scenario is, of course, irreducibly complex. Sure, many individual components could probably be shown not to be absolutely necessary for some rough definition of function: transcription can probably initiate even in the absence of some regulatory factor, and so on. But the point is that the incredibly fine regulation of the whole process, its management and control, certainly require all or almost all the components that we have described here.
- Beyond its extraordinary functional complexity, this regulation network also uses at its very core at least one big sub-network based on a symbolic code: the histone code. Therefore, it exhibits a strong and complex semiotic foundation.
So, the last question could be: can all this be the result of a neo-darwinian process of RV + NS of simple, gradual steps?
That, definitely, I will not answer. I think that everybody already knows what I believe. As for others, everyone can decide for themselves.
PS: Here is a scatterplot of some values of functional information obtained by my method as compared to the values given by Durston, as per request of George Castillo. As can be seen, the correlation is quite good, even with all the difficulties of comparing the two methods, which are quite different in many respects. However, my method definitely underestimates functional information as compared to Durston’s (or vice versa).
PPS: More graphs added as per request of George Castillo. The explanation is in comment #270.
Yeah GP!
Hi, UB! 🙂
This OP originated from an interesting discussion in this older thread:
https://uncommondescent.com/intelligent-design/chromatin-topology-the-new-and-latest-functional-complexity/
in some way originated by comments made by George Castillo.
I hope we can continue some discussion here with those interested.
Yet another GP tour de force.
Hi KF! 🙂
Always good to hear from you.
UB:
I hope you appreciate the part about the histone code! 🙂
It’s interesting that the ubiquitin code, that, as you know, I have discussed elsewhere:
https://uncommondescent.com/intelligent-design/the-ubiquitin-system-functional-complexity-and-semiosis-joined-together/
and the histone code that I discuss here have some special overlap, and probably crosstalk.
Indeed, ubiquitination is one of the histone marks involved in the histone code.
Here is a paper about that:
Histone Ubiquitination: Triggering Gene Activity
https://www.sciencedirect.com/science/article/pii/S1097276508001330
For example, for ubiquitination of Lys 120 in histone H2B, an activating marker, a very specific E3 ligase is required:
RNF20/40 E3 ubiquitin-protein ligase complex
a heterodimer of about 2000 AAs. The interesting thing is that the same protein seems to be a “prerequisite” for specific methylations of histone H3, as can be read in Uniprot:
Crosstalks, crosstalks…
Wow!
Here he goes again!
🙂
Hi OLV,
good to see you again! 🙂
A heavy-duty biology jewel indeed!
Thank you, Peter. 🙂
It required a little bit of work, but it was very rewarding. Writing an OP is always a great occasion to delve more deeply into these fascinating issues!
Delightfully written.
The “boring simplicity” revealed by the leading edge biology research these days has been publicly unveiled here by gpuccio.
Well done!
jawa,
Yes, that’s exactly right.
This OP is much more insightful than some papers I’ve seen in prestigious peer-reviewed journals. Definitely university textbook material.
gpuccio,
Thank you!
I’m studying this excellent product of your dedicated work.
BTW, it’s funny that George Castillo deserves some credits for inspiring -at least partially- the author of this OP.
This article is too technical for my poor knowledge of the topic, but the title contains a word that is not “politically correct” these days.
May God bless gpuccio.
PeterA, jawa, PaoloV:
Thank you for your beautiful words.
You have been a great source of inspiration to me, with your contributions to the previous thread where the discussion began.
And yes, I agree: George Castillo deserves some credits too! 🙂
Paolo,
Actually, the title contains two terms that could be unacceptable in certain circles. It’s widely known that the appearance of “engineering” within biology is only an illusion. Obviously the “m” word is totally “out of context” but perhaps gpuccio didn’t understand George Castillo’s comments in the older discussion. 🙂
gpuccio,
I appreciate your tremendous dedication to help others understand this fascinating but very difficult area of science. I’m sure other folks agree with me in this.
Hey guys,
Let’s get to work.
There’s plenty of very interesting material in this OP that is suitable for discussion.
Let’s read it carefully and ask any questions about the content.
OLV,
Good point. Totally agree. Thanks.
Guys,
I would like to quote here one really amazing statement made by George Castillo in the thread where the discussion originated.
Here:
Chromatin Topology: the New (and Latest) Functional Complexity
https://uncommondescent.com/intelligent-design/chromatin-topology-the-new-and-latest-functional-complexity/
at comment #182, he said:
(Emphasis mine)
I disagree.
A lot beyond that has been shown to exist in vivo, during interphase. This OP is of course an attempt to show that.
I have often seen reductionism and denial, but the above statement, IMO, really is the ultimate!
gpuccio,
The statement you quoted disregards the available evidence.
This new OP discredits that quoted statement even further.
Hello GP, long day, I am just now able to sit down and take in your OP. Thank you dearly for writing it.
By the way, George Castillo just resurfaced in the old thread to add some new nonsense.
Being probably a little self-destructive, I invited him to come here, read and, if he likes, comment!
Wow, GP!
If you don’t mind, a question slightly related to this topic as well since you have pictures of information quantities.
When you calculate the number of states that can be visited by an evolutionary walk, you arrive at a value of 2^140 states. Then you reason about information deltas: if the absolute value of a delta related to some biological change is greater than 140 bits, then we infer design. It is this “then” that is not entirely clear to me.
Could you expound a bit on how this estimate of 140 bits relates to the methods of determining functional information quantities in polypeptides. What is not clear to me in this reasoning is how we connect the dots between these two things: (a) the estimate based on the number of states visitable by an evolutionary walk and (b) the methods of calculating functional information in an amino acid sequence.
E.g. an estimate of the absolute value of information content for the human genome is about 70 MB. Does it mean that any deltas I can get by evolution are within 140 bits on top? What about duplication and recombination? From information theory books, we can get that the information gain per generation with sexual reproduction is of the order sqrt(G), where G is the genome size (if I remember rightly). Surely it can be greater than 140 bits.
Can you see my question?
To all:
As usual, I will try to use the discussion to highlight in more detail some recent aspects of what has been discussed in the OP.
Let’s start with non coding RNAs, a subject that is always interesting and important from an ID perspective. The ex-junk, let’s say. 🙂
I have briefly mentioned in the OP (at the end of the “Specific TFs” section) their newly discovered roles in transcription regulation.
Well, this paper is brand new (July 2018):
Emerging Roles of Non-Coding RNA Transcription
https://www.cell.com/trends/biochemical-sciences/fulltext/S0968-0004(18)30104-X?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS096800041830104X%3Fshowall%3Dtrue
Emphasis mine.
This is definitely a new and interesting concept: that non coding RNAs may act in a location dependent, and sequence and structure independent, modality. That could help explain their low conservation even in the presence of refined function.
So, non coding RNAs could act as a strange mixture of cis and trans regulatory elements.
This is not so surprising, after all, if we consider the double nature of promoters and enhancers too, as described in the OP: they certainly act as cis regulatory elements, as DNA sequences, but while they do that they are also transcribed into promoter and enhancer associated RNAs, and those RNAs too have direct effects on transcription regulation.
Definitely, things become more complex (and more interesting) practically with each new day. 🙂
GP,
And can I please use my chance to reiterate my request to this blog to create an index by author. GP, I am following your contributions here and it is getting out of control at this end 😉 I’d like to have an index here like on evolutionnews.org instead of having to bookmark stuff in the browser.
Thanks.
Eugene S:
Thank you for the interesting question, which requires a detailed answer.
“When you calculate the number of states that can be visited by evolutionary walk, you arrive at a value of 2^140 states.”
That’s correct. Only, remember that this is really a big overestimation, just to be on the safe side. The real number of individual states that can be reached is probably much lower.
I would like to clarify that with the word “state” I mean some specific new genomic configuration, as it can derive from reproduction which involves some genomic variation. The reason for that is that the whole genome, as it emerges from reproduction, is the functional unit that is subject to natural selection, if any.
“Then you reason about information deltas: if the absolute value of a delta related to some biological change is greater than 140 bits, then we infer design.”
This is not really correct, if you are referring to absolute information content, as it seems from your following remarks. I believe that this could be your main misunderstanding, so I will try to clarify it better. I apologize in advance if instead the concept was already clear to you.
The important point is: I never reason in terms of absolute information content, only in terms of functional information. Indeed, all the “jumps” I analyze and discuss are jumps (deltas) in functional information.
Why? Because I have applied a special procedure, which I have tried to explain in some detail when possible.
What I measure is “human conserved information”, IOWs the bits of homology to the human form of the protein. Another way to say it is that I use human proteins as “probes” to measure the evolutionary history of proteins in relation to the form that the protein assumes in humans.
I could as well use as probes the proteins in bees, for example (indeed, I have done that in some cases, for specific comparisons). My choice of human proteins as “measuring probes” has, however, a few important motivations:
a) Of course, we are naturally interested in human functions
b) The human proteome is probably the best investigated and reviewed
c) Humans are a very recent species, so human proteins can be considered, in their final form, a recent result
d) I am especially interested in the transition to vertebrates, and humans are recent vertebrates. So, the time distance between the original split to vertebrates (and then from cartilaginous fish to bony fish) and the recent split to humans is more than 400 million years
So, why is a jump in human conserved information observed after the split to vertebrates (in my graphs, that is the transition from non vertebrate deuterostomes to cartilaginous fish) a good measure of a variation in functional information?
Well, the reasoning is simple, and it relies on assumptions that cannot be easily denied, especially by neo-darwinists.
If some specific sequence appears for the first time after the split to vertebrates, and is then conserved for more than 400 My, then we can safely assume that the sequence is functional. Indeed, the measure of how conserved it is (IOWs, the bits of information that appear for the first time in vertebrates and are conserved up to humans) is a measure of its functional constraint.
So, if a protein homologous to some human protein (let’s call it A) has a maximum homology hit with the human form, before the appearance of vertebrates, of say 300 bits, and we then find 1000 bits of homology in cartilaginous fish, we can say that 700 bits of human conserved, and therefore functional, information have appeared at the transition to vertebrates. If protein A is, say, 1000 AAs long, that is a jump of 0.7 bits per amino acid site (about one third of the total information content of the protein in a blast comparison).
The 400 million year gap is important: indeed, it is an evolutionary distance that guarantees that any non-functional sequence homology will be completely cancelled by neutral variation, as can easily be seen at synonymous sites. Therefore, any homology that is conserved for such a time is under extremely strong functional constraint.
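The computation itself is simple enough to write down explicitly. A sketch, using the worked example above (in practice, the two bit scores would be the best BLAST homology hits to the human form before and after the transition):

```python
def functional_jump(bits_before, bits_after, length_aa):
    """Jump in human-conserved functional information at a transition,
    returned as (absolute bits, bits per amino acid site).
    bits_before / bits_after: best homology bit score to the human form
    in the earlier and later groups (e.g. non-vertebrate deuterostomes
    vs. cartilaginous fish)."""
    delta = bits_after - bits_before
    return delta, delta / length_aa

# Worked example from the text: 300 bits before vertebrates,
# 1000 bits in cartilaginous fish, for a protein 1000 AAs long.
delta_bits, baa = functional_jump(300, 1000, 1000)
print(delta_bits, baa)  # 700 0.7
```

So the "jump" is just the delta of best-hit bit scores across the transition, normalized by protein length to give baa.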
So, I hope that I have explained my “methods of calculating functional information in an amino acid sequence”, your point b). This is not the only way to do it, of course. The Durston method, which has inspired all my reasonings, is slightly different. However, I find this method quick and reliable.
The connection with your point a) (“the estimate based on the number of states visitable by an evolutionary walk”) is rather direct: if we exclude that natural selection is a relevant factor for complex functions (the arguments to exclude it are different in nature, and you can find them in my OP:
What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson
https://uncommondescent.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/
then the only way for a non-design system to reach a functional island is to have probabilistic resources comparable to the functional complexity of that functional island.
Remember that the functional complexity, measured as described above, is a measure not of the absolute information content, but of the ratio between the target space and the search space, IOWs a measure of the probability to find the target space by a single random search. That’s why blast homologies are directly transformed, in the blast algorithm, into E values, that are a slitly different, but related concept (the expected number of random hits of that level by the type of search done by the algorithm in the available database).
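That relation between bit score and E-value can be sketched as follows; this uses the standard Karlin-Altschul form E = m·n·2^(−S), and the query and database sizes below are made-up illustrative values, not real blast settings:

```python
def expected_hits(bit_score: float, query_len: int, db_len: int) -> float:
    """Expected number of chance hits scoring at least `bit_score` bits
    (Karlin-Altschul: E = m * n * 2**(-S), with m and n the effective
    query and database lengths)."""
    return query_len * db_len * 2.0 ** (-bit_score)

# A 700-bit hit is astronomically unlikely by chance, even against a
# very large database (the sizes here are illustrative assumptions):
e_value = expected_hits(700.0, 1000, 10**11)
print(e_value)  # on the order of 1e-197
```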
I am not sure why you say that:
“an estimate of the absolute value of information content for the human genome is about 70 MB”
The absolute value of information content for the whole human genome (3 Gbp) is certainly much more than that. The number you give is probably related to protein coding genes only.
However, as said, the total information content is not relevant in ID: only the functional information counts.
Duplication is not really an increase in functional information. It is similar to printing two copies of the same book.
Of course, if duplication in itself generates a new function, then the limited functional information linked to that event should be computed again by dividing the target space (the possible duplications that generate that function) by the search space (all the possible duplication events).
The same is true for recombinations.
The important point is that duplications and recombinations reuse existing functional information at the sequence level: they don’t create it. The only new functional information can derive from the new disposition.
So, let’s say that, in a very simple case, a recombination shifts two sequences in a protein, maybe changing the function. However, using a blast comparison, the sequence homology will not be significantly changed (because blast is a local alignment).
In the same way, a blast hit of a human protein against a group of organism does not depend on how many copies of that sequence are present in the organism: it just gives the highest homology hit in the group, the single sequence that is most similar to the human one.
So, neither sexual reproduction nor duplication nor recombination can generate any new functional sequence of more than 140 bits of functional information. Because 140 bits means that the event has a probability of 1 in 2^140 of happening, and the probabilistic resources of our biological world just cannot do it.
Please feel free to ask new questions, if my answers are not clear or sufficient. This is an important point, and it is not at all easily grasped.
Eugene S:
I absolutely agree with you that an index by author would be precious. But that is not something that I can do.
I don’t even know if it would be an easy task. Maybe Barry, Denyse, or those who work on the maintenance of the site, will consider your request, which has already gained the support of a few people, including me. 🙂
Eugene S:
A few more clarifications.
A recombination, whatever it is, is still only one new state. The same is true for duplications.
Any random variation, or group of variations, that happens in an ancestor and is transmitted is indeed one new state, be it functional, neutral or deleterious.
Let’s say that you need to generate a specific sequence of 100 AAs to implement a new function, and that the new function cannot really be implemented with any lower sequence information specificity.
Well, in theory you can get the right new sequence even in one attempt, for example by a frameshift mutation. But the probability that one random frameshift mutation gives the correct sequence is about 2^-430. Even with all the probabilistic resources of our biological world, there is no real chance that such a sequence may be found in that way.
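The arithmetic behind that “about 2^-430” figure (for a specific sequence of 100 AAs, as stated above) can be checked directly; note that the exact value is closer to 2^-432:

```python
import math

length_aa = 100                       # a specific sequence of 100 amino acids
bits_per_aa = math.log2(20)           # ~4.32 bits per site (20 possible AAs)
total_bits = bits_per_aa * length_aa  # ~432 bits, "about 430" in the text

# Probability that a single random attempt produces that exact sequence:
p = 20.0 ** (-length_aa)              # = 2**(-total_bits), about 7.9e-131
print(total_bits)
print(p)
```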
And, of course, neither duplication nor recombination nor sexual reproduction can help. Those mechanisms are part of the probabilistic resources, part of the 140 bits. There is no way that they can really find the needed sequence.
Because the needed sequence simply does not exist before. It is not there. So, no duplication or recombination or sexual crossing over has any superior chance of finding it, because they are still random events in relation to the sequence to be found. Each of them is still one random variation attempt, and nothing more.
GPuccio
Thank you very much for your prompt and detailed answers. I will give them a read offline.
And a special thank-you for raising your voice to push for an index by author here on this blog.
I really appreciate it.
GPuccio
Thank you again. Your answers are an OP in their own right. Of course, it is not just information, it is functional information! Different assumptions and metrics lead to different conclusions. I have only two questions as of now.
1. Just out of interest, could you sketch out how you arrived at your estimate of 2^-430 for the probability that one random frameshift mutation produces the correct sequence?
2. You mentioned that Durston’s method is a bit different. Could you give more details? I read the original paper, but that was a long time ago. Coming back is always good for reinforcing one’s understanding.
You generate your OPs at a rocket rate. I can’t keep up reading them. What’s more, they always instigate interesting discussions. I try to do my best to get through them as well.
Many Thanks!
Eugene S:
Thanks to you! 🙂
1. Oh, that was my error. I forgot to say that I was assuming a specific sequence of 100 AAs, which implies a functional information (absolute, not derived from blast comparisons) of about 4.3 bits per AA, and therefore 430 bits in total. I will correct my previous comment.
Please note that the absolute information value of one specific amino acid is about 4.3 bits (log2 of 20), while the highest bitscore that you get from a blast comparison, for identity, is about 2.2 bits per amino acid. That is due to how the blast algorithm works, and is evidence that my results, based on blast comparisons, are probably a vast underestimate of the real functional information. Which is good, for me, because I prefer to always be on the safe side.
2. The Durston method is different because he starts with a selected group of homologous proteins in different species, then does a multiple alignment of all of them, and applies a definite computation of the reduction of uncertainty at each amino acid site, based on Shannon’s formula. IOWs, he is comparing the variance at each amino acid site in a set of proteins constrained by a common function with the theoretical variance for random sequences, where each AA site can be occupied by any AA. He then sums the values for each site to get a global functional information value for that protein family.
In that case, one absolutely conserved AA contributes 4.3 bits of functional information, while a site with a random variance of AAs contributes 0 bits, and there are all possible intermediate situations.
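A much simplified sketch of that per-site reduction of uncertainty (ignoring gaps and the ground-state refinements of the published method; the toy alignment below is invented for illustration):

```python
import math
from collections import Counter

def site_information(column: str) -> float:
    """Durston-style reduction of uncertainty at one alignment column:
    log2(20) minus the Shannon entropy of the observed amino acids.
    A simplified sketch: the published method also uses a ground-state
    distribution and handles gaps explicitly."""
    n = len(column)
    entropy = -sum((c / n) * math.log2(c / n) for c in Counter(column).values())
    return math.log2(20) - entropy

# Toy alignment: 4 homologous sequences, 3 sites each (invented data)
alignment = ["MKV", "MKL", "MRV", "MKI"]
columns = ["".join(seq[i] for seq in alignment) for i in range(3)]

total = sum(site_information(col) for col in columns)
# A fully conserved column (e.g. "MMMM") contributes the full ~4.32 bits;
# more variable columns contribute less.
print(round(total, 2))
```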
Durston’s method is very good, but it is much more difficult to apply: you have to choose your set of proteins, align them and do the computations for each site. It has its potential biases too, because of course the choice of the sequences, and the manual review of the alignment, are very important.
In a sense, my method is easier to apply, and easier to verify by anyone. It just requires a correct use of the blast algorithm and of the available protein databases at the blast site.
The choice of considering the best hit for each group of organisms is very reliable, because at the levels of exponential improbability that are interesting for ID, the simple existence of some high homology for more than 400 million years is an undeniable sign of extreme functional constraint. If there are no errors in the database, it’s impossible that such high exponential values of homology could be due to any random variance.
The choice of using human proteins as probes to measure functional information has its definite reasons, as I have explained in my previous comments.
gpuccio,
I’m glad you left the other thread and started this.
Thanks.
GP
“a specific sequence of 100 AAs”
Yes, I would have thought that that was missing in the comment! Now everything clicks in.
Thank you very much for clarifying the differences with Durston’s method.
GP, I have now had a chance to read your OP a couple of times. Again, thank you for taking the time to write it. I think you have provided a fairly concise general overview of the process, and I really appreciated the extra links and graphics. I’m confident you’ve given interested UD readers a chance to become more familiar with transcription and regulation. It is nice to have so many aspects of the system covered in a single article, and I suspect that your readers will use this overview as a guide to seek more information. If your goal was to enable the opportunity to take (yet another) step in appreciating the vast complexity of such systems, then you’ve certainly hit your target. Bravo!
UB:
Yes, that was my goal indeed! Thank you. 🙂
I thought that it was important to have a reasonably detailed overview of the whole regulation of transcription in one article. It’s indeed one irreducible complex network, and we can only stand in awe of its intrinsic complexity and beauty.
And yes, it is definitely an invitation, to myself first of all, to deepen the knowledge and analysis of many different aspects of it.
For example, I give here a brief list of a few topics that certainly deserve great attention, here in the discussion or in the future:
1) The role of the Mediator complex as a hub that integrates different signals and filters them to the transcription machinery.
2) The role of RNAs transcribed from the promoter and from the enhancer.
3) The role of lncRNAs in transcription regulation.
4) The histone code.
5) The role of TADs as mega units of regulation.
To all:
Let’s say something about TADs (topologically associating domains). The idea is that promoter-enhancer contacts and loops happen inside greater compartments (TADs), delimited by specific insulators. So, enhancers in a TAD will often interact with promoters in the same TAD, while enhancer-promoter interactions between different TADs are possible, but rare.
TADs seem to be relatively stable, but their boundaries, as well as their states (activated or inactivated), can change in different cell types.
Here is a recent paper about TADs:
TADs are 3D structural units of higher-order chromosome organization in Drosophila.
Another one:
Principles of Chromosome Architecture Revealed by Hi-C.
https://www.ncbi.nlm.nih.gov/pubmed/29685368
And another one:
Gene functioning and storage within a folded genome.
https://www.ncbi.nlm.nih.gov/pubmed/28861108
Fig. 1 is simple and nice, showing the hierarchy between:
– Chromosomal territories
– A and B compartments
– TADs
– Chromatin loops
Other statements from that paper:
Interesting perspectives indeed! 🙂
#37 is a fascinating topic. Another article by itself.
It’s interesting to see how this excellent OP and the additional information posted in gpuccio’s comments attract so few commenters. The issues described here are a fundamental part of the current revolution in biology, which is going to take many people by surprise, when it finally gets noticed by the mainstream media.
Note that it took quite a long time for Nicolaus Copernicus’ brilliant discoveries to be published and much longer to get accepted.
The same is happening now in biology, which has become the actual queen of science, using heavy math, physics, chemistry, bioinformatics, modeling, electronics to understand the complex functionality seen in research.
The medical field depends on the advance in biology research.
The whole society benefits from it too.
Amazing discoveries in the near future will surprise many folks out there.
Just wait and see.
In the meantime, our appreciation to gpuccio for his strong dedication to studying these fascinating topics of biology and for sharing with the rest of us what he has learned.
I’m still processing the abundant information gpuccio shared here.
I may have a few questions about the OP, but will have to wait till I find more time to pose them clearly.
jawa,
Agree with your commentary.
However, regarding the statement “when it finally gets noticed by the mainstream media.” I would rather say that many biologists could be taken by surprise too. It will be interesting to see their reaction.
jawa, PeterA:
Good thoughts! 🙂
One of the problems, IMO, is that the current ideology in science and biology is to ignore intentional function, design, and in general teleology. At all costs.
It must be difficult, even for professional biologists, to really appreciate the multi-layered beauty of biological engineering and at the same time be forced to deny the functional depth and the wonderful richness of thought that pervade that engineering in all its parts.
You know, only for a limited number of times can one pretend to be amazed at the unending cleverness of unguided evolution, and really keep one’s intellectual, cognitive and moral integrity. Defending what is false, at all costs, has a definite price, even for the best and most intelligent people.
I like jawa’s analogy to Copernicus. The old establishment is deeply entrenched in archaic ideas that will fall like a house of cards.
gpuccio,
Are the histones also a product of the DNA – transcription – mRNA – translation process?
Are they part of the transcription mechanism?
If they require transcription but are part of the transcription mechanism, how did the first histones get produced?
Maybe I’m missing something.
Thanks.
jawa:
Of course histones are proteins, so they are the product of the transcription and translation machineries.
Like all other proteins, including those necessary for transcription, starting from the absolutely essential DNA-directed RNA polymerase and many others, and those necessary for translation, including the 20 aminoacyl-tRNA synthetases, the roughly 80 ribosomal proteins (in eukaryotes) and many others.
So, if you ask:
If proteins require transcription but they are part of the transcription mechanism, how does the first protein get produced?
you are definitely missing something, the same thing that we are all missing: how all the complex machinery that allows life to exist was generated.
Of course the only possible answer is that those things were designed, and that they were designed according to a general plan. However, even from a design point of view, it remains IMO a beautiful mystery.
Regarding transcription, in particular, we must consider that my OP is essentially about transcription regulation in eukaryotes. In prokaryotes, transcription regulation is rather different, and it is certainly simpler than in eukaryotes, but not simple at all! I have given only a few hints about the differences between prokaryotes and eukaryotes in the OP, because my purpose was to discuss eukaryotic transcription.
The simple fact is that eukaryotic transcription has a lot of new and unprecedented layers of implementation and regulation, even if it certainly reuses many features that are already there in prokaryotes. The roles of histones, of the Mediator complex, of chromatin and nuclear organization, are just a few important examples of eukaryotic novelties.
I would say that in the amazing history of life on our planet each single complex event, even the generation of a single new protein, is a wonderful example of design and engineering. But there are certainly a few major steps where the level of design innovation is almost undescribable. They are (at least):
a) Origin of life
b) The appearance of eukaryotes
c) The appearance of metazoa
d) The explosions of different organisms, body plans and phyla in metazoa, in particular the Ediacaran and Cambrian explosions
And, of course, there are many others (the explosion of flower plants, the transition to vertebrates, and so on).
The biggest mystery of all, probably, is that so many people are still convinced that the neo-darwinist theory can be a good explanation for those major events, while it can not even explain the appearance of a single new complex functional protein.
To all:
A few further thoughts about enhancers:
1) The most recent estimates of their number are now in the range of more than one million, or even millions. The simple truth is: nobody really knows how many of them can be found, for example, in the human genome.
2) According to the most recent estimates, they can well represent about 12% of the whole human genome: see Table 1 from the following paper:
GeneHancer: genome-wide integration of enhancers and target genes in GeneCards
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5467550/
Line 4 of the table, “All sources combined” gives the following numbers:
- Number of elements: 434,139
- Mean length: 1,233 bp
- % of the whole genome: 12.4%
So much for junk DNA!
So, enhancer DNA would amount to more than 8 times the protein-coding DNA, at least.
But if enhancers were really one million, or even more, as many believe, that figure could go up to 25% of the whole genome or more.
3) Enhancers apparently form some higher association structures, regions where many enhancers are present and that could represent specially important nodes of transcription regulation. Some refer to these as “super-enhancers”.
4) Great progress is being made in techniques that can image enhancer-promoter activity, and therefore 3D chromatin topology, dynamically, in space and time: we can expect many new important discoveries from that kind of research. Here are a couple of very recent examples:
Enhancer functions in three dimensions: beyond the flat world perspective
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5981187/
and:
Dynamic interplay between enhancer–promoter topology and gene activity
https://www.nature.com/articles/s41588-018-0175-z
Gpuccio, Lovely work 🙂
I’m too busy to post much, but lurking. And the chromatin take-down was excellent 😉
If I have time, may ask questions or add over the weekend once I have time to digest your full post!
ex-junk! Hmmm … yep… that’s a keeper! 🙂
former junk, formerly thought to be junk, surprise, this is not junk, DNA Junk found to have function Junk! 😉
Hahaha…. oh my!
When will blind, unguided Darwinist run out of room Junk for their theory to work Junk?
Have fun guys! This will be fun reading.
#45:
“The biggest mystery of all, probably, is that so many people are still convinced that the neo-darwinist theory can be a good explanation for those major events, while it can not even explain the appearance of a single new complex functional protein.”
This made me laugh unstoppably. How funny, though a sad reality at the same time. Well written.
Thanks.
DATCG:
Welcome, I was missing you! 🙂
I am sure you will contribute brilliantly as always. There is no rush, take your time. 🙂
PeterA:
Thanks to you. I think that really came from my heart!
Peter,
Don’t laugh. There’s no such a mystery anymore. Ask George Castillo in the chromatin thread to reveal it. Apparently he knows how the eukaryote histones evolved from their bacteria ancestor proteins. Quite simple.
He made gpuccio run for the hills.
🙂
Paolo,
Keep those silly jokes off this serious discussion.
If George Castillo knows so much, why did he avoid answering gpuccio’s and UB’s questions?
Wake up and smell the coffee!
You may want to study a basic biology 101 before commenting here.
George’s performance on the chromatin thread is a non-starter. He pretends to himself that he has somehow penetrated the thrust of the GP’s argument, when in fact he hasn’t even made a dent. Anyone who has followed GP’s argument knows very well that descent is not an issue with GP. It never has been. He’s made the argument for descent himself many times. The issue is the mechanism involved, and to that George has zilch.
And the reason he won’t answer my question is because a) logic, empirical evidence, and history are not in his favor, and b) the comfortable vagaries of materialism must be protected at all costs. Answering that question in earnest is strategic suicide. So he replaces the answer with insults and puffery; the intellectual equivalent of whistling past the graveyard.
UB:
I think you are absolutely right.
Indeed, George Castillo has stated more than once that I don’t want to accept his “evidence” for a bacterial origin of eukaryotic histones because it goes against my personal opinions.
That’s really strange, because I have always accepted without any difficulty the evidence he quoted for an archaeal origin of the eukaryotic histone fold.
So, why should a bacterial origin be against my personal opinions, while I am glad to accept an archaeal origin? Am I so partial to archaea? What have bacteria done to me personally? (OK, some fever here and there, I suppose 🙂 )
The simple truth is that I am convinced by the evidence he quoted for archaea, while I consider, at best, very inconclusive the evidence he quoted for bacteria.
One thing that I find really depressing is having to discuss with someone who is not interested at all in truth, or even in others’ ideas, and considers everything only as a personal fight for some not well defined agenda.
Better to just avoid that kind of people.
Reading some of the papers listed, leading to papers not listed, I am struck by the use of the word “mark”.
With discrete objects described as “marks”, we can understand the functioning of these systems. Without them, we would only measure and describe the dynamics of the system (using language and descriptions we already have) but we would understand nothing else.
How appropriate that an abstract concept is at the very center of our descriptions and understandings; an organizational utility used to specify something among alternatives, a control.
It’s interesting that the parts of the system that must be recorded in our descriptions (in order for those descriptions to be useful to us) are the teleological and the irreducible. Materialists just can’t catch a break.
UB,
Very interesting observation.
Thanks.
UB:
You really make a great point at #55! 🙂
You have caught a profound concept, which is probably related to the central idea of consciousness and its properties.
Indeed, an engineered algorithm is something that we understand, not just a series of connections between steps. That’s the difference between artificial intelligence and intelligence, where “artificial” in the end stands for “not really true”.
We write programs using programming languages, and not directly machine code, for the same reason: we need to understand what is happening. Semiosis, the ability to project abstract thinking and intuitions into material events, is what makes us human. And it gives us a power that no other material process in the universe seems to have, including the power to generate complex functional information.
That’s why ID is such an important worldview: it’s not only the best explanation for biological realities, and probably for the universe itself; it’s a key to understanding the deeper levels of reality that are hidden behind everything we experience.
If neo-darwinists are happy with describing things without understanding them, we certainly beg to differ. And, in the end, even those who do not recognize understanding as a precious and unique experience are forced to use understanding words and constructs to just communicate what they believe.
To all:
About super-enhancers.
This has just come out on Pubmed (August 31, 2018).
Super-enhancers are transcriptionally more active and cell type-specific than stretch enhancers
https://www.tandfonline.com/doi/abs/10.1080/15592294.2018.1514231
A pre-version of the full paper is available at biorxiv, here:
https://www.biorxiv.org/content/biorxiv/early/2018/04/30/310839.full.pdf
Let’s try to understand.
It seems that in the last few years two special classes of enhancers have been independently defined and studied by different groups of researchers:
1) Super-enhancers are “defined based on their enrichment for binding of key master regulator TFs, Mediator, and chromatin regulators. These cluster of enhancers are cell type-specific, control the expression of cell-identity genes, are sensitive to perturbation, associated with disease, and boost the processing of primary microRNA into precursors of microRNAs”.
2) Stretch-enhancers, instead, are large genomic regions with enhancer characteristic and defined based on their size (>3kb).
So, the concept of super-enhancer depends essentially on enrichment of the binding of master regulators, while the concept of stretch-enhancer is based only on the length of the enhancer region.
The point is that many think that the two concepts are overlapping. Both these special classes of enhancers seem to be cell type-specific and related to the control of cell identity genes.
The quoted paper, instead, finds important differences between the two classes. They considered existing databases of known sequences already independently defined as super-enhancers or stretch-enhancers, and analyzed those sequences in 10 human cell lines.
The results are very interesting, and you can see them detailed in Fig. 1 of the paper.
In brief they found, in those 10 cell lines:
a) An average of 745 super-enhancers with mean size 22,812 bp
b) An average of 11,160 stretch enhancers with mean size 5,060 bp
c) So, super-enhancers seem to be significantly longer than stretch enhancers, but they are much less numerous. See Fig. 1, which details the number of the two types of regions in each cell line.
d) Fig. 1 b shows that, in each cell line, the two categories cover a different fraction of the genome, always greater for stretch enhancers, with maximum values of about 5% for stretch enhancers and about 1% for super-enhancers.
e) Super-enhancers are usually nearer to the promoter and the TSS (Fig. 1 c).
f) Super-enhancers are much more evolutionarily conserved (Fig. 1 d).
g) Super-enhancers are highly enriched for active chromatin marks, while stretch enhancers are highly enriched for poised chromatin marks (Fig. 2, a and b).
h) Super-enhancers are significantly more active and located in open regions than stretch enhancers, which are more likely to be poised.
i) Super-enhancers are enriched with cohesin and CTCF binding, a sign of active loop formation (Fig. 3, a and b).
j) Super-enhancers are transcriptionally more active than stretch enhancers, as shown by RNA Pol II binding and other markers (Fig. 4, a,b).
k) Super-enhancers generate more eRNA than stretch enhancers (Fig. 4, c,d).
l) Stretch enhancers are less cell-type-specific than super-enhancers.
m) While the two classes are definitely distinct, there is some overlap with definite features: a vast majority of super-enhancers (85%) overlap with only a small number of stretch enhancers (13%), and the overlapping regions (super-stretch enhancers) are definitely smaller in size (Fig. 5, a,b,c).
n) These special overlapping regions (super-stretch enhancers) are cell-type-specific and control key cell identity genes.
o) In general, enhancers are more likely to be cell-type-specific, transcriptionally active, and frequently interacting when found in clusters at the genomic scale, whatever their sizes.
I think these things are very interesting. A whole new level of detail is rapidly unfolding.
Junk DNA. Right.
All that transcription, ain’t no big deal.
UB #55
Absolutely! Shallit and other romantic defenders of all good from all bad cannot do anything of substance with the fact that, apart from living organisms, everywhere in the known universe signalling systems are observed only as correlates of intelligence.
I have seen the different tactics that dissenters employ against ID, ranging from panpsychism to a complete dismissal of abductive reasoning as ‘a simulacra, an ideosyncrasy of Chalse Peirce’.
One of them, remarkably, said, in an attempt to debunk ID, that all manners of things must have happened and did happen, but we now see only what survived. He asked me, why does a photon from a distant star get right into my retina? Mind you, the only problem is to demonstrate the ease with which life originates…
What a disgrace!
And, of all people, these then claim that they are standing for science against obscurantism.
Charles Peirce, of course…
EugeneS:
The photon-retina nonsense that you quote seems to be a creative variant of the old and infamous “deck of cards argument”, which goes more or less like this:
“When you draw the 52 cards in some specific random order, that result is extremely unlikely (probability = 1/52! ≈ 1.24e-68). But it has happened! Therefore, extremely unlikely events happen all the time.”
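The numbers in that argument are easy to check:

```python
import math

# Number of distinct orderings of a standard 52-card deck
orderings = math.factorial(52)    # about 8.0658e67

# Probability of drawing any one specific, pre-named order
p_specific_order = 1 / orderings  # about 1.24e-68

print(orderings)
print(p_specific_order)
```

The fallacy, of course, is that no specific order was named in advance: some order must occur, so the probability of “some order” is 1. Only a prespecified (or independently specifiable) order is a legitimate target.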
That is of course full evidence, for the fans of the argument, that ID is doomed.
I think that someone raised again a similar argument recently, in a discussion here, but I cannot remember who.
Suffice it to say that this kind of reasoning is one of the best examples of the depths of inanity that can be reached by the human mind.
To all:
What happens at super-enhancer regions?
Very complex things, it seems. Involving not only the expected biochemical reactions and protein-protein interactions, but also intrinsically disordered regions (IDRs), interesting phase separations, and so on.
See here:
Coactivator condensation at super-enhancers links phase separation and gene control.
https://www.ncbi.nlm.nih.gov/pubmed/29930091
IDRs are a favourite of our small group of commenters, especially DATCG and me! 🙂
To all:
This is very recent and definitely very much in favor of the central role of intrinsically disordered regions (IDRs) and intrinsically disordered proteins (IDPs):
The evolutionary origins of cell type diversification and the role of intrinsically disordered proteins.
https://www.ncbi.nlm.nih.gov/pubmed/29394379
Emphasis mine, just to show the connection to this thread.
Note the “disordered/ductile” double meaning proposed for the “D”! 🙂
I will agree that answering your question is strategic suicide, upright, because
1. you have carefully designed the question in a way that will force most people (read: people with little knowledge of biology) to eventually pigeonhole themselves into saying that the first aaRS was “made from memory” after some number of other aaRSs were somehow made not from memory (by chance? I guess is the alternative?)
2. the entire premise of your question is a strawman, as you are unflinchingly rigid in the definition of an aaRS, its functions/roles, and the system itself, but the conversation is in fact about the evolution of the system, which occurred millennia ago. Your question is not representative of how anyone thinks this system evolved.
3. no one knows how the translation system evolved; how information was first encoded in a genome and how that genome was converted into a functional molecule. It is an incredibly difficult question to ask and to try to answer. But saying the evolution of this process is so complex, or that transcription is so complex and therefore they must have been designed, is not the answer. Invoking some designer when the road gets tough might make it easy for you to sleep at night, but if everyone did that, we’d still be banging stones together to cook our dinner.
George Castillo,
you’re completely off target, buddy.
the issue is not about how much we know or don’t know, it’s that such evolutionary mechanism in the case we are discussing is purely imaginary, it doesn’t exist at all.
we’re not talking about complexity, we’re talking about functional complexity that definitely has been designed
you won’t find a non-design way to get that, no matter how much time you give it.
time to wake up and smell the flowers in the garden
just see the trend… every new research discovery points to design
the inexorable march of the design revolution is going to take many people by surprise, but then we will tell them “I told you so”
the poisonous pseudo-scientific hogwash should be removed from science-related publications
As Tom Hanks said in the movie “Sully”:
“can we get serious now?”
George Castillo,
do you have any argument against what has been presented in this thread?
go ahead, tell us what it is
George Castillo,
as somebody said in this website before, it’s not what we don’t know, but what we do know, that points unambiguously to intelligent design… and tomorrow we shall know more
have a cup of tea and listen to this
George Castillo:
However difficult it is to say it, welcome here. 🙂
I agree only with one of the things that you have said: it is easy for me to sleep at night.
To all:
So, now we have had our:
“There is nothing in all these things that neo-darwinism can’t explain”.
We are not alone any more. Hooray! 🙂
“do you have any argument against what has been presented in this thread?”
I don’t really see anything besides a high school-level regurgitation of some wikipedia pages with a smattering of some copy-pasting from a handful of research articles.
What do you think I should have an argument against?
GP
Exactly!
The argument against this nonsense is an a priori specification!
The ‘Five of a kind’ pattern repeated N times in a row admits a short description, whereas ‘retina in the eye’=’random hand’ does not.
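The short-description point above can be sketched in a few lines. This is a toy illustration only: compressed length is used as a crude stand-in for description length (true Kolmogorov complexity is uncomputable), and the data are made up.

```python
# A repeated pattern admits a short description; a random sequence of the
# same length does not. zlib-compressed size is a rough proxy for
# description length.
import random
import zlib

patterned = b"5K" * 1000                        # "five of a kind", 1000 times
rng = random.Random(0)                          # seeded for reproducibility
scrambled = bytes(rng.randrange(256) for _ in range(2000))

short_desc = len(zlib.compress(patterned))      # tiny: repetition compresses away
long_desc = len(zlib.compress(scrambled))       # near 2000: essentially incompressible
print(short_desc, long_desc)
```

The asymmetry is the whole point: both byte strings are 2000 bytes long, but only the patterned one admits a description much shorter than itself.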
George,
If you don’t mind my interference, the existence of code translation is a prerequisite of biological evolution, not the other way around. For evolution to even kick-start, a system MUST be self-replicating, open-ended and semantically closed.
“No one knows how the translation system evolved”.
This is already wrong! It did not! You guys are painting yourselves into a corner by a priori dismissing design. Hysteretic graduality is not an answer.
It must have been front-loaded, not evolved. Just like we humans front-load programs on to a computer to execute. There is no such thing as “algorithmic causality” in nature. Any pair {code,interpreter} in nature points to intelligence.
The price you guys pay for dismissing the awkward questions is dismissing evidence.
To all:
This is an even more detailed and irrefutable form of the argument:
“There is nothing in all this high school-level regurgitation of some wikipedia pages with a smattering of some copy-pasting from a handful of research articles that neo-darwinism can’t explain”.
EugeneS:
Of course, the probability of having some sequence when we draw 52 cards is:
1 = necessity
That’s how “improbable” that result is!
Of course, the probability of getting one specific sequence, declared in advance in its contingent detail, is 1/52!, about 1.24e-68 (one chance in roughly 8.0658e67).
And the probability of getting some ordered sequence depends on how we define “ordered”, and is certainly higher than the probability of one single sequence: it depends on how big a target space our definition of order selects. In most cases, however, the target space will be vastly smaller than the search space, and the probability will be extremely low. The result is that well ordered sequences will practically never be observed.
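The deck-of-cards arithmetic can be checked in a couple of lines with the standard library:

```python
# Probability of one specific, pre-declared ordering of a 52-card deck.
from math import factorial

orderings = factorial(52)         # number of distinct shuffles: 52!
p_specific = 1 / orderings        # probability of one pre-specified shuffle

print(f"52! ~ {orderings:.4e}")               # about 8.0658e67
print(f"P(specific order) ~ {p_specific:.4e}")  # about 1.2398e-68
```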
To all:
A few useful hints to produce really scientific posts:
a) Never look at Wikipedia
b) Never go to high school
c) Never copy and paste anything: if possible, copy everything manually
d) Strictly avoid research articles, especially handfuls of them. It is probably admissible to refer to one, or to thousands at a time, but never handfuls.
Courtesy of George Castillo.
To all:
A few useful hints to produce really scientific posts:
a) Whoa! Look how complex this is!
b) It must be designed!
Courtesy of UD et al.
To all:
This is, again, about chromatin topology:
Genomic meta-analysis of the interplay between 3D chromatin organization and gene expression programs under basal and stress conditions
https://epigeneticsandchromatin.biomedcentral.com/articles/10.1186/s13072-018-0220-2
Here they are considering one of the highest levels of topology, the A-B compartments, in different cell lines, both at basal conditions and in response to various treatments.
There are many interesting things here, but for the moment I would point to the following:
a) Table 1, that shows that in each cell line there are about 14000 – 15000 protein coding genes assigned to compartment A and 4000 – 5000 assigned to compartment B.
b) Fig. 1, that shows that compartment B is always transcriptionally less active than compartment A in all cell lines (but certainly not inactive).
c) Fig. 2, that shows how two cell lines, when compared, show definite differences in gene assignment to compartments A and B, and therefore in gene expression.
d) Table 2, that shows how differences in A-B assignment between the same two cell types correspond to definite differences in epigenetic markers.
e) Table 3, that shows how specific cell treatments are associated with specific variations in assignments to A-B compartments.
George Castillo:
Have you finally understood that we in ID infer design by evaluating the functional complexity of biological objects?
My compliments.
Hi gpuccio
Thank you so much for posting this. I look forward to reviewing it in detail. I think you have continued to strengthen the case that the eukaryotic cell was a unique origin event, as was the first multicellular life.
bill cole:
Hi Bill, glad to hear from you! 🙂
Yes, OOL, eukaryotes and metazoa are really the three biggest steps in the fascinating history of life engineering.
To all:
One of the big unsolved problems is how specific TFs bind enhancers, and how the specificity of that interaction drives the amazing specificity of the global interaction between 1 million enhancers, tens of thousands of promoters, and tens of thousands of genes, both protein coding and non protein coding.
This recent paper gives some interesting hints:
Intrinsic DNA Shape Accounts for Affinity Differences between Hox-Cofactor Binding Sites.
https://www.ncbi.nlm.nih.gov/pubmed/30157419
So, TFs can bind to different DNA sequences with different 3D shapes, but with similar final conformations. The difference between the basic conformation of the DNA sequence and the final conformation of the bound complex translates into lower affinity. It is amazing to think how that kind of structural information can have a role in modulating the global, complex and highly functional results of transcription.
The point is: there are many different ways in which functional information about the procedures to be implemented seems to be written in DNA, and particularly in enhancers: sequence, position, spatial relationship to promoters and to other enhancers, length, structure. And maybe something else that we still have to understand.
Multiply that by 1 million individual nodes of functional information…
Hi George
As a minimum, the design argument has put the runaway speculation of Neo-Darwinism in check. If you read the many papers in the scientific literature that assume, as a working hypothesis, that Neo-Darwinism or universal common descent by mutational modification is true, you can see how this unsupported set of assertions has misled science.
George Castillo:
Well given your kindergarten level of understanding I can understand what was posted is way over your head, so much so it has you all upset.
The design inference is more than the existence of mere complexity. And it still stands that any given design inference can be easily refuted if someone could demonstrate non-telic processes can produce it.
So, instead of being in denial why don’t you just step up and do something beyond your hand-wave and flailing? Or does that help you sleep better at night?
George Castillo:
Nice equivocation. Intelligent Design is NOT anti-evolution. The point being is no one even has a clue as to how blind and mindless processes could produce such a thing. And yet we have ample evidence of intelligent agencies doing so.
Science 101 therefore mandates the design inference and all it entails.
Hi George
-or how DNA evolved
-or how metabolic systems evolved
-or how the spliceosome evolved
-or how the ubiquitin system evolved
-or how the nuclear pore complex evolved
-or how exons and introns evolved
We can go on forever, but one question remains unanswered. With all this uncertainty, how is it that some people still claim that life is the result of evolution?
bill cole:
Good points. 🙂
“No one knows” and yet we have to believe that everybody knows.
At the same time, everybody knows how functionally complex things originate: they are always designed by conscious intelligent beings. Everybody knows, and yet we have to believe that such universal knowledge must never be applied to biological objects.
It’s a really strange world we live in.
George-
Intelligent Design operates via Four Rules of Scientific Reasoning from Principia Mathematica– as science should:
Following those rules provides ID with a means of being falsified. It also tells us to use and trust our knowledge of cause and effect relationships. Science 101
In short, we infer these things were intelligently designed because everything we know says that they were.
Hi George
This is not the design argument. The design argument states that the only known source of sustainable amounts of functional information is conscious intelligence. We know this mechanism works so we infer it as the cause of genetic information. Couscous intelligent beings are capable of designing functional sequences.
Functional sequences are partially responsible for the complexity we are observing in biology.
To all:
I think that when George Castillo, like many others in the neo-darwinist field, makes a caricature of the argument:
“How complex this is! It must be designed!”
he is well aware that when we speak of complexity we mean functional complexity, and not just absolute complexity.
Nobody really looks at the disposition of the grains of sand on a beach, or of stains on a wall, and says: Oh, how complex that is, it must be designed!
And yet, grains of sand and stains have great absolute complexity: you need a lot of bits just to describe their configurations.
But if we look at a computer, or a watch, or an airplane, then it is perfectly normal to say: how complex it is, it must be designed.
Because what we see is functional complexity, not absolute complexity.
Now, the main positions of our interlocutors are usually:
a) There is no such thing as functional complexity (which, of course, is completely false, otherwise a computer and the stains on a wall would be the same kind of thing).
b) It’s impossible to measure functional complexity (completely false, look at Szostak or Durston, for example. And I have given measures of functional complexity in biological objects in almost all my posts)
c) There is no connection between functional complexity and design (completely false, the only examples we know of functional complexity are human designed artifacts and biological objects. There is no known example of functional complexity that originates from a non design system).
d) Well, even if all known examples of functional complexity whose origin can be traced with certainty are the result of design, still it is possible that the second class of functionally complex objects, biological objects, originated from a non design system.
I believe that the only vaguely reasonable position for design critics is d). Of course, that position can be held only if supported by some serious attempt at explaining how and why biological objects are the only exception to a very general rule. IOWs, some serious attempt to explain how functional complexity arises in biological objects. I am aware of no such attempt. Of course, neo-darwinism is an attempt, but it is certainly not serious. 🙂
But George Castillo seems to stick to c): there is no connection between functional complexity and design.
Or, more precisely, if we invoke a designer for designed things “we’d still be banging stones together to cook our dinner”.
So, I believe that George Castillo is still looking for some credible explanation for computers, watches and airplanes that does not involve design. So that he can cook his dinner more comfortably, maybe using designed machines.
Oh, I know what his next “argument” will probably be. But I am not going to say it. Let’s wait for him.
George #77
You missed out one critical bit, an a priori specification of function.
Can you give an empirical example of a semiotic relationship (“sign–referent”) arising in inanimate nature outside of an explicit decision making process?
Bill Cole #86
A good observation indeed.
The root cause of this is that evolutionism is a faith. The same faith in ‘all-powerful eternal matter’ as articulated by Epicurus, Plotinus, Origen, Spinoza, Hegel, Lenin and others. The only difference is that today it camouflages itself as science.
A very characteristic revealing and self-contradictory quote indeed linking religious belief (‘evolution’) to science (‘the central theorem’).
I feel really sorry for George Castillo.
The poor guy had lost the discussion before it started, but he’s not aware of this yet.
Bill Cole:
“Couscous intelligent beings”.
Heh.
Sorry, but sometimes autocorrect produces hilarious results. This must have been a Pastafarian intervention. All hail the durum wheat deity.
Gpuccio, can you point me to the paper where your functional complexity calculation originates from?
George-
Start with these:
Kirk K. Durston, David K. Y. Chiu, David L. Abel, Jack T. Trevors, Measuring the functional sequence complexity of proteins, Theoretical Biology and Medical Modelling, Vol. 4:47 (2007)
and
Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak, Functional information and the emergence of biocomplexity , Proceedings of the National Academy of Sciences, USA, Vol. 104:8574–8581 (May 15, 2007).
Which one of those does gpuccio use to calculate functional complexity/information? I can’t tell.
You haven’t read either of them. Read them and then ask questions
Have you read them?
As far as I recall, gpuccio’s method is his own, which evolved through gradual random variations from the ones described in the papers cited by ET. I could’ve misunderstood that though.
Better wait for gpuccio’s reply.
This is off topic but reading through this, and several other UD themes recently, has suggested a good test to my feeble little mind. Let me explain.
There are a few themes that frequently resurface here:
1) ID is a science with a clearly defined theory, it is falsifiable and makes testable claims.
2) Science is inherently biased against ID (or, against anything that is anti-evolution).
3) There is systemic censorship in the science community and the media.
4) Linked to #3, the peer review process is seriously flawed.
GP, KF and others have put extensive work, thought and effort to produce some very well thought out and presented articles. My suggestion, to test 2 through 4 above would be to have one of these authors draft a publication quality manuscript on the subject of this OP. Preferably, it should be co-authored by a few of the more prolific authors here. It can then be submitted to a journal in one of the respected publishing houses (eg, Springer or Elsevier). If it gets rejected, the author(s) can then write an OP here or at Evolution News, including links to the manuscript and the reviewers comments.
I don’t see a down side to this little experiment. If it is accepted and published, this information will be widely disseminated amongst the people who can actually make changes to the system. If it is rejected, the reviewers comments and editor’s final decision may shine a light on the mindset of those who control the flow of information in the scientific community. Just food for thought.
“As far as I recall, gpuccio’s method is his own”
Why reinvent the wheel?
“draft a publication quality manuscript…can then be submitted to a journal…I don’t see a down side to this little experiment”
The authors know their intended audience: laymen.
Ask a scientist in this field to review what has been posted here and it would be torn apart.
And I can guarantee they will ask the same questions I did; where did the method of calculating functional info/complexity come from, and also why did you feel the need to invent your own method, and why not use one of the methods that are already published?
GC@102. I agree that the current OP would be torn apart. That is why I suggested that it be re-drafted as a publication quality paper (abstract, introduction, methods, results, discussion, references). It may still get torn apart, but that would be educational in itself. What reasons are they using to tear it apart? It is these reasons that must eventually be addressed if ID hopes to be accepted as a legitimate alternative to evolution.
George Castillo and R J Sawyer,
How long have you been following gpuccio’s OPs?
To all:
Wow, some activity here!
OK, let’s start with George Castillo.
I have explained a few aspects of my method to compute functional information in proteins in my comment #32 here, in answer to EugeneS.
What PeterA says at #100 is correct: the method I use in my OPs and comments is my own. However, it is based on the same principles used by Durston in his paper:
Measuring the functional sequence complexity of proteins
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2217542/
already quoted by ET at #96.
The other paper quoted by ET:
Functional information and the emergence of biocomplexity
http://www.pnas.org/content/104/suppl_1/8574
is very interesting too, because it is by non ID friendly authors, Szostak and others, and it deals very seriously with the concept of functional information, its definition and some applications of it, even if it does not apply it to proteins in particular.
My method is based on the same assumption used by Durston: evolutionary conservation of protein sequence is a marker of functional constraint. That is an assumption that no serious neo-darwinist could deny, because it is at the very core of neo-darwinian thought.
What I do is simply to use that conservation to measure functional information using BLAST, a tool universally used to compare protein sequences, taking its homology score as a measure of functional information, provided that the compared proteins are separated by a very big evolutionary gap: in most of my examples, I have compared proteins between pre-vertebrates and vertebrates, therefore separated by more than 400 million years of evolutionary history.
So, as you can see, I have not “reinvented the wheel”. I have made my personal wheel, and it works very well.
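For readers who want to see the Durston-style idea in miniature: the sketch below is an editorial illustration, not gpuccio’s actual BLAST-based pipeline, and the alignment in it is hypothetical. Following the formula in the Durston et al. paper cited above, each column of an ungapped protein alignment is scored as log2(20) minus its Shannon entropy, so fully conserved columns contribute about 4.32 bits and freely varying ones close to zero.

```python
# Toy Durston-style functional sequence complexity: conservation across a
# (hypothetical) ungapped protein alignment is read as functional constraint.
from math import log2
from collections import Counter

def column_bits(column):
    """Inferred constraint for one alignment column: log2(20) - H(column)."""
    counts = Counter(column)
    n = sum(counts.values())
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())
    return log2(20) - entropy

def alignment_bits(sequences):
    """Total inferred functional information, summed over columns."""
    return sum(column_bits(col) for col in zip(*sequences))

# Three fully conserved columns score log2(20) ~ 4.32 bits each; the varied
# last column (L, I, L, F) has 1.5 bits of entropy and scores ~2.82 bits.
toy = ["MKVL", "MKVI", "MKVL", "MKVF"]
print(round(alignment_bits(toy), 2))  # 15.79
```

A real analysis would of course use large curated alignments and correct for background amino acid frequencies; the point here is only the shape of the calculation.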
Now, you can agree with what I do and with my conclusions, or (much more likely) you can disagree. That’s your personal choice.
But, if you want to discuss what I do, and have my attention, you have to say something specific about what I do: just declaring that my method is not published in the scientific literature is simply boring: I know, you know, everybody knows. Many times I have said that I have no desire to publish anything about ID in the scientific literature. I publish my ideas here, and that is a definite choice. I have my reasons, and I have also explained them occasionally. And, of course, you are free to say all that you want about my choice. But again, that is simply boring and irrelevant.
But, as you too are publishing your ideas here and not on a scientific journal, if you want attention to your ideas you must make them interesting. If you disagree with what I do, explain why. All the rest is irrelevant.
GP@105. I used your OP as an example simply because I was reading through it at the time that the idea came to my mind. There are others here who post equally comprehensive and well thought out OPs. KF and Johnnyb come to mind. They could run with it if they wanted to.
I just think that submitting a manuscript to one of the respected publishers would be a very informative excersise. If a paper is properly drafted, I suspect it would be rejected and the reviews would be along ideological lines rather than a criticism of its scientific merits. Which would be very revealing because I don’t recall anything like this being done before. The risk, obviously, is if it is rejected and the reasons for rejecting it are based completely on its scientific merits. But even that unlikely outcome would be informative as it would provide valuable input as to where the ID concepts must be further researched.
R J Sawyer:
Thank you for your kind suggestions.
Regarding your 4 points, I certainly agree with the first 3. I think you should add the word “current” to “science” at number 2, but for the rest I agree.
Number 4 is more complex. Peer review is probably not the main problem. Journals can reject a paper without even submitting it to peer review. They can just say that the paper may be good, but it is simply not what they are interested in.
Peer review has many problems, and prejudice against ID is certainly not the only one. Perhaps I would not say that it is “seriously flawed”, but it is not very efficient and reliable. One thing is certain, that it does not guarantee that what is published is good, or that all that is good will be published.
Your “experiment”, while certainly proposed in good faith, is not very practical, IMO. That kind of paper would certainly be rejected, but that would not demonstrate anything. Journals reject a lot of papers all the time, many of them without any peer review process. And so? In most cases, there is no ideological bias behind those decisions, only more or less valid reasons of other kinds.
Moreover, this OP in particular does not qualify as independent research: it is more a review of the literature, even if made from an ID viewpoint and with some small personal additions. Other OPs that I have published here are more original in their content.
Finally, as I have said many times, I have no intention to publish in the scientific literature about ID. My personal conviction is that publishing my ideas here is much more appropriate and, in the end, useful.
I have serious doubts that the recognition of ID as an important scientific paradigm will happen through a gradual admission of ID friendly papers in the literature, even if that can help a little. My personal idea is that ID will become the main paradigm of science because the scientific world, and all the good components in it, will become tired of defending obviously false ideas after some time, and because the accumulation of facts that prove beyond any doubt that those ideas are false will reach a point where almost anyone will be ashamed to deny the evidence. That will come mainly from “traditional” research. Exactly the kind of research that I quote in my OPs.
In the meantime, it is important that those people who are already fully convinced of the superiority of the ID paradigm may continue to express their ideas, to reason and discuss things from an ID perspective. The ID point of view is important, precious I would say, and it must be defended.
Here is a good place to do that, and that’s what I try to do.
R J Sawyer at #106:
I think I have already answered at #107 (even if I had not yet read your new comment).
One point: there is absolutely no doubt that “ID concepts must be further researched”. I am absolutely convinced of that.
ID is at present only a paradigm. Its specific application to biology is still very limited, and the reason for that is very simple: resources are extremely limited.
Even the theoretical approach is still in its starting steps: there is still much work to do.
That’s why I always say that ID is a paradigm: it is not simply a theory (even if it includes many different theoretical approaches), and it is not a movement (even if of course there are some organizational aspects in the ID field). It is a paradigm, a way of thinking inspired by the recognition of the importance of conscious design in the natural world and of the folly inherent in denying it a priori, as current science does.
But frankly, I don’t think that credible suggestions about how to improve ID thinking will come from peer review. They will come from ID itself, or simply from good science out there.
In the end, only one thing will promote the ID paradigm as time goes by: the fact that it is true.
Maybe some folks here don’t agree with this, but let’s admit that as George Castillo got some credit for adding some “heat” to the “chromatin” thread, now he should get credit for doing the same here. Perhaps the website administrators should think of some kind of incentive rewards for cases like this?
🙂
Peter,
C’mon buddy, can you stay serious ?
There are important issues discussed here.
Can you leave the jokes for another occasion?
Hi George
Why are you so confident it will be torn apart?
bill cole at #111:
Indeed, I think that it is quite accurate as a review of the literature. Not complete, certainly, otherwise I should have spent much more time in preparing it, but I believe that the most important things are there.
Of course my personal additions would not be welcome. And of course the general approach has been to keep things as simple and clear as possible, so that also non technical readers could get an idea of things.
That said, I have tried to do the best I could, and to include some recent and not so obvious concepts from the latest research.
PeterA at #109:
Always ready to give credit where credit is due. 🙂
PeterA at #109:
Jokes apart, of course the lack of comments from people on the other side does not help.
I understand that this OP, in particular, is mainly a review of what is known, and as such there is not much that can be said against the things presented here.
But it is equally true that things are presented here in a certain context, because I believe that those things strongly support the ID point of view.
So, someone could try to explain why that should not be true, possibly something more detailed than the usual:
“There is nothing in all these things that neo-darwinism can’t explain”.
For example, some thoughts about how neo-darwinism is supposed to explain complex regulatory networks, codes, millions of enhancers that control the specificity of transcription, cell differentiation, and so on.
Or some thoughts about the ever shrinking size of “non functional” DNA, with enhancers, promoters, lncRNAs and so on constantly emerging as big functional parts of the genome.
Just suggestions…
GP, thank you for your kind response. I will leave you with just one final comment. I don’t want to take this thread too far off topic.
In the early 20th century, Alfred Wegener proposed continental drift to the scientific community, using the science communication means of the day. He knew it would receive much opposition because it went against the commonly held belief. And it did receive extensive push-back. But he kept plugging away, drawing evidence from different fields including geology, biology, botany and palaeontology. What he lacked until the day of his death was a viable mechanism. This was later discovered and formed the field of plate tectonics. My feeling is that the best hope for acceptance of ID is to follow a similar approach.
Thanks for listening.
To all:
OK, this is not exactly about transcription, but it is very interesting.
Widespread evolutionary crosstalk among protein domains in the context of multi-domain proteins
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0203085
The idea presented here is simple and convincing: while it is true that domains are functional modules, it is equally true that when they are used in multi-domain proteins they need to be adapted to the new function, and that adaptation has to be “concerted”.
Of course they call it “concerted evolution”. I would call it, more realistically, concerted re-engineering of the individual modules for the new meta-function.
I have seen that happen clearly when I blast proteins. While domains are often greatly conserved in distant proteins, for example the DBDs in TFs, they are also different, and that difference has all the features of a specific functional reengineering. I think that happens not only in new multi-domain proteins, but also in the same protein adapted to a new branch of organisms.
R J Sawyer:
And thanks to you for the very interesting contribution.
What you say is interesting. I think, however, that the opposition against ID has deeper ideological reasons than ID simply going against the commonly held scientific belief. At present, ID goes against the general worldview apparently “supported” by current science. It’s more an ideological (religious?) bias than simply a defence of some scientific opinion.
Sorry I’m late to the party. It’s a long holiday weekend in my part of the world, and I live on a lake, so you can do the math on that. 🙂
I see George has re-appeared for some more dismissal of evidence. Let’s see what he has to say:
You apparently believe it’s somehow illegitimate (or trickery) to ask details about how your model of biological origins results in something being the way we find it today. Obviously, I strongly disagree with you on that point, as should anyone and everyone. It’s not a trick question to point out the well-documented features of the system and ask how they came into being under your paradigm. If those features are so distinct and unique that they’ve been described in the literature as the fundamental and necessary conditions of such systems, and if they were predicted in logic and then experimentally confirmed to be true, then I think it would be rather careless (fairly stupid) not to ask these questions. Indeed, I would like to think that a person with your level of certitude would anticipate the questions, and be ready with a logical answer. Your response, on the other hand, has been to launch insults and complain about the questions. Perhaps you can explain why you should be exempted from responding to physical evidence.
Alan Turing wrote a paper in 1936 where he presented a programmable information system that (in order to function) mirrored Charles Peirce’s model (written decades earlier) of a necessary triadic relation between an object, a medium of information about that object, and a discrete interpretation of that medium. Turing’s paper would lead directly to the information explosion we are living in today. John Von Neumann then took Turing’s machine (with its Peircean logic intact) and used it to predict the necessary physical conditions of an autonomous self-replicator. Francis Crick et al then experimentally demonstrated the medium of information in DNA, as well as its basic encoding structure. He then went on to predict that a set of discrete objects would be found within the system to serve as the interpretants of the code. These objects were experimentally described later by Hoagland and Zamecnik, confirming Crick as well as Von Neumann, with Turing and Peirce in tow. After Nirenberg and others cracked the code, setting off the information revolution in biology, Howard Pattee presented the specific material conditions of the system in the language of physics, and described the semantic closure required for the system to function (i.e. Von Neumann’s “threshold of complexity”).
Since you seem to be suggesting that Von Neumann (Peirce, Turing, Pattee) is wrong about those conditions, perhaps you can tell me where the model is incorrect? In order to function, does the system require both the medium of information and the constraints to interpret it? Does it require semantic closure in order to replicate itself? If you cannot tell me where the established model is wrong, then please do tell me why I shouldn’t refer to it, or why your arguments should be exempted from it.
This is assuming your conclusion. This is assuming your conclusion in the face of universal evidence to the contrary, followed by an irrelevant (and fallacious) appeal to authority.
This is just more of the same.
George, if you intend to answer no questions, to deal with no details, or consider how any evidence might impact your beliefs, then at least try to be entertaining.
Eugene Selensky is entirely correct; it is the presence of the system and its semantic closure that enables Darwinian evolution to occur, not the other way around.
GP, I missed where you suggested we might be better off to just ignore George and his endless ideological defenses. Sorry for taking your thread off course.
I wholeheartedly agree with Bill Cole at #111.
“Why?”
Yes, George, I have read the papers.
Nonsense. Ask a scientist in this field to say how blind and mindless processes could have done it and you will see a scientist implode from failure.
UB at #120:
Don’t worry. George Castillo has contributed to the discussion, and made it more lively, inspiring good commenters like you and others to clarify important points.
Unfortunately, good antagonists have become really scarce here, so we are grateful for what we have. 🙂
Earth to R J Sawyer- There isn’t anything in peer-review that supports evolution by means of blind and mindless processes. And yet that is the mainstream position- that evolution proceeds by means of blind and mindless processes.
The scientists who reject ID do so for personal reasons. They definitely cannot refute any of its claims and that is very telling. To refute ID all they have to do is find support for their own position and they can’t.
That means the best hope for ID is to have the old ignorant guard die out
R J Sawyer:
Clueless. Intelligent Design is NOT anti-evolution– so learning what the debate is all about would be a good place to start. And unlike blind watchmaker evolution ID makes testable claims.
So the question is what is a scientifically viable alternative to Intelligent Design?
ET:
Well, I can live with the potential terror of being torn apart by some imaginary peer review.
In the meantime, I am just waiting to be torn apart by George Castillo, or by any other interlocutor here. After all, they (like you and the other friends) are my peers: we write on this forum without any pretense of authority, and the things we say are the only stuff that counts.
ET:
“That means the best hope for ID is to have the old ignorant guard die out”
Sad but true.
gpuccio- I understand your point; constructive criticism, the kind offered by scientists who care about their craft, is always a good thing.
Here you are, acting in good faith and actually presenting an argument reasoned from the evidence. Doing what “they” say we cannot, have not and will not do- yes even in the face of the overwhelming evidence to the contrary. And all “they” can do is flail away and then puke out a “threat” of someone, someday, gonna tear it apart.
It would be hilarious if it wasn’t so pathetic.
“So, as you can see, I have not ‘reinvented the wheel’. I have made my personal wheel…..”
Uhh, what?……is it square by any chance?
Let’s just put that little tidbit to the side for now.
Wouldn’t it make more sense to just use the published methods for calculating functional bits?
Do you have a good reason not to use that method?
Why do you think your method is better?
gpuccio,
Delightful presentation of such a fascinating topic!
I’m enjoying it.
Thanks.
EugeneS,
Excellent comments, as usual. Thanks.
UB, ET, bill cole:
I like your valuable contributions. Thanks.
129 George Castillo,
gpuccio has explained his clever method more than once in this website.
You may want to read it to understand it well, before you can comment on it. Just a suggestion.
I’m sure gpuccio will enjoy discussing that topic.
102 George Castillo,
“laymen”? “torn apart?”
This is an open website that can be read by anybody with internet access who is interested in the discussed topics.
Here’s a case where a scientist tried hard to tear apart gpuccio’s presentation a couple of years ago:
Who?
ID debate with a professor 11, 14, 18, 25, 26, 27, 33
102 George Castillo,
“laymen”? “torn apart?”
Here’s another debate between gpuccio and a scientist who tried unsuccessfully to tear apart gpuccio’s presentation: 25, 50, 51, 56, 130, 164
After seeing such an embarrassing failure, do you think another scientist would like to go through a similar experience here?
You tell us.
George Castillo,
You may want to verify what you write before you post a comment here. Just a friendly suggestion.
Transcriptional Regulation by Chromatin and RNA Polymerase II
George Castillo at #129:
Not all wheels are the same. Without being square, they can differ in size, structure, materials, specific purposes, and so on.
My method and Durston’s both rely on evolutionary conservation to measure functional complexity. Therefore, they are both wheels. The basic idea is the same.
However, the way sequence conservation is used as an estimate of functional information differs between them.
I have already discussed that here with EugeneS and referred you to that discussion (at #105).
However, for your convenience, I quote here myself from #32, the relevant part:
And this is from my comment #27:
I have also explained in detail some of my procedures here:
Bioinformatics tools used in my OPs: some basic information.
https://uncommondescent.com/intelligent-design/bioinformatics-tools-used-in-my-ops-some-basic-information/
And the procedure itself is explained in some further detail in this OP, which is the first where I have extensively applied it:
The highly engineered transition to vertebrates: an example of functional information analysis
https://uncommondescent.com/intelligent-design/the-highly-engineered-transition-to-vertebrates-an-example-of-functional-information-analysis/
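Both methods, as described above, treat evolutionary conservation as a proxy for functional constraint. For intuition only, here is a minimal sketch of that shared idea in the Durston style (functional bits as the reduction from maximum per-site entropy over an alignment). The function names and the toy alignment are invented for illustration; this is neither gpuccio’s BLAST-based procedure nor Durston’s actual pipeline.

```python
import math
from collections import Counter

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def column_entropy(column):
    # Shannon entropy (bits) of the amino-acid distribution in one
    # alignment column; gaps and non-standard symbols are skipped.
    counts = Counter(c for c in column if c in AA)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def functional_bits(alignment):
    # Durston-style estimate: the reduction from the null entropy
    # (log2 20 bits per site) summed over all alignment columns.
    ncols = len(alignment[0])
    null_h = math.log2(20)
    return sum(null_h - column_entropy(tuple(seq[i] for seq in alignment))
               for i in range(ncols))

# Toy alignment of 4 sequences, 2 columns:
# the first column is fully conserved, the second is variable.
aln = ["MK", "MR", "MK", "MQ"]
print(round(functional_bits(aln), 2))
```

The fully conserved column contributes the full log2(20) ≈ 4.32 bits; the variable column contributes less. A real analysis would use large curated alignments and deal with gaps and sampling bias explicitly.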
Gpuccio, thanks, will try to catch up as I can.
My weekend ended up different than planned, with cousins coming into town.
But I see you have things well in hand as usual 🙂
Upright Biped @53,
re: Chromatin post
Yes, much puffery, insults and paper bluffing. I read the first one GC listed on the Chromatin thread. Found it to be the usual stuff, with the usual caveats of Darwinist propaganda thrown in, the usual claims and appeals using “could be” and “might be,” yada yada, but nothing overturning the failure of neo-Darwinism to account for life by blind, unguided steps of the kind leading to macro-form evolutionary events.
What’s so funny is that GC is defending today what many Darwinists now admit is weak, failed, and in need of replacement, or, even among its staunchest defenders, at least in need of updating. Especially since the findings of ENCODE have torn asunder the House of Darwin.
Royal Society has proposed major changes and openly admitted the failures of neo-Darwinism.
Many scientist today openly admit these failures. For example at Royal Society. Not to go off-topic, but really the problems have yet to be solved.
As reported by Paul Nelson and David Klinghoffer on the meeting of the Royal Society in 2016…
Meanwhile, Dan Graur steams with his “Junk” DNA meltdown and proclamations that at least 75% of DNA must be “JUNK.” Of course this is based upon neo-Darwinist rhetoric by Dan Graur himself – doubling down on old assumptions.
https://evolutionnews.org/2017/07/dan-graur-anti-encode-crusader-is-back/
From Evolution News…
Thanks Dan Graur – “If ENCODE is right, then evolution is wrong” of course meaning Darwinist evolution of blind, unguided mutations is wrong, not however if guided.
GC is living in the past, much like Graur. Assumptions made upon failed speculations of the past, largely based upon ignorance. And unfortunately textbooks still teach the failures of the past, along with scientists in research journals and their maybe, coulda, mighta … happened in the past 😉
Darwinism = Story Telling.
ENCODE is a game changer. Dan Graur has every right to be fearful and angry. He knows the stakes.
As he said and is worth repeating, “If ENCODE is right, then evolution(blind, unguided) is wrong.”
Have a good day guys. And sorry Gpuccio for going off topic 🙂
The incredible number of new functions once written off as “Junk” that will be found will continue to undermine the claims of people like Graur and Castillo. We’re just at the beginning of all of this…
https://evolutionnews.org/2017/09/design-in-the-4th-dimension-the-4d-nucleome-project/
Gpuccio @5
The Ubiquitin Magic hat PTM trick I see is working 🙂
Post Translation Modification – just magically poofed into being by a series of blind, unguided events 😉
Ubiquitination requires recognition, does it not? To target and mark for PTM? Forgive me, I can’t remember whether recognins are involved with histones or not.
I forget all the steps you listed Gpuccio on the other link to the great Ubiquitination Post.
Anyways, yep, amazing stuff again being coordinated by specific actions and reactions to events. Not accidental mutations, but by targeted guidance systems.
OK, must run.
Interesting…
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6033341/
“Thus, UHRF1 can be considered as a master regulator of TSGs as it coordinates DNA methylation and histone modifications at their promoters [13, 14, 17–19]”
A TF was mentioned as well, RelB…
Got it, gpuccio, so you have reinvented the wheel simply to have a method that is “easier to apply.”
That would surely go over great with any reviewers!
Have you at least done a comparison to Durston to see just how bad your method is?
No, George, clearly all you have is your belligerence and ignorance.
That will never go over with anyone
DATCG,
Very informative posts, as usual.
Thanks.
George Castillo:
It’s not easy to make a comparison. Durston gives values only for 35 protein families, and most of them are not comparable to human and vertebrate proteins, or there is not enough information to identify correctly which domain was used for the alignments.
In the very few cases where I could make a reasonable comparison, my method definitely underestimates functional complexity as compared to Durston’s (on average, my results are 60-70% of his). That’s what I have always said, and it is connected to how the BLAST algorithm works. So, my method seems to be more favourable to neo-darwinists.
But again, the cases I could examine are really just a handful.
If you prefer to believe that Durston’s method is much better, be my guest. His values are higher and worse for your cause.
DATCG:
Very good contributions, as usual. Thank you. 🙂
Now I have no time, but I hope I can comment more in detail tomorrow.
Gpuccio, if I’m not mistaken, your method and Durston’s are both solely dependent on known sequences.
Surely you could produce some correlation plots that compare functional information of proteins based on your method versus Durston’s.
George, you directed your post at 65 towards me, and after I responded, you’ve gone mute. In order to function, does the system require both a medium of information as well as the set of constraints to interpret it? Did Turing’s machine require both the tape and the state transformations? Does the system require semantic closure in order to replicate?
In his memoirs, pioneering biologist and Nobel Laureate Sydney Brenner commented, “you would certainly say that Watson and Crick depended on von Neumann, because von Neumann essentially tells you how it’s done.”
What say you? Was Brenner wrong, along with Von Neumann, Turing, Peirce, and Pattee? Was Crick’s prediction a logical one? Could Nirenberg have calculated the discontinuous association of the gene code, or did he have to demonstrate it? If you cannot articulate any details where the established model of translation is wrong, then why should I not refer to it? Why should the materialist’s claims be exempt from it?
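The tape-plus-rules point above can be made concrete with a toy machine. This is a minimal, illustrative sketch (the function name and rule set are invented): the machine needs both the tape (the medium of information) and the transition table (the constraints that interpret it), and neither alone computes anything.

```python
def run(tape, rules, state="start", head=0, max_steps=100):
    # A minimal Turing machine: reads a symbol under the head, consults the
    # transition table (rules), writes, moves, and changes state.
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")          # "_" stands for a blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Transition table: invert every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run("1011", flip))  # -> 0100_
```

Remove either the tape contents or the `flip` table and the function has nothing to do: the computation exists only in the pairing of medium and interpreting constraints.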
George Castillo:
As said, it is difficult to compare the numbers in Durston’s table to mine. I could do that in some way for 8 proteins. Even with that small number, there is a very good correlation (p = 0.00007752, adjusted R square = 0.9273).
I have neither the time nor the resources to compute Durston’s values for a new set of proteins, so that will have to suffice.
I am adding a scatterplot to the OP.
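A comparison of this kind can be sketched in a few lines. The paired numbers below are invented placeholders, not the actual values from either method; the point is only the shape of the computation (paired estimates per protein, Pearson correlation, R squared).

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two paired samples
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical functional-information estimates (bits), one pair per protein
method_a = [300, 450, 800, 1200, 1600, 2100, 2600, 3000]
method_b = [480, 700, 1250, 1900, 2500, 3300, 4100, 4700]

r = pearson_r(method_a, method_b)
print(round(r, 3), round(r * r, 3))  # correlation and R squared
```

A scatterplot of such pairs (e.g. with matplotlib) would show whether one method systematically under- or over-estimates relative to the other, which is the comparison discussed here.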
To all:
Here is an absolutely recent (September 2018) review about the known mechanisms by which lncRNAs implement their functions and are involved in cancer. I highly recommend it:
Exploring the mechanisms behind long noncoding RNAs and cancer
To all:
The introduction of the paper quoted at #151 is a really good summary of the general features of lncRNAs.
a) It gives the actual number at about 59,000, and the typical range at 1,000–10,000 nucleotides.
b) It reminds us that they are very much similar to protein coding mRNAs, because “they are generally transcribed by RNA polymerase II, 5′ capped, 3′ polyadenylated, and often undergo splicing of multiple exons via canonical genomic splice motifs”.
c) It explains clearly the 4 main types of lncRNAs (see Fig. 1):
– Intergenic
– Bidirectional
– Antisense
– Sense overlapping
IOWs, lncRNAs are transcribed from all possible parts of the genome, in practically all possible ways.
d) It also mentions, as a separate category, the enhancer transcribed lncRNAs (eRNAs).
e) It reminds us that “Despite minimal overall sequence conservation across species, many lncRNAs have evolutionarily conserved function, secondary structure, and regions of short sequence homology”
f) It reminds us that their transcription is often regulated by “well-studied transcription factors and epigenetic marks”
g) It reminds us that their expression “is often unique to specific cell types, tissues, developmental time frames, and disease”
h) It reminds us that the small percentage of lncRNAs that have been studied in depth “have been implicated in X chromosome inactivation (Xi), genomic imprinting, nuclear compartmentalization, splicing, stem cell pluripotency, cell cycle progression, cellular reprogramming, apoptosis, and many diseases”
i) It reminds us that lncRNAs effect their functions “through the regulation of gene expression, translational control, structural cellular integrity, protein localization and degradation”
j) It reminds us that they “can associate with a wide range of interaction partners including RNA binding proteins (RBPs), transcription factors, chromatin-modifying complexes, nascent RNA transcripts, mature mRNA, microRNA, DNA, and chromatin”
Wow! 🙂
To all:
But how do lncRNAs really implement their functions?
That’s exactly the real subject of the paper quoted at #151. It lists some very interesting modalities, each of them well documented by known examples:
a) They serve as guides “for the proper localization/organization of factors at specific genomic loci for regulation of the genome”. IOWs, they “bind to regulatory or enzymatically active proteins, such as transcription factors and chromatin modifiers, to direct them to precise locations in the genome”.
b) They serve as dynamic scaffolds, providing a central platform, often short-lived, “for the transient assembly of multiple enzymatic complexes and other regulatory co-factors”.
c) They work as decoys, “sequestering RNA-binding proteins, transcription factors, microRNAs, catalytic proteins and subunits of larger modifying complexes”, and limiting their availability.
Quite a range of different, interesting, creative and very specific mechanisms, I would say. And this is only what we understand today.
To all:
The rest of the paper quoted at #151 is dedicated to the well proven role of many lncRNAs in various kinds of cancer. I will not go into detail about that, but just have a look at Fig. 3 in the paper if you want to get an idea of how complex this subject is, even with the little we know at present.
DATCG at #142:
Wow, E3 ubiquitin-protein ligases that “commit auto-ubiquitination” under the menace of anticancer phytochemicals, TFs whose activity is enhanced by polyubiquitination…
Truth is definitely stranger than fiction!
Gpuccio @155,
Thought you might like that 🙂 The coordination is amazing, as is often the timing. There’s a window of time for all of these interactions to take place, or it becomes too costly, inefficient, even lethal, as multiple process-dependent layers require completion and notifications during each procedural event.
On your scatter plot – nice! I thought your explanation was sound and reasonable prior to providing it.
As I read above the scatter plot, noticed your point on Histone Code.
“… at least one big sub-network based upon a symbolic code: the histone code.”
Which as we’ve seen interacts with multiple other networks, sub-networks and codes. Like the Ubiquitin Code.
Yes 🙂 TFs activity enhanced by “poly-ubiquitination” and here are TF’s regulated by ubiquitination…
Here’s the link:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5666869/
Regulation of E2F1 Transcription Factor by Ubiquitin Conjugation
Laurence Dubrez1,2
The dance of life is a series of highly regulated, highly coordinated networked and sub-network systems that are transcribed along with post-translation-modification and regulation.
The only way a system knows what to modify post-translation is to have a target pattern to match. The Codes and Presets exist prior to the post-translation modifications. The information is already existent. Semiotics is real and obvious in the Interactome and in the interactivity between shared systems networks within the nucleus and across its boundary.
It’s not blindly allowing mutations to appear. There’s not a magical set of new mutated information waiting to be called upon to effect change to forms or cellular processing.
What we see is highly organized, prescriptive, preventive measurements and Coded, Conditional Reactive States just like any programming technique by Design.
There is of course sometimes a purpose for built-in random allowances, like the immune system.
But usually mutations to these networked systems are deleterious as a mutation enters any PTM phase, Code, or interaction. And might be fatal.
Gpuccio, you end with this …
“So, the last question could be: can all this be the result of a neo-darwinian process of RV + NS of simple, gradual steps?”
IMHO, no. First gradualism was killed off long ago I thought, but that’s another trail to take.
Next, this neo-Darwinian “process” of RM* + NS is insufficient for creation of novel forms, much less the network systems we see today within the cells, that must be coordinated interactions and timing with multiple sub-networks and coded systems, any one of which might be compromised by a single deleterious mutation.
Far from being an innovator, Darwinism is a stopper, as is evidenced by the findings of ENCODE and thousands upon thousands of new papers since ENCODE discovering more new functions 24/7 around the globe. Debunking “Junk” DNA daily and killing the blind-folded messengers of Darwinism.
*RM – note I utilized random mutation, not random variation although Gpuccio I agree Variation is the key, allowable variation within constraints I think.
Darwinists always use random mutation, but since ENCODE has dawned and the light has shined in on these incredible Information Networks within the cells, the fact that they keep thinking random mutations will be an innovative force for survivable new forms is extremely perplexing.
It’s like Dan Graur and the old school cannot comprehend the science in front of them today and are so stuck in the past, they keep clinging to antiquated, ignorant beliefs.
To credit others, though, many are moving on despite those like Graur who will not let go of their blind security blanket.
DATCG:
It’s absolutely evident that RM + NS cannot explain any of the things described in the OP and in the following comments. I just thought that it was redundant to say it again.
Dan Graur! Never seen any better example of blind dogmatism and bias.
Yes, ENCODE, FANTOM and others are marching on. Because, very simply, they are on the right side: the side of science and truth.
The functions of non coding DNA and RNAs are no longer a hypothesis, least of all a rare exception: they are the absolute, all pervading rule.
Gpuccio 157,
Well said, indeed, and the pace is ever increasing for new functions found – in “junk” DNA
🙂
Forgot to add the following Intro paragraph from the TF paper I posted in #156…
GP, DATCG,
NICE!
Hey Upright, hope you’re doing well 🙂
It’s very difficult to catch up with the growing amount of interesting information posted in this thread… gpuccio keeps adding more insightful comments pointing to juicy papers, and now, to make things “worse,” DATCG appeared out of the blue and started to flood this thread with interesting comments also pointing to interesting papers.
OLV @162
“It’s very difficult to catch up with the growing amount of interesting information”
Yes, I agree. This is why I have been arguing for a long time for an index by author on this blog.
Also, maybe it’s even worth coming back to comment rating because there are some really brilliant comments.
To all:
Where do the strangest things happen?
Of course, in the brain.
This is about the mouse:
The Evf2 Ultraconserved Enhancer lncRNA Functionally and Spatially Organizes Megabase Distant Genes in the Developing Forebrain
The paper is paywalled.
However, in brief:
a) The actor is Evf2, a lncRNA transcribed from a non coding region between two homeobox TF genes, Dlx5 and Dlx6.
b) The non coding region from which Evf2 is transcribed is an ultra-conserved enhancer (UCE). So, Evf2 is an UCE-lncRNA.
c) This lncRNA is expressed at sites of sonic hedgehog-activated interneuron (IN) birth in the mouse embryonic forebrain, and has many other functions, acting in part together with other homeobox TFs.
d) To make it simple, this lncRNA (Evf2) controls in trans the interactions between the original DNA sequence, the UCE, and many other important genes, over an unprecedented range of 31 Mb, and in very complex patterns, through complex effects on topology.
e) The function of Evf2 is of paramount importance in the development of the mouse brain, as shown by multiple lines of evidence.
Hello ES,
You may already have this, but if you don’t: recent threads by GPuccio …
(I apologize if they are out of order)
Transcription regulation: a miracle of engineering
The Ubiquitin System: Functional Complexity and Semiosis joined together.
The spliceosome: a molecular machine that defies any non-design explanation.
Isolated complex functional islands in the ocean of sequences: a model from English language, again.
Bioinformatics tools used in my OPs: some basic information.
Functional information defined
What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world
Defending Intelligent Design theory: Why targets are real targets, probabilities real probabilities, and the Texas Sharp Shooter fallacy does not apply at all.
Interesting proteins: DNA-binding proteins SATB1 and SATB2
The amazing level of engineering in the transition to the vertebrate proteome: a global analysis
The highly engineered transition to vertebrates: an example of functional information analysis
Information jumps again: some more facts, and thoughts, about Prickle 1 and taxonomically restricted genes.
What are the limits of Natural Selection?
Well, it appears my last post is stuck in the filter. Oops.
UB:
Thank you so much for the very good work. It should have been me doing it! 🙂
Upright Biped,
Thanks for the brilliant post on the history of the theoretical underpinnings of the biosemiotics argument for ID and for a list of OPs by GP. That is really great! In my blog, I have a special tag dedicated to GPuccio 😉 I have something to add to it.
I have one comment that sort of coagulated only recently. A majority, if not all, of these people you mentioned who undoubtedly substantially contributed to an understanding of the semiotic core of life, were naturalists in the sense that they augmented the sign as an add-on to a description of living organisms as physical systems.
We, supporters of ID, see the establishment of symbolic boundary conditions on the dynamics of matter in living systems, as a hallmark of conscious design.
They remained naturalists. They believed that life could be modeled as a Turing machine, or equivalently, as a cellular automaton that can be described using something like:
state(t+1) = F(state(t=0), state(t)).
I am not aware of anyone of them openly supporting ID in the strong form we are discussing here (maybe I am wrong). At best, they could probably subscribe to the weak ID positing that for life to appear it is enough to have serendipitous starting conditions.
To my knowledge, they never supported ID. And some of them openly denounced ID. If I remember rightly from what I read, H.Pattee was one of them. As far as H.P is concerned, his latest addition to the list of his publications on academia.edu is an example. In the section where he gives examples of evolvability, he says:
Emphasis mine. In these examples he is not convincing because it is his belief, not a clear objective demonstration.
True, H.Pattee acknowledges elsewhere that for this to happen, life needs to start from an open-ended semiotic system. However, he never subscribed to an ID position (in what I read).
David Abel emphasized it (personal communication). All these great minds were and are Darwinists in the wide sense as far as the origin of life is concerned.
I totally agree with you regarding the implications of their reasoning in terms of ID but we need to bear this in mind.
This is one of the reasons why I find what GP writes very important.
UB
Just in addition to my latest post. Thankfully, these people’s contribution to science (philosophy aside) is objective 😉 And we can assess the power of their modeling by tangible advances of technology that followed.
Evgeny,
Thank you very much for your 169 and 170. It will be this evening (US) before I can respond.
EugeneS:
Very good thoughts at #169.
Yes, those people were not supporters of ID. But we must consider that the ID paradigm, even if always present in some form, has only recently gained more relevance and strength. 10, 20 or 30 years ago the power of materialistic naturalism in science was practically absolute. Now it is still the dominant religion, but there is some valid opposition, luckily.
Regarding the passage from Pattee that you quote, it is a good example of how good ideas can be used badly.
First there is the basic admission that only biological objects and humans implement complex symbolic languages. Which in itself should mean a lot. Then there is the false reasoning that they “must have evolved”.
But again, there is no mention of the fact that human language, even if it evolved, is the result of consciousness, of its intuitions, of its ability to understand and to generate symbols and connections.
While the “evolution” of biological objects, and therefore of the complex codes implemented in them, is supposed to be consciousness independent, mindless, purposeless, devoid of any understanding.
There is a big, enormous difference, and yet the two examples (the only two examples in the known universe) are happily put together as though they were simply two different, and self-supporting, examples of the same process.
This is what dogma and prejudice can do, even to the best minds.
By the way, a tag?!!
I am really honored! 🙂
GP
“Which in itself should mean a lot.”
I can relate to what you write in comment 172.
“But again, there is no mention of the fact that human language, even if it evolved, is the result of consciousness, of its intuitions, of its ability to understand and to generate symbols and connections.”
That is probably the greatest flaw in the whole edifice of contemporary scientific thought. In fairness, biosystems themselves are decision making systems (even if unconscious). But attributing all this starting complexity of life to non-telic (‘mindless’) factors is what escapes me. The complexity explosion(s) they keep talking about are inexplicable without conscious design.
As you rightly pointed out in the discussion under one of your OPs, the seemingly simple rules of cellular automata already implicitly encode the resultant complex behaviour. Nothing comes out of nothing in reality, putting the illusory world of an evolutionist aside.
“I am really honored!”
Yes, you are now famous in the Russian blogosphere. Trouble is, I am not that famous myself 😉 Anyhow, whoever reads my posts can read yours!
Regarding the glaring gap between the castles of smoke and reality… Hawking was quoted as saying that all that is necessary to explain the world is gravity. And then, when asked where gravity came from, he seriously answered: “From the M-theory!”
Eugene S:
Would you mind sharing a link to your blog here?
Thanks.
Eugene S,
Oxford University professor John Lennox has said that nonsense remains so regardless of who says it.
To all:
One of the big unsolved questions in this huge issue of transcription regulation is: what controls cell differentiation?
I think the only answer still is: we really don’t know.
However, a lot of new precious information is available.
While this issue probably deserves some future detailed discussion, maybe a new OP, I would like to gather here some interesting data that certainly open big possibilities.
It is well known that a few special TFs can start important changes in the differentiation state of a cell. The best demonstration of that is the possibility of inducing a stem cell state from differentiated somatic cells, just by adding a few TFs, or similar molecules.
The classic work by Takahashi and Yamanaka, in 2006, showed that mouse fibroblasts (and, shortly after, human fibroblasts) could be transformed into induced Pluripotent Stem Cells by adding 4 TFs, which started a process of dedifferentiation lasting a few weeks. The process has been improved in terms of efficiency, and replicated with different combinations of TFs, miRNAs and other factors.
So, we can understand from that that dedifferentiation (and therefore, probably, differentiation) is a very complex process, but that it can be started by some relatively simple initial “switch”, generated by a few important molecules, in particular TFs, which in that sense act as “master regulators”.
But how do these “top acting” TFs implement their function?
This recent paper gives interesting details about that:
GRHL2-Dependent Enhancer Switching Maintains a Pluripotent Stem Cell Transcriptional Subnetwork after Exit from Naive Pluripotency.
https://www.cell.com/cell-stem-cell/fulltext/S1934-5909(18)30287-X?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS193459091830287X%3Fshowall%3Dtrue
The paper is paywalled, but you can look at a clear graphical summary at the link I gave for the abstract.
To make it simple:
a) They have studied, in mouse cells in vitro, the transition from a more embryonic state (ESCs in the graphical abstract) to a slightly more differentiated state, corresponding to the epiblast (EpiLCs in the graphical abstract), before the differentiation of the three primary germ layers.
b) The interesting thing is that these two states, both of them very embryonic, have huge epigenetic differences: studying just one activation signal, cohesin binding, they found that more than 5000 genes were specially active in each of the two kinds of cells, but only 2205 were common to the two states. That’s a big difference, for two cell types that are apparently very similar.
c) A further analysis focused on GRHL2, a TF that seems to have an important role in this transition, for a specific subset of genes.
d) The very interesting thing is that GRHL2 seems to be necessary and sufficient to induce a specific epigenetic transition for a very specific subset of genes.
e) The effect of GRHL2 is to activate a whole set of enhancers that are inactive in ESCs and become active in EpiLCs.
f) But the really surprising fact is that the genes regulated by this new set of enhancers were already expressed, at a similar level, in ESCs. IOWs, the activation of a completely new set of enhancers for those genes does not increase transcription of those genes in EpiLCs.
g) The reason for that is that those genes were already transcribed in ESCs, but their transcription was regulated by a different set of enhancers.
h) So, the amazing conclusion is that GRHL2 contributes to the transition from ESCs to EpiLCs, both of them very high level stem cells, by changing the set of enhancers that regulated the transcription of a specific subset of genes, without changing the level of transcription of those genes.
i) The authors believe, and in part demonstrate, that the meaning of such a change in the set of active enhancers for the same subset of genes is to prepare the new cell (EpiLC) for further differentiation, which will take place only in the following phase, the differentiation of the three primary germ layers, specifically towards epithelial differentiation.
That’s really hot stuff. This really shows that big epigenetic rearrangements precede visible differentiation and visible changes in transcription, and that specific discrete states with complex epigenetic rearrangements precede explicit transcription and differentiation states.
This has the clear flavour of intelligent programming, of a process implemented in definite steps, each of them extremely purposeful and oriented to the final result.
Eugene S,
I like your commentaries.
Thanks.
gpuccio,
Your post #177 is a real winner. Thanks.
gpuccio’s post #177 is a real 3-pointer (basketball).
Ok, sticking to Peter’s basketball terminology, gpuccio’s post 177 is just another slam dunk among the many gpuccio has already done here.
gpuccio’s team (DATCG, UB, Eugene S) is unbeatable.
All their stubborn opponents can safely do is run for the hills.
OLV,
I like the entire post 177, but especially where it hints at a future OP on the subject. Yeah!
OLV and jawa,
I agree with all you said, except putting pressure on gpuccio.
We know he’ll write those excellent OPs when he can.
We must respect that.
Peter,
Don’t be so sensitive.
Nobody is putting pressure on gpuccio to write the OP he has hinted at.
jawa is simply highlighting gpuccio’s own hinting at it.
Such a possibility is exciting indeed.
Peter,
I agree with OLV.
An OP by gpuccio on such a fascinating biology topic could be a paradisiacal feast of food for thought.
This paper says that “transcriptional regulation” is well studied?
But according to gpuccio it hasn’t answered some important questions yet.
Did they mean extensively studied?
“Gene expression is determined through a combination of transcriptional and post-transcriptional regulation. While transcriptional regulation is well studied, less is known about how post-transcriptional events contribute to overall mRNA levels.”
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6086665/
jawa,
Perhaps that’s what they meant.
jawa:
What they probably mean is that we know a few things about transcriptional regulation, and much less about post-transcriptional regulation. Which is probably true.
Both issues, however, are still black holes where we need to find a true direction.
I would add translational regulation, and post translational regulation. And a few other things (some of them possibly not even imagined!). 🙂
OLV:
It’s our team, and you are all part of it. 🙂
To all:
OK, I agree: #177 is definitely interesting.
But credit where credit is due: the paper is really very good! 🙂
gpuccio:
Now I understand. Thanks.
Hello ES,
Who doesn’t?! 🙂
Yes, they were. I agree with GP’s sentiments in #172; he hit all the salient points and I don’t have much to add. The fact that the gene system requires a language structure is one of several critical observations.
As for the question about naturalism, I would only add that what I’ve learned from these naturalists has nothing to do with their naturalism. First and foremost, I’ve learned important things about the gene system, which comes from good science and descriptions, not personal metaphysics. Their naturalism was irrelevant to the production of good descriptions, just as it should be, and I thank all of them.
UB
“what I’ve learned from these naturalists has nothing to do with their naturalism”
Exactly! What they produce is scientific models that are testable with an objective quality criterion named ‘practice’.
Science is itself a product of design and stands on an assumption that, based on historical observations, an objective rational model can be constructed to predict future observations. And it really works, at least to a limit, and sometimes remarkably well. Why should science work at all? What is there in mathematics that enables it to describe the world so efficiently? Various prominent scientists did not feel ashamed to call this a miracle (Wigner, Feynman, Planck to name a few). The only rational answer to this can be that the world itself is a product of design.
OLV, jawa
Thanks very much! I don’t think that giving links to Russian blogs will be of much interest for the English speaking audience. But anyway:
mns2012.livejournal.com (personal blog)
biosemiotics.livejournal.com (a series of notes and re-posts in support of the biosemiotic argument for ID)
Apologies for the broken links. One more time:
one
two
Eugene S:
Thank you for the links!
I know somebody who is fluent in Russian and would like to read your blogs.
To all:
The sequence suggested in Fig. 2 of the OP seems to be supported by recent papers, like the one quoted at comment #177.
Here is another interesting one, which seems to show a similar connection between TFs, epigenetic marks and cell states.
The Epigenetic Factor Landscape of Developing Neocortex Is Regulated by Transcription Factors Pax6 -> Tbr2 -> Tbr1
https://www.frontiersin.org/articles/10.3389/fnins.2018.00571/full
“Epigenetic factors (EFs) regulate multiple aspects of cerebral cortex development, including proliferation, differentiation, laminar fate, and regional identity. The same neurodevelopmental processes are also regulated by transcription factors (TFs), notably the Pax6 -> Tbr2 -> Tbr1 cascade expressed sequentially in radial glial progenitors (RGPs), intermediate progenitors, and postmitotic projection neurons, respectively. Here, we studied the EF landscape and its regulation in embryonic mouse neocortex. Microarray and in situ hybridization assays revealed that many EF genes are expressed in specific cortical cell types, such as intermediate progenitors, or in rostrocaudal gradients. Furthermore, many EF genes are directly bound and transcriptionally regulated by Pax6, Tbr2, or Tbr1, as determined by chromatin immunoprecipitation-sequencing and gene expression analysis of TF mutant cortices. Our analysis demonstrated that Pax6, Tbr2, and Tbr1 form a direct feedforward genetic cascade, with direct feedback repression. Results also revealed that each TF regulates multiple EF genes that control DNA methylation, histone marks, chromatin remodeling, and non-coding RNA. For example, Tbr1 activates Rybp and Auts2 to promote the formation of non-canonical Polycomb repressive complex 1 (PRC1). Also, Pax6, Tbr2, and Tbr1 collectively drive massive changes in the subunit isoform composition of BAF chromatin remodeling complexes during differentiation: for example, a novel switch from Bcl7c (Baf40c) to Bcl7a (Baf40a), the latter directly activated by Tbr2. Of 11 subunits predominantly in neuronal BAF, 7 were transcriptionally activated by Pax6, Tbr2, or Tbr1. Using EFs, Pax6 -> Tbr2 -> Tbr1 effect persistent changes of gene expression in cell lineages, to propagate features such as regional and laminar identity from progenitors to neurons.”
We are here in the mouse embryonic neocortex, where 4 cell types can be defined, in order of differentiation:
1) Radial glial progenitors (RGPs)
2) Intermediate progenitors a (aIPs)
3) Intermediate progenitors b (bIPs)
4) Postmitotic projection neurons (PNs)
Now, as can be seen in Fig. 1 of the paper, the three transitions that lead from the stem cell (RGP) to the differentiated neuron (PN) are controlled by the sequential expression (cascade) of three TFs, master regulators of the process:
Pax 6 -> Tbr2 -> Tbr1
So again, we can see that the individual expression of one TF controls the transition from one state to another: master regulator TFs definitely act as powerful switches.
But again, if you read the paper, you can see that each of the three sequential TFs acts in a very specific and complex way on the epigenetic landscape of the cell. To assess that, the authors considered a group of specific epigenetic factor genes (EFs), as described.
The three TFs acted on those EFs in many different ways:
a) By acting on N-methyltransferases, and therefore on DNA methylation and demethylation:
“Pax6, Tbr2, and Tbr1 regulate this system by repressing and activating key genes, including repression of the caudal marker (Gadd45g) by Pax6 and Tbr2 (Figure 2F). Thus, DNA methylation and demethylation may regulate not only neuron differentiation (Sharma et al., 2016) and astrogenesis (Fan et al., 2005), but also cortical regionalization under the control of Pax6 and Tbr2.”
b) By acting on histone marks:
– Acetylation and deacetylation:
“The present analysis identified several HATs and HDACs with cell-type-specific expression, and extensive regulation by Pax6, Tbr2, and Tbr1 (Figure 3). ”
– Methylation and demethylation, through:
— Trithorax/COMPASS Activating Complexes:
“These results indicate that deposition and removal of TrxG marks are actively regulated by Tbr2 and Tbr1 during neuronal differentiation (Figure 4F)”
— Polycomb Repressive Complex 1:
“These data suggest that canonical PRC1 complexes are present in all types of cortical cells (although most abundant in progenitors), and are minimally regulated by Pax6 -> Tbr2 -> Tbr1. In contrast, non-canonical PRC1 complexes exhibit differentiation-related changes, such as upregulation of Rybp in IPs and new PNs. Notably, Tbr1 directly activated two non-canonical PRC1 subunits (Rybp, Auts2) implicated in brain development”
c) By acting on ATP-Dependent Chromatin Remodeling Complexes, especially BAF Chromatin Remodeling Complexes.
d) By acting on Non-coding RNA-Mediated Epigenetic Regulation:
“Together, these findings indicate that several lncRNAs are specifically expressed at high levels in IPs and new PNs, and that several miR genes are expressed with cellular or regional specificity. The gradient of Mir99ahg, and its possible targeting Fgfr3, suggest a new role for miR in cortical patterning. Finally, their direct regulation by Tbr2 and Tbr1 suggests that lncRNA and miR genes have significant functions in cortical development (Figure 12G).”
Table 1 sums up many of these EFs, and how they are regulated by the three TFs.
Another important point is that the TF cascade controls not only the differentiation of individual cells, but also their localization in the neocortex, their “regional identity”, which is of course as important to the generation of the final structure and function as the differentiation of individual cells. The paper gives interesting hints on that process too.
So, there is this central cascade of three master proteins, and in many ways it controls the intermediate states of differentiation. But the effects of each of these master regulators are extremely complex, and work mainly through specific and delicate regulation of epigenetic factors. A complex network of feedforward and feedback regulation guarantees that the process proceeds through the various ordered steps.
Again, the transcriptome/proteome (for example, the expression of each of the three TFs in cascade) modifies the chromatin configuration (the epigenetic landscape), determining some specific state. And the complex feedback of the epigenetic landscape to the transcriptome/proteome makes the process constantly dynamic, and determines the transition to a new state.
The paper’s conclusions:
Where is George Castillo now?
🙂
gpuccio:
When they say:
“Coordinate Regulation of Cortical Development by TFs and EFs”
Does it mean that the coordinate regulation is done by TFs and EFs or by something else using TFs and EFs as important tools?
For example, what determines how many TFs and EFs should be present, and when and where they should be there?
Or are they always available anyway?
Thanks.
OLV:
You ask difficult questions! 🙂
I think that the concept of coordinate regulation means that, at all times, the different levels of regulation act one on another, and the state transition is the global result of all those levels and of all those interactions.
However, the last papers I quoted seem to show that some specific TFs, those that act as master regulators of some vast differentiation scenario, are important general switches that can activate or deactivate whole general procedures.
In a sense, they are a regulation backbone that can define the complex procedure that will be active in that cell at some time.
So, the development process could be in a way modular: a higher-level thread would define the ordered expression of the master regulators, guided by specific information (more on that later). Then, at a lower (and more complex) level, each specific “master regulator” scenario defines in detail what specific procedure will be implemented.
How? We don’t really know, but as you have seen we know certainly more than, say, a few years ago.
The most reasonable idea is that, while many of the details for each differentiation procedure are written at all regulation levels, certainly a very important role is reserved to the following interaction:
TFs + enhancers
We have seen at #177 that master regulators can act by changing the enhancer landscape. Indeed, those master regulators could be special TFs that can access specific chromatin sites even when they are not accessible, so that they can make them accessible to other TFs.
Now, let’s reason a moment.
If we really have about 1 million enhancers in the human genome (as many believe), even assuming that we want to choose a combination of 50 specific enhancers to define one high level differentiation landscape (which is a rather conservative hypothesis), the possible gross combinations are:
3.283924e+235
which is quite a number.
It is quite reasonable that only very few of them make sense for an ordered cell development of some kind.
So, again, we are in front of a huge problem of complex functional information, just to select a functional combination of elements from the search space of enhancers.
Of course, the complexity increases exponentially if the number of requested enhancers is bigger: for 100 enhancers, the combinations are:
1.066219e+442
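The two combination counts above are easy to verify; here is a quick sketch in Python, using only the standard library’s `math.comb` (the 100-enhancer count is too large for a float, so its magnitude is reported as a base-10 logarithm):

```python
import math

# ways to choose 50 specific enhancers out of ~1 million candidates
c50 = math.comb(1_000_000, 50)
print(f"{c50:.3e}")  # ~3.284e+235, matching the figure quoted above

# for 100 enhancers the exact count exceeds float range,
# so report log10 of it instead (math.log10 accepts big ints)
c100 = math.comb(1_000_000, 100)
print(f"10^{math.log10(c100):.2f}")  # ~10^442.03, i.e. ~1.066e+442
```

Both results agree with the numbers given in the comment, so the “huge search space” point stands on straightforward arithmetic.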
So the idea is, if you have 1 million different enhancers available, and 2000 TFs, and 60000 lncRNAs, and many other variables, you can really write a lot of specific procedures by intelligently manipulating them.
Each enhancer can be, on average, 1000 bp or more in length. There is a lot of search space to individualize them so that they can be important information tools.
Of course, a lot of questions remain unsolved. For example:
a) What guides the correct cascade of master regulators?
b) What makes the procedures robust? They are very complex, so they are certainly subject to many possible errors.
c) Are there other levels of control and regulation that we don’t know of, at present?
The third answer is the easiest, so I will give it first: definitely yes. I am sure that there are many levels of regulation and control that, at present, we cannot even imagine.
However, as science requires, we must reason with what we already know, otherwise we could become like our neo-darwinist friends! 🙂
To try to answer, very partially, the first two questions, we must remember a very important thing, that I have tried to emphasize just at the beginning of this OP:
The working information that is available in a cell at each specific time and in each specific state is always the sum total of all the information that is active at the genetic and epigenetic level, IOWs the sum total of all the active information in any part of the cell at that moment.
A cell never exists in a generic state, it never starts from scratch. The genome is never completely available, never completely blocked. The transcriptome is ever changing. The non coding RNAs landscape is ever changing. The proteome is ever changing.
So, there is no such thing as “a cell”. There is always “a cell in one specific informational state at one specific time”.
Life is a continuum of states, never an object.
So, if we conventionally put a start at some place, we are just defining a conventional start in a continuum.
For example, let’s say that we start from the zygote, as soon as it becomes one cell with its new genetic information.
And, of course, with its specific epigenome that reads in a specific way that genetic information.
So, the program that is active in the zygote is engineered so that that particular cell can proceed to its following states.
Each program written in the dynamic cell is a specific selection of information that can guide that specific cell in that specific state to some new specific state. And so on.
Of course, much of that must be written in the DNA sequence: protein genes and promoters and enhancers and non coding genes are all written there. But they can only work in the appropriate dynamic context, and nowhere else.
The complexity of that all is overwhelming. Add to that that many factors come from outside the cell, from the “environment”.
But that environment is, of course, part of the program, part of the engineering.
It includes signals from other cells, or even signals from environmental niches. But those signals are functional, not random. They have been engineered, too.
The program, with its complexity, could never work if the signals from the environment were random. That’s why the embryo requires a very protected and controlled environment.
Of course, random noise can always happen: it does happen, and often it can destroy or deform the program. As we well know.
But, in general, the program works very well. Because the procedures are robust. And the general control is robust.
There is a lot of very, very good engineering there. A lot of extremely good Intelligent Design.
Great comments, questions, papers, and collections of information, guys. Having a hard time keeping up, but continuing to follow for now.
Upright @155, thanks for consolidated Gpuccio Post links 🙂
GP @All – Duuuuuude 😉 EFs and TFs be BFFs – for Life! Haha 😉
OLV @199,
great questions 🙂 Who regulates the regulators?
I see GPuccio has it covered at #200
Gpuccio @200,
What of an initial Zero State Zygote?
Might we call it an Initialized State of Cell Being? Prior to launch, so to speak, of its free interaction with surrounding cells and environments?
A Prescribed or Pre-loaded state? Influenced/Updated by ancestry and epigenetic factors, environment, etc., up the line prior to cell creation.
So that each part of this cellular puzzle is prescribed to work together and vary according to environmental thresholds? Heat, cold, food limitations, rain, sun, disease, etc., etc.
Essentially there are some well known thresholds to life. And none of them seem to be compatible with blind mutations, that may mutate any number of these thousands of epigenetic factors, TFs, PTMs, RNAs, etc., etc.
And…
But Gpuccio, playing Darwin’s advocate I thought all of this was a result of blind, unguided “processing” coming together while in the safe confines say, of duplicate genes?
Where novel functions are built… to “coordinate” with other novel functions built by blind, unguided “processing” units like EFs and TFs 😉
“Processing” – Can blind, random events, creating mutations blindly, be correctly termed a “process”?
I guess, but isn’t that a bit of a stretch?
A truly random, unguided process produces functional units that can interact with other functional units denovo?
Thus when the EvoLabs set up by Robert Marks and Dembski were created, they ran their experiments and found that the concept of Conservation of Information was a key component of these so-called evolution models. The evolutionists were cheating, sneaking information into the models to watch for and save.
Thus the evolution programs were building in key recognition patterns and/or plateaus reached by the program of functional units to “evolve” from each point, to eventually count as “blind evolution”, when in fact they were sneaking in Intelligent selection.
Showing that Guided, intelligent evolution is the only way they could “recreate” “blind” evolution.
Kinda makes you laugh at the circular logic of blind evolutionists.
An exercise in futility. Nothing was evolving through a blind process, it was compared to preconceived functional points, saved and repeated.
Bunch of money spent to essentially do what Dawkins did at a higher cost.
Oh my! So, I forgot to include the link to EvoLabs, Robert Marks and crew. I don’t think Dembski’s been involved recently.
But I went to get the link and look what I found at top of their page!
http://www.evoinfo.org/index/
A paper on unbounded evolution. Ha!
“OBSERVATION OF UNBOUNDED NOVELTY IN EVOLUTIONARY ALGORITHMS IS UNKNOWABLE!”
http://robertmarks.org/REPRINT.....ovelty.pdf
Really love the work that Marks, Dembski, Ewert(Awesome work at Discovery now), and others have done.
They’re shining a good amount of light on many Darwinist assumptions that fail when closely examined.
Oh fun! 🙂
Read a bit on the PreInitiation Paper Gpuccio, fascinating!
I always enjoy reading a paper where scientists use new words to convey what is happening in a programming environment of assembled protein complexes.
“Preinitiation” is not listed in
https://www.merriam-webster.com/dictionary/preinitiation
Essentially a programmed complex (the “PreInitiation” stage) is assembled and stationed for engagement, awaiting all-systems-go for transcription service.
It’s a normalization setup step prior to Transcript Initialization phase.
It’s a pre-processor step. It locates, sets up (denatures) and preps the site, IOWs it pre-processes; it does not “preinitiate”, which is not a word.
Darwinists make this too hard.
From a Design or programming perspective, it’s normal pre-processing techniques that allow flexibility in coding for multiple transcriptions and purposes. I suspect it’s modular for efficiency and different processing requirements.
I expect there are several versions of these Pre-processor Units for Transcription which should be named appropriately.
I may be wrong. I’ve not taken time to look at any other papers or information on the “PreInitiation Complex” pre-processor.
But that’s essentially what it is, and we should have tons of pre-processors, just like we have post-processors (Ubiquitin post-translation, for example).
Pre and Post processing are uniquely Design concepts, not blind, certainly not random and must be coordinated with the application – Transcription for example.
Darwinists are in real trouble, they’re just too blind to see it.
DATCG at #205:
“From a Design or programming perspective, it’s normal pre-processing techniques that allow flexibility in coding for multiple transcriptions and purposes. I suspect it’s modular for efficiency and different processing requirements.”
Well said. It certainly is.
The preinitiation complex at the promoter is a strange thing. In a sense, it is the “universal” part of the transcription process: it is more or less the same for all transcriptions. And yet, it is extremely complex and flexible.
The complex that is created at the enhancer is certainly the part that confers most of the specificity for each different transcription process, modulating its rate, speed and so on. Enhancers and specific TFs, with everything that helps them, are certainly a forest of engineered specificity.
And yet, the preinitiation complex at the promoter is the tool that receives and interprets that tidal wave of specificity and meaning.
And the mediator complex seems to be the main interface between the two poles.
Fascinating, indeed! 🙂
Gpuccio, you’re having to much fun 🙂
It is fascinating, incredible design for sure 😉
So, quickly, this paragraph from the PIC paper you posted above in the OP…
“Recent landmark studies on human and yeast PIC formation provided more differentiated views of the first steps in the transcription initiation process, corroborating the concept of stepwise assembly while also hinting at significant differences that may be present between the species [18], [19] (reviewed in Ref. [20]).”
I suspect the sub-units or sub-modules (i.e. subroutines) will vary across different species in the PIC pre-processor? I’ve not had time to read the entire paper or review any others.
Will try to check in later tonight or tomorrow. Great stuff again Gpuccio, thanks for all the work you do on these topics!
One last comment…
The PIC is “universal”, but maybe a rough analogy is like a universal joint for cars and trucks. It comes in different flavors based upon different models (or, in the case of humans and yeast, species).
gpuccio:
What you wrote in 200 is the material for a presentation at a conference. Very interesting.
Thank you so much.
To all:
Of course, a big question remains: how, in the course of evolution, are specific differentiation procedures acquired by organisms and genomes?
One thing is certain: they are acquired by design, certainly not by the imaginary RV + NS mechanism postulated by neo-darwinists.
But yet, the interesting question remains: how is design implemented? How are, for example, the complex and specific networks of enhancers that define different tissues and organisms written in the genomic sequence?
Those who are familiar with what I have been writing here already know that I am strongly convinced that a special role in writing the procedures can be assigned to guided transposon activity. IOWs, transposons as design tools.
So, I am happy to mention here a recent paper (June 2018) that is rather appropriate to support my old view:
Transposable elements generate regulatory novelty in a tissue-specific fashion
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6006921/
The paper is open access, so I invite anyone who is interested in this aspect to read it.
gpuccio,
I like to see how you pull out all these recent papers out of nowhere (like a magician) to support your ID concepts.
The comment you just wrote in 210 makes me wonder in awe.
Thanks.
jawa,
Yes, I coincide with you on that. I enjoy seeing all these recent papers gpuccio pulls out of the magician’s hat so easily and cites them here for our delight.
It’s interesting that leading-edge science research is providing all that material suitable for ID to be reaffirmed. It appears to be purposely done.
BTW, has somebody seen George Castillo around lately?
Gpuccio @210,
Nice lead-in paragraph of “Results.” Specifically my interest piqued with assignment of “intronic” regions.
These Introns and Intronic regions are turning out to be important components and not throw-away Junk as Darwinist once proposed.
I’m also curious what your thoughts are on the “tissue-specific” connection of TEs/TFs, and is this how you see it possibly adding Macro changes by Design?
Gpuccio,
FYI, browsing one of your post linked by Upright Biped, found some broken links under the Graphs.
https://uncommondescent.com/intelligent-design/the-amazing-level-of-engineering-in-the-transition-to-the-vertebrate-proteome-a-global-analysis/
DATCG:
What links, exactly? They seem to be working to me…
DATCG at #214:
Yes, the enrichment at introns seems specially interesting.
PeterA:
“It’s interesting that leading-edge science research is providing all that material suitable for ID to be reaffirmed.”
Yes. I do believe that ID will be vindicated, first of all, by mainstream research. IOWs, by facts.
Off-topic but still seems to confirm what gpuccio, UB and DATCG have been saying all along lately:
Building the right centriole for each cell type
To all:
Enhancers are still very elusive. The key to their specificity and specificity modulation is still quite a mystery, but some information has been gathered.
It seems certain that the modulation of short motifs, usually consisting of a few nucleotides, can greatly influence the specificity and functions of enhancers in their relationships with TFs, even with the same TF.
So, small sequence modifications can give outstanding regulation results, in a continuum of possibilities.
Here are two good papers about that issue:
Dissection of thousands of cell type-specific enhancers identifies dinucleotide repeat motifs as general enhancer features
https://genome.cshlp.org/content/24/7/1147.full
And this more recent one:
A massively parallel reporter assay reveals context-dependent activity of homeodomain binding sites in vivo.
https://www.ncbi.nlm.nih.gov/pubmed/30158147
Note how cooperative binding and specific three-nucleotide spacings can greatly influence the modalities by which one specific TF, a master regulator, works, regulating, as a whole, the very delicate process of photoreceptor development.
gpuccio,
the second paper about the CRX seems to add intrigue to the plot. Definitely another area to dig in.
Thanks.
This one seems related:
CRX directs photoreceptor differentiation by accelerating chromatin remodeling at specific target sites
I bolded some text.
I’m trying to find the mechanism to select the binding sites they act on.
Never mind. I had misunderstood it. Aren’t the binding sites determined by the coding?
How is that code represented? Is that the domain structure of the protein? Isn’t this the same known biochemistry rule of chemical binding? Is that what they refer to as uniquely-coded? IOW, simple biochemistry?
Please, can you explain this? Thanks.
is this related?
Gene regulation underlies environmental adaptation in house mice
OLV:
I don’t think that these issues are well understood.
However, as can be seen in the papers I quoted, one of the components that determine the binding of some TF to specific enhancer sites is certainly the presence in those enhancer sites of specific motifs that are recognized by the TF.
These motifs are usually short, a few nucleotides.
However, the relationship between motif and binding is complex, and many other factors are involved. The paper quoted at #220 shows how small variations in the motif can cause differences in the binding. But motifs alone cannot explain all the specificity of the binding.
With master regulators, the general idea seems to be that they are capable of recognizing their specific enhancer sites even if they are not accessible at the time. So, the master regulator can access those sites and make them active.
So, just as an imaginary example, let’s say that at some time, as a result of something that happened before, a master regulator TF, let’s call it A, becomes highly expressed in a cell which is still scarcely differentiated.
So, A can find a number of specific enhancers, let’s imagine there are 300 of them in the genome, whatever their state at that moment (accessible or not).
So, A accesses those 300 enhancers, and binds to them. Probably with different affinity and specificity, according to small differences in the sequences (motifs or else) in each enhancer, and maybe other factors. As we have seen for CRX, maybe one molecule of TF binds some enhancers, while a homodimer or a heterodimer binds to others, with different specificity.
IOWs, A creates a specific map of activated enhancers in the cell. Because it is a master regulator.
That map is the foundation for what happens after that. The activated enhancers start their specific work. Each of them binds other TFs, or other factors, and binds to some specific promoter, creating specific loops and reorganizing chromatin structure.
So, transcription changes: new mRNAs and new proteins are synthesized. New non coding RNAs too. The transcriptome/proteome changes, radically. The cell differentiates. Maybe it has, now, photoreceptors that did not exist before.
Probably, many master regulators must act in sequence to give a final differentiated state. Each of them working with a lot of different subordinate TFs.
OK, this is just a tentative scenario, but it seems to be consistent with what we know.
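Purely as an illustration of this tentative scenario (a toy model, not real biology: the `Enhancer` class and `bind` function are invented for the sketch), the “pioneer” behaviour of a master regulator, able to open closed chromatin sites that ordinary TFs cannot touch, could be caricatured like this:

```python
# Toy model (illustration only): a master regulator TF can bind its
# target enhancers even when the chromatin is closed, opening them;
# an ordinary TF can bind only sites that are already accessible.

class Enhancer:
    def __init__(self, name, accessible=False):
        self.name = name
        self.accessible = accessible
        self.bound_tfs = set()

def bind(tf_name, enhancers, is_master=False):
    """Return the list of enhancers the TF actually binds."""
    bound = []
    for e in enhancers:
        if is_master:
            e.accessible = True  # "pioneer" activity: open the site
        if e.accessible:
            e.bound_tfs.add(tf_name)
            bound.append(e)
    return bound

# 300 target enhancers, all initially closed, as in the example above
targets = [Enhancer(f"enh{i}") for i in range(300)]

# an ordinary TF finds nothing accessible...
assert bind("ordinaryTF", targets) == []

# ...while master regulator "A" opens and binds all 300, creating the
# activated-enhancer map that downstream TFs can then act on
activated = bind("A", targets, is_master=True)
print(len(activated))  # 300
```

In this caricature the only difference between a master regulator and an ordinary TF is the ability to change accessibility, which is roughly the distinction the papers quoted above draw for pioneer-like master regulators.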
gpuccio,
the tentative scenario you described seems quite complex.
OLV,
what you wrote is an understatement.
OLV,
“seems quite complex”?
are you kidding?
It’s definitely very complex (functionally).
No doubt about it.
BTW, gpuccio’s description of such a tentative scenario is excellent as far as I can see.
Pioneering, chromatin remodeling, and epigenetic constraint in early T-cell gene regulation by SPI1 (PU.1)
Co-regulation of ribosomal RNA with hundreds of genes contributes to phenotypic variations
Gpuccio @216,
Apologies, not links, but Graphs. It was just below Figure 1, with the sentence referencing the following Figure 2 graph.
“Figure 2 shows a plot of the density distribution of human-conserved functional information in the various groups of organisms.”
It’s resolved now as I clicked on the broken graph and it was giving me a Certificate error. I hit continue and now it’s resolved. It might be my browser. Both graphs work now.
This is one of the graphs that initially was broken…
https://www.uncommondescent.com/wp-content/uploads/2017/03/FigA.jpg
I failed to copy down the “certificate error” but it’s fine now after I hit continue on it.
Gpuccio @226,
These are very interesting thoughts you listed in your example…
OK, that gives me a good picture of how you see these possible scenarios. But how do you think it’s guided? By surrounding environmental cues?
Awesome, but are the new photoreceptors a result of guided/directed conditions?
That interaction of “many master regulators” acting in sequence means an ability to recognize the new photoreceptors as legitimate additions and not foreign to the cell? Or, maybe a better way to say it: the overall system must recognize them as “safe” while being built as new novelty, not as an attack or a mutation to repair. Which leads me to ask: how does the system know it’s valid?
I’m trying to understand how these radical changes are permitted to survive.
“OK, this is just a tentative scenario, but it seems to be consistent with what we know.”
That’s a … for me that is, it’s a large area to cover. Do you mind giving a bit more detail on what is “consistent with what we know?” Are you describing fossil records, or molecular evolution, or both?
Thanks Gpuccio! As always your post and comments always give me new information and ideas to ponder.
DATCG (232):
I agree that gpuccio’s comment @ 225 is thought provoking.
Glad you raised those questions. Thanks.
This widens the territory to explore.
Peter and jawa,
Yes, you’re right. I was too cautious in my statement. Thanks for the correction.
DATCG,
It seems like gpuccio’s OP and comments opened a can of worms.
Too late now. 🙂
Where’s George Castillo when we need him?
I’m sure he could provide a much simpler explanation than gpuccio’s “tentative scenario”
🙂
To all:
Guys, I suspected that my comments at #225 could evoke a few reactions! 🙂
OK, I will try to answer your comments as well as I can. Of course, always consider that a lot of things are not yet understood. But we like to deal with these difficult questions, so let’s have fun! 🙂
To all:
OLV at #226:
PeterA at #227:
jawa at #228:
OLV at #234:
Guys, you are definitely right!
It is not only functionally complex. It is mind-bogglingly functionally complex!
Just think: my tentative scenario is an oversimplification of what could happen just for one process. And it is just the simple backbone of it.
Each differentiation process involves probably a lot of different specific procedures (a cell does not need only photoreceptors to differentiate, but a lot of other things).
And we have myriads of different differentiation pathways and states.
And those differentiation pathways must be integrated in the general plan of tissues, organs, organisms.
And the information for all that must be present, in some way, in the genome and epigenome of the original embryo.
Now, let’s ignore for the moment the component in the dynamic epigenome. Let’s consider for a moment only the storage memory that is the DNA sequence. That information is potentially the same in all cells of an organism, with few exceptions.
Now, our darwinist friends have lived more or less in the conviction that most of that information was stored in the protein coding genes: 20000 genes, 1.5% of the human genome.
But we know very well, now, that that is not the case.
Let’s consider just the enhancers.
So, let’s say we have 1 million enhancers in the human genome.
Each of them is a very specific depository of information. We have seen that even enhancers which bind the same TF can have important differences that will condition what they do after having bound the master TF.
So, let’s say that each of those 1 million enhancers has very specific information about one or more possible downstream procedures.
That’s a lot of information, indeed!
But the really important point is that this information is combinatorial. Those 1 million enhancers are used in groups, and build different pathways. Each of them can contribute to different procedures. The possibilities are, really, mind-boggling.
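To get a feel for how fast that combinatorial space grows, here is a back-of-envelope sketch in Python. The figure of 1 million enhancers is the illustrative number used above, and the group sizes are arbitrary assumptions, chosen only to show the scale:

```python
from math import comb, log2

def combination_bits(n_enhancers, group_size):
    """Bits needed to specify one particular subset of enhancers
    among all possible subsets of that size."""
    return log2(comb(n_enhancers, group_size))

# Hypothetical numbers: ~1 million enhancers, used in small groups.
# Even tiny groups yield enormous combinatorial spaces.
for k in (5, 10, 20):
    bits = combination_bits(1_000_000, k)
    print(f"choosing {k} of 1,000,000 enhancers: ~{bits:.0f} bits")
```

Choosing just 10 enhancers out of a million already takes on the order of 180 bits to specify, and real regulatory states presumably combine far more elements than that.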
And, of course, enhancers are only one component. One important component, but by far not the only one. Of course we have the genes themselves, both coding and non coding, and the promoters, the TFs, the non coding RNAs, the mediator complex, and so on.
This is just to offer a few thoughts! 🙂
OLV at #229 and #230:
Thank you for the links! 🙂
OLV at #229:
Wow!
Pioneering, chromatin remodeling, and epigenetic constraint in early T-cell gene regulation by SPI1 (PU.1)
https://genome.cshlp.org/content/early/2018/08/31/gr.231423.117#aff-3
Emphasis mine.
What can I say? Lots of confirmations here.
Of course, T cell differentiation is one major scenario. We have to cover it as soon as possible.
Again, the concept of a pioneering and transient master regulator is exceedingly intriguing!
DATCG at #231:
I am happy you solved the problem! 🙂
DATCG at #232:
Interesting questions, as usual!
“OK, that gives me a good picture of how you see these possible scenarios. But how do you think it’s guided? By surrounding environmental cues?”
In a sense, I believe that the scenario develops according to a pre-defined plan written in the global information present in the zygote (genetic and epigenetic, and whatever).
We have evidence of that because a similar and functional scenario develops each time a new organism is born. Flies develop as flies, mice as mice, humans as humans. With all the possible individual variation, with all the possible errors, the program works very well. Each single multicellular being on our planet is evidence of that.
So, the program has the information to develop itself. Part of it is in the genome, both coding and non coding. Part of it is in the constantly changing epigenome, starting (conventionally) at the zygote.
What about environmental cues?
Well, I believe that contingent environmental factors can only act as background noise, variables that have to be factored in and controlled by the program. For example, the embryo implantation in mammals and humans is subject to many unpredictable contingent variables (position, local conditions, and so on). The program must be able to manage random contingency, but it is certainly not guided by it.
But there are environmental cues that are, instead, created by the program itself. Signals from local environment (the environment created by the program), or from other cells that are part of the program.
IOWs, the program generates different parts and components, and those parts and components not only differentiate following their own procedures, but also constantly exchange signals, cues, corrections, information, guidelines.
It’s overwhelming.
Cell-cell communication (for example, cytokines) is another major issue that I would like to discuss, sooner or later.
You ask:
“Awesome, but are the new photoreceptors a result of guided/directed conditions?”
Well, I would say that everything that is complex and functional is the result of guided/directed conditions.
Just a clarification. When I said:
“The cell differentiates. Maybe it has, now, photoreceptors that did not exist before.”
I was not referring to the evolution of a new system, only to the development of the system in a new cell type. The differentiated cell expresses structures that did not exist in the stem cell. This is devo, not evo.
You say:
“That interaction of “many master regulators” acting in sequence means an ability to recognize the new photoreceptors as legitimate additions and not foreign to the cell? … I’m trying to understand how these radical changes are permitted to survive.”
I may be wrong, but I think that you are discussing here the evolutionary origin of the changes. But that was not what I was saying.
In the developmental program, of course, the new structures are “expected”, indeed, “wanted”. The program generates them, and creates the right environment for them.
The evolutionary origin of all that is another matter entirely. Of course, it certainly requires a global re-engineering of everything, a wide-range implementation. Gradual it could be, but always global in its perspectives and results.
Regarding the “consistent with what we know”: I was just referring to the information in the recent papers I quoted about transcription regulation, and in many others that I did not quote for brevity. While my scenario is certainly tentative and extremely simplistic, it is based, however, on facts described in those papers.
gpuccio:
I highly appreciate that you wrote such a good explanation.
The big picture is getting a little clearer.
Thanks.
OLV and Peter,
Gracious gpuccio said that we were right, but really we were off by a lot. He claims that his “tentative scenario” is an oversimplification of the real deal, which is not well understood yet. Also, he elevated the complexity level to the mind-boggling category.
As gpuccio himself has said this is fascinating indeed.
I’m enjoying every minute of it.
jawa,
I totally agree with your comment.
gpuccio’s comments @225 and 238 are real gems that must be read carefully, along with all his OPs and posts in this website. They are thought-provoking.
gpuccio’s contributions to explaining the intelligent design concept are very valuable.
Note that his discussion threads seem to get many visits from readers who don’t identify themselves. That’s interesting to me. I suspect some of those visitors disagree with him but don’t dare to jump into the discussion for lack of solid arguments that could withstand gpuccio’s well detailed explanations.
DATCG, Peter, OLV, UB, all,
did you notice this statement at the end of #240?
“Of course, T cell differentiation is one major scenario. We have to cover it as soon as possible.”
The hint for another potential topic in a new OP by gpuccio makes me feel like a Pavlov’s drooling dog after hearing the sound of the coming food.
I see y’all are having a party here.
Did you invite your opponent George Castillo to your party too?
🙂
Paolo,
the invitation is open to all who take evidence-based science seriously. Some folks may not qualify, though.
Peter @245:
You missed listing 242 as gem too.
Paolo,
That person doesn’t have much to celebrate here. 🙂
To all:
This not too recent (2014) paper gives us a clear definition of “pioneering TF”:
Pioneer transcription factors in cell reprogramming
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4265672/
So, the idea is that pioneering TFs are capable of engaging enhancers even in an inactive state, when the target DNA sequence is still “masked” by the nucleosome structure.
However, higher order chromatin states may still be inaccessible even to pioneering TFs.
This adds a new possible layer of regulation (in case we needed more of them ! 🙂 ).
This very recent paper:
Cryo-EM structure of the nucleosome containing the ALB1 enhancer DNA sequence
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5881032/
deals with the pioneering task of understanding how pioneering TFs work at the structural level.
The pioneering TF here is FoxA, the enhancer ALB1, and the scenario is liver precursor cells, and liver differentiation in embryos. Again, a major developmental pathway.
From the final conclusions:
IOWs, there may be local conditions, involving specific histone-DNA interactions, that make the nucleosome DNA “a little more accessible”, even when it is in a nominally inaccessible state.
Again, this adds a new possible layer of regulation (in case we needed more of them ! 🙂 ).
To all:
A very recent review about the role of enhancers:
Enhancer Logic and Mechanics in Development and Disease.
https://www.ncbi.nlm.nih.gov/pubmed/29759817
OK, emphasis is mine: but the rest was already there. 🙂
#252
Well that just screams “Randomness!” doesn’t it, GP.
#251:
“This adds a new possible layer of regulation (in case we needed more of them !)”
“Again, this adds a new possible layer of regulation (in case we needed more of them !)”
That’s the most concise bottom-line summary I’ve read in a long time. In this case with a refreshing hint of humor.
Thanks.
UB @ 252:
Yeah, right. Unguided randomness, to be more precise.
🙂
Peter,
First, UB’s comment is @ 253 referring to gpuccio’s comment @ 252.
Second, your statement is a tautology. Is there a guided randomness?
UB at #253:
It certainly screams! 🙂
It seems a little like the telephone game at parties. The original whispered message is: “Design!”. But, as the game goes on, the final words seem to constantly become: “Random variation + Natural selection, of course!”.
So, everybody at the party is happy.
But, as the whispers become loud screams, maybe someone is going to wonder what has been happening! 🙂
To all:
Now, another important question is:
How are all these complex structures really organized in the nucleus?
This is pioneering work. Of course, we know much from techniques like Hi-C seq and similar, and we know about TADs, but the detailed topology in the nucleus? That is much more difficult.
Let’s see. The whole human genome is linearly about 2-3 metres long. But it is packed and arranged in a space that is about 6 micrometres in diameter. That’s so amazing that we often forget it.
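Just as a back-of-envelope check of that packing, using the rough figures quoted above (not precise measurements):

```python
# Rough linear compaction factor of the human genome in the nucleus,
# using the approximate figures from the comment above.
genome_length_m = 2.0        # ~2 metres of linear DNA per nucleus
nucleus_diameter_m = 6e-6    # ~6 micrometres across

linear_ratio = genome_length_m / nucleus_diameter_m
print(f"linear compaction factor: ~{linear_ratio:,.0f}x")
# prints: linear compaction factor: ~333,333x
```

A length ratio of roughly 300,000 to 1, and yet the packing is not a tangle: it must remain organized enough for enhancers, promoters and nuclear bodies to find each other.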
But is the nucleus just an empty container for the genome?
Not at all. It has definite structure, and a very complex one.
Just have a quick look at the “cell nucleus” page on Wikipedia (is George Castillo around?). In particular, section 2.5: “Other subnuclear bodies”. We will come back to that.
So, new techniques are being developed to study nuclear spatial organization in more detail.
Here is one:
Mapping 3D genome organization relative to nuclear compartments using TSA-Seq as a cytological ruler
http://jcb.rupress.org/content.....07108.long
And here is another one:
Higher-Order Inter-chromosomal Hubs Shape 3D Genome Organization in the Nucleus
https://www.cell.com/cell/fulltext/S0092-8674(18)30636-6?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0092867418306366%3Fshowall%3Dtrue
The two papers use two completely different techniques (named TSA-seq and SPRITE, respectively) to explore nuclear organization of chromosomes in relation to nuclear bodies, but they reach remarkably similar conclusions.
In particular, they both agree on the importance of the nucleolus, and even more of nuclear speckles, for genome organization and transcription activity.
Now, everybody knows, more or less, what the nucleolus is.
But what are nuclear speckles?
Here is a paper of 2011 that gives some answers:
Nuclear Speckles
http://cshperspectives.cshlp.o.....00646.full
And here is a more recent update:
Nuclear speckles: molecular organization, biological function and role in disease
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5737799/
In brief, these wonderfully complex nuclear structures, that contain almost no DNA, seem to be favourite sites for two old friends: the spliceosome and ubiquitination procedures! 🙂
Now, if you look at the results section in the second paper quoted in this comment (the one using SPRITE technology), you can read:
And so on…
So, the plot definitely thickens! 🙂
Believe me, I am not making all this up!
To all:
Let’s go deeper into nuclear topology.
This recent paper is about interaction between different chromosomes, and their newly discovered roles in transcription regulation. With a touch of romanticism:
Interchromosomal interactions: A genomic love story of kissing chromosomes
http://jcb.rupress.org/content......201806052
The paper is open access, and it well deserves to be read.
Here are just the subtitles of different sections:
Principles of chromosomal structure and nuclear organization
Kissing chromosomes: NHCCs
NHCCs affect distinct transcriptional programs of biological pathways
NHCCs and nuclear bodies
Interchromosomal contacts between homologous chromosomes (transvection)
Toward identifying NHCCs with molecular techniques
Location matters for NHCCs in health and in disease
LncRNAs are involved in the 3D organization of NHCCs
Watching kissing chromosomes in real time: live-cell imaging of NHCCs
Perspective
Nuclear speckles are also mentioned. Specially interesting is the information about the formation of two very peculiar nuclear bodies:
a) The nucleolus:
” In human nuclei, about 300 ribosomal genes located on five different acrocentric chromosomes (six in mouse) come into physical proximity to build the ribosomal preassembly in the nucleus (Fig. 1 B; Németh et al., 2010; Pliss et al., 2015; McStay, 2016). This spatial formation of the nucleolus is a conserved phenomenon and validates that nonhomologous chromosomes can intermingle in a nonrandom manner in all nuclei.”
b) The olfactosome:
“A structure equally as fascinating is the OR gene cluster, in which individual NHCCs allow the expression of single ORs in each cell to create a diverse repertoire of OR expression at the tissue level. At any given time, only a few of the about 1,400 OR genes located on 18 different chromosomes converge in the same interchromosomal space (Horta et al., 2018). The regulation of OR genes is orchestrated by binding of Ldb1, Lhx2, and Ebf transcription factors to highly similar transcription factor motifs of multiple enhancers on different chromosomes, thereby leading to nondeterministic mono-allelic OR gene expression (Lomvardas et al., 2006; Markenscoff-Papadimitriou et al., 2014; Monahan and Lomvardas, 2015; Monahan et al., 2017, 2018). Remarkably, the monogenic and mono-allelic gene expression of OR genes is explained by the spatial clustering of inactive genes to the same heterochromatic foci in the olfactosome (Fig. 1, C and D; Clowney et al., 2012). Recent in situ Hi-C experiments of FACS-sorted, differentiated olfactory sensory neurons determined that, at very large scales (i.e., 500-kb resolution), NHCCs between OR genes are highly specific and frequent, and that they consist of multiple different chromosomes to regulate selectively and specifically the transcription of each individual OR gene (Horta et al., 2018).”
Wow!
How does gpuccio get all these interesting papers so easily?
“newly discovered roles” ???
“four-dimensional organization” ???
huh?
DOI: 10.1083/jcb.201806052 | Published September 4, 2018
Wow!
This is as fresh from the oven as one can get.
jawa at #262:
Four dimensional: including time.
Human Genome’s Spirals, Loops and Globules Come into 4-D View
https://www.scientificamerican.com/article/human-genome-s-spirals-loops-and-globules-come-into-4-d-view-video/
Funny video here! 🙂
jawa,
These papers are being published at a fast rate lately. Just look for them and you’ll find them. Obviously gpuccio knows what he’s looking for.
As gpuccio has said in this discussion, there are many things still unknown or poorly understood at best. Therefore there’s plenty of room to still find newly discovered roles.
4D organization perhaps refers to spatiotemporal synchronization or arrangement. “in the right place at the right time”
Indeed this last paper gpuccio just posted is another gem in the growing collection within this thread.
To all:
A collateral discussion strictly related to the issues debated here has started at this thread:
https://uncommondescent.com/genetics/bee-genome-changes-dramatically-through-life/#comment-665055
It could be interesting to consider the two discussions as part of a whole. 🙂
To all:
There is another aspect we have yet to consider.
Stem cells are known to exist in a delicate balance between two possible cell fates:
a) Cell renewal
b) Differentiation
The ability to keep a balance between these two types of fate is the foundation to maintain stem cell compartments in the organism, while at the same time supporting the different differentiation cascades that derive from those compartments.
Now, for each cell in the stem cell compartment the choice between the two fates, at each cell division, seems to be a remarkable balance of stochastic factors and control components. IOWs, the individual decision cannot easily be anticipated, but the behaviour of the whole compartment is strictly controlled.
How is that obtained? I don’t think it is really understood. But, of course, different epigenetic transcription regulations contribute to that balance.
That said, here is a very recent paper that connects, in that scenario, two important factors.
One of them has been a major part of the discussions in this thread: histone modifications.
But the second factor is more of a novelty in this context: alternative splicing.
Here is the paper:
Alternative splicing links histone modifications to stem cell fate decision.
https://genomebiology.biomedcentral.com/articles/10.1186/s13059-018-1512-3
Well, I have blasted the two isoforms shown in Figure 5c of the paper, PBX 1a (430 AA) and PBX 1b (347 AA), that according to the paper seem to have such differentiated roles in stem cell fate.
The N terminal part, the first 333 AAs, is completely identical.
The C terminal part (97 AAs for PBX 1a, 14 AAs for PBX 1b) is completely different.
That seems to make the whole difference in role.
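For illustration only, here is a tiny sketch of the prefix comparison described above. The sequences are placeholders with the stated lengths (430 and 347 AAs, 333 identical N-terminal residues), not the real PBX1 sequences:

```python
def shared_prefix_len(a, b):
    """Length of the identical N-terminal stretch of two sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

# Placeholder sequences standing in for PBX1a (430 aa) and PBX1b (347 aa):
# an identical first 333 residues, then completely different C-terminal tails.
common = "M" * 333            # not the real sequence
pbx1a = common + "A" * 97     # 430 aa total
pbx1b = common + "G" * 14     # 347 aa total

print(shared_prefix_len(pbx1a, pbx1b))   # 333
print(len(pbx1a), len(pbx1b))            # 430 347
```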
Please excuse my absence, I completely forgot about our little conversation.
Thank you for the plot, the correlation is higher than I expected but I’m not sure why you can’t use more of the 20 or so proteins in Durston’s paper.
Anyways, have you plotted bits vs protein length as in Durston’s figure 2a for the human proteome using your method? You should do that with comparisons to each of the organisms from your evolutionary history plots.
Also, for the evolutionary history plots, you should show shading or error bars that represent variation in the data rather than just single points. It would make things much more believable to see that.
Another interesting plot would be the evolutionary history plot for the human proteome, but grouped by size. Have you made any of these very basic plots to look for any biases in your method?
George Castillo:
Here are your answers:
1) The protein families listed in Durston’s paper are 35. My method cannot be applied to many of them, because they are not proteins present in the human proteome. I use human proteins as probes to measure functional complexity, therefore I can only do that with proteins present in the human proteome. Moreover, my database is restricted to human verified proteins in Uniprot, that is, the about 20000 reliable reference sequences identified in humans. So, proteins like Vif (Virion infectivity factor), Viral helicase 1, Bac luciferase, SecY, DctM and many others in the list have no clear homologues in the human proteome.
2) The relationship with length is very strong in my data as in Durston’s, as expected. I am adding two scatterplots for deuterostomia – not vertebrates and for cartilaginous fish, the two groups that are important for the computation of the jump in vertebrates, at the end of the OP. Consider that my values are given for about 20000 proteins.
3) My evolutionary history plots represent the individual value for individual proteins. So, I do not understand what error bars you are referring to. If you want the distribution of the reference values for organism groups, I can give you the standard deviation values, even if I don’t understand what is their utility in this context. However, here they are:
Cnidaria: mean 0.5432765 baa; sd 0.4024939 baa
Cephalopoda: mean 0.5302676 baa; sd 0.3949502 baa
Deuterostomia (not vertebrates): mean 0.6705278 baa; sd 0.4280898 baa
Cartilaginous fish: mean 0.9491001 baa; sd 0.5180335 baa
Bony fish: mean 1.06373 baa; sd 0.4992876 baa
Amphibians: mean 1.106878 baa; sd 0.509575 baa
Crocodiles: mean 1.2175 baa; sd 0.5166932 baa
Marsupialia: mean 1.354032 baa; sd 0.5016414 baa
Afrotheria: mean 1.628872 baa; sd 0.43412 baa
However, as explained, these are just standard deviations of the values for the whole human proteome as compared to each group of organisms. In no way are they “error bars”. Moreover, as you can certainly understand from the values of the standard deviations, the distributions here are certainly not normal.
When comparing values for different groups of proteins, indeed, I always use non parametric methods, such as the Wilcoxon test for independent groups. For example, I have identified a group of 144 human proteins which are involved, according to GO functions, in neuronal differentiation. You may wonder if the jump from prevertebrate to vertebrate human conserved information is significantly higher in this group, as compared to all other human proteins.
And it is. The median value in the neuronal differentiation group is 0.4534413 baa, as compared to the median value of 0.2629808 baa in the rest of human proteins. The difference is highly significant. p value, as computed by the Wilcoxon test, is 1.202e-12. I am adding the boxplot for that comparison at the end of the OP.
This is just an example of how a correct analysis can be done using my values as applied to different protein groups.
4) Not sure what you mean. I have already given the plots by size at point 2.
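As an illustration of that kind of analysis, here is a minimal sketch of a Wilcoxon test for independent groups (the Mann-Whitney U statistic, with a plain normal approximation for the p-value). The values below are synthetic stand-ins generated to mimic the group sizes and medians mentioned above, not the actual dataset:

```python
import math
import random

def mann_whitney_u(x, y):
    """Rank-sum U statistic for two independent samples, with a
    two-sided normal-approximation p-value (no tie correction --
    adequate for continuous synthetic data)."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = {}
    for rank, (v, i) in enumerate(combined, start=1):
        ranks[i] = rank
    r_x = sum(ranks[i] for i in range(len(x)))        # rank sum of group x
    u = r_x - len(x) * (len(x) + 1) / 2               # U statistic for x
    mu = len(x) * len(y) / 2
    sigma = math.sqrt(len(x) * len(y) * (len(x) + len(y) + 1) / 12)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))              # two-sided p-value
    return u, p

# Hypothetical stand-in data: 144 "neuronal differentiation" proteins
# vs. a larger background group, with different central values.
random.seed(1)
neuronal = [random.gauss(0.45, 0.15) for _ in range(144)]
others   = [random.gauss(0.26, 0.15) for _ in range(2000)]

u, p = mann_whitney_u(neuronal, others)
print(f"U = {u:.0f}, two-sided p = {p:.3g}")
```

With group differences of that size the test gives a vanishingly small p-value, which is the kind of result reported above; in practice one would use a library routine (e.g. a standard Mann-Whitney implementation) that also handles ties and exact small-sample p-values.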
To all:
Here is something more about pioneer TFs:
Pioneer transcription factors shape the epigenetic landscape
https://www.ncbi.nlm.nih.gov/pubmed/29507097
Emphasis mine.
Again, we see here highest specificity, probably through different functional mechanisms.
And even heterochromatin, once believed to be functionally inert, seems to come in many different “flavors”. 🙂
gpuccio (266):
Good suggestion. Thanks.
gpuccio (267):
“The N terminal part, the first 333 AAs, is completely identical.
The C terminal part (97 AAs for PBX 1a, 14 AAs for PBX 1b) is completely different.
That seems to make the whole difference”
Very convincing evidence. Thanks.
gpuccio (267):
“The N terminal part, the first 333 AAs, is completely identical.
The C terminal part (97 AAs for PBX 1a, 14 AAs for PBX 1b) is completely different.
That seems to make the whole difference in role.”
How many bits of new information is in that difference ?
gpuccio (267):
“But the second factor is more of a novelty in this context: alternative splicing.”
Indeed the plot continues to thicken. 🙂
Interesting paper.
Glad to see George Castillo back in the discussion.
gpuccio (270):
Though you’ve explained your analysis methodology several times, it’s always refreshing to see it again.
Thanks
gpuccio (271):
Really fascinating topic on pioneer TFs associated with epigenetic mechanisms that lead to high specificity.
This screams conscious design, doesn’t it?
Thanks.
DNA Methylation and Regulatory Elements during Chicken Germline Stem Cell Differentiation
To all:
Of course we know that epigenetic regulation of transcription happens in time, but that specific aspect is probably often underemphasized. While many studies have, understandably, focused on transcriptional landscapes during differentiation, little is known of how epigenetic regulation of transcription changes in differentiated cells in response to outer stimuli.
IOWs, what I have called, in Fig. 2 in the OP, “Dynamic adaptation of cell in stable state”.
But new facts are accumulating about that aspect too.
The following paper is about the transcriptional response at the level of histone modifications in dendritic cell in the mouse after lipopolysaccharide (LPS) stimulation.
Dendritic cells are important cells in the immune response, and LPS is a standard stimulator of the immune system.
Here is the paper:
Waves of chromatin modifications in mouse dendritic cells in response to LPS stimulation
https://genomebiology.biomedcentral.com/articles/10.1186/s13059-018-1524-z
Waves of chromatin modifications? Sounds interesting, doesn’t it?
To all:
Speaking of cross talk. Both transcription regulation and ubiquitination have been the subject of my OPs. So, it’s beautiful to see them together:
TRIM59 regulates autophagy through modulating both the transcription and the ubiquitination of BECN1
https://www.tandfonline.com/doi/abs/10.1080/15548627.2018.1491493?journalCode=kaup20
So, the same molecule does both things: it regulates transcription and it regulates ubiquitination.
The function for TRIM59 at Uniprot is as follows:
“May serve as a multifunctional regulator for innate immune signaling pathways.”
Consistently, TRIM59 (as seen in humans) is almost absent in pre-vertebrates (0.258 baa) and has a definite, important jump in cartilaginous fish (+0.695 baa).
gpuccio @180:
“Waves of chromatin modifications? Sounds interesting, doesn’t it?”
Intriguing. The plot continues to thicken. For how long? 🙂
gpuccio @181:
More plot thickening.
Those multitasking proteins humble me.
I hardly can do one task at a time.
🙂
All these interesting papers gpuccio has been pulling out of his magic hat here lately add more encouraging news for the neo-Darwinian folks, for they seem to show the amazing capacity of RV+NS to produce all that marvelous machinery. 🙂
Excellent paper references guys 🙂
Gpuccio @281,
the TRIM59 dual-purpose usage increases the chances of degradation and disease if it is wrongly mutated, I assume?
I know there’s much to know regarding all the sequences, but hmmmm, it seems delicate from a regulatory view and must therefore be tightly controlled to reduce errors, to protect it.
Am I on the correct track? The wrong mutation might have multiple dire consequences for regulatory requirements.
The other question is, how many “multi-function” regulators exist?
And how does a blind, unguided series of chances create a multi-functional protein that creates two transcript variations by alternative splicing?
The coordination and timing must be precise, correct? If the alternative splice is wrong, if any of the domains incur mutations, then everything could be shut down, rendered irrelevant, and marked for degradation by the UPS (Ubiquitin-Proteasome System) or other degrading mechanisms.
I’m uncertain if combinatorial factors increase for such a sharing of specificity and therefore limit unguided mutations as a necessary protection against change.
DATCG,
Intriguing issues indeed.
To all:
A special type of TFs are Nuclear receptors (NR). These are special TFs that are triggered by hormones, or hormone-like molecules, that arrive to the nucleus. The interaction with the specific NR activates the TF activity, and therefore the related cascade of transcription modifications. NRs include the receptors for thyroid hormones, steroid hormones and many other ligands.
NRs are interesting because they have a rather constant structure, made essentially of three domains (and a few hinge regions):
1) A ligand binding domain, usually C terminal.
2) A DNA binding domain, like all TFs.
3) An intrinsically disordered N terminal domain (NTD)
The intrinsically disordered region in the NTD is, of course, specially interesting. There is much evidence of its important functional role in many NRs.
The following paper is a good example:
Intrinsically disordered N-terminal domain of the Helicoverpa armigera Ultraspiracle stabilizes the dimeric form via a scorpion-like structure
https://www.sciencedirect.com/science/article/pii/S0960076018301924?via%3Dihub
I am sure that intrinsically disordered regions will be a source of great surprise, as research goes on.
I am not sure of what “an unexpected scorpion-like structure” really is or looks like, but it definitely sounds intriguing! 🙂
To all:
One of the points that I made from the very beginning of this OP is that the working information in any cell, including the zygote, is always the sum total of genetic + specific epigenetic information.
The zygote inherits most of its epigenetic information from the oocyte. IOWs, from the mother, another organism.
Here is an interesting paper about the role that non coding RNAs, and in particular intron-derived non coding RNAs, derived from the maternal oocyte, can have in embryo development in Drosophila:
Generation of Drosophila sisRNAs by Independent Transcription from Cognate Introns
https://www.sciencedirect.com/science/article/pii/S2589004218300658?via%3Dihub
gpuccio (288):
There you go again with another interesting recent paper that seems pulled out of a magician’s hat. 🙂
Thanks.
“The identification of abundant polyadenylated maternal sisRNAs in Drosophila suggests that this paradigm may be more widely conserved than previously thought.”
gpuccio (287):
“I am sure that intrinsically disordered regions will be a source of great surprise, as research goes on.”
Well, the paper you cite is itself a source of jaw-dropping, eyebrow-raising information:
“As a result of alternative splicing, different isoforms of NRs are generated, which are often characterized by different spatial and temporal distributions within various cells.”
Indeed the plot continues to thicken at a fast pace.
Thanks
Regarding the plot thickening, how long could it take?
shouldn’t there be a time when the plot thickening process should start to slow down until it eventually stops? Are we approaching that point yet?
OLV:
“shouldn’t there be a time when the plot thickening process should start to slow down until it eventually stops?”
Maybe. Maybe not.
“Are we approaching that point yet?”
No.
To all:
About OLV’s comment at #291, I want to point (again) to an old OP by our friend GilDodgen, a brief and clear argument that expresses a very deep truth:
ID and the Trajectory of Observational Resolution
https://uncommondescent.com/intelligent-design/id-and-the-trajectory-of-observational-resolution/
If you have time, read it. Definitely. 🙂
gpuccio,
That old OP you suggested is a real gem, which remains as valid (or more) as when it was prophetically written.
I read it all and wanted to thank you for calling my attention to it.
gpuccio @292:
Your answers don’t seem encouraging, except for those looking for long term job security in biology research.
PavelU,
What’s discouraging in gpuccio’s comment?
To all:
This open access paper, while not extremely recent (2015), is a very clear review about the role of enhancers.
The selection and function of cell type-specific enhancers
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4517609/
I quote a section that is very relevant to the discussions we had here about the role of pioneer TFs:
The idea that the function of pioneer TFs is based on collaborative interactions is very interesting, because we have seen that those TFs are the main “switches” that select a specific line of differentiation.
That means that even pioneer TFs are not simple switches: they are, themselves, a collaborative network, and therefore an important level of multiple regulation.
gpuccio (292,293):
Thanks for answering my questions.
gpuccio (297):
Very interesting paper. Thanks.
To all:
Transcription factors and nucleosomes seem to be two major actors in the drama of transcription regulation, often competing for control of precious DNA motifs.
The following recent paper gives us a glimpse of the complex dance between these two complex components:
The interaction landscape between transcription factors and the nucleosome.
https://www.ncbi.nlm.nih.gov/pubmed/30250250
jawa @296:
Did you understand the scientific implication of gpuccio’s short answers to OLV’s questions?
To all:
For fans (like me) of transposons and of enhancers:
Systematic perturbation of retroviral LTRs reveals widespread long-range effects on human gene regulation
https://elifesciences.org/articles/35989
Emphasis mine.
And here is an article that comments on the previous one:
Gene Expression: Transposons take remote control
https://elifesciences.org/articles/40921
This is very interesting indeed! 🙂
gpuccio (302):
“For fans (like me) of transposons and of enhancers”
You have effectively persuaded me to join this fans’ club too. 🙂
The two papers you cited are very interesting.
Here are a few quotes from the first paper in 302:
emphasis added
I would like to read your comments on the highlighted text, at your convenience. Specially the terms ‘cooption’, ‘repurposing’, ‘species-specific attributes’ and ‘recent burst’ that are used in some of the quoted statements. Thanks.
BTW, the second paper in the same comment makes the plot even thicker. 🙂
gpuccio (300):
The TF/nucleosome abstract gives a preview to what should be a very interesting article. Too bad it’s paywall.
OLV at #303:
It’s very simple: such functional, complex and specific results of transposon activity in a very short evolutionary time are obviously explained only by design, IOWs guided transposon activity.
As the true explanation cannot be accepted, our neo-darwinist friends must believe that unguided, random transposon activity can generate specific regulation networks in a few million years. Hence the various “cooptions”, “repurposings”, and similar.
There is only one word for that: fairy tales. 🙂
gpuccio,
I totally agree with what you wrote @305. But it implies what we should humbly admit: that our neo-Darwinist friends have a much more prolific imagination than we could ever have. They have proven it beyond any doubt.
🙂
To all:
As said, higher order chromatin compartments have usually been considered rather stable.
But it seems that this is not really the case.
The following recent paper shows how even higher order levels of organization, like the A and B compartments and Topologically Associating Domains (TADs), are extremely dynamic and change a lot during specific pathways of differentiation:
Genome-Wide Chromatin Structure Changes During Adipogenesis and Myogenesis
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6158721/
gpuccio (305):
Thanks for providing insightful comments on various “novel” terms encountered in biology research publications.
gpuccio (307):
Another very interesting recent paper. Thanks for citing it here and commenting on it too!
The plot continues to thicken:
Genome-Wide Chromatin Structure Changes During Adipogenesis and Myogenesis
Emphasis added.
No wonder that the anti-ID folks are so conspicuously absent from this discussion.
Dynamic regulation of transcription factors by nucleosome remodeling
Cis-regulatory determinants of MyoD function
Helicase promotes replication re-initiation from an RNA transcript
Emphasis added.
replisome? huh?
how many of these “*somes” are there in biology?
Nascent chromatin occupancy profiling reveals locus and factor specific chromatin maturation dynamics behind the DNA replication fork
Could it be that this promiscuity allows the powerful RV+NS to create novel complex functional specified information through the years?
Any reasonable objection to this possibility?
The Chd1 chromatin remodeler can sense both entry and exit sides of the nucleosome
The ATPase motor of the Chd1 chromatin remodeler stimulates DNA unwrapping from the nucleosome
Structure of the chromatin remodelling enzyme Chd1 bound to a ubiquitinylated nucleosome
The Sequence of Nucleosomal DNA Modulates Sliding by the Chd1 Chromatin Remodeler
The Latest Twists in Chromatin Remodeling
Emphasis added.
perhaps gpuccio has clearly answered this question before, but I haven’t quite understood it yet:
We see a number of proteins involved in the transcription regulation. However, aren’t they synthesized by the same machinery they form part of? Perhaps they aren’t. Maybe they come from a simpler process that eventually evolved into this more sophisticated stuff we see now? What prompted such a change? How did that happen? Can somebody explain this? Am I missing something in the picture? Thanks.
jawa,
aren’t those questions a little off-topic here?
jawa- It is all a catch-22 for the anti-IDists. It is an unbreakable loop. But our opponents still feel confident they can find a way to break the loop and find the origin of the cycle.
ET,
there is abundant literature explaining jawa’s questions.
You should look for it yourself. Nobody has time to do it for you.
Basically the RNA world, which has been proven beyond any doubt, provides the main answers to your question.
You may want to start learning from this detailed explanation. It may get too technical for your level at some point, though.
Hi PavelU- The RNA world is imaginary and most likely >99% chance of being pure BS.
There isn’t any evidence for the RNA world, just a dire need
PavelU, are you serious?
didn’t you present the same boring argument here ?
Did you understand what Dr. Eugene S commented here about your irrelevant contribution?
ET,
did you watch the video? Did you understand the detailed explanation? Isn’t that sufficient to explain it all? Do you need more? What else?
jawa,
yes, I presented that argument there. It may be boring to you because you don’t understand it. You should start from Biology 101 before you engage in a discussion here.
I did not know Eugene S is a doctor. But anyway his comment was not accurate, because he claimed that the argument I presented is old, but that’s incorrect: the video I referenced was published very recently.
Yes, I watched as much of that crap as I could stand. It was lacking science and evidence. The guy thinks that RNA self-replicates- it doesn’t.
Two words refute that video and the RNA world- Spiegelman’s Monster
PavelU,
Are you an unconscious robot?
How would you react to this simple statement:
Wake up and smell the coffee!
🙂
PavelU at 318: “did you watch the video?”
Do you really consider hand waving evidence? or are you just mocking the video? It’s hard to tell.
OT to someone who can post (e.g.: News) : Please add some post about the Nobel in Chemistry about Evolution: e.g.:
https://www.nytimes.com/2018/10/03/science/chemistry-nobel-prize.html
Thanks
PeterA,
You’re not funny. Don’t quit your day job yet. SNL won’t hire you.
I prefer the version about smelling flowers, rather than burnt coffee beans, which add more pollution to the environment, thus increasing the greenhouse effect that is causing the man-made global warming that is raising the sea level and will soon flood all the coastal populations.
es58@323. I agree. And being Canadian, and female, News should also post something about the Nobel prize for physics, shared by a female researcher from Waterloo. She is only the third woman to win the Nobel for physics. Certainly something to celebrate.
es58,
By “hand waving” do you mean what gpuccio does when he hides his convoluted arguments behind a mysterious concept of a “conscious” agent as the only possible source of what he calls “complex functionality” or something like that?
Perhaps he still relies on David Chalmers’ outdated concept of “the hard problem of consciousness”, which lately has been shown to be not so hard after all?
PavelU,
I think you went too far this time. You wrote so much nonsense in one single comment that it’s a new record.
gpuccio’s arguments are far from being convoluted. Many students would dream to have a professor who explains difficult concepts with such clarity as gpuccio does here.
Definitely you should wake up and smell the flowers in the garden, if you prefer that to smelling freshly brewed coffee in the morning.
PavelU @326:
Please, can you post links to support your claims? Thanks.
es58,
Thanks for posting that information about the Nobel Prize.
Wow! What has happened here?
The discussion thread has gone completely off topic!
Let’s take it back to serious stuff.
Major Determinants of Nucleosome Positioning
Interdomain Communication of the Chd1 Chromatin Remodeler across the DNA Gyres of the Nucleosome
Structural rearrangements of the histone octamer translocate DNA
A twist defect mechanism for ATP-dependent translocation of nucleosomal DNA
Cryo-EM of nucleosome core particle interactions in trans
Gee… I have some catching up to do! Great stuff Gpuccio, once again! 🙂
This OP and discussion remains a scientific treasure trove.
Missing gpuccio’s excellent contributions.
https://ijponline.biomedcentral.com/articles/10.1186/s13052-020-00838-z