Uncommon Descent Serving The Intelligent Design Community

The Ubiquitin System: Functional Complexity and Semiosis joined together.


This is a very complex subject, so as usual I will try to stick to the essentials to make things as clear as possible, while details can be dealt with in the discussion.

It is difficult to define exactly the role of the Ubiquitin System. It is usually considered mainly a pathway which regulates protein degradation, but in reality its functions are much wider than that.

In essence, the US is a complex biological system which targets many different types of proteins for different final fates.

The most common “fate” is degradation of the protein. In that sense, the Ubiquitin System works together with another extremely complex cellular system, the proteasome. In brief, the Ubiquitin System “marks” proteins for degradation, and the proteasome degrades them.

It seems simple. It is not.

Ubiquitination is essentially one of many Post-Translational modifications (PTMs): modifications of proteins after their synthesis by the ribosome (translation). But, while most PTMs use simpler biochemical groups that are usually added to the target protein (for example, acetylation), in ubiquitination a whole protein (ubiquitin) is used as a modifier of the target protein.

 

The tool: Ubiquitin

Ubiquitin is a small protein (76 AAs). Its name derives from the simple fact that it is found in most tissues of eukaryotic organisms.

Here is its amino acid sequence:

MQIFVKTLTGKTITLEVEPSDTIENVKAKIQDKEGIPPD

QQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG

Essentially, it has two important properties:

  1. As said, it is ubiquitous in eukaryotes
  2. It is also extremely conserved in eukaryotes

In mammals, ubiquitin is not encoded by a single gene. It is encoded by 4 different genes: UBB, a polyubiquitin (3 Ub sequences); UBC, a polyubiquitin (9 Ub sequences); UBA52, a mixed gene (1 Ub sequence + the ribosomal protein L40); and RPS27A, again a mixed gene (1 Ub sequence + the ribosomal protein S27A). However, the basic ubiquitin sequence is always the same in all those genes.

Its conservation is among the highest in eukaryotes. The human sequence shows, in single-celled eukaryotes:

Naegleria: 96% conservation;  Alveolata: 100% conservation;  Cellular slime molds: 99% conservation; Green algae: 100% conservation; Fungi: best hit 100% conservation (96% in yeast).
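The conservation percentages above are simply percent identity between aligned sequences. Here is a minimal sketch of that computation; the variant sequence below is a hypothetical one with 3 arbitrary substitutions, built only to mimic the ~96% identity reported for yeast (it is not the real yeast sequence).

```python
# A minimal sketch of how conservation percentages can be computed:
# percent identity between two aligned sequences of equal length.

HUMAN_UB = (
    "MQIFVKTLTGKTITLEVEPSDTIENVKAKIQDKEGIPPD"
    "QQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
)

def percent_identity(a: str, b: str) -> float:
    """Percent of positions identical between two aligned sequences."""
    assert len(a) == len(b), "sequences must be aligned and of equal length"
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

# A hypothetical variant differing at 3 of the 76 positions
# (positions chosen arbitrarily; NOT the actual yeast sequence).
variant = list(HUMAN_UB)
for pos in (18, 23, 27):          # 0-based positions
    variant[pos] = "S"
variant = "".join(variant)

print(round(percent_identity(HUMAN_UB, variant), 1))  # 96.1
```

Real comparisons would of course use an alignment tool such as BLAST, but the underlying identity measure is this simple.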

Ubiquitin and ubiquitin-like proteins (see later) are characterized by a special fold, called the β-grasp fold.

 

The semiosis: the ubiquitin code

The title of this OP makes explicit reference to semiosis. Let’s try to see why.

The simplest way to say it is: ubiquitin is a tag. The addition of ubiquitin to a substrate protein marks that protein for specific fates, the most common being degradation by the proteasome.

But not only that. See, for example, the following review:

Nonproteolytic Functions of Ubiquitin in Cell Signaling

Abstract:

The small protein ubiquitin is a central regulator of a cell’s life and death. Ubiquitin is best known for targeting protein destruction by the 26S proteasome. In the past few years, however, nonproteolytic functions of ubiquitin have been uncovered at a rapid pace. These functions include membrane trafficking, protein kinase activation, DNA repair, and chromatin dynamics. A common mechanism underlying these functions is that ubiquitin, or polyubiquitin chains, serves as a signal to recruit proteins harboring ubiquitin-binding domains, thereby bringing together ubiquitinated proteins and ubiquitin receptors to execute specific biological functions. Recent advances in understanding ubiquitination in protein kinase activation and DNA repair are discussed to illustrate the nonproteolytic functions of ubiquitin in cell signaling.

Another important aspect is that ubiquitin is not one tag, but rather a collection of different tags. IOWs, a tag-based code.

See, for example, here:

The Ubiquitin Code in the Ubiquitin-Proteasome System and Autophagy

(Paywall).

Abstract:

The conjugation of the 76 amino acid protein ubiquitin to other proteins can alter the metabolic stability or non-proteolytic functions of the substrate. Once attached to a substrate (monoubiquitination), ubiquitin can itself be ubiquitinated on any of its seven lysine (Lys) residues or its N-terminal methionine (Met1). A single ubiquitin polymer may contain mixed linkages and/or two or more branches. In addition, ubiquitin can be conjugated with ubiquitin-like modifiers such as SUMO or small molecules such as phosphate. The diverse ways to assemble ubiquitin chains provide countless means to modulate biological processes. We overview here the complexity of the ubiquitin code, with an emphasis on the emerging role of linkage-specific degradation signals (degrons) in the ubiquitin-proteasome system (UPS) and the autophagy-lysosome system (hereafter autophagy).

A good review of the basics of the ubiquitin code can be found here:

The Ubiquitin Code 

(Paywall)

It is particularly relevant, from an ID point of view, to quote the starting paragraph of that paper:

When in 1532 Spanish conquistadores set foot on the Inca Empire, they found a highly organized society that did not utilize a system of writing. Instead, the Incas recorded tax payments or mythology with quipus, devices in which pieces of thread were connected through specific knots. Although the quipus have not been fully deciphered, it is thought that the knots between threads encode most of the quipus’ content. Intriguingly, cells use a regulatory mechanism—ubiquitylation—that is reminiscent of quipus: During this reaction, proteins are modified with polymeric chains in which the linkage between ubiquitin molecules encodes information about the substrate’s fate in the cell.

Now, ubiquitin is usually linked to the target protein in chains. The first ubiquitin molecule is covalently bound through its C-terminal carboxylate group to a particular lysine, cysteine, serine, threonine or N-terminus of the target protein.

Then, additional ubiquitins are added to form a chain, and the C-terminus of the new ubiquitin is linked to one of seven lysine residues or the first methionine residue on the previously added ubiquitin.

IOWs, each ubiquitin molecule has seven lysine residues:

K6, K11, K27, K29, K33, K48, K63

And one N terminal methionine residue:

M1

And a new ubiquitin molecule can be added at each of those 8 sites in the previous ubiquitin molecule. IOWs, those 8 sites in the molecule are configurable switches that can be used to build ubiquitin chains.

Here are the 8 sites, in red, in the ubiquitin molecule:

MQIFVKTLTGKTITLEVEPSDTIENVKAKIQDKEGIPPD

QQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG
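As a quick check, the seven lysines and the N-terminal methionine can be located programmatically in the 76-AA sequence given above:

```python
# The 8 acceptor sites can be read directly off the sequence: the seven
# lysines (K) plus the N-terminal methionine (M1).

UBIQUITIN = (
    "MQIFVKTLTGKTITLEVEPSDTIENVKAKIQDKEGIPPD"
    "QQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
)

# 1-based positions of every lysine residue
lysines = [i + 1 for i, aa in enumerate(UBIQUITIN) if aa == "K"]
print(lysines)          # [6, 11, 27, 29, 33, 48, 63]

# The 8 possible linkage sites: M1 plus the seven lysines
sites = ["M1"] + [f"K{p}" for p in lysines]
print(sites)            # ['M1', 'K6', 'K11', 'K27', 'K29', 'K33', 'K48', 'K63']
print(len(UBIQUITIN))   # 76
```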

Fig 1 shows two ubiquitin molecules joined at K48.

Fig 1 A cartoon representation of a lysine 48-linked diubiquitin molecule. The two ubiquitin chains are shown as green cartoons with each chain labelled. The components of the linkage are indicated and shown as orange sticks. By Rogerdodd (Own work) [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

The simplest type of chain is homogeneous (IOWs, ubiquitins are linked always at the same site). But many types of mixed and branched chains can also be found.

Let’s start with the most common situation: a polyubiquitination of (at least) 4 ubiquitins, linearly linked at K48. This is the common signal for proteasome degradation.

By the way, the 26S proteasome is another molecular machine of incredible complexity, made of more than 30 different proteins. However, its structure and function are not the object of this OP, and therefore I will not deal with them here.

The ubiquitin code is not completely understood, at present, but a few aspects have been well elucidated. Table 1 sums up the most important and well known modes:

Code – Meaning

  • Polyubiquitination (4 or more) with links at K48 or K11: proteasomal degradation
  • Monoubiquitination (single or multiple): protein interactions, membrane trafficking, endocytosis
  • Polyubiquitination with links at K63: endocytic trafficking, inflammation, translation, DNA repair
  • Polyubiquitination with links at K63 (and other links): autophagic degradation of protein substrates
  • Polyubiquitination with links at K27, K29, K33: non-proteolytic processes
  • Rarer chain types (K6, K11): under investigation
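Just to make the "code" idea concrete, the known part of Table 1 can be caricatured as a lookup from chain description to cellular meaning. This is only a toy encoding: in the cell the decoding is performed by ubiquitin-binding proteins, not by a table.

```python
# A toy encoding of Table 1: mapping a (chain type, linkage) pair to its
# currently understood meaning. Purely illustrative.
from typing import Optional

UBIQUITIN_CODE = {
    ("poly", "K48"): "proteasomal degradation",
    ("poly", "K11"): "proteasomal degradation",
    ("mono", None):  "protein interactions, membrane trafficking, endocytosis",
    ("poly", "K63"): "endocytic trafficking, inflammation, translation, DNA repair",
    ("poly", "K27"): "non-proteolytic processes",
    ("poly", "K29"): "non-proteolytic processes",
    ("poly", "K33"): "non-proteolytic processes",
}

def decode(chain_type: str, linkage: Optional[str]) -> str:
    """Look up the fate associated with a chain; unknown chains are
    'under investigation', as in the last row of the table."""
    return UBIQUITIN_CODE.get((chain_type, linkage), "under investigation")

print(decode("poly", "K48"))  # proteasomal degradation
print(decode("poly", "K6"))   # under investigation
```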

 

However, this is only a very partial approach. A recent bioinformatics paper:

An Interaction Landscape of Ubiquitin Signaling

(Paywall)

has attempted for the first time a systematic approach to deciphering the whole code, using synthetic diubiquitins (all 8 possible variants) to identify the different interactors with those signals. With two different methodologies, the authors identified 111 and 53 selective interactors for linear polyUb chains, respectively; 46 of those interactors were identified by both methodologies.

The translation

But what “translates” the complex ubiquitin code, allowing ubiquitinated proteins to meet the right specific destiny? Again, we can refer to the diubiquitin paper quoted above.

How do cells decode this ubiquitin code into proper cellular responses? Recent studies have indicated that members of a protein family, ubiquitin-binding proteins (UBPs), mediate the recognition of ubiquitinated substrates. UBPs contain at least one of 20 ubiquitin-binding domains (UBDs) functioning as a signal adaptor to transmit the signal from ubiquitinated substrates to downstream effectors

But what are those “interactors” identified by the paper (at least 46 of them)? They are, indeed, complex proteins which recognize specific configurations of the “tag” (the ubiquitin chain), and link the tagged (ubiquitinated) protein to other effector proteins which implement its final fate, or anyway contribute in different forms to that final outcome.

 

The basic control of the procedure: the complexity of the ubiquitination process.

So, we have seen that ubiquitin chains work as tags, and that their coded signals are translated by specific interactors, so that the target protein may be linked to its final destiny, or contribute to the desired outcome. But we must still address one question: how is the ubiquitination of the different target proteins implemented? IOWs, what is the procedure that “writes” the specific codes associated with specific target proteins?

This is indeed the first step in the whole process. But it is also the most complex, and that’s why I have left it for the final part of the discussion.

Indeed, the ubiquitination process needs to realize the following aims:

  1. Identify the specific protein to be ubiquitinated
  2. Recognize the specific context in which that protein needs to be ubiquitinated
  3. Mark the target protein with the correct tag for the required fate or outcome

We have already seen that the ubiquitin system is involved in practically all different cellular paths and activities, and therefore we can expect that the implementation of the above functions must be a very complex thing.

And it is.

Now, we can certainly imagine that there are many different layers of regulation that may contribute to the general control of the procedure, specifically epigenetic levels, which are at present poorly understood. But there is one level that we can more easily explore and understand, and it is, as usual, the functional complexity of the proteins involved.

And, even at a first gross analysis, it is really easy to see that the functional complexity implied by this process is mind blowing.

Why? It is more than enough to consider the huge number of different proteins involved. Let’s see.

The ubiquitination process is well studied. It can be divided into three phases, each of which is implemented by a different kind of protein. The three steps, and the three kinds of proteins that implement them, take the name of E1, E2 and E3.

 

Fig. 2 Schematic diagram of the ubiquitylation system. Created by Roger B. Dodd: Rogerdodd at the English language Wikipedia [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons

 The E1 step of ubiquitination.

This is the first thing that happens, and it is also the simplest.

E1 is the process of activation of ubiquitin, and the E1 protein is called E1 ubiquitin-activating enzyme. To put it simply, this enzyme “activates” the ubiquitin molecule in an ATP-dependent process, preparing it for the following phases and attaching it to its active site cysteine residue. It is not really so simple, but for our purposes that can be enough.

This is a rather straightforward enzymatic reaction. In humans there are essentially two forms of E1 enzymes, UBA1 and UBA6, each of them about 1000 AAs long, and partially related at sequence level (42%).

 

The E2 step of ubiquitination.

The second step is ubiquitin conjugation. The activated ubiquitin is transferred from the E1 enzyme to the ubiquitin-conjugating enzyme, or E2 enzyme, where it is attached to a cysteine residue.

This apparently simple “transfer” is indeed a complex intermediate phase. Humans have about 40 different E2 molecules. The following paper:

E2 enzymes: more than just middle men

details some of the functional complexity existing at this level.

Abstract:

Ubiquitin-conjugating enzymes (E2s) are the central players in the trio of enzymes responsible for the attachment of ubiquitin (Ub) to cellular proteins. Humans have ∼40 E2s that are involved in the transfer of Ub or Ub-like (Ubl) proteins (e.g., SUMO and NEDD8). Although the majority of E2s are only twice the size of Ub, this remarkable family of enzymes performs a variety of functional roles. In this review, we summarize common functional and structural features that define unifying themes among E2s and highlight emerging concepts in the mechanism and regulation of E2s.

However, I will not go into details about these aspects, because we have better things to do: we still have to discuss the E3 phase!

 

The E3 step of ubiquitination.

This is the last phase of ubiquitination, where the ubiquitin tag is finally transferred to the target protein, as initial mono-ubiquitination, or to build a ubiquitin chain through subsequent ubiquitination events. The proteins which implement this final passage are called E3 ubiquitin ligases. Here is the definition from Wikipedia:

A ubiquitin ligase (also called an E3 ubiquitin ligase) is a protein that recruits an E2 ubiquitin-conjugating enzyme that has been loaded with ubiquitin, recognizes a protein substrate, and assists or directly catalyzes the transfer of ubiquitin from the E2 to the protein substrate.

It is rather obvious that the role of the E3 protein is very important and delicate. Indeed it:

  1. Recognizes and links the E2-ubiquitin complex
  2. Recognizes and links some specific target protein
  3. Builds the appropriate tag for that protein (monoubiquitination, multiple monoubiquitination, or polyubiquitination with the appropriate type of ubiquitin chain).
  4. And it does all those things at the right moment, in the right context, and for the right protein.

IOWs, the E3 protein writes the coded tag. It is, by all means, the central actor in our complex story.
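The three-step cascade described above can be caricatured in a few lines of code. This is a schematic, heavily simplified model; all the names and the substrate are illustrative, and none of the real chemistry (thioester bonds, active-site cysteines) is represented.

```python
# A schematic model of the E1 -> E2 -> E3 cascade. Purely illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Protein:
    name: str
    chain: List[str] = field(default_factory=list)   # ubiquitin chain linkages

def e1_activate() -> str:
    """E1: activate ubiquitin (ATP-dependent, bound to E1's cysteine)."""
    return "Ub*"                     # '*' marks activated ubiquitin

def e2_conjugate(activated_ub: str) -> str:
    """E2: receive the activated ubiquitin on its own cysteine."""
    return activated_ub              # simple hand-off in this cartoon

def e3_ligate(substrate: Protein, ub: str, linkage: str) -> None:
    """E3: recognize the substrate and write one link of the coded tag."""
    substrate.chain.append(linkage)

target = Protein("IkB-alpha")        # example substrate
for _ in range(4):                   # a K48 chain of 4 = degradation signal
    ub = e2_conjugate(e1_activate())
    e3_ligate(target, ub, "K48")

print(target.chain)                  # ['K48', 'K48', 'K48', 'K48']
```

The point of the cartoon: the E1 and E2 steps are generic, while the E3 step carries the specificity, choosing both the substrate and the linkage.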

So, here comes the really important point: how many different E3 ubiquitin ligases do we find in eukaryotic organisms? And the simple answer is: quite a lot!

Humans are supposed to have more than 600 different E3 ubiquitin ligases!

So, the human machinery for ubiquitination is about:

2 E1 proteins  –  40 E2 proteins – >600 E3 proteins

A real cascade of complexity!
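A back-of-the-envelope calculation makes the point: even counting only which E1, E2 and E3 could handle a given ubiquitination event, the number of possible enzyme combinations is huge (and 600 is a lower bound for the E3s).

```python
# Lower bound on the number of distinct E1-E2-E3 combinations in humans,
# using the counts quoted above.
e1, e2, e3 = 2, 40, 600
print(e1 * e2 * e3)   # 48000
```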

OK, but even if we look at single celled eukaryotes we can already find an amazing level of complexity. In yeast, for example, we have:

1 or 2 E1 proteins  –  11 E2 proteins – 60-100 E3 proteins

See here:

The Ubiquitin–Proteasome System of Saccharomyces cerevisiae

Now, a very important point. Those 600+ E3 proteins that we find in humans are really different proteins. Of course, they have something in common: a specific domain.

From that point of view, they can be roughly classified in three groups according to the specific E3 domain:

  1. RING group: the RING finger domain (Really Interesting New Gene) is a short domain of zinc-finger type, usually 40 to 60 amino acids. This is the biggest group of E3s (about 600).
  2. HECT domain (homologous to the E6AP carboxyl terminus): this is a bigger domain (about 350 AAs), located at the C terminus of the protein. It has a specific ligase activity, different from the RING. In humans we have approximately 30 proteins of this type.
  3. RBR domain (RING between RING fingers): this is a common domain (about 150 AAs) where two RING fingers are separated by a region called IBR, a cysteine-rich zinc finger. Only a subset of these proteins are E3 ligases; in humans we have about 12 of them.

See also here.

OK, so these proteins have one of these three domains in common, usually the RING domain. The function of the domain is specifically to interact with the E2-ubiquitin complex to implement the ligase activity. But the domain is only a part of the molecule, indeed a small part of it. E3 ligases are usually big proteins (hundreds, and up to thousands, of AAs). Each of these proteins has a very specific non-domain sequence, which is probably responsible for the most important part of the function: the recognition of the specific proteins that each E3 ligase processes.

This is a huge complexity, in terms of functional information at sequence level.

Our map of the ubiquitinating system in humans could now be summarized as follows:

2 E1 proteins  –  40 E2 proteins – 600+ E3 proteins + thousands of specific substrates

IOWs, each of hundreds of different complex proteins recognizes its specific substrates, and marks them with a shared symbolic code based on ubiquitin and its many possible chains. And the result of that process is that proteins are destined to degradation by the proteasome or other mechanisms, that protein interactions and protein signaling are regulated and made possible, and that practically all cellular functions are allowed to flow correctly and smoothly.

Finally, here are two further components of the ubiquitination system, which I will barely mention, to avoid making this OP too long.

Ubiquitin like proteins (Ubl):

A number of ubiquitin like proteins add to the complexity of the system. Here is the abstract from a review:

The eukaryotic ubiquitin family encompasses nearly 20 proteins that are involved in the posttranslational modification of various macromolecules. The ubiquitin-like proteins (UBLs) that are part of this family adopt the β-grasp fold that is characteristic of its founding member ubiquitin (Ub). Although structurally related, UBLs regulate a strikingly diverse set of cellular processes, including nuclear transport, proteolysis, translation, autophagy, and antiviral pathways. New UBL substrates continue to be identified and further expand the functional diversity of UBL pathways in cellular homeostasis and physiology. Here, we review recent findings on such novel substrates, mechanisms, and functions of UBLs.

These proteins include SUMO, NEDD8, ISG15, and many others.

Deubiquitinating enzymes (DUBs):

The process of ubiquitination, complex as it already is, is additionally regulated by these enzymes which can cleave ubiquitin from proteins and other molecules. Doing so, they can reverse the effects of ubiquitination, creating a delicately balanced regulatory network. In humans there are nearly 100 DUB genes, which can be classified into two main classes: cysteine proteases and metalloproteases.

 

By the way, here is a beautiful animation of the basic working of the ubiquitin-proteasome system in degrading damaged proteins:

 

 

A summary:

So, let’s try a final graphic summary of the whole ubiquitin system in humans:

Fig 3 A graphic summary of the Ubiquitin System

 

Evolution of the Ubiquitin system?

The Ubiquitin system is essentially a eukaryotic tool. Of course, distant precursors for some of the main components have been “found” in prokaryotes. Here is the abstract from a paper that sums up what is known about the prokaryotic “origins” of the system:

Structure and evolution of ubiquitin and ubiquitin-related domains.

(Paywall)

Abstract:

Since its discovery over three decades ago, it has become abundantly clear that the ubiquitin (Ub) system is a quintessential feature of all aspects of eukaryotic biology. At the heart of the system lies the conjugation and deconjugation of Ub and Ub-like (Ubls) proteins to proteins or lipids drastically altering the biochemistry of the targeted molecules. In particular, it represents the primary mechanism by which protein stability is regulated in eukaryotes. Ub/Ubls are typified by the β-grasp fold (β-GF) that has additionally been recruited for a strikingly diverse range of biochemical functions. These include catalytic roles (e.g., NUDIX phosphohydrolases), scaffolding of iron-sulfur clusters, binding of RNA and other biomolecules such as co-factors, sulfur transfer in biosynthesis of diverse metabolites, and as mediators of key protein-protein interactions in practically every conceivable cellular context. In this chapter, we present a synthetic overview of the structure, evolution, and natural classification of Ub, Ubls, and other members of the β-GF. The β-GF appears to have differentiated into at least seven clades by the time of the last universal common ancestor of all extant organisms, encompassing much of the structural diversity observed in extant versions. The β-GF appears to have first emerged in the context of translation-related RNA-interactions and subsequently exploded to occupy various functional niches. Most biochemical diversification of the fold occurred in prokaryotes, with the eukaryotic phase of its evolution mainly marked by the expansion of the Ubl clade of the β-GF. Consequently, at least 70 distinct Ubl families are distributed across eukaryotes, of which nearly 20 families were already present in the eukaryotic common ancestor. These included multiple protein and one lipid conjugated forms and versions that functions as adapter domains in multimodule polypeptides. 
The early diversification of the Ubl families in eukaryotes played a major role in the emergence of characteristic eukaryotic cellular substructures and systems pertaining to nucleo-cytoplasmic compartmentalization, vesicular trafficking, lysosomal targeting, protein processing in the endoplasmic reticulum, and chromatin dynamics. Recent results from comparative genomics indicate that precursors of the eukaryotic Ub-system were already present in prokaryotes. The most basic versions are those combining an Ubl and an E1-like enzyme involved in metabolic pathways related to metallopterin, thiamine, cysteine, siderophore and perhaps modified base biosynthesis. Some of these versions also appear to have given rise to simple protein-tagging systems such as Sampylation in archaea and Urmylation in eukaryotes. However, other prokaryotic systems with Ubls of the YukD and other families, including one very close to Ub itself, developed additional elements that more closely resemble the eukaryotic state in possessing an E2, a RING-type E3, or both of these components. Additionally, prokaryotes have evolved conjugation systems that are independent of Ub ligases, such as the Pup system.

 

As usual, we are dealing here with distant similarities, but there is no doubt that the ubiquitin system as we know it appears in eukaryotes.

But what about its evolutionary history in eukaryotes?

We have already mentioned the extremely high conservation of ubiquitin itself.

UBA1, the main E1 enzyme, is rather well conserved from fungi to humans: 60% identity, 1282 bits, 1.21 bits per amino acid (baa).

E2s are small enzymes, extremely conserved from fungi to humans: 86% identity, for example, for UB2D2, a 147 AAs molecule.

E3s, of course, are the most interesting issue. This big family of proteins behaves in different ways, consistently with its highly specific functions.

It is difficult to build a complete list of E3 proteins. I have downloaded from Uniprot a list of reviewed human proteins including “E3 ubiquitin ligase” in their name: a total of 223 proteins.

The evolutionary behavior of this group in metazoa differs considerably from protein to protein. However, as a group these proteins exhibit an information jump in vertebrates which is significantly higher than the jump in all other proteins:

 

Fig. 4 Boxplots of the distribution of human conserved information jump from pre-vertebrates to vertebrates in 223 E3 ligase proteins and in all other human proteins. The difference is highly significant.

 

As we already know, this is evidence that this class of proteins is highly engineered in the transition to vertebrates. That is consistent with the need to finely regulate many cellular processes, most of which are certainly highly specific for different groups of organisms.

The highest vertebrate jump, in terms of bits per amino acid, is shown in my group by the E3 ligase TRIM62, also known as DEAR1 (Q9BVG3), a 475 AA long protein almost absent in pre-vertebrates (best hit 129 bits, 0.27 baa, in Branchiostoma belcheri), which shows an amazing jump of 1.433684 baa in cartilaginous fish (810 bits, 1.705263 baa).

But what is this protein? It is a master regulator and tumor suppressor gene, implicated in immunity, inflammation, and tumorigenesis.

See here:

TRIM Protein-Mediated Regulation of Inflammatory and Innate Immune Signaling and Its Association with Antiretroviral Activity

and here:

DEAR1 is a Chromosome 1p35 Tumor Suppressor and Master Regulator of TGFβ-Driven Epithelial-Mesenchymal Transition

This is just to show what a single E3 ligase can be involved in!

An opposite example, from the point of view of evolutionary history, is SIAH1, an E3 ligase implicated in proteasomal degradation of proteins. It is a 282 AA long protein, which already exhibits 1.787234 baa (504 bits) of homology in deuterostomes, indeed already 1.719858 baa in cnidaria. However, in fungi the best hit is only 50.8 bits (0.18 baa). So, this is a protein whose engineering takes place at the start of metazoa, and which exhibits only a minor further jump in vertebrates (0.29 baa), which brings the protein practically to its human form already in cartilaginous fish (280 identities out of 282, 99%). Practically a record.
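The "baa" figures used throughout this section are simply the BLAST bit score divided by the protein length, and a "jump" is the baa difference between two evolutionary stages. The numbers quoted for TRIM62 and SIAH1 can be reproduced directly:

```python
# "Bits per amino acid" (baa) = BLAST bit score / protein length.
# The "information jump" is the baa difference between two stages.

def baa(bits: float, length: int) -> float:
    return bits / length

# TRIM62 (475 AAs): pre-vertebrate best hit vs cartilaginous fish
pre  = baa(129, 475)              # ~0.27
vert = baa(810, 475)              # ~1.705263
print(round(vert - pre, 6))       # 1.433684  (the jump quoted in the text)

# SIAH1 (282 AAs): already near its final form in early metazoa
print(round(baa(504, 282), 6))    # 1.787234
```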

So, we can see that E3 ligases are a good example of a class of proteins which perform different specific functions, and therefore exhibit different evolutionary histories: some, like TRIM62, are vertebrate quasi-novelties; others, like SIAH1, are metazoan quasi-novelties. And, of course, there are other behaviours, like for example BRCA1 (Breast cancer type 1 susceptibility protein), a 1863 AA long protein which acquires part of its final human sequence configuration only in mammals.

The following figure shows the evolutionary history of the three proteins mentioned above.

 

Fig. 5 Evolutionary history in metazoa of three E3 ligases (human conserved functional information)

 

An interesting example: NF-kB signaling

I will discuss briefly an example of how the Ubiquitin system interacts with some specific and complex final effector system. One of the best models for that is the NF-kB signaling.

NF-kB is a transcription factor family that is the final effector of a complex signaling pathway. I will rely mainly on the following recent free paper:

The Ubiquitination of NF-κB Subunits in the Control of Transcription

Here is the abstract:

Nuclear factor (NF)-κB has evolved as a latent, inducible family of transcription factors fundamental in the control of the inflammatory response. The transcription of hundreds of genes involved in inflammation and immune homeostasis require NF-κB, necessitating the need for its strict control. The inducible ubiquitination and proteasomal degradation of the cytoplasmic inhibitor of κB (IκB) proteins promotes the nuclear translocation and transcriptional activity of NF-κB. More recently, an additional role for ubiquitination in the regulation of NF-κB activity has been identified. In this case, the ubiquitination and degradation of the NF-κB subunits themselves plays a critical role in the termination of NF-κB activity and the associated transcriptional response. While there is still much to discover, a number of NF-κB ubiquitin ligases and deubiquitinases have now been identified which coordinate to regulate the NF-κB transcriptional response. This review will focus the regulation of NF-κB subunits by ubiquitination, the key regulatory components and their impact on NF-κB directed transcription.

 

The following figure sums up the main features of the canonical activation pathway:

 

Fig. 6 A simple summary of the main steps in the canonical activation pathway of NF-kB

 

Here the NF-κB TF is essentially the heterodimer RelA – p50. Before activation, the NF-κB (RelA – p50) dimer is kept in an inactive state and remains in the cytoplasm because it is linked to the IkB alpha protein, an inhibitor of its function.

Activation is mediated by a signal-receptor interaction, which starts the whole pathway. A lot of different signals can do that, adding to the complexity, but we will not discuss this part here.

As a consequence of receptor activation, another protein complex, IκB kinase (IKK), accomplishes the phosphorylation of IκBα at serines 32 and 36. This is the signal for the ubiquitination of the IkB alpha inhibitor.

This ubiquitination targets IkB alpha for proteasomal degradation. But how is it achieved?

Well, things are not so simple. A whole protein complex is necessary, a complex which implements many different ubiquitinations in different contexts, including this one.

The complex is made by 3 basic proteins:

  • Cul1 (a scaffold protein, 776 AAs)
  • SKP1 (an adaptor protein, 163 AAs)
  • Rbx1 (a RING finger protein with E3 ligase activity, 108 AAs)

Plus:

  • An F-box protein (FBP) which changes in the different context, and confers specificity.

In our context, the F-box protein is called beta-TrCP (605 AAs).

 

Fig. 7 A simple diagram of the SKP1 – beta-TrCP complex

 

Once the IkB alpha inhibitor is ubiquitinated and degraded in the proteasome, the NF-κB dimer is free to translocate to the nucleus and implement its function as a transcription factor (which is another complex issue, that we will not discuss).

OK, this is only the canonical activation of the pathway.

In the non canonical pathway (not shown in the figure) a different set of signals, receptors and activators acts on a different NF-κB dimer (RelB – p100). This dimer is not linked to any inhibitor, but is itself inactive in the cytoplasm. As a result of the signal, p100 is phosphorylated at serines 866 and 870. Again, this is the signal for ubiquitination.

This ubiquitination is performed by the same complex described above, but the result is different. p100 is only partially degraded in the proteasome, and is transformed into a smaller protein, p52, which remains linked to RelB. The RelB – p52 dimer is now an active NF-κB transcription factor, and it can relocate to the nucleus and act there.
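The contrast between the two pathways can be summed up in a small cartoon: in the canonical pathway ubiquitination fully degrades an inhibitor, while in the non-canonical pathway it only partially processes a precursor. Everything below is schematic; the names just echo the steps described in the text.

```python
# A schematic contrast of the two NF-kB pathways' use of ubiquitination.
# Purely illustrative; no real biochemistry is modeled.

def canonical_step(ikb_phosphorylated: bool) -> str:
    """Phospho-IkB-alpha receives a K48 chain and is FULLY degraded,
    freeing the RelA-p50 dimer to enter the nucleus."""
    if not ikb_phosphorylated:
        return "RelA-p50 held inactive in cytoplasm by IkB-alpha"
    return "IkB-alpha degraded; RelA-p50 translocates to nucleus"

def non_canonical_step(p100_phosphorylated: bool) -> str:
    """Phospho-p100 is only PARTIALLY degraded, yielding p52; the
    RelB-p52 dimer then becomes an active transcription factor."""
    if not p100_phosphorylated:
        return "RelB-p100 inactive in cytoplasm"
    return "p100 processed to p52; RelB-p52 translocates to nucleus"

print(canonical_step(True))
print(non_canonical_step(True))
```

Same writing machinery, two different outcomes: the meaning of the ubiquitin tag depends on the substrate and the context.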

But that’s not all.

  • You may remember that RelA (also called p65) is one of the two components of the NF-kB TF in the canonical pathway (the other being p50). Well, RelA is heavily controlled by ubiquitination after it binds DNA in the nucleus to implement its TF activity. Ubiquitination (a very complex form of it) helps detachment of the TF from DNA, and its controlled degradation, avoiding sustained expression of NF-κB-dependent genes. For more details, see section 4 in the above quoted paper: “Ubiquitination of NF-κB”.
  • The activation of IKK in both the canonical and non canonical pathway after signal – receptor interaction is not so simple as depicted in Fig. 6. For more details, look at Fig. 1 in this paper: Ubiquitin Signaling in the NF-κB Pathway. You can see that, in the canonical pathway, the activation of IKK is mediated by many proteins, including TRAF2, TRAF6, TAK1, NEMO.
  • TRAF2 is a key regulator of many signaling pathways, including NF-kB. It is an E3 ubiquitin ligase. From Uniprot:  “Has E3 ubiquitin-protein ligase activity and promotes ‘Lys-63’-linked ubiquitination of target proteins, such as BIRC3, RIPK1 and TICAM1. Is an essential constituent of several E3 ubiquitin-protein ligase complexes, where it promotes the ubiquitination of target proteins by bringing them into contact with other E3 ubiquitin ligases.”
  • The same is true of TRAF6.
  • NEMO (NF-kappa-B essential modulator) is also a key regulator. It is not itself a ubiquitinating enzyme, but is rather heavily regulated by ubiquitination. From Uniprot: “Regulatory subunit of the IKK core complex which phosphorylates inhibitors of NF-kappa-B thus leading to the dissociation of the inhibitor/NF-kappa-B complex and ultimately the degradation of the inhibitor. Its binding to scaffolding polyubiquitin seems to play a role in IKK activation by multiple signaling receptor pathways. However, the specific type of polyubiquitin recognized upon cell stimulation (either ‘Lys-63’-linked or linear polyubiquitin) and its functional importance is reported conflictingly.”
  • In the non canonical pathway, the activation of IKK alpha after signal – receptor interaction is mediated by other proteins, in particular one protein called NIK (see again Fig. 1 quoted above). Well, NIK is regulated by two different types of E3 ligases, with two different types of polyubiquitination:
    • cIAP E3 ligase inactivates it by constant degradation using a K48 chain
    • ZFP91 E3 ligase stabilizes it using a K63 chain

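The K48/K63 contrast above is the heart of the "ubiquitin code" idea discussed throughout this OP. As a purely illustrative sketch (the mapping below is a gross simplification of well-known associations; real outcomes depend on context, chain length, branching, and the reading UBDs), the code can be modeled as a lookup from chain linkage type to outcome:

```python
# Illustrative sketch of the "ubiquitin code" as a symbolic lookup table.
# The mapping is a simplification of associations mentioned in the text;
# actual outcomes are heavily context-dependent.

CHAIN_OUTCOMES = {
    "K48": "proteasomal degradation",
    "K63": "non-degradative signaling / stabilization",
    "M1": "linear chain: NF-kB signaling, immune responses",
}

def fate(substrate: str, chain: str) -> str:
    """Return the simplified fate implied by the chain type written on a substrate."""
    outcome = CHAIN_OUTCOMES.get(chain, "unknown / context-dependent")
    return f"{substrate} + {chain} chain -> {outcome}"

# The NIK example from the text: two E3 ligases, two chain types, two fates.
print(fate("NIK", "K48"))  # cIAP writes a K48 chain -> constant degradation
print(fate("NIK", "K63"))  # ZFP91 writes a K63 chain -> stabilization
```

The point the sketch tries to capture is the semiosis: the same substrate (NIK) receives opposite fates depending purely on which symbolic "tag" an E3 ligase writes on it.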
See here:

Non-canonical NF-κB signaling pathway.

In particular, Fig. 3

These are only some of the ways the ubiquitin system interacts with the very complex NF-kB signaling system. I hope that’s enough to show how two completely different and complex biological systems manage to cooperate by intricate multiple connections, and how the ubiquitin system can intervene at all levels of another process. What is true for the NF-kB signaling pathway is equally true for a lot of other biological systems, indeed for almost all basic cellular processes.

But this OP is already too long, and I have to stop here.

As usual, I want to close with a brief summary of the main points:

  1. The Ubiquitin system is a very important regulation network that shows two different signatures of design: amazing complexity and an articulated semiotic structure.
  2. The complexity is obvious at all levels of the network, but is especially amazing at the level of the hundreds of E3 ligases, that can recognize thousands of different substrates in different contexts.
  3. The semiosis is obvious in the Ubiquitin Code, a symbolic code of different ubiquitin configurations which serve as specific “tags” that point to different outcomes.
  4. The code is universally implemented and shared in eukaryotes, and allows control of almost all of the most important cellular processes.
  5. The code is written by the hundreds of E3 ligases. It is read by the many interactors with ubiquitin-binding domains (UBDs).
  6. The final outcome is of different types, including degradation, endocytosis, protein signaling, and so on.
  7. The interaction of the Ubiquitin System with other complex cellular pathways, like signaling pathways, is extremely complex and various, and happens at many different levels and by many different interacting proteins for each single pathway.

PS:

Thanks to DATCG for pointing to this video in three parts by Dr. Raymond Deshaies, who was Professor of Biology at the California Institute of Technology and an Investigator of the Howard Hughes Medical Institute, on the iBiology YouTube page:

A primer on the ubiquitin-proteasome system

 

Cullin-RING ubiquitin ligases: structure, mechanism, and regulation

 

Targeting the ubiquitin-proteasome system in cancer

Comments
Hmm! Antonin
DATCG, OLV and all interested: Ubiquitin is definitely a friendly molecule. It even helps clarify important issues. A recurring problem in our debates is that sometimes molecules that are extremely conserved in their evolutionary history are more tolerant than expected to sequence substitutions in the lab (for example, histones). This observation has been used many times by IDists and by neo-darwinists to state that sequence conservation could not be a good measure of functional constraint. I have always disagreed. As anyone who has read my OPs certainly knows, I am absolutely convinced that sequence conservation through long evolutionary times is definitely a very good indicator of functional constraint. I have always argued that short term results in lab conditions are not a measure of lasting functional fitness, while sequence conservation definitely is. Now, this very interesting recent paper about ubiquitin seems to absolutely confirm my point: Extending chemical perturbations of the ubiquitin fitness landscape in a classroom setting reveals new constraints on sequence tolerance http://bio.biologists.org/content/7/7/bio036103.long
ABSTRACT: Although the primary protein sequence of ubiquitin (Ub) is extremely stable over evolutionary time, it is highly tolerant to mutation during selection experiments performed in the laboratory. We have proposed that this discrepancy results from the difference between fitness under laboratory culture conditions and the selective pressures in changing environments over evolutionary timescales. Building on our previous work (Mavor et al., 2016), we used deep mutational scanning to determine how twelve new chemicals (3-Amino-1,2,4-triazole, 5-fluorocytosine, Amphotericin B, CaCl2, Cerulenin, Cobalt Acetate, Menadione, Nickel Chloride, p-Fluorophenylalanine, Rapamycin, Tamoxifen, and Tunicamycin) reveal novel mutational sensitivities of ubiquitin residues. Collectively, our experiments have identified eight new sensitizing conditions for Lys63 and uncovered a sensitizing condition for every position in Ub except Ser57 and Gln62. By determining the ubiquitin fitness landscape under different chemical constraints, our work helps to resolve the inconsistencies between deep mutational scanning experiments and sequence conservation over evolutionary timescales.
gpuccio
OLV found an interesting paper! Cross-posting OLV's find here on Chromatin and Ubiquitin "crowbar" and Histone H2B... https://www.researchgate.net/publication/49765818_Chromatin_A_ubiquitin_crowbar_opens_chromatin
A ubiquitin crowbar opens chromatin Monoubiquitylation of histone H2B is found to disrupt condensation of chemically defined chromatin fibers. A novel fluorescence-based assay is used in concert with analytical ultracentrifugation to uncover the synergistic roles of histone acetylation and ubiquitylation on chromatin dynamics
Article in Nature Chemical Biology · February 2011 DOI: 10.1038/nchembio.514 · Source: PubMed Chromatin: A ubiquitin crowbar opens chromatin Originally posted by OLV in PaV's post on Chromatin... https://uncommondesc.wpengine.com/intelligent-design/chromatin-topology-the-new-and-latest-functional-complexity/ DATCG
Gpuccio @946, thanks and at #947... Ha! Evidence just keeps growing with nearly every discovery and paper! Design, not Blind! ;-) OLV @948-949, thanks as well! More reading to follow up on. DATCG
They mention the keyword "ubiquitin" here too: the function of the ubiquitin-like modifier DiSUMO-LIKE (DSUL) for early embryo development in maize OLV
gpuccio: That's an interesting paper. Thanks. In 2018 they are still finding novel functions for the ubiquitin system components and also novel protein-protein interactions? BTW, do these discoveries help the case for neo-Darwinian macroevolution? https://www.molbiolcell.org/doi/pdf/10.1091/mbc.E17-04-0248
We demonstrate a novel function for the E3 Ub ligase UBR5 in regulation of ciliogenesis via maintenance of centriolar satellite stability. We also demonstrate a novel protein-protein interaction between UBR5 and the CSPP-L isoform of CSPP1, predominantly at the centrosome and surrounding centriolar satellites.
In summary, we have demonstrated a highly novel role for the E3 Ub ligase UBR5 in primary cilia maintenance/formation, with potential implications for understanding the molecular basis of key signalling pathways in development and disease.
OLV
To all: This is brand new: The E3 ubiquitin ligase UBR5 regulates centriolar satellite stability and primary cilia. https://www.ncbi.nlm.nih.gov/pubmed/29742019
Abstract: Primary cilia are crucial for signal transduction in a variety of pathways, including Hedgehog and Wnt. Disruption of primary cilia formation (ciliogenesis) is linked to numerous developmental disorders (known as ciliopathies) and diseases, including cancer. The Ubiquitin-Proteasome System (UPS) component UBR5 was previously identified as a putative positive regulator of ciliogenesis in a functional genomics screen. UBR5 is an E3 Ubiquitin ligase that is frequently deregulated in tumours, but its biological role in cancer is largely uncharacterised, partly due to a lack of understanding of interacting proteins and pathways. We validated the effect of UBR5 depletion on primary cilia formation using a robust model of ciliogenesis, and identified CSPP1, a centrosomal and ciliary protein required for cilia formation, as a UBR5-interacting protein. We show that UBR5 ubiquitylates CSPP1, and that UBR5 is required for cytoplasmic organization of CSPP1-comprising centriolar satellites in centrosomal periphery, suggesting that UBR5 mediated ubiquitylation of CSPP1 or associated centriolar satellite constituents is one underlying requirement for cilia expression. Hence, we have established a key role for UBR5 in ciliogenesis that may have important implications in understanding cancer pathophysiology.
Now, UBR5 is an E3 ligase of exceptional length: 2799 AAs. This is the "function" section at Uniprot: "E3 ubiquitin-protein ligase which is a component of the N-end rule pathway. Recognizes and binds to proteins bearing specific N-terminal residues that are destabilizing according to the N-end rule, leading to their ubiquitination and subsequent degradation (By similarity). Involved in maturation and/or transcriptional regulation of mRNA by activating CDK9 by polyubiquitination. May play a role in control of cell cycle progression. May have tumor suppressor function. Regulates DNA topoisomerase II binding protein (TopBP1) in the DNA damage response. Plays an essential role in extraembryonic development. Ubiquitinates acetylated PCK1. Also acts as a regulator of DNA damage response by acting as a suppressor of RNF168, an E3 ubiquitin-protein ligase that promotes accumulation of 'Lys-63'-linked histone H2A and H2AX at DNA damage sites, thereby acting as a guard against excessive spreading of ubiquitinated chromatin at damaged chromosomes." Now, this protein has an amazing jump in human-conserved information from pre-vertebrates to vertebrates: 2098 bits (0.75 baa) The human protein and the protein in cartilaginous fish (Callorhinchus milii) show the following homology: 4913 bits 2574 identities (92%) 2690 positives (95%) IOWs, this very long protein has remained almost the same for 400+ million years! Uniprot recognizes only two domains in the C terminal part: PABC (78 AAs) HECT (338 AAs) The Blast page recognizes the same two domains, plus one small putative zinc finger (67 AA) in the middle of the sequence, and an even smaller CUE domain (64 AAs) in the N terminal part. IOWs, more than 2200 of the AAs that make up the protein, and that are extremely conserved, do not correspond to known domains. 
This is certainly an amazing example of a highly specific and very long sequence, whose complex regulatory functions we can only barely imagine, and that exhibits almost 5000 bits of functional information (conserved from cartilaginous fish to humans) more than 2000 of them appearing for the first time in the transition to vertebrates. gpuccio
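The "baa" (bits per amino acid) figure quoted in the comment above is simply the BLAST bit score divided by the protein length. A few lines of Python (an informal check of gpuccio's arithmetic, not a standard bioinformatics metric) confirm the numbers:

```python
# Informal check of the "bits per amino acid" (baa) arithmetic quoted above.
# "baa" here is just BLAST bit score divided by protein length -- the
# commenter's informal metric, not a standard measure.

def baa(bit_score: float, length_aa: int) -> float:
    """Bits per amino acid over the whole protein length."""
    return bit_score / length_aa

UBR5_LENGTH = 2799  # AAs, from the comment above

# Human-conserved information jump at the vertebrate transition: 2098 bits
print(round(baa(2098, UBR5_LENGTH), 2))  # -> 0.75, matching the quoted 0.75 baa

# Human vs. Callorhinchus milii homology over the full protein: 4913 bits
print(round(baa(4913, UBR5_LENGTH), 2))  # -> 1.76 bits per AA
```

This also makes the quoted identity figure plausible: 2574 identities out of 2799 AAs is about 92%, assuming the alignment spans essentially the whole protein.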
DATCG at #944: Thank you for cross-posting. Strange that blind evolution can so easily find targets that we cannot even predict. gpuccio
To all: This is an in depth analysis of the ERAD (ER-associated protein degradation) retrotranslocation mechanism, already quoted in this discussion. Mechanistic insights into ER-associated protein degradation https://www.sciencedirect.com/science/article/pii/S0955067418300048
Abstract: Misfolded proteins of the endoplasmic reticulum (ER) are discarded by a conserved process, called ER-associated protein degradation (ERAD). ERAD substrates are retro-translocated into the cytosol, polyubiquitinated, extracted from the ER membrane, and ultimately degraded by the proteasome. Recent in vitro experiments with purified components have given insight into the mechanism of ERAD. ERAD substrates with misfolded luminal or intramembrane domains are moved across the ER membrane through a channel formed by the multispanning ubiquitin ligase Hrd1. Following polyubiquitination, substrates are extracted from the membrane by the Cdc48/p97 ATPase complex and transferred to the proteasome. We discuss the molecular mechanism of these processes and point out remaining open questions.
Unfortunately, the paper is paywalled. The general idea is that proteins from the ER must be degraded in the cytosol by the proteasome. For that to happen, they must be "retrotranslocated" to be ubiquitinated. It seems that the protein mainly responsible for that is Hrd1, a transmembrane E3 ligase, which "exposes" the target protein to the cytosol and ubiquitinates it. Another protein complex (Cdc48/p97 ATPase) then extracts the ubiquitinated protein from the ER membrane, so that it can be degraded in the cytosol by the proteasome. The ERAD pathway exists in three different forms: ERAD-L, ERAD-M and ERAD-C. Each of them is implemented by different specific protein complexes. For ERAD-L (the pathway involved in degradation of intraluminal ER proteins) we have: a) A complex linked to the ER membrane, made of 4 different proteins: Hrd3, Usa1, Der1 and Hrd1, where both Hrd3 and Hrd1 are E3 ligases, and Hrd1 is the "channel" through which the target protein is retrotranslocated, exposed to the cytosol and ubiquitinated. b) A luminal protein, Yos9 c) Other ubiquitinating components on the cytosol margin of the ER membrane, including Ubc7 (an E2 enzyme) and Cue1 d) A protein complex which extracts the ubiquitinated target protein from the membrane, and releases it for proteasomal degradation in the cytosol, made of our well known Cdc48/VCP/p97 (or however you want to call it), Ufd1 (Ubiquitin recognition factor) and Npl4, plus at least one DUB (Otu1), but probably more than one. e) A couple of "shuttling factors", Rad23 and Dsk2, which each have both ubiquitin- and proteasome-binding domains, and which are probably responsible for transferring the target protein to the proteasome. Rather simple, isn't it? And this is only the ERAD-L pathway! :) gpuccio
Gpuccio @942, thanks :) I'm cross-posting your link from Defending Design OP here for the Ubiquitin Proteasome System: Predictive hypotheses are ineffectual in resolving complex biochemical systems. Abstract:
Scientific hypotheses may either predict particular unknown facts or accommodate previously-known data. Although affirmed predictions are intuitively more rewarding than accommodations of established facts, opinions divide whether predictive hypotheses are also epistemically superior to accommodation hypotheses. This paper examines the contribution of predictive hypotheses to discoveries of several bio-molecular systems. Having all the necessary elements of the system known beforehand, an abstract predictive hypothesis of semiconservative mode of DNA replication was successfully affirmed. However, in defining the genetic code whose biochemical basis was unclear, hypotheses were only partially effective and supplementary experimentation was required for its conclusive definition. Markedly, hypotheses were entirely inept in predicting workings of complex systems that included unknown elements. Thus, hypotheses did not predict the existence and function of mRNA, the multiple unidentified components of the protein biosynthesis machinery, or the manifold unknown constituents of the ubiquitin-proteasome system of protein breakdown. Consequently, because of their inability to envision unknown entities, predictive hypotheses did not contribute to the elucidation of complex systems. As data-based accommodation theories remained the sole instrument to explain complex bio-molecular systems, the philosophical question of alleged advantage of predictive over accommodative hypotheses became inconsequential.
Imagine that, a blind belief in gradualist blind events cannot envision or predict functionally complex systems? That's an honest assessment at least and well founded. Yet Darwinists will claim Design Theorists lack imagination. And still cling to JUNK DNA as their savior. Yep, they missed out on the UPS as this well done OP and growing aggregated list of Ubiquitin System network topology and functionality keeps increasing day by day. DATCG
DATCG and all: This is from today's search: Neuronal Proteomic Analysis of the Ubiquitinated Substrates of the Disease-Linked E3 Ligases Parkin and Ube3a https://www.hindawi.com/journals/bmri/2018/3180413/
Abstract: Both Parkin and UBE3A are E3 ubiquitin ligases whose mutations result in severe brain dysfunction. Several of their substrates have been identified using cell culture models in combination with proteasome inhibitors, but not in more physiological settings. We recently developed the strategy to isolate ubiquitinated proteins in flies and have now identified by mass spectrometry analysis the neuronal proteins differentially ubiquitinated by those ligases. This is an example of how flies can be used to provide biological material in order to reveal steady state substrates of disease causing genes. Collectively our results provide new leads to the possible physiological functions of the activity of those two disease causing E3 ligases. Particularly, in the case of Parkin the novelty of our data originates from the experimental setup, which is not overtly biased by acute mitochondrial depolarisation. In the case of UBE3A, it is the first time that a nonbiased screen for its neuronal substrates has been reported.
Public access. Here are a few interesting remarks from the paper:
Both Parkin (PARK2) and UBE3A are E3 ubiquitin ligases for which mutations result in severe brain dysfunction, Familial Parkinson’s Disease (PD), and Angelman Syndrome (AS). In order to unravel the molecular mechanisms leading to these neurological dysfunctions it is necessary to identify and understand the role of their ubiquitinated substrates. --- However, even when proteins are correctly folded and functionally active in their final compartment, various factors can destabilise the proteins and irreversibly impair them. For this purpose, cells possess quality control mechanisms such as the Ubiquitin-Proteasome System (UPS) and autophagy that specifically degrade damaged proteins and organelles. --- Interestingly, ubiquitination is also involved in the regulation of autophagy [14–19]. In addition to its other roles, therefore, it is clear that ubiquitination serves as universal tag for substrate degradation, as all intracellular degradation pathways appear to be interconnected and governed by it. --- On our first application of this method, our group detected 121 ubiquitinated proteins in Drosophila neurons during embryonic development [149], including several key proteins involved in synaptogenesis and hence suggesting that UPS is important for proper neuronal arrangement. We later compared the ubiquitin landscape between developing and mature neurons in Drosophila melanogaster and identified 234 and 369 ubiquitinated proteins, respectively [154], some of which were found in both developmental stages. More interestingly, certain proteins are preferentially ubiquitinated in specific cell types during specific periods of the Drosophila life cycle, reinforcing the importance of using the appropriate cell type when studying ubiquitination. For example, Ube3a was found to be active in both developing and adult neurons, while Parkin was found to be enzymatically active in adult neurons only [104, 154]. 
Recently we have successfully employed this approach to analyze the ubiquitinated proteome of Drosophila under different conditions ([104, 154] and Ramirez et al. unpublished data). Altogether and thanks to the usage of more sensitive MS instruments, we have identified a total of ~1700 ubiquitinated proteins in Drosophila neurons (Figure 4), which represent ~11% of the total fly proteome (15,000).
See Fig. 4 for the growing number of recognized ubiquitinated proteins in Drosophila neurons. gpuccio
DATCG at #940 and #941: Wonderful comments! One of the amazing rewards of this thread has been, for you and for me and, certainly, for a few others, to discover a complexity of regulation and engineering in these cellular systems that has gone well beyond our best expectations: and my expectations, for certain, were very high even at the beginning of this adventure. But simply checking the literature for ubiquitin, ubiquitin code, E3 ligases, and similar keywords has given such an amazing return! Pubmed has been revealing new pearls almost daily: every 2-3 days, it's enough to search for "ubiquitin" and 10 - 20 brand new papers appear, and among them there is, almost always, some precious new discovery. Our kind interlocutors from the other side have done their best to practice their favourite sport: denying function, minimizing complexity, pretending that things are different from what they are. But in the end they are powerless: the fascination and the wonder of biological truth, of its continuous revelations, of this constant mise en abyme of engineering beauty, cannot be denied, minimized, or mystified. gpuccio
Now, from 2016 on IDRs, IDPs, Unstructure vs Structure and where things stand, with a recent summary and review. And why Gpuccio's term Darwin-of-the-Gaps applies. Because as knowledge grows, increases, we find more evidence of design, not less, more inter-dependency, not less. More tightly controlled regulatory systems, not less, even built-in redundancy shows purpose and function in gene expression. More Code, more Code Layers. A PDF... Order, Disorder, and Everything in Between https://pdfs.semanticscholar.org/8580/46da325254ab78d5b1239bdedff685623555.pdf Shelly DeForte 1 and Vladimir N. Uversky 1,2,3,* 1 Department of Molecular Medicine, Morsani College of Medicine, University of South Florida, Tampa, FL 33612, USA; 2 USF Health Byrd Alzheimer’s Research Institute, Morsani College of Medicine, University of South Florida, Tampa, FL 33612, USA 3 Laboratory of Structural Dynamics, Stability and Folding of Proteins, Institute of Cytology, Russian Academy of Sciences, St. Petersburg 194064, Russia Published: 19 August 2016
Abstract: In addition to the “traditional” proteins characterized by the unique crystal-like structures needed for unique functions, it is increasingly recognized that many proteins or protein regions (collectively known as intrinsically disordered proteins (IDPs) and intrinsically disordered protein regions (IDPRs)), being biologically active, do not have a specific 3D-structure in their unbound states under physiological conditions. There are also subtler categories of disorder, such as conditional (or dormant) disorder and partial disorder. Both the ability of a protein/region to fold into a well-ordered functional unit or to stay intrinsically disordered but functional are encoded in the amino acid sequence. Structurally, IDPs/IDPRs are characterized by high spatiotemporal heterogeneity and exist as dynamic structural ensembles. It is important to remember, however, that although structure and disorder are often treated as binary states, they actually sit on a structural continuum.
Finally, it is necessary to distinguish proteins that are mostly or fully disordered from proteins with isolated regions of disorder. The term IDP is used to refer to proteins that are fully disordered, or contain long, defining regions of disorder. In contrast, when a protein is mostly structured but displays some regions of disorder, it is said to have intrinsically disordered protein regions (IDPRs). Proteins that contain a mix of ordered and disordered regions are also called hybrid proteins.
Interesting review . DATCG
Gpuccio @936, Very nice :) Agree with Upright BiPed #939. Excellent reply. It's not promiscuity. It's about relational context and conditions requiring flexible design solutions. Your reply is a good example of how Design compares to blind assumptions. From your original comment link @10 on the following paper: Design Principles Involving Protein Disorder Facilitate Specific Substrate Selection and Degradation by the Ubiquitin-Proteasome System* https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4807260/ It's right in front of their faces, but they cannot see it due to blindness? Or, stubbornly refuse to accept it? There's a reason specific words used to explain processing functions matter. It's not merely descriptive language. It's understanding functional design in order to reverse engineer the design process. The title itself understands the key to Flexible design features for "specific substrate selection... by the UPS*" When it gets to purely operational functions, features and systems, Design-centric methodology and words take over. It's natural, but also required if we are to make any sense of functional systems. The Abstract:
The ubiquitin-proteasome system (UPS) regulates diverse cellular pathways by the timely removal (or processing) of proteins. Here we review the role of structural disorder and conformational flexibility in the different aspects of degradation. First, we discuss post-translational modifications within disordered regions that regulate E3 ligase localization, conformation, and enzymatic activity, and also the role of flexible linkers in mediating ubiquitin transfer and reaction processivity. Next we review well studied substrates and discuss that substrate elements (degrons) recognized by E3 ligases are highly disordered: short linear motifs recognized by many E3s constitute an important class of degrons, and these are almost always present in disordered regions.
Highly flexible once again.
Substrate lysines targeted for ubiquitination are also often located in neighboring regions of the E3 docking motifs and are therefore part of the disordered segment.
location, location, location
Finally, biochemical experiments and predictions show that initiation of degradation at the 26S proteasome requires a partially unfolded region to facilitate substrate entry into the proteasomal core.
Requirements and specs. None of this is random happenstance. Mutations to this process cause disease, so it's tightly controlled. We have relational docking to substrates. Contextually dependent, compartmental, conditional and Flexible (Disordered) for that purpose. Required as part of Dynamic Cellular Processes and/or Signal processing systems, etc., too many to name here. The UPS regulation network must be flexible across different phases or functions yet recognize and fold upon specific substrates. Promiscuous is a poorly used word. Much like "disordered" was a poor language assignment and "JUNK" was poorly thought out. This is mainly due to misunderstandings and confusion due to lack of knowledge and/or beliefs at the time. A Darwinist view requires large amounts of junk. It's not easy to reverse engineer so many detailed interactions at nano-scale levels. So it is understandable, but as Darwin-of-the-Gaps is reduced, informed knowledge shows more function. Showing more aspects of Design elements and principles. Like Flexible folding design elements, recognition and degrons. It's relational, utilizing built-in Flexibility for regulatory assignments in context dependent roles. These were once regions not well understood, because they were not "structured" and "rigid," but "disordered." But flexibility (disordered) is key to E3 Ligases from much of what we read so far. If not for "disordered" Flexible folds, induced folds, how many different substrates would be required? Or how many different E3s? If all were Structure-Function, rigid protein folds? And rigid substrates? A key design in systems networking is knowing when recognition requirements (identification) must be rigid or if flexibility is key for utilization of dynamic interactions, especially in signal processing. For subsequent acquisition and action(s) downstream. Without flexible interfaces built-in, the system could come to a grinding halt. One need only review the title again with added clarity:
Design Principles Involving Protein Disorder(Flexible Structure) Facilitate Specific Substrate Selection and Degradation by the Ubiquitin-Proteasome System*
As Dionisio would say, "you've not seen nothing yet" OOPaaah! OK, that's a bit Greek, but he would get it. ;-) DATCG
I hope you appreciated the wonderful scenario described at #936
:) It was what caused me to comment. You have become the whack-a-mole champion of UD -- and you do it with such class. To the most errant objections, you simply dismantle and answer. Dismantle and answer. Dismantle and answer. I also want to thank you publicly for including the semiotic angle in your post. In my obviously-biased opinion, that argument has grown a great deal in the past 10 years here, and this thread is a booming exclamation mark in that record. Upright BiPed
Upright BiPed: I don't know where they are. And I have given up checking TSZ regularly, because it really isn't worth the while. I hope that, if some interesting comment and criticism appears there, some friend who has the goodwill to post there will realize it, and give me a notice. In the meantime, I go on with the work here. I hope you appreciated the wonderful scenario described at #936, with three different times and levels of regulation of the same target in the same global process, always by E3 ligases. The intelligent complexity and precision of these scenarios is really beyond imagination! :) gpuccio
Where o' where are your objectors GP? Where are those rabid ID critics who just have so much to say? Are they on other threads? Where are the sock puppets? Are they saving them for greener grass elsewhere? Has TSZ become (er, remained) merely a place to hide from UD? How funny is that! Upright BiPed
To all: The fact that different E3 ligases can interact with the same substrate has been presented by our kind friends from the other side as evidence of their "promiscuity" and poor specificity. Of course, I have pointed to the simple fact, supported even by the authors of the paper they referred to, that different E3 ligases could bind the same substrate, but in different contexts. Therefore, that is a sign of extreme specificity, not of promiscuity. See comment #834 here. This is the relevant statement from the quoted paper:
Significant degrees of redundancy and multiplicity. Any particular substrate may be targeted by multiple E3 ligases at different sites, and a single E3 ligase may target multiple substrates under different conditions or in different cellular compartments. This drives a huge diversity in spatial and temporal control of ubiquitylation (reviewed by ref. [61]). Cellular context is an important consideration, as substrate–ligase pairs identified by biochemical methods may not be expressed or interact in the same sub-cellular compartment.
Well, here is a brand new paper that shows clearly how different E3 ligases target the same substrate at different steps of the cell cycle, and with different functional meaning. The "huge diversity in spatial and temporal control of ubiquitylation" is here clearly demonstrated. The HECT-type ubiquitin ligase Tom1 contributes to the turnover of Spo12, a component of the FEAR network, in G2/M phase. April 23, 2018 https://www.ncbi.nlm.nih.gov/pubmed/29683484
Abstract The ubiquitin-proteasome system plays a crucial role in cell cycle progression. A previous study suggested that Spo12, a component of the Cdc fourteen early anaphase release (FEAR) network, is targeted for degradation by the APC/C(Cdh1) complex in G1 phase. In the present study, we demonstrate that the Hect-type ubiquitin ligase Tom1 contributes to the turnover of Spo12 in G2/M phase. Co-immunoprecipitation analysis confirmed that Tom1 and Spo12 interact. Overexpression of Spo12 is cytotoxic in the absence of Tom1. Notably, Spo12 is degraded in S phase even in the absence of Tom1 and Cdh1, suggesting that an additional E3 ligase(s) also mediates Spo12 degradation. Together, we propose that several distinct degradation pathways control the level of Spo12 during the cell cycle.
So, we have:
a) One target: Spo12
b) Three different functional moments:
- G1 phase: control implemented by the APC/C(Cdh1) E3 ligase
- G2/M phase: control implemented by the Tom1 E3 ligase
- S phase: control probably implemented by additional E3 ligase(s)
One substrate, three different functional contexts, three different E3 ligases: this is specificity at its best! :) gpuccio
OLV: This is the link (I had forgotten to include it in my post): https://onlinelibrary.wiley.com/doi/abs/10.1111/gbb.12481 Unfortunately, it is not public access! gpuccio
gpuccio (929): Thanks for answering my questions. Do you have a link to the mentioned paper? Thanks. OLV
To all: Again about the translation of the ubiquitin signal, and its specificity. This has just been accepted for publication: Linear ubiquitin chain-binding domains https://febs.onlinelibrary.wiley.com/doi/epdf/10.1111/febs.14478
Abstract Ubiquitin modification (ubiquitination) of target proteins can vary with respect to chain lengths, linkage type, and chain forms, such as homologous, mixed and branched ubiquitin chains. Thus, ubiquitination can generate multiple unique surfaces on a target protein substrate. Ubiquitin-binding domains (UBDs) recognize ubiquitinated substrates, by specifically binding to these unique surfaces, modulate the formation of cellular signaling complexes and regulate downstream signaling cascades. Among the eight different homotypic chain types, Met1-linked (also termed linear) chains are the only chains in which linkage occurs on a non-Lys residue of ubiquitin. Linear ubiquitin chains have been implicated in immune responses, cell death and autophagy, and several UBDs specific for linear ubiquitin chains have been identified. In this review, we describe the main principles of ubiquitin recognition by UBDs, focusing on linear ubiquitin chains and their roles in biology.
With a table about associated diseases. gpuccio
Gpuccio @926...
Complexes composed of Polycomb Group (PcG) proteins promote transcriptional silencing while those containing trithorax group (trxG) proteins promote transcriptional activation. However, other epigenetic protein factors, such as RYBP, have the ability to interact with both PcG and trxG and thus putatively participate in the reversibility of chromatin compaction, essential to respond to developmental cues and stress signals.
"... essential to respond to developmental cues and stress signals." What happens if RYBP is not present? Can the cell still "respond to ... cues and stress signals"? Still "reverse" chromatin compaction? I find this interesting, because the problem with supposed blind generation of function is that blind events have no recognition, let alone the ability to create the proactive, responsive system that reverses a previous decision based upon guided feedback signals. Since when does any blind process create an observational deck of monitoring tools, code layered above code, and feedback signals? I guess it's fun imagining fairy tales of past blind events over time, but it seems like a great waste of mind. Reverse engineering, the actual process, takes actual engineering principles and the deciphering of code; it takes active interpretation and fundamental principles of semiotic recognition across multiple instruction sets of code. Reverse engineering an incredibly designed system takes great imagination, expertise and knowledge. Understanding function takes logic, language and prescriptive clues. Blindness needs none of these. It just needs to keep telling stories, based upon gaps. Gpuccio, you mentioned in a previous comment "Darwin-of-the-Gaps." And what did DoG get us? It gave us "Junk" DNA. As "Darwin-of-the-Gaps" continues to shrink with each new discovery, the old fairy tales fade and actual knowledge increases. So too neo-Darwinism dies its slow, gap death. Neo-Darwinism is held up only by archaic "Darwin-of-the-Gaps" beliefs: that life spontaneously generated and, voila, bit by bit, step by step, gradually arose. But that's not the pattern we observe. As knowledge replaces these once large gaps ("Junk" DNA), our insight grows into the remarkable design of life, codes and systems programming. And Design Theory strengthens. DATCG
Gpuccio, more excellent papers to read :) Sorry, not able to post lately, but finally found time to read up some. I appreciate all the questions and discussions you guys are having. Comment #859 - an excellent review of an antiquated and wrong argument by neo-Darwinists. The problem for the neo-Darwinist faithful is that they refuse to recognize the difference between random and organized, functional sequence complexity. They think generated randomness is an answer. No, it's not an answer; it's a complete misunderstanding and blindness to the reality of what is staring them in the face - FSC - Functional Sequence Complexity, which cannot be measured by Shannon Information Theory. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1208958/
Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).
I guess when people believe in random blind events, they become blind to reality. Or, they refuse to accept the obvious, not based upon scientific grounds, but on a worldview. DATCG
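The quoted distinction can be put into numbers. Below is a minimal sketch (my own illustration, not code from Abel and Trevors' paper): Shannon entropy only scores the statistical uncertainty per symbol, so a functional sequence (the human ubiquitin sequence quoted in the OP above) and a random string of the same length score about the same, which is exactly the point that entropy alone cannot measure FSC.

```python
# Illustrative sketch: Shannon entropy is blind to function.
# Both a real functional protein sequence and a random string of
# amino acids score near the log2(20) ~ 4.32-bit maximum.
import math
import random
from collections import Counter

AAS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def shannon_entropy(seq):
    """Shannon entropy in bits per symbol of a sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Human ubiquitin (76 AAs), as quoted in the OP.
ubiquitin = ("MQIFVKTLTGKTITLEVEPSDTIENVKAKIQDKEGIPPD"
             "QQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG")

random.seed(0)  # fixed seed so the sketch is reproducible
random_seq = "".join(random.choice(AAS) for _ in range(len(ubiquitin)))

# Both values are high and similar; entropy alone cannot tell
# the functional sequence from the random one.
print(round(shannon_entropy(ubiquitin), 2))
print(round(shannon_entropy(random_seq), 2))
```

The design choice here is deliberate: the quantity that differs between the two strings is whether the sequence implements a function, which is precisely what no per-symbol statistic can see.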
Origenes, Thanks for the summary information on Gpuccio's answers. Excellent review. Have enjoyed reading those and catching up. DATCG
OLV: "What could the remaining 80% be associated with?" I think they mean that they found associations with genetic variants that could explain almost 20% of the variance, at least for the variable: emotional expression. I don't know how important that is, but it is well known that many aspects of mental behaviours or mental pathologies are associated with some genetic background. I think there is nothing strange in that. Of course, the remaining variance remains unexplained by this kind of analysis.
What does that mean?
They did the analysis in two steps. First they looked for associations in part of the population (about 40%), then they tried to confirm the associations in the remaining population (60%). This is usually done in GWAS, and serves as a form of external validation of the results.
How could the synapse maturation regulation affect the emotional behavior?
Of course a GWAS cannot offer data about that kind of explanation. But I don't think that would be so strange. Most of what we know about the working of the brain is connected to how synapses work.
Could that somehow relate to the concept of interface between consciousness and the CNS? Have you referred to something like that before?
Yes, of course. As you probably know, my model of the interface is at the quantum level. Synaptic activity is a very likely candidate as a major component of the interface. Synaptic events are caused by a complex convergence of many different factors and stimuli. Moreover, synaptic activation is mostly a binary final event. Quantum distributions of probabilities can have a major role in influencing what happens at the level of synaptic connections. gpuccio
gpuccio (927): “...our results showed the existence of up to 20% genetic contribution to coping behaviors.” What could the remaining 80% be associated with? “...none of these associations were confirmed in the replication stage.” What does that mean? How could the synapse maturation regulation affect the emotional behavior? Could that somehow relate to the concept of interface between consciousness and the CNS? Have you referred to something like that before? Thank you. OLV
To all: Some functions are subtler than others: A genome-wide association study of coping behaviors suggests FBXO45 is associated with emotional expression. https://onlinelibrary.wiley.com/doi/abs/10.1111/gbb.12481
Abstract Individuals use coping behaviors to deal with unpleasant daily events. Such behaviors can moderate or mediate the pathway between psychosocial stress and health-related outcomes. However, few studies have examined the associations between coping behaviors and genetic variants. We conducted a genome-wide association study (GWAS) on coping behaviors in 13,088 participants aged 35-69 years as part of the Japan Multi-Institutional Collaborative Cohort Study. Five coping behaviors (emotional expression, emotional support seeking, positive reappraisal, problem solving, and disengagement) were measured and analyzed. A GWAS analysis was performed using a mixed linear model adjusted for study area, age, and sex. Variants with suggestive significance in the discovery phase (N=6,403) were further examined in the replication phase (N=7,685). We then combined variant-level association evidence into gene-level evidence using a gene-based analysis. The results showed a significant genetic contribution to emotional expression and disengagement, with an estimation that 19.5% and 6.6% of the variance in the liability-scale was explained by common variants. In the discovery phase, 12 variants met suggestive significance (P < 1×10^-6) for association with the coping behaviors and perceived stress. However, none of these associations were confirmed in the replication stage. In gene-based analysis, FBXO45, a gene with regulatory roles in synapse maturation, was significantly associated with emotional expression after multiple corrections (P < 3.1×10^-6). In conclusion, our results showed the existence of up to 20% genetic contribution to coping behaviors. Moreover, our gene-based analysis using GWAS data suggests that genetic variations in FBXO45 are associated with emotional expression.
And, of course, FBXO45 is, guess what? A specific part of E3 ligases complexes. From Uniprot:
Component of E3 ubiquitin ligase complexes. Required for normal neuromuscular synaptogenesis, axon pathfinding and neuronal migration (By similarity). Plays a role in the regulation of neurotransmission at mature neurons (By similarity). May control synaptic activity by controlling UNC13A via ubiquitin dependent pathway (By similarity). Specifically recognizes TP73, promoting its ubiquitination and degradation.
gpuccio
To all: This is very interesting: Epigenetic and non-epigenetic functions of the RYBP protein in development and disease: Short title: The RYBP/dRYBP protein. https://www.ncbi.nlm.nih.gov/pubmed/29665352
Abstract: Over the last decades significant advances have been made in our understanding of the molecular mechanisms controlling organismal development. Among these mechanisms the knowledge gained on the roles played by epigenetic regulation of gene expression is extensive. Epigenetic control of transcription requires the function of protein complexes whose specific biochemical activities, such as histone mono-ubiquitylation, affect chromatin compaction and, consequently activation or repression of gene expression. Complexes composed of Polycomb Group (PcG) proteins promote transcriptional silencing while those containing trithorax group (trxG) proteins promote transcriptional activation. However, other epigenetic protein factors, such as RYBP, have the ability to interact with both PcG and trxG and thus putatively participate in the reversibility of chromatin compaction, essential to respond to developmental cues and stress signals. This review discusses the developmental and mechanistic functions of RYBP, a ubiquitin binding protein, in epigenetic control mediated by the PcG/trxG proteins to control transcription. Recent experimental evidence indicates that proteins regulating chromatin compaction also participate in other molecular mechanisms controlling development, such as cell death. This review also discusses the role of RYBP in apoptosis through non-epigenetic mechanisms as well as recent investigations linking the role of RYBP to apoptosis and cancer.
RYBP at Uniprot:
Component of a Polycomb group (PcG) multiprotein PRC1-like complex, a complex class required to maintain the transcriptionally repressive state of many genes, including Hox genes, throughout development. PcG PRC1-like complex acts via chromatin remodeling and modification of histones; it mediates monoubiquitination of histone H2A 'Lys-119', rendering chromatin heritably changed in its expressibility (PubMed:25519132). Component of a PRC1-like complex that mediates monoubiquitination of histone H2A 'Lys-119' on the X chromosome and is required for normal silencing of one copy of the X chromosome in XX females. May stimulate ubiquitination of histone H2A 'Lys-119' by recruiting the complex to target sites (By similarity). Inhibits ubiquitination and subsequent degradation of TP53, and thereby plays a role in regulating transcription of TP53 target genes (PubMed:19098711). May also regulate the ubiquitin-mediated proteasomal degradation of other proteins like FANK1 to regulate apoptosis (PubMed:14765135, PubMed:27060496). May be implicated in the regulation of the transcription as a repressor of the transcriptional activity of E4TF1 (PubMed:11953439). May bind to DNA (By similarity).
Emphasis mine. gpuccio
To all: This recent paper is really thorough, long and detailed. It is an extremely good summary about what is known of the role of ubiquitin in the regulation of the critical pathway of NF-kB Signaling, of which we have said a lot during this discussion: The Many Roles of Ubiquitin in NF-kB Signaling http://www.mdpi.com/2227-9059/6/2/43/htm I quote just a few parts:
Abstract: The nuclear factor kB (NF-kB) signaling pathway ubiquitously controls cell growth and survival in basic conditions as well as rapid resetting of cellular functions following environment changes or pathogenic insults. Moreover, its deregulation is frequently observed during cell transformation, chronic inflammation or autoimmunity. Understanding how it is properly regulated therefore is a prerequisite to managing these adverse situations. Over the last years evidence has accumulated showing that ubiquitination is a key process in NF-kB activation and its resolution. Here, we examine the various functions of ubiquitin in NF-kB signaling and more specifically, how it controls signal transduction at the molecular level and impacts in vivo on NF-kB regulated cellular processes. --- Importantly, the number of E3 Ligases or DUBs mutations found to be associated with human pathologies such as inflammatory diseases, rare diseases, cancers and neurodegenerative disorders is rapidly increasing [22,23,24]. There is now clear evidence that many E3s and DUBs play critical roles in NF-kB signaling, as will be discussed in the next sections, and therefore represent attractive pharmacological targets in the field of cancers and inflammation or rare diseases. --- 3.3. Ubiquitin Binding Domains in NF-kB Signaling Interpretation of the “ubiquitin code” is achieved through the recognition of different kinds of ubiquitin moieties by specific UBD-containing proteins [34]. 
UBDs are quite diverse, belonging to more than twenty families, and their main characteristics can be summarized as follows: (1) They vary widely in size, amino acid sequences and three-dimensional structure; (2) The majority of them recognize the same hydrophobic patch on the β-sheet surface of ubiquitin, that includes Ile44, Leu8 and Val70; (3) Their affinity for ubiquitin is low (in the higher µM to lower mM range) but can be increased following polyubiquitination or through their repeated occurrence within a protein; (4) Using the topology of the ubiquitin chains, they discriminate between modified substrates to allow specific interactions or enzymatic processes. For instance, K11- and K48-linked chains adopt a rather closed conformation, whereas K63- or M1-linked chains are more elongated. In the NF-kB signaling pathway, several key players such as TAB2/3, NEMO and LUBAC are UBD-containing proteins whose ability to recognize ubiquitin chains is at the heart of their functions. --- 9. In Vivo Relevance of Ubiquitin-Dependent NF-kB Processes NF-kB-related ubiquitination/ubiquitin recognition processes described above at the protein level, regulate many important cellular/organismal functions impacting on human health. Indeed, several inherited pathologies recently identified are due to mutations on proteins involved in NF-kB signaling that impair ubiquitin-related processes [305]. Not surprisingly, given the close relationship existing between NF-kB and receptors participating in innate and acquired immunity, these diseases are associated with immunodeficiency and/or deregulated inflammation. 10. Conclusions Over the last fifteen years a wealth of studies has confirmed the critical function of ubiquitin in regulating essential processes such as signal transduction, DNA transcription, endocytosis or cell cycle.
Focusing on the ubiquitin-dependent mechanisms of signal regulation and regulation of NF-kB pathways, as done here, illustrates the amazing versatility of ubiquitination in controlling the fate of protein, building of macromolecular protein complexes and fine-tuning regulation of signal transmission. All these molecular events are dependent on the existence of an intricate ubiquitin code that allows the scanning and proper translation of the various status of a given protein. Actually, this covalent addition of a polypeptide to a protein, a reaction that may seem to be a particularly energy consuming process, allows a crucial degree of flexibility and the occurrence of almost unlimited new layers of regulation. This latter point is particularly evident with ubiquitination/deubiquitination events regulating the fate and activity of primary targets often modulated themselves by ubiquitination/deubiquitination events regulating the fate and activity of ubiquitination effectors and so on. --- To the best of our knowledge the amazingly broad and intricate dependency of NF-kB signaling on ubiquitin has not been observed in any other major signaling pathways. It remains to be seen whether this is a unique property of the NF-kB signaling pathway or only due to a lack of exhaustive characterization of players involved in those other pathways. Finally, supporting the crucial function of ubiquitin-related processes in NF-kB signaling is their strong evolutionary conservation.
The whole paper is amazingly full of fascinating information. I highly recommend it to all, and especially to those who have expressed doubts and simplistic judgments about the intricacy and specificity of the ubiquitin system, in particular the E3 ligases. But what's the point? They will never change their mind. gpuccio
To all: Again about E3 ligases specificity: April 13, 2018 Crucial Role of Linear Ubiquitin Chain Assembly Complex-Mediated Inhibition of Programmed Cell Death in TLR4-Mediated B Cell Responses and B1b Cell Development. http://www.jimmunol.org/content/early/2018/04/13/jimmunol.1701526
Abstract: Linear ubiquitin chain assembly complex (LUBAC)-mediated linear polyubiquitin plays crucial roles in thymus-dependent and -independent type II Ab responses and B1 cell development. In this study, we analyzed the role of LUBAC in TLR-mediated B cell responses. A mouse strain in which LUBAC activity was ablated specifically in B cells (B-HOIPΔlinear mice) showed defective Ab responses to a type I thymus-independent Ag, NP-LPS. B cells from B-HOIPΔlinear mice (HOIPΔlinear B cells) underwent massive cell death in response to stimulation of TLR4, but not TLR9. TLR4 stimulation induced caspase-8 activation in HOIPΔlinear B cells; this phenomenon, as well as TLR4-induced cell death, was suppressed by ablation of TRIF, a signal inducer specific for TLR4. In addition, LPS-induced survival, proliferation, and differentiation into Ab-producing cells of HOIPΔlinear B cells were substantially restored by inhibition of caspases together with RIP3 deletion, but not by RIP3 deletion alone, suggesting that LPS stimulation kills HOIPΔlinear B cells by apoptosis elicited via the TRIF pathway. Further examination of the roles of cell death pathways in B-HOIPΔlinear mice revealed that deletion of RIP3 increased the number of B1 cells, particularly B1b cells, in B-HOIPΔlinear mice, indicating that B1b cell homeostasis is controlled via LUBAC-mediated suppression of necroptosis. Taken together, the data show that LUBAC regulates TLR4-mediated B cell responses and B1b cell development and/or maintenance by inhibiting programmed cell death.
LUBAC is an interesting complex of 3 different proteins, with E3 ligase activity. But it generates its own specific type of ubiquitination: Linear ubiquitination-mediated NF-κB regulation and its related disorders https://academic.oup.com/jb/article/154/4/313/760726
Abstract: Ubiquitination is a post-translational modification involved in the regulation of a broad variety of cellular functions, such as protein degradation and signal transduction, including nuclear factor-κB (NF-κB) signalling. NF-κB is crucial for inflammatory and immune responses, and aberrant NF-κB signalling is implicated in multiple disorders. We found that linear ubiquitin chain assembly complex (LUBAC), composed of HOIL-1L, HOIP and SHARPIN, generates a novel type of Met1 (M1)-linked linear polyubiquitin chain and specifically regulates the canonical NF-κB pathway. Moreover, specific deubiquitinases, such as CYLD, A20 (TNFAIP3) and OTULIN/gumby, inhibit LUBAC-induced NF-κB activation by different molecular mechanisms, and several M1-linked ubiquitin-specific binding domains have been structurally defined. LUBAC and these linear ubiquitination-regulating factors contribute to immune and inflammatory processes and apoptosis. Functional impairments of these factors are correlated with multiple disorders, including autoinflammation, immunodeficiencies, dermatitis, B-cell lymphomas and Parkinson’s disease. This review summarizes the molecular basis and the pathophysiological implications of the linear ubiquitination-mediated NF-κB activation pathway regulation by LUBAC. --- We identified LUBAC, a ~600 kDa ternary complex composed of HOIL-1L (also known as RBCK1), HOIL-1L-interacting protein (HOIP) (also known as RNF31, ZIBRA and PAUL) and SHANK-associated RH domain interacting protein (SHARPIN) (Fig. 2A). LUBAC is the only E3 that assembles linear polyubiquitin chains by peptide bonds between the C-terminal Gly76 of ubiquitin and the α-NH2 group of M1 of another ubiquitin moiety (5, 6). --- LUBAC is currently the only E3 complex known to generate an M1-linked linear polyubiquitin chain, and the linkage specificity is defined by LUBAC, rather than the E2s.
Emphasis mine. The whole paper is very interesting, and describes many highly specific aspects of this unique system. For example, about DUBs:
The LUBAC-mediated NF-κB Pathway is Down-regulated by Specific DUBs: Ubiquitin signalling is generally attenuated by DUBs, through the proteolytic cleavage of ubiquitin–ubiquitin or ubiquitin–substrate bonds. Human cells contain ~55 ubiquitin-specific proteases (USP), 4 ubiquitin C-terminal hydrolases (UCH), 14 ovarian tumour proteases (OTU), 4 Josephins and 10 JAB1/MPN/MOV34 (JAMM)-family DUBs (49). USP, UCH, OTU and Josephin belong to the Cys protease family, whereas the JAMM family members are zinc metalloproteases. Each DUB exhibits specificity for ubiquitin chain linkages and intracellular localization, and thus regulates distinct cellular functions. NF-κB signalling is reportedly regulated by two OTU family DUBs, A20 and Cezanne and a USP family DUB, CYLD (50).
And Fig. 3 is another candidate for our simplicity award! :) gpuccio
To all: New complexity at the endpoint of the ubiquitin system: April 13, 2018 Structure and Function of the 26S Proteasome. https://www.ncbi.nlm.nih.gov/pubmed/29652515
Abstract As the endpoint for the ubiquitin-proteasome system, the 26S proteasome is the principal proteolytic machine responsible for regulated protein degradation in eukaryotic cells. The proteasome's cellular functions range from general protein homeostasis and stress response to the control of vital processes such as cell division and signal transduction. To reliably process all the proteins presented to it in the complex cellular environment, the proteasome must combine high promiscuity with exceptional substrate selectivity. Recent structural and biochemical studies have shed new light on the many steps involved in proteasomal substrate processing, including recognition, deubiquitination, and ATP-driven translocation and unfolding. In addition, these studies revealed a complex conformational landscape that ensures proper substrate selection before the proteasome commits to processive degradation. These advances in our understanding of the proteasome's intricate machinery set the stage for future studies on how the proteasome functions as a major regulator of the eukaryotic proteome.
Emphasis mine. Those who have followed the discussion will understand. :) gpuccio
gpuccio Yes. It is the first post there at this moment. bill cole
bill cole: OK, I will do it. Is it posted at TSZ? gpuccio
gpuccio Here is my post at TSZ to Joe. We may not get an answer for a few days. April 14, 2018 at 4:56 pm
Joe Felsenstein, May I assume that gpuccio has not redefined “functional information” from Hazen and Szostak? Is there some reason to discard their definition? Bill Cole I don’t think he is hung up on the definition. He is trying to establish a way to measure it using living organisms. From listening to your lecture it appears that you have been aware of this issue for 40 years. You and gpuccio have been talking past each other at this point and maybe that's the best we can do for the time being. He is frustrated because you appear to be evasive. Tom posting your lecture is a clue why you are walking so carefully through this discussion. It appears that although you don’t completely agree with what Dembski has concluded about functional information, you do see value. Do you see value in the work gpuccio is doing? Were you the first to attempt a model of energy/information flow through living organisms?
If you have time it would be valuable for you to scan through his lecture. bill cole
Joe Felsenstein at TSZ:
I will be away from the keyboard, mostly, over the weekend — hope that by Monday gpuccio has cleared that up.
It's not difficult to clear it up. If you had read my specific answers to you quoted at #885 (my comments #828, #831, #847 and #882), instead of just reading and answering my #885, which was only a brief comment to ET, you would probably understand what I mean. My definition of functional information is not essentially different from the others, including Orgel, Abel, Durston, Szostak and, of course, Dembski: the concept is always to measure the complexity in bits that is necessary to implement some function. However, I have tried to make my definition empirically explicit. As mentioned many times, you can find my definition, and related clarifications, here: Functional information defined https://uncommondesc.wpengine.com/intelligent-design/functional-information-defined/ The essential definition, quoted from that OP, is the following:
e) The ratio Target space/Search space expresses the probability of getting an object from the search space by one random search attempt, in a system where each object has the same probability of being found by a random search (that is, a system with a uniform probability of finding those objects). f) The Functionally Specified Information (FSI) in bits is simply –log2 of that number. Please, note that I imply no specific meaning of the word “information” here. We could call it any other way. What I mean is exactly what I have defined, and nothing more. One last step. FSI is a continuous numerical value, different for each function and system. But it is possible to categorize the concept in order to have a binary variable (yes/no) for each function in a system.
Of course, there are more details in the OP. My problem with you is about your statements quoted in my comment #828: "That counts up changes anywhere in the genome, as long as they contribute to the fitness, and it counts up whatever successive changes occur." My comment (always at #828): "Again, are you kidding? So, if you have 500 different mutations of 1 AA in different proteins, each of them contributing in completely different and independent ways to fitness, you believe that you have 500 bits of complex functional information?" The question is rather simple, and I would appreciate receiving an answer from you. To make it even more clear, I have given the example of the thief, to which, if I have not missed something, you have not answered (if you have, please give me a reference, because it's becoming very difficult to check everything in the many different pages at TSZ). The thief thought experiment can be found as a first draft at my comment #823, quoted again at #831, and then repeated at #847 (to Allan Keith) in a more articulated form. In essence, we compare two systems. One is made of one single object (a big safe), the other of 150 smaller safes. The sum in the big safe is the same as the sums in the 150 smaller safes put together. That ensures that both systems, if solved, increase the fitness of the thief in the same measure. Let's say that our functional objects, in each system, are: a) a single piece of card with the 150 figures of the key to the big safe b) 150 pieces of card, each containing the one-figure key to one of the small safes (correctly labeled, so that the thief can use them directly). Now, if the thief owns the functional objects, he can easily get the sum, both in the big safe and in the small safes. But our model is that the keys are not known to the thief, so we want to compute the probability of getting to them in the two different scenarios by a random search.
So, in the first scenario, the thief tries the 10^150 possible solutions, until he finds the right one. In the second scenario, he tries the ten possible solutions for the first safe, opens it, then passes to the second, and so on. A more detailed analysis of the time needed in each scenario can be found in my comment #847. So, I would really appreciate it if you could answer this simple question: Do you think that the two scenarios are equivalent? What should the thief do, according to your views? This is meant as an explicit answer to your statement mentioned before:
"That counts up changes anywhere in the genome, as long as they contribute to the fitness, and it counts up whatever successive changes occur."
The system with the 150 safes corresponds to the idea of a function that includes changes "anywhere in the genome, as long as they contribute to the fitness". The system with one big safe corresponds to my idea of one single object (or IC system of objects) where the function (opening the safe) is not present unless 500 specific bits are present. Please, answer at your ease. But answer. gpuccio
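The arithmetic behind the two safe scenarios can be made explicit with the FSI definition quoted earlier (FSI = –log2(target/search)). A minimal sketch (my own illustrative numbers and code, not gpuccio's procedure):

```python
# Sketch of the thief thought experiment, using FSI = -log2(target/search).
import math

def fsi_bits(target_space, search_space):
    """Functionally Specified Information in bits."""
    return -math.log2(target_space / search_space)

# Scenario A: one big safe, one 150-digit key among 10^150 possibilities,
# solvable only as a single indivisible search.
big_safe_bits = fsi_bits(1, 10**150)

# Scenario B: 150 small safes, each with a 1-digit key (10 possibilities),
# searchable and solvable one safe at a time.
per_safe_bits = fsi_bits(1, 10)

# Expected number of random trials: exhausting k options without
# replacement averages (k + 1) / 2 attempts.
trials_big = (10**150 + 1) // 2        # ~5 x 10^149 attempts
trials_small = 150 * (10 + 1) / 2      # 825 attempts in total

print(round(big_safe_bits, 1))   # 498.3
print(round(per_safe_bits, 2))   # 3.32
print(trials_small)              # 825.0
```

The point of the sketch: the total bits are the same in both systems (150 × 3.32 ≈ 498), but only scenario B decomposes into independently selectable steps, so the expected search effort differs by roughly 147 orders of magnitude.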
Corneel at TSZ (about my new OP): "Will that be reposted here at TSZ?" I have posted it here. Anyone can post it, or parts of it, at TSZ. There is no copyright, it is public domain. gpuccio
Entropy at TSZ: April 12, 2018 at 11:31 pm
So, he hadn’t examined “a few organisms,” he had examined “a few groups of organisms.” All against just humans. Hum.
Yes, I have tested the human proteome against specific groups of organisms (all known protein sequences in each group, IOWs the non redundant database of NCBI). Human proteins here are used as a "probe" to measure human conserved information against the times of divergence from the human line. I think I have explained that in detail many times. What's your problem?
Oh, and he has concluded that information has increased.
No. Of course not. I observe and describe the variations and the increase in human conserved information (the y axis in my plots), the only quantity that my procedure can measure. And I detail the conservation times (the x axis in my plots). I never say anything about a generic "increase" in some generic "information". My variables are very clearly defined. It's not my fault if you don't understand them.
Hum. So, he didn’t check a few organisms because he has a special definition for few, and he didn’t conclude that there was increases in information, but he has concluded that there’s “jumps” in information.
As I have explained, it was not a "few": it was all the known protein sequences for each explicitly defined group of organisms. I never concluded that there were "increases in information", a phrase that does not mean anything. I have measured, in all cases, human conserved information in each explicitly defined group for each explicitly defined human protein. And I have observed, of course, big jumps in human conserved information. That's what I have done. gpuccio
OMagain at TSZ: April 12, 2018 at 8:56 pm Oh, so someone is giving a look at my OPs about NS and RV! I commend you for doing that. You ask:
So my question is, can you give me an example of a specific biological sequence with 160 bits of functional information and explain how you know that sequence is functional? I assume that you know what it does, otherwise how do you know it’s functional information.
The alpha and beta chains of ATP synthase are an example I have often quoted. The beta chain has 663 bits of conservation between E. coli and humans, for example. See also my comment #713 for more details. And yes, I know what it does. Together with the alpha chain, it makes the main functional part of the F1 subunit of ATP synthase. Which builds ATP molecules from the energy deriving from a proton gradient.
And can you then give me another sequence with around half, or 80, bits of functional information and explain how you know it is functional information?
I can give you whatever you like, but I see no reason to search my database to find proteins with exactly 160 or 80 bits of functional information, unless you explain the reason for that. For the moment, the beta chain of ATP synthase will do.
I’ll then present to you a similar sequence. You can then presumably tell me if it’s actually functional information and if so how many bits it contains?
Not at all. This is a silly misunderstanding of ID theory. If you just give me a sequence of AAs, bits, or whatever, in most cases I cannot know if it has any interesting or complex functions just from the sequence itself. For example, a sequence of AAs is well beyond my personal capacity of understanding how it will fold and what it will do (and, I would say, beyond the understanding of almost everyone). Likewise, I could never understand if any sequence of bits in machine language is functional or not. You see, the function is observed in the real world, not necessarily derived from the sequence itself. For proteins, scientists observe what the protein can do. Uniprot has a "function" section at the start of each protein page. I can look at it, and so can you. For many proteins, function is not well known. In my reasonings here, I have often used conservation through long evolutionary windows as a mark of function, even if the function itself is not known in detail. So, if you give me a piece of software, I can easily test if it works and what it does, even if I don't know the source code. Of course, I must know the sequence and other indirect information if I want to measure the specific functional information of a functional sequence for an observed, and explicitly defined, function.
If you are not able to do that, why not? It seems to follow logically from your claims.
No, it doesn't, as I have shown.
Those are my questions.
Well, thank you for asking them. gpuccio
Origenes: Thank you again for all your work, putting together the most relevant moments of this long discussion. It is very appreciated! :) At least, it shows that I have really tried to answer many of their "arguments"! gpuccio
DATCG: "I have catching up to do!" Please, take your time! :) gpuccio
Gpuccio @911, Good find on specificity :) I have catching up to do! Might not participate much. For now, will continue to read and when time allows, add comments. DATCG
The Skeptical Zone’s winning arguments — part IV:
Corneel: Alas, not true. Neo-darwinism is the theory of population change through natural selection put on more secure genetic footing than Darwin did. That doesn’t rely on common descent, I fear.
This sounds really strange. I have always thought that the step by step darwinian process does require CD. Could you explain better how it could take place if CD were not true? I don’t understand.[GPuccio]
Corneel: …
Dazz: Just keep regurgitating the same crap and pretend you’ve made a positive case for anything. Unbelievable.
GlenDavidson: The trouble is that the crucial premise [Natural systems where there is no obvious intervention of consciousness can generate complex functional information.] is not sound, it has not been shown to be true by the evidence. Indeed, the evidence is contrary to it ….
If “the evidence is contrary to it”, as you say, just provide a counter-example.[GPuccio]
GlenDavidson: …
GlenDavidson: Indeed, the evidence is contrary to it, since life is peculiarly lacking in aspects that one gets from observed designers.
What does “life is peculiarly lacking in aspects that one gets from observed designers” have to do with that?
GlenDavidson: … It would have to be legitimate first. You have to show that “No system of the a) type can generate complex functional information,” is actually true. If you’re using a false premise, there’s no falsification possible. And it’s at the least an unsound premise, as it has never had the evidence to demonstrate that it is so.
No, you are simply confused here. Falsifiability has nothing to do with the merits of a scientific theory. It just means that it is a scientific theory, because it is falsifiable. Please, check your philosophy of science.[GPuccio] - - Wrapping up this parade of nonsense … Two more “killer arguments” by Entropy:
Entropy: See that? He changed from asking about the function to asking about the protein. This way, instead of something as easy as getting new functions from already existing proteins, he’s asking for new proteins.
Entropy: The guy is a shameless ass-hole.
Origenes
To all: While I have commented a little on E3 ligases in the new OP, I will continue to post interesting news about ubiquitin here. This is new, and brings us rather back in metazoa, to c. elegans: The UBR-1 ubiquitin ligase regulates glutamate metabolism to generate coordinated motor pattern in Caenorhabditis elegans. https://www.ncbi.nlm.nih.gov/pubmed/29649217
Abstract: UBR1 is an E3 ubiquitin ligase best known for its ability to target protein degradation by the N-end rule. The physiological functions of UBR family proteins, however, remain not fully understood. We found that the functional loss of C. elegans UBR-1 leads to a specific motor deficit: when adult animals generate reversal movements, A-class motor neurons exhibit synchronized activation, preventing body bending. This motor deficit is rescued by removing GOT-1, a transaminase that converts aspartate to glutamate. Both UBR-1 and GOT-1 are expressed and critically required in premotor interneurons of the reversal motor circuit to regulate the motor pattern. ubr-1 and got-1 mutants exhibit elevated and decreased glutamate level, respectively. These results raise an intriguing possibility that UBR proteins regulate glutamate metabolism, which is critical for neuronal development and signaling.
This is further evidence against the silly idea that E3 ligases are promiscuous and not specific: their defects are a cause of disease not only in humans, but even in nematodes! gpuccio
Hi guys: I have been very busy writing the new OP, which has now been published. It is about a few important issues already partially discussed in this thread. I am rather tired, so I apologize if I will be rather slow in commenting (at least for a few hours!) :) gpuccio
ET @908 At this point I would not trust any of these guys with understanding that 2 + 2 = 4. Felsenstein also 'understood' that 500-bits functional complexity arises "at once in one mutation" — yes really, see #882. Felsenstein seriously thought that this was GPuccio's argument. GPuccio had to tell Felsenstein that he never said that. You cannot make that stuff up. 500-bits by one mutation ... words fail me at this point. Origenes
Joe Felsenstein:
I understand William Dembski’s “complex specified information” (in Dembski’s 2002-2007 arguments) as well as the altered version in his 2005-2006 paper.
Nonsense. Your natural selection paper on NCSE demonstrates that you do not understand Dembski at all ET
Don't forget that we don't know what information is even though we have gone through painstaking detail explaining exactly what we mean. ET
The Skeptical Zone’s winning arguments — part III:
Entropy: That is clearly pointing to a “gap.” Pointing to something you cannot understand how it can be done naturally. Sorry, but that’s not just god-of-the-gaps, but even classic god-of-the-gaps.
I would like to clarify a very important point: the “god-of-the-gaps” argument against ID and why it is completely false. … [GPuccio] — See #657
Entropy: Scientists have understood for quite a while that information arises from the dynamics between energy flows and the nature of physical/chemical “entities.”
Complex functional information? Really? Examples, please. If scientists “have understood” such a thing “for quite a while”, it will not be difficult for you to give examples. Do it. [GPuccio]
Entropy: For example, a substrate that it never encounters in its environment. However, once the correct substrate is found, it looks rather obvious in the efficiency / specificity of the enzyme towards it, compared to the “wrong” substrate. Where does all of this lead? To the realization that enzyme activities are not as perfect as presented in kinder-garden biochemistry, that they range in potential towards substrates other than their “normal” ones, and that, thus, there’s such a thing as “ladders” of specificity available for enzyme evolution. Not only that, after understanding this issue, it seems rather obvious.
There is nothing obvious in this confused fuss. You must explain how some new complex functional protein, for example a new protein superfamily, can arise by gradual steps, each of them giving an increase of function. Or at least why we should believe that it is possible. You only make generic and confused statements about enzymes. What is your point? [GPuccio]
Entropy: … physical interactions. They are also measured. Why would they if they’re so specific and perfect according to kinder-garden biochemistry? Shouldn’t we just see a complex and be done? Well, no, the formation of the complex depends on the relative concentrations of the proteins in question, which depend on their relative affinities towards each other. Wait! Relative affinities? Yes. They have pseudo-affinities towards other proteins. So, here, again, we see that there’s an obvious “ladder” for protein-protein interactions to evolve, and thus to the evolution of protein complexes.
Even more confusion. Is it possible? Affinities have nothing to do with that. We are speaking of naturally selectable functions. [GPuccio]
Entropy: I hope that gives you enough of a hint.
Not at all. Look, just an advice. Don’t give “hints”. Give answers. [GPuccio]
Entropy: Thus my emphasis. I see non-conscious systems doing that all the time. You seem to forget that this happens in life forms all the time with no consciousness involved. They put those amounts of information together with no conscious activity involved. Most life reproduces with no conscious activity involved. All life forms duplicate their DNA, transcribe it, translate the RNA into proteins, etc., thus putting together quite a bit of information, with no conscious activity involved.
Are you kidding? Do you even understand what you are saying? All life forms duplicate their DNA. Sure. They do that because: a) The information in their DNA is already there b) That information includes the information for DNA replication IOWs, they are only executing information that has been put together in their genomes. Not by them. Your statements are like saying that when I print a Shakespeare sonnet I am putting together the information in it. I, the great poet! Again, are you kidding? [GPuccio]
Entropy: I said that energy flow transforms into information. Complexity is what happens when systems out of equilibrium move towards equilibrium. For as long as equilibrium isn’t reached, we have information. Yes, that includes “functional” information.
No, it doesn’t. I mentioned writing because it is a clear and objective example of complex functional information beyond the 500 bits threshold. You do the same: give an example. But of course you can’t. [GPuccio] Origenes
The Skeptical Zone’s winning arguments — part II:
Entropy: I did touch at least one, I explained that the semiosis you see is but an anthropomorphism.
It’s no anthropomorphism. It’s an objective property of the system.[GPuccio] See #590, #610
Entropy: Your claim is that the “complexity,” “functional information,” or whatever you want to call it, is beyond nature.
False. I never used that word [“beyond nature”]. [GPuccio]
Entropy: I haven’t seen a single life form that needs to consciously control its metabolism, or its ubiquitin-related processes.
This is really silly. Nobody, of course, is suggesting that animals, or humans, consciously control their metabolism, or similar things. [GPuccio]
CharlieM: How did the exact same 63 AA sequence come to appear in both species? Can the probability be estimated? I don’t know.
If you are saying that this is another empirical evidence for CD, I agree. But why say that while quoting me? I believe in common descent. How many times should I say that, to be believed? [GPuccio]
GlenDavidson: Basically, you assume that DNA is symbolic in God’s mind (yes, we know), and never imagine that a code might exist because, besides the ability of coded systems to store information compactly, sequential codes work very well for producing the sequences of proteins, among other things.
God’s mind has nothing to do with it. Or any mind, for that. Semiosis, as defined, is not a priori a mind thing. It is only empirically found in designed systems. [GPuccio] - - - - - GPuccio @903 I am not done yet. The best arguments are yet to come. :) Origenes
keiths- if you think Wagner supports anything you say then it's up to you to show it. You can't even account for the proteins he used ET
Origenes: Thank you for the nice summary! :) gpuccio
The Skeptical Zone’s winning arguments — part I:
Entropy: Examining a few organisms, and comparing them to a few other, apparently less complex, ones, and concluding that information has increased, rather than reorganized, is quite a hasty conclusion.
A procedure and a conclusion that I have never done or stated.[GPuccio]
Entropy: You, however, think that just pointing to complexity will make your absurd imaginary friend into a reality.
In my view, instead, my argument is that there are three different markers that are linked to a design origin and therefore empirically allow a design inference (that is the basic concept in ID, and I have discussed it many times in all its aspects). Those three features are: a) Functional complexity (the one I usually discuss, and which I have quantitatively assessed many times in detail) b) Semiosis (which has been abundantly discussed by UB) c) Irreducible complexity In my OP I have discussed in detail a specific biological system where all those three aspects are present. Therefore, a system for which a design inference is by far the only reasonable explanation. [GPuccio]
Corneel: No, that is patently false. You are having your cake and eating it too. The “information jumps” that gpuccio introduces in his OP critically rely on the different genes he is comparing being homologs, i.e. on common descent being true. If he is unwilling to defend this, he must also drop that argument.
It is absolutely true that my argument here relies on common descent. I have clarified that I believe in common descent, and that I assume it for my biological reasonings. But there is more. I have defended Common Descent in detail and with the best arguments that I can think of. see my comments here, #525, 526, 529, 534, 538 and 546. What can I do more than that? [GPuccio]
OMagain: Please feel free to go into detail regarding these “severe limits” and how you have determined that they exist at all.
I have dedicated two whole OPs and long following discussions to the limits of NS and RV, with a lot of detail. Here they are: What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson And: What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world Please, feel free to read them and to comment. I will answer. [GPuccio]
Entropy: If it was impossible for nature to put that amount of information together, then it would be impossible for designers to put that amount of information together. How so? Well, in order for designers to put that amount of information together, energy flow is necessary. Putting information together consists on “transforming” energy flow into patterns. We don’t produce available energy. We’re completely dependent on nature for that. So, claiming that a designer is necessary to produce “information,” seems a lot like putting the cart before the horse.
I can’t follow your reasoning. Yes, designers use energy to create patterns. And so? [GPuccio]
Entropy: So 500 bits? A joke for natural processes.
Then show one single example of that. [GPuccio] Origenes
EugeneS: Me too! :) gpuccio
GP We are on the same page here ;) I am glad. EugeneS
bill cole: Yes, of course. I will include some more discussion about that in my next OP. gpuccio
Bill- Dr Behe has dispensed with Thornton et al., also ET
Bill Cole- Structure, Function and Assembly of Flagellar Axial Proteins:
The bacterial flagellum is a biological macromolecular nanomachine for locomotion. A membrane embedded molecular motor rotates a long helical filament that works as a propeller driving the bacterium through the liquid environment. The flagellum is composed of about 30 different proteins with copy numbers ranging from a few to a few thousands and is made by self-assembly of those proteins.
Of course that "self-assembly" is unsupported... ET
gpuccio Is this why you think the Hayashi paper supports your hypothesis?
The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination. Recombination among neutral or surviving entities may suppress negative mutations and thus escape from mutation-selection-drift balance. Although the importance of recombination or DNA shuffling has been suggested [30], we did not include such mechanisms for the sake of simplicity. However, the obtained landscape structure is unaffected by the involvement of recombination mutation although it may affect the speed of search in the sequence space.
bill cole
bill cole: I suppose Keiths is his greatest fan. :) gpuccio
gpuccio
Wagner is beyond any sense.
Perfect for keiths :-) bill cole
ET
That is wrong, Bill. Each of those 30 have more than one residue (copy) required. Some are into the thousands of residues. There are thousands of proteins that need to be assembled just-so. And there are chaperones that make sure cross reactions don’t happen that will ruin the assembly process. So forget the 30, or 40 or 50- those are the base proteins but each is required in different quantities.
Thanks. Do you have a citation? bill cole
Eugene S: I agree that not all the information can be in the genome, at least not in the form that we understand at present. So, we have some interesting possibilities: a) Functional information in the genome that we still don't understand. b) Epigenetic information. c) Functional information in some form that goes beyond the biochemical level. a) and b) are of course more conservative. c) is intriguing, but I agree that at present we have not enough data to support it. gpuccio
ET: Wagner is beyond any sense. gpuccio
Eugene S- Your friend is right as genomes do not determine the final form. Dr Denton has written about this and so has Dr Sermonti and others. ET
ET (883) Ok. Thanks. OLV
In addition to my comment above, to represent my interlocutor's ideas better: he is quite sympathetic to ID. However, he claims that there are no grounds to think that all complexity is within the genome. Just to describe phenotype differences between two distinct species sometimes requires more information than there is in the human genome. Consequently, he claims, the complexity must be somewhere else. I pass on this one. I need more data to be able to judge. Eugene S
gpuccio- I find it very telling that not one of your opponents has even tried to show how natural selection or drift could have produced the ubiquitin system. Evolutionists should be very ashamed of themselves. keiths is even referring to the "Arrival of the Fittest" totally clueless that natural selection or drift could not have produced any of the proteins Wagner discusses. ET
GP Thank you very much for your time. That was the essence of my response to my interlocutor as well. I did not want to provide my reasoning together with my question to you because this way it really allows me to synchronize my watches with you better ;) "Imaginary castles". Right you are! And neutral it is, exactly! No magic, no free lunch (including co-evolutionary scenarios). Eugene S
ET: However, Joe Felsenstein has given an answer, as unsatisfactory as it may be. And I have answered back. Let's see what he has to say. I am really surprised, I must say, that a person who seems to understand very well the importance of functional information (see his comment "April 8, 2018 at 8:31 pm" at TSZ, and my comment #796 here) can at the same time misunderstand so blatantly what functional information is (see my comments #828, #831, #847 and #882 here). gpuccio
Eugene S: Yes, I had not noticed that comment from you. Here is the most relevant part:
I actually had a chat with someone about the rarity of function in protein sequence space. They pointed me to what they consider as evidence against rarity. I am not qualified to judge that but it would be interesting to hear your opinion. The family Buprestidae is among the largest of the beetles, with some 15,000 species known in 450 genera. As far as I understood from our opponent, one of the current explanations is neutral evolution. To repeat, this example was put forward as evidence against the rarity of protein functions in sequence space. It appears, there are some very dense clusters of solutions in it which can be traversed by random walk/neutral drift. I don’t know what evidence (if at all) they have supporting the claim that “neutral drift did it”. It would be nice to have an expert look into this.
I will discuss the problem of rarity of functional islands in my next OP. I am working at it. For the moment, I would say that what you report about your opponent's argument is really too vague. I would remind here that whatever we can discuss about protein function and its rarity or frequency requires precise molecular data. Evolutionary biologists always build imaginary castles looking at morphology, or clades, and so on, but they become really restless as soon as one mentions molecular data about functions. And yet, only molecular data allow us to understand the complexity implied in phenotypes, and that is the only way to understand ID theory and, more in general, the protein functional space. Was the argument of your opponent based on molecular data? If yes, what were they? Neutral evolution is exactly that: neutral. It does not change the probabilistic barriers. Not at all. If the probability of reaching a target is 1:2^500 with a random walk, it remains 1:2^500, however much neutral evolution intervenes. That is not true of NS, of course. NS does reduce the probabilistic barriers, in the measure that it can really take place. I hope that helps. gpuccio
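The point that neutral drift leaves the per-state probability untouched can be illustrated with a small Python sketch. This is a toy model of my own construction (an 8-bit sequence space, chosen only so the simulation finishes quickly), not anything from the discussion above: a neutral one-mutation walk spends the same fraction of time on any given target state as a blind uniform search would.

```python
import random

random.seed(1)

# Toy sequence space: all 2^8 = 256 possible 8-bit "sequences".
# Every sequence has equal fitness, so each step (one random bit
# flip) is a neutral mutation.
n_bits = 8
target = 0b10110101
state = random.getrandbits(n_bits)

steps = 200_000
hits = 0
for _ in range(steps):
    state ^= 1 << random.randrange(n_bits)  # one neutral mutation
    if state == target:
        hits += 1

# The walk visits the target with frequency close to 1/256, exactly
# the per-trial probability of a blind uniform search: neutral drift
# does not shrink the probabilistic barrier.
print(hits / steps)  # close to 1/256, i.e. about 0.0039
```

The same logic scales up: on a space of 2^500 states, a neutral walk still visits any specific target with frequency 1/2^500, which is the point being made about probabilistic barriers.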
OLV - "victory" to them means we are poopy heads who don't know anything and they are superior who know it all. ET
Joe Felsenstein at TSZ: April 12, 2018 at 1:09 pm Thank you for answering. I don't know if you misinterpret Dembski. You are certainly misinterpreting me. And I can answer only for my ideas, not for others.
gpuccio, you gave your 500-bit threshold as a figure which is “the foundation itself of ID”.
Of course. The foundation of ID is that complex functional information is the objective property that allows to infer a design origin for an object. And 500 bits is an appropriate threshold in the general case.
Then you said that it had to arise at once in one mutation.
I have never said that. What I have said is that 500 bits of functional information means that an object exhibits 500 bits or more of some specific configuration which are necessary to implement one explicitly defined function. If that is the case, we can infer design for that object. I have criticized, with some strength, your apparent idea that the bits of information could arise "anywhere in the genome", and that they could be added if each of them increased the generic concept of fitness. I have presented my mental experiment of the thief exactly to emphasize the big fallacy in your reasoning. Could you please answer about it? A function which exhibits 500 bits of functional complexity is one explicitly defined function for which at least those 500 bits are necessary: IOWs, the defined function is not there if all the 500 bits are not present. That does not mean that it has to "arise at once in one mutation", as you say. It can arise however you like, but arise it must. There is absolutely no need that it must happen at one time. But there is the need that all the specific bits must be present at some final time, if the function has to appear. The point is that NS cannot help it arise, because the individual bits of information are not functional at all: it's only the global configuration of the 500 bits that confers the function. Therefore, any function that implies 500 bits of functional information cannot arise with the help of NS, and has to rely only on the probabilistic resources of the system (IOWs, the number of states that the system can reach in the allotted time). You will find a very generous computation of the probabilistic resources of our planet at the biological level at the beginning of my OP: What are the limits of Random Variation? 
A simple evaluation of the probabilistic resources of our biological world https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ The first table. 500 bits are definitely beyond the probabilistic resources of our planet, of the universe, and probably of many universes put together. I hope this clarifies your misunderstanding of me. And if you could answer about the thief, I would be happy. gpuccio
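The sheer size of the 500-bit threshold invoked above can be checked with a quick back-of-the-envelope calculation. The 10^150 comparison figure is Dembski's universal probability bound, used here only as an order-of-magnitude reference, not as a number taken from the OP's own table:

```python
import math

# 500 bits of functional information correspond to a target/space
# ratio of 1 in 2^500.
states = 2 ** 500
print(math.log10(states))  # about 150.5, i.e. 2^500 is roughly 3.3 x 10^150

# That exceeds the ~10^150 figure often cited (Dembski's universal
# probability bound) for the total number of elementary events
# available in the observable universe.
print(states > 10 ** 150)  # prints: True
```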
ET, Victory? What does that mean in this context? Thanks. OLV
TSZ has already declared victory and Joe F will never change ET
GP, Sorry for a distraction in this thread. You might have overlooked my response to an old thread here: https://goo.gl/7mhNrg Most importantly, I have a relevant question there and would be interested in your opinion. Thanks. Eugene S
To all: No interesting news from TSZ. Joe Felsenstein has not answered my comments on his strange views about functional information, it seems. If I have missed his response, please someone let me know. DNA_Jock insists with his favourite toys, the TSS and the alternative solutions. As said, I am working at a very detailed answer about them. Nothing else, it seems. gpuccio
Bill, you have some brushing up to do:
You need to get 30 proteins to bind and to perform a single function
That is wrong, Bill. Each of those 30 have more than one residue (copy) required. Some are into the thousands of residues. There are thousands of proteins that need to be assembled just-so. And there are chaperones that make sure cross reactions don't happen that will ruin the assembly process. So forget the 30, or 40 or 50- those are the base proteins but each is required in different quantities. ET
bill cole: Really, Allan Keith is not worth the while. gpuccio
ET: "By the way natural selection didn’t produce any of those polypeptides." Nor did it ever select any of them, after they were produced. The Szostak paper is not about the frequency of naturally selectable functions in random libraries. That Alan Fox and others still think that it is is only evidence of their misunderstanding of the paper itself, and of their confusion about the basic foundations of their own theory. gpuccio
ET at #872: (quoting Alan Fox) I have discussed in great detail the Szostak paper. Many times. I cannot repeat everything each time someone suddenly awakes and decides that it really shows what it is thought to show. I discuss it here, briefly, at #663, #713, #715. In my thread about the limits of NS: What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ I discuss it more extensively at #61, #62, #229, #237 (another Szostak paper), #238, #263, #277, #284, #303, #320, #343. In my thread about the limits of random variation: What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ I discuss it again at #66, #70, #78, #87, #179, #184, #191, #253, #266. And these are only the most recent examples. Definitely, the Szostak paper is one of my favourite papers in favor of ID. Second only to Hayashi's paper on the rugged landscape. gpuccio
Allan
Bill Cole, but you are still basing this on an assumption that the flagellum was the goal.
Now you're really confused. I have not stated it as the goal, only what we are observing. The question is the cause. I know you're struggling with this concept, but science is about determining cause. Regarding the iPhone 7, we know the cause is design. The question you ask is irrelevant to scientific thought. Can you demonstrate that random events created the iPhone 7? Why in the world would you think you can attribute chance to organisms that are orders of magnitude more sophisticated than the iPhone 7? The combinatorial explosion problem eliminates chance as a cause. Only design as a cause makes sense. Your arguments are deeply flawed and you need to come up with a new stick :-) bill cole
Alan Fox:
I see DNA_Jock has already picked up on this. All what evidence? Keefe and Szostak did some pioneering work generating random protein samples and testing for just one property, ATP affinity. That didn’t show functionality is rare in sequence space.
What? Read the paper:
Functional primordial proteins presumably originated from random sequences, but it is not known how frequently functional, or even folded, proteins occur in collections of random sequences. Here we have used in vitro selection of messenger RNA displayed proteins, in which each protein is covalently linked through its carboxy terminus to the 3′ end of its encoding mRNA, to sample a large number of distinct random sequences. Starting from a library of 6 × 10^12 proteins each containing 80 contiguous random amino acids, we selected functional proteins by enriching for those that bind to ATP. This selection yielded four new ATP-binding proteins that appear to be unrelated to each other or to anything found in the current databases of biological proteins. The frequency of occurrence of functional proteins in random sequence libraries appears to be similar to that observed for equivalent RNA libraries.
4 out of 6 × 10^12. That seems pretty rare to me.
We therefore estimate that roughly 1 in 10^11 of all random sequence proteins have ATP-binding activity comparable to the proteins isolated in this study
Seems pretty rare to me. What is Alan talking about? By the way, natural selection didn't produce any of those polypeptides. ET
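The numbers ET quotes from Keefe and Szostak can be sanity-checked directly. Here is a quick sketch (my own, not from the paper) converting the raw hit rate and the paper's corrected 1-in-10^11 estimate into functional information in bits, using the -log2(p) measure discussed later in this thread:

```python
import math

library_size = 6e12        # random 80-mer proteins sampled (figure from the quoted abstract)
hits = 4                   # distinct ATP-binding proteins recovered
raw_frequency = hits / library_size            # ~6.7e-13
paper_estimate = 1e-11     # the paper's corrected estimate for comparable ATP binding

# Express each frequency as functional information: bits = -log2(frequency)
print(f"raw hit frequency: {raw_frequency:.1e} (~{-math.log2(raw_frequency):.1f} bits)")
print(f"paper estimate   : {paper_estimate:.0e} (~{-math.log2(paper_estimate):.1f} bits)")
```

Either way, the frequency corresponds to roughly 36-40 bits of specificity for weak ATP binding alone.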
AK: … using the combinatorial explosion is just using a probability argument to knock over a strawman view of what evolution is. The probability arguments being used would be perfectly valid if evolution was goal oriented.
Reference please. I hold that the opposite is true: in the case of a goal-oriented evolution, probability arguments would no longer be valid. If evolution were able to skew the outcome distribution, biasing it toward a certain outcome, then our probabilities would be wrong.
AK: For example, if the goal of evolution was to produce ATP synthase, or lactase, then using the probability calculations commonly thrown around to criticize evolution would be valid. But evolution is not goal oriented like this.
No one on this side claims it is. It is not something to be boastful about; like ET said: “evolution is not goal oriented which makes the problems worse.”
AK: … what is the probability that Gpuccio with his unique DNA sequence would be sitting at his computer typing a response to my comment?
Whatever the chance is, the chance of Julius Caesar correctly predicting GPuccio’s DNA sequence is way smaller — see #861.
These probability arguments assume that existing proteins and existing metabolic pathways are the only ones that were ever possible.
That’s too extreme. Instead, it is assumed that biological function is rare in sequence space. A common sense assumption:
“however many ways there may be of being alive, it is certain that there are vastly more ways of being dead, or rather not alive. You may throw cells together at random, over and over again for a billion years, and not once will you get a conglomeration that flies or swims or burrows or runs, or does anything, even badly, that could remotely be construed as working to keep itself alive.” — Richard Dawkins, The Blind Watchmaker
Origenes
Allan Keith at #862: Interesting. One of the most (probably intentionally) confused comments I have ever read. Thank you, however, for having given me the occasion to explain why the infamous deck of cards argument is so infamous. You don't seem worried. Good for you. You just mix some remnants of it with two wholly different arguments: the "no goals" argument (so silly that it does not deserve any answer) and the "alternative solutions" argument, which I will not answer here because it is one of the arguments from DNA_Jock that I will address in my next OP. That's moving goalposts while multi-tasking! I will leave you to your wisdom. Even if I have not yet understood what the thief should do, according to you. Or if I am really an exceptional result of nature, or just one of the almost all human beings with a unique genome, whose probability of being born is almost 100% at each delivery (of course, I am leaving out identical twins). Good luck. gpuccio
Allan spearshake:
But using it is more of a problem for ID than it is for evolution.
That is your uneducated opinion. You do realize that ID is not anti-evolution, which makes your sentence nonsensical. You do realize that your position doesn't have any way to test its claims, which makes it a huge problem for science. ET
If the flagellum wasn't the goal then the probabilities shrink. And look, you cannot account for the type three secretory system- it is also IC. And there isn't any evidence that natural selection can take it and fashion a flagellum. Again, your position doesn't have anything to test the claim that proteins arose via blind and mindless processes. If it did, then we wouldn't be talking about probabilities.
I’m not saying that a probability model cannot be used, just that the way that ID has been using it is not valid.
Just saying it doesn't make it so. I bet that you can't actually make the case ET
Bill Cole,
The combinatorial explosion problem is not going away no matter how clever your rhetoric is.
I have no problem with ID continuing to use this approach. But using it is more of a problem for ID than it is for evolution. If you doubt me, let's use an example for which design is the known cause. Given that humans first emerged (or were designed, if you prefer), what is the probability that the iPhone 8 would exist in 2017? Allan Keith
Bill Cole, but you are still basing this on an assumption that the flagellum was the goal. Or that the specific proteins and their arrangements are the only ones possible to produce a structure that facilitates locomotion. Almost all of the proteins in the flagellum structure are found in other bacterial cells, serving other functions. There is also a structure (injectisome, I think) that is almost identical to the flagellum, but serves a completely different function. I'm not saying that a probability model cannot be used, just that the way that ID has been using it is not valid. Allan Keith
Allan
The probability arguments being used would be perfectly valid if evolution was goal oriented.
They are valid simply because we observe functional biological structures. They easily eliminate random change as a cause of functional biological structures. Do you want to take a shot at building a bacterial flagellum with random change driving the process to biological function? Start with protein one. How does protein 2 form so it binds to protein 1? How does protein 3 form so it binds to proteins 1 and 2? Once you claim serendipity as your hypothesis you fall into the probability trap. bill cole
Allan aka William aka Arcatia
These probability arguments assume that existing proteins and existing metabolic pathways are the only ones that were ever possible. This is not the case as is demonstrated by the number of variations on the theme observed in extant organisms.
So you move the goal posts. New field goal required. :-) The "evolution could build anything" claim, like your past argument, is a fallacy. Once you have started to build a multi-protein structure you are committed with the remaining proteins. If you start to build anything else, it will not function beyond protein one of the structure, as it will not bind, and evolution fails. The combinatorial explosion problem is not going away no matter how clever your rhetoric is. bill cole
Probability arguments are used for the mere fact that there isn't any evidence that natural selection and drift could produce the biological structure in question. There isn't even a methodology to test the claim. Right, evolution is not goal oriented, which makes the problems worse. Your position doesn't even deserve a seat at the probability table ET
Bill Cole and Gpuccio, there are many avenues that can be used to support ID but using the combinatorial explosion is just using a probability argument to knock over a strawman view of what evolution is. The probability arguments being used would be perfectly valid if evolution was goal oriented. For example, if the goal of evolution was to produce ATP synthase, or lactase, then using the probability calculations commonly thrown around to criticize evolution would be valid. But evolution is not goal oriented like this. Just as the unique DNA sequence that is Gpuccio was not the goal (nothing personal :) ). Given his name, I am assuming that he is of Italian descent. Starting at the time of the Romans, and Gpuccio's hypothesized great-great^?? grandparents, what is the probability that Gpuccio with his unique DNA sequence would be sitting at his computer typing a response to my comment? I think that we would all agree that it would be astronomically improbable. However, what is the probability that these same two people would have extant descendants? These probability arguments assume that existing proteins and existing metabolic pathways are the only ones that were ever possible. This is not the case as is demonstrated by the number of variations on the theme observed in extant organisms. Allan Keith
The trick of the argument is equivocating between a specification obtained from the results and a specification obtained independently from the results. In GPuccio's example (#859), the chance to get a result and to get a specification from that result = 1 (100%). However, the chance to get a result that matches a specification obtained independently from the results = 10^-150 Origenes
gpuccio:
You are really proposing again what I call “the infamous deck of cards fallacy”. One of the worst and most arrogant wrong arguments that I have ever heard.
I called it a brain-dead argument. ET
Allan Keith at #850: b) Second point:
You have a very unique and specific DNA sequence. What is the probability of this arising? Your father produced millions of sperm cells, your mother thousands of ova. To create you, the exact two would have had to get together. Add to this the probability of your parents getting together at the right time (or them getting together at all). Follow these probabilities back just a few generations and you get to an astronomical number. But that is not the point. Your existence wasn't preordained. It wasn't the goal. But it happened in spite of the astronomical odds against it.
OK. I can't believe it. You are really proposing again what I call "the infamous deck of cards fallacy". One of the worst and most arrogant wrong arguments that I have ever heard. It has been some time, maybe years, since I heard it the last time as a criticism of ID. It was rather frequent about 10 years ago, but then, apparently, even neo-darwinists must have realized how wrong and stupid it is. Not you, apparently. So, the fallacy goes as follows: We shuffle a deck of cards (let's say 52) and we draw all of them in random order (if we have shuffled well). What is the probability of that specific sequence coming out? Easy: this is a combinatorics problem too, permutations without repetition, and the number of possible orderings is n! (n factorial). So, the answer is: 1 in 8.065818e+67, about 225 bits. Well, that's not the 500 bits of Dembski's UPB, but I think it's enough. Almost everybody would agree with the following statement, taken from a web site:
To put this in perspective, the dinosaurs died out 65,000,000 years ago, and the age of the earth is just 4,500,000,000 years. Now suppose everybody in the world was to arrange packs of cards at the rate of one per second, it would take 600,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 years to get all the combinations! That's why you're VERY unlikely ever to shuffle a pack of cards the same way twice.
From: How many different ways can you put a pack of cards in order? http://www.murderousmaths.co.uk/cardperms.htm So, a really unlikely event. And here goes the infamous fallacy. It says: "See? The permutation you obtained is absolutely unlikely, and yet you got exactly that. This is the demonstration that extremely unlikely events happen all the time!" And that is exactly your reasoning about me as an unlikely individual. Now, as narcissistic as I can be, I am not an unlikely individual. It's you who does not understand probability and specification (if you are sincere in what you say). I will assume you are sincere. So, how to explain it to you? There are many ways. I will try the simplest, referring again to my example. The thief and the safes. Beware: this is the explanation. Pay attention! So, our thief goes for the big safe (against all reason and common sense). OK, he is a neo-darwinist, and he has just read Joe Felsenstein's arguments and your comment #850. He makes a first try, and he types a random 150-figure number. The safe does not open. What happened here? We have one event: the random generation of a 150-figure number. What is the probability of that event? It depends on how you define the probability. In all probability problems, you need a clear definition of what probability you are computing. So, if you define the problem as follows: "What is the probability of having exactly this result? ... (and here you must give the exact sequence for which you are computing the probability)" then the probability is 10^-150. But you have to define the result by the exact contingent information of the result you have already got. IOWs, what you are asking is the probability of a result that is what it is. That probability in one try is 1 (100%). Because all results are what they are. All results have a probability of 10^-150. That property is common to all the 10^150 results.
Therefore, the probability of having one generic result whose probability is 10^-150 is 1, because we have 10^150 potential results with that property, and none that does not have it. So, should we be surprised that we got one specific result, that is what it is? Not at all. That is the only possible result. The probability is 1. No miracle, of course. Not even any special luck. Just necessity (a probability of one is necessity). Now, let's say that our thief, at his first try, types exactly the sequence that opens the safe. Now we are defining the event not by some specific contingent sequence (we may have no idea at all of what the sequence is). We define it by something that it can do: IOWs, a function. The sequence that opens the safe. The only sequence that can do that. What is the probability of getting that result? It is 10^-150! Really, this time. So, if our thief gets it at the first try, I will be really suspicious. The best explanation, by far, is that he already knew the solution (IOWs, design). See, the general concept is: what is a specification? The answer is: a specification is any objective rule that generates a binary partition in the search space. Now, both the definitions we have considered above do generate a binary partition in the search space. First definition: a result which is what it is, and is one of the 10^150 results that have an individual probability of 10^-150. Partition: Search space = 10^150. Target space = 10^150. The target space is the same as the search space. Probability of success = 1 (100%). Second definition: a result which can open the big safe. Partition: Search space = 10^150. Target space = 1. The target space is extremely small compared to the search space. Probability of success = 10^-150. Therefore, the deck of cards fallacy is not only a fallacy: it is infamous, completely wrong and very, very silly and arrogant. It really makes me angry. gpuccio
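The two partitions gpuccio describes for the deck-of-cards case can be checked numerically. A minimal sketch (mine, not gpuccio's):

```python
import math

n_orderings = math.factorial(52)           # 52! possible shuffles
bits = math.log2(n_orderings)              # ~225.6 bits, matching the figure above

# Partition 1: "a result that is what it is" -- every shuffle qualifies,
# so the target space equals the search space
p_any_result = n_orderings / n_orderings   # = 1.0: no surprise warranted

# Partition 2: a sequence specified independently, before the shuffle
p_prespecified = 1 / n_orderings           # ~1.2e-68

print(f"52! ~ {float(n_orderings):.4e}, ~{bits:.1f} bits")
print(f"P(some ordering) = {p_any_result}, P(pre-specified ordering) = {p_prespecified:.2e}")
```

Only the second partition, the one defined before seeing the result, yields the astronomically small probability.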
Allan Keith at #850: Two points: a) First point:
your question is obviously about the “combinatorial explosion”. So, I ask you a similar question in response.
You have not answered my question at #847. A "similar question" is not an answer. You speak of "combinatorial explosion". I don't know if it is an "explosion", but it certainly is a very correct application of combinatorics, a well defined branch of mathematics. So, I ask again: What does he try? The big safe or the 150 smaller safes? What would you try? It's a simple question. Please, answer. The second point is in the next post. gpuccio
bill cole- Allan Keith is "Acartia" over on TSZ. It has already admitted its purpose in these discussions is to be a pokey: do whatever it can to provoke people. Instigation is its name. Just sayin'... ET
Hi Allan
Calling it a strawman doesn’t make it so. The strawman is claiming that the combinatorial explosion disproves anything. The statistical assumptions used by those using this argument are wrong. Which makes the argument wrong. Don’t blame me if your assumptions are wrong.
The combinatorial explosion is a problem for identifying random change or trial and error as a cause of what is observed. When you talk about the cause of you being born, we know what the cause is. :-) If you sit at a poker table and someone gets dealt 5 royal straight flushes in a row, you would not assume a fairly shuffled deck. The probability that he got those hands is 100%, because it already happened; the question, however, is the cause. The combinatorial explosion problem is not about backward probabilities; it is about determining whether random change can be the cause of the pattern you are seeing. When you see a functional protein sequence of 500 bits, we can eliminate random change as a cause. This is one of the pieces of evidence that supports the design inference. bill cole
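bill cole's poker analogy can be made quantitative. A small sketch (my own numbers, assuming each hand is five cards freshly dealt from a full, fairly shuffled deck):

```python
from math import comb, log2

hands = comb(52, 5)                  # 2,598,960 distinct 5-card hands
p_royal = 4 / hands                  # one royal flush per suit -> ~1.5e-6
p_five_in_a_row = p_royal ** 5       # five consecutive fair deals -> ~8.6e-30

print(f"P(royal flush)   = {p_royal:.3e}")
print(f"P(five in a row) = {p_five_in_a_row:.3e} (~{-log2(p_five_in_a_row):.0f} bits)")
```

Five royal flushes in a row sits near 10^-29, which is exactly why any card player would reject the "fair deal" hypothesis long before reaching Dembski's 500-bit bound.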
Look, you are starting with existing humans who have the combinatorial explosion you need to account for in the first place. And then add in everything else I first posted in response to your straw man- that is the evidence it is a straw man. Just saying your "argument" is valid doesn't make it so. ET
ET,
Your straw man has been exposed. Now run along and let us adults discuss science.
Calling it a strawman doesn’t make it so. The strawman is claiming that the combinatorial explosion disproves anything. The statistical assumptions used by those using this argument are wrong. Which makes the argument wrong. Don’t blame me if your assumptions are wrong. Allan Keith
Your straw man has been exposed. Now run along and let us adults discuss science. ET
ET,
The chances are high that a human will be born after a successful mating between a male and female human.
Thank you Captain Obvious.
The odds that any particular baby will have a unique genome is as close to 1 to 1 as you can get.
Duh! Your ability to comprehend is duly noted. Allan Keith
Oh my, talk about a brain-dead argument. The chances are high that a human will be born after a successful mating between a male and female human. "What are the odds that you are going to be dealt a specific hand?" It is a certainty that I will be dealt a hand if I am in the game. The odds that any particular baby will have a unique genome is as close to 1 to 1 as you can get. ET
Gpuccio, I apologize for not really following this thread, but your question is obviously about the “combinatorial explosion”. So, I ask you a similar question in response. You have a very unique and specific DNA sequence. What is the probability of this arising? Your father produced millions of sperm cells, your mother thousands of ova. To create you, the exact two would have had to get together. Add to this the probability of your parents getting together at the right time (or them getting together at all). Follow these probabilities back just a few generations and you get to an astronomical number. But that is not the point. Your existence wasn’t preordained. It wasn’t the goal. But it happened in spite of the astronomical odds against it. Allan Keith
Allan Keith:
Maybe I am missing something.
A brain and a spine, at the very least. :razz: ET
Allan Keith- Is that all you have? Really? You are lying, of course and slandering someone. Typical, but still pathetic. Wallow in your willful ignorance. ET
Allan Keith: Waiting for Joe Felsenstein, have you any position about what he says (quoted at #828) and my rebuttal at #828 and #831? I will ask the question again, to you, and in more detail. A thief enters a house, a big house. Inside, he knows that there are: a) One big safe, with a key that is some 150-figure number. b) 150 smaller safes, each of them with a key which is a one-figure number: from 0 to 9. c) He knows that the same sum is in the big safe and in the 150 smaller safes put together. d) What does he try? The big safe or the 150 smaller safes? e) Of course, the gain in fitness is the same in both cases. You answer. Of course, maybe the thief is a proud fool, or maybe he is a neo-darwinist, and he chooses the big safe. But most thieves with a minimum of common sense would certainly go for the 150 smaller safes. A few reflections, waiting for your answer. We already know that Joe Felsenstein apparently thinks that the two options are the same thing. If I have not misinterpreted what he says. A good thief, quick and concentrated and well organized, could probably empty the 150 smaller safes in less than three hours, especially if he has a couple of accomplices to empty them while he opens them. Indeed, I think that about half a minute is needed to try the 10 digits, and most of the time he will find the right key in much less time. If he goes for the big safe, instead... OK, let's see. 10^150 combinations. Each of them 150 figures long. Let's say one minute a try, to be very generous. Reasonably, he could find the right key within half of the total possible number of tries. But we have faith in his moderate luck, so let's say that he can find the right combination after 1/10 of the possible tries. That leaves us with 10^149 tries. At one minute a try, that is... about 10^143 years.
If the total time of our universe from the Big Bang to now is 1.5x10^10 years, then the time necessary to find the combination, with a little luck, is 10^133 times the whole lifetime of our universe. Some difference, compared with three hours! But of course neo-darwinists understand very well that 150 events with a complexity of 3.3 bits each (the complexity of a decimal figure) are the same thing as one event with a complexity of 500 bits. Joe Felsenstein seems to defend this position, and nobody on his side has apparently disagreed. What do you say? I think this is a very important point. I would like to hear statements about that. gpuccio
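As a rough cross-check of gpuccio's arithmetic above, here is a short sketch (mine, under his stated assumptions: half a minute per small safe, one minute per try on the big safe, success granted after 1/10 of the key space):

```python
MINUTES_PER_YEAR = 60 * 24 * 365                  # ~5.3e5

# 150 small safes: ~half a minute to cycle all 10 digits of one safe
small_safe_minutes = 150 * 0.5
print(f"small safes: about {small_safe_minutes / 60:.2f} hours")   # 1.25 hours

# Big safe: 10^150 possible keys; grant luck (success after 1/10 of the space)
lucky_tries = 10**150 // 10                        # 10^149 tries, one minute each
years = lucky_tries / MINUTES_PER_YEAR             # ~1.9e143 years
universe_age_years = 1.5e10
print(f"big safe: ~{years:.1e} years, ~{years / universe_age_years:.1e} universe lifetimes")
```

The numbers come out as stated: roughly an hour and a quarter versus on the order of 10^143 years, about 10^133 lifetimes of the universe.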
ET,
Already done, Allan. And it was posted on TSZ- and guess what? They all choked on it.
Maybe I am missing something. Both of those links were to a site whose owner is a well known jerk who does nothing but insult and swear at anyone who disagrees with him. A well known homophobe who frequently calls his opponents “faggots”, “assmunchers” and other more offensive epithets. Do you have links to any more reputable sites? I would be interested to read those. Allan Keith
Already done, Allan. And it was posted on TSZ- and guess what? They all choked on it. How to test and falsify ID Definitive evidence that ATP synthase was intelligently designed And that is more than evolutionism has. There isn't anything published that supports evolutionism. There isn't a methodology to test the claim that natural selection produced vision systems, for example. ET
OMagain:
It always amuses me to ask “which one?” when IDists proclaim that the bacterial flagellum was designed.
And it always amuses us when you try to defend the claim that any one of them evolved via natural selection and/or drift. Your entire position is amusing because it always comes back to flailing away at ID with your ignorance. Even if you ever find a valid fault with ID, that will never help you find support for your position's claims. ET
ET,
You want to watch TSZ implode? We need to post the way to test ID, along with the positive criteria. Who do I talk to?
God? But seriously, we would love to see ID start to do this. When are you going to start? Feel free to start with ATP synthase. How do you test that ID created it? Feel free to publish this in any of the high end peer reviewed journals. If you can’t do that, feel free to publish your research in BioComplexity. Or, if you can’t get it published there, just post it here. Allan Keith
You want to watch TSZ implode? We need to post the way to test ID, along with the positive criteria. Who do I talk to? ET
DNA_Jock at TSZ: April 10, 2018 at 8:41 pm Please, read my answer at #834. gpuccio
OMagain:
It’s kind of the point actually. Multiple ways of doing similar things is the ladder.
Not at all. Gradual stepwise naturally selectable configurations of a sequence, linked by simple variation and which show a constant increase of the selectable function: that's a ladder. I don't know where you have left your logic. gpuccio
Corneel at TSZ: Thank you for signaling the typo. I have just deleted the "y"! :) I am a bad monkey. No Shakespeare from me, certainly! :) gpuccio
DNA_Jock at TSZ: "It's not like we haven't been over this before." Sigh. See my comment #837. gpuccio
dazz:
Here we go again with the caricature of a straw man, sprinkled with tons of texas sharp shooting
I am writing a full OP about the TSS fallacy and other arguments from DNA_Jock. Please, have a little patience. gpuccio
Corneel at TSZ:
Just a warning.
Consider me warned, but not impressed. Yes, the variance is different in different groups, but there can be many different explanations for that. One of them could be that the groups have very different sizes (not my fault; it depends on the amount of sequenced data in each group). You should also consider that the analysis regards the whole human proteome, about 20000 proteins. It's a rather good population size. I really see no problems here. gpuccio
Corneel at TSZ: You can also look at my comments #828 and #831 for an interesting follow-up to my discussion with Joe Felsestein. gpuccio
Corneel at TSZ:
So those naughty E3 ligases are promiscuous as well, and they may acquire novel targets simply by having their expression changed to another cellular compartment or different timing. Regulation which may be changed by single nucleotide changes in the regulatory domain of those genes. Of course, ensuing evolution towards greater affinity can occur in small evolutionary steps.
This strange comment should be justified by the following passage in the paper you quote.
3. Significant degrees of redundancy and multiplicity. Any particular substrate may be targeted by multiple E3 ligases at different sites, and a single E3 ligase may target multiple substrates under different conditions or in different cellular compartments. This drives a huge diversity in spatial and temporal control of ubiquitylation (reviewed by ref. [61]). Cellular context is an important consideration, as substrate–ligase pairs identified by biochemical methods may not be expressed or interact in the same sub-cellular compartment.
Well, I have seen many non sequiturs, but this is one of the best. So, we have 600+ E3 ligases which are completely different one from the other at sequence level (except for the shared, small domains, as explained). Now we learn that: "Any particular substrate may be targeted by multiple E3 ligases at different sites," and: "a single E3 ligase may target multiple substrates under different conditions or in different cellular compartments" My original statement was: "Not so in the case of the specificity of E3 ligases. That specificity is about recognizing completely different target proteins, and their appropriate state." Emphasis added. So, a single E3 ligase can target more than one protein. But that was already clear, because we have 600+ E3 ligases and thousands of targets. See the OP:
2 E1 proteins – 40 E2 proteins – 600+ E3 proteins + thousands of specific substrates IOWs, each of hundreds of different complex proteins recognizes its specific substrates, and marks them with a shared symbolic code based on ubiquitin and its many possible chains. And the result of that process is that proteins are destined to degradation by the proteasome or other mechanisms, and that protein interactions and protein signaling are regulated and made possible, and that practically all cellular functions are allowed to flow correctly and smoothly.
Emphasis added. That perfectly corresponds to: "a single E3 ligase may target multiple substrates under different conditions or in different cellular compartments". But we also learn that: "Any particular substrate may be targeted by multiple E3 ligases at different sites," (emphasis added) So, it's not so much redundancy as multiplicity. Different E3 ligases may target the same protein, but at different sites. Specificity, again. You say that they are "promiscuous". But the authors have different conclusions, and I agree fully with them: "This drives a huge diversity in spatial and temporal control of ubiquitylation (reviewed by ref. [61]). Cellular context is an important consideration, as substrate–ligase pairs identified by biochemical methods may not be expressed or interact in the same sub-cellular compartment." (Emphasis mine) IOWs, there is a huge diversity of strict control: IOWs, huge functional complexity. Promiscuous? Do you know what that is called? It's called "cross-talk". A term that we have found consistently in the scientific literature about the ubiquitin system. It's semiosis and complexity, at the highest level. How you can infer from these interesting observations that: "Of course, ensuing evolution towards greater affinity can occur in small evolutionary steps." is really a mystery to me.
You did not offer protein domains as an example, gpuccio. You offered proteins. The function of a protein relies for a great deal on the specific combination of domains it has.
Yes. And so? What do you mean? I stick to all that I have said about this issue.
Did you read Joe Felsenstein's criticism at the beginning of the thread?
Yes, of course. And I have answered it in great detail. Did you read my answer to him at #716? gpuccio
Corneel at TSZ: So why would the protein coding genes not be representative of all evolutionarily transmitted information? I trust that you have a scientific and empirical argument for doubting this. Yes. By far the most important argument is that I don't believe that the structure of the complex nervous system in humans, in particular the brain, and its exclusive new potentialities, can derive from a very small change in protein coding genes. For that, even the changes in regulatory non coding sequences seem to be insufficient. I do believe that it is transmitted information, but I don't know where it is and how it is transmitted. That's not so strange. A lot of obviously transmitted information in living organisms is completely elusive, at present: for example, complex behaviours in some types of organisms. gpuccio
Intelligent Design is about the DESIGN. ID is not about the designer(s) because we don't even ask about that until after we have determined intelligent design exists. Yes, the existence of intelligent design says there was an intelligent designer. But we only have the DESIGN to study. ET
Joe Felsenstein at TSZ: Just to make it simple for all. Please, read my comment #823. It's short and simple. OK, I paste it again here.
Of course 500 bits refers to an exponential measure for one object (or set of objects, if we can show that the set is IC) implementing one specific function. It’s the implementation of one function which has a complexity of 500 bits: IOWs, it cannot be implemented with a simpler configuration. Independent individual bits cannot be summed. Bits are an exponential measure. 500 bits means: a specific sequence of 500 binary values, or of 150 specific decimal values, or of 115 specific AAs (base 20) that is necessary to implement one specific function. It’s like having a number like this: 6394672650104367823952223904758… 150 figures long, which is the unique key to an electronic safe, and trying to divine it by RV and NS: a) RV: we just try any sequence of 150 figures b) NS: after seeing it does not work, we change a figure at a time, and we test if there is any increase in the (non) function. This is what the neo-darwinian model amounts to, as far as complex functional proteins are involved. Has Joe Felsenstein any comment?
OK, now a very simple question. Is finding the 150 figures key the same as finding 150 different one figure keys to 150 different safes? In both scenarios, the "fitness" of the thief increases, I suppose. Answer, please. gpuccio
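The question just posed, one 150-figure key versus 150 separate one-figure keys, can be made concrete with a quick sketch. The numbers are the ones from the safe analogy in the comment; "trials" here means random attempts:

```python
# One 150-digit combination with no partial feedback, versus 150 separate
# 1-digit locks, each giving feedback ("increased fitness") as soon as
# its own digit is right.
digits, alphabet = 150, 10

# One key, no feedback: a random search must hit 1 state out of 10^150,
# so the expected number of trials is on the order of 10^150.
one_key_search_space = alphabet ** digits

# 150 independent locks: each opens in at most 10 tries, so the whole set
# needs at most 150 * 10 trials, though the "fitness" gain is the same.
independent_trials_max = digits * alphabet

print(len(str(one_key_search_space)))  # 151 digits: a 1 followed by 150 zeros
print(independent_trials_max)          # 1500
```

The two scenarios increase "fitness" by the same amount, but differ by roughly 147 orders of magnitude in search effort, which is the point of the analogy.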
ET: "I tried to warn you." That's true! :) gpuccio
Are you kidding? Have you lost your mind?
I tried to warn you. ET
Joe Felsenstein at TSZ: April 10, 2018 at 12:32 am Are you kidding? Have you lost your mind?
The 500 bits criterion, which originated with Dembski, was gpuccio’s criterion for “complex”, as I demonstrated in clear quotes from gpuccio in my previous comment.
It's of course my criterion for complex functional information: the information linked to the implementation of one explicitly defined function. IOWs, if a function can be implemented by an object, or by an IC system, and it requires at least 500 specific bits to be implemented by that object, it is complex. As I have always said. See also one of my first OPs here: Functional information defined https://uncommondesc.wpengine.com/intelligent-design/functional-information-defined/
That counts up changes anywhere in the genome, as long as they contribute to the fitness, and it counts up whatever successive changes occur.
Again, are you kidding? So, if you have 500 different mutations of 1 AA in different proteins, each of them contributing in completely different and independent ways to fitness, you believe that you have 500 bits of complex functional information? Are you really saying that? I cannot believe it! Each of those mutations is independent, and has an independent and different functional effect. Each of them contributes to fitness, and is therefore selectable. None of them contributes to the same function as the others, even if all of them contribute, independently, to fitness (which is of course a meta-function, to which many different functions contribute). Do you understand why we measure functional information (yes, the same functional information that you recognized as a true and important concept) in bits? It's because it is -log2 of the probability of the event. Do you understand? Bits are exponential. It is rather easy to have one random AA change which is specific to one function (there is no function, or increase of the function, without it). It is more difficult to have two random AA changes that are specific to one function (there is no function, or increase of the function, without both of them). It is empirically impossible to have 150 random AA changes that are specific to one function (there is no function, or increase of the function, without all of them). The sum of 150 simple mutations, each of which independently gives an increase of "fitness", builds no complex function. The idea of neo-darwinism is that a complex function (like ATP synthase) should come into existence through hundreds of specific mutations in the same structure which in the end build the function as we observe it today. And each of those mutations should increase fitness, and therefore be naturally selectable.
Now, either each mutation is naturally selectable because the final function already exists: IOWs, ATP synthase appears as a simple mutation (5 AAs at most) in something unrelated that already existed, and the hundreds of specific AAs that follow just "optimize" that simple initial function. Or: the final function (ATP synthase) appears only when hundreds of specific AAs are in place, but for some strange reason there is a ladder of simple mutations, each of them increasing fitness for different and inscrutable reasons, which for some even stranger reason just builds the exact complex sequence that will, one day, provide a completely different function, ATP synthase. You choose which of the two is less empirically impossible.
Now, in both gpuccio’s and your comments, the requirement is added that all this occur in one protein, in one change, and that it be “new and original function”.
It is not added. It was there from the beginning. Just read any single discussion I have had here in the last ten years. Or just my linked OP about functional complexity. Or anything from me about the issue.
That was not a part of the 500-bit criterion that gpuccio firmly declares to be the foundation of ID.
It was, of course. It's not my fault if you don't understand ID theory. Not at all.
There was supposed to be some reason why a 500 bit increase in functional information was not attainable by natural selection. Without any requirement that it involve “new and original function”.
I have explained what new and original mean, and why they are part of the definition of functional information from the beginning. See also comment #716, to you. gpuccio
Entropy at TSZ: April 10, 2018 at 12:31 am You seem to have lost any reasonable attitude, not for the first time I must say.
A few curious things, 1. The ubiquitin system, and the sequence of ubiquitins themselves, are described as very conserved. 2. The existence of distant homologs for all of the proteins involved in the ubiquitin systems is acknowledged, thus contradicting 1. 3. The distant homologs are claimed to “add to the complexity of the system” 4. But gpuccio insists that gene copies do not add functional information/complexity. Thus contradicting 3. 5. This guy rejects enzyme promiscuity as a “conceptual reason” for the existence of “ladders” towards “complex protein function” on the basis of the similarity of the substrates recognized by those enzymes. 6. Yet he’s impressed by the way the ubiquitin system works, even though it involves proteins belonging to larger protein families, similar substrates, similar reactions, and similar actions.
Curious things? 1. Ubiquitin, the molecule, is extremely conserved. The other components are more or less conserved, in different ways. For example, the set of 600+ E3 ligases shows very different conservation histories, and that is an explicit point in my OP, and especially in Fig. 5. 2. I have acknowledged the existence of distant homologues for ubiquitin itself, ubiquitin like proteins, and probably some other components. E3 ligases, for example, are not present in prokaryotes at all (there is only an example of a RING domain, but not of an E3 ligase, as already discussed). See the OP section "Evolution of the ubiquitin system?" and my comment 758 to Corneel. How these facts "contradict" the high conservation of ubiquitin in eukaryotes is really a mystery. 3. The phrase you quote is, apparently, from the OP: "A number of ubiquitin like proteins add to the complexity of the system." Ubiquitin like proteins are not "distant homologs". They are more variant molecules, often rather unrelated to the ubiquitin molecule, which have a distinct role in the system, separated from the role of ubiquitin. That's why they "add to the complexity". Of course they do. And, like ubiquitin, some Ubls have distant homologues in prokaryotes, as you can check in the abstract of the paper quoted in the section "Evolution of the ubiquitin system?" 4. They are not gene copies. They are different and specialized molecules, each with its specialized system. No contradiction. 5. Yes. And also on the basis of the extreme similarity of the proteins themselves, which are usually part of the same protein family, or even sub-family. 6. I am amazed at many things. The greatest of all, in the ubiquitin system, is the huge group of the E3 ligases, as said many times.
E3 ligases share some basic domains for the function of ubiquitin transfer, domains which come in a few different basic types; but they are completely different from one another in the main part of the molecule, which has the role of recognizing the correct target. But why should I explain it again? I can just quote my OP, where the issue is clearly explained:
Now, a very important point. Those 600+ E3 proteins that we find in humans are really different proteins. Of course, they have something in common: a specific domain. From that point of view, they can be roughly classified in three groups according to the specific E3 domain: RING group: the RING finger domain (Really Interesting New Gene) is a short domain of zinc finger type, usually 40 to 60 amino acids. This is the biggest group of E3s (about 600). HECT domain (homologous to the E6AP carboxyl terminus): this is a bigger domain (about 350 AAs), located at the C terminus of the protein. It has a specific ligase activity, different from the RING type. In humans we have approximately 30 proteins of this type. RBR domain (ring between ring fingers): this is a common domain (about 150 AAs) where two RING fingers are separated by a region called IBR, a cysteine-rich zinc finger. Only a subset of these proteins are E3 ligases; in humans we have about 12 of them. See also here. OK, so these proteins have one of these three domains in common, usually the RING domain. The function of the domain is specifically to interact with the E2-ubiquitin complex to implement the ligase activity. But the domain is only a part of the molecule, indeed a small part of it. E3 ligases are usually big proteins (hundreds, and up to thousands, of AAs). Each of these proteins has a very specific non domain sequence, which is probably responsible for the most important part of the function: the recognition of the specific proteins that each E3 ligase processes. This is a huge complexity, in terms of functional information at sequence level.
Other curious things? gpuccio
corny the quote-mining fool just cannot help itself:
He really seems incapable of seeing the delicious irony of him proudly proclaiming that they establish their “scientific methodology” AFTER they have decided that organisms are designed.
That is your opinion and it is incorrect. ID's scientific methodology is geared toward distinguishing intelligent design from nature. That much has been spelled out in every piece of pro-ID literature. And when compared to your position, which only has, "anything but ID" any objective person can see which one is science and which one is a joke. ET
My favorite part for identifying ATP synthase as intelligently designed is the external connection between the two functional subunits that has nothing to do with the functionality of either subunit but without which ATP synthase would not exist: The architecture and subunit composition of ATP synthase. It holds the two subunits just the right distance apart so that together they form ATP synthase. Without it the cap would just float freely around the dead cell. ET
ET (and Joe Felsenstein at TSZ): The alpha and beta chains of ATP synthase remain a good example. gpuccio
ET (and Joe Felsenstein at TSZ): Of course 500 bits refers to an exponential measure for one object (or set of objects, if we can show that the set is IC) implementing one specific function. It's the implementation of one function which has a complexity of 500 bits: IOWs, it cannot be implemented with a simpler configuration. Independent individual bits cannot be summed. Bits are an exponential measure. 500 bits means: a specific sequence of 500 binary values, or of 150 specific decimal values, or of 115 specific AAs (base 20), that is necessary to implement one specific function. It's like having a number like this: 6394672650104367823952223904758... 150 figures long, which is the unique key to an electronic safe, and trying to divine it by RV and NS: a) RV: we just try any sequence of 150 figures b) NS: after seeing it does not work, we change a figure at a time, and we test if there is any increase in the (non) function. This is what the neo-darwinian model amounts to, as far as complex functional proteins are involved. Does Joe Felsenstein have any comment? gpuccio
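The numeric equivalences in the comment above (500 binary values, 150 decimal figures, 115 AAs) follow from bits being -log2 of a probability; a quick arithmetic check, using only the comment's own figures:

```python
import math

# Functional information in bits for one specific sequence of a given
# length over a given alphabet: -log2(alphabet**-length) = length * log2(alphabet)
def functional_bits(alphabet_size: int, length: int) -> float:
    return length * math.log2(alphabet_size)

print(functional_bits(2, 500))   # 500.0  (500 binary values)
print(functional_bits(10, 150))  # ~498.3 (150 decimal figures)
print(functional_bits(20, 115))  # ~497.0 (115 specific AAs, base 20)
```

All three land at roughly 500 bits, which is why the three descriptions are used interchangeably in the comment.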
Joe F's misconception of CSI summed up:
2. That counts up changes anywhere in the genome, as long as they contribute to the fitness, and it counts up whatever successive changes occur.
No Joe. Nowhere does Dembski ever make that claim. Reading Dembski and Meyer, they both make it clear they are talking about producing the 500 bits in one sequence. ET
dazz at TSZ: This is an interesting passage from the paper quoted at #803:
Systematic studies on the evolutionary origin of orphan genes in primates (Toll-Riera et al. 2009) and the plant Arabidopsis thaliana (Donoghue et al. 2011) indicate that gene duplication and exaptation from transposable elements (TEs) are the major forces driving the emergence of orphan genes. Another study investigating the emergence of new Drosophila genes (not restricted to orphan genes) corroborated the dominant role of gene duplication but also suggested that surprisingly many genes (~12%) seem to have originated de novo, that is, from previously noncoding sequences or RNA coding sequences (CDS) (Zhou et al. 2008).
gpuccio
dazz at TSZ:
Well, that’s appalling. The way I see it you’re disagreeing with yourself since I’m drawing conclusions from your claims
I am disagreeing with how you draw conclusions from my claims.
Unfortunately, no new, original functionally specified complexity of a gazillion bits of information was generated in that microevolutionary event as far as I can tell.
What microevolutionary event? The generation of a new functional ORF in a non coding sequence is not a microevolutionary event. The final step that releases it as a transcribed sequence is just the final step, but it would be useless if the sequence did not code for a functional protein. The whole process is not microevolutionary.
You’re obsessed with neo-darwinism, but evolutionary changes don’t need to be selectable at every single step. Neutral theory and all that, no “design” in sight
If you appeal to neutral theory, the probabilistic barriers remain the same. You have to accept the limitations of RV. If you appeal to NS, you have to accept the limitations of NS. If you put both together, you have the limitations of the combined algorithm: only simple evolutionary events are allowed.
We seem to be in disagreement as to what gradual means. It may develop in a month, but there is a continuum of small gradual changes, with lots of IC generation and apparently no issues with lack of protein functionality due to islands of function. Maybe most of those transitions you consider problematic are essentially developmental and you’re grossly overstating the importance of “new” protein function?
I don't think so.
You’re the one who’s confused. If, as I argue follows from your premises and claims, saltation is inevitable, the existence of intermediates would readily falsify your claims. It’s not rocket science
You are right. I had misunderstood your statement. I apologize. You will find the correct answer at #815. gpuccio
gpuccio- It has nothing to do with any debate with Dembski. CSI is Dembski's construct, and it is important to know how your opponents view that construct in order to understand what they are talking about when discussing it. Try this:
If we have a population of DNA sequences, we can imagine a case with four alleles of equal frequency. At a particular position in the DNA, one allele has A, one has C, one has G, and one has T. There is complete uncertainty about the sequence at this position. Now suppose that C has 10% higher fitness than A, G, or T (which have equal fitnesses). The usual equations of population genetics will predict the rise of the frequency of the C allele. After 84 generations, 99.9001% of the copies of the gene will have the C allele. This is an increase of information: the fourfold uncertainty about the allele has been replaced by near-certainty. It is also specified information — the population has more and more individuals of high fitness, so that the distribution of alleles in the population moves further and further into the upper tail of the original distribution of fitnesses. It's an increase in information because the uncertainty about an allele has been reduced. He isn't talking about the same thing as Dembski. You guys are not discussing the same thing when you are talking about information.
ET
Glen D chimes in with:
ID doesn’t actually have anything to do with design.
And evolutionism actually doesn't have anything to do with evolution. derp ET
LoL! Corneel quote-mines a post- I'll help you out, corny:
Generally speaking, finding some operational way to identify who the designer is, what tools it uses, when it designs, how to distinguish design from non-design, etc.- Flint @ TSZ
That said we do have a scientific methodology for distinguishing between intelligent design and nature. And you and yours don’t have anything but “Not ID!” Your lack of integrity gets exposed when you do crap like that, corny- corny failed to mention that part of my response ET
ET: Joe Felsestein, if interested, can answer my arguments. If his answer implies his concepts about CSI, he can mention them and I will try to understand if those concepts are relevant. I am not interested in his specific debate with Dembski, for the reasons I have explained. gpuccio
dazz at TSZ: I see that I had not understood well your statement quoted at #813, so my answer is obviously wrong. I apologize. Too much rush! :) The correct answer is: Fossils are interesting, but they tell us about morphology, not the molecular basis for it. So, fossils can really be useful only in the measure that we understand the molecular information related to the morphological patterns we observe. gpuccio
gpuccio- Until you read what Joe Felsenstein says about CSI you won't understand his posts on the subject. His thinking about it is too convoluted to try to respond to unless you have that as a reference. The link is in post 782. It is an article that is posted on the NCSE's website (the alleged National Center for Science and Education) ET
dazz at TSZ:
Oh, and of course, gpuccio, the saltationist result also implies that transitional fossils in vertebrate evolution would falsify your theory, don’t you think?
Of course not. That's the difference with neo-darwinism. Design does not require the expansion and fixation of each simple step, which is instead required by neo-darwinism. Are you a little confused, or what? gpuccio
dazz at TSZ:
Unfortunately that looks nothing like the kind of macro-design events that your theory requires All I see there is micro evolution
Not at all. The final transformation to an ORF is only that: the final step. But the protein sequence is prepared before that, by transposon activity. And, when it is released as a protein, it seems to be already functional. Without any intervention of NS.
I suspect we will only see gradual development if we look at what goes on inside that cocoon, should we check?
It usually happens in about a month. Not so gradual. Would you agree then that if the first vertebrate precursor of fish had emerged from a lancelet in some protected niche in the sea, by design, in a time window of about one month, you would have no problems with that?
The many ones with homologies couldn’t have possibly evolved while still functional, right?
Not by RV + NS, of course. By design, definitely yes. Not gradually. Each protein would be designed in a non functional state, and then released. With all that is necessary for its specific function. gpuccio
dazz at TSZ: April 9, 2018 at 2:09 pm I still don't agree with your views about this point (phenotypic saltation), but I will leave it at that. I don't like to repeat the same arguments. You have expressed your arguments, and I have expressed mine. But you are of course completely wrong in your final conclusions:
So what you actually have is special creation + common descent. How ironic, after all this time putting up that old creationist tripe “evolution fails cuz cats don’t give birth to dogs” now we learn that’s pretty much how ID works.
Special creation??? Who ever spoke of creation? Of course, it is special design. With or without saltations, it's design all the same. Look, I am not a creationist at all. Not because I don't believe in creation (of course I believe in it, as my personal religious conviction). But I have never made any "creation science". Nothing in my scientific arguments is based on any religious idea. All that I say can be shared by anyone, whatever his ideas about religion. That said, do as you like, and call me as you like. We live in a free will world. gpuccio
Joe Felsenstein at TSZ: April 9, 2018 at 12:48 pm I don't know what you mean by this post. Of course I accept 500 bits as a good threshold in all general cases: it certainly puts the observed result beyond any possible reach by RV alone. That is why I use it. Of course, some lower threshold would be more than enough for a biological system, which has much lower probabilistic resources than the whole universe. As I have stated very clearly in my previous answers to you (and to many others), my arguments against the powers of NS are not probabilistic. They are empirical, even if they include the probabilistic limitations of RV. Very briefly: a) Empirical data clearly show that the random emergence of a new naturally selectable function has severe limitations of complexity. Even if it is not two AAs, it is not much more. See also the many times linked paper: Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution http://www.genetics.org/content/180/3/1501 b) The optimization of the randomly generated function has severe limits too. In all known cases, it's a few AAs at most. c) Complex functions, like the alpha and beta chains of ATP synthase, certainly cannot emerge as simple mutations of 2 or 4 AAs by RV, and then be optimized by hundreds of single AA steps, each of them naturally selected. That has never been observed, of course, and is well beyond reason, given all that we know about available biological data. d) No naturally selectable pathway to complex functions has ever been observed, found in the lab, or even imagined in some detail. In that sense, from a scientific point of view, the existence of such pathways is a myth and nothing else. And yet, according to neo-darwinism, those pathways should be the absolute rule. This is only a brief summary. You can find the details of my arguments in my OP (and following discussion): What are the limits of Natural Selection? 
An interesting open discussion with Gordon Davisson https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ Maybe you could have a look at them, or comment on them before stating that "gpuccio has presented no argument as to why that 500-bit limit cannot be exceeded simply by natural selection". gpuccio
Corneel at TSZ:
What I am saying is that, until you show me how a protein behaves that does NOT have such jumps, I do not trust your plots to show me which proteins DO have them.
That's easy. Many of them have no jumps. For example, the beta chain of ATP synthase (an old friend) has no great jumps throughout its metazoan history. It starts at more than 1.5 baa and gets gradually to about 1.8 in Afrotheria. Very smoothly. Ehm, a very small jump can be seen, at the vertebrate transition. But very small. It's not my fault, after all. :) The reason why this sequence has no great jumps is rather simple: it already shows 1.25 baa in E. coli. If it ever had any big jump, that was a very, very long time ago! gpuccio
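For readers unfamiliar with the "baa" unit used here: the figures are consistent with baa being the BLAST bitscore of the best hit divided by the length of the human protein. That definition, and the example numbers below, are assumptions for illustration, not taken from the comment:

```python
# Hedged sketch (assumed definition): baa = bitscore / protein length,
# i.e. bits of human-conserved information per aligned amino acid.
def baa(bitscore: float, protein_length: int) -> float:
    return bitscore / protein_length

# Illustrative, hypothetical bitscore chosen so the result matches the
# ~1.25 baa figure quoted for the E. coli hit; 529 AAs is the length of
# the human ATP synthase beta chain.
human_beta_chain_length = 529
print(round(baa(661.0, human_beta_chain_length), 2))  # -> 1.25
```

On this reading, a "jump" in the plots is simply a large increase in this ratio between two successive evolutionary nodes.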
Corneel at TSZ: April 9, 2018 at 11:29 am
Entropy’s example of lactate dehydrogenase demonstrating enzyme promiscuity is here. You dismissed it, but it is quite relevant to substrate specificity of E3 ubiquitin-protein ligases as well.
Not so. That case is about an enzyme that already has a specific folding, potentially functional for some class of simple biochemical molecules and for some class of reactions. A simple substitution at the active site can change (sometimes even a lot) the affinity for specific substrates in the range of possible substrates for that folding and general function. That can be seen in many cases inside a protein family, where the structure and folding are mostly shared, and so is most of the sequence specificity. Not so in the case of the specificity of E3 ligases. That specificity is about recognizing completely different target proteins, and their appropriate state. My data about homology show clearly that a great part of the protein sequence specificity is involved in that, usually a very long sequence, much longer than the domain part involved in the common ubiquitin transferase process. You are comparing two completely different scenarios.
Haha, that’s rich. So you have yourself shown how the human form of PRICKLE1 has evolved by an already functional protein acquiring an additional function? And this is not a rung of the ladder why exactly?
Not at all. Again you don't understand. The scenario is similar to that of E3 ligases. The function of the protein is modular: it is due to the interaction of (at least) two different parts of the molecule. In the case of Prickle1, there is a domain part which is more conserved in pre-vertebrates. IOWs the domain is older, it appeared previously. The non domain part, instead, is taxonomically restricted. In vertebrates, it is completely different from the same part in hymenoptera, for example, but it is highly conserved in each of the two groups of organisms, as I have shown here: Information jumps again: some more facts, and thoughts, about Prickle 1 and taxonomically restricted genes. https://uncommondesc.wpengine.com/intelligent-design/information-jumps-again-some-more-facts-and-thoughts-about-prickle-1-and-taxonomically-restricted-genes/ As I have already said, this is an analysis of the different functional specificities of two different functional modules in the same protein, with different evolutionary histories. It has nothing to do with ladders. A ladder is a realistic pathway through which a new original complex function, that did not exist before, appears by simple AA modifications starting from some unrelated sequence that existed before and had some different function, or no function at all; where an initial mutation explained by RV alone already provides the new function in a naturally selectable form; and where all the other steps are simple and, each one of them, naturally selectable over the previous step. I hope that's clear. Neither Entropy, nor you, nor, least of all, myself, have ever provided such a ladder. Because, very simply, such a ladder does not exist.
Protein evolution will involve a certain amount of swapping, duplicating, adding and deleting functional domains. Mutations of this type do not require a designer and are valid steps on your ladder to increased functional complexity.
Re-use of domains in multi-domain proteins is a different problem. I have never used it as a scenario of functional information, simply because in that scenario the measurement of the functional information is much more difficult. Indeed, in the ideal case where the sequence of the modules is always kept the same (which is not true in most cases, because they are often adapted to each new protein), the functional information involved is only that implied by the probability of getting that particular assemblage over all possible random assemblages. I believe that a designer can probably be inferred in those cases too, but you can certainly understand that there are many more variables, and too little is known to make a quantitative reasoning about that. I prefer easier tasks, which can be done with precise and explicit reasonings, to difficult analyses, which would remain vague and uncertain anyway. That's why I stick to sequence analysis. And all my reasonings are about the building of functional sequences. Why not? We have thousands, millions of functional sequences. In functional sequences the bits are obvious, they are there to be measured. So, your statement that: "Mutations of this type do not require a designer and are valid steps on your ladder to increased functional complexity." is completely unsupported, and irrelevant to the problem of how functional sequences emerge. However, if you could demonstrate (which you have not even tried) that some modular recombination of functional parts really is in the range of RV, that would be a simple recombination. Therefore no ladder. To show that there is a ladder to complex function, you should demonstrate that: a) Joining the existing functional modules A, B and C to get a new function is a complex transition: IOWs, that it requires more than 500 bits of information. 
b) That it happens in many steps, each of them simple (a few bits) c) That each step from A, B, and C to A+B+C in a new functional configuration is naturally selectable over the previous step. That would be a ladder for complex domain recombination. You want to try? gpuccio
Do you agree that your results show that the Designer hardly added any complex functional information to the human lineage since the human-chimpanzee split?
Question-begging ET
gpuccio Corneel
Not much at the level of protein coding genes, I agree. Probably some has been added at that level too, but not much. Certainly much less than at some other transitions.
However, large changes in splicing patterns and gene expression. bill cole
Corneel at TSZ:
Apropos of nothing: Do you agree that your results show that the Designer hardly added any complex functional information to the human lineage since the human-chimpanzee split?
Not much at the level of protein coding genes, I agree. Probably some has been added at that level too, but not much. Certainly much less than at some other transitions. gpuccio
Too funny- Apparently the "problems" with ID have nothing at all to do with ID!:
Generally speaking, finding some operational way to identify who the designer is, what tools it uses, when it designs, how to distinguish design from non-design, etc.- Flint @ TSZ
Umm, we don't even ask those questions until AFTER (intelligent) design has been detected and is being studied. And that means those questions are irrelevant to ID, which is only about the (intelligent) DESIGN. These people have thinking issues. Thankfully not one is any kind of investigator. That said we do have a scientific methodology for distinguishing between intelligent design and nature. And you and yours don't have anything but "Not ID!" ET
dazz at TSZ: A follow-up to my #795. This paper has been linked by Cornelius Hunter in another thread. I think it can add to our discussion: Mechanisms and Dynamics of Orphan Gene Emergence in Insect Genomes https://academic.oup.com/gbe/article/5/2/439/560219 gpuccio
dazz at TSZ:
What do you mean by unrelated? How many of the thousands of proteins involved in the vertebrate transition are “unrelated” to the ones in the closest extant invertebrate relatives? Genuine question there, I have no idea
I like genuine questions! :) As we discuss a sequence space, "unrelated" essentially means with no sequence homology at all. That is certainly true for the 2000 protein superfamilies, which are also required to show no similarity in structure and function. Regarding the transition from pre-vertebrates to vertebrates, of course not all the proteins that show high engineering are unrelated to their possible homologues in the previous step. Some really appear in cartilaginous fish, with no homologues before, but they are a minor subset. In all the other cases, the engineering adds new functional information to a core of human-conserved functional information that already existed. For example, our old friend TRIM62 has the best hit in pre-vertebrates with Branchiostoma belcheri, a chordate: tripartite motif-containing protein 54-like, 129 bits, 131 identities (26%), 228 positives (44%), 72 gaps, Evalue: This is certainly a homologue, with that Evalue. But is it the same protein? Has it a similar function? I think it is a similar protein, with similar E3 ligase activity. The fourth hit, always in Branchiostoma belcheri, which has a slightly lower bitscore (106 bits), is explicitly identified as: E3 ubiquitin-protein ligase Midline-1-like. The highest homology is observed in the N terminal part of the molecule, where we find the RING finger domain and the B-Box zinc finger domain. But the best hit in cartilaginous fish is with Rhincodon typus: E3 ubiquitin-protein ligase TRIM62, 823 bits, 382 identities (80%), 423 positives (89%), 1 gap, Evalue: 0.0. Now the whole sequence is almost identical to the human form. Therefore, even if TRIM62 definitely has homologues in pre-vertebrates, essentially for the two N terminal domains, the rest of the sequence, as it appears in cartilaginous fish and is conserved up to humans, is highly unrelated to any sequence that existed before. gpuccio
dazz: About saltationism in an ID scenario, I see that it is a very important point for you. It is not for me. If you prefer to believe that in an ID scenario there must necessarily be a phenotypic saltation in terms of days, maybe hours, maybe minutes, please feel free to believe that. I don't agree, but again it's not important to me. For me, it makes no difference, and moreover I don't like to make up things when I don't know them. As I said, only facts can tell us how it really happened. gpuccio
dazz at TSZ:
Can’t you see the irony of it all? As I’ve tried to show, it’s actually your position that involves non-gradualism, necessarily, while darwinism requires gradualism, yet you assume non-gradualism in darwinism to debunk darwinism
Let's try to understand each other, given your new, reasonable attitude. I assume non-gradualism of information from facts. Not from neo-darwinism. I am fully aware that neo-darwinism requires gradualism. A gradualism that does not exist in the information space. My table about probability barriers, and all my reasoning about probability barriers, is meant exclusively to show the limitations of RV. I have debated NS separately, in an entirely different OP, and using other arguments. There is no doubt that in any hypothetical neo-darwinist process, a new function must appear and be naturally selectable before NS can have any role. In penicillin resistance, the new function is rather simple: 1 AA. In chloroquine resistance, the new function is a little more complex: 2 AAs. This has nothing to do with the intervention of NS, which comes later. How many AAs are necessary, in your opinion, to have the function of ATP synthase? Just curious. gpuccio
Coleweed at TSZ: (continued) OK, here are the baa values in the group of 223 E3 ligases and in the group of all other proteins:

Deuterostomia (not vertebrates):
E3 ligases: mean: 0.7195036; sd: 0.430313; median: 0.6618929; 3rd percentile: 0.1508268; 97th percentile: 1.632665
All other proteins: mean: 0.6699668; sd: 0.4280429; median: 0.6082425; 3rd percentile: 0.09913258; 97th percentile: 1.581547

Cartilaginous fish:
E3 ligases: mean: 1.128915; sd: 0.4982895; median: 1.189977; 3rd percentile: 0.2220413; 97th percentile: 1.9767
All other proteins: mean: 0.9470713; sd: 0.5179081; median: 0.9669421; 3rd percentile: 0.1176752; 97th percentile: 1.843935

The jump:
E3 ligases: mean: 0.4089799; sd: 0.3220369; median: 0.3886589; 3rd percentile: -0.1011145; 97th percentile: 1.104845
All other proteins: mean: 0.286571; sd: 0.3150006; median: 0.2628155; 3rd percentile: -0.1823288; 97th percentile: 0.9217532

And that is what you can see in Fig. 4. gpuccio
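As a sanity check on the numbers above: with paired per-protein data, the mean of the jump should essentially equal the difference between the two group means. A tiny sketch, using the E3-ligase means quoted in the comment (the small residual reflects rounding and slight differences in the underlying protein sets):

```python
# Sanity check (assumed relationship for paired data):
#   mean(jump) ~= mean(vertebrate baa) - mean(pre-vertebrate baa)
e3_prevert_mean = 0.7195036    # Deuterostomia (not vertebrates), E3 ligases
e3_cartfish_mean = 1.128915    # Cartilaginous fish, E3 ligases
reported_jump_mean = 0.4089799 # "The jump", E3 ligases

derived_jump = e3_cartfish_mean - e3_prevert_mean
print(round(derived_jump, 4))  # ~0.4094, close to the reported 0.4090
```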
Coleweed at TSZ: (continued) I would suggest that you can look at this OP of mine for further details: The amazing level of engineering in the transition to the vertebrate proteome: a global analysis https://uncommondesc.wpengine.com/intelligent-design/the-amazing-level-of-engineering-in-the-transition-to-the-vertebrate-proteome-a-global-analysis/ You could in particular look at the density distribution graphs. Then you say:
Could you plot the variable in that figure against the average human conserved information of pre-vertebrates and vertebrates please?
The variable plotted in Fig. 4 is the information jump, that is, the difference between pre-vertebrates and vertebrates in the two groups of proteins. I don't understand what you mean by "plotting the variable in that figure against the average human conserved information of pre-vertebrates and vertebrates". I can give you here the values of baa in both groups, instead of their difference, but that requires a little time. I will do it later.
Of course not, the average curve will be flattened towards the mean, exactly because all curves are bound by the upper and lower limit.
You are wrong. The form of the individual curves, and also of the average curve, cannot be predicted at all, except for the extreme values, which of course will be near the maximum for mammals (or even more so for primates), and low (but of course not zero) at the start of metazoa. But the form of the curve depends only on the flow of information in evolutionary history, and there is no law that predicts it. For example, there is no special reason to predict that the biggest jump (by far), both as an absolute value and even more so in relation to the time window, should happen at the vertebrate transition. You are trying to suggest that these data do not describe anything, but you are completely wrong.
Not true. The protein divergence will saturate with increasing time since divergence.
That is true only for neutral variation, in particular the Ks. And it's an argument that I have used a lot of times to defend common descent. But that is not true at all for functional sequences, IOWs for conserved sequences. The behaviour in that case is completely different from protein to protein.
The rate of approach to the protein sequence (NOT homology, they already were homologous) of modern humans will be dependent on the rate of amino acid substitutions, right?
Not at all. It depends on functional constraints. The rate of substitutions is the process that tends to change information. Conservation is the opposing force, which tends to keep it if it is functional. For long time windows (like the 400+ million years since the vertebrate transition) the power of neutral variation is already at its maximum, as you yourself stated (it reaches saturation more or less at that time). Therefore, the only cause of different behaviours is the functional constraint. gpuccio
Coleweed at TSZ: April 8, 2018 at 9:48 pm
TRIM62 is the only protein that traverses the median portion of the graph and that happens to be the protein that shows the largest jump. Your figure 5 corroborates my interpretation, not yours.
What do you mean? I have chosen three different E3 ligases that have different behaviours in their evolutionary history. That was exactly the point of that figure. Why would that "corroborate your interpretation"?
Standard error bars, please.
I can give you the details for each group of organisms. I don't have the time to include them in a plot now. Here they are (I give you both the baa values and the absolute bitscore values):

baa values:
Cnidaria: mean: 0.5432765; sd: 0.4024939; median: 0.4337176; 3rd percentile: 0.07280598; 97th percentile: 1.489408
Cephalopoda: mean: 0.5302676; sd: 0.3949502; median: 0.4286452; 3rd percentile: 0.06695778; 97th percentile: 1.449216
Deuterostomia (not vertebrates): mean: 0.6705278; sd: 0.4280898; median: 0.6086298; 3rd percentile: 0.0994824; 97th percentile: 1.583754
Cartilaginous fish: mean: 0.9491001; sd: 0.5180335; median: 0.9685318; 3rd percentile: 0.1179967; 97th percentile: 1.848485
Bony fish: mean: 1.06373; sd: 0.4992876; median: 1.088608; 3rd percentile: 0.180113; 97th percentile: 1.916667
Amphibians: mean: 1.106878; sd: 0.509575; median: 1.147745; 3rd percentile: 0.16634; 97th percentile: 1.946024
Crocodiles: mean: 1.2175; sd: 0.5166932; median: 1.288133; 3rd percentile: 0.1993833; 97th percentile: 2.007613
Marsupialia: mean: 1.354032; sd: 0.5016414; median: 1.458531; 3rd percentile: 0.2311863; 97th percentile: 2.03424
Afrotheria: mean: 1.628872; sd: 0.43412; median: 1.750877; 3rd percentile: 0.3554703; 97th percentile: 2.068184

Absolute bitscore:
Cnidaria: mean: 276.9441; sd: 330.8035; median: 185; 3rd percentile: 30; 97th percentile: 1003
Cephalopoda: mean: 275.5714; sd: 332.3311; median: 181; 3rd percentile: 28.1; 97th percentile: 1010
Deuterostomia (not vertebrates): mean: 357.6499; sd: 429.6395; median: 250; 3rd percentile: 30.8; 97th percentile: 1251.36
Cartilaginous fish: mean: 541.4306; sd: 676.6653; median: 376; 3rd percentile: 29.3; 97th percentile: 1937
Bony fish: mean: 601.545; sd: 708.3537; median: 427; 3rd percentile: 36.2; 97th percentile: 2059.76
Amphibians: mean: 630.4002; sd: 739.64; median: 445; 3rd percentile: 32.3; 97th percentile: 2159.45
Crocodiles: mean: 706.2374; sd: 815.3788; median: 495; 3rd percentile: 32.7; 97th percentile: 2456
Marsupialia: mean: 777.487; sd: 862.6009; median: 560; 3rd percentile: 33.5; 97th percentile: 2594.75
Afrotheria: mean: 936.2252; sd: 977.1243; median: 691; 3rd percentile: 53.5; 97th percentile: 3098

OK, more in next post. gpuccio
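Summaries of this kind (mean, sd, median, 3rd and 97th percentiles over a set of per-protein values) are straightforward to reproduce. A minimal sketch with numpy; the function name and the toy input values are mine, not from the thread (the real input would be one baa value per human protein for a given clade's best hit):

```python
import numpy as np

def summarize(baa_values):
    """Summary statistics in the format used in this thread:
    mean, sample sd, median, 3rd and 97th percentiles."""
    baa = np.asarray(baa_values, dtype=float)
    return {
        "mean": baa.mean(),
        "sd": baa.std(ddof=1),      # sample standard deviation
        "median": np.median(baa),
        "p3": np.percentile(baa, 3),
        "p97": np.percentile(baa, 97),
    }

# Toy example with made-up baa values:
stats = summarize([0.2, 0.5, 0.9, 1.1, 1.6])
print({k: round(v, 3) for k, v in stats.items()})
```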
Joe Felsenstein at TSZ: April 8, 2018 at 8:31 pm Wow. Thank you! :) I agree wholeheartedly with all that you say here. Except, of course, with the idea that NS can really generate complex functional information. But that's not really what you are discussing in this comment. Look, I will not enter into the specifics of your criticism of Dembski. I agree with Dembski in most things, but not in all, and my arguments are in any case more focused on empirical science, and in particular biology. As I have already discussed, my criticism of the powers of NS is strictly empirical (see also my comment #783 to you). I do believe that NS can add some functional information (we see that in the microevolutionary scenarios I have quoted and analyzed). But only in very limited form. It can never even start to approach a really complex function. However, I must really thank you for your very clear, and correct, position about functional information. I am happy that both you and I (like, I hope, many others) agree with Orgel, Dembski, Szostak, Abel, Durston and others about its existence and nature. Your final statement is precious:
Nevertheless many commenters critical of Dembski’s argument have declared that CSI is meaningless. I disagree strongly. Yes, the amount of specified information is very difficult to measure, but it is meaningful, and there is no doubt that the amount of it in any living form exceeds Dembski’s threshold. The real question is how it got there, not whether the scale is meaningless.
I fully agree. Again, thank you. gpuccio
dazz at TSZ: April 8, 2018 at 8:14 pm Your understanding and ability to discuss in a civil way are increasing in a matter of days, maybe hours. While I certainly commend you for that, I must say that it's a brick shitting sight! :) Seriously, you say:
One other thing I just noted is that gpuccio, in your proposal for a design mechanism based on duplication-deactivation-tweaking-reactivation, you qualified that engineering sequence as non-coding.
Yes.
I’m guessing that’s not a coincidence, you couldn’t have possibly said non-functional DNA because there’s no such thing as junk DNA, right? I mean, you could say that it’s function is to provide a medium to engineer the future protein, but it would still be technically non-functional or junk DNA as understood by mainstream biology.
I have never said that no non-functional DNA exists. That's not what I believe. I don't believe that most DNA is non-functional, but I do believe that some of it is. Moreover, as you say yourself, its function could well be, at least in part, to provide a medium to engineer future functions. This is not strange at all. After all, a great part of non-coding DNA is of transposonic origin. And transposons are also the most likely tool for biological design, as I have said many times.
More questions spring to mind: is there any evidence that such a process is currently ongoing? any way we can check for non coding DNA being prepared for the next macro-design event with tons of coordinated protein complexes with unique functions being engineered?
Yes, there is. First of all, there is a lot of evidence that at least some new protein-coding genes originate from transposon activity. And there is at least one example (maybe more than one) of an apparently functional ORF in humans that was present in other primates as a non-coding sequence, but not as an ORF. I don't have the reference readily available, but I have discussed the paper a few times here. I will see if I can retrieve it in some way. gpuccio
dazz at TSZ: This is really strange:
You’re missing the point. I’m not talking about saltationism in information, or at the molecular level. I’m saying that your assertions imply wild saltations at the phenotypic/morphological level, and those are HUGE problems for your “theory”.
I can't really see the problem. Saltations at the phenotypic level are of course the direct consequence of saltations in information. What's the problem?
You tried to dodge the islands of function problem by proposing that the engineering could happen in non-coding DNA (because gradual engineering of functional proteins destroys islands of function), but you’re still left with the problem of the effect of activating a protein that has undergone massive changes. One would presume, since you also make a huge deal about protein complexes and IC, that activating a single protein at a time won’t do the job (what good is half a protein complex? don’t all the proteins need to be in place to have a functional IC system?). In conclusion, upon activation of the newly engineered sequence, you would get an entirely new organism: poof! in one fell swoop!
Of course what is activated must be a fully working functional system. Again, I can't say if the engineering of vertebrates takes place in one quick step (possible) or in slow multiple steps (possible), each involving some functional block. Facts must answer that. As I have said, at present I would opt for a process that involves both procedures. But it's just a guess. I still can't see your problems with that. You say:
Well, I happen to find it hard to believe that an invertebrate could give birth to a vertebrate somehow. Call me hyper-skeptical if you wish, but you must admit it must have been a brick shitting sight.
Maybe. Have you ever seen a butterfly emerge from a cocoon where only a caterpillar was a short time before? It's a brick shitting sight. gpuccio
I haven’t read all of Entropy’s comments, but, aside from the incredible rudeness and lack of understanding, one “argument” caught my attention. Not often is one confronted with such baffling stupidity:
Entropy: I can explain to you why I find that unconvincing. If it was impossible for nature to put that amount of information together, then it would be impossible for designers to put that amount of information together. How so? Well, in order for designers to put that amount of information together, energy flow is necessary. Putting information together consists on “transforming” energy flow into patterns. We don’t produce available energy. We’re completely dependent on nature for that.
I cannot imagine myself holding such nonsense longer than a fraction of a second, let alone post it as a reply to Gpuccio. Even if all possible designers are dependent on energy from this universe, Entropy's “reasoning” doesn’t make sense. Doesn’t Entropy know that ID is neutral on the identity of the designer? Doesn’t he know that ID is compatible with alien designers who, of course, have access to “energy flow” from nature? Secondly, given that the universe does not come from nothing, and that it must take “energy flow” to produce the universe, it cannot be the case that the energy of this universe exhausts all the energy there is. I cannot understand a person who cannot come up with these simple refutations. Origenes
ET: Of course there are AA substitutions that do not affect the function. That's why we must rely on conservation to know how functionally constrained a protein sequence is. And it's true that some synonymous substitutions can have functional consequences (like diseases). But that's more an exception than the rule. gpuccio
On another note, these guys at TSZ do not seem to realize that protein AA sequences are variable, meaning you can alter the sequence and not affect the final protein. That means that mutational changes in the DNA most likely won't produce any difference. That said, it is also true that so-called silent mutations- mutations in the DNA that still code for the same amino acid- can cause the final protein product to be malformed. It's a timing thing, and not every organism has the same number of each tRNA, which can cause protein production to be messed up due to timing. (HT DaveScot) ET
gpuccio, Please read what Joe Felsenstein has to say about CSI - I provided a link in comment 782. Once you read his article you will see that he doesn't have a clue when it comes to specified complexity and CSI. ET
dazz at TSZ: you ask (not from me) the following:
Riddle me this. why can the designer add tons of mutations to engineer a new protein but can’t do it one at a time on a functional one?
I think the answer is simple enough. Because there is no way to go from one existing functional structure to another, unrelated, functional structure by single AA steps, and still retain the function of the previous structure throughout the transition. Even neo-darwinists usually accept that a transition needs, at least, duplication and inactivation. Or, better still, working on some non-coding sequence. The problem is that not only are there no naturally selectable ladders from one protein to some unrelated one; there are not even ladders that simply retain a function. gpuccio
dazz at TSZ: I will try to be more clear. I have never, never stated that neo-darwinism is logically impossible. I state that it is empirically impossible. IOWs, the premises that would make it work are not true. They don't exist in reality. It's not a logical theorem: it's an empirical falsification. By the way, I certainly believe that neo-darwinism is a scientific theory. Because it is falsifiable. Indeed, it has already been empirically falsified by facts. gpuccio
dazz at TSZ:
What’s your problem with that? if every step in the way was a microevolutionary one, why would you question the adequacy of darwinism as a mechanism?
No problem at all. I have only said that you assume the premises that make your theory reasonable. I have no problems with that: but it is not certainly an argument. It is circular. gpuccio
dazz at TSZ: April 8, 2018 at 3:48 pm My compliments. You are starting to understand at least something of what I say. Good. You ask:
I’ll just ask, how do you get from pre-vertebrates to vertebrates through that mechanism gradually? You claim there are thousands of proteins involved in the design of vertebrates. Were all those sequences engineered in non functional DNA and activated all at once? Or can you have intermediates with a mix of pre-vertebrate and vertebrate proteins? maybe we should start looking for crocoducks
The transition happens in a window of 30 million years. We can't say, at present, if it happened in one day, by some extreme act of design and global engineering, or in one year, or one million years, or 30 million. But we could be able to understand it better as our techniques for getting information about the past improve (and they will improve, I am sure). So, the answer to your question is simply: the facts will tell us. As I said, design can be slow and gradual (non-coding DNA gives lots of chances to engineer new proteins), or much more sudden. Personally, I believe that it happens in both ways: some slow engineering, leading to some more dramatic global change. But it's only a hypothesis. Facts must tell us what is true. By the way, both crocodiles and ducks are vertebrates, I suppose. gpuccio
DNA_Jock at TSZ: April 8, 2018 at 2:43 pm Your comment is, as usual, competent and, as usual, wrong. As I am preparing a wider discourse about some of your points, including this, that I hope I will finish as soon as possible, I will not answer here, for the moment. Please, have a little patience. :) gpuccio
Joe Felsenstein at TSZ: Do you really believe that a structure like ATP synthase arises by a 2-3 AA variation in some other pre-existing structure with a completely different function, and that NS then adds the hundreds of AAs necessary for it to work, one at a time? Do you really believe that? gpuccio
Joe Felsenstein at TSZ:
The assertion that “No generation of more than 500 bits has ever been observed to arise in a non design system (as you know, this is the fundamental idea in ID)” ignores that natural selection can put any number of bits of functional information into the genome.
That's not true. In the only well-known instances of NS, only a few AAs are added to optimize the original function. Again, let's look at the two which are better detailed. Both are examples of resistance which arises in the presence of extreme environmental pressure (an antibiotic) and as a consequence of minor mutations in an already existing, complex and functional structure:

Penicillin resistance: original function: 1 AA. NS adds a few AAs (about 4) to optimize the function.

Chloroquine resistance: original function: 2 AAs. NS adds a few AAs (2-3) to optimize the function.

See my comment #716 (to you) for further details. The point is:

a) If the starting mutation is much more complex than two AAs (as is certainly the case for all new functions that are not simply a deformation of an existing functional structure, as in our two examples), then it quickly falls beyond the probabilistic resources of biological systems.

b) NS can only optimize the function that has already appeared, and that function has to be strong enough to give some reproductive advantage (otherwise the trait will not be fixed).

c) The increase in the already existing function works against any emergence of a new function: indeed, purifying selection will act to preserve the increasingly functional sequence, optimized by positive selection.

So, if a new and original function (again, see comment #716) which requires at least 500 bits of functional information to be implemented arises, it's really empirically impossible that it may arise by RV + NS. The simulation shown in your TSZ article is irrelevant for our discussion here. A single phrase shows why: "The relevant measure is fitness." False. A complex functional protein can add to fitness only if its function already exists and is strong enough to have any effect on reproductive success. A protein that requires 500 bits of information to be functional (and there are lots of them) cannot add to fitness until the information is there. How will it appear? Your simulation tells us nothing of what really happens in biological systems, and with proteins. As usual in simulations, it only assumes the increase in fitness, and the continuous landscape, and so on. Irrelevant. gpuccio
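For scale, the AA counts and bit counts being argued about can be put in back-of-envelope terms. Under the simplifying toy model (an assumption of mine, not a claim from the thread) that each functionally required position must be one exact residue out of 20, independent of the others, one position is worth log2(20) ≈ 4.32 bits:

```python
import math

BITS_PER_AA = math.log2(20)  # ~4.32 bits for one fully specified residue

def functional_bits(n_positions):
    """Bits of functional information under the toy model above:
    each of n_positions must be one exact amino acid (independent sites)."""
    return n_positions * BITS_PER_AA

print(round(functional_bits(1), 2))   # ~4.32 bits (a 1-AA starting mutation)
print(round(functional_bits(2), 2))   # ~8.64 bits (a 2-AA starting mutation)
print(math.ceil(500 / BITS_PER_AA))   # ~116 fully specified positions reach 500 bits
```

On this rough accounting, the resistance examples start at under 10 bits, while the 500-bit threshold corresponds to on the order of a hundred fully constrained positions; real proteins are of course not fully constrained at every site, so this is an upper bound per position, not a measurement.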
Joe Felsenstein has integrity issues. Even after it was carefully explained that his NCSE article didn't even address CSI, he is still promoting it as an example of natural selection producing CSI. However, from the article it is clear that Joe doesn't have a clue. Read his trope for yourself: Joe F vs Wm Dembski- Joe is happy to misrepresent Dembski and then refute that misrepresentation. ET
Corneel at TSZ:
I am not familiar with this gene and have only some superficial knowledge of the ubiquitin system, so you will need to help me out here:
You could try to read the OP, for example.
From online databases I have gathered that TRIM62 (tripartite motif containing 62) also known as DEAR1 is a member of the family of E3 ubiquitin-protein ligases. It is a known tumor suppressor gene and a regulator of cell polarity.
OK.
So what are the complex protein functions that you need to be deconstructed? Is it the ubiquitin-protein transferase activity? Or perhaps you meant the substrate specificity (SMAD3 seems to be an important substrate).
The substrate specificity is the part where most of the engineering takes place.
If the latter, how is the substrate specificity conceptually going to be any different from that of Entropy's example?
What example? Please, be more specific. I cannot follow the many lines of reasoning that I am trying to answer.
And look, I have already deconstructed the function of TRIM62 into two simpler steps. We are off to a good start
No, you have very correctly distinguished between two different aspects of a function, corresponding to different domains and parts of the molecule. In multi-domain proteins, that's obvious, and of course different parts of the protein often have a different evolutionary history. You will find at the beginning of this thread some very interesting discussions with DATCG about non-domain parts. I have analyzed the different evolutionary histories of the domain and non-domain parts of Prickle 1 in this OP: Homologies, differences and information jumps https://uncommondesc.wpengine.com/intelligent-design/homologies-differences-and-information-jumps/ That's not deconstructing, but only differentiating functional modules. gpuccio
Corneel:
Heh heh, I noticed that too. gpuccio's plot only runs to 100 MYA. I suspect no "information jumps" could be found after that time. Perhaps if we ask nicely, gpuccio would be kind enough to include Pan troglodytes in his graphs and we could discuss the implications of those findings with his fellow IDers?
There is a specific methodological reason why I stop at 100 million years in my plot and analysis. I have explained it many times, most recently in my comment #778 to you. Of course, I absolutely agree that there is almost no difference in protein coding genes between chimp and humans. What's the problem? gpuccio
dazz at TSZ:
2) If islands of function then saltationism, irrespective of mechanism
The landscape is certainly not smooth. It is certainly rugged. And functions are obviously in islands, as all the evidence shows. The discussion is at most about how big the islands are. Therefore, saltationism in information is the only truth. But, as I have tried to explain to you, a saltation of information can well be achieved by gradual engineering. Or it can be achieved by quick engineering, and therefore with a steep saltation in time. See comment #776. I can't see why you see saltation as a problem for ID. It is definitely a problem for your theory, but not for ID. gpuccio
Corneel at TSZ:
Come again? The variable you plot in your graphs is bound between zero and 2.2? Then of course there is not going to be a linear relationship between time since divergence and your measure of conserved functional information. The data in your plot suffer from scaling effects, and you will always observe an information jump around intermediate values!
Not at all. My data are bound simply because, of course, a protein cannot be more than identical to another one. There is no special scaling, outside of what the bitscore measures and its expression per amino acid site. And there is absolutely no reason to observe an information jump around intermediate values. If you look at the three proteins in Fig. 5, you can see that each has a completely different behaviour. There are proteins that are already very similar to the human form in bacteria, like ATP synthase chains alpha and beta. There are proteins that are still different from the human form in mammals. The form of the curve cannot be predicted according to any law, because it depends on the flows of information in evolutionary history. The dotted line in red is the mean behaviour of the whole human proteome. Even there, a significant jump at the pre-vertebrate/vertebrate transition is obvious. Moreover, the temporal window of that transition is very narrow, compared to the whole evolutionary time of metazoa represented in the plot. Moreover, the jump in vertebrates is very significantly different for different groups of proteins, as shown in Fig. 4.
Since your data are bound by an upper and lower limit, the fitted curve will assume a sigmoid shape, just like dose-response curves do.
Not at all. As you can see, the curve for individual proteins can have any possible form. And the mean curve for the whole proteome is not sigmoid at all. The common shape of dose-response curves is due to specific interaction laws, thresholds, saturation, and so on. But there are no such laws for protein homologies. The only obvious thing is that if I plotted (which I do not do) primates, for example, and in particular the chimp, I would of course always have a very high, almost maximal value. And of course, if I plotted humans, the curve would always end with maximal identity. But that is trivial. Moreover, as I have often said, the homologies between near species, where the exposure to neutral variation has been relatively brief, are not good for measuring functional information. An unmeasurable part of the homology, in those cases, can be passive, and not due to functional constraints.
The exponential phase is the part where you believe your Designer has been busy, but you will always observe an exponential phase in any curve that uses the entire range, regardless of whether there really were injections of information.
Not so. For many proteins, the curve is rather linear, and even the mean curve is rather linear, except for the jump at vertebrates. Many proteins do not reach high values of homology to humans even in mammals. And many proteins already have high values at the start of metazoa.
What you need to show is that the steepness of the curves is larger than what would be expected under a constant substitution rate.
I am not sure what you mean by "constant substitution". What we are observing here is not a substitution rate, but rather the rate at which new homology to the human protein appears. And of course, if that rate were more or less constant, we would observe a more or less linear curve (as we do in some cases), and no big jump. gpuccio
gpuccio Thanks for looking at and critiquing Dazz's post. There have been additional posts by Corneel, Joe Felsenstein and DNA_Jock this a.m. All are very bright guys. What I got from Dazz's post is where the disconnects are. Joe's position was simply to attack your 500 bit claim. There is a very strong likelihood of miscommunication here, as I believe you both are looking at the problem very differently. Where you are looking at 500 bits in a limited space (a single protein complex), he is looking at adding it anywhere in the genome, so theoretically he can refute your claim, yet it is not really important for your overall argument. Corneel is starting to look at and understand TRIM62. This is a good first step in understanding your argument, since it is application specific. Jock is making the same argument as Dazz, with local peaks of protein performance based on enzyme efficiency and the hypothesis of smooth or rugged landscapes. I am hoping that Corneel discovers that you are not looking at proteins with single-molecule binding assignments but at multi-protein complexes that all have to work properly for the system to function. In this case you would need to optimize multiple landscapes to reach function. bill cole
bill cole (about dazz's statements at TSZ): "I can see why you are frustrated with these guys." Well, dazz has never made any real argument, and just sticks to that. Not much to be frustrated with. His best is: if my theory is true, my theory is true. At the same time, he is accusing me of not having a positive argument. That's something. Just a few points that deserve clarification: dazz:
OK, so this transition could have happened gradually you say. Now all those gradual changes obviously happened in functional DNA. That means that each of those small changes could have been naturally selected, because they would produce differential fitness. Don’t you think?
No, I don't. I have given a lot of explicit reasons why that transition could not have happened gradually by RV + NS. You don't agree, but what does it matter? Just to be clear, the transition can very well have happened by design, either gradually or rather suddenly. See later. Again, the difference is in the mechanism, and in what each mechanism can and cannot do. But you seem not to understand that.
I’m sorry, but I’m not interested in your FI red herring nor any negative arguments. Please don’t pretend we haven’t addressed your math, so let’s stick to what actually might have happened OK?
Not OK. Addressed my math?
Is there evidence that the transition from pre-vertebrates to vertebrates happened? Yes, I’m sure you will agree.
Of course.
Your barriers are only in your imagination.
No. I have given solid evidence for them. You have given no evidence in favour of your explanation (gradual naturally selectable steps), therefore those are really only in your imagination.
Wait a minute, if the ladder doesn’t exist, then we’re back to saltationism: if the ladder is not there, it’s not there for NS NOR the designer!
Wrong. Again you don't understand. A ladder of naturally selectable simple steps does not exist. But design can be implemented either gradually or more quickly. It does not need any ladder, because the variation is guided by intelligence. It does not need NS, if not maybe after the whole function or plan has been implemented. An example of how design can build a new protein? Very simple. Starting from some non coding sequence, which could be a duplicated and inactivated gene, or just a piece of non coding, non functional DNA, guided variation (for example by guided transposon activity) implements the specific mutations that gradually build the future protein. However, the sequence is activated only when the engineering is complete (including the regulatory parts). Then, and only then, the sequence acquires a starting codon, becomes an ORF, and is translated. That's exactly what seems to happen in many cases. But of course, it could never happen without an intelligent guide. Because, without an intelligent guide, the variation would be only random neutral variation (no NS is possible in this scenario), and could generate only meaningless random sequences. I hope that's clear.
Saltation yay or nay?
Design. More or less saltational, only facts will say. A lot of evidence, however, is for some saltation, definitely (see Gould and Eldredge).
You’re between a rock and a hard place buddy
Not at all, as explained above.
I’ve never seen a single act of design producing a single DNA change.
Strange. Never heard of genetic engineering?
Since your FI calculations are based on DNA sequence, I can confidently affirm that I’m on firmer grounds when I say you have absolutely nothing to show for your work. No theory, no evidence, no nothing.
I take notice of your opinion. (I know, that seems to make everyone at TSZ angry, calling an opinion an opinion. But I am definitely a bad guy.) Ah, yes, and you have: "If it happened gradually, then only sequences very close to each other were produced at each step, making every step trivially attainable and naturally selectable." IOWs: "If my theory is true, then my theory is true". Good for you. gpuccio
gpuccio Dazz's argument.
That means that each of those small changes could have been naturally selected, because they would produce differential fitness. Don’t you think?
He is making the assumption that all functional mutations are selectable. Amazing
I’m sorry, but I’m not interested in your FI red herring nor any negative arguments. Please don’t pretend we haven’t addressed your math, so let’s stick to what actually might have happened OK?
He critiques your claim without argument or support.
If it happened gradually, then only sequences very close to each other were produced at each step, making every step trivially attainable and naturally selectable. Your barriers are only in your imagination
He makes an if then claim without logical connection.
There’s absolutely no reason to believe there’s a “barrier” to RV+NS if it’s possible to traverse the transition one step at a time (just as regularly observed), and if it’s impossible,
He makes an unsupported claim that is almost certainly wrong.
Since your FI calculations are based on DNA sequence, I can confidently affirm that I’m on firmer grounds when I say you have absolutely nothing to show for your work. No theory, no evidence, no nothing.
He makes a conclusion based on erroneous and unsupported claims and declares victory. I can see why you are frustrated with these guys. bill cole
dazz at TSZ: (as quoted by bill cole) I am not sure what your challenge is, because it's really confused. But I will try to understand it and answer it. If I misunderstand, please clarify.
You calculate your information based on conservation and functional constraints of certain protein sequences, right? so if you think you’ve found a protein that, being highly conserved in vertebrates grants the conclusion that such a sequence is functionally specified for vertebrates and there’s no gradual path to such protein from pre-vertebrates (what you call an informational jump),
Correct. But it's not one protein. More like thousands.
that means that there can’t be a gradual pathway from pre-vertebrates to vertebrates.
No. As usual, you don't follow the logic. The detection of a jump just means that there is a huge and rather quick increase in specific functional information. In the particular case of the transition to vertebrates, it happens in the relative time window of about 30 million years. For TRIM62, for example, that jump is of 681 bits. This is all we can say from the analysis of the homologies. The consideration that such a jump is well beyond the powers of a neo-darwinian model derives from many other arguments. 1) No generation of more than 500 bits has ever been observed to arise in a non design system (as you know, this is the fundamental idea in ID). 2) In the case of TRIM62, the 681 bits correspond to 244 new identities to the human sequence arising in cartilaginous fish. A gradual pathway would imply a pathway of 244 single-AA changes, each of them naturally selectable, each of them gradually increasing the function of a protein that already existed in the first deuterostomia, and was probably already fully functional in them, as it is today. All of that in 30 million years, through a ladder that obviously does not exist (see my challenge, yes, the one answered by only one person, and with a completely wrong answer). Including the times to fixation, and all the rest. That is not a fairy tale: it is pure myth, and bad myth indeed. As a reminder, have a look at the usual paper: Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution http://www.genetics.org/content/180/3/1501 3) Of course, it is obvious from the proteomic data that the transition to vertebrates is not a gradual adaptation from the first chordates, but rather a whole new plan, a plan that involves 1.7 million new bits of information and that prepares most of the following transitions, including the final one to mammals and humans.
It is no coincidence that the jump in that transition is by far the greatest jump in human-conserved information in the whole history of metazoa, and that the proteins most involved are complex regulatory proteins, involved mainly in the regulation of the immune system and of the nervous system. No gradual adaptation at all (of which, by the way, there is not the least trace). A whole new plan, a whole new design. However, if you like, you could start proposing a realistic and credible model for the gradual evolution of the 681 functional bits in TRIM62.
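The arithmetic behind the figures quoted in this comment can be checked with a short script. This is only an illustrative sketch of the comment's own numbers (a 681-bit jump, 244 new identities, a roughly 30-million-year window); the probability simply restates the bitscore jump as the chance of one uniform random draw hitting the target, not a population-genetics model.

```python
# Sketch restating the TRIM62 figures quoted above: 681-bit jump,
# 244 new identities, ~30-million-year window. Illustrative only.
from math import log10

jump_bits = 681          # bitscore jump at the vertebrate transition
new_identities = 244     # new identities to the human sequence
window_years = 30e6      # approximate length of the transition window

# The jump read as a probability: one uniform random draw hitting the
# target has probability 2**-jump_bits (a restatement of the bitscore,
# not a model of the evolutionary process).
p_single_draw = 2.0 ** -jump_bits

# A strictly gradual path would need, on average, one selectable
# substitution fixed roughly every:
years_per_step = window_years / new_identities

print(f"p(single draw) ~ 1e{log10(p_single_draw):.0f}")
print(f"one fixed substitution every ~{years_per_step:,.0f} years")
```

The point of the sketch is only to make the quoted magnitudes concrete; whether such a rate of selectable substitutions is plausible is exactly what the surrounding debate is about.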
I know you laughably claim you’re not interested in explaining life, just “functional information”
I can't see what there is to laugh in that. As I have always said, ID theory is about functional information. Functional information and life are two different things, and life is not even a well defined or clear concept. I think that is a serious distinction, but if it makes you laugh, good for you.
if you were intellectually honest you would at least consider telling us how do you envision this weird scenario where hosts of pre-vertebrates are giving birth to vertebrates, and how can you possibly defend the existence of “barriers” to small gradual change while accepting stupidly large changes as perfectly feasible
I thought that could be clear even for you, but I see I was wrong. The barriers are not to the change, but to the mechanism. The mechanism of RV + NS can never generate 681 bits of functional information. Never. Design can easily generate a lot more. For example, this post has more than 681 bits of functional information in it. So, your observation is meaningless. I will not say stupid because it has been said too many times. gpuccio
Corneel:
Not sure which one you would consider the core, but natural selection is certainly the idea that is most relevant to our discussions on ID, as it negates the need for the Designer.
Yes that was the intention. However reality got in the way and proved that natural selection is impotent in that regard. ET
gpuccio Here is a post from Dazz at TSZ
gpuccio, you insist on affirming I have nothing, and I may not have much, but I think the challenge I presented you deserves some attention: You calculate your information based on conservation and functional constraints of certain protein sequences, right? so if you think you’ve found a protein that, being highly conserved in vertebrates grants the conclusion that such a sequence is functionally specified for vertebrates and there’s no gradual path to such protein from pre-vertebrates (what you call an informational jump), that means that there can’t be a gradual pathway from pre-vertebrates to vertebrates. I know you laughably claim you’re not interested in explaining life, just “functional information”, but if you were intellectually honest you would at least consider telling us how do you envision this weird scenario where hosts of pre-vertebrates are giving birth to vertebrates, and how can you possibly defend the existence of “barriers” to small gradual change while accepting stupidly large changes as perfectly feasible
I am interested in your thoughts here. This posts interests me as it shows the contrast between morphological and genetic evidence. bill cole
dazz sez the following after it has been refuted thousands of times:
They don’t even try to make a positive case
The positive case for ID is summed up as: “Our ability to be confident of the design of the cilium or intracellular transport rests on the same principles to be confident of the design of anything: the ordering of separate components to achieve an identifiable function that depends sharply on the components.” - Dr. Behe, Darwin's Black Box. Now compare that to the positive case for evolutionism: It is not intelligently designed! Pathetic ET
No news at TSZ: only dazz, GlenDavidson and Entropy telling nothing. Ah, some rest! :) gpuccio
Origenes: "Larry Moran: … experts do not see a need to encode body plans and brain in our genome …" Yes, that's really discouraging. They really believe that no higher procedures, no higher control are necessary. And they definitely underestimate (a true euphemism) the problem of form. What do they believe in? Probably in what I sometimes call "a very lucky, infinitely complex network of lucky feedbacks". IOWs, they believe that thousands of complex effectors can interact in a varied, infinitely complex way without any general control. gpuccio
DATCG at #765: How true. This thing of the code, or symbolic code, is a clear example of how people on the other side (see the many examples at TSZ in this discussion) are ready to equivocate instead of just trying to understand what is being said. I always feel the need to give explicit definitions for all the terms that could be ambiguous in a discussion, so that others can understand what I am saying and decide if they agree or not. In this discussion, I have recently given two explicit definitions: a) A definition of what a symbolic code is, for me. b) A definition of nature that is acceptable for me (the second of three different definitions proposed). In all my reasonings I have been extremely careful to stick to my definitions, to avoid any ambiguity. The result? I have been accused of redefining nature many times according to my convenience. I have been accused of implying wrong meanings with my reasonings about symbolic code, because a code is always a product of conscious design. I have even been accused of being trivial because I have defined a design system as a system where design interventions happen. And, of course, I am accused of not reading or understanding the non-arguments that they have proposed, where of course there is no definition, no explicit reasoning, and therefore no argument. Is that necessarily what happens with all? I would say not. Corneel has demonstrated himself to be a reasonable and courteous discussant. DNA_Jock is competent and makes real arguments, even if he is rather self-satisfied and does not always understand the arguments I make. And, of course, there have been other good ones in the past, many of them: Mark Frank, Zachriel, Piotr, Elizabeth (and I apologize to those that I don't remember). I am not trying to make a classification of their intelligence or other human qualities. I leave that to the TSZ people. I am just remembering the good ones, because there are so few of them. gpuccio
ET: Thank you for the kind words. "Is English even your native language?" No. Italian is my native language. "Your demeanor is that of a Saint" Of course not, but thank you just the same for the appreciation of my discussion style! :) gpuccio
Larry Moran may be, what they call at TSZ, a “pseudoscientific sellout” for holding that the DNA code is real, but, of course, he holds that most of it is junk anyways. In order to unravel this 'Neutral theory' nonsense I once posed the following question to Larry:
If most of our genome is junk, then where is the information stored for the (adult) body plan? Where is the information stored for e.g. the brain? And where is the information stored for how to build all this?
His response:
Larry Moran: ... experts do not see a need to encode body plans and brain in our genome ... [source]
Origenes
ET @740, "And yet they do- not all. Heck even Larry Moran accepts the genetic code is a real code. TSZ’s Allan Miller is famous for saying it isn’t a code. Glen Davidson says it isn’t a symbolic code" Then they disagree with other Evolutionists who are NOT Design theorists, but who do recognize the Genetic Code as Code. Maybe a reason some refuse to recognize it as Code is a defense mechanism? If one recognizes Code within the cell, that the Genetic Code is authentic as are multiple other Codes, then fear creeps in that Design Theorists will be seen as logical in their conclusions of Design in cellular processing of information. And that cannot be allowed in the marketplace of ideas or imagination. Only one kind of imagination is tolerated: that of blind, unguided, non-design. Yet we know it takes great imagination, knowledge and insight to create nano-technology, especially manufacturing of products like Qualcomm's Snapdragon processor, where Code is embedded in the CPU, connecting circuit processors and even memory. If memory were not encoded, address locations of memory could not be utilized to retrieve stored data. Encoding in chemicals happens every day by design, in silicon-based form. Encoding has happened in carbon-based life forms as well. We know information is encoded in our brain cells, otherwise we could not have these conversations, or know our way home without specific information, rules and syntax, as well as location, place and imaging information. And we know scientists have already stored programs and large data in cells. And research scientists are working on how information is stored in brain cells as well. Information is encoded in every single cell. If it were not, cells could not communicate and exchange information crucial to life's survival. In order to encode informational data onto paper, print, CPU processors or cells in life, there must be a Code to Write it, Read it, Update it, Repair it and, if needed, Erase it.
You simply cannot get around Code. Evolutionists who recognize this are not crazy, stupid, or dumb. And neither are Design Theorists. Where the two depart is on the interpretation of how the Code was implemented: a) by Design b) by blind, unguided, non-design DATCG
Morning, Gpuccio, you've been busy! ET @763, yes, he's doing great in answering specific questions. And I appreciate all the time he takes to respond to us and others at TSZ. Thanks Gpuccio for all the time you give in these posts. DATCG
Thank you very much gpuccio. You have managed a calm and reasonable argument in the face of venom-spewing cowardice. Your posts are very much appreciated here. Your demeanor is that of a Saint and also very much appreciated. Is English even your native language? (just curious- because if not then your posts are even more telling, ie that you care that much to post so eloquently in a foreign language) ET
Joe Felsenstein at TSZ: April 6, 2018 at 11:38 pm
He may have earned the insults, but they rally onlookers to his side. They permit him to concentrate on how unfairly he was treated, rather than on the weakness of his arguments.
False. Please, see my comment #751, in response to Alan Fox saying the same thing. And, as you can see, I have continued to answer Entropy's intelligible statements, for example at #757, without any reference to his continuing insults. So, if your only worry is that the insults help me to dismiss your brilliant arguments, relax: that's not the case. If, on the other hand, you have other reasons to dislike insults in a discussion, why not simply say it? gpuccio
DNA_Jock at TSZ: April 6, 2018 at 9:22 pm (to Bill Cole)
Because the fraction of total sequence space that is explored by mutations to an optimized protein is a tiny, tiny fraction of the total sequence space. You guys keep pointing out how effing yuge the sequence space is. Sampling an infinitesimal fraction of it tells you bugger all about the remainder. Do you guys not even pay attention to your own arguments? Imagine an 80aa protein motif. It’s optimized (this matters), so mutation can perhaps explore three steps away from the optimum: 1520 nearest neighbours, 2.3 million two steps away, 3.5 billion three steps away, but 1.2 x 10^104 members of the space, so you only explore one 10^95th of the space.
I think this argument is in some way related to the TSS argument. Therefore, I will answer it in detail when I write about that issue. gpuccio
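The combinatorics in the quoted comment can be checked directly. A minimal sketch (mine, not from either commenter): the number of sequences exactly k substitutions from a fixed 80-AA sequence is C(80, k) × 19^k. The quoted intermediate figures (2.3 million, 3.5 billion) appear to count ordered position choices, overcounting by a factor of k!; the corrected counts are smaller, which only strengthens the quoted point that three mutational steps explore a vanishing fraction of the space.

```python
from math import comb, log10

L_PROT, ALPHABET = 80, 20          # motif length, amino-acid alphabet size
TOTAL = ALPHABET ** L_PROT         # whole sequence space, ~1.2e104

def n_at_distance(k: int) -> int:
    """Sequences exactly k substitutions away from a fixed sequence:
    choose k positions, then one of 19 alternative residues at each."""
    return comb(L_PROT, k) * (ALPHABET - 1) ** k

reachable = sum(n_at_distance(k) for k in range(1, 4))  # 1 to 3 steps
fraction = reachable / TOTAL
print(n_at_distance(1))            # 1520, matching the quoted figure
print(f"fraction explored ~ 1e{log10(fraction):.0f}")
```

With the corrected counts the reachable set is about 5.6 x 10^8 sequences, roughly one part in 10^95 of the space, consistent with the order of magnitude DNA_Jock quotes.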
DNA_Jock at TSZ:
In comment #583, in response to Entropy asking for a definition of functional information, you linked to the whole TSS debacle — “An attempt at computing dFSCI for English language.” I encourage UDites to read that conversation, and see if they can figure out who was right, who was wrong, and who, err, “went ballistic”.
Of course they can. You were wrong. You are wrong. I did not want to reopen the discussion, but I will. I will post on that, as soon as I have time. I hope you will answer. gpuccio
dazz at TSZ points to the Yarus hypothesis (through Venema!). How original. Of course, that has been debated many times here, by me and others. Even dazz's final sum-up is almost reasonable in its tone, which is amazing: "So the “arbitrary” DNA “code” could have had a chemical origin after all" dazz, it is arbitrary. Its origin is another matter altogether. If you are convinced of Yarus' arguments, good for you. I am not. gpuccio
Corneel at TSZ: April 6, 2018 at 1:48 pm Ah, what a pleasure to discuss with someone who is reasonable! :)
No, this is not what I meant. Having not worked with bit scores before, I have no intuition for them. I realise they are derived from raw alignment scores, but they receive some transformation. Another thing is: as you travel back in time the sequence similarity will not fall in a linear way, because there will be saturation of AA that were already substituted twice or more.
I am not sure what you mean. However, the BLAST bitscore is indeed based on transformations of the raw scores, and that's the reason why it assigns a maximum value of about 2.2 bits per aminoacid for identity, while the potential information value is about 4.3 bits. The reason for that is that the aim of the bitscore is to derive E-values (the probability of observing that level of homology by chance, given the procedure that has been used in the alignment and the number of comparisons made).
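For readers unfamiliar with bitscores: under BLAST's standard Karlin-Altschul statistics, a normalized bit score S' converts to an expected number of chance hits as E = m * n * 2^(-S'), where m and n are the effective query and database lengths. A minimal sketch of that relation (the database size below is a made-up illustrative figure, not taken from the comment):

```python
def evalue(bitscore: float, query_len: int, db_len: float) -> float:
    """Karlin-Altschul E-value from a normalized bit score:
    E = m * n * 2**(-S'), the expected number of chance alignments
    scoring at least S' in a search of this size."""
    return query_len * db_len * 2.0 ** (-bitscore)

# Ubiquitin aligned with itself scores ~167 bits (about 2.2 bits per
# aminoacid over 76 residues). Against a hypothetical 5e8-residue
# database, the expected number of chance hits is vanishingly small:
print(evalue(167, 76, 5e8))
```

This is why the bitscore tops out near 2.2 bits per identical aminoacid rather than the naive log2(20) = 4.3 bits: it is calibrated for computing E-values, not for counting raw sequence information.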
Bottom line: I suspect that the plots you make will always produce an exponential phase, even if the substitution rate remains constant. But I cannot test that idea myself. So that’s the reason I asked you: If we were to plot a protein in your graph that evolves at a constant rate, might we not still observe “information jumps”?.
I am not sure what you mean by "an exponential phase". I will try to clarify. In my plots, you can see the homology between the protein in its human form and the protein in some other group of organisms (best hit). It is expressed in bits per aminoacid. If we blast a protein with itself (for example the human form with the human form) we get about 2.2 bits per aminoacid. Therefore, that's the bitscore value for absolute identity. So, the range of homology of any protein with the human form (which, as explained many times, I use as a probe here) is from 0 to about 2.2. OK? Now, on the x axis I put the approximate times of split of each group of organisms from the human line. So, if you look at Fig. 5 in the OP, you will see that I plot the evolutionary history (in terms of human-conserved information) of three different proteins. Let's concentrate on the pre-vertebrate to vertebrate transition (shown in the plot as deuterostomes-cartilaginous fish). That takes place in about 30 million years, and it's where many important information jumps happen. But you can see that the behaviour of the three proteins is completely different. SIAH1 is already very similar to the human form from the start of metazoa, so it does not show any big variation. BRCA1 is very different from the human form in almost all metazoa, and only in mammals does it acquire part of the possible homology. TRIM62, finally, is completely different from the human form up to pre-vertebrates, and acquires most of the possible homology in the transition from pre-vertebrates to vertebrates (those famous 30 million years). Now, my plot shows how much human-conserved information is acquired at each step. It says nothing about whether that information was acquired, say, in one day (one major step) or gradually during that time window (one AA at a time).
The idea is: acquiring 500 or 1000 bits of functional information in 30 million years (indeed in any realistic time window) is well beyond the power of any model based on RV + NS. But that conclusion, of course, is based on many other reasonings. The plot just shows the amount of the jump at each step. I hope that's clear.
Not sure what you mean by “complex protein functions” seeing as you rejected Entropy’s example of substrate affinity. Could you provide a concrete example?
I mean, as I have always meant, the appearance of functional information linked to a new function beyond some appropriate threshold of complexity. In general, 500 bits will do for any scenario (that's Dembski's UPB). The above mentioned TRIM62 at the vertebrate transition is a very good example. It exhibits a jump of: 1.433684 baa x 475 (AA length of the protein) = 681 bits. Another good example would be the alpha and beta chains of ATP synthase. See my comment #713. But there are thousands of such examples, and I have mentioned many in my different OPs, starting with Prickle1. Ubiquitin is a rather short protein, but highly conserved. Its maximum potential for functional information is (in bitscore) about 167 bits. And it acquires almost all of it in eukaryotes. It's not the best example, but I would still make a design inference. But, as it's not the best example, I will not explain why here. The paper you link about archaea is interesting, but it would require a separate discussion. In brief, I would say that: a) It is a very isolated find, to be confirmed, and it could in principle be a case of HGT (the authors discuss that, and exclude it with good arguments, but I would not say that it is settled). b) It is, however, very different from the eukaryotic system: just as an example, it lacks an E3 ligase (although it has a RING domain), and the ubiquitin-like protein has only 33% identity to ubiquitin (22 identities, 45.1 bits). But again, I would not make a case here for the ubiquitin molecule itself. But, as shown in my OP, the most amazing amount of specific functional information is to be found in the 600+ E3 ligases: each of them is highly specific for one target or for a group of targets. I definitely make a case for them, starting with TRIM62 as an example.
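The bit totals quoted in this comment come from a simple multiplication of bits per aminoacid by protein length; a quick check, using only the figures stated in the comment itself (1.433684 baa over 475 AAs for TRIM62; the ~2.2 baa identity ceiling over ubiquitin's 76 AAs):

```python
def total_bits(bits_per_aa: float, length: int) -> float:
    """Total human-conserved information as used in the comment:
    bitscore per aligned aminoacid times the protein length."""
    return bits_per_aa * length

trim62_jump = total_bits(1.433684, 475)   # ~681 bits, as quoted
ubiquitin_max = total_bits(2.2, 76)       # ~167 bits, the identity ceiling
print(round(trim62_jump), round(ubiquitin_max))
```

Both products reproduce the quoted totals, so the 681-bit and 167-bit figures are internally consistent with the per-aminoacid values given.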
The evidence says that all complex stuff, including spliceosome and ubiquitination, started out simple.
I don't agree. There is no such evidence at all. Even the archaeal system you mentioned is complex, not simple. Only different. Of course, part of the functional information in older systems is reused in younger systems: that's very clear. But a lot of new functional information is constantly added, in jumps and without any evidence of graduality. gpuccio
Entropy at TSZ: April 6, 2018 at 1:29 pm
By definition, an arbitrary mapping does not depend on “laws.” “Laws” are mathematical descriptions of deterministic phenomena in nature.
Correct. So you do understand what a definitions is. Good.
A symbolic code is not just an arbitrary mapping, but a consciously generated one.
Maybe according to your definition. I have always made it clear that I do not mean that; I only mean that it is a system that uses an arbitrary mapping. That's the only meaning that I have given to the term in all my reasonings. See for example my explicit definition at #719, in response to GlenDavidson, in case you have not read it, and hoping that you still remember what a definition is. I quote here the relevant part, for your convenience:
Look, I don’t know how to explain it to you, but I will try just the same. In all my reasonings, I use the word “symbol”, and “symbolic code” exclusively to mean what I have included in my definition, that you can find in explicit form at comment #590. “A semiotic system is a system which uses some form of symbolic code. A symbolic code is a code where something represents something else by some arbitrary mapping.” It’s very simple, and objective, It has nothing to do with all your philosophical “arguments”.
I think that my definition is in perfect accord with the common use of the word, as shown by my Wikipedia quote, and especially with the use of the word in biology. You can disagree with that, but I hope that even you can see that I have never said that a symbolic code is "a consciously generated one". That's only what you say. I don't use the term in that sense. See also DATCG's very good comment at #734, and the many references in it. The evidence that I don't use the term "symbolic code" in the sense of "a consciously generated one" is in the simple fact that I make the empiric argument that symbolic codes are never observed in non design systems: perhaps even you can understand that what I mean is that they could be observed in non design systems (because they are not by definition "consciously generated"), but they are not observed. Fact. You say:
Also, here he’s defining nature as being just about deterministic phenomena, while somewhere else he defines it as being just about stochastic events (so that nature alone would not be able to generate “functional” information).
Please, check my comment #620. I have given three explicit and clear possible definitions of nature, and I have said very clearly that I was choosing the second: 2) All that can be observed as the only definition I would use in all my scientific discussions. That's exactly what I have done. If you disagree, please quote any part of my discussions where I use another definition.
Of course, arbitrary “mappings” can appear in nature by the combination of “law-like” phenomena and stochastic events.
I have simply stated that it is never observed in non design systems. Have you any counter-example?
It’s equivocation by redefinition.
The only equivocation, of course, is in your faulty understanding of what I clearly say. As shown above. gpuccio
Corneel at TSZ: April 6, 2018 at 1:09 pm Thank you for your clarification. You say:
Vice versa, if common descent were shown not to be true (hey, just a mental hypothesis) for example because all extant species were created independently in the past, then they could still be evolving by the neo-darwinian process of natural selection of beneficial genetic variants. Natural selection is a within-population process, see?
That's correct. But my point was simply that in that case (that neither you nor I believe to be true) the neo-darwinian process could certainly happen, but it would not be the explanation of the functional information that makes species different. OK, I think that's enough with mental experiments. We agree on common descent, so let's not waste time on that. gpuccio
The scientifically illiterate OMagain posts:
Anyone who actually thinks there is something to ID seems to quickly run out of steam as far as actual science goes.
Then it is strange that IDists are the only ones discussing and presenting science. Why is that? You don't have a scientific theory. You don't have any testable hypotheses based on the posited mechanisms. You don't even have a methodology to test your claims. So clearly you are all just a bunch of deluded children. And your posts prove it. But hey you are providing evidence that you have chimps for relatives. Nicely done. ET
DNA Jock posts:
Imagine a 80aa protein motif.
You have to as you don't have a mechanism capable of producing one. :razz: These people are soooooo clueless. ET
Origenes: "So, again, we see a link with consciousness, or, perhaps, a specific aspect of consciousness." Interesting thoughts. I think that many things that happen in us have an intuitive source, at some level of consciousness. But this is a much wider discourse! :) gpuccio
GPuccio:
“What we see, rather, is a continual mutual adaptation, interaction, and coordination that occurs from above.” [Talbott]
Absolutely!
I know it is very much off-topic, but I would like to run this thought by you: Would you agree with me when I say that this mysterious coordination from above, this 'mastery with parts' if you will, is comparable with how we use language? Or rather how one thinks? Here it is not proteins, but, instead, ideas, words and concepts which are constantly recombined into new contexts. And the mystery is the same: the coherence of it all and the effortlessness which accompanies it. How is it that we can talk, form coherent sentences, without carefully preparing them in advance? When I contemplate these matters I feel a similar awe as when I read about the goings on in the cell. So, again, we see a link with consciousness, or, perhaps, a specific aspect of consciousness. Origenes
ET (and Alan Fox at TSZ): I have not the time, or the will, to read Alan Fox's comment now. So, I should not answer. However, from the simple phrase that you quote, I would like to say a couple of things. Alan Fox: a) I thought that adding insults should be discouraged for better reasons than pure strategy. Don't you agree? b) You may believe it or not, but I have always tried to answer anything that seemed intelligible or made a minimum of sense, even if it was part of a comment full of insults. See for example my comment #723, where I answer as well as I can the intelligible parts of GlenDavidson's comment of April 5, 2018 at 9:37 pm, and then, at the end, I list, just for reference, the 42 (I think) assorted insults that can be found in the same comment. So, as you can see and check, I am not at all "dismissing those comments as insulting". I answer what I can and what I consider worthwhile. The things that I do not answer are the things that I don't consider worthwhile. I hope you have the decency to leave me at least this basic freedom of thought. The insults, I just acknowledge. There is not much to comment about them; they speak for themselves. gpuccio
Upright Biped @737 You bet, wish I had more time to discuss, learn and contribute. But blind, unguided walking, talking mutation creations keep interrupting me ;-) Always wanted a Wookiee, but found out they aren't real "thanks for nothing George Lucas" DATCG
Origenes at #747: Wonderful comment! :) I absolutely agree. That's more or less what I mean when I say that we are still missing the true procedures, IOWs the coordinating information, or maybe more than information, that makes everything run. "What we see, rather, is a continual mutual adaptation, interaction, and coordination that occurs from above." Absolutely! gpuccio
Oh my- Alan Fox posts:
Probably too late but can I suggest that adding insults to otherwise cogent and convincing rebuttals is counter-productive, allowing gpuccio to dismiss those comments as insulting.
He actually believes they have "cogent and convincing rebuttals"- what a desperate fool Alan is. ET
GPuccio: But what about ubiquitin itself? I mean the molecule? Is it regulated? Of course it is!
Which points to a well-known conundrum: decisions are made, but how? And by what?
When regulators are in turn regulated, what do we mean by “regulate” — and where within the web of regulation can we single out a master controller capable of dictating cellular fates? And if we can’t, what are reputable scientists doing when they claim to have identified such a controller, or, rather, various such controllers? If they really mean something like “influencers,” then that’s fine. But influence is not about mechanism and control; the factors at issue just don’t have controlling powers. What we see, rather, is a continual mutual adaptation, interaction, and coordination that occurs from above. What we see, that is — once we start following out all the interactions at a molecular level — is not some mechanism dictating the fate or controlling an activity of the organism, but simply an organism-wide coherence — a living, metamorphosing form of activity — within which the more or less distinct partial activities find their proper place. [S. Talbott]
GPuccio: Another point I would like to clarify: life is not the same thing as functional complexity. ID is about functional complexity, not about life.
Perhaps Talbott (above) points to one reason as to why this is justified. Origenes
DATCG and all: And the recognition (translation) of the tags is not simple stuff. See here (March 2015): Structural Basis for Ubiquitin Recognition by Ubiquitin-Binding Zinc Finger of FAAP20 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4370504/
Abstract: Several ubiquitin-binding zinc fingers (UBZs) have been reported to preferentially bind K63-linked ubiquitin chains. In particular, the UBZ domain of FAAP20 (FAAP20-UBZ), a member of the Fanconi anemia core complex, seems to recognize K63-linked ubiquitin chains, in order to recruit the complex to DNA interstrand crosslinks and mediate DNA repair. By contrast, it is reported that the attachment of a single ubiquitin to Rev1, a translesion DNA polymerase, increases binding of Rev1 to FAAP20. To clarify the specificity of FAAP20-UBZ, we determined the crystal structure of FAAP20-UBZ in complex with K63-linked diubiquitin at 1.9 Å resolution. In this structure, FAAP20-UBZ interacts only with one of the two ubiquitin moieties. Consistently, binding assays using surface plasmon resonance spectrometry showed that FAAP20-UBZ binds ubiquitin and M1-, K48- and K63-linked diubiquitin chains with similar affinities. Residues in the vicinity of Ala168 within the α-helix and the C-terminal Trp180 interact with the canonical Ile44-centered hydrophobic patch of ubiquitin. Asp164 within the α-helix and the C-terminal loop mediate a hydrogen bond network, which reinforces ubiquitin-binding of FAAP20-UBZ. Mutations of the ubiquitin-interacting residues disrupted binding to ubiquitin in vitro and abolished the accumulation of FAAP20 to DNA damage sites in vivo. Finally, structural comparison among FAAP20-UBZ, WRNIP1-UBZ and RAD18-UBZ revealed distinct modes of ubiquitin binding. UBZ family proteins could be divided into at least three classes, according to their ubiquitin-binding modes.
gpuccio
ET: Maybe Moran too is a "pseudoscientific sellout." :) I always advise my neo-darwinist friends: beware of neutralists! :) gpuccio
DATCG and all: Let's go back to serious stuff. We have seen that ubiquitin and the ubiquitin system contribute to the regulation of almost everything that happens in the cell. That's beautiful, interesting and amazing. But what about ubiquitin itself? I mean the molecule? Is it regulated? Of course it is! :) See here (November 2017): Ubiquitin turnover and endocytic trafficking in yeast are regulated by Ser57 phosphorylation of ubiquitin https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5706963/
Abstract: Despite its central role in protein degradation little is known about the molecular mechanisms that sense, maintain, and regulate steady state concentration of ubiquitin in the cell. Here, we describe a novel mechanism for regulation of ubiquitin homeostasis that is mediated by phosphorylation of ubiquitin at the Ser57 position. We find that loss of Ppz phosphatase activity leads to defects in ubiquitin homeostasis that are at least partially attributable to elevated levels of Ser57 phosphorylated ubiquitin. Phosphomimetic mutation at the Ser57 position of ubiquitin conferred increased rates of endocytic trafficking and ubiquitin turnover. These phenotypes are associated with bypass of recognition by endosome-localized deubiquitylases - including Doa4 which is critical for regulation of ubiquitin recycling. Thus, ubiquitin homeostasis is significantly impacted by the rate of ubiquitin flux through the endocytic pathway and by signaling pathways that converge on ubiquitin itself to determine whether it is recycled or degraded in the vacuole.
The following must certainly have been written by some “brain-damaged pseudoscientist”:
The ubiquitin code is highly complex, and modification by conjugation to ubiquitin can alter the fate of substrate proteins by promoting degradation, altering subcellular localization, or altering interactions with binding partners. The complexity of the ubiquitin code is underscored by the fact that ubiquitin can polymerize at any of seven internal lysines (or the N-terminus) leading to chains of different linkage types, each with a unique structure that can be interpreted differently, and that mixed-linkage or branched chains are also possible. More recent work has also led to a consensus that post-translational modifications of ubiquitin (other than polymerization) can alter its function (Herhaus and Dikic, 2015; Zheng and Hunter, 2014) - making the ubiquitin code as we know it even more complex than previously appreciated.
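As a back-of-the-envelope illustration of the combinatorics described in the quoted passage, here is a toy Python sketch (my own, not from the paper). It only enumerates the eight known chain-attachment points on ubiquitin (the N-terminal Met plus seven lysines) and counts unbranched linkage patterns; real chains can also branch and carry further post-translational modifications, so the count is a lower bound on the code's complexity.

```python
# The eight attachment points through which a ubiquitin chain can be
# extended: the N-terminal methionine (M1) and seven internal lysines.
LINKAGE_SITES = ["M1", "K6", "K11", "K27", "K29", "K33", "K48", "K63"]

def homotypic_chains():
    """Chain classes built through a single linkage type throughout."""
    return [f"{site}-linked" for site in LINKAGE_SITES]

def mixed_chain_patterns(n):
    """Count the linkage-type sequences possible for an unbranched chain
    of n links: each link may independently use any of the 8 sites."""
    return len(LINKAGE_SITES) ** n

print(homotypic_chains())       # 8 homotypic chain classes (K48, K63, ...)
print(mixed_chain_patterns(4))  # 8**4 = 4096 unbranched 4-link patterns
```

Even this deliberately simplified model shows why readers of the chains (the ubiquitin-binding domains discussed elsewhere in the thread) need linkage-specific recognition.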
From the Discussion section:
For over a decade, post-translational modifications of ubiquitin have been detected in phospho-proteomic analysis of cells from multiple eukaryotic species (Olsen et al., 2006; Peng et al., 2003; Rikova et al., 2007; Swaney et al., 2013; Villén et al., 2007) but the functional significance of these modifications has only recently come into focus. Our data reveal that phosphorylation of ubiquitin at the Ser57 position is regulated by Ppz phosphatases in yeast, and elevated Ser57 phosphorylation in ppz mutants contributes to ubiquitin deficiency as well as other phenotypes. Furthermore, the data presented suggests that Ser57 phosphorylation of ubiquitin promotes both endocytic trafficking and ubiquitin degradation – two in vivo effects that can be explained by decreased susceptibility to cleavage by endosomal deubiquitylases as observed in vitro for Doa4. Finally, we show that vacuolar degradation is the primary pathway for ubiquitin turnover in yeast cells, underscoring the Doa4-mediated recycling of ubiquitin as a critical point of regulation for global ubiquitin metabolism. Thus, we propose that Ser57 phosphorylation of ubiquitin can function as a regulatory switch to control ubiquitin recycling along the endocytic route, although the low observed stoichiometry of this modification suggests that this mode of regulation may occur transiently and only affect limited pools of ubiquitin (e.g. on the endosome).
Who watches the Watchmen? (Juvenal, Alan Moore) gpuccio
Read it for yourself: The Real Genetic Code Of course he thinks evolution did it, but at least he acknowledges that it is a real code ET
ET: We have Larry Moran on our side on that? Don't tell me! :) gpuccio
bill cole: "This reminds me of when Beamon broke the long jump record by 3 feet in 1968." Yes! :) I must definitely have struck a sensitive chord. gpuccio
DATCG:
What they cannot do however is pretend it is not Code.
And yet some of them do- not all. Heck, even Larry Moran accepts that the genetic code is a real code. TSZ's Allan Miller is famous for saying it isn't a code. Glen Davidson says it isn't a symbolic code ET
DATCG at 734: Very good stuff about codes, thank you! :) A symbolic code, IMO, points to design for two reasons: a) The rationale of a relationship between consciousness and a code is rather obvious. After all, a code is a system where an arbitrary mapping has been realized and embedded in the system itself. It's perfectly reasonable to assume that the conscious experience of understanding meaning is the natural origin of a code. After all, meaning itself is about projecting mental structure on physical realities, such as the cause and effect relationship, and of course abstract codes like concepts and words. There is nothing more natural, then, for conscious intelligent beings than to work by symbolic codes. b) On the other hand, the most important aspect is that, empirically, symbolic codes are never observed in non-design systems. This is similar to what we can observe for functional information, but here there is a purely formal aspect which does not depend strictly on complexity, as I have discussed in comment #498. gpuccio
gpuccio
“Just couldn’t face the stupidity of your illogic, could you?” “Which you’d deal with if you were intellectually honest.” “Yeah, if I were as dishonest as you are.” “You’re a shameless believer” “How fucking stupid your response is.” “clearly you’re too dumb and/or dishonest to recognize that your idiotic response has fuck all to do with” “Look, dumbshit, ” “you’re too much a dull and dishonest bozo” “not the dishonest bullshit that you swill from mendacious morons” “What a retard you are.” “an idiot like you” “stupidly thinking” “you’re a pseudoscientific sellout.” “Yes, dumbass.” “your asinine claim” “your fucking lies” “How fucking retarded are you, shithead?” “Get it through your damaged brain ” “you’re too fucked in the head” “you just blither on with your mindless drivel ” “too ignorant and stupid to understand.” “brain-damaged pseudoscientist” “shithead” “disingenuous fool” “dumb as you are about everything” “no praise for your endless stupidity” “you’re still stupid, and you’re too rude” “you dull dull fool” “you’re too idiotic” “you’re too stupid” “just make up shit” “mendacious claims” “writing inanities.” “Look, stupid fuck,” “Oh wow, a stupid fuck telling us” “A stupid fuck” “Too damned stupid to recognize” “No you don’t, moron, you’re too dumb even to understand it. ” “You do try to rubbish what you don’t understand with your insipid lies about “opinion.”” “dishonest sarcasm” “revealing your character” “You’re too dumb” “You’re a worthless interlocutor, because you begin stupid, and then you merely accentuate your stupidity whenever you’re called on it.” All that in a single comment? Wow!
This reminds me of when Beamon broke the long jump record by 3 feet in 1968. bill cole
DATCG, I don't know who you are, but this website needs you to stay put. Upright BiPed
GP: "It's hopeless." Yes, it's hopeless with terrified anti-science loons like those two. But of course, it all makes perfect sense. What awaits them if they were to give up their howling idiocy over your definition of "arbitrary"? What would Entropy be left with if he stopped screaming like an idiot? Having to deal with physical realities such as semiosis, and semantic closure, and the irreducible relationship of representations and constraints ... YIKEZ. They have absolutely no motivation to stop what they are doing. In every way possible you defended your statements, and remained fully prepared to continue. GP +1 Upright BiPed
All the blah, blah, blah and no one has even tried to support the claim that the ubiquitin system arose via natural selection or drift. It's as if they are very, very afraid to try to support their own claims. It is much, much easier to make a mess poo-poo'ing ID with trollish ignorance. Nicely played, TSZ- not ET
Gpuccio @ All the hatred, rage, vitriol and bitterness does not put poor Darwin back together again. "Modern Synthesis is Dead" - Allen MacNeill, Evolutionist, Cornell University. Codes, Symbols and Mappings are not that controversial and have been recognized by Evolutionists for decades now, even while keeping a materialist view. From Code Biology (w/ thanks to Upright BiPed's previous mention of Barbieri, Pattee, et al.):
Codes and conventions are the basis of our social life and from time immemorial have divided the world of culture from the world of nature. The rules of grammar, the laws of government, the precepts of religion, the value of money, the rules of chess etc., are all human conventions that are profoundly different from the laws of physics and chemistry, and this has led to the conclusion that there is an unbridgeable gap between nature and culture. Nature is governed by objective immutable laws, whereas culture is produced by the mutable conventions of the human mind. In this millennia-old framework, the discovery of the genetic code, in the early 1960s, came as a bolt from the blue, but strangely enough it did not bring down the barrier between nature and culture. On the contrary, a protective belt was quickly built around the old divide with an argument that effectively emptied the discovery of all its revolutionary potential. The argument that the genetic code is not a real code because its rules are the result of chemical affinities between codons and amino acids and are therefore determined by chemistry. This is the ‘Stereochemical theory’, an idea first proposed by George Gamow in 1954, and re-proposed ever since in many different forms (Pelc and Welton 1966; Dunnil 1966; Melcher 1974; Shimizu 1982; Yarus 1988, 1998; Yarus, Caporaso and Knight 2005). More than fifty years of research have not produced any evidence in favour of this theory and yet the idea is still circulating, apparently because of the possibility that stereochemical interactions might have been important at some early stages of evolution (Koonin and Novozhilov 2009). The deep reason is probably the persistent belief that the genetic code must have been a product of chemistry and cannot possibly be a real code. But what is a real code? 
The starting point is the idea that a code is a set of rules that establish a correspondence, or a mapping, between the objects of two independent worlds (Barbieri 2003). The Morse code, for example, is a mapping between the letters of the alphabet and groups of dots and dashes. The highway code is a correspondence between street signals and driving behaviours (a red light means ‘stop’, a green light means ‘go’, and so on). What is essential in all codes is that the coding rules, although completely compatible with the laws of physics and chemistry, are not dictated by these laws. In this sense they are arbitrary, and the number of arbitrary relationships between two independent worlds is potentially unlimited. In the Morse code, for example, any letter of the alphabet could be associated with countless combinations of dots and dashes, which means that a specific link between them can be realized only by selecting a small number of rules. And this is precisely what a code is: a small set of arbitrary rules selected from a potentially unlimited number in order to ensure a specific correspondence between two independent worlds. This definition allows us to make experimental tests because organic codes are relationships between two worlds of organic molecules and are necessarily implemented by a third type of molecules, called adaptors, that build a bridge between them. The adaptors are required because there is no necessary link between the two worlds, and a fixed set of adaptors is required in order to guarantee the specificity of the correspondence. The adaptors, in short, are the molecular fingerprints of the codes, and their presence in a biological process is a sure sign that that process is based on a code. This gives us an objective criterion for discovering organic codes and their existence is no longer a matter of speculation. It is, first and foremost, an experimental problem. 
More precisely, we can prove that an organic code exists, if we find three things: (1) two independents worlds of molecules, (2) a set of adaptors that create a mapping between them, and (3) the demonstration that the mapping is arbitrary because its rules can be changed, at least in principle, in countless different ways. Two outstanding examples The genetic code In protein synthesis, a sequence of nucleotides is translated into a sequence of amino acids, and the bridge between them is realized by a third type of molecules, called transfer-RNAs, that act as adaptors and perform two distinct operations: - at one site they recognize groups of three nucleotides, called codons, and - at another site they receive amino acids from enzymes called aminoacyl-tRNA-synthetases. The key point is that there is no deterministic link between codons and amino acids since it has been shown that any codon can be associated with any amino acid (Schimmel 1987; Schimmel et al. 1993).
Evidence of Code:
Hou and Schimmel (1988), for example, introduced two extra nucleotides in a tRNA and found that that the resulting tRNA was carrying a different amino acid. This proved that the number of possible connections between codons and amino acids is potentially unlimited, and only the selection of a small set of adaptors can ensure a specific mapping. This is the genetic code: a fixed set of rules between nucleic acids and amino acids that are implemented by adaptors. In protein synthesis, in conclusion, we find all the three essential components of a code: (1) two independents worlds of molecules (nucleotides and amino acids), (2) a set of adaptors that create a mapping between them, and (3) the proof that the mapping is arbitrary because its rules can be changed. The signal transduction codes Signal transduction is the process by which cells transform the signals from the environment, called first messengers, into internal signals, called second messengers. First and second messengers belong to two independent worlds because there are literally hundreds of first messengers (hormones, growth factors, neurotransmitters, etc.) but only four great families of second messengers (cyclic AMP, calcium ions, diacylglycerol and inositol trisphosphate) (Alberts et al. 2007). The crucial point is that the molecules that perform signal transduction are true adaptors. They consists of three subunits: a receptor for the first messengers, an amplifier for the second messengers, and a mediator in between (Berridge 1985). This allows the transduction complex to perform two independent recognition processes, one for the first messenger and the other for the second messenger. Laboratory experiments have proved that any first messenger can be associated with any second messenger, which means that there is a potentially unlimited number of arbitrary connections between them. 
In signal transduction, in short, we find all the three essential components of a code: (1) two independents worlds of molecules (first messengers and second messengers), (2) a set of adaptors that create a mapping between them, and (3) the proof that the mapping is arbitrary because its rules can be changed (Barbieri 2003). A world of organic codes In addition to the genetic code and the signal transduction codes, a wide variety of new organic codes have come to light in recent years. Among them: the sequence codes (Trifonov 1987, 1989, 1999), the Hox code (Paul Hunt et al. 1991; Kessel and Gruss 1991), the adhesive code (Redies and Takeichi 1996; Shapiro and Colman 1999), the splicing codes (Barbieri 2003; Fu 2004; Matlin et al. 2005; Pertea et al. 2007; Wang and Burge 2008; Barash et al. 2010; Dhir et al. 2010), the signal transduction codes (Barbieri 2003), the histone code (Strahl and Allis 2000; Jenuwein and Allis 2001; Turner 2000, 2002, 2007; Kühn and Hofmeyr 2014), the sugar code (Gabius 2000, 2009), the compartment codes (Barbieri 2003), the cytoskeleton codes (Barbieri 2003; Gimona 2008), the transcriptional code (Jessell 2000; Marquard and Pfaff 2001; Ruiz i Altaba et al. 2003; Flames et al. 2007), the neural code (Nicolelis and Ribeiro 2006; Nicolelis 2011), a neural code for taste (Di Lorenzo 2000; Hallock and Di Lorenzo 2006), an odorant receptor code (Dudai 1999; Ray et al. 2006), a space code in the hippocampus (O’Keefe and Burgess 1996, 2005; Hafting et al. 2005; Brandon and Hasselmo 2009; Papoutsi et al. 2009), the apoptosis code (Basañez and Hardwick 2008; Füllgrabe et al. 2010), the tubulin code (Verhey and Gaertig 2007), the nuclear signalling code (Maraldi 2008), the injective organic codes (De Beule et al. 2011), the molecular codes (Görlich et al. 
2011; Görlich and Dittrich 2013), the ubiquitin code (discussed and elaborated on in this post) (Komander and Rape 2012), the bioelectric code (Tseng and Levin 2013; Levin 2014), the acoustic codes (Farina and Pieretti 2014), the glycomic code (Buckeridge and De Souza 2014; Tavares and Buckeridge 2015) and the Redox code (Jones and Sies 2015). The living world, in short, is literally teeming with organic codes, and yet so far their discoveries have only circulated in small circles and have not attracted the attention of the scientific community at large. Code Biology Code Biology is the study of all codes of life with the standard methods of science. The genetic code and the codes of culture have been known for a long time and represent the historical foundation of Code Biology. What is really new in this field is the study of all codes that came after the genetic code and before the codes of culture. The existence of these codes is an experimental fact – let us never forget this – but also more than that. It is one of those facts that have extraordinary theoretical implications. The first is the role that the organic codes had in the history of life. The genetic code was a precondition for the origin of the first cells, the signal transduction codes divided the descendants of the common ancestor into the primary kingdoms of Archaea, Bacteria and Eukarya, the splicing codes were instrumental to the origin of the nucleus, the histone code provided the rules of chromatin, and the cytoskeleton codes allowed the Eukarya to perform internal movements, including those of mitosis and meiosis (Barbieri 2003, 2015). The greatest events of macro evolution, in other words, were associated with the appearance of new organic codes, and this gives us a completely new understanding of the history of life.
The second great implication is the fact that the organic codes have been highly conserved in evolution, which means that they are the great invariants of life, the sole entities that have been perpetuated while everything else has been changed. Code Biology, in short, is uncovering a new history of life and bringing to light new fundamental concepts. It truly is a new science, the exploration of a vast and still largely unexplored dimension of the living world, the real new frontier of biology.
References
Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P (2007) Molecular Biology of the Cell. 5th Ed. Garland, New York. Barash Y, Calarco JA, Gao W, Pan Q, Wang X, Shai O, Blencow BJ and Frey BJ (2010). Deciphering the splicing code. Nature, Vol 465, 53-59. Barbieri M (2003) The Organic Codes. An Introduction to Semantic Biology. Cambridge University Press, Cambridge, UK. Barbieri M (2015) Code Biology. A New Science of Life. Springer, Dordrecht. Basañez G and Hardwick JM (2008) Unravelling the Bcl-2 Apoptosis Code with a Simple Model System. PLoS Biol 6(6): e154. Doi: 10.137/journal.pbio.0060154. Berridge M (1985) The molecular basis of communication within the cell. Scientific American, 253, 142-152. Brandon MP and Hasselmo ME (2009) Sources of the spatial code within the hippocampus. Biology Reports, 1, 3-7. Buckeridge MS and De Souza AP (2014) Breaking the “Glycomic Code” of cell wall polysaccharides may improve second-generation bioenergy production from biomass. BioEnergy Research, 7, 1065-1073. De Beule J, Hovig E and Benson M (2011) Introducing Dynamics into the Field of Biosemiotics. Biosemiotics, 4(1), 5-24. Dhir A, Buratti E, van Santen MA, Lührmann R and Baralle FE, (2010). The intronic splicing code: multiple factors involved in ATM pseudoexon definition. The EMBO Journal, 29, 749–760. Di Lorenzo PM (2000) The neural code for taste in the brain stem: Response profiles. Physiology and Behaviour, 69, 87-96. Dudai Y (1999) The Smell of Representations. Neuron 23: 633-635. Dunnill P (1966) Triplet nucleotide-amino-acid pairing; a stereochemical basis for the division between protein and non-protein amino-acids. Nature, 210, 1267-1268. Farina A and Pieretti N (2014) Acoustic Codes in Action in a Soundscape Context. Biosemiotics, 7(2), 321–328. Flames N, Pla R, Gelman DM, Rubenstein JLR, Puelles L and Marìn O (2007) Delineation of Multiple Subpallial Progenitor Domains by the Combinatorial Expression of Transcriptional Codes. 
The Journal of Neuroscience, 27, 9682–9695. Fu XD (2004) Towards a splicing code. Cell, 119, 736–738. Füllgrabe J, Hajji N and Joseph B (2010) Cracking the death code: apoptosis-related histone modifications. Cell Death and Differentiation, 17, 1238-1243. Gabius H-J (2000) Biological Information Transfer Beyond the Genetic Code: The Sugar Code. Naturwissenschaften, 87, 108-121. Gabius H-J (2009) The Sugar Code. Fundamentals of Glycosciences. Wiley-Blackwell. Gamow G (1954) Possible relation between deoxyribonucleic acid and protein structures. Nature, 173, 318. Gimona M (2008) Protein linguistics and the modular code of the cytoskeleton. In: Barbieri M (ed) The Codes of Life: The Rules of Macroevolution. Springer, Dordrecht, pp 189-206. Görlich D, Artmann S, Dittrich P (2011) Cells as semantic systems. Biochim Biophys Acta, 1810 (10), 914-923. Görlich D and Dittrich P (2013) Molecular codes in biological and chemical reaction networks. PLoS ONE 8(1):e54,694, DOI 10.1371/journal.pone.0054694. Hafting T, Fyhn M, Molden S, Moser MB, Moser EI (2005) Microstructure of a spatial map in the entorhinal cortex. Nature, 436, 801-806. Hallock RM and Di Lorenzo PM (2006) Temporal coding in the gustatory system. Neuroscience and Behavioral Reviews, 30, 1145-1160. Hou Y-M and Schimmel P (1988) A simple structural feature is a major determinant of the identity of a transfer RNA. Nature, 333, 140-145. Hunt P, Whiting J, Nonchev S, Sham M-H, Marshall H, Graham A, Cook M, Alleman R, Rigby PW and Gulisano M (1991) The branchial Hox code and its implications for gene regulation, patterning of the nervous system and head evolution. Development, 2, 63-77. Jenuwein T and Allis CD (2001) Translating the histone code. Science, 293, 1074-1080. Jessell TM (2000) Neuronal Specification in the Spinal Cord: Inductive Signals and Transcriptional Codes. Nature Genetics, 1, 20-29. Jones DP and Sies H (2015) The Redox Code. Antioxidants and Redox Signaling, 23 (9), 734-746. 
Kessel M and Gruss P (1991) Homeotic Tansformation of Murine Vertebrae and Concomitant Alteration of Hox Codes induced by Retinoic Acid. Cell, 67, 89-104. Komander D and Rape M (2012), The Ubiquitin Code. Annu. Rev. Biochem. 81, 203–29. Koonin EV and Novozhilov AS (2009) Origin and evolution of the genetic code: the universal enigma. IUBMB Life. 61(2), 99-111. Kühn S and Hofmeyr J-H S (2014) Is the “Histone Code” an organic code? Biosemiotics, 7(2), 203–222. Levin M (2014) Endogenous bioelectrical networks store non-genetic patterning information during development and regeneration. Journal of Physiology, 592.11, 2295–2305. Maraldi NM (2008) A Lipid-based Code in Nuclear Signalling. In: Barbieri M (ed) The Codes of Life: The Rules of Macroevolution. Springer, Dordrecht, pp 207-221. Marquard T and Pfaff SL (2001) Cracking the Transcriptional Code for Cell Specification in the Neural Tube. Cell, 106, 651–654. Matlin A, Clark F and Smith C (2005) Understanding alternative splicing: towards a cellular code. Nat. Rev. Mol. Cell Biol., 6, 386-398. Melcher G (1974) Stereospecificity and the genetic code. J. Mol. Evol., 3, 121-141. Nicolelis M (2011) Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines and How It Will Change Our Lives.Times Books, New York. Nicolelis M and Ribeiro S (2006) Seeking the Neural Code. Scientific American, 295, 70-77. O'Keefe J, Burgess N (1996) Geometric determinants of the place fields of hippocampal neurons. Nature, 381, 425-428. O’Keefe J, Burgess N (2005) Dual phase and rate coding in hippocampal place cells: theoretical significance and relationship to entorhinal grid cells. Hippocampus, 15, 853-866. Papoutsi M, de Zwart JA, Jansma JM, Pickering MJ, Bednar JA and Horwitz B (2009) From Phonemes to Articulatory Codes: An fMRI Study of the Role of Broca’s Area in Speech Production. Cerebral Cortex,19, 2156 – 2165. Pelc SR and Weldon MGE (1966) Stereochemical relationship between coding triplets and amino-acids. 
Nature, 209, 868-870. Pertea M, Mount SM, Salzberg SL (2007) A computational survey of candidate exonic splicing enhancer motifs in the model plant Arabidopsis thaliana. BMC Bioinformatics, 8, 159. Ray A, van der Goes van Naters W, Shiraiwa T and Carlson JR (2006) Mechanisms of Odor Receptor Gene Choice in Drosophila. Neuron, 53, 353-369. Redies C and Takeichi M (1996) Cadherine in the developing central nervous system: an adhesive code for segmental and functional subdivisions. Developmental Biology, 180, 413-423. Ruiz i Altaba A, Nguien V and Palma V (2003) The emergent design of the neural tube: prepattern, SHH morphogen and GLI code. Current Opinion in Genetics & Development, 13, 513–521. Schimmel P (1987) Aminoacyl tRNA synthetases: General scheme of structure-function relationship in the polypeptides and recognition of tRNAs. Ann. Rev. Biochem., 56, 125-158. Schimmel P, Giegé R, Moras D and Yokoyama S (1993) An operational RNA code for amino acids and possible relationship to genetic code. Proceedings of the National Academy of Sciences USA, 90, 8763-8768. Shapiro L and Colman DR (1999) The Diversity of Cadherins and Implications for a Synaptic Adhesive Code in the CNS. Neuron, 23, 427-430. Shimizu M (1982) Molecular basis for the genetic code. J. Mol. Evol., 18, 297-303. Strahl BD and Allis D (2000) The language of covalent histone modifications. Nature, 403, 41-45. Tavares EQP and Buckeridge MS (2015) Do plant cells have a code? Plant Science, 241, 286-294. Trifonov EN (1987) Translation framing code and frame-monitoring mechanism as suggested by the analysis of mRNA and 16s rRNA nucleotide sequence. Journal of Molecular Biology, 194, 643-652. Trifonov EN (1989) The multiple codes of nucleotide sequences. Bulletin of Mathematical Biology, 51: 417-432. Trifonov EN (1999) Elucidating Sequence Codes: Three Codes for Evolution. Annals of the New York Academy of Sciences, 870, 330-338. Tseng AS and Levin M (2013) Cracking the bioelectric code. 
Probing endogenous ionic controls of pattern formation. Communicative & Integrative Biology, 6(1), 1–8. Turner BM (2000) Histone acetylation and an epigenetic code. BioEssays, 22, 836–845. Turner BM (2002) Cellular memory and the Histone Code. Cell, 111, 285-291. Turner BM (2007) Defining an epigenetic code. Nature Cell Biology, 9, 2-6. Verhey KJ and Gaertig J (2007) The Tubulin Code. Cell Cycle, 6 (17), 2152-2160. Wang Z and Burge C (2008) Splicing regulation: from a part list of regulatory elements to an integrated splicing code. RNA, 14, 802-813. Yarus M (1988) A specific amino acid binding site composed of RNA. Science, 240, 1751-1758. Yarus M (1998) Amino acids as RNA ligands: a direct-RNA-template theory for the code's origin. J. Mol. Evol.,47(1), 109–117. Yarus M, Caporaso JG, and Knight R (2005) Origins of the Genetic Code: The Escaped Triplet Theory. Annual Review of Biochemistry, 74,179-198.
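Barbieri's operational definition quoted above - a code as an arbitrary mapping between two independent worlds, fixed only by a set of adaptors - can be sketched in a few lines of Python. The sketch and its adaptor tables are my own invention, not from Barbieri: the point is only that the same translation machinery (the `translate` function) yields entirely different codes depending solely on which adaptor set is installed, which is what "arbitrary" means here.

```python
# Two different adaptor sets mapping the same codons (world 1) onto
# amino acids (world 2). Both are internally consistent; neither is
# dictated by the codons themselves.
standard_adaptors = {"UUU": "Phe", "AAA": "Lys", "GGG": "Gly"}
rewired_adaptors  = {"UUU": "Lys", "AAA": "Gly", "GGG": "Phe"}

def translate(codons, adaptors):
    # The mapping lives entirely in the adaptor table, not in the
    # "chemistry" of this function, which is identical for both codes.
    return [adaptors[c] for c in codons]

message = ["UUU", "AAA", "GGG"]
print(translate(message, standard_adaptors))  # ['Phe', 'Lys', 'Gly']
print(translate(message, rewired_adaptors))   # ['Lys', 'Gly', 'Phe']
```

This mirrors the Hou and Schimmel experiment cited above: change the adaptors (tRNAs) and the same codon ends up paired with a different amino acid, so the specific correspondence must be fixed by selecting one adaptor set out of countless possibilities.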
So Gpuccio, you're not stating anything more radical here than other evolutionists have accepted. The only difference is you recognize Code by Design, correct? Not an emergent property? But Code is a revolutionary understanding of how life works and of how independent systems communicate through "adapters" that bridge communication and allow two separate functional systems to coordinate together. And it did rather shock the world, as Francis Crick and others were loath to call it Design and warned all materialists not to recognize it as such, but only to call it the "appearance" of Design. OK, materialists can pretend it's not Design. What they cannot do, however, is pretend it is not Code. So the only difference here, based upon experimental evidence, is that Design Theorists accept Code as real Code. And they draw a very logical conclusion: Code is Design. Not sure why so many rage at and hate you for simply following the facts where they lead, to the best inference from the information readily available within cells: fundamental, symbolic communication concepts of a Code of Life. People can rage, hate, insult and name-call all they like. Gnashing of teeth does not change the fact that Code exists and that a reasonable conclusion is that Code comes by intelligent design. I do not need to see or even know a designer to understand that a Code Stream of bits and bytes generating symbolic representations is evidence of Design. This is the very basis of SETI. They do not need to see or even observe a Designer. All they need to do is detect intelligent Code signals. The signals in molecular biology are Code: programmatic, conditional, symbolic code. DATCG
Origenes: "I simply fail to understand their commitment." Me too. After all, they are on the side with power. They are the vast majority in the scientific environment. And they say that they are certain that our arguments are irrelevant. So, what do they fear? Why such a dedication to fight our ideas? It's a true mystery! :) gpuccio
GPuccio @730 It is a scary group of people. I am sitting here wondering what it would be like to be that screwed up, and frankly I do not get anywhere. My problem with these a/mats is perhaps more fundamental, in that I cannot understand their mission. They fight tooth and nail for what, exactly? I simply fail to understand their commitment. Origenes
Upright BiPed: It's hopeless. I tell GlenDavidson that I use "semiotic" exclusively in the sense I have explicitly defined, and in no other sense, and he accuses me of extrapolating to some ill-defined meaning of "symbol" that I have never implied. And he goes ballistic! Those people have a problem with the concept of definitions. They have accused me of being trivial only because I was giving precise and explicit definitions of the terms I was using. On the other hand, they avoid defining their terms as though it were a mortal sin. I have tried to answer all their points that were intelligible to me, and the sheer volume of my posts here is evidence of that, and they go ballistic because: a) I don't read their comments; or b) I don't understand them; or c) I lie to show that they are wrong; or d) I invert reality; or e) I repeat ID bullshit (of course I do: I believe that it is the truth); and so on, and so on. The simple possibility that I read their arguments, understand them, don't agree with them, and try to explain why I don't agree, is never considered. And yet, that's exactly what happens. I read their arguments, understand them (as far as they are understandable), don't agree with them (not a bit), and try to express the reasons why I don't agree. Again, that's apparently a mortal sin at TSZ. A simple position like "OK, you have expressed your ideas, but I don't agree with you", more or less followed by counter-arguments, is not an option, apparently. Indeed, when I simply state: "I take notice of your opinion" I am labeled as culpable of "insipid lies". Because calling an opinion an opinion is, obviously, an insipid lie. As everybody knows. The simple truth is that these people cannot tolerate that others don't agree with them. They are, of course, the repository of absolute truth. All those who believe differently, in particular all ID proponents, are idiots by default. 
Even DNA_Jock, who is certainly an intelligent and competent person, went ballistic with me a few years ago simply because I did not agree with his interpretation and use of the "Texas Sharp Shooter" fallacy as a criticism of ID. And he still remembers it. And believe me, he was (and is) completely wrong about that issue. If you want, I can explain the details. But please, don't tell him. He does not like to hear it! :) gpuccio
Origenes: "Are we at the point that they finally address the arguments? Or are we still in that preliminary stage of misrepresentations, evasions, deliberate misunderstanding and personal attacks?" You judge (see comment #723). gpuccio
I think the poor guy has gone into full-on pendejo at this point. May I offer him a clue? Hey Einstein, in semiotic system analysis, anthropocentric distinctions such as sign, signal, symbol, representation, gesture, signum, icon, index, etc, etc, etc, etc are effectively irrelevant because all such systems require a physical embodiment of the same triadic relation (object, representation, constraint). GP is already aware of this, and you are clearly not. Enjoy the irony. Upright BiPed
To the general group of TSZ commenters: Guys, you are really trying to kill me by some emotional overexposure! Compliments and insults wisely mixed up! It's like being in a sauna and then rolling in the snow (something I have never done, and I hope I will never do). I suppose I should be grateful for the extreme experience! :) gpuccio
GlenDavidson at TSZ:
It’s sad that we seem to feel the need to come up with a hierarchy of the mental capacities of these dolts
You bet! gpuccio
GlenDavidson at TSZ:
Even KF gets the fact that he needs to back up his claims, even though his “standards” for doing so are hideously reductive. So, as annoying as KF is, and as meaningless as ‘trillions of examples of functional complexity having been designed’ is, he’s still a cut above GP.
Ah, that's reassuring. I was a bit tired of being the best! :) KF, now it's your turn to bear the burden... :) gpuccio
Joe Felsenstein at TSZ:
Does gpuccio really buy into that? One could as easily say that if, by a carefully and intelligently designed algorithm, we can make a reasonably successful simulation of erosion of soil, that this proves that erosion occurs by intervention of an Intelligent Designer.
Just a kind suggestion: better to comment on what I write, rather than on Entropy's interpretation of what I write. Your time would be better spent. gpuccio
GlenDavidson at TSZ: April 5, 2018 at 9:50 pm A nice follow-up:
You know that Pooch is just pleased with his beliefs, satisfied with the accolades that he receives from the mentally-challenged, and unable and unwilling to consider anything written by those evil atheists. So he’s just going to stay an ignorant chump, whatever mental abilities he has. You catch him out on really stupid illogic and fallacious presuppositions, and he just weasels around with pathetic excuses. Not only won’t he learn, he won’t discuss in any way except by privileging ID fallacies and bullshit above all truth and logic. I think that profitable discussion is largely at an end, although I’m not swearing it off yet.
My simple translation: gpuccio believes in his ideas, and tries to defend them in the discussion. But apparently that is not moral behaviour, at TSZ. gpuccio
GlenDavidson at TSZ: April 5, 2018 at 9:37 pm
Well that’s meaningless. Why even use the term “nature” if you’re speaking of all observables?
I have proposed three different definitions of "nature". And I have chosen the second, explaining why. I am only consistent with what I said. Is that a sin?
“Observable” is not a very apt word for consciousness. Empiric, or something that we experience, works rather better.
I can't see the difference, therefore I can go with "empiric". But it is observable, and that's the only reason that it is empiric.
No, I have a model of it that you’ve not touched. Hardly a full explanation (what is?), it’s deals with the evidence in a manner that no one’s brought a good objection against. The criticisms I’ve gotten were against McFadden’s rather poor model.
You have a model that solves the hard problem? Well many say they have one. I am not amazed. Still, a lot of people in different fields would agree that the hard problem has no credible solution, or even proposal of solution, at present. I am amongst them. However, if you want to detail your solution I will comment on it.
What good is the word “nature” if you couldn’t theoretically ever observe something that wasn’t “nature”?
This is silly. Philosophies of all times have dealt with supposedly transcendent entities. By definition not observable. So, a concept of "observable things" is very useful to define the field of science.
Why? Have you any counter-example? Yes, but the real point is that you never justified your claim in the first place.
Counter-example, please...
How fucking stupid your response is. Those are accidental features, not part of the design. Somehow, we don’t really have much trouble distinguishing between the wood of a bow and the design and manufacture of that bow.
Yes, and I do believe, as I have said many times, that there are many accidental features in biological objects. Which do not depend on the will of the designer. They include all the constraints I have mentioned, including having to work through what already exists. By the way, I will of course not answer the usual repetitions of non-arguments ("You have not demonstrated it", and similar). You can keep your opinions, and be happy with them. Instead, I have really appreciated the long and varied list of insults in this long post: it's quite a record, and I am proud. "Just couldn’t face the stupidity of your illogic, could you?" "Which you’d deal with if you were intellectually honest." "Yeah, if I were as dishonest as you are." "You’re a shameless believer" "How fucking stupid your response is." "clearly you’re too dumb and/or dishonest to recognize that your idiotic response has fuck all to do with" "Look, dumbshit, " "you’re too much a dull and dishonest bozo" "not the dishonest bullshit that you swill from mendacious morons" "What a retard you are." "an idiot like you" "stupidly thinking" "you’re a pseudoscientific sellout." "Yes, dumbass." "your asinine claim" "your fucking lies" "How fucking retarded are you, shithead?" "Get it through your damaged brain " "you’re too fucked in the head" "you just blither on with your mindless drivel " "too ignorant and stupid to understand." "brain-damaged pseudoscientist" "shithead" "disingenuous fool" "dumb as you are about everything" "no praise for your endless stupidity" "you’re still stupid, and you’re too rude" "you dull dull fool" "you’re too idiotic" "you’re too stupid" "just make up shit" "mendacious claims" "writing inanities." "Look, stupid fuck," "Oh wow, a stupid fuck telling us" "A stupid fuck" "Too damned stupid to recognize" "No you don’t, moron, you’re too dumb even to understand it." "You do try to rubbish what you don’t understand with your insipid lies about “opinion.”" "dishonest sarcasm" "revealing your character" "You’re too dumb" "You’re a worthless interlocutor, because you begin stupid, and then you merely accentuate your stupidity whenever you’re called on it." All that in a single comment? Wow! :) By the way, serious question: why is calling your opinion an opinion an "insipid lie"? Just curious. gpuccio
Are we at the point that they finally address the arguments? Or are we still in that preliminary stage of misrepresentations, evasions, deliberate misunderstanding and personal attacks? Origenes
DNA_Jock at TSZ:
I disagree. I went a coupla rounds with gpuccio; he’s polite, but he was either unwilling, or unable, to read for comprehension. Once he started in with the “Intelligent Selection” rubbish, I realized he was obfuscating, probably intentionally. That was over three years ago, and my cursory review of his latest output confirms my suspicion that he has learnt nothing since, and still lacks basic understanding. In particular, he suffers from Texas Sharp Shooter in a big way.
Hi, DNA_Jock! :) Of course, I remember you very well, and your arguments too. And I am very happy to hear from you again. Sincerely. Unfortunately, I cannot reciprocate your judgement of me: I really think that you were a good interlocutor, one of the best, only a little obsessed with the "Texas Sharp Shooter" thing (that has not changed, I see). I still think that you don't understand that issue. But I will certainly not take it up again here, don't worry. Best. gpuccio
Entropy at TSZ:
Just façade. The impoliteness manifests in the disinterest in reading for comprehension. He thinks he’s dealing with flies, and that all he has to do is hand-wave them away. The impoliteness is also manifest in the dishonesty, though I’d understand why they have such a tendency, ID is hypocritical from the very foundations, no wonder it would permeate into their whole character.
Ah! Someone needed to set things right! :) Believe it or not, I have read all your arguments with great attention, and my answers are exactly what I think of them. I am afraid you have to live with that.
The stupidity displayed in the inability to understand the ridiculousness of the “experiments-only-prove-intelligent-design” claim, makes it hard to put him above BA77 or ET.
OK, I am happy that I have been reconnected to my friends! :)
But maybe you’re right. Some things indicate that he’s above those other idiots, which makes this “inability,” I suspect, just more of the dishonesty.
Never be happy too early!
Maybe he understand the stupidity of that position, but uses the claim just because he knows that the rest of the idiots don’t understand the problem with that shit. If nobody there called him on his lack of understanding of the word “arbitrary,” for example, then they won’t notice anything.
I can reassure you: I really don't understand the stupidity of that position (whatever it is). I must be a natural stupid.
OK. Then he’s just deeply dishonest and passive aggressive.
Thank you! That's so much money saved from shrinks. :) gpuccio
GlenDavidson at TSZ: April 5, 2018 at 12:21 pm
Yes, much better than most of the others at UD. And yet he seems to follow the script, seemingly unaware of the bad logic and unsupported presuppositions that ID uses and that are obvious to us when he uses them.
A compliment?
He seems to be a true believer
Yes, I am.
not one who is aware of how bad his logic is, for example. And yet why not? Is he incapable of recognizing bad logic, or is he just unwilling to do so?
Why not simply one who is sincerely convinced of the things he says?
Take the definition of a symbol, which includes a representational relationship, and make the leap to ubiquitin tags being symbols simply because they have a similar causal relationship (or actually worse, he says that they indicate an outcome, however ambiguous that is–but it seems based on the causal relationship). Without apparently even realizing that the definition of the symbol depended upon representation, not upon causal outcomes, let alone “indicating” anything to anybody or anything before science discovered the relationship.
You go on with that again. Look, I don't know how to explain it to you, but I will try just the same. In all my reasoning, I use the words "symbol" and "symbolic code" exclusively to mean what I have included in my definition, which you can find in explicit form at comment #590: "A semiotic system is a system which uses some form of symbolic code. A symbolic code is a code where something represents something else by some arbitrary mapping." It's very simple, and objective. It has nothing to do with all your philosophical "arguments". It's a question of the structure of the system. Either it uses a symbolic code (an arbitrary mapping that is not explained by laws, but depends on the internal configuration of the system) or it doesn't. The genetic code uses an arbitrary mapping. The mapping is solved by the 20 aminoacyl-tRNA synthetases. The Ubiquitin code uses an arbitrary mapping. The mapping is solved by the specific Ubiquitin Binding Proteins, which link the ubiquitinated target to the appropriate outcome. What is so difficult about that, that you cannot understand it? The simple point is that semiotic systems (in the sense defined) are never observed to arise in non-design systems. It's as simple as that. gpuccio
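[Editor's note: the "arbitrary mapping" in the definition quoted above can be illustrated with a toy sketch. This is purely illustrative, not a biochemical model; the function and table names are invented for the example, though the codon assignments shown are real entries from the standard genetic code. The point being illustrated: the mapping lives entirely in a stored lookup table held by the system, and nothing about the "symbols" themselves would let you derive it.]

```python
# Toy sketch of a symbolic code in the sense defined above:
# an arbitrary mapping held in the system's internal configuration.
# A few real assignments from the standard genetic code:
GENETIC_CODE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe",
    "UGG": "Trp",
    "GGU": "Gly",
}

def translate(mrna: str) -> list[str]:
    """Map successive codons to amino acids via the stored table.
    The table itself is the 'arbitrary' part: no law of chemistry
    applied to the letters A/U/G/C would predict these pairings."""
    return [GENETIC_CODE[mrna[i:i + 3]] for i in range(0, len(mrna), 3)]

print(translate("AUGUUUUGG"))  # ['Met', 'Phe', 'Trp']
```

In the cell, of course, the "table" is not a data structure but is physically embodied, indirectly, in the 20 aminoacyl-tRNA synthetases; the dict here only mirrors the structural point that the mapping is configuration, not law.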
Corneel at TSZ: "Agreed, gpuccio has proven himself a lot brighter and more polite than several other participants in that same UD thread. We have experienced far less pleasant exchanges." Thank you ! :) gpuccio
Acartia at TSZ: "No, That’s not fair. He is orders of magnitude less stupid than... " (A couple of names follow, names of friends, and of course I will not report them here, because of course I don't agree, and because I don't want to contribute, even indirectly, to name calling) What is interesting in this strange "acknowledgement" (which I do appreciate, however, for its positive part), is the reference to "orders of magnitude". That's really cute, thank you. Maybe the ID approach, based on quantitative arguments, is having some effect? :) (Just kidding, just kidding... ) gpuccio
Joe Felsenstein at TSZ: (as quoted by dazz, who apparently quotes Entropy: OK, there is some common descent here. :) )
Note the extra weasel-words “new original”. Because it has been shown many times that natural selection can put complex functional information into the genome, and this has been discussed here at TSZ and also at Panda’s Thumb many times. But add the “new original” and you have the ability to deny that any complex functional information isn’t “new” enough and/or “original” enough to qualify.
OK, here is an explanation of why I specify "new" and "original". New = functional information that did not exist before. That seems quite obvious, because we are discussing exactly that. Original = relating to a new function, one that did not exist before. Why that? Because it's the emergence of new functions that is really the issue here. What can we say about the tweaking of an existing function? As I have said many times, that's where NS can act in some measure, because once a function is in place some ladder exists that can gradually improve it. We have examples in the few documented cases of effective NS: penicillin resistance, chloroquine resistance, nylonase, and similar. But that measure is extremely limited. I have discussed each of them in great detail here: What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ and the following discussion. In those cases, a ladder exists, but: a) It is very short, in all cases (a few aminoacids), so it contributes only a few bits to the existing function. In no way is it complex, as previously defined. There is a reason for that. A tweaking by simple naturally selectable steps stops very early. b) That short ladder can only increase the existing function; it does not lead to a new function. Indeed, it leads away from new functions, because the effect of purifying selection will preserve the existing sequence linked to the existing function, and that effect will be stronger as the function increases. Therefore, to explain the emergence of new complex functions, which is the issue, we have, I am afraid, to add the word "new" to our reasoning, however little Joe Felsenstein likes it. So, I hope that clarifies those two little words. But maybe my explanation is a weasel explanation, too! 
(For some strange reason, I feel that someone will say exactly that, somewhere :) ) gpuccio
Entropy at TSZ: April 5, 2018 at 12:17 am A few last pearls from you:
See that? He changed from asking about the function to asking about the protein. This way, instead of something as easy as getting new functions from already existing proteins, he’s asking for new proteins.
I have already answered that (#713). However, to make it clearer, even for you: there is absolutely no difference between getting 500 bits of functional complexity for a new function starting from an existing protein, and getting a new protein with a function involving 500 bits of functional complexity. In both cases, you have to get 500 bits of previously non-existent functional complexity for a new function. The problem is exactly the same. But, of course, you will not understand that. And, by the way, new proteins do emerge throughout the whole span of natural history. Even if we just stick to superfamilies, we have 2000 of them.
Of course, that’s also answerable,
No.
but he made sure to mention, on passing, that directed evolution experiments were not acceptable answers.
Directed evolution experiments are not an acceptable model for natural selection. That should be obvious, even for you.
Why not? Because they’re experiments made by people, and people are intelligent, and thus they only prove Intelligently Designed Selection [TM].
No. More simply, they are models of Intelligent Selection, and of what it can or cannot do. But they are not models of Natural Selection. I have dedicated a whole OP, and the following discussion, to this issue: Natural Selection vs Artificial Selection https://uncommondesc.wpengine.com/intelligent-design/natural-selection-vs-artificial-selection/ And even Intelligent Selection starting from RV has its severe limitations. It is, of course, much more powerful than NS. But even Szostak was not able to generate a naturally selectable protein using Intelligent Selection. Intelligent selection can detect one specifically defined function at very low levels, and increase it by random variation and directed selection, but that's all it can do. It is a form of bottom-up design, and it has its powers. But top-down design can add a lot to the powers of design.
Therefore any experiments aiming at showing that there’s selectable ladders, will prove Intelligent Design.
No. It will only prove that it is possible to tweak an existing function to higher levels, if you can detect it at very low levels and intelligently select any increase of that specific function that arises by RV. That's what Szostak did with ATP affinity. That's what the immune system does with affinity maturation. It does prove that there exists a ladder for one specific protein. But it is not a ladder of naturally selectable steps. Indeed, in Szostak's experiment, neither the original weak affinity nor the final strong affinity was naturally selectable. Instead, anything is intelligently selectable. That's the big difference. We can detect any function we can define, and RV will help in most cases, if we can detect any increase in that function and select for it. But, even so, in the end what we can get is simply an increase in the function that we have originally defined and recognized. Nothing else. No ladder of simple selectable steps can lead from one complex function to another, unrelated complex function, even using RV + Intelligent Selection. Because, very simply, those ladders do not exist. It's the same as trying to go from Word to Excel by eight-bit random variations + Intelligent Selection of the resulting change of function at each step. Where would that bring you? Nowhere, of course. The ladder simply does not exist.
So, no experiments. Is examination of life forms acceptable? Are you kidding me? Of course not! Those were designed by The Magical Being In The Sky, That would only prove Intelligent Design!
Ah, good. I was missing my dose of pure rubbish! :)
Not any conceptual reasons, not ladders in function, not experiments, not reasonable answers, not reasonable inferences. The point is to move the goal posts and to make sure to dismiss any potential answer.
The point is, of course, to understand what is true.
The guy is a shameless ass-hole.
Ah, here it is at last! :) OK, I can agree with you in part. I feel no shame at all for the things I say here. Of course, I feel a lot of shame for many things, things that I am and things that I do. But not for what I do and say here. For that, I am shameless. So, you are right, at least in part. I am happy that we agree on one thing, at least. As for the "asshole" part, I cannot comment. It would not be appropriate, because I am personally involved in the judgement. However, it's funny. It's probably the first time that I have been addressed by that English metaphor (it is a metaphor, I hope! And probably anthropomorphic... ). I am thrilled, in a way. :)
P.S. The bit about directed evolution experiments proving intelligent design puts gpuccio at the very same level as the most stupid among creationists.
Well, that's a record in its own way. I always liked the idea of being exceptional. Narcissism, you know...
That’s were he irremediably lost all my respect.
I suppose that I have to cope with that. gpuccio
Well GP, you've sent the best that TSZ has to offer into a complete juvenile meltdown. They are now doing nothing but groping each other for the next unfounded insult to sling. No surprise, it was obviously coming from the very start. They have no choice; they simply cannot challenge you on the empirical facts, so they can do nothing else. Your summary: a) Functional complexity (not even touched) b) Semiosis (not even touched, hello?) c) Irreducible complexity (not even touched) Upright BiPed
Entropy at TSZ: April 5, 2018 at 12:17 am I thought you were out (you said it). And there was nothing in your recent posts deserving an answer. But you are still in, it seems. And in this post there is maybe something not completely boring. Let's see:
See that? He changed from asking about the function to asking about the protein. This way, instead of something as easy as getting new functions from already existing proteins, he’s asking for new proteins.
Frankly, I did not expect such a complete misunderstanding of words on your part. My question has always been: "1) Is there any conceptual reason why we should believe that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases?" Now, I must explain to you (who of course will not understand it) that the words complex protein function mean exactly the same as new complex functional protein. You probably don't understand what the word complex means here. It means, of course, the appearance of new original functional information beyond the general threshold of 500 bits. Now, that means that a new protein emerges with more than 500 new original bits of functional information. I mention protein superfamilies because they are the best example of functional islands. We have about 2000 of them. Each of them is different from the others in terms of folding, function and sequence. They are isolated islands. Each of them appears at some time in the course of natural history, and most of them have well more than 500 bits of functional information. So, that is a good scenario in which to show how protein superfamilies can be deconstructed into simple functional steps of, say, one AA at a time. All naturally selectable. Now, the complex protein function that I mention in my question must of course be a functional novelty of at least 500 bits. That means more or less 120 specific aminoacids. You have given an example of a 1-aminoacid transition. Then you have mentioned generic affinities in enzymes, without ever putting together a minimal scientific concept. Your answer is ridiculous, pathetic, and... will you please suggest some more name-calling? I am not a real master at this. Just to show you what a complex protein function is, I will give you an example. A true example, a real example of what I am speaking of. We go back to an old friend, ATP synthase. Alpha and beta chains. 
Alpha chain: 553 AAs. 290 of them are conserved between E. coli and humans. Bitscore 561 bits. Beta chain: 529 AAs. 334 of them are conserved between E. coli and humans. Bitscore 663 bits. Total conservation time: 3.5 - 4 billion years. The two sequences share very low sequence homology between themselves (94.7 bits). Together (with three other minor components), they form the F1 subunit of ATP synthase, which is essential for the enzymatic activity, working together with the F0 subunit. Now, the question is very simple: how did those two highly functional sequences originate? How did those 624 conserved AAs come into existence? Is there any reason to believe that, starting from scratch or from some unrelated pre-existing sequence, the 1000+ AA sequence of these two proteins, with the 624 conserved positions, was accumulated gradually, say in 600 successive steps of 1-2 AAs each? With each step generating a protein more functional than what existed before, to the point of giving a reproductive advantage, and being fixed, obliterating the previous step? For your convenience, I remind you what is known about the best-studied cases of natural selection under very strong selective pressure, and with extremely large populations: Penicillin resistance in bacteria: starting functional mutation: 1 AA. Added tweaking by NS: 3-4 AAs. Chloroquine resistance in the malaria parasite: starting functional mutation: 2 AAs. Added tweaking by NS: 2-3 AAs. Nylonase: probably a couple of AAs. Very simple transitions, all of them, even with the added help of NS at its possible best. You speak vaguely of enzymes and affinities, and you think that you have made a brilliant argument. But you have only shown how superficial and arrogant your reasoning is. Enzymes usually share affinities within the same protein family. 
As we have seen, and as you yourself have shown in the only real example that you have been so kind to give, those affinities are related to simple transitions at the level of the active site, while the folding and the general structure of the molecule remain very similar. Whatever the explanation of those transitions (and Axe has published about that issue), they remain simple transitions. Do you understand the word simple? 1 AA: 4.3 bits; 20 configurations. 2 AA: 8.6 bits; 400 configurations. 3 AA: 13 bits; 8000 configurations. 120 AA: 518 bits; 10^156 configurations. Can you see the difference between what is simple (the first three cases) and what is complex (the last case)? It's a difference of more than 150 orders of magnitude, in terms of search space and probability. But, of course, you are not interested in those numbers. You are not interested in facts. You ramble about affinities, and don't even understand the difference between a biochemical affinity and a naturally selectable function. And that even though I mentioned to you the famous Szostak paper, which generated (by Intelligent Selection) a strong affinity from a random, very weak affinity for ATP, but could never generate a naturally selectable molecule. But why am I wasting my time? You are out, as you said yourself. Out in many different ways. gpuccio
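[Editor's note: the bit figures quoted in the comment above can be checked with a quick back-of-the-envelope sketch. It uses the same simplifying assumption as the comment, i.e. 20 equiprobable amino acids per fully constrained position; the function and constant names are invented for illustration.]

```python
import math

# One position chosen from 20 equiprobable amino acids carries log2(20) bits.
BITS_PER_AA = math.log2(20)  # ≈ 4.32 bits

def functional_bits(n_aa: int) -> float:
    """Bits needed to specify n_aa fully constrained positions, under the
    crude model behind the figures quoted above (real protein positions
    are rarely fully constrained, so this is an upper-bound sketch)."""
    return n_aa * BITS_PER_AA

for n in (1, 2, 3, 120):
    print(f"{n} AA: {functional_bits(n):.1f} bits, {20 ** n:.3g} configurations")
# 120 AA comes out at ≈ 518.6 bits and ~1.3e156 configurations,
# matching the "518 bits; 10^156 configurations" figure in the comment.
```

On this model the 500-bit threshold corresponds to roughly 500 / 4.32 ≈ 116 fully specified positions, which is where the "more or less 120 specific aminoacids" figure comes from.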
Gpuccio, Take your time, I've much reading to catch up on to store in my non-informational cells of long-term, non-encoded "memory." Q: Wait, what is the definition of memory? https://en.wikipedia.org/wiki/Memory DATCG
#676 , #688
A reminder to Entropy: Tell us where the information is, and tell us what it is about.
It appears that Entropy is preparing to bug out without defending his claims. What a surprise. Perhaps he realized that his comments about information were as indefensible and ridiculous as they seemed when he first made them. Upright BiPed
dazz fires back:
Give us a proper theory with causal explanatory power ...
There isn't any scientific theory of evolution, dazz. Yours doesn't have any causal explanatory power. That is the whole problem. ET
DATCG: Welcome back! I will look at your posts later (if the TSZ work leaves any trace of life, or functional information, in me! :) ) gpuccio
By the way, I apologize for my "false posting" at #705. gpuccio
Surprise. GlenDavidson occasionally returns to reason. Comment: April 4, 2018 at 10:57 am
Well, it seems that you think consciousness is something beyond natural causation, at least in its origins,
No. I have said nothing about its causation or origin. Check again what I wrote at #620. I have said that consciousness is part of nature under definition 2) All that can be observed, because consciousness is an observable. Are you denying that? Then I have said that it is not part of nature under definition 3) All that we can explain with the scientific theories we have at present, because I am not aware of any scientific theory available that explains what consciousness is.
hence it’s hard to see how you consider it part of nature, unless you’re just calling whatever we ordinarily see as “nature.”
I think I have been very clear: if we really have to use the word "nature", I am adopting for it definition 2) All that can be observed for all my scientific reasoning, because it is perfectly appropriate for that: science is about observables.
To be sure, I don’t care for the term “nature” at all, except as a term of convenience.
I care even less. Then, let's try not to use it.
Natural vs. artificial seems the most useful distinction, in my view,
If you mean: "Natural" = what arises in a non-design system and "Artificial" = what arises from a design intervention, that's fine with me.
since I have no idea if, say, gods or ectoplasmic beings would be part of nature or not (certainly not without observing them).
I agree. But if we can observe effects from their interventions, those effects would be part of nature, because they would be observable.
I don’t know why. “Intelligence” is the term I used, and it seems to fit what I wanted to say. I rather think that dogs are conscious, but not too bright, hence I expect little in the way of design from them.
I can agree. But "intelligence" is ambiguous too, because it is often used for non-conscious systems that have been designed by some conscious agent. I think we can agree on "intelligent consciousness". Indeed, my complete definition of a designer is: a conscious being, capable of the subjective experiences of understanding meaning and having purpose, and with access to some interface that allows him to output his subjective representations into matter.
What would be a mark of consciousness? Vs. computers, say, or philosophical zombies?
The generation of new, original and complex functional information is a reliable indicator of a design origin in material objects, and therefore of the intervention of consciousness. A general inference of consciousness in some being is a complex inference from analogy: we do that all the time with other human beings (and, I would say, with higher animals).
Not if the consciousness is that of dogs, or of lesser intelligences than dogs.
OK. I have already clarified that. A conscious, intelligent and purposeful agent is necessary to generate complex functional information.
But fine (don’t want to try to make a lot of the caveat above) some conscious intelligences can produce information of that sort. We’ve not seen any that could or evidently would create life.
Life is more than functional information. And however, the functional information necessary to support life, even in its simplest forms, is still well beyond our human intelligence.
Trouble is, you’d have to justify your premises.
I have explained them in detail. I don't know what you mean by "justify". You can simply say that you don't agree. I had no special hope of convincing you.
It has to do with the fact that your premise a) isn’t sound,
Why? Have you any counter-example?
and that life exhibits very undesign-like characteristics. Why don’t any organisms use radio waves to communicate? Why do bats have wings adapted from mammalian forelimbs, rather than from bird or pterosaur wings? What designer begins to make wings with forelimbs rather than the wings available to intelligence? At least give the bats contour feathers, for aerodynamic purposes (there are some advantages to their wings for certain lifestyles, but again, why make it out of a mammalian hand in the first place?).
Nonsense. A lot of designed objects can exhibit what you call "very undesign-like characteristics". A statue can retain irregularities of the original stone, or scratches that originated after the design. And I have already commented on all the attempts to deny design only because you don't agree with the designer's style. They are nonsense.
Above all, the question is, why not mix and match function and need without regard to interbreeding of organisms? Certainly designers don’t stick with almost entirely modifying parts (I’m talking morphology) from older models.
If this were not nonsense, it would certainly be an anthropomorphism, I suppose.
I know that when you use your premises that way that it’s formally a positive argument.
Ah! Almost an admission. I am thrilled!
The trouble is that the crucial premise a) is not sound, it has not been shown to be true by the evidence. Indeed, the evidence is contrary to it, since life is peculiarly lacking in aspects that one gets from observed designers.
Do you even understand English? This is my premise: No system of the a) type can generate complex functional information. If "the evidence is contrary to it", as you say, just provide a counter-example. What does "life is peculiarly lacking in aspects that one gets from observed designers" have to do with that?
I don’t know where I said that. I’m sure I’ve said it, but I don’t use that as an argument against ID per se. I consider arguments from analogy proper in many cases. I don’t think they’re proper for “design” of life, however, because the analogy is poor, and as used by IDists there is much that is left out because those aspects are disanalogous.
OK. I take notice of your position.
Uh, yeah. Find anywhere that I’ve said otherwise.
No need for that. If you agree, that's fine. My purpose is to clarify ideas, not to get points against you.
And there’s the non sequitur. Life is very much unlike the things that we design, and evolution appears to be the primary reason for this.
I take notice of your opinion.
No, it is what explains the highly derivative nature of life, most often (and in many cases nearly entirely) vertically, that is, life is extremely derivative of its ancestors in many cases (HGT matters more for prokaryotes, yet even there the vertical signals remain strong).
IOWs you are saying that neo-darwinism accepts common descent. I do, too. I hope you will praise me too, then.
I realize that syncretism can put evolution and design together, but there’s no indication that there’s anything that really does design through evolutionary time (no, our domesticated organisms hardly count).
Not sure what you mean here (syncretism?). However, I take notice of your opinion.
Be that as it may, there’s no justification for your reasoning with or without neo-darwinism, particularly for premise a).
Again, why?
It would have to be legitimate first. You have to show that “No system of the a) type can generate complex functional information,” is actually true. If you’re using a false premise, there’s no falsification possible. And it’s at the least an unsound premise, as it has never had the evidence to demonstrate that it is so.
No, you are simply confused here. Falsifiability has nothing to do with the merits of a scientific theory. It just means that it is a scientific theory, because it is falsifiable. Please, check your philosophy of science. Bad scientific theories, if falsifiable, are still scientific theories. Neo-darwinism is a good example.
I suspect you do that in order to use, rather than question, your unsound premises, notably premise a). Consciously or not.
This is really cryptic. Ah, this is really precious: My statement (quoted by you): Another point I would like to clarify: life is not the same thing as functional complexity. ID is about functional complexity, not about life. Your comment to my statement: "Yes, it tries to smear life and functional complexity into one category." !!! No comment.
The trouble is, in science you don’t just get to make up causes for your effects. Such work very well in the abstract, indeed, but you have to match up putative design effects to actual design causes, not to some vague unobserved “designer” who can cause most any effect seen.
I take notice of your opinion. You have certainly demonstrated that you are an expert in the field of philosophy of science. gpuccio
Heya Gpuccio :) I used your "Darwin-of-the-Gaps" a few comments up. Is exactly my thoughts on this. Darwin-of-the-gaps gave us "Junk" DNA. Assumptions led by ignorance and conjecture based upon a materialist doctrine of neo-Darwinism. As these "gaps" are researched, more function is found, Darwin-of-the-Gaps assumptions are falling one by one. DATCG
To all: Just an example of dazz's high level of discussion and cognitive confrontation (at TSZ). In his comment of April 4, 2018 at 10:55 am he quotes in detail my arguments at comment #638 here, then he destroys all my reasoning with this amazing comment:
Just keep regurgitating the same crap and pretend you’ve made a positive case for anything. Unbelievable.
How can I survive? gpuccio
Corneel at TSZ: "Alas, not true. Neo-darwinism is the theory of population change through natural selection put on more secure genetic footing than Darwin did. That doesn’t rely on common descent, I fear." This sounds really strange. I have always thought that the step by step darwinian process does require CD. Could you explain better how it could take place if CD were not true? I don't understand. Not that it is important, but I am just curious. gpuccio
UB, glad to hear! And #695: I think Denis Noble's admission that "gene-centric" dominance is waning and not an accurate account of causality is more good news for Design. The epigenetic code screams design, even if he won't admit Design. At least he recognizes the inefficiency of the "gene-centric" thought of the past. And that epigenetic code controls gene expression, modifies it and, like we see with Ubiquitin in post-processing, can tag proteins for post-translational modifications (PTMs). An amazing system of information networks and communications, from environment sensors to internal reactors and protein-processing interactions. DATCG
Well, another simple thing: I will ignore the last posts by GlenDavidson. He's completely out of his mind (and I am being very kind). gpuccio
#695 UB (<---- see dual purpose) ;-) hehe ENCODE is revolutionizing the field and study of epigenetic factors. This post by Gpuccio on Ubiquitin System research shows information exploding in Design-centric relations of information tagging, conditional responses, context and modularity. As Dionisio has often said, "this is just the beginning." BTW, where is Dionisio? DATCG
Energy at TSZ: Well, let's start again the hard work. With something simple. "Is gpuccio a Giuseppe?" Yes. gpuccio
Thanks DATCG, I am all good. I am one of the lucky ones. (lots of lead-time). Thank you for asking. Upright BiPed
I guess if a materialist admits information exists within cells, it undermines their belief system? It certainly does not undermine science.
BOOM! Upright BiPed
Haha UB, Welcome to the club! :) and my bad attempt at humor. You'd been out and it's hard to keep up. I get lost in this post Gpuccio's created often forgetting he's already posted something I reference later. But, we designers know information exchange is important within Context ;-) And at times information can be two different things at a time - Symbolic representations representing two meanings ;-) So you have shown us how semiosis and dual meaning works ;-) How are you doing? Hope you're doing well! DATCG
It’s Epigenetic regulation, multiple layers of code and functional information that can form automated reaction forces rapidly with multiple changes in a robust response to environmental changes.
BOOM! Upright BiPed
#692 Ahhh thanks, DATCG. Now I can enjoy being the last person on the surface of the planet to figure that out. :) Upright BiPed
Where is information? Do materialists dream? Do they store location information in their brains? If they do not store information in their brains, how in the world do they get home from the office? Or, for that matter, how do they remember where their car is parked in a parking lot? Or that they need a key to unlock it? Information is in cells and retrievable as code, or even images, places, words, names, etc. When materialists dream, do they ever see actual words in their dreams? I do. My dreams include math, programming code, location names, people's faces, even complex discussions, along with many other facets of dreams in color, sometimes in black and white, of events and people. If information does not reside in brain cells, how exactly does a materialist function day by day? Recall memory? That's information, and evidence of information stored within cellular memory banks. If we could not retrieve information from... get this, place cells, then none of us could look at a map and understand it, let alone simply remember where home is. How to reprogram memory cells in the brain You cannot "reprogram" non-existent information. The very term "reprogram" acts upon existing information within cells and/or connections to those cells.
How do we know what happened to us yesterday, or last year? How do we recognize places we have been, people we have met? Our sense of past, which is always coupled with recognition of what is currently present, is probably the most important building block of our identity. Moreover, from not being late for work because we could not remember where the office was, to knowing who our friends and family are, long-term memory is what keeps us functional in our daily lives. It is therefore not surprising that our brain relies on some very stable representations to form long-term memories. One example are memories of places we have seen. To each new place, our brain matches a subset of neurons in the hippocampus (a centrally located brain area crucial to memory formation): place cells. The memory of a given environment is thought to be stored as a specific combination of place-cell activity in the hippocampus: the place map. Place maps remain stable as long as we are in the same environment, but reorganize their activity patterns in different locations, creating a new place map for each environment.
Memory = information storage. Remembering = information storage retrieval. Memory comparison = comparison of new information with old information stored in cells. Now think. Think of your home. Did you recall it? That's information. What color is your sofa? Where is the kitchen? Keep retrieving information as you think of the streets to get to your home. Think of different ways you might take to get home: some shortcuts, some longer routes, some changes to the schedule to go to the store and get that gallon of milk. And think of all the words your mind digested so you could understand this thought process. That's information at work in your cells. The absurdity of dogmatic materialists denying functionally qualified, targeted, highly organized, complex information within our cells is strange at best. At worst, it's a blind adherence to doctrine and a belief system, not science. A willing blindness to the obvious: even when it is pointed out that information does in fact exist, it remains denied. A coping mechanism of sorts. I guess if a materialist admits information exists within cells, it undermines their belief system? It certainly does not undermine science. In fact, science must rely on information exchange within our cells in order to comprehend the very logic of how to decode and decipher the information, from "place cells" to translation, transcription, tagging and regulatory operations. The denial of information leads to cognitive dissonance. Many materialists cannot accept new evidence of code and information brought about by scientific discovery that contradicts their old assumptions and beliefs. And so blind dogma goes. But science progresses despite such blind dogma, speaking of information because it exists. Science keeps undermining materialism and Darwinism, and supporting design. If you don't believe this, just try NOT using the information in your own brain cells. 
Do not use them to remember which key to press on your keyboard, or what word to speak verbally. That would be cheating, using information stored within your brain cells. Instead, knock your head three times with your fist, shake your head wildly, make loud noises, stamp your feet on the ground, flail your arms like a chimp and type in random keystrokes for 5 minutes using a rather large rock on the keyboard, blindfolded. Or try a stick. Do this randomly, because "remember" you have no information in your brain. DATCG
Upright Biped @513 Hey! Hope you are well? The UB UB UB was in reference to Ubiquitin ;-) Sorry, an attempt at humor, because UB is everywhere we look :) DATCG
Rapidly... Findings... were Staggering - Rapid Evolution via Change in Environment
The findings, published today in the academic journal Nature, were staggering. By sequencing genetic material in the guppies' brains, researchers found that 135 genes evolved in response to the new environment. Most of the changes in gene expression (i.e., epigenetic regulation) were internal and dealt with a fish's biological processes such as metabolism, immune function and development. But more importantly, the immediate response of genes to change in the environment did not reflect the eventual evolutionary change. "Genes" can change their activity levels in an immediate response to the environment—what evolutionary biologists call plasticity—or in an evolutionary response that occurs over many generations.
() emphasis mine. Note: the use of "genes" in the above paragraph may reflect an antiquated understanding from past indoctrination. What we are instead seeing are epigenetic regulatory features that modify existing information in the gene database. Genes themselves are not acting. The existing regulatory code above the genome is acting. And that explains why so much former "Junk" DNA is important and may possibly lead to disease if "randomly" mutated. DATCG
Hello Gpuccio, Everyone! Wow, looks like I missed a lot the last several days. Much to look through. Hope everyone had a good Easter! I see people are still uninformed or stubborn about the failures of neo-Darwinism, and holding on to the gradual step-by-step process of Random Mutations and Natural Selection. Yet research scientists of all kinds, from geneticists to molecular biologists, have for years openly admitted Darwinism failed and neo-Darwinism did not save Darwin. That the Modern Synthesis cannot account for all the diversity we see on earth. This is not news. Neo-Darwinism is not surviving as a creative "force" of functional information for macro-evolutionary changes. It's too weak and trivial, as observed in lab experiments and nature for quite some time. Let it die a peaceful death as an antiquated belief system, based upon assumptions and guesswork. Quoting from the 2014 Huffington Post, Suzan Mazur's interview with Denis Noble, Time to Replace Modern Synthesis - neo-darwinism
Suzan Mazur: In recent years the modern synthesis has been declared extended by major evolutionary thinkers (e.g., “the Altenberg 16“ and others), as well as dead by major evolutionary thinkers, the late Lynn Margulis and Francisco Ayala among them. Ditto for the public discourse on the Internet. My understanding is that you are now calling for the modern synthesis to be replaced. Denis Noble: I would say that it needs replacing. Yes. The reasons I think we’re talking about replacement rather than extension are several. The first is that the exclusion of any form of acquired characteristics being inherited was a central feature of the modern synthesis. In other words, to exclude any form of inheritance that was non-Mendelian, that was Lamarckian-like, was an essential part of the modern synthesis. What we are now discovering is that there are mechanisms by which some acquired characteristics can be inherited, and inherited robustly. So it’s a bit odd to describe adding something like that to the synthesis ( i.e., extending the synthesis). A more honest statement is that the synthesis needs to be replaced. By “replacement” I don’t mean to say that the mechanism of random change followed by selection does not exist as a possible mechanism. But it becomes one mechanism amongst many others, and those mechanisms must interact. So my argument for saying this is a matter of replacement rather than extension is simply that it was a direct intention of those who formulated the modern synthesis to exclude the inheritance of acquired characteristics. That would be my first and perhaps the main reason for saying we’re talking about replacement rather than extension. The second reason is a much more conceptual issue. I think that as a gene-centric view of evolution, the modern synthesis has got causality in biology wrong. Genes, after all, if they’re defined as DNA sequences, are purely passive. 
DNA on its own does absolutely nothing until activated by the rest of the system through transcription factors, markers of one kind or another, interactions with the proteins. So on its own, DNA is not a cause in an active sense. I think it is better described as a passive data base which is used by the organism to enable it to make the proteins that it requires.
Wholeheartedly agree with DNA as a "data base" that can be updated, btw, as well as duplicated and redundant. Redundancy is a mark of database design. Noble's answer to Mazur continues...
The third is an experimental reason. The experimental evidence now exists for various forms and various mechanisms by which an acquired characteristic can be transmitted. So I think the reasons for replacing the modern synthesis are the experimental, that certain forms of inheritance of acquired characteristics have now been both demonstrated and their mechanism worked out, and the more philosophical point about the nature of causality. I believe that the modern synthesis, and indeed very many aspects of the interpretation of molecular biology generally, got the question of causality in biological systems muddled up.
Indeed, and evolutionist have known these failures of Modern Synthesis and weaknesses for quite some time.
Suzan Mazur: Lynn Margulis told me the following in 2009:
[W]hat Haldane, Fisher, Sewell Wright, Hardy, Weinberg et al. did was invent.... The anglophone tradition was taught. I was taught, and so were my contemporaries, and so were the younger scientists. Evolution was defined as “changes in gene frequencies in natural populations.” The accumulation of genetic mutations was touted to be enough to change one species to another.... No, it wasn’t dishonesty. I think it was wish fulfillment and social momentum. Assumptions, made but not verified, were taught as fact.
Let's repeat this for clear understanding by all readers: "Assumptions, made but not verified, were taught as fact." "...it was wish fulfillment and social momentum." I agree: it was assumptions, not verified, taught as fact. Same thing with "Junk" DNA and so many other "Darwin-of-the-Gaps" assumptions in the past. Much of it based upon ignorance as well. The assumptions by Darwinists filled in the gaps with dogma, not observational science, but dogmatic assertions.
Rapid changes can take place in response to stress, environment, or breeding (Darwin's finches: a new "species" in 2 generations). What we observe is variation within limits, or evidence of HGT and other non-Darwinian processes. Darwin is dead, and neo-Darwinism (Modern Synthesis) was declared dead here at UD by an evolutionist in 2006, Allen MacNeill of Cornell University: Modern Synthesis is Dead
Before people on this list start hanging the crepe and breaking out the champagne bottles, I would like to hasten to point out that evolutionary theory is very much alive. What is “dead” is the core doctrine of the “modern evolutionary synthesis” that based all of evolution on gradualistic changes in allele frequencies in populations over time as the result of differential reproductive success.
Dead. And we know a new Extended Evolutionary Synthesis was finally, openly proposed at the Royal Society of England. Why the need for an EES if the Modern Synthesis is doing so well? If Darwin is correct? Because the Modern Synthesis cannot account for macro events. And Darwin was not correct. As ET quoted Darwin above in #686,
“If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” [Darwin1859, pg. 175].
We know changes can happen rapidly due to surrounding environments. Not Darwinian gradualism, but within generations. His theory did "absolutely break down." It's time to fully acknowledge it across educational institutions, instead of enforcing it as a sacred ritual and religious belief. Acknowledge at last its failure as an explanation for macro events. I'm firmly in the IDK camp of evolutionary history. But I know what we do observe today. And it's not Darwinian, nor Modern Synthesis. It's epigenetic regulation, multiple layers of code and functional information that can form automated reaction forces rapidly, with multiple changes, in a robust response to environmental changes. DATCG
Not one of those clowns could make it through one round in a formal debate that included a panel of objective and impartial judges. If they ain't lying, bluffing and equivocating they are attacking the person who is whipping their behinds with facts and science. ET
The sudden round of attempted compliments towards GP was a bit amusing this morning. - - - - - - - - - A reminder to Entropy: Tell us where the information is, and tell us what it is about? --or-- are we to assume that you cannot defend your statements? Upright BiPed
dazz chimes in:
Do we even have a reason to believe our beloved planet earth, in all it’s perfection to support intelligent life and also creationists, could have been formed by a blind and mindless process like that? Do we need to believe that all those little rocks just happened to hit in the right places time and again?
Right, sounds totally ridiculous doesn't it? But that is all your position has for the formation of the planet. ET
"If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down." [Darwin1859, pg. 175].
And that means he was positing a mechanism of "numerous, successive, slight modifications". Strange that an evolutionist didn't know that ET
Allan Keith- Darwin (1859)- and no one since has ever changed it.
There is plenty of evidence of a step-by-step process. The literature is full of papers showing this. Random mutation changing a gene and subsequent expression, selection resulting in the change being fixed in the population.
Look, Allan, there was an experiment with fruit flies. And after 600 generations there wasn't one substitution, even though it was being pushed by the experiment. Also, ID is NOT anti-evolution. You need to show that stochastic processes did it. And you also have to deal with the paper "Waiting for Two Mutations," which shows the problem with getting just two specific mutations. ET
Allan Keith @682
Allan Keith: There is plenty of evidence of a step-by-step process. The literature is full of papers showing this. Random mutation changing a gene and subsequent expression, selection resulting in the change being fixed in the population. Lather, rinse, repeat.
No, Keith, you are mistaken. In all of the literature there is not one single paper which shows that complex protein functions can come about by naturally selectable steps. Nowhere in science is it shown that such a ladder exists, in general, or even in specific cases. There isn't any evidence from facts that supports the hypothesis that complex protein functions can be deconstructed into simpler, naturally selectable steps. So, the question is: what are you talking about? Origenes
Allan Keith: Microevolution is well documented. A few AAs at most, when the context for NS is the most favorable we can imagine. I have discussed all that here: What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ Please, read it and say something more specific, if you like. gpuccio
Gpuccio,
OK, I would say that claiming that it followed a step by step process, without having any evidence that a step by step process exists, is rather pointless.
There is plenty of evidence of a step-by-step process. The literature is full of papers showing this. Random mutation changing a gene and subsequent expression, selection resulting in the change being fixed in the population. Lather, rinse, repeat. Obviously, it is far more complicated than this (e.g., meiosis, inversions, etc.), but these processes resulting in changes in genetic arrangements and subsequent phenotypes are very well documented. Allan Keith
Allan Keith: OK, I would say that claiming that it followed a step by step process, without having any evidence that a step by step process exists, is rather pointless. That's the core of my "challenge", no longer unanswered, but answered by one person with a wrong answer. gpuccio
Gpuccio,
Excuse me if I intervene, but I think all that ET is saying is that the neo-darwinian theory assumes that evolution happens by a step by step process, where the steps are more or less naturally selected for the reproductive advantage they confer.
I just wanted to point out that there is a huge difference between claiming that we have a step by step process and claiming that it followed a step by step process.
So again, are you denying that this is the neo-darwinian theory? Gradual evolution by RV + NS?
There is obviously a lot more involved, but this is a foundation of the theory. Allan Keith
Allan Keith: "I think that I am going to need a reference for this claim." Welcome to the discussion, Allan. Excuse me if I intervene, but I think all that ET is saying is that the neo-darwinian theory assumes that evolution happens by a step by step process, where the steps are more or less naturally selected for the reproductive advantage they confer. Do you need a reference for that? By the way, I say "more or less" to allow for some neutralist stuff. But of course, as neutral variation has no role in modifying the probabilistic barriers, in the end only the selectionist approach has any apparent sense. So again, are you denying that this is the neo-darwinian theory? Gradual evolution by RV + NS? gpuccio
Upright BiPed: I have not had the time to go on reading and answering Entropy's (or anyone else's) arguments (after the comments I wrote yesterday). I hope I can do that later today. Thank you for your kind interventions here about the very trivial issue of my intellectual and/or moral nature, I appreciate it. And thank you also for anticipating, in a way, Entropy's final argument (the shameless asshole metaphor :) ). That will give me more time to elaborate some complex counter-argument: is that an anthropomorphism? is the argument functionally complex? how much energy is flowing in this case, and how far from equilibrium? is it a god-of-the-gaps argument? and so on... In the meantime, it seems that we have both lost Entropy's respect. That would be one more thing to unite us! :) (Entropy, I am just kidding: of course I will be more precise and serious as soon as I have the time to read your argument from your own words, and in all its precious detail! :) ) gpuccio
ET,
Umm, your position makes the claim that it has a step-by-step process for producing new proteins and biological systems.
I think that I am going to need a reference for this claim. Allan Keith
Entropy, I'm going to put a battery in my penlight and turn it on. The energy will be flowing; a state of equilibrium will not exist. Why don't you tell me where the information is, and what it's about. Let us see if what you have to say bears any meaningful relation to what exists in the gene system. Upright BiPed
...it has been shown many times that natural selection can put complex functional information into the genome, and this has been discussed here at TSZ and also at Panda’s Thumb many times.
Natural selection doesn't even exist until semantic closure is achieved at the origin of the system. So why don't you get with cousin Felsenstein there, and the two of you can provide a link to where you show natural selection achieving the semantic closure required of it. If you can't do that, then all you've done is assume the very thing you need to demonstrate. As you may already know, that is generally frowned upon in science. Upright BiPed
Note: GP, very obviously, you don't need anyone to step in on your behalf. But there is a point where one's permission to be an insulting prick becomes a mere cover for cowardice. That has become the standard at TSZ, and I just wanted to point it out (since your pal made it so easy to do). Upright BiPed
Surely my post at 672 was strong enough to get a reaction? I am eager to lose your respect too, Entropy. One of the more thoughtful and intelligent contributors to the ideological clusterf*ck that has dominated TSZ for years (since before EL shamelessly abandoned it) mentioned to you very early in this conversation that you would surely end this conversation looking like a fool if you chose to treat an interlocutor such as GP with the typical insult and slander response that TSZ prides itself on (being so intellectual and all). Of course, that would require you to actually engage what GP has stated, so your prospects were hopeless from the start. Thus, in absolutely perfect form, you end by saying of GP: "The guy is a shameless ass-hole". Your personal lack of self-awareness is stunning on this point. GP has been on the internet for far more than a decade, arguing his case with far more decorum and respect than anyone could possibly expect to see in these understandably contentious debates. He is famous for his respect for others. This is so universally true, the list of names is indeed long on BOTH SIDES who would testify to it. Yet, there you are, in your careless desperation, with your meaningless claims to protect. Your problem, my dear friend, is that GP is very clearly not a shameless asshole; you are, I am, but he is not. So, by all means, allow me to stand next to GP as someone ready to lose your "respect". Given your statements, you don't have a damn clue what you are talking about when it comes to biological information, so it won't be a serious loss to anyone. Certainly not to me. Upright BiPed
From #664 Entropy to GP:
I said that energy flow transforms into information. Complexity is what happens when systems out of equilibrium move towards equilibrium. For as long as equilibrium isn’t reached, we have information. Yes, that includes “functional” information.
Thud. Here is a guy who very clearly has no idea -- no idea whatsoever -- what is entailed by the term "information" in a biological context. Apparently he has never sought nor received any educational guidance on the subject, nor does he (judging by his words here) appear to care about the history surrounding the science, or any relevant physical observations, or fulfilled predictions, or any of the unanswered philosophical questions, or anything else really. His formulation is a complete air ball -- utterly empty and void of content. And here's the truly rich part -- if you hold him down, force him to form complete sentences, and boil it all out, the odds are better than even that he's walking around with a completely anthropocentric view of information -- the very thing he throws at GP. Virtually all of these physicalist-information types make the same elementary conceptual mistakes. Anyone who says "energy flow transforms into information" and "as long as equilibrium isn’t reached, we have information" is surely a sweet-talking "information is everywhere" kinda guy -- even if he puts up a fight to spare himself any embarrassment in front of his pals. In the context of biological information, he has no idea what he's talking about. But I'm sure he's very certain of himself. Upright BiPed
And more ignorance from TSZ (OMagain):
Oddly they never demand the same level of detail in the explanations they accept.
Umm, your position makes the claim that it has a step-by-step process for producing new proteins and biological systems. We are merely asking you to support the claims of your position. It is not our fault that you cannot do so. And seeing that ID does not make that claim, ID does not have to support it. Our "opponents" have some serious credibility issues, along with way too much ignorance. ET
Entropy doesn't understand the specific mutation problem. It thinks that just cuz living organisms are part of the physical world that they arose without the need of an intelligent designer. It thinks that just cuz we can observe and describe the way they work that it is all non-telic. Sad, really ET
Entropy: That enzymes have ranges of efficiency, sometimes working on substrates they’ve never encountered in their environments, is a “conceptual reason why we should believe that complex protein functions can be deconstructed into simpler, naturally selectable steps.”
How does Entropy’s reasoning work here? Why is it that he thinks that this is an answer to GPuccio’s challenge — #600? Why does he think that the part in italics follows from the preceding part? We can only guess. Enzymes sometimes work on substrates they have never encountered before, therefore there are “ladders” …. First of all, given that homeostasis is a precious and delicate thing, it seems pretty dangerous to any organism if enzymes are suddenly doing new stuff, without any evolutionary history (“working on substrates they’ve never encountered in their environments”). What is the chance that this brand new activity is functional/beneficial for the organism? Put differently, what is the chance that the organism found another step on the ladder? Entropy, if your doctor told you that some of your enzymes were acting on stuff which they had never encountered before, would you consider that good news? Bottom line, I have no idea what Entropy is talking about. One thing is for sure, it is not an answer to GPuccio’s challenge — #600. Origenes
Entropy had its final meltdown ET
And dazz thinks it's burden shifting by following Newton's four rules of scientific reasoning and asking our opponents to support the claims of their position. You really can't make this stuff up ET
You know, just once it would be great for them to post their methodology for determining that natural selection, drift or any other non-telic process produced the US. That way we could compare to see who the whiners are, who has the correct criticisms, and who has the science. ET
Truth Will Set You Free "gpuccio everywhere: THANK YOU!" Many thanks to you! :) Yes, I have been a little overactive, maybe. My wife is not happy at all! :) gpuccio
Entropy at TSZ: April 3, 2018 at 10:01 pm Wow, you are getting better and better! (OK, this is irony)
You claimed that ‘Conscious understanding and purpose are necessary to “put that amount of information together”‘ and that “Conscious systems can do that. Non conscious systems cannot do that, even if the necessary energy is available.”
Yes.
Thus my emphasis. I see non-conscious systems doing that all the time. You seem to forget that this happens in life forms all the time with no consciousness involved. They put those amounts of information together with no conscious activity involved. Most life reproduces with no conscious activity involved. All life forms duplicate their DNA, transcribe it, translate the RNA into proteins, etc., thus putting together quite a bit of information, with no conscious activity involved.
Are you kidding? Do you even understand what you are saying? All life forms duplicate their DNA. Sure. They do that because: a) The information in their DNA is already there b) That information includes the information for DNA replication IOWs, they are only executing information that has been put together in their genomes. Not by them. Your statements are like saying that when I print a Shakespeare sonnet I am putting together the information in it. I, the great poet! :) Again, are you kidding?
Now, of course I understand what you think. You think that the first round, or rounds at strategic points in the evolution of life, come about by magical means.
Not at all. Maybe you should stop trying to divine what I think, and just read what I write. I think that all rounds in the evolution of life that involve new complex functional information, whenever they happen, come about by design.
I mean, by the conscious activity of “God,” I mean, by the conscious activity of a god, I mean, by the conscious activity of The Intelligent Designer (shit I used capitals!),
No.
I mean, by the conscious activity of some unnamed intelligent designer(s).
Yes. For all the rounds as defined by me above.
Maybe that after that it works on its own.
No. The addition of new complex functional information never works on its own.
However, that comes only to show that you prefer to ignore that we can see it happening all the time. Systems being put together with no conscious activity involved.
I am ignoring nothing. It simply doesn't happen.
You pretend that we should ignore that and instead imagine that at some point(s) that wasn’t/isn’t so. that it required/requires, conscious involvement.
I pretend that there is a huge difference between duplicating or executing existing information and generating it. If you don't understand that, you are beyond any hope. And I say it seriously.
I hope the point became clearer now. (Yes, I knew you’d be perplexed. Got your attention, right?)
No and no.
Well, it’s implicit in my name!
OK, here you are kidding. And it's welcome! :)
Now seriously, I have trouble translating the idea into humanly understandable terms
That's certainly true.
but the reason is that scientists, some of them, already understand a lot about information, probability, etc. That knowledge makes counting bits unimpressive.
That's certainly false.
And so, consciousness is not what’s required to put together some amount of information. Energy flow/Systems-out-of-equilibrium are (starting to develop a different way of explaining, please be patient).
What's wrong with your mind? Both consciousness and the rest you cite are necessary to put together complex functional information (not, of course, "some amount of information"). Do you understand what a "logical and" means?
Without energy flow making your consciousness possible, you’d be unable to have any control over those other bits of energy flow.
It's not energy flow that generates my consciousness. I can accept, however, that energy flow is necessary for my consciousness to work in its human state. Nothing more than that.
You should put a hidden camera. Make sure it doesn’t notice though.
You are a better man when you are kidding! :)
Kidding aside, of course not. I didn’t say that any energy flow transforms into writing.
That's reassuring. :)
I said that energy flow transforms into information. Complexity is what happens when systems out of equilibrium move towards equilibrium. For as long as equilibrium isn’t reached, we have information. Yes, that includes “functional” information.
No, it doesn't. I mentioned writing because it is a clear and objective example of complex functional information beyond the 500 bits threshold. You do the same: give an example. But of course you can't.
The point exists, only it’s not that easy to grasp. As you said, you use these conversations to improve your arguments. Well, I might improve my explanations by trying to get this somewhat-not-directly-intuitive idea across.
OK, I can wait.
I was just irony-ing back.
That's really fine! :) I suppose we should set up some specific smiley for irony! You know, a symbolic code... :) (specific smiley here, even if ungraspable) gpuccio
Entropy: April 3, 2018 at 7:42 pm
Excuse me! You’ve been complaining that this question has been in your mind, and in your blog, for eons, only to then ignore the answer? Really? You find a factoid and stop reading any further?
I have not ignored the answer. I have simply said that it is wrong, and why. And I have not stopped reading. I have read it all, and there is no answer in it. If you think that your only example distracted me, please give other examples. Or clear rationales. What I asked: "1) Is there any conceptual reason why we should believe that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases?" What you answered:
Yes there is. This might work better by example. The specificity of lactate dehydrogenase is often measured by its catalyzing of the reaction when putting lactate as a substrate compared to malate as a substrate. This gives us a some-thousand-fold better catalysis of lactate’s oxydation/reduction than of malate with the help of this proteinaceous catalyst. In other words, the protein can catalyze both reactions, only it’s more efficient towards one than towards the other. (A single amino-acid change can reverse this specificity.) Not only that, different LDHs have different specificities towards the two substrates.
OK, this is the "factoid" (your word, not mine) that I have commented upon, and that you apparently have decided to drop. OK, let's ignore it. But you said it yourself: without any example, it will certainly work worse.
Potential for cross reactions are so frequent that it is not uncommon to give the “wrong name” to an enzyme because it was put together with the “wrong” substrate. For example, a substrate that it never encounters in its environment. However, once the correct substrate is found, it looks rather obvious in the efficiency / specificity of the enzyme towards it, compared to the “wrong” substrate. Where does all of this lead? To the realization that enzyme activities are not as perfect as presented in kinder-garden biochemistry, that they range in potential towards substrates other than their “normal” ones, and that, thus, there’s such a thing as “ladders” of specificity available for enzyme evolution. Not only that, after understanding this issue, it seems rather obvious.
There is nothing obvious in this confused fuss. You must explain how some new complex functional protein, for example a new protein superfamily, can arise by gradual steps, each of them giving an increase of function. Or at least why we should believe that it is possible. You only make generic and confused statements about enzymes. What is your point? In the same family, changes of specificity at the active site can happen by simple transitions. We agree on that. I have commented on that. None of those transitions is complex. So, no answer at all here.
The same is true about physical interactions. They are also measured. Why would they if they’re so specific and perfect according to kinder-garden biochemistry? Shouldn’t we just see a complex and be done? Well, no, the formation of the complex depends on the relative concentrations of the proteins in question, which depend on their relative affinities towards each other. Wait! Relative affinities? Yes. They have pseudo-affinities towards other proteins. So, here, again, we see that there’s an obvious “ladder” for protein-protein interactions to evolve, and thus to the evolution of protein complexes.
Even more confusion. Is it possible? Affinities have nothing to do with that. We are speaking of naturally selectable functions. As an example, look at the famous (and infamous) Szostak paper about ATP binding. A good example of Intelligent selection leading to a powerful increase in the extremely weak affinity of selected random sequences. And so? The final, intelligently selected ATP binding was still non naturally selectable! Least of all the original weak affinity in the original sequence population. You should give some example of naturally selectable ladders, going from the reasonable appearance of a new naturally selectable function to the function we observe. Or at least give reasons why we should believe that those ladders exist. You have done neither.
I hope that gives you enough of a hint.
Not at all. Look, just a piece of advice: don't give "hints". Give answers. gpuccio
OMagain at TSZ:
So, evolution can generate a few bits of complexity, but for loads of bits of complexity we need Intelligent Design?
Exactly. I am happy you understood it, in the end! :)
Anyone ask them yet if pennies add up into dollars or what?
Exponentially? That would be a very good thing. If you know how to do it, please let me know. I could use a few billion billion dollars, after all... :) (Please, reconsider the "add up" part, if you don't want to be scolded by your math teacher). gpuccio
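gpuccio's quip about the "add up" part has a precise arithmetic behind it, which a few lines of Python can make explicit (an illustrative sketch only; the 50-bit step size, the step count, and the function name are invented for the example). In the functional-information framework used throughout this thread, bits are defined as -log2 of the fraction of sequences that perform the function, so bits add while the corresponding probabilities multiply:

```python
import math

# Functional information, Szostak/Hazen style (illustrative only):
# bits = -log2(fraction of sequences that perform the function).
def functional_bits(functional_fraction):
    return -math.log2(functional_fraction)

# Bits add, probabilities multiply: ten independent 50-bit constraints
# do not accumulate like pennies -- they compound exponentially.
per_step_bits = 50
steps = 10
total_bits = per_step_bits * steps       # 500 bits, the threshold cited above
total_probability = 2.0 ** -total_bits   # roughly 3e-151

print(total_bits)              # 500
print(total_probability)
```

In other words, a "dollar" of functional information is not a hundred pennies: each added bit halves the probability again, which is why 500 bits corresponds to roughly one chance in 10^150.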
Entropy at TSZ: April 3, 2018 at 6:15 pm Your attempt at solving the hard problem of consciousness is really something. About the god-of-the-gaps nonsense, I have just posted a new comment (#657). I refer you to it. About the hard problem of consciousness, you say:
This is very misinformed. There’s tons of advances in the field of consciousness. What stops people from realizing that there’s at least some levels of scientific understanding is their mystification of consciousness. Nothing else.
You have an interesting way of saying wrong things. There may be "tons of advancements" in some way related to consciousness in general, but I state that there is absolutely nothing that helps in solving the hard problem. And believe me, a lot of people in all fields would agree with me. So please, either give quotes and arguments, or don't be surprised if I consider your statements irrelevant.
Because it’s not some configuration of matter. It’s an activity. Activities are not just configurations. This activity depends on the dynamics between “configurations” of matter, their dynamics (the configurations are not static), energy flows, chemical reactions, etc. But, of course, it’s not just a configuration. Scientific understanding is not about explaining things in terms of configurations of matter. It’s about the whole of physics/chemistry/biology.
You really can't survive without quoting your famous energy flow, can you? Let's put it this way: there is absolutely no way to explain consciousness (the existence of subjective experiences) in terms of configurations of matter that are capable, with the whole of their energy flows, chemistry, and whatever you like, to become conscious and have subjective experiences. If you think otherwise, please give the explicit arguments.
Sorry, but that’s not an acknowledgement, that’s a claim. A gigantic claim based on scientific ignorance and religious inclinations that don’t allow you to see the fallacies involved in the construction of the claim.
It's a claim supported by facts. You give one example of new complex functional information arising in non-conscious systems. You will probably quote computers. But: a) They are of course designed objects b) They cannot really generate new complex functional information. They can of course use the information that has been included in their design to compute things, even including new information from the outside. This is a bigger discussion, but we can have it, if you like. For simplicity, just show for the moment any example of new complex functional information in any non-design system which does not include any designed thing. That will make the discussion easier.
Scientists have understood for quite a while that information arises from the dynamics between energy flows and the nature of physical/chemical “entities.”
Complex functional information? Really? Examples, please. If scientists "have understood" such a thing "for quite a while", it will not be difficult for you to give examples. Do it.
As long as there’s tons of energy, and systems out of equilibrium, we’ll have information, quite complex, all over the place.
Complex functional information, please. That's the only type that counts for the discussion.
Your problem seems to be a double mystification: one for consciousness, and another for information.
I have looked at the definition of "mystification" in the Cambridge Dictionary. Here it is:
the state of feeling very confused because someone or something is impossible to understand
You know, you may be right here. There is one big mystification in me. One, not two. And it has nothing to do with consciousness and information. I am feeling very confused because someone is impossible to understand: guess who? gpuccio
It is exceptional because most phylogenetic studies overwhelmingly support common descent.
No, they don't. They don't even offer a mechanism that can produce the transformations required. You don't even know if there is a mechanism that can do that. The chimp and human genomes have been mapped. Allegedly there is less than a 2% difference in the two genomes. And yet no one has been able to map the genetic differences to the anatomical and physiological differences observed. Similarities are better described by a common design. ET
gpuccio everywhere: THANK YOU! Truth Will Set You Free
ET: "That is why I am an IDist." Me too! :) gpuccio
ET at #653 (and all): I agree. Now I am too tired to look again at TSZ, I will do that later. But I would like to clarify a very important point: the "god-of-the-gaps" argument against ID and why it is completely false. I will refer to my comment #638 (to GlenDavidson), where I give a rather explicit and detailed description of the positive argument for ID. That positive argument has nothing to do with neo-darwinism. And it clearly shows that ID is not a god-of-the-gaps argument. I will clarify that better: a) God-of-the-gaps argument: We don't understand how we can explain A. We also have really no idea of how it could be explained. So, we can simply believe that God did it. b) Correct inference (which is not a god-of-the-gaps argument): We don't understand how we can explain A with a certain type of theories. We also have no idea of how it could be explained by that type of theories. But we have a good explanation with another kind of theory, which is fully derived from facts, because we have observed a comparable effect as the result of other types of causes. So, we explain A by this theory, which refers empirically to what we have observed. No god-of-the-gaps: just a good inference. The reasoning at comment #638 also shows clearly another important point: ID is not just an alternative to neo-darwinism. For the same reason. ID is a positive inference, supported by observed facts. Neo-darwinism is a possible falsification of ID, therefore ID must show that such a falsification is false. That's all. All the fuss that our adversaries go on repeating about the god-of-the-gaps argument and the alternative (or, even worse, default alternative) to neo-darwinism is only nonsense. gpuccio
gpuccio:
For me, science is always a search of the best available explanation.
That is why I am an IDist. And I do not categorically deny Common Descent. I am just saying that right now it is untestable and because of that not science. But that can all change. ET
ET: The chimp-human problem is interesting, for me, especially for the amazing divergence between genotype (very similar) and phenotype (extremely different, especially where central nervous system functions are concerned). That is good evidence, for me, that we still don't understand well where and how functional information, or part of it, is stored and transmitted. I think you agree with that. gpuccio
ET: I respect your point of view. For me, science is always a search of the best available explanation. I make my choices, as well as I can. gpuccio
Glen is clueless:
The trouble is, in science you don’t just get to make up causes for your effects.
We don't. We call upon causes known to produce specific entailments. Entailments that natural selection and drift cannot produce. We start with: "Our ability to be confident of the design of the cilium or intracellular transport rests on the same principles to be confident of the design of anything: the ordering of separate components to achieve an identifiable function that depends sharply on the components." And add: "Might there be some as-yet-undiscovered natural process that would explain biochemical complexity? No one would be foolish enough to categorically deny the possibility. Nonetheless, we can say that if there is such a process, no one has a clue how it would work. Further, it would go against all human experience, like postulating that a natural process might explain computers." So when we observe those criteria and your position has nothing, we infer ID. "Thus, Behe concludes on the basis of our knowledge of present cause-and-effect relationships (in accord with the standard uniformitarian method employed in the historical sciences) that the molecular machines and complex systems we observe in cells can be best explained as the result of an intelligent cause. In brief, molecular motors appear designed because they were designed" (Pg. 72 of Darwinism, Design and Public Education). Choke on that, again. YOU, Glen, don't have a cause that can produce the effects observed. If you did, then ID would be a non-starter. But all you have is your ignorance and your whining. ET
Without a mechanism that can account for the anatomical and physiological differences observed between two alleged related species, like chimps and humans, all "evidence" for Common Descent relies solely on bias and the unimaginative "I can't think of anything else that could produce what I see". ET
CD and ID can coexist. But first you need a mechanism that can produce the transformations required and means to test it. I totally agree that the only way Common Descent is true is by means of intelligent design evolution. Do you realize that the age of the earth depends on how it was formed? A 4.5-billion-year-old earth relies on untestable assumptions. One being that the proto-earth was hot enough to melt all crystals in the material that formed the earth - untestable. And if the crystals survived, then we are measuring them and we don't know when they formed. ET
To all: I just wanted to mention this: This thread is apparently very successful, both in terms of page views and comments (including those at TSZ). It is probably the most successful thread I have ever started, and I am very happy about that. But at the beginning, and probably for almost one month, it seemed that very few people were attracted to look at it, least of all to comment. So, I am especially grateful to three friends who have had the enthusiasm and creativity and goodwill to stay with me and contribute to this thread when it was really a "private party". They are, in strict alphabetical order: DATCG Dionisio Upright BiPed Thank you, guys! :) gpuccio
To all: This is 3 April 2018. The insulin pathway is a very important growth regulator network. Extremely important, I would say. And it is regulated by: guess what? The ubiquitin system. By, guess what? A crosstalk. I know, I know, it's becoming repetitive. But amazingly repetitive. Here it is: Ubiquitylation Pathways In Insulin Signaling and Organismal Homeostasis. https://onlinelibrary.wiley.com/doi/full/10.1002/bies.201700223
Abstract The insulin/insulin-like growth factor-1 (IGF-1) signaling (IIS) pathway is a pivotal genetic program regulating cell growth, tissue development, metabolic physiology, and longevity of multicellular organisms. IIS integrates a fine-tuned cascade of signaling events induced by insulin/IGF-1, which is precisely controlled by post-translational modifications. The ubiquitin/proteasome-system (UPS) influences the functionality of IIS through inducible ubiquitylation pathways that regulate internalization of the insulin/IGF-1 receptor, the stability of downstream insulin/IGF-1 signaling targets, and activity of nuclear receptors for control of gene expression. An age-related decline in UPS activity is often associated with an impairment of IIS, contributing to pathologies such as cancer, diabetes, cardiovascular, and neurodegenerative disorders. Recent findings identified a key role of diverse ubiquitin modifications in insulin signaling decisions, which governs dynamic adaption upon environmental and physiological changes. In this review, we discuss the mutual crosstalk between ubiquitin and insulin signaling pathways in the context of cellular and organismal homeostasis.
The paper is open access, so the usual lovers of simplicity can have a look at Fig. 2. The paper discusses in detail the roles of three different E3 ligases: MDM2, NEDD4 and CHIP. But there are many more involved, as shown in Table 1. NEDD4 had already surfaced here for its role in neurons and synapses (see comment #49). It is a 1319 AAs long protein. Here is its "function" section in Uniprot:
E3 ubiquitin-protein ligase which accepts ubiquitin from an E2 ubiquitin-conjugating enzyme in the form of a thioester and then directly transfers the ubiquitin to targeted substrates. Specifically ubiquitinates 'Lys-63' in target proteins (PubMed:23644597). Involved in the pathway leading to the degradation of VEGFR-2/KDFR, independently of its ubiquitin-ligase activity. Monoubiquitinates IGF1R at multiple sites, thus leading to receptor internalization and degradation in lysosomes. Ubiquitinates FGFR1, leading to receptor internalization and degradation in lysosomes. Promotes ubiquitination of RAPGEF2. According to PubMed:18562292 the direct link between NEDD4 and PTEN regulation through polyubiquitination described in PubMed:17218260 is questionable. Involved in ubiquitination of ERBB4 intracellular domain E4ICD. Involved in the budding of many viruses. Part of a signaling complex composed of NEDD4, RAP2A and TNIK which regulates neuronal dendrite extension and arborization during development. Ubiquitinates TNK2 and regulates EGF-induced degradation of EGFR and TNF2. Ubiquitinates BRAT1 and this ubiquitination is enhanced in the presence of NDFIP1
The internalization of the IGF1R receptor is the connection to the insulin pathway. gpuccio
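The Uniprot entry above describes NEDD4 "writing" different ubiquitin marks (monoubiquitination of IGF1R, K63 chains on other substrates) that downstream machinery then "reads". That tag-to-outcome mapping, the semiosis discussed throughout this thread, can be caricatured in a few lines of Python. The chain-type/outcome pairings below are the commonly cited textbook ones (K48 polyubiquitin for proteasomal degradation, K63 for signaling/endosomal sorting, monoubiquitin for receptor internalization); the function names, data layout, and the p53 example in the test are invented for illustration only:

```python
# Toy model of the "ubiquitin code": an E3 ligase writes a tag onto a
# substrate, and a downstream reader maps tag -> cellular fate. The
# pairings are the commonly cited ones; everything else is illustrative.

UBIQUITIN_CODE = {
    ("polyUb", "K48"): "proteasomal degradation",
    ("polyUb", "K63"): "signaling / endosomal sorting",
    ("monoUb", None):  "receptor internalization",
}

def e3_ligase_writes(substrate, chain, linkage=None):
    """An E3 ligase 'writes' a ubiquitin tag onto a substrate."""
    return {"substrate": substrate, "tag": (chain, linkage)}

def reader_decodes(tagged):
    """A downstream reader 'decodes' the tag into a cellular fate."""
    return UBIQUITIN_CODE.get(tagged["tag"], "unknown fate")

# NEDD4 monoubiquitinating IGF1R, as in the quoted Uniprot entry:
tagged = e3_ligase_writes("IGF1R", "monoUb")
print(reader_decodes(tagged))   # receptor internalization
```

The point of the sketch is only that the mapping is conventional: nothing in the chemistry of a K48 linkage "means" degradation except through the reader machinery, just as nothing in this dictionary forces a tag onto an outcome.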
(following along when I can) It appears that your friend Entropy just doesn't get it. He may soil himself when he figures out that semiosis is the primary physical condition that makes Darwinian evolution possible in the first place. Without semiosis, evolution doesn't exist. Hello? Upright BiPed
Truth Will Set You Free and ET: Perhaps you could look at my exchanges with Bill Cole, here, starting at #518. Truth Will Set You Free, you say: "CD seems to support Darwinian evolution and undermine ID. How can CD and ID coexist?" I have made it clear that the form of CD that I am considering is designed CD. It is not only compatible with ID, it is the best form of support for ID, as I have tried to demonstrate with my biological OPs, including this one. There is strong evidence in favour of CD, as I have discussed here, but the best evidence comes from the pattern of Ks, neutral variation, in time. I have discussed the details in my comments to Bill Cole here, for example at #525, 526, 538. What is the meaning of CD in this context? It's simple. It means that the proteins that we observe have been physically transmitted from species to species (of course, at DNA level), because they retain the neutral variation that accumulates in the course of evolutionary time. That cannot be explained without some physical descent of the proteins themselves. But, of course, the proteins, their corresponding DNA, and all the rest of functional information have been constantly re-engineered (or just engineered, when a new protein superfamily or family appears), together with all the necessary regulatory information, and whatever is necessary, each time that new functions or new species or new phyla, or whatever, appear in evolutionary history. All that is new and complex and functional is of course the result of conscious design. The idea is: design seems to take place re-using physically what already exists, as when you design new software starting from the already existing code of the previous version, and introduce all the desired variation. Therefore, both design and descent are true. I am not the only one to believe that. Behe agrees too, I believe, and probably many others in the ID field. ET, you say: "As for CD you still need a mechanism. 
Descent with modification is too vague to be scientific. What gets modified? Saying “DNA” is too vague to be scientific. And given that DNA does not determine form, it is obvious that modifying DNA alone will not get to Common Descent." The only modifications that are linked to descent are neutral modifications. Those have nothing to do with functional information, but are only a mark of passive descent. The "modifications" that you refer to, those that make the new species or just the new functional information, are modifications by design. I agree with you that they probably do not work only at the DNA level. Design can and does act at all possible levels of functional information, those that we understand and those that we still don't understand. I hope that I have clarified my views about this topic. gpuccio
Unlike petrushka, the entertainment just keeps on going: watching evos deny the obvious and contort reality, all the while being totally impotent to support the claims of their position. Pure. Comedy. Gold. ET
And still no evidence that natural selection or drift did it. They don't even know how to test the claim. As for CD you still need a mechanism. Descent with modification is too vague to be scientific. What gets modified? Saying "DNA" is too vague to be scientific. And given that DNA does not determine form, it is obvious that modifying DNA alone will not get to Common Descent. ET
gpuccio @ 642: CD seems to support Darwinian evolution and undermine ID. How can CD and ID coexist? Please feel free to point me to another post/source. Thank you. Truth Will Set You Free
GlenDavidson: April 3, 2018 at 4:09 pm
Basically, you assume that DNA is symbolic in God’s mind (yes, we know), and never imagine that a code might exist because, besides the ability of coded systems to store information compactly, sequential codes work very well for producing the sequences of proteins, among other things.
Please, read my comment #641. It answers all your "questions". This is the definition of "symbol" by Wikipedia, just to try one: "A symbol is a mark, sign, or word that indicates, signifies, or is understood as representing an idea, object, or relationship." Therefore, a mark which indicates an object or a relationship is a symbol. Therefore, ubiquitin tags, which indicate an outcome, are symbols. God's mind has nothing to do with it. Or any mind, for that matter. Semiosis, as defined, is not a priori a mind thing. It is only empirically found in designed systems. Of course a code exists because it is functional. Stop stating trivialities. The problem is: how did it arise? gpuccio
CharlieM at TSZ: April 3, 2018 at 3:40 pm
I’ve been reluctant to comment in this thread as I can see that gpuccio is trying to keep his argument focused and I don’t have sufficient knowledge of the ubiquitin system to make specific comments about it.
I appreciate that attitude.
But I do have a comment that bears on RM, NS and common descent in general. I would like to know in what way a finding that I linked to in a previous thread is explainable in terms of the consistency mentioned above? --- How did the exact same 63 AA sequence come to appear in both species? Can the probability be estimated? I don’t know.
If you are saying that this is another empirical evidence for CD, I agree. But why say that while quoting me? I believe in common descent. How many times should I say that, to be believed? My quote said (emphasis added): "But it is also true that, if CD were not true (just a mental hypothesis, beware!) then the only explanation for the homologies in proteins would be common design. That’s not what I believe, but it is a true and reasonable consideration." I am just pointing to a logical aspect, which is certainly true: a) If CD is true (what I believe), then it's good that my arguments for design are based on common descent (however you judge them) b) If CD were not true (just a logical possibility), then neo-darwinism would be automatically falsified, while design explanations would still be possible. It's just a logical consideration, and it has nothing to do with empirical evidence. Empirical evidence is in favour of CD, as I have always said. gpuccio
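As an editorial aside on CharlieM's probability question quoted above: under a deliberately naive null model, in which each of the 63 positions is an independent uniform draw from the 20 amino acids (an assumption made only for illustration, not a model of real substitution processes), the chance of an exact match can be sketched in a few lines:

```python
import math

POSITIONS = 63   # length of the shared amino acid sequence in question
ALPHABET = 20    # the standard amino acids

# Probability that a random sequence matches a fixed 63-AA target exactly,
# assuming independent uniform draws at every position.
p_match = (1 / ALPHABET) ** POSITIONS

# The same quantity expressed in bits, the unit used elsewhere in this thread.
bits = POSITIONS * math.log2(ALPHABET)

print(f"P(exact match) = {p_match:.2e}")   # on the order of 1e-82
print(f"specification  = {bits:.1f} bits")
```

Of course, real homologous sequences are related by descent and selection, not independent draws, so this number only shows the scale of the coincidence being asked about, not a realistic probability.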
Entropy at TSZ: April 3, 2018 at 1:57 pm
Hum. I suspect you’re preparing to redefine …
It's no redefinition at all. It's the same definition I have always given (you can check). It's the same definition UB has always given. No one is redefining anything, I am simply defining it, to clarify what I mean. It is also a very good definition, IMO: it is clear, explicit, and we can work with it. It also corresponds very well to the common use of the word. It's the only sense in which I use the word "semiosis". And all my inferences about semiosis are about that definition of the word.
We could go on and on discussing whether the genetic code is arbitrarily mapped, or not, but that would be useless to your aims and my point.
We should. It would certainly not be useless to my aims. Maybe to your point... :)
If we went for “arbitrary mapping” then there’s plenty of ways it would have worked evolutionarily/naturally speaking. Lots of options.
Your opinion. Bring the evidence. The simple fact is that there is no system of type b) (see my comment #638) which is semiotic, in the sense defined. And there are lots of systems in c) that are strongly semiotic.
Fewer “functional” bits than the code would “contain” if it wasn’t arbitrarily mapped.
Is that an explanation? A pathway? Please, explain your explanation, because, believe me, it's not clear at all.
At the same time, nothing linking this “mapping” to consciousness other than calling it “semiosis,” and thus extrapolating from the analogy, rather than from the facts.
Nonsense. Of course it's an inference from analogy based on facts. See my comment #638. And what links it to consciousness is that consciousness can do it, and other things apparently cannot. That is a good empirical link, for me.
I’d go much farther into the “arbitrariness” of the ubiquitin system and propose that the specific sequence of ubiquitin might be a “frozen” accident. The “arbitrary” mapping could have been to many other peptides to act as ubiquitins, it just happens to be that one. The more arbitrary the better. Fewer “functional” bits to talk about. Right?
Wrong. You really seem not to understand the arguments at all. The semiotic aspect is not in the ubiquitin molecule. That's just functional complexity, and as usual you just explain it as an "accident". My compliments! The semiotic aspect is in using that molecule to build many different tags, which are arbitrarily mapped to different outcomes in different systems. That requires a lot of functional bits: a lot of different proteins are involved, to write and read the different tags in the appropriate symbolic way.
The arbitrary mapping? Sure. Calling it “semiosis”? Nope.
Do you prefer another word? Really, it's not a problem of words, but of concepts. If you want, just in the discussion with you, I will call it "Amy", or whatever you like.
Of course they are metaphors and analogies. You demonstrate so when you take but one characteristic from semiosis/symbolic codes, and then proceed towards an equivocation to infer that conscious activity is involved.
"Metaphors" and "analogies" are quite different things. Your generalizations are amazingly gross! I have said many times that the design inference is an inference from analogy. A very good inference from analogy. Again, see my comment #638. But I have used no metaphors about semiosis, or functional complexity.
I think you need to seriously consider the potential for anthropomorphisms.
I have seriously considered it, and I have seriously decided that there is no anthropomorphism in my reasonings, except for the general anthropomorphism which is present in all human science, as already discussed. Maybe you should seriously consider the potential for inconsistency in how you use words and concepts. gpuccio
petrushka at TSZ: "Odd, the entertainment aspect ended for me four or five years ago. I stick around here because some really bright people post stuff I haven’t heard before." That's a very good statement. I like it, and I like it from you. gpuccio
GlenDavidson: April 3, 2018 at 1:03 am
Whoa, now, a design system includes design processes, and a non-design system doesn’t? My God, the intellectual output of ID is mind-numbing, uh, blowing.
Really, what's the problem with you guys? dazz is dazz. But you seem capable of human reasoning (see previous comment). So, why do you behave this way? dazz has quoted a definition of mine. A clear definition, very useful to clarify what I mean when I speak of a design system. Have you any doubts that it is a definition, and not an argument or an inference? OK, to make it easy even for you guys and for your refined skeptical mind, I quote again my statement with some obvious emphasis:
We can define a system “a design system” if, given an initial state A (which can be designed or not designed, indifferently), the evolution of the system in time, starting from A and up to another state A1, includes one or more design processes. Conversely, we can define a system “a non design system” if, given an initial state A (which can be designed or not designed, indifferently), the evolution of the system in time, starting from A and up to another state A1, does not include any design process.
So, what's the matter? Have you guys problems with the concept of definition? Or have you problems with the concept of reasoning? These are exactly the things that completely degrade the discussion at TSZ. And I mean it, with all respect for you as a person. gpuccio
GlenDavidson at TSZ: April 3, 2018 at 12:55 am Very reasonable post, thank you. Of course I agree with many of the things you say, while obviously disagreeing on the main basic conclusions. Which is how it should be. For the sake of discussion, I will try to explain the points that I disagree with. If you have read my comment #620 here in answer to Entropy, you know of my reservations about the ambiguous concept of nature, and you also know that, under a definition of nature that I find appropriate for scientific purposes, consciousness is certainly part of nature. Therefore, design is part of nature, too. So, we must simply make a distinction between:
a) Natural systems (in the sense defined) where there is no obvious intervention of consciousness
b) Natural systems (in the sense defined) where there is some obvious intervention of consciousness
and, I would say:
c) Natural systems (in the sense defined) for which we are trying to assess the question (IOWs, the biological world).
Now my point is very simple. No system of the a) type can generate complex functional information. Systems of the b) type can do that very easily and in huge amounts. The c) system shows huge amounts of functional information. From those premises, I state that the best explanation for c) is that c) is a system of type b). What has neo-darwinism to do with that? Nothing, of course. This is the positive argument for ID, and refuting darwinism has no role in it, as you can see. You say that it is an argument from analogy. You are right, it is. I have always said that. But what's the problem? Good arguments from analogy are the foundation of all that we know, both in science and in philosophy. We live daily by the strongest argument from analogy of all: the inference that other humans are conscious, like ourselves. Nobody seems to doubt that inference, just because it is an inference from analogy. 
So, no sincere person who wants to make sense should, IMO, doubt that biological objects are designed. Again, it's as simple as that. But then, what's the fuss about falsifying neo-darwinism? Why do I write long OPs about that point? That's rather simple, too. Neo-darwinism is a theory which pretends to be an explanation for c) as a system of type a). If that were true, it would of course falsify ID, and the reasoning I have presented above. (By the way, that shows clearly that ID theory is completely falsifiable, and therefore is a scientific theory in the Popper sense). But it is not true. Neo-darwinism is no explanation at all for c). Therefore ID is not falsified, and it remains the best explanation (indeed, the only available scientific explanation) for c). That's why IDists, including myself, spend a lot of time falsifying neo-darwinism. Not that it is difficult: it is a really easy task. But neo-darwinists are obstinate believers, and there are so many of them! :) Another point I would like to clarify: life is not the same thing as functional complexity. ID is about functional complexity, not about life. c) is the system of things that are alive. And c) is also the system where we find functional complexity, but have no direct knowledge of the possible designer or design process, so we have to infer design from the designed objects alone. But still, functional information does not explain life. Not with our present knowledge, however great it may be. Life, as you will certainly know, is even difficult to define satisfactorily. That's why I never refer to life in my reasonings, but only to the functional complexity associated with life. gpuccio
entropy:
I see non-conscious systems doing that all the time. You seem to forget that this happens in life forms all the time with no consciousness involved.
Question-begging. The whole debate is that those life forms were the product of intelligent design and not mother nature. So yes, of course it happens because it was intelligently designed to do so. Not because of some collection of genetic accidents, errors and mistakes. ET
And more cluelessness:
Evolutionary algorithms running on computers.
Yes, they do. And they are examples of evolution by means of intelligent design. Those algorithms have pre-specified goals and are given the coding and resources to reach those goals. Natural selection doesn't have any goals beyond survival or elimination. All of those algorithms are intelligently designed to produce specific results. Natural selection isn't like that at all. For example, the antenna algorithm was going to produce the antenna it was designed to produce* and nothing else. *the program had the specifications for the parameters that had to be met by the antenna ET
To all: OK, that's enough for today. I will go on tomorrow with the parallel work, as long as I have the strength and goodwill... gpuccio
Origenes: "Is this an attempt to set the world record for misunderstanding?" It's probably a good candidate. gpuccio
ET: I don't mean that it is not "a conscious effort". It could well be. I mean that they do not understand the details of how proteins work, or of biochemistry. Or the genetic code, or the rules of metabolism. We too can consciously operate our nervous system, muscles, and many other things, without really knowing in detail how they work. gpuccio
Good grief, is the garbage being written by some at TSZ what Elizabeth Liddle really wanted?
They don't think it's garbage. And seeing what she tries to pass off as "reasoning" I would think she would tend to agree with them.
Apparently so.
They get to do over there what they couldn't get away with over here. So, yeah. If they actually had coherent arguments they would all still be posting here. ET
gpuccio: According to what Shapiro writes, it appears that the E. coli take some time to readjust, and that time is used to get geared up for the other sugar (glucose was the preferred sugar and lactose was the other). The adjustment period isn't just for waiting. There seems to be a conscious effort (or at least part of the design) to reconfigure. Or do you think that just happens? ET
Entropy @ GPuccio @
GPuccio: I say that functional complexity is beyond the range of non conscious systems, of systems where there is no intervention of a conscious intelligent designer. This is the whole point. I refer to consciousness, and to the subjective experiences of understanding meaning and having purpose, as the real explanations of how functional complexity is generated.
A conscious intelligent designer explains functional complexity in life.
GPuccio: It’s rather simple.
Or so you thought …. Entropy succeeds in misunderstanding your explanation:
Entropy: “I haven’t seen a single life form that needs to consciously control its metabolism, or its ubiquitin-related processes.”
What happened here? Incredible. Is this an attempt to set the world record for misunderstanding? Only a very wise patient man responds like GPuccio did:
GPuccio: Nobody, of course, is suggesting that animals, or humans, consciously control their metabolism, or similar things. … I don’t think that they consciously understand how that metabolism works.
Origenes
Good grief, is the garbage being written by some at TSZ what Elizabeth Liddle really wanted? Apparently so. Upright BiPed
ET: I can absolutely agree that bacteria, animals, plants and humans can make choices that affect their metabolism. But my point is that I don't think that they consciously understand how that metabolism works. gpuccio
Entropy at TSZ: My compliments! You are the first who tries to answer my challenge! That's something, and I really appreciate it. :) Unfortunately, your argument is wrong.
Yes there is. This might work better by example. The specificity of lactate dehydrogenase is often measured by its catalyzing of the reaction when putting lactate as a substrate compared to malate as a substrate. This gives us some thousand-fold better catalysis of lactate’s oxidation/reduction than of malate with the help of this proteinaceous catalyst. In other words, the protein can catalyze both reactions, only it’s more efficient towards one than towards the other. (A single amino-acid change can reverse this specificity.) Not only that, different LDHs have different specificities towards the two substrates.
(Emphasis mine) Maybe you have not understood the question. I will write it again for your convenience: 1) Is there any conceptual reason why we should believe that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases? 2) Is there any evidence from facts that supports the hypothesis that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases? Excuse me, you are making an example of a single one-AA step which changes the specificity of the active site of lactate dehydrogenase, a 332-AA protein. Where is the ladder in your argument? You have simply pointed to one step which changes the specificity of an already existing biochemical function. I know very well the simple concepts of biochemistry that you express. And I know very well that single or double mutations at the active site can change specificity. I have also debated many times here that this kind of microevolution is potentially in the range of RV + NS. In particular, I have defended and supported many times here that exact mechanism for the emergence of nylonase: a change of specificity caused by a couple of AA mutations at the active site. My point in my challenge, of course, is completely different. It is that complex functions, for example functions that depend on hundreds of AA positions, cannot be deconstructed into simple naturally selectable steps. You have not answered that. But at least you have tried. :) gpuccio
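For context on the lactate dehydrogenase example: enzyme specificity of this kind is conventionally quantified as a ratio of catalytic efficiencies (kcat/Km) toward the two substrates. The numbers below are illustrative placeholders chosen to match the "thousand-fold" scale mentioned in the exchange, not measured LDH data:

```python
# Hypothetical catalytic efficiencies (kcat/Km, in M^-1 s^-1) -- placeholders only.
kcat_km_lactate = 2.0e6   # efficiency toward lactate (illustrative value)
kcat_km_malate = 1.0e3    # efficiency toward malate (illustrative value)

# The specificity ratio is the standard way to state a substrate preference.
specificity_ratio = kcat_km_lactate / kcat_km_malate
print(f"preference for lactate over malate: {specificity_ratio:,.0f}-fold")
```

A single active-site substitution that raises the malate efficiency and lowers the lactate efficiency would invert this ratio, which is the kind of one-step specificity change both sides of the exchange agree is within reach of RV + NS.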
gpuccio:
Nobody, of course, is suggesting that animals, or humans, consciously control their metabolism, or similar things.
E. coli isn't an animal or a human, but reports have it that they do control their metabolism. Monod discovered what he called "diauxie": E. coli were fed two types of sugar, one at a time. Then he accidentally mixed the two and observed that the E. coli used the preferred sugar first and, when it was gone, took a certain time to readjust their metabolism and started using the other sugar. See Shapiro, "Evolution: A View from the 21st Century" (2011). ET
Energy at TSZ:
gpuccio@UD, You can think as you like, but I am rather sure of the reason: those who defend neo-darwinism are completely dogmatic about it, and cannot accept the obvious failure of their theory. Unfortunately, the worldview implications overcome the scientific attitude. You should avoid making these kinds of statements. I know you get something similar from this side of the fence, but just remember: those with religious inclinations abound on your side. So in trying to denigrate people on “the other side,” you end up denigrating most of those who agree with you. Even if they don’t “get a clue.”
Again, you misunderstand me. I was not retaliating because I am badly treated at TSZ (although I am). I was expressing a thought that is very important for me. I agree that there are a lot of people whose worldview implications overcome the scientific attitude, on both sides. And I was not trying to denigrate people. I have said often that I respect faith commitments, of all kinds, including of course those of atheists. I have no problem with that. I only ask that those commitments be kept as under control as possible when a scientific discussion takes place. In my statement, I was not referring to generic people who defend neo-darwinism, but rather to scientists, to people who have a role as science makers, and therefore have greater responsibility. IOWs, I was denouncing a general cognitive bias of modern science, in particular biology. You will of course disagree, but that idea is very serious, and it is not intended to denigrate individual people, but rather to point to a serious error of modern thought. That said, I am usually very tolerant of the faith commitments of people. But, when the discussion requires it, I just point to the overlap between faith and science, to make my point. I have done that with people on your side, and with people on my side. As respectfully as possible. gpuccio
Energy at TSZ: "All this time I thought we were starting to have a decent and respectful conversation." Just for a little bit of irony? Are you so sensitive? :) gpuccio
Energy at TSZ: "Sure. Point to the ones you were able to produce without any dependence on energy flow." You seem a little obsessed by this point of energy flow. I really cannot understand why. Of course we use energy flow. Everything which has some physical part uses energy flow. And so? I use energy flow and can write functionally complex comments because I am conscious. IOWs, I consciously use energy flow. My coffee machine uses energy flow, like me. But it cannot write complex functional comments (at least, I have never caught it doing that). Your point is non-existent (see how good I am becoming, I have not used "silly" or "ridiculous" :) ) gpuccio
Entropy at TSZ: "I wasn’t trying to be insulting. It’s pathetic to put the cart before the horse because of the immensity of nature. I don’t think that you’re ridiculous." Neither was I. I will never think or say that you are ridiculous or pathetic, but I can certainly think or say that some of your statements are, at least IMO. Even the best and most intelligent people can say something pathetic or ridiculous. Including me, of course. The important thing is that the confrontation stay between ideas, never a fight between persons. gpuccio
Entropy at TSZ: This seems unrelated enough to the issues at #620 to be answered:
That’s not what we see. We see a huge nature, with lots of life forms doing their stuff with no intervention by any designer. I haven’t seen a single life form that needs to consciously control its metabolism, or its ubiquitin-related processes. When I look closer, I see that these things happen on their own. I don’t see designers intervening. I see energy flow though. Lots of it. Not only that, energy flow seems rather fundamental.
"I haven’t seen a single life form that needs to consciously control its metabolism, or its ubiquitin-related processes." This is really silly. Nobody, of course, is suggesting that animals, or humans, consciously control their metabolism, or similar things. The idea is that metabolism and ubiquitin can work because of very specific and highly functionally complex configurations that were designed. Consciously. Not by the animals themselves, of course, but by the biological designer or designers. I am really surprised that you recur to such nonsense in our discussion. gpuccio
Entropy at TSZ: Comment April 2, 2018 at 11:50 pm
I think you should stop for a moment and consider the issue a bit more carefully, because take a look back. Your claim is that the “complexity,” “functional information,” or whatever you want to call it, is beyond nature. That is clearly pointing to a “gap.” Pointing to something you cannot understand how it can be done naturally. Sorry, but that’s not just god-of-the-gaps, but even classic god-of-the-gaps. Let’s continue regardless.
I see that at last you explicitly take up the "nature" argument with me. So I can give you an important answer. a) I usually don't use the terms "nature" and "naturalism" in my reasonings. Why? Because they are completely ambiguous. Now I will give you three different definitions of nature:
1) All that exists (= reality)
2) All that can be observed
3) All that we can explain with the scientific theories we have at present
Now, while I definitely prefer definition 1) (but I also prefer to use the term reality), I can accept, for scientific discussions, definition 2), which is only a little more restrictive, cutting out essentially only possible fully transcendent entities. But definition 3), which is probably the most commonly used, more or less unconsciously, by the people on your side, is completely anti-scientific. It is a form of cognitive bias of the highest type: believing that only explanations which are compatible with what we already know, at least in its essential form, can be accepted. I definitely reject that definition and that use. So, let's go to your statements, using definition 2). "I think you should stop for a moment and consider the issue a bit more carefully" I always like to stop for a moment. Maybe two or three. "Your claim is that the “complexity,” “functional information,” or whatever you want to call it, is beyond nature." False. I never used that word. I say that functional complexity is beyond the range of non-conscious systems, of systems where there is no intervention of a conscious intelligent designer. This is the whole point. I refer to consciousness, and to the subjective experiences of understanding meaning and having purpose, as the real explanations of how functional complexity is generated. It's rather simple. Now, are consciousness, and the subjective experiences of understanding meaning and having purpose, part of "nature"? For me, there are no doubts. Consciousness can be observed. By each of us, in our personal consciousness. 
The same place where we observe all the rest. The same is true for the subjective experiences mentioned above. Therefore, they are part of nature. The problem is: can consciousness, and the attached experiences, be part of nature according to definition 3)? Of course not. Because there is no scientific theory that can explain consciousness as the result of some configuration of matter. See Chalmers, the hard problem. And our current scientific theories deal essentially with configurations of matter and related concepts (energy, forces, and so on). But of course any scientific theory can include consciousness as an observable, and study the connections between conscious experiences and configurations of matter. That is completely scientific, even if we cannot explain what consciousness is in terms of configurations of matter. We can certainly study and understand what consciousness can do at the level of configurations of matter. That's what ID does. ID just acknowledges a very simple truth: that consciousness, and consciousness alone, can generate those specific configurations of matter that exhibit functional complexity. It's simple and it's true. I will stop here for a moment, maybe two or three. Because, if you don't give me feedback about that, it's useless to go on, at least for everything that directly relates to this issue. gpuccio
To all: Wow, a whole new page of comments at TSZ. That will be tiresome. Ok, let's see what is worthwhile. gpuccio
bill cole: OK, interesting discussion. What do you mean by saying that "the eukaryotic cell is north of 100k bits from its 'ancestors'"? Do you mean that the jump from prokaryotes to eukaryotes is more than 100k bits? If you mean that, it is certainly true, but I would say that it is certainly much more than that. Just think: the jump from pre-eukaryotes to eukaryotes is 1.7 million bits (in terms of human conserved information)! :) gpuccio
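For readers keeping track of the units: in gpuccio's usage, a fully conserved amino acid position carries at most log2(20) ≈ 4.32 bits, so bit figures like those discussed above can be converted back into conserved positions. The sketch below only illustrates that conversion; the bit figures themselves are the ones claimed in the comments, not independent measurements:

```python
import math

BITS_PER_AA = math.log2(20)  # max information of one fully conserved AA position

for label, bits in [("100k-bit jump", 100_000),
                    ("pre-eukaryote to eukaryote jump", 1_700_000)]:
    # Convert a bit figure into the equivalent number of conserved positions.
    positions = bits / BITS_PER_AA
    print(f"{label}: {bits} bits ≈ {positions:,.0f} conserved AA positions")
```

By this conversion, 1.7 million bits corresponds to roughly 390,000 fully conserved positions, i.e. on the order of a thousand average-sized proteins.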
My discussion with Corneel at TSZ: Corneel,
The “preserved historically” part is the issue for you. We do not have access to the genomes of extinct organisms, so the only way to detect conserved sequences is by comparing the genomes of extant organisms and applying phylogenetic methods to it. If you want to have this piece of the cake, you need to accept common descent.
Have you followed the argument at UD? You will see that there is no knockout punch here. Let's assume, for argument's sake, that genomes are custom designed for species groups with very limited common descent (between fish, for example). We also assume that the blueprint for a specific protein function is the same across species. If the fossil record is true and species emerged at different times, and we see proteins preserved (the same AA sequence between species of different ages), then gpuccio's argument is supported without common descent. He makes the case that it is stronger with the common descent assumption, and I agree with him.
Didn’t gpuccio demonstrate that “conserved functional information” (whatever that is) has been added into our lineage during the course of evolution? Is that not the evidence of interim sequences that you require? We are merely disputing the source of this information. Unless you reject gpuccio’s conclusions of course.
Only if the evidence pointed to transitions with very small information jumps, say under 100 bits. The eukaryotic cell is north of 100k bits from its "ancestors", so we can't even get the party started without design.
Is that a fact? How come multicellular bacteria can do without it then?
I am not sure they can, but let's assume for the sake of argument that they can. The reason for variable cell rates is the requirement for rapid embryo development (including cellular differentiation) moving to stasis as the organism matures. This is important for sexual species with large cell counts. It is also critical for differentiation, as different tissues have different regeneration requirements. I would be pretty confident with the hypothesis that the ubiquitin system had to be optimized and ready for the Cambrian explosion. You made good points. I will post at UD. bill cole
entropy:
Those systems are produced by processes that have nothing to do with eyes and minds. This is clear and undeniable evidence if you know how to look at it.
That is only your opinion and an uneducated opinion at that. If the evidence is so clear then why don't you just present it? ET
All one has to do is google semiosis and biology and you can see it is an active field of study: semiosis and biology But entropy doesn't care as it is OK making a fool out of itself with its ignorant denials ET
Entropy: … you cannot know how many things could have worked just as well.
Given common descent, we have to get from e.g. pre-vertebrates to vertebrates. Note that it is irrelevant to our quest whether, in theory, other sequences would have been available to pre-vertebrates or not. The sequences present in pre-vertebrates are what they are — they are the starting point. Okay, so “other possible sequences” can be relevant only after this starting point. This is where GPuccio’s challenge comes in:
GPuccio: Will anyone on the other side answer the following two simple questions? 1) Is there any conceptual reason why we should believe that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases? 2) Is there any evidence from facts that supports the hypothesis that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases?
The problem brings to mind Richard Dawkins saying:
But, however many ways there may be of being alive, it is certain that there are vastly more ways of being dead, or rather not alive.
Origenes
To all: This is April 2018: Guiding Mitotic Progression by Crosstalk between Post-translational Modifications. http://www.cell.com/trends/biochemical-sciences/abstract/S0968-0004(18)30025-2
Abstract: Cell division is tightly regulated to disentangle copied chromosomes in an orderly manner and prevent loss of genome integrity. During mitosis, transcriptional activity is limited and post-translational modifications (PTMs) are responsible for functional protein regulation. Essential mitotic regulators, including polo-like kinase 1 (PLK1) and cyclin-dependent kinases (CDK), as well as the anaphase-promoting complex/cyclosome (APC/C), are members of the enzymatic machinery responsible for protein modification. Interestingly, communication between PTMs ensures the essential tight and timely control during all consecutive phases of mitosis. Here, we present an overview of current concepts and understanding of crosstalk between PTMs regulating mitotic progression. Highlights: Mitotic progression is tightly regulated by many different types of post-translational modifications. Crosstalk between a high variety of post-translational modifications regulates each sequential phase of mitosis. The complexity of crosstalk increases as more and more different types of post-translational modifications are being identified. Post-translational modifications can target groups of functionally related proteins to regulate cellular functions in a protein group-like manner.
Crosstalk? Again? So many anthropomorphisms... :) gpuccio
To all: As we are discussing (with Entropy) the recognition of the signal, this is a very recent paper about that: Structural basis for ubiquitin recognition and autoubiquitination by Rabex-5 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1578505/ But, as is often the case, things are even more complex than they appear:
Abstract: Rabex-5 is an exchange factor for Rab5, a master regulator of endosomal trafficking. Rabex-5 binds monoubiquitin, undergoes covalent ubiquitination, and contains an intrinsic ubiquitin E3 ligase activity, all of which require an N-terminal A20 zinc finger and an immediately C-terminal helix. The structure of the N-terminal portion of Rabex-5 bound to ubiquitin at 2.5 Å resolution shows that Rabex-5:ubiquitin interactions occur at two sites. The first site is a new type of ubiquitin binding domain, an inverted ubiquitin interaction motif (IUIM), that binds with ~29 µM affinity to the canonical Ile44 hydrophobic patch on ubiquitin. The second is a diaromatic patch on the A20 zinc finger, which binds with ~22 µM affinity to a polar region centered on Asp58 of ubiquitin. The A20 zinc finger diaromatic patch mediates E3 ligase activity by directly recruiting a ubiquitin-loaded ubiquitin conjugating enzyme.
From the paper:
Covalent monoubiquitination of proteins is a major regulatory signal in protein trafficking 11. In this process, the C-terminal carboxylate of a single molecule of the highly conserved 76-amino acid protein ubiquitin is covalently linked to a Lys residue in a substrate protein. This reaction is carried out by a series of enzymes known as E1, E2, and E3 12–14. Monoubiquitination of many transmembrane cargo proteins marks them for sorting into endosomal pathways 15–17. Monoubiquitin moieties on these proteins are recognized by specific ubiquitin binding domains in proteins of the trafficking machinery, including UIMs (ubiquitin interacting motifs), CUE (coupling of unfolded protein response to ER associated degradation), UEV (ubiquitin E2 variant) domains, and GAT (GGAs and TOM) domains 18. Furthermore, many trafficking proteins that contain ubiquitin binding domains are themselves monoubiquitinated in a manner that depends on both an E3 ubiquitin ligase and the presence of the binding domain 18. The monoubiquitination of these proteins is thought to regulate their activities.
gpuccio
Entropy at TSZ: 2. Semiosis: no. That’s but anthropomorphism. I have already answered, but I will repeat the essential concepts here, for your convenience. It's no anthropomorphism. It's an objective property of the system. A system is semiotic if it uses a symbolic code. A symbolic code is a consistent and arbitrary mapping of one variable to another one, embedded in the specific and arbitrary configuration of the system. IOWs, if the variable A causes the effect B directly, according to known laws, it's not a symbolic code. But if the variable A has no direct effect on B, but its values are mapped to specific values of B by an arbitrary configuration of the system, that is a symbolic code, and the system is semiotic. As you can see, there is no anthropomorphism here. In the case of the genetic code, the mapping is provided by the 20 aminoacyl-tRNA synthetases, which determine the correspondence between codons and aminoacids. IOWs, the translation is embedded in the system. In the case of the ubiquitin system, the writing is implemented by the E1-E2-E3 system, while the mapping is provided by the ubiquitin binding proteins, which recognize the signal and couple it to a specific outcome. In this case, both the writing and the translation are embedded in the system. I hope that is clear. More in next post. gpuccio
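The "arbitrary mapping" described above can be sketched in a few lines of Python. This is an illustrative toy (not from the thread): the tiny codon table and the `translate` function are my own, standing in for the full genetic code. The point is only that the input–output link lives entirely in the lookup table (the system's configuration), not in any physical necessity of the symbols themselves.

```python
# Toy illustration of a "symbolic code": the dict plays the role of the
# arbitrary mapping (as aminoacyl-tRNA synthetases do for codons -> AAs).
# Swap the dict entries and the same physical input maps to a different
# output — nothing about the codon string itself forces the result.
codon_to_aa = {"AUG": "M", "UUU": "F", "GGU": "G", "UAA": "*"}  # tiny subset

def translate(rna: str) -> str:
    """Read codons three bases at a time and apply the arbitrary mapping."""
    aas = []
    for i in range(0, len(rna) - 2, 3):
        aa = codon_to_aa.get(rna[i:i + 3], "?")
        if aa == "*":  # stop symbol ends translation
            break
        aas.append(aa)
    return "".join(aas)

print(translate("AUGUUUGGUUAA"))  # MFG
```

The same schema would apply to the ubiquitin case: different tag topologies as keys, different cellular outcomes as values, with the "dict" implemented by the ubiquitin-binding proteins.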
Entropy at TSZ: OK, I go on with you, because you seem to be the only one at TSZ who has something understandable to say. I will refer now to your post labeled as: April 2, 2018 at 2:20 pm, where you answer my request for some specific comment.
I did touch at least one, I explained that the semiosis you see is but an anthropomorphism.
Well, I have answered the point of anthropomorphism at #590. I have nothing to add, until you answer my comments about that point.
Because that’s the important problem. You can spend your life calculating information, being very careful about it, making your assumptions explicit, etc, and none of that will solve your main problems.
I was not saying that those problems are not important. Of course they are. I was only saying that there must be order in a discussion. I have debated those points in great detail in previous OPs, and I have pointed to those OPs many times in the present discussion. This OP was on specific biological arguments that try to show a specific application of those concepts, so in a discussion that arises from this OP I would expect some arguments about what this OP says. That's all. Of course, I am happy to discuss the wider aspects of ID, as I have always done. But only if someone enters the discussion with ideas and understandable arguments, and not with mere denial. As long as you stick to ideas, I will answer with ideas.
Then why would you want feedback from anybody “on the other side”? If you’re assuming that you have solved all the philosophical and scientific problems of ID, then you don’t need any feedback on the ubiquitin or any other examples you might want to present. All you need to do is go on and present them, keep feeling astounded about the complexity of it all, and claim that’s evidence for ID. You have no use for my feedback at all.
See above. When I lamented no feedback about the ubiquitin thread, I was lamenting no feedback about the ubiquitin thread. I still have no feedback about that, so I am still lamenting. I have some serious feedback from you (and only you) about the general ID theory. OK, that's fine. I am answering it. Then you answer my questions with three clear "no"s. That's very fine, because you motivate your answers. So, here are my comments:
1. Complex functional information: no. Your definition presumes knowing how many “rocks” can be used as “hatchets” (if I remember correctly). Since you are examining but the one that evolved, you cannot know how many things could have worked just as well. Then, well, you assume quite a bit about the amount of conservation being more than just representative of the amount of divergence, to call it instead “added [functional] information.” (I know that you try and justify the assumption, but no space right now to get there. I can do that later if you’re interested. No point if all I might get is ET making a fool out of himself.)
I am very happy that you have taken time to look at my definition. That is a serious approach to the confrontation. And you are making the only objection that I can expect to my dealing with conserved sequences as a measure of functional information. I will try to sum it up as I see it: there could be other ways to implement that function, so your measure may be an overestimate. I would have many things to say about that, but for the moment I will keep it brief. I am sure that the discussion about this point can go into greater detail, and you know it, because you say: "I know that you try and justify the assumption, but no space right now to get there." I am afraid that we need that space. For the moment, I will only say that the specific solutions that we observe have the obvious feature of extremely high functional complexity, as shown by the conservation of hundreds of AAs and of thousands of functional bits. Therefore, they could never arise from a mechanism (RV + NS) which is at most capable of generating two-bit functions: Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution http://www.genetics.org/content/180/3/1501 (thank you, ET, for providing the link) or, if we want to be generous, a few-bit functions. Must I remind you that bits are an exponential measure? So, the objection "there could be other ways to implement that function" is not valid, for two important reasons: a) You have no empirical evidence that there are other ways; b) Even if there were other ways, the observed way is nonetheless functionally complex. Highly functionally complex. I will make an example to be clearer. Let's go again, for a moment, to Paley's classical example: a watch. So I say: this watch is a designed object, because its functional complexity cannot arise by chance (or by chance + NS, if we are speaking of a biological object).
And you object: your argument is silly, because there are other ways to measure time. For example, an hourglass. Do you really believe that this objection makes sense? More in next post. gpuccio
Bill Cole: "If an AA sequence is preserved historically that is evidence of purifying selection." That's the point, I suppose. If you agree that the sequence is preserved historically in different species and lineages for long evolutionary times, then you are accepting common descent. Most species that we observe today are relatively recent. If you believe that the "right" sequence was infused in each species at its appearance, then you have to determine in some way when that "appearance" took place, IOWs when the "right" sequence was inputted in that species (or line). I don't believe you have any empirical evidence for that. Moreover, for most observable species, the "time from design to present" would be relatively short, maybe less than 100 million years. Therefore, the exposure to neutral variation would be less, and the connection between purifying selection and functional constraints less valid, even if still in some way true. That's the reason, for example, that I stick to old transitions, usually at least the vertebrate transition. Having hundreds of millions of years of exposure to neutral variation is the best foundation to use sequence conservation as a reliable estimator of functional constraint. So, I do believe that the assumption of common descent is very important for my biological arguments. gpuccio
RodW was banned after this thread was started and going strong. So now that he has been banned he has something to say? Really? ET
RodW: You can post at TSZ. I will answer. But tomorrow, I suppose. Now it's late! :) gpuccio
gpuccio A message from Roy W
Someone pass on to gpuccio that I can't post at UD. I’ve been banned. And I have no more email addresses that I can use to start a new account!
bill cole
You're welcome. Of course they will say it doesn't say what it does and it doesn't mean what you think. But it will give YOU something to consider when looking at proteome families. How many specific mutations were required for each different sequence, for example. ET
ET: Thank you for the link. I was thinking to use that paper in my following discussions with Entropy, but I did not remember how to find it! :) gpuccio
The article "Waiting for Two Mutations" puts a huge damper on gene duplication as a blind watchmaker mechanism. That is because a duplicated gene needs a new binding site, needs to end up on the right part of the chromatin to be expressed, and requires numerous specific changes in order to change its spatial configuration. Not to mention that most genes can handle change without affecting the protein they code for. Evolutionists are famous for overlooking the real-world problems their untestable nonsense faces. And that is the reason for the dogma. ET
Entropy: “If it was impossible for nature to put that amount of information together, then it would be impossible for designers to put that amount of information together.”
I remember someone named Keiths, who made similar jaw-dropping arguments. When you point out to him that physical processes on their own cannot produce e.g. a spaceship, he would counter that intelligent design is also unable to do it. Amazing stuff.
Entropy: “How so? Well, in order for designers to put that amount of information together, energy flow is necessary. Putting information together consists on “transforming” energy flow into patterns.”
Hmmm. So?
Entropy: “We don’t produce available energy. We’re completely dependent on nature for that.”
Yes. But why is our specific situation relevant? We are neither the designers of earth's life nor the universe, now are we? Origenes
RodW at TSZ: I have received no email from you. However, I have read your post at TSZ. As already said, I don't want to write at TSZ. I have already explained my reasons, and I don't like to repeat things if not necessary. I can debate from here, but only if some of you come with ideas. I must say that most don't. I have started an apparently good discussion with Entropy, let's see how it goes. You can do the same. If you have ideas, express them. If they are interesting, I will try to answer. You say: "I think it's interesting that both sides here insist that they’ve made devastating points that the other side won't even address. I think the way to discuss this is to dig out every assumption and every idea behind a claim and not to move on till it's settled or at least thoroughly vetted. We might not convince the other side but I think it would be useful laying the arguments out that way." Beautiful. But that requires a serious desire to discuss. It's not impossible. I have had very good discussions with interlocutors like Mike Frank, Piotr and others. I still remember those people with great admiration. Of course, there is a point where we cannot agree. Why? You can think as you like, but I am rather sure of the reason: those who defend neo-darwinism are completely dogmatic about it, and cannot accept the obvious failure of their theory. Unfortunately, the worldview implications overcome the scientific attitude. I always try to stick to facts, but you can see how that is misunderstood by most people on that side. TSZ was better in the past. I miss Elizabeth, too. And Zachriel. We need serious intelligence in the debate, from both parts. So, if you are interested, discuss with me. You will be respected. I will try to understand your ideas, and to answer your comments. But please, express ideas. I will help you with a few suggestions. You could do one of the following three things: a) Comment about my thread: What are the limits of Natural Selection?
An interesting open discussion with Gordon Davisson https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ where I discuss in detail two of the best known examples of NS: penicillin resistance and chloroquine resistance. Do you agree with what I say? Is it wrong? b) Comment about my thread: What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ especially the first table, where I give numbers. Am I right? What do you think? c) Try to answer my challenge. I paste it again here:
Will anyone on the other side answer the following two simple questions? 1) Is there any conceptual reason why we should believe that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases? 2) Is there any evidence from facts that supports the hypothesis that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases?
I think these are relevant facts, and relevant arguments. Much more interesting than trying to imagine what a designer can do or would do, according to our personal prejudices. So, join the biological discussion, please. :) gpuccio
gpuccio ET I posted this at TSZ. Gpuccio, I made an argument here that common descent was not necessary for your hypothesis.
Corneel, No, that is patently false. You are having your cake and eating it too. The “information jumps” that gpuccio introduces in his OP critically rely on the different genes he is comparing being homologs, i.e. on common descent being true. If he is unwilling to defend this, he must also drop that argument. Bill: Gpuccio agrees with you here where I do not. I will address it again with Gpuccio. All you need for information jumps is to show change in species that adds information-rich sequences. How you determine how information-rich they are is based on translated amino acid counts and the required specificity of the AA sequence. If an AA sequence is preserved historically, that is evidence of purifying selection. This, along with the number of AAs in the sequence, tells us the information count in bits. Gpuccio is on very solid ground with his 500-bit limit, as most everyone here would admit that a random process cannot perform this search. If you ask about selection, the proper question is evidence of an interim sequence. The challenge with the ubiquitin system is that it controls cell division rates, which is mission-critical to multicellular life.
bill cole
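The estimate bill cole describes ("the number of AAs in the sequence tells us the information count in bits") can be sketched as a back-of-the-envelope calculation. This is an illustrative sketch under an explicit assumption of my own: that every fully conserved position selects 1 residue out of 20, contributing log2(20) ≈ 4.32 bits; real estimates (e.g. gpuccio's, which use BLAST bit scores) are more nuanced.

```python
import math

# Assumption (mine, for illustration): each position held fixed by
# purifying selection contributes log2(20) bits, since it selects
# 1 of the 20 standard amino acids. This is an upper-bound toy estimate.
BITS_PER_FIXED_AA = math.log2(20)  # ~4.32 bits

def functional_bits(n_conserved_positions: int) -> float:
    """Crude functional-information estimate from conserved AA count."""
    return n_conserved_positions * BITS_PER_FIXED_AA

# Ubiquitin itself: 76 AAs, essentially fully conserved in eukaryotes.
ub_bits = functional_bits(76)
print(round(ub_bits, 1))  # 328.5
```

So a single, fully conserved ubiquitin monomer would sit around 330 bits on this crude count, below a 500-bit threshold on its own; the thread's 500-bit claims concern longer conserved proteins and whole systems.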
And more stupidity from entropy:
ET cannot name a single person capable of designing a living organism, therefore the living organism was designed!
Wrong again, as usual. It means that the intelligent designer of life was not a person alive on this planet. Can you find any evidence that blind and mindless processes can produce the ubiquitin system? No- then stop whining and get to work
I sure can.
I doubt it. But get your research published and maybe someone will care
Ubiquitin systems arising all over the place by processes that have nothing to do with eyes and minds.
Question-begging cowardice. Nicely done ET
Glen Davidson doesn't even know what evidence is. He has got to be the most pathetic debater ever. ET
GlenDavidson:
That’s what has to be discussed. There’s nothing to discuss regarding the BS about functional complexity and “semiosis,” they’re just nonsense that IDists put out without any regard for standards of evidence.
You don't want to discuss functional complexity and semiosis. If I were a mean guy, I would simply say that I don't want to discuss your points, those that in your opinion "have to be discussed". But I am generous (sometimes :) ). So, I will discuss them. In general, I never discuss morphologic arguments, if the molecular mechanism is not known. Because it's at the molecular level that information works. However, it seems that your objection is of the kind: "no designer would do that". I don't agree. The designer of biological objects clearly works under constraint. That has always been one of my constant points. He cannot do all that he likes. He works through common descent, introducing new information when and if possible. He also has to deal with negative constraints: for example, random mutations. What we see is not the result of some omnipotent design from scratch. It is the convergence of design under constraints, maybe even of the action of different designers. It is more like adapting some existing code to new requirements, working on what already exists, and doing the best possible. Then you say: "Bullshit, that’s the whole point, you haven’t made arguments, you’ve made the usual evidence-free IDist claims and built a house of cards on that. That’s the continuing problem." I have made a lot of arguments, biological arguments, detailed arguments, both here and in the other OPs I have quoted. And you have never addressed them. But please, go on that way. So I have one discussant less to be answered. gpuccio
Entropy doesn't even understand what its position claims. Good luck honing your arguments, gpuccio. ET
Entropy at TSZ:
gpuccio, I think I’m going to enjoy our exchange. Only caveat, this week is horrendously busy, which might make my answers come a tad later. But I haven’t forgotten. Deal?
Deal! :) I will take it slow, too. :) gpuccio
gpuccio- I get it. Keep up your good work. They are useful for helping refine arguments. I will give them that. Unfortunately they aren't any good at forming coherent arguments. For example Glen Davidson brings up developmental biology without realizing his position doesn't have anything to account for it. ET
ET: "This is a discussion?" I would say it is. As you can see, some good refinement of the issues has emerged. :) "Sure you are working on your argument which is always good but don’t think that you will convince the TSZ ilk." But my purpose is never to convince. This is about trying to understand truth. Of course the other discussant must express ideas, good or bad that they are, for the discussion to go on. I think Entropy has done that. Others have not. The only bad discussant is one that does not express any personal ideas. gpuccio
Until someone comes up with a way to test Common Descent the concept is not scientific. There aren't any known mechanisms capable of the feat so that would be a major problem. And as I said above we don't even know what makes an organism what it is. Chapter VI “Why is a Fly not a horse?” (same as the book’s title)
”The scientist enjoys a privilege denied the theologian. To any question, even one central to his theories, he may reply “I’m sorry but I do not know.” This is the only honest answer to the question posed by the title of this chapter. We are fully aware of what makes a flower red rather than white, what it is that prevents a dwarf from growing taller, or what goes wrong in a paraplegic or a thalassemic. But the mystery of species eludes us, and we have made no progress beyond what we already have long known, namely, that a kitty is born because its mother was a she-cat that mated with a tom, and that a fly emerges as a fly larva from a fly egg.”
No one knows what determines form ET
Entropy at TSZ: The last point: Semiosis My statement: 3) Semiosis is a feature that by its very form is never found in non-design systems, and clearly points to design. First level:
“Semiosis” is your inability to understand the concept, the problem, and the subjectivity, of anthropomorphism. You look at a system described in human terms (naturally, since it’s humans doing the describing), and take the metaphors and analogies to heart: therefore semiosis! It’s like ET’s problem with “teleology in biology,” which is yet another example of anthropomorphism.
Second level:
3. You might have never thought of the problem of anthropomorphism, or you think that’s an obvious inference, rather than an inclination from the fact that you’re human. That might be your true handicap. I have to warn you though, that convincing someone like me that your anthropomorphisms are anything but, might be a titanic task.
OK, let's start from the titanic task: that's not a problem, because I don't want to convince anyone, least of all "someone like you". :) I just want to clarify my arguments. Then anyone can decide for himself. That's the true aim of a good discussion: to refine and clarify the arguments, not to convince. Let's go to the "problem" of anthropomorphism. You say: "You look at a system described in human terms (naturally, since it’s humans doing the describing)" And here we agree. Of course, you will also agree that all science is about "looking at systems described in human terms". All science is human, as far as I know. So, if we agree on that, let's go on. You go on: "and take the metaphors and analogies to heart: therefore semiosis!" I don't understand what metaphors and analogies you are talking of! A semiotic system is a system which uses some form of symbolic code. A symbolic code is a code where something represents something else by some arbitrary mapping. The genetic code is a symbolic code, because the mapping from codons to AAs is arbitrary. See also my comment #227 to Bob O'H in this other thread: https://uncommondesc.wpengine.com/intelligent-design/how-some-materialists-are-blinded-by-their-faith-commitments/ The ubiquitin code is a symbolic code, because the mapping from different ubiquitin tags to different outcomes is arbitrary. These are objective properties of the systems we are considering, not metaphors or analogies. Yes, they are described in human terms. Like all science. But they are not metaphors or analogies. Your second level does not add anything to your wrong argument about anthropomorphism, so I think we are done here. In next post, I will answer your answers to my previous request. gpuccio
Entropy at TSZ: So, let's go on. Second point: Irreducible complexity My statement: 2) Irreducible complexity multiplies the functional complexity of the individual components of the system. GlenDavidson seems to agree (see #574) What do you have to say? First level:
So this is but insistence on point number 1.
Yes, it is. With exponential increase. Second level:
2. You really think that irreducible complexity adds to the bits “problem.” See above.
It certainly adds. And the bits problem is the fundamental problem, whatever you say. So, that was easy enough. More in next post. gpuccio
Corneel at TSZ: Let's try not to debate just for the sake of it, while we agree. We agree on CD. How I manage my discussion with others who are not sure about that is another matter. I have never been reluctant to defend CD, as you know. The idea that the origin of information remains the main point is true. It is connected with CD, but only in part. My arguments depend on CD. But it is true that if CD were false, then the whole neo-darwinian explanation would be false by default, because it needs CD, while a design hypothesis does not necessarily need it. That's the only "true and reasonable consideration" I was referring to. That said, let's stop this useless discussion. I believe in CD, period. I don't expect to be commended by darwinists for that, but frankly I did not expect to be attacked for not defending CD, when that's the only thing I have ever done here about this issue! You say: "I am very happy that you promote common descent and (a limited form of) natural selection on UD, because if you were to succeed in persuading your ID friends, it would allow us to move on to more interesting discussions than “everything related to evolution must be false”." I agree. That's why I do it. However, let me remind you that many important IDists, like Behe, do believe in CD. And, of course, in a limited form of NS. My only criteria are facts. I only defend ideas that, IMO, are strongly supported by facts. Then you ask: "And I have a question about your plots of human conserved functional information against the estimated time since divergence. You use them to infer that information jumps have taken place wherever there appears to be a steep increase in the bit score, is that correct? My question is: what would a plot look like from a protein neutrally evolving at a constant substitution rate? Could you generate such a plot with simulated data?" I am not sure I understand. Of course we can plot anything. Let's see if I understand.
Let's say that we have a jump of, say, 1 baa (bits per aligned aminoacid) from pre-vertebrates to vertebrates. Of course, that appears like a steep increase, because it is a lot of information (about half the total information of the protein) and the time window is not so big. Now, let's say that the protein is 500 AAs long. Then 1 baa corresponds more or less to 250 new identities in the sequence. I could certainly simulate the appearance of 250 1-AA transitions in the same period. The plot would remain the same. Is that what you mean? Are you proposing that the transition happened, in 30 million years, by 250 single transitions of 1 AA? That would be the neo-darwinist idea, wouldn't it? And of course each single mutation gave a reproductive advantage, increasing the function of that particular protein, isn't it? And so each mutation was fixed (rather quickly, I would say), and completely obliterated the previous state, isn't it? Isn't that the neo-darwinian scenario? So, please, defend that scenario by even a trace of evidence. Or of credibility. Because I can see none. But please, don't repeat the ridiculous claim that: "you have not demonstrated that it is really impossible, so we win". Because that's not how science works. In the meantime, please, could you try to answer my often repeated challenge, that no one has ever tried to answer until now? You will easily understand that it is very relevant to your discussion. I copy it here again:
Will anyone on the other side answer the following two simple questions? 1) Is there any conceptual reason why we should believe that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases? 2) Is there any evidence from facts that supports the hypothesis that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases?
gpuccio
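The arithmetic behind the "1 baa ≈ 250 identities" figure in the reply to Corneel can be made explicit. A minimal sketch, under one stated assumption of my own: that a conserved identity is worth roughly 2 bits in a BLAST-style bit score, which is what makes 500 bits over a 500-AA protein come out to about 250 identities.

```python
# Sketch of the baa-jump arithmetic from the comment above.
# Assumption (mine): ~2 bits per conserved identity, a rough BLAST-style
# figure chosen to be consistent with the "250 new identities" number.
def jump_identities(jump_baa: float, protein_length: int,
                    bits_per_identity: float = 2.0) -> float:
    """Convert a jump in bits-per-aligned-AA to an identity count."""
    total_bits = jump_baa * protein_length  # e.g. 1 baa * 500 AA = 500 bits
    return total_bits / bits_per_identity

print(jump_identities(1.0, 500))  # 250.0
```

Changing the bits-per-identity assumption scales the identity count accordingly; the qualitative point in the comment (many coordinated changes in a bounded time window) does not depend on the exact conversion factor.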
One has to wonder: Is the debate over? We continually see our side having all the arguments and the other side having none. Considering something like the Ubiquitin system, what is the best argument for unguided evolution? I have no idea. In fact, I am not aware of a good argument at all. The only way for them to stay in the discussion — to maintain the illusion that there is an ongoing balanced debate — is by misunderstanding the ID arguments. And surely, they will never stop doing so, because, once they have given up on that tactic, the whole atheistic edifice comes crashing down. Origenes
OMagain:
Well, ET, care to name a single person who needs their designs to be complex multilayered interlocking messes?
No one designed your brain, OM. But I have a change- your brain is simple and not complex. But it is an interlocking mess. I can't name a single person capable of designing a living organism. Can you, OM? Can you find any evidence that blind and mindless processes can produce the ubiquitin system? No- then stop whining and get to work ET
This is a discussion? The other person is just hand-waving and poo-poo'ing. Sure, you are working on your argument, which is always good, but don't think that you will convince the TSZ ilk. They don't understand science and how to assess evidence. And they definitely will never post anything in support of evolutionism. Heck, to them it's all "settled science"- except it isn't settled and it isn't science. The point is they can't even give us something so that we can compare. How cowardly is that? ET
Entropy: I have seen your answers, thank you. That's exactly what I mean by a "discussion". I will complete my comments on your previous post, and then I will answer your answers. Let's go on this way, until it is possible. gpuccio
Entropy: So, let's finally come to the discussion of your points. In your post at TSZ, you comment on my three points at two different levels, so I will join the two levels together, for better clarity. And the first point is: Functional information. My original statement is: 1) Functional complexity beyond some appropriate threshold (500 bits will do in all contexts) clearly allows a design inference. This is an old and fundamental point, I would say the foundation itself of ID. Let's see what you have to say: First level:
As I said, they just point to complexity and think that makes their absurd imaginary friend real. They compound it with misunderstood information theory and misapplied bit scores, but, in the end, it reduces to their inability to understand how could nature do something, therefore god-did-it. Same old god-of-the-gaps in disguise.
Second level:
1. You really think that talking about 500 bits is impressive and beyond nature. I can explain to you why I find that unconvincing. If it was impossible for nature to put that amount of information together, then it would be impossible for designers to put that amount of information together. How so? Well, in order for designers to put that amount of information together, energy flow is necessary. Putting information together consists on “transforming” energy flow into patterns. We don’t produce available energy. We’re completely dependent on nature for that. So, claiming that a designer is necessary to produce “information,” seems a lot like putting the cart before the horse. At the same time, enormous amounts of energy flow transforming into patterns happen all the time regardless of designers. So 500 bits? A joke for natural processes. Natural phenomena have energy flows to spare. So unimaginable, so unmanageable, so out of reach to any designers, that it makes the bits claim pathetic. Ants would be more justified in claiming that all the volcanoes in the planet cannot move as much material as a single ant colony.
Your "first level" is not so interesting. A God-of-the-gaps "argument" again, wholly unsubstantiated. With some mysterious reference to some "absurd imaginary friend", and to "misunderstood information theory" and to "misapplied bit scores" and to "nature". What a mess! I am afraid that you must be more specific. a) Who is the "absurd imaginary friend"? Where did I mention such a concept? b) What is misunderstood in my dealing with information, and with functional information in particular? Please, clarify the correct understanding. c) In what sense are bit scores misapplied in my reasonings? Please, clarify how they should be correctly applied. d) The problem of "nature" will be dealt with in more detail in the following discussion. So, let's go to the second level, where you become a little more specific. "If it was impossible for nature to put that amount of information together, then it would be impossible for designers to put that amount of information together. How so? Well, in order for designers to put that amount of information together, energy flow is necessary. Putting information together consists on “transforming” energy flow into patterns. We don’t produce available energy. We’re completely dependent on nature for that. So, claiming that a designer is necessary to produce “information,” seems a lot like putting the cart before the horse." I can't follow your reasoning. Yes, designers use energy to create patterns. And so? The whole point is that non-design systems cannot create complex functional patterns, whatever the available energy. Conscious understanding and purpose are necessary to "put that amount of information together", as you say. Conscious systems can do that. Non-conscious systems cannot do that, even if the necessary energy is available. Because energy is not all that is needed. Functional information is needed, and that information derives from the subjective experiences of understanding meaning and having purpose.
You say: "At the same time, enormous amounts of energy flow transforming into patterns happen all the time regardless of designers." Patterns maybe, but never functional information. Please give one example where a flow of energy is transformed into more than 500 bits of functional information in a non-conscious system. I am not holding my breath. If you lack a clear definition of functional information, please look at this old OP of mine: Functional information defined https://uncommondesc.wpengine.com/intelligent-design/functional-information-defined/ "So 500 bits? A joke for natural processes." Then show one single example of that. "Natural phenomena have energy flows to spare." Energy, but not functional information. "So unimaginable, so unmanageable, so out of reach to any designers, that it makes the bits claim pathetic." The only pathetic things here are your unsupported statements. Look, just to be simple: a) This comment is, at this point, almost 5000 characters long. That makes a total complexity of about 20000 bits. That means, certainly, more than 500 bits of functional information. To understand why, please read this OP of mine: An attempt at computing dFSCI for English language https://uncommondesc.wpengine.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/ So, this is a clear demonstration that a conscious agent as simple as I am can easily generate more than 500 bits of functional information. b) No non-conscious system has ever been observed to do that. Please, feel free to show counter-examples, if you can. I will not comment about ants and volcanoes, just to be kind. More in next post. gpuccio
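The "almost 5000 characters makes about 20000 bits" estimate in the comment above works out to 4 bits per character. A quick sketch; the 4 bits/character rate is the assumption doing all the work, and is only a rough order-of-magnitude figure for English text, implied by the numbers quoted rather than stated:

```python
# Sketch of the comment's estimate.
# Assumption: each character of English text carries roughly 4 bits
# (the rate implied by 5000 characters -> ~20000 bits).
chars = 5000
bits_per_char = 4

total_complexity_bits = chars * bits_per_char  # 5000 * 4 = 20000 bits
print(total_complexity_bits)  # 20000
```

The comment's point depends on this total sitting well above the 500-bit threshold it proposes, which it does by a wide margin under the stated assumption.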
As predicted OMagain failed to support its claim. It can only think of one system that can produce the ubiquitin system- yet it never says what that is nor provides any evidence for it. The only thing that will ever convince these people of the legitimacy of ID is a meeting with the Intelligent Designer(s). The TSZ ilk are useless, anti-science and angry. keiths has been refuted more times than anyone else ever, and it still prattles on unabated. Not one of them can form a coherent argument. ET
Entropy:
That’s why I explained. If only you had read the whole comment you’d have some idea.
Again you misunderstand me. I just meant that I would answer with explicit references to your points (see the last phrase at #576). I have been delayed by lack of time and by some distractions from your colleagues. gpuccio
OMagain at TSZ: For the generic objections to ID, please see my future comments to Entropy, which are coming (if others like you do not distract me too much). You ask: "Please feel free to go into detail regarding these “severe limits” and how you have determined that they exist at all." I have dedicated two whole OPs and long following discussions to the limits of NS and RV, with a lot of detail. Here they are: What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ And: What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ Please, feel free to read them and to comment. I will answer. gpuccio
bornagain77: Hi BA, Welcome to the discussion! :) gpuccio
as to Glen's claim that IDists suffer from the Dunning–Kruger effect
“In the field of psychology, the Dunning–Kruger effect is a cognitive bias wherein people of low ability suffer from illusory superiority, mistakenly assessing their cognitive ability as greater than it is. The cognitive bias of illusory superiority derives from the metacognitive inability of low-ability persons to recognize their own ineptitude; without the self-awareness of metacognition, low-ability people cannot objectively evaluate their actual competence or incompetence.”
That is an interesting claim to be coming from what is essentially a 'neuronal illusion',,,
The Confidence of Jerry Coyne – January 2014 Excerpt: Well and good. But then halfway through this peroration, we have as an aside the confession that yes, okay, it’s quite possible given materialist premises that “our sense of self is a neuronal illusion.” At which point the entire edifice suddenly looks terribly wobbly — because who, exactly, is doing all of this forging and shaping and purpose-creating if Jerry Coyne, as I understand him (and I assume he understands himself) quite possibly does not actually exist at all? The theme of his argument is the crucial importance of human agency under eliminative materialism, but if under materialist premises the actual agent is quite possibly a fiction, then who exactly is this I who “reads” and “learns” and “teaches,” and why in the universe’s name should my illusory self believe Coyne’s bold proclamation that his illusory self’s purposes are somehow “real” and worthy of devotion and pursuit? (Let alone that they’re morally significant: But more on that below.) http://douthat.blogs.nytimes.com/2014/01/06/the-confidence-of-jerry-coyne/?_php=true&_type=blogs&_r=0
,,, a neuronal illusion who has the illusion of free will,,,
Sam Harris's Free Will: The Medial Pre-Frontal Cortex Did It - Martin Cothran - November 9, 2012 Excerpt: There is something ironic about the position of thinkers like Harris on issues like this: they claim that their position is the result of the irresistible necessity of logic (in fact, they pride themselves on their logic). Their belief is the consequent, in a ground/consequent relation between their evidence and their conclusion. But their very stated position is that any mental state -- including their position on this issue -- is the effect of a physical, not logical cause. By their own logic, it isn't logic that demands their assent to the claim that free will is an illusion, but the prior chemical state of their brains. The only condition under which we could possibly find their argument convincing is if they are not true. The claim that free will is an illusion requires the possibility that minds have the freedom to assent to a logical argument, a freedom denied by the claim itself. It is an assent that must, in order to remain logical and not physiological, presume a perspective outside the physical order. http://www.evolutionnews.org/2012/11/sam_harriss_fre066221.html
,,, a neuronal illusion who has illusory perceptions of reality,,,
Donald Hoffman: Do we see reality as it is? - Video - 9:59 minute mark Quote: “fitness does depend on reality as it is, yes.,,, Fitness is not the same thing as reality as it is, and it is fitness, and not reality as it is, that figures centrally in the equations of evolution. So, in my lab, we have run hundreds of thousands of evolutionary game simulations with lots of different randomly chosen worlds and organisms that compete for resources in those worlds. Some of the organisms see all of the reality. Others see just part of the reality. And some see none of the reality. Only fitness. Who wins? Well I hate to break it to you but perception of reality goes extinct. In almost every simulation, organisms that see none of reality, but are just tuned to fitness, drive to extinction those that perceive reality as it is. So the bottom line is, evolution does not favor veridical, or accurate perceptions. Those (accurate) perceptions of reality go extinct. Now this is a bit stunning. How can it be that not seeing the world accurately gives us a survival advantage?” https://youtu.be/oYp5XuGYqqY?t=601
,,, a neuronal illusion who, since he has no real time empirical evidence substantiating his grandiose claims, must make up illusory "just so stories",,,
Sociobiology: The Art of Story Telling – Stephen Jay Gould – 1978 – New Scientist Excerpt: Rudyard Kipling asked how the leopard got its spots, the rhino its wrinkled skin. He called his answers “Just So stories”. When evolutionists study individual adaptations, when they try to explain form and behaviour by reconstructing history and assessing current utility, they also tell just so stories – and the agent is natural selection. Virtuosity in invention replaces testability as the criterion for acceptance. https://books.google.com/books?id=tRj7EyRFVqYC&pg=PA530
,,,, a neuronal illusion who makes up illusory just so stories with the illusory, and impotent, 'designer substitute' of natural selection,,,,
“Darwinism provided an explanation for the appearance of design, and argued that there is no Designer — or, if you will, the designer is natural selection. If that’s out of the way — if that (natural selection) just does not explain the evidence — then the flip side of that is, well, things appear designed because they are designed.” Richard Sternberg – Living Waters documentary Whale Evolution vs. Population Genetics – Richard Sternberg and Paul Nelson – (excerpt from Living Waters video) https://www.youtube.com/watch?v=0csd3M4bc0Q
,,, to 'explain away' the appearance (illusion) of design,,
"Organisms appear as if they had been designed to perform in an astonishingly efficient way, and the human mind therefore finds it hard to accept that there need be no Designer to achieve this" Francis Crick - What Mad Pursuit - p. 30 “Biologists must constantly keep in mind that what they see was not designed, but rather evolved.” Francis Crick – What Mad Pursuit - p. 138 (1990) living organisms "appear to have been carefully and artfully designed" Richard C. Lewontin - Adaptation,” Scientific American, and Scientific American book 'Evolution' (September 1978)
,,, a neuronal illusion who must make up illusory meanings and purposes for his life since the reality of the nihilism inherent in his atheistic worldview is too much to bear,,,
Do atheists find meaning in life from inventing fairy tales? - March 2018 Excerpt: The survey admitted the meaning that atheists and non-religious people found in their lives is entirely self-invented. According to the survey, they embraced the position: “Life is only meaningful if you provide the meaning yourself.” https://uncommondesc.wpengine.com/culture/do-atheists-find-meaning-in-life-from-inventing-fairy-tales/
Other than all that I guess Glen may have a point that this 'low ability' person finds his Darwinian worldview to be completely insane. But at least this 'low ability' person has not 'lost his mind' and is thus still a real person, and is not a neuronal illusion who is under the delusion that he is mentally superior to real people.
"It is not enough to say that design is a more likely scenario to explain a world full of well-designed things. Once you allow the intellect to consider that an elaborate organism with trillions of microscopic interactive components can be an accident...you have essentially lost your mind." Jay Homnick - senior editor of The American Spectator - 2005
Verse:
Romans 1:22-23 Claiming to be wise, they became fools, and exchanged the glory of the immortal God for images resembling mortal man and birds and animals and creeping things.
bornagain77
Corneel at TSZ: Before going on with Entropy, I would like to give a couple of quick answers to your two posts.
The point where some non-universal descent could be found is in the last universal common ancestor. Sure thing, pal.
The idea that LUCA could have been not one organism, but a population of organisms, is not mine, but has been debated in the literature. From the Wikipedia "Last Universal Common Ancestor" page:
In 1998, Carl Woese proposed (1) that no individual organism can be considered a LUCA, and (2) that the genetic heritage of all modern organisms derived through horizontal gene transfer among an ancient community of organisms.[31] While the results described by the later papers Theobald (2010) and Saey (2010) demonstrate the existence of a single LUCA, the argument in Woese (1998) can still be applied to Ur-organisms. At the beginnings of life, ancestry was not as linear as it is today because the genetic code took time to evolve.
Theobald disagrees:
In 2010, based on "the vast array of molecular sequences now available from all domains of life,"[29] a formal test of universal common ancestry was published.[1] The formal test favored the existence of a universal common ancestor over a wide class of alternative hypotheses that included horizontal gene transfer. While the formal test overwhelmingly favored the existence of a single LUCA, this does not imply that the LUCA was ever alone. Instead, it was one of several early microbes.[1] However, given that many other nucleotides are possible besides those that are actually used in DNA and RNA today, it is almost certain that all organisms do have a single common ancestor. This is because it is extremely unlikely that organisms which descended from separate incidents where organic molecules initially came together to form cell-like structures would be able to complete a horizontal gene transfer without garbling each other's genes, converting them into noncoding segments. Further, many more amino acids are chemically possible than the twenty found in modern protein molecules. These lines of chemical evidence, taken into account for the formal statistical test by Theobald (2010), point to a single cell having been the LUCA in that, although other early microbes probably existed, only the LUCA's descendents survived beyond the Paleoarchean Era.[30] With a common framework in the AT/GC rule and the standard twenty amino acids, horizontal gene transfer would have been feasible and could have been very common later on among the progeny of that single cell.
I have no special preference about LUCA being one organism or a pool of organisms. I was just mentioning that both theories exist in the scientific literature. Then you say:
No, that is patently false. You are having your cake and eating it too. The “information jumps” that gpuccio introduces in his OP critically rely on the different genes he is comparing being homologs, i.e. on common descent being true. If he is unwilling to defend this, he must also drop that argument.
This is really funny! It is absolutely true that my argument here relies on common descent. I have clarified that I believe in common descent, and that I assume it for my biological reasonings. But there is more. I have defended Common Descent in detail and with the best arguments that I can think of. See my comments here, #525, 526, 529, 534, 538 and 546. What can I do more than that? If others, like Bill Cole, still have doubts, I can only respect their opinion, which is what I do with everyone after having clarified what I think. I have also declared that I keep an open mind, which is IMO a very good attitude in all cases. But I have always said that I believe in CD, universal or not, and I have always explicitly defended CD here, in detail, and always by the same argument (the pattern of Ks). So, how can you say that I am "unwilling to defend this"? I certainly believe that "explaining new genetic information" is the really important thing. And I use CD to demonstrate that only design can explain it. But it is also true that, if CD were not true (just a mental hypothesis, beware!) then the only explanation for the homologies in proteins would be common design. That's not what I believe, but it is a true and reasonable consideration. gpuccio
Entropy (and others) at TSZ: Before going to your arguments, I would like to clarify an important aspect. My OP here, and almost all the following discussion, is about specific biological issues, with the purpose of showing that: The ubiquitin system is a biological system that exhibits huge amounts of complex functional information, semiosis and irreducible complexity. You, like all your colleagues, have not touched that point in any way, as far as I can understand. You have rather repeatedly discussed the more general issue: Are functional complexity, irreducible complexity and semiosis markers of design? Which is a completely different issue, which I took for granted in the present OP, having discussed it many times and in great detail previously, even with you TSZ guys. OK, I will discuss it again here. But before that, I would like to ask you an explicit question about what you did not touch: Do you agree that my arguments here about the biology of the ubiquitin system do show that it is a system that exhibits huge amounts of complex functional information, semiosis and irreducible complexity? A simple "yes" will do. :) Or a simple no, but possibly accompanied by some real arguments to explain why it is no, for you. OK, I will not wait for your answer (but I would definitely appreciate an answer). So, I will go on to comment on your points. In next post. gpuccio
Entropy (and others) at TSZ: OK, let's come to your arguments. Because you have expressed some arguments, and in an understandable pattern, and I must commend you for that, because none of your "colleagues" seems to have even tried to do that. You say it yourself:
I don’t know about others here, but I’m interested. Not in the way you’d wish though, since I understand the problems with those arguments. I’m interested more in the sense of wondering if you’d understand why I’m not impressed. I understand why you’re impressed though:
"I'm interested." That's very, very good. Interest is the foundation for a good discussion. "Not in the way you’d wish though, since I understand the problems with those arguments." But you misunderstand me here. If you were interested just because you agree with me, I would appreciate it, but that would add nothing to the discussion. What I really need, and wish, and hope, is someone who is interested but "understands the problems" with my arguments. Or at least believes so. And who has the goodwill to make those "problems" explicit. That can certainly lead to a good discussion, and that's all that I wish. "I’m interested more in the sense of wondering if you’d understand why I’m not impressed." I think I do, but it will certainly help to look at your explanations, rather than simply imagine! :) "I understand why you’re impressed though:" That's good. Understanding is one thing, agreeing another one. I don't want to convince anyone, but understanding is certainly a precious achievement. More in next post. gpuccio
Entropy (and others) at TSZ: OK, I have nothing to say about GlenDavidson's "argument", because it is not an argument at all.
They really do learn very bad habits of thinking from their pseudoscience. In practice, ID is little more than a means of inducing the Dunning-Kruger effect for the sake of belief in ID.
For those who don't know it, here is a definition of the Dunning-Kruger effect, from Wikipedia: "In the field of psychology, the Dunning–Kruger effect is a cognitive bias wherein people of low ability suffer from illusory superiority, mistakenly assessing their cognitive ability as greater than it is. The cognitive bias of illusory superiority derives from the metacognitive inability of low-ability persons to recognize their own ineptitude; without the self-awareness of metacognition, low-ability people cannot objectively evaluate their actual competence or incompetence." It's really strange that GlenDavidson says that, because I have made my biological arguments very explicit and everyone can check easily if what I have said is true or not. Moreover, serious commenters from the other side, like Bob O'H, when invited to comment about them, have declined because molecular biology was not their specialty. Moreover, Arthur Hunt, who is certainly a competent molecular biologist, has commented at my spliceosome thread, but apparently he has not explicitly denied anything that I was saying. He has just very reasonably pointed at some literature, which I have commented upon, and then promised to post about some aspects of his work that apparently denied the logic used in my thread. But he has never done that. My point is: in the presence of explicit arguments that can be easily checked, serious commenters will come and debate explicitly, or just avoid it if they feel that they don't understand the arguments well. "They really do learn very bad habits of thinking from their pseudoscience" does not sound like that. However, I will quote another statement by GlenDavidson:
We’re not interested in such delusions, true. But, more importantly, they’re not arguments, just IDist wishful thinking (OK, #2 is true, but banal).
#2 is this statement of mine: 2) Irreducible complexity multiplies the functional complexity of the individual components of the system. Well, admissions from the other side are so rare that we have to treasure them! So, for the future discussion I will use this admission by GlenDavidson that my #2 is true. We will see more in detail if it is banal or not. More in next post (later). gpuccio
Entropy (and others) at TSZ: OK, I had decided not to look at the TSZ thread again, but I suppose that the last interventions there deserve some brief answer. I am grateful to ET for quoting a few statements from there so that I could have some idea of what was happening. I will start from this statement (by Entropy):
It’s interesting that gpuccio was first begging for comments “from the other side,” then he won’t read the comments. Who can understand that kind of mentality?
But it's rather simple, after all. First, I was "begging" (more "hoping", I would say) for comments from the other side, but my hope was that someone of those who post here could intervene. I was not really hoping for another parallel debate with TSZ, because I have done that a couple of times in the past, and it was very tiresome for me, and not really worthwhile in the end. However, when I became aware that a thread had been opened at TSZ about this thread of mine, I accepted to give it a look and to try to give some answers here. After a brief time, as documented in the discussion above, I decided that it was even less worthwhile than in the past, and so I stopped looking at your thread. ET has continued to post some updates here, but I have not really commented on them. The reason why I decided that it was "even less worthwhile than in the past" is simple, too: in the past there were at least a few commenters who tried to make some real arguments about what I said (and, of course, the usual crowd of stupid nonsense). Now it seems that only the usual crowd has remained. In particular, nobody has really tried to comment on the content of this thread (which was, after all, the subject of your thread). Instead, the "best" among you have resorted to general denial of ID as a whole, which was not what I was discussing here. Now, it seems that my summary at #568 of the general principles of ID has caused a couple of you to give some readable answer, so I am trying to give a brief counter-answer. Of course, this is again about ID in general, and not about my arguments in this OP, which nobody has even touched in your thread, as far as I can see. Of course, I am checking your original posts to do that, and not relying on what ET quoted here, because that would not be appropriate. More in next post. gpuccio
responding to gpuccio:
1. You really think that talking about 500 bits is impressive and beyond nature.
Absolutely. If you had any evidence to the contrary you would post it. But you choose to post the following trope:
I can explain to you why I find that unconvincing. If it was impossible for nature to put that amount of information together, then it would be impossible for designers to put that amount of information together. How so? Well, in order for designers to put that amount of information together, energy flow is necessary. Putting information together consists on “transforming” energy flow into patterns. We don’t produce available energy. We’re completely dependent on nature for that. So, claiming that a designer is necessary to produce “information,” seems a lot like putting the cart before the horse.
Utter gibberish. The whole point is that designers can do things with nature, using nature, that nature itself, i.e. operating freely, could not do. Intelligent designers are required to produce specified complexity. Otherwise archaeology, forensic science and SETI are all in a heap of trouble.
2. You really think that irreducible complexity adds to the bits “problem.”
Absolutely. See above
3. You might have never thought of the problem of anthropomorphism, or you think that’s an obvious inference, rather than an inclination from the fact that you’re human. That might be your true handicap. I have to warn you though, that convincing someone like me that your anthropomorphisms are anything but, might be a titanic task.
No one cares about the willfully ignorant. We can only make our case to those who can actually think for themselves and realize there is only one reality to our existence. And your position's sheer dumb luck really isn't an explanation. Yes, sheer dumb luck- it is all about chance events with you. Cosmic collisions, accidental genetic changes- all probability arguments. And nothing close to science. Just think of how many just-so cosmic collisions had to have occurred to give the earth just the right rotational speed to sustain life and give us the axis-stabilizing moon. Sheer dumb luck. ETA this gem:
“Semiosis” is your inability to understand the concept, the problem, and the subjectivity, of anthropomorphism.
The inability to understand the concept is all yours, entropy. You cannot grasp the fact that the genetic code is real and other real codes are used within cells. You don't like it because you know it is absurd to think that nature can produce real codes. ET
1. Nobody has produced a way to demonstrate non-“materialistic” processes. So, of course, anything in science will be about “materialistic” processes.
Oh my. I told you this one is ignorant of what is being debated. Materialistic means blind and mindless- non-telic. And science has told us plenty about telic processes.
2. The philosophically appropriate approach to understand nature is to assume non-telic processes.
Yeah, as much as you can. Nature produces rocks and stones but not Stonehenges.
3. “Stochastic” is not the same as “non-telic.”
Of course it is
Natural phenomena can have directions without being “telic.” Gravitation is a very obvious example.
That doesn't have anything to do with stochastic not being the same as non-telic ET
It doesn't matter cuz you don't know anything and clearly they know betta. :roll: They are wondering why I don't relay their alleged refutations but the only one I can see that attempted one is the first comment about the ubiquitous nature of ubiquitin. Everything else has just been whining and hand-waving- oh and personal attacks, deflections, misconceptions, etc. ET
ET: I quote from my comment #453:
In my view, instead, my argument is that there are three different markers that are linked to a design origin and therefore empirically allow a design inference (that is the basic concept in ID, and I have discussed it many times in all its aspects). Those three features are: a) Functional complexity (the one I usually discuss, and which I have quantitatively assessed many times in detail) b) Semiosis (which has been abundantly discussed by UB) c) Irreducible complexity In my OP I have discussed in detail a specific biological system where all those three aspects are present. Therefore, a system for which a design inference is by far the only reasonable explanation. This is my argument. It is not a god-of-the-gaps argument (whatever you mean by that). It is an empirical and scientific argument.
There is not much to be added. 1) Functional complexity beyond some appropriate threshold (500 bits will do in all contexts) clearly allows a design inference. 2) Irreducible complexity multiplies the functional complexity of the individual components of the system. 3) Semiosis is a feature that by its very form is never found in non-designed systems, and clearly points to design. But I suppose that our friends at TSZ are not interested in those arguments. gpuccio
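As an editorial aside: the 500-bit criterion mentioned above can be made concrete with the standard functional-information formula, I = -log2(fraction of sequences that perform the function). A minimal sketch, with purely hypothetical numbers for the size of the functional target space:

```python
import math

def functional_bits(functional_sequences: float, total_sequences: float) -> float:
    """Functional information as -log2 of the fraction of sequences
    that perform the function (Hazen/Szostak-style measure)."""
    return -math.log2(functional_sequences / total_sequences)

# Hypothetical example: a 150-AA protein domain for which we *assume*
# 1e20 of the 20^150 possible sequences are functional.
bits = functional_bits(1e20, 20.0 ** 150)
print(round(bits, 1))  # about 581.9 bits with these assumed numbers
print(bits > 500)
```

The numbers are illustrative only; the hard empirical problem, of course, is estimating the size of the functional target space for a real protein.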
By the way, you have all of the power to refute our design inference. All you have to do is step up, do the work and demonstrate materialistic, stochastic (non-telic) processes can produce what we say is intelligently designed. Oh, but you say that you don't have to do anything but poo-poo the design inference with your ignorance. So much for science... ET
Entropy was too afraid to join this site. Now I understand why. ET
Do we infer design given any complexity? No. We have made that abundantly clear. Only people who are willfully ignorant make that claim.
If you want to be clear, then be clear, talk about evolution as understood by [most] scientists, or about evolution by natural means, for example.
I am. Most, if not all, evolutionary biologists say that natural selection and evolution proceed via blind and mindless processes. Mayr, who was one of the architects of the modern synthesis, supports that view in "What Evolution Is" and his other books. That is the point of "evolutionism"- evolution by means of blind and mindless processes is an untestable claim. The best evidence for macroevolution doesn't even call upon any mechanism, most likely because of that. So it doesn't have anything to do with religious inclinations that I don't have. It has everything to do with your ignorance of what your position claims. I readily accept science. You don't seem to know what science is. Hopefully you have a towel handy to wipe your spit off of your monitor. ET
One last bit of ignorance from entropy. First it said:
You, however, think that just pointing to complexity will make your absurd imaginary friend into a reality.
To which I responded - No one claims the design inference from mere complexity. So what does entropy say?
I didn’t write “mere complexity.” That was you.
Right, uh-huh. You used the word "complexity" all by itself. And that, my ignorant opponent, is mere complexity. Entropy seems to think it is my fault that I know more about evolutionism than it does. Because I have read more evolutionary literature by more authors, that is somehow a knock against me. Wow. And yes, evolutionism is correct, as it is based on faith and not science. If you had the science you would just post it ET
That was OMagain ET
ET: "The word complex appears 200 times on that page as of that comment. But you want individual components to do simple things and those components acting in tandem doing complex things." Is that by Alan Fox? Is that about the ubiquitin system? Is that about this page? I would say that individual components do very complex things in the ubiquitin system, and those components working in tandem do extremely complex things. OK, that will probably make the body ("complex") count even higher! :) gpuccio
gpuccio- How would you like some whine?
The word complex appears 200 times on that page as of that comment. But you want individual components to do simple things and those components acting in tandem doing complex things.
You design what you need to in order to accomplish what needs to be done. I would rather have a competent multi-tool than have to carry multiple single tools around. But it all depends on the need and the context.
Not a mish mash of multi layered complexity where it means one thing reading forwards and another backwards and upside-down too, as so impresses BA77.
What mish mash? It must make them feel big to say that. But is that an argument? No.
But who designs like that?
Whoever needs to, duh
Only one system that I can think of…..
If you ever find any evidence to support that one system that you can think of by all means present it so we can all have a laugh and then a go at it. Until then all you have is whining and misrepresentation. Congratulations ET
gpuccio, My apologies but I was just showing the desperation in their mindset. And no, they haven't even tried beyond saying that ubiquitin is ubiquitous and therefore evolutionism. You have to remember that Alan Fox thinks that languages evolve without our help. That we define and spell the words has nothing to do with language evolution. :roll: ET
ET: Thank you for the updates. In case some real argument emerges on that side, please let me know. :) gpuccio
Oh my. Entropy is a special case of pathetic. Strange that I don't have any religious inclination just a passion for reality. And it doesn't have any clue as to what Darwin's point was. If evolution is by design then it is very different from what Darwin and his followers (all evolutionary biologists) are talking about. That means one has to be clear about what type of evolution one is talking about. That you are too dim to understand that exposes your dishonesty. And finally there is a huge difference between mere complexity and "complex intricate networks". You are willfully ignorant and it shows. Good luck with that. And good luck trying to demonstrate that the ubiquitin system evolved by means of natural selection, drift or any other blind and mindless processes. And yes that is what evolutionism demands. ET
Too funny, "entropy" is now in full meltdown mode. It thinks that just because Darwin didn't use the words blind and mindless processes that he wasn't talking about them! Earth to entropy- that is all Darwin talked about. Natural selection is both blind and mindless. Darwin sought to remove teleology from biology, ie design without a designer. Read Ernst Mayr and buy a vowel. Then it sez:
I have enough knowledge and intellectual honesty to notice the profound problems with “intelligent design
You don't have any knowledge nor intellectual honesty. You don't even know what is being debated. ID is NOT anti-evolution. Clearly you are just an ignorant troll who has found a happy home among other ignorant trolls.
You, however, think that just pointing to complexity will make your absurd imaginary friend into a reality.
Ignorance and dishonesty. No one claims the design inference from mere complexity. Grow up ET
DATCG and all: New stuff about an old friend, p97 (see comments 240 - 244). This is as fresh as one can imagine (March 29): AP-SWATH reveals direct involvement of VCP/p97 in integrated stress response signaling through facilitating CReP/PPP1R15B degradation. http://www.mcponline.org/content/early/2018/03/29/mcp.RA117.000471.full.pdf
Abstract The ubiquitin-directed AAA-ATPase VCP/p97 facilitates degradation of damaged or misfolded proteins in diverse cellular stress response pathways. Resolving the complexity of its interactions with partner and substrate proteins, and understanding its links to stress signaling is therefore a major challenge. Here, we used affinity-purification SWATH mass spectrometry (AP-SWATH) to identify proteins that specifically interact with the substrate-trapping mutant, p97-E578Q. AP-SWATH identified differential interactions over a large detection range from abundant p97 cofactors to pathway-specific partners and individual ligases such as RNF185 and MUL1 that were trapped in p97-E578Q complexes. In addition, we identified various substrate proteins and candidates including the PP1 regulator CReP/PPP1R15B that dephosphorylates eIF2α and thus counteracts attenuation of translation by stress-kinases. We provide evidence that p97 with its Ufd1-Npl4 adapter ensures rapid constitutive turnover and balanced levels of CReP in unperturbed cells. Moreover, we show that p97-mediated degradation, together with a reduction in CReP synthesis, is essential for timely stress-induced reduction of CReP levels and, consequently, for robust eIF2α phosphorylation to enforce the stress response. Thus, our results demonstrate that p97 not only facilitates bulk degradation of misfolded proteins upon stress, but also directly modulates the integrated stress response at the level of signaling.
(Emphasis mine.) And from the conclusions:
Intriguingly, CReP degradation is triggered by the SCF-β-TrCP ubiquitin ligase complex (54-57) as is degradation of two other p97-substrates, IκBα and CDC25A (58,59). This reveals how p97 function is intertwined with stress signaling. DNA-damage-induced degradation of CDC25A halts cell cycle progression, while CReP degradation is part of the integrated stress response that governs global protein synthesis through regulation of eIF2α phosphorylation. p97 has therefore an unanticipated dual role in maintaining cellular homeostasis (see model Fig. 5F).
(Emphasis mine) Dual or multiple roles seem to be the rule in this intricate network of networks! :) gpuccio
bill cole: "I think Michael Behe said it best when he said that common descent in itself is not that important. It's explaining new genetic information that's important." That is certainly true! :) gpuccio
ET
The problem with Common Descent is we don’t actually know what determines form. Until we know that we don’t know what has to change which means it is an untestable concept. Saying it predicts certain patterns is nonsense as the patterns depend on the mechanisms involved.
I agree with gpuccio, this is a very good point. It is hard to argue with guys over there like John and Joe as they understand the phylogenetic details so well. I think Michael Behe said it best when he said that common descent in itself is not that important. It's explaining new genetic information that's important. bill cole
ET: "The problem with Common Descent is we don’t actually know what determines form. Until we know that we don’t know what has to change which means it is an untestable concept. Saying it predicts certain patterns is nonsense as the patterns depend on the mechanisms involved." I agree. The lack of understanding of form control remains a key point. That's why I never reason about form and macroscopic issues, but only about what is understood at the molecular level. That's why evolutionary biologists and molecular biologists are two different populations. Very different indeed. In the end, neo-darwinism flourishes only in the imagination of evolutionary biologists who choose to completely ignore molecular biology. Of course, you can well understand what side I prefer! :) gpuccio
gpuccio
bill cole: Gene loss is not rare in the existing proteomes. It is, however, an exception and not the rule. Not an extremely rare exception, but an exception just the same. Some cases are easier to explain, others are somewhat weird. However, there are probably specific explanations in each case, but of course we don’t know them all. But that happens in all biological fields. The general trend is what counts most, even if the exceptions are interesting.
If information is intentionally added to the genome then it can be intentionally taken away, so this flower works with your CD plus information idea imo. The idea is challenging for the guys who think this comes solely from nature. What's the problem with a couple of hundred randomly lost genes among friends :-) bill cole
The problem with Common Descent is we don't actually know what determines form. Until we know that we don't know what has to change which means it is an untestable concept. Saying it predicts certain patterns is nonsense as the patterns depend on the mechanisms involved. ET
gpuccio- I have always been a fan of nested hierarchies. With biology they fly in the face of universal common descent via gradual processes. And they support a Common Design. They are plans, in a sense. I just posted the bit about nested hierarchies because it shreds what they claim over on TSZ. If Bill wanted to get them with something that is it. ET
gpuccio
However, let’s keep an open mind. My only point is, scientific arguments must be drawn only by facts, never by pre-conceived ideas.
I completely agree and if common descent eventually is validated that's fine. The issue I have is that evolutionary biology uses it as an a priori assumption which can be misleading. Almost all evolutionary papers assume the truth of common descent driven by the blind watchmaker plus other natural mechanisms. So is common descent driven by natural processes currently being used as a hypothesis or a pre-conceived idea? I think your support of common descent with added information is a solid working hypothesis and you have made real arguments for that position. I am interested to see how this shakes out and look forward to ongoing discussions. bill cole
ET: I have never been a fan of nested hierarchies, whatever their use, in favor of CD or against it. I simply find the idea unappealing. As you can see, my argument for CD is completely different. gpuccio
bill cole: Gene loss is not rare in the existing proteomes. It is, however, an exception and not the rule. Not an extremely rare exception, but an exception just the same. Some cases are easier to explain, others are somewhat weird. However, there are probably specific explanations in each case, but of course we don't know them all. But that happens in all biological fields. The general trend is what counts most, even if the exceptions are interesting. gpuccio
bill cole: I have no reasons to force a belief in common descent on anyone. My simple point is that it is IMO the best explanation for what we observe. However, I have made clear that I see common descent as discontinuous as far as new complex functional information is involved: each speciation event, or most of them, is probably an instance of design intervention. The "descent" implies however the physical transmission of the already existing information to the new species or organism, while the added information does not descend at all: it is just added. Of course, some events are certainly more "discontinuous" than others. You point, very correctly, to the emergence of eukaryotes. Does that mean that in those cases there was no physical descent? I don't know. Eukaryotes certainly have huge new structures. But they also use a lot of prokaryotic stuff, for which a lot of evidence of common descent can be found. Moreover, it is extremely likely that the mitochondria derive from bacteria and the chloroplast from cyanobacteria, in both cases through some form of symbiosis and a very strong re-engineering. One point where some non universal descent could be found is LUCA, which could have been a pool of different organisms: bacteria and archaea, and according to some, even the ancestor of eukaryotes. Maybe... However, let's keep an open mind. My only point is, scientific arguments must be drawn only by facts, never by pre-conceived ideas. gpuccio
We had a discussion at TSZ around common design vs common descent where the key argument from the evolution side was the nested hierarchy.
Common Descent does not expect a nested hierarchy: keiths continues to puke all over himself when it comes to nested hierarchies. And even though it has been proven that Doug Theobald is totally wrong keiths continues to reference him on nested hierarchies. Theobald wrongly spews:
The only known processes that specifically generate unique, nested, hierarchical patterns are branching evolutionary processes.
WRONG! Linnaean Taxonomy is an objective nested hierarchy and it doesn't have anything to do with branching evolutionary processes. Corporations can be placed in objective nested hierarchies and again they have nothing to do with branching evolutionary processes. The US Army is a nested hierarchy and it too has nothing to do with branching evolutionary processes. Clearly Theobald is ignorant of nested hierarchies. He goes on to spew:
It would be very problematic if many species were found that combined characteristics of different nested groupings
Umm, TRANSITIONAL FORMS have combined characteristics of different nested groups, Dougy. And your position expects numerous transitional forms. But Doug's biggest mistake was saying that phylogenies form a nested hierarchy- they don't as explained in the Knox paper- “The use of hierarchies as organizational models in systematics”, Biological Journal of the Linnean Society, 63: 1–49, 1998. Even Darwin knew that if you tried to include all of the alleged transitional forms you couldn't form distinguished groups:
Extinction has only defined the groups: it has by no means made them; for if every form which has ever lived on this earth were suddenly to reappear, though it would be quite impossible to give definitions by which each group could be distinguished, still a natural classification, or at least a natural arrangement, would be possible.- Charles Darwin chapter 14
Nested hierarchies require distinct and distinguished groups- again see Linnaean Taxonomy. AND nested hierarchies are artificial constructs. So only by cherry picking would Common Descent yield a nested hierarchy. And I understand why the losers here don't want to discuss it. Zachriel, Alan Fox and John Harshman are also totally ignorant when it comes to nested hierarchies. Now I know why I was banned from the skeptical zone- so I couldn't refute their nonsense to their faces. This way they can continue to ignore reality and prattle on like a bunch of ignoramuses. Sad, really. Here is another hint from the Knox paper:
Regardless of what is eventually learned about the evolution of Clarkia/Heterogaura, the complex nature of evolutionary processes yields patterns that are more complex than can be represented by the simple hierarchical models of either monophyletic systematization or Linnaean classification.
Notice the either/or at the end? Only Linnaean classification is the objective nested hierarchy with respect to biology. And what does UC Berkeley say about Linnaean classification?:
Most of us are accustomed to the Linnaean system of classification that assigns every organism a kingdom, phylum, class, order, family, genus, and species, which, among other possibilities, has the handy mnemonic King Philip Came Over For Good Soup. This system was created long before scientists understood that organisms evolved. Because the Linnaean system is not based on evolution, most biologists are switching to a classification system that reflects the organisms' evolutionary history.
and
*The standard system of classification in which every organism is assigned a kingdom, phylum, class, order, family, genus, and species. This system groups organisms into ever smaller and smaller groups (like a series of boxes within boxes, called a nested hierarchy).
It was based on a common design scheme. Dr Denton destroys the argument in "Evolution: A Theory in Crisis", back in the 1980s. ET
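An editorial aside: the "boxes within boxes" definition quoted from the Berkeley glossary has a precise form. A collection of groups is a nested hierarchy exactly when any two groups are either disjoint or one contains the other. A minimal sketch (the taxa and groupings are illustrative, not a real classification):

```python
def is_nested(groups):
    """Check the 'boxes within boxes' property: every pair of groups is
    either disjoint or one fully contains the other."""
    sets = [frozenset(g) for g in groups]
    for i, a in enumerate(sets):
        for b in sets[i + 1:]:
            if a & b and not (a <= b or b <= a):
                return False
    return True

# Toy Linnaean-style groups, boxes within boxes.
kingdom = {"human", "mouse", "shark", "fly", "bee"}
phylum_chordata = {"human", "mouse", "shark"}
class_mammalia = {"human", "mouse"}
print(is_nested([kingdom, phylum_chordata, class_mammalia]))  # True

# Groups that share members without containment break the property.
print(is_nested([{"human", "shark"}, {"shark", "fly"}]))  # False
```

This is exactly why "transitional forms", which mix characteristics of different groups, are what the thread argues over: overlapping groups without containment violate strict nestedness.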
gpuccio I think you have made a good case for additional common descent but the lines of demarcation are still fuzzy to me. We had a discussion at TSZ around common design vs common descent where the key argument from the evolution side was the nested hierarchy. I think this is a weak argument however some of the guys over there are experts in this area so my opinion may be based on ignorance. I think you would agree that the eukaryotic cell is a separate origin event at this point given the information content of the PRP8 gene and the overall size of the spliceosome along with the nuclear pore complex and chromosome structure. If you agree then we both don't support universal common descent. The question in my mind is how many separate origin events there are in the history of life. The following Venn diagram was introduced by Sal Cordova in the common design vs common descent argument. I named it Sal's Flower :-) http://www.sci-news.com/genetics/article01036.html As you look at the diagram what you see are genes appearing in what appears to be a distant genetic relationship to humans and then reappearing. The evolution guys explain this as gene loss but with no real explanation how genes get lost and found. This flower is not what I would expect if all these species shared a common ancestor. bill cole
asauber: "The implication being there is still the purpose, knowledge, and skill of making designed watches." Very true! :) It's strange how even Dawkins, in choosing the title for his famous book, had to borrow the image of a designer, however unable to see, to lend some credibility to his concept of evolution. Neo-darwinists try to do that all the time: unguided evolution has become a person, a god, an artist, a genius, a saint, whatever. And its imaginary powers are continuously "discovered" by some new scientific paper with unprecedented "amazement", "surprise", "awe" or other mystical experiences. And they are right! Each new thing that is discovered in those scientific papers does deserve amazement, surprise and awe, and has some of the aspects of a mystical experience, in a sense. What a pity that it's not their theory of evolution that did those things! :) gpuccio
Or it's an oxymoron ET
“blind watchmaker”
ET, And if you think about it, a blind watchmaker is still a watchmaker. The implication being there is still the purpose, knowledge, and skill of making designed watches. Andrew asauber
My apologies but this is too good not to post here. TSZ's "entropy" is totally clueless:
All I know about some “blind watchmaker” is that such wording is in the title of a book by Richard Dawkins that I didn’t read (and that I have no intention to read). So you can go to hell with your demands for a defence of a book I didn’t read and I don’t care about.
LoL! It isn't just the name of the book. It is what Darwin proposed and what every evolutionary biologist since accepts. Here, you can end your clueless willful ignorance by reading what Jerry Coyne says: Natural selection and evolution: material, blind, mindless, and purposeless How ignorant are our opponents? Bill Cole- feel free to let entropy know how ignorant it is ET
gpuccio Thanks for the detailed response.
a) There is one original sequence which is incorporated into each new species at the moment of its creation b) That sequence has always the same nucleotides, including the same original synonymous sites c) From the moment the species is created, neutral variation changes synonymous sites according to time, while constrained sites are kept by purifying selection Have I understood well?
I would not say every new species has a unique created genome as that would eliminate all speciation events which is not realistic. I also think you have shown here that this is very unlikely. The time element needs to include generation times as mutation is tied to reproduction. I will spend some time with the work you have done and respond soon. bill cole
bill cole: Unfortunately, it does not work. Just one example. These are Ks values that I have just computed for ATP synthase beta chain, a sequence very conserved all the way from prokaryotes. Now, let's look at the vertebrate lineage: Human - Callorhinchus milii: Ks = 1.34839615 Human - Danio rerio: Ks = 1.19113878 Human - mouse: Ks = 0.40751117 Human - chimp: Ks = 0.01580554 Now, if I understand you well, you are saying that those results, which are very well explained by common descent, could be also explained by something like that: a) There is one original sequence which is incorporated into each new species at the moment of its creation b) That sequence has always the same nucleotides, including the same original synonymous sites c) From the moment the species is created, neutral variation changes synonymous sites according to time, while constrained sites are kept by purifying selection Have I understood well? OK, that does not work. The time of appearance of one final species and the time of divergence of two lineages are two different things. For example, the human lineage diverged from bony fish when tetrapods appeared, maybe 340 million years ago, but the species Danio rerio, which I have used in my computations as a bony fish, is certainly much more recent. It could probably have an age comparable with the age of the mouse. Yet, its Ks computed with the human protein is 1.19113878, while the Ks of the human - mouse comparison is only 0.40751117. It's the time of split between lineages which determines the Ks, not the age of the individual species. Let's look at a confirmation for that. Here are the Ks values for some non vertebrate species. These are all protostomia, so the time of split is the split between protostomia and deuterostomia, which is certainly older than the origin of vertebrates in deuterostomia. Let's say well beyond the 400 million years.
Human - Drosophila: Ks = 1.81009369 Human - Apis mellifera (bee): Ks = 1.83476336 Human - Bombus impatiens (bumblebee): Ks = 1.96105600 Apis mellifera - Bombus impatiens: Ks = 0.41196043 Drosophila - Apis mellifera: Ks = 1.78977470 Now, you can see that the separation of diptera (flies) from hymenoptera (bees, wasps, ants) is rather old, and that can be seen in the high Ks. Instead, the separation between the two bees is rather recent, and the Ks is similar to the one we see between human and mouse. We are at about 100 million years. But the two bees are certainly rather recent species, the oldest bee fossil is at 100 million years. And yet, look at their Ks in the comparison to humans: 1.83476336 and 1.96105600 perfectly comparable to the Ks between Drosophila and humans: 1.81009369 Why? Because those species, even if they are rather recent species, share a very old separation from the human lineage: the protostomes - deuterostomes split. So, I would say that what we see in Ks is the effect of the time of split between lines, and not of the age of the species. gpuccio
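An editorial aside: gpuccio's point, that Ks tracks the time since the lineage split rather than the age of a species, can be illustrated with a toy neutral-substitution simulation. This is a crude sketch with arbitrary rates and time units, not a real Ks estimator (real estimators such as Nei-Gojobori or Yang-Nielsen correct for multiple substitutions at the same site):

```python
import math
import random

def evolve(seq, rate, time, rng):
    """Mutate each site independently with probability 1 - exp(-rate*time),
    replacing it with a different base (a crude neutral-substitution model)."""
    p = 1 - math.exp(-rate * time)
    bases = "ACGT"
    return "".join(
        rng.choice([b for b in bases if b != s]) if rng.random() < p else s
        for s in seq
    )

def divergence(a, b):
    """Fraction of aligned sites that differ between two sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

rng = random.Random(42)
ancestor = "".join(rng.choice("ACGT") for _ in range(20000))

# Two lineages split T time units ago and then evolve independently;
# observed divergence depends on T, no matter how "young" the sampled
# species at the tips are.
for label, T in [("recent split", 0.05), ("old split", 1.0)]:
    a = evolve(ancestor, 1.0, T, rng)
    b = evolve(ancestor, 1.0, T, rng)
    print(label, round(divergence(a, b), 2))
```

Note also that raw pairwise differences saturate at large T rather than growing linearly (this toy allows at most one substitution per site), which is why distant comparisons understate Ks unless a multiple-hit correction is applied.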
gpuccio
No. If the sequence has remained so similar after 400 million years of separation for each line, and yet the synonymous sites change according to the separation times, how do you explain that?
The mutation was contained by purifying selection just as in your hypothesis. Over 400 million years it randomly mutated within that constraint. So we are looking at a sequence that mutated through a very tight constraint for 400 million years and the original "designed" sequence is unknown. Let's say as a working hypothesis that a highly conserved protein x always had the original DNA sequence when a new genome was introduced. What we then are looking at with saturation is comparing an old genome to a new one.
But the synonymous sites (Ks) are completely different between sharks and humans, only partially different between mouse and humans, and very similar between chimp and humans. How do you explain that?
Common descent or genomes of different ages.
So you can say that the homologies are present because of common design. Not convincing, but possible.
In the case of common design they are present because of when the designed information originated and the time the genomes had to mutate.
But why would the designer “design” synonymous sites according to a gradient of similarity corresponding to time?
The working hypothesis is that the design mechanism used a standard blueprint and you are looking at the amount of time the standard sequence has had to mutate along the very constrained mechanism of purifying selection.
Why should the third nucleotides in codons, which are usually in great part not significant to the protein sequence, be so similar between chimps and humans (6 million years), less similar between mouse and humans (100 million years), and completely different between shark and humans (400 million years)?
The amount of time those genomes have been going through reproduction
I say that there is no reasonable explanation for that, if one denies common descent. If you have one, please let me know.
If we restrict ourselves to material explanations then I agree, but we are talking about 1000-bit jumps in AA sequence information so we are probably discussing this outside spacetime anyway :-) bill cole
DATCG at #513, I'm here, I'm here. :) Sorry I am just now seeing your post. The fact is that just after GP started this OP, I was happily following along, but then I got called away and lost ground with the conversation (which was moving very fast). I then got called away again and again, never completely catching back up. At one point my beautiful bride even gave me a quick ride to the hospital to visit their cath lab. :) So, I've downloaded several of the links and book-marked the page to catch back up. I do stand by what I said earlier, this is easily one of the best articles ever on UD. Great job. I'm sure we'll soon be hearing that it all came about by chemical affinities. :) Upright BiPed
GP, your link in #532. Excellent! Upright BiPed
bill cole: "What the data looked like at the origin is unknown to us." No. If the sequence has remained so similar after 400 million years of separation for each line, and yet the synonymous sites change according to the separation times, how do you explain that? Let's say that a very conserved protein has almost the same non-synonymous sequence in cartilaginous fish, in mouse, in chimp and in humans. But the synonymous sites (Ks) are completely different between sharks and humans, only partially different between mouse and humans, and very similar between chimp and humans. How do you explain that? Let's say for a moment that the sequence was "created" from scratch in all species. So you can say that the homologies are present because of common design. Not convincing, but possible. But why would the designer "design" synonymous sites according to a gradient of similarity corresponding to time? Why should the third nucleotides in codons, which are usually in great part not significant to the protein sequence, be so similar between chimps and humans (6 million years), less similar between mouse and humans (100 million years), and completely different between shark and humans (400 million years)? I say that there is no reasonable explanation for that, if one denies common descent. If you have one, please let me know. That common descent is universal or not is quite another problem, and much more difficult. But that common descent is rather pervasive (in the sense of guided common descent) is, IMO, undeniable. Exactly the same type of reasons that makes me believe so strongly in biological design makes me believe also (a little less strongly, maybe) in guided common descent. gpuccio
gpuccio I think you have supported your theory well with your Ka and Ks data as it shows strong purifying selection and therefore strong functional information in the DNA and transcribed proteins. I think I got ahead of myself on support of common descent. While I think common descent is a viable hypothesis I am not sure how pervasive it is across all life forms. The data you're seeing could also be coming from new original life forms or completely de novo genomes. My error was not thinking through that the cartilaginous fish data you have was 400 million years after the fishes' origin. What the data looked like at the origin is unknown to us. It is indeed possible that the human DNA sequences look very much like the fish sequences did 400 million years ago as it could be a very young genome. I eagerly await your thoughts on this. I am looking at start and stop codon data to see if it can shed any light on this situation. bill cole
To all: For those who still have doubts about the symbolic nature of the ubiquitin code, this is a recent paper about its role in DNA repair: Writers, Readers, and Erasers of Histone Ubiquitylation in DNA Double-Strand Break Repair https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4923129/ I would suggest to have a look at Fig. 2, which lists the "Major writers, readers, and erasers of DSB-associated histone ubiquitylation." And this is only for one specific function! :) gpuccio
DATCG: Interesting protein, RNF41. In single celled eukaryotes, we just find a low homology (best hit 97.4 bits), localized at the N terminal RING domain (AAs 12-132). But the protein emerges in a form much more similar to the human sequence already in the first metazoa: Sponges: 327 bits Cnidaria: 426 bits To get very quickly to the extremely high value of 624 bits in cartilaginous fish, as said at #530. IOWs, it is a protein whose main engineering takes place at the origin of metazoa, let's say around the Cambrian explosion. gpuccio
DATCG: This is an interesting methodology to study the connection between different pathways. They show how a reliable interactome can be built for one protein, in this case the RING finger protein 41 E3 ubiquitin ligase, validating 19 different interactions as high-confidence interactors. Of course, those 19 proteins belong to highly different pathways. The paper is paywalled, but the graphical abstract is clear enough: High-Confidence Interactome for RNF41 Built on Multiple Orthogonal Assays https://pubs.acs.org/doi/full/10.1021/acs.jproteome.7b00704
Ring finger protein 41 (RNF41) is an E3 ubiquitin ligase involved in the ubiquitination and degradation of many proteins including ErbB3 receptors, BIRC6, and parkin. Next to this, RNF41 regulates the intracellular trafficking of certain JAK2-associated cytokine receptors by ubiquitinating and suppressing USP8, which, in turn, destabilizes the ESCRT-0 complex. To further elucidate the function of RNF41 we used different orthogonal approaches to reveal the RNF41 protein complex: affinity purification–mass spectrometry, BioID, and Virotrap. We combined these results with known data sets for RNF41 obtained with microarray MAPPIT and Y2H screens. This way, we establish a comprehensive high-resolution interactome network comprising 175 candidate protein partners. To remove potential methodological artifacts from this network, we distilled the data into a high-confidence interactome map by retaining a total of 19 protein hits identified in two or more of the orthogonal methods. AP2S1, a novel RNF41 interaction partner, was selected from this high-confidence interactome for further functional validation. We reveal a role for AP2S1 in leptin and LIF receptor signaling and show that RNF41 stabilizes and relocates AP2S1.
By the way, RNF41 (317 AAs) is in the class of extremely conserved E3 ligases: Cartilaginous fish - humans homology: 624 bits, 92% identities, 97% positives gpuccio
bill cole: Yes, it is correct. If we want to have it a little more technical, I would say: "What gpuccio showed is that the DNA of protein-coding genes is affected, like all DNA, by random neutral variation, which affects all nucleotides that have no functional constraints, and that includes synonymous sites and non-synonymous sites where the substitution of the AA has no relevant effect on the function of the protein. Non-synonymous substitutions which affect protein function, instead, are antagonized by purifying selection and therefore usually don't reach fixation. Over long evolutionary periods, that implies that highly conserved sites are functionally constrained, and therefore the BLAST bitscore between homologues that are separated by a long evolutionary split is a very good approximation of the functional information in that protein before the split which is conserved after the split. Information jumps (the appearance of huge amounts of functional information in some definite time window, which was not present before and will be conserved after) can only be interpreted as an addition of functional information to the previously existing genome, IOWs an intervention of engineering. So, genomes appear to be engineered throughout evolution by discrete additions of specific new functional information, which is added to the previously existing sequences, while the basic neutral sequence is retained, including the neutral variation which has already accumulated. As the accumulation of neutral variation at synonymous sites is grossly proportional to the evolutionary distance of the two lineages, it is not credible that genomes are re-written from scratch: those observations support instead an explanation based on directed common descent." gpuccio
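The purifying-selection logic in the comment above can be sketched numerically: under strong functional constraint, non-synonymous changes (Ka) are removed while synonymous changes (Ks) accumulate freely, so Ka/Ks falls far below 1. A minimal illustration (the numbers below are hypothetical, chosen only to show the arithmetic, not taken from any dataset in the thread):

```python
# Hypothetical Ka/Ks illustration of purifying selection.
# Ka = non-synonymous substitutions per non-synonymous site
# Ks = synonymous substitutions per synonymous site
# Ka/Ks << 1 suggests purifying selection; ~1 neutrality; > 1 positive selection.

def ka_ks(ka: float, ks: float) -> float:
    """Ratio of non-synonymous to synonymous substitution rates."""
    if ks == 0:
        raise ValueError("Ks must be non-zero to form the ratio")
    return ka / ks

# A highly conserved protein after a deep split: synonymous sites near
# saturation (Ks ~ 2.7), almost no amino acid change (Ka ~ 0.02).
ratio = ka_ks(0.02, 2.7)
print(f"Ka/Ks = {ratio:.3f}")  # far below 1: strong purifying selection
```

This is only the final ratio; estimating Ka and Ks themselves from codon alignments requires a proper method (e.g. counting synonymous vs non-synonymous sites with a substitution model), which the comment leaves implicit.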
gpuccio Does the following explanation of your argument make sense to you? Any correction would be appreciated.
What gpuccio showed is that DNA was mutating, but only until purifying selection stopped it due to a deleterious mutation caused by an AA substitution. This showed that a gene was mutating randomly over long periods of time, stopped only by purifying selection. Information additions to the genome appear to be based on the current genome and not whole-genome changes per organism. This supports directed common descent.
bill cole
gpuccio This is very powerful data. I am now on board with common descent where information is infused along the way. If completely fresh information were infused, or brand new genomes, I would not expect the data you showed. The extreme PRP8 sensitivity to mutation is very strong evidence that the eukaryotic cell was a separate origin event. More data will help put this fascinating puzzle together. I would expect most nuclear proteins to have extreme mutation sensitivity. It turns out that Axe's data is probably conservative for most nuclear proteins. bill cole
bill cole: I have found my most recent comments about the Ka/Ks ratio. They are in the discussion about my spliceosome OP, here: https://uncommondesc.wpengine.com/intelligent-design/the-spliceosome-a-molecular-machine-that-defies-any-non-design-explanation/ at #317 and 319. gpuccio
bill cole: Yes, I have looked at the DNA sequences exactly for that, to compute the Ka/Ks ratio in pairs of proteins of different evolutionary history. I have also posted my results somewhere, not in an OP, I believe, but in comments, and I really don't remember where. My main reason to do that was to explain why I completely accept the idea of common descent. This is a matter that many ID friends have some confusion about. Many think that the main evidence for common descent comes from the conserved homologies. So they object that those homologies could be explained by common design. Now, that is already a weak argument, because it is certainly more convincing to explain homologies by common descent, but there could be some doubt anyway. Instead, the greatest evidence for common descent comes not from the homologies, but from the differences in homologues. As you say, from the Ks. Because the rates of Ks, even with great variability, are grossly proportional to the evolutionary distance. For example, look here: https://www.ncbi.nlm.nih.gov/books/NBK21946/ at Fig. 26-17, where the data for the beta-globin gene are presented. You can see that the mean rate of Ks (synonymous mutations per synonymous site) is about 0.67 mutations per site per 100 million years. That means that after 400 million years (the range I consider for the cartilaginous fish - human divergence) any protein-coding gene will have undergone almost 3 synonymous mutations per synonymous site. That is more than enough to reach "saturation". IOWs, if we estimate the Ks between species that are distant, say, 50 million years, or 100, or 200, we can see (with many irregularities, of course) the increase of the Ks in relation to the evolutionary distance. My personal computations of the Ks value for a few pairs of proteins, published here somewhere, do confirm that scenario. But at 300 - 400 million years we reach "saturation": the synonymous sites no longer present any detectable homology. 
So, a pair of proteins with 400 million years of separation and a pair with 1 billion years of separation will no longer be distinguishable by their Ks values, which will be at the maximum in both cases. But the gradual increase of Ks over smaller evolutionary distances can be explained, IMO, only by common descent. I am aware of no other viable explanation for that. gpuccio
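The saturation arithmetic quoted in this comment can be checked in a few lines. This is a hedged sketch assuming the cited mean rate (~0.67 synonymous substitutions per site per 100 My, from the beta-globin figure) and a simple Jukes-Cantor correction for multiple hits; real genes vary widely around this rate:

```python
import math

RATE_PER_100_MY = 0.67  # synonymous substitutions per site per 100 My (quoted mean)

def expected_ks(divergence_my: float) -> float:
    """Expected synonymous substitutions per site after a given divergence."""
    return RATE_PER_100_MY * divergence_my / 100.0

def residual_identity(d: float) -> float:
    """Jukes-Cantor: probability a nucleotide site is still identical after
    d expected substitutions per site (the random baseline is 0.25)."""
    return 0.25 + 0.75 * math.exp(-4.0 * d / 3.0)

for my in (6, 100, 400):  # chimp-like, mouse-like, shark-like split times
    ks = expected_ks(my)
    print(f"{my:>3} My: Ks ~ {ks:.2f}, synonymous identity ~ {residual_identity(ks):.2f}")
```

At ~400 My the expected synonymous identity sits near the 25% random baseline, which is the "saturation" described above; at shorter split times the gradient of Ks with divergence time is still visible, which is the core of the argument.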
gpuccio First, I really like the work you have done. I hope some of the folks at TSZ will begin to understand it. Second: Do you ever look at the DNA sequences that translate into the protein structure and how they change? It would be interesting to look at synonymous mutations. This might give us a clue whether common descent was really occurring. If we were seeing lots of synonymous mutations and almost no AA substitutions, then I would think that the hypothesis of speciation is supported and the result is from purifying selection. bill cole
bill cole: Yes, but I would add a couple of clarifications: 2. I compare human proteins with a few selected groups of ancestral animals (those that you see mentioned on the red dotted line in Fig. 5 of this OP: cnidaria, cephalopoda, deuterostomia not vertebrate, cartilaginous fish, bony fish, amphibians, crocodiles, marsupialia, afrotheria). On the x axis you can find the approximate time of split of each group from the human lineage. I have a database with the results of the blast of the whole human proteome with the known proteins in all those groups. I always take the best hit for each protein blast. So, in each blast, it's always a single human protein which is compared to all the known proteins in the chosen group of organisms. That's why I speak of human conserved information. 3. Yes, I assume common descent of proteins. Of course, I don't mean unguided common descent. Common descent is the background on which design interventions take place. It means that the protein sequence is physically passed from species to species, and remains the same except for the working of random mutations on non-functional sites (neutral evolution) or of design interventions on the sequence. The idea is that neutral evolution changes the sequences unless they are functionally constrained (in which case, negative purifying selection guarantees their conservation). 400 million years are more than enough to ensure that any non-functional homology will be erased by neutral evolution. 6. Yes. But I speak of "jumps" only when a big amount of new functional information appears at some step. So, even if some re-engineering of the protein can be seen almost always at almost all steps, real "jumps" are not the rule. But they are not rare at all. Of course, the bigger the jump, the greater the evidence for a design inference. You can find in my OPs examples of many jumps of hundreds of bits, or even of thousands, for one single protein. 
I have focused mainly on the transition to vertebrates, because the greatest engineering of proteins in relation to their human form seems to take place at that time, in those 40 million years. You can find a global evaluation of the total functional information jump from pre-vertebrates to vertebrates (measured using the whole human proteome) here: The amazing level of engineering in the transition to the vertebrate proteome: a global analysis https://uncommondesc.wpengine.com/intelligent-design/the-amazing-level-of-engineering-in-the-transition-to-the-vertebrate-proteome-a-global-analysis/ My final result, for the whole human proteome, is: 1,764,427 bits This result is well detailed in the mentioned OP. Quite amazing, I would say! :) gpuccio
Gpuccio
I hope that is enough as an answer to “Entropy”. Let me know if you are satisfied (he will not be, certainly).
Thank you very much. Entropy's misunderstanding of your thesis is my fault, as my description was incomplete. Your work is quite interesting and I hope it gets a fair evaluation. Let me summarize what I read and you can correct any misunderstandings. 1. You use UniProt to access protein sequences. 2. You compare homologous proteins from an ancestral group of animals. 3. You assume common descent as a working hypothesis. 4. Conserved sequences are used as a measure of function, as they are a strong indicator of purifying selection. 5. You identify sequences conserved over long periods of time and identify when in history they appeared. 6. The point in history they appeared is what you call an "information jump". bill cole
These guys don't even have a mechanism for producing eukaryotes, and that is given starting populations of prokaryotes and archaea. They have no place talking about Universal Common Descent because of the total lack of a mechanism. They need to work on their own claims instead of flailing like losers at ID ET
Entropy is clueless and proud to be willfully ignorant. Entropy couldn't support blind watchmaker evolution if its life depended on it. ET
bill cole: Now, that said, let's go back to the original statement by "Entropy":
Entropy: He cannot know if information has increased or decreased over time unless he had access to all life existing at any given moment. Examining a few organisms, and comparing them to a few other, apparently less complex, ones, and concluding that information has increased, rather than reorganized, is quite a hasty conclusion.
"He cannot know if information has increased or decreased over time unless he had access to all life existing at any given moment." What does that mean? Absolutely nothing! I have made a very clear and explicit reasoning about human conserved functional information, and I have never had any need to "have access to all life existing at any given moment", which seems really a very daunting task, but fortunately not a necessary one! :) I am not interested in saying that "information has increased or decreased over time". I don't know why he says such a thing. I have simply followed the evolutionary history of specific sequence information, found that it appears at a certain time, and inferred that it is functional from its very long conservation. I have no need to "have access to all life existing at any given moment" to say that, luckily. "Examining a few organisms, and comparing them to a few other, apparently less complex, ones, and concluding that information has increased, rather than reorganized, is quite a hasty conclusion." A procedure and a conclusion that I have never used or stated. I have not examined "a few organisms". I have taken the whole human proteome, and blasted it against the proteome of a few representative groups of organisms. So, when I say that the homology of Prickle1 (the whole protein), Q96MT3, has a functional information jump of 689 bits in vertebrates, it just means that if I blast the human protein against all known proteins in metazoa, except vertebrates, the best hit is: 500 bits (Branchiostoma belcheri, a lancelet) while if I blast it against all known proteins in cartilaginous fish, the best hit is: 1189 bits (Callorhinchus milii, a shark) So, those 689 bits of sequence information appear in the precursor of cartilaginous and bony fish, and there is no trace of them before. And they are functional, because after their appearance they are retained up to humans. 
In all this reasoning I am nowhere saying that some organisms are more complex than others. I am not saying that the shark is more complex than the lancelet. I am only saying that the protein in the shark has 689 bits of functional information that are specific to the vertebrate form of the protein, and that will be retained in vertebrates up to humans. Of course, the lancelet can well have a fully functional protein, with hundreds of functional bits specific to its lineage. I hope that is clear. A final note. I have seen that some other commenters (or maybe "Entropy" himself) seem to believe that my reasonings are not valid because I use modern sequences, and have no access to the old ancestors. This is, of course, completely silly. If I find such high sequence homologies between, say, cartilaginous fish and humans (as they are today), after 400+ million years from the pertinent split (the split between cartilaginous fish and bony fish, because the human lineage derives from bony fish), the only evolutionary explanation for that homology is that the sequence that I am observing today was already present in the common precursor of cartilaginous fish and bony fish, more than 400 million years ago. There is no other possible explanation, unless they want to renounce the basic principles of evolutionary biology. So, what we see in modern organisms by that procedure are not modern sequences: they are old sequences. Therefore, all my reasoning is perfectly correct. I hope that is enough as an answer to "Entropy". Let me know if you are satisfied (he will not be, certainly). gpuccio
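The "jump" bookkeeping in this comment reduces to a difference of best-hit bitscores across the groups flanking an evolutionary window. A minimal sketch using the Prickle1 (Q96MT3) numbers quoted above (the function name is illustrative, not from any tool):

```python
def information_jump(best_hit_before: float, best_hit_after: float) -> float:
    """Human-conserved information appearing in an evolutionary window,
    estimated as the difference of best BLAST bitscores against the
    taxonomic groups flanking that window."""
    return best_hit_after - best_hit_before

# Prickle1 best hits quoted in the comment:
#   metazoa excluding vertebrates (Branchiostoma belcheri):  500 bits
#   cartilaginous fish (Callorhinchus milii):               1189 bits
jump = information_jump(500, 1189)
print(f"Vertebrate-window jump: {jump} bits")  # 689
```

The actual bitscores come from BLAST best hits against whole-group protein sets; this sketch only shows how the quoted jump figure is derived from them.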
bill cole: Let's go on: 3) Why human conserved functional information? There is no special reason. Of course we are more interested in functional information in humans. But it's important to understand that my procedure uses the 20,000 human protein sequences as "probes" to measure the evolutionary history of functional sequences. Of course, if we use human proteins as probes, we must use split times from the human lineage in the analysis. But we could use the proteome of the bee, for example, and use split times from the bee lineage, and obtain results that would be specific to the bee lineage. The procedure would be the same. So, just to be clear, there is no assumption at all in my procedure that humans are the final or highest realization of evolutionary history. Not even that they are more functional than C. elegans. Nothing at all like that. This is important for understanding the next point. 4) When I say that human conserved functional information has increased at some node in evolutionary history, I don't mean that the human form of the protein is more functional. I just mean that the specific sequence information that makes the protein functional in humans appears at some point of evolutionary history, and not before. Let's consider my first OPs which used this methodology: Homologies, differences and information jumps https://uncommondesc.wpengine.com/intelligent-design/homologies-differences-and-information-jumps/ which was very successful, and its follow-up: Information jumps again: some more facts, and thoughts, about Prickle 1 and taxonomically restricted genes. which has been much less read and commented, but IMO expresses some very important concepts. 
In these two OPs I discuss an important regulatory protein, Prickle1, and I find that it can be considered as made of two different parts: a domain part, with an evolutionary history of rather gradual increase of the human conserved information, and another part, the non-domain part, where the human conserved information appears practically from nothing in cartilaginous fish. I paste here the main points of the second OP, inviting all interested to read the details in the OP itself:
1) In the first post, I have focused on the human form of the protein, and used its two sequences to measure different levels of homology in metazoa. 2) The blue sequence in humans has been found to be highly conserved in vertebrates (and therefore almost certainly functional), and amazingly restricted to them. 3) But what about other metazoa? The important point is: there is always a “blue sequence” in the Prickle 1 protein, in all taxa. But it is completely different from the blue sequence in vertebrates. 4) The main point of this post is to demonstrate that the blue sequence in Prickle 1 is a good example of a functional sequence which is highly taxonomically restricted.
So, it should be clear that I am not saying that Prickle1 is more functional in humans than, say, in arthropoda. Not at all. I am only saying that the non-domain part of the molecule (the blue part, in my OPs) is different in different groups of organisms, and determines the specificity of the protein function in the respective groups of organisms. In vertebrates, the specific "blue" sequence that we find in humans appears, practically with no prior antecedent, in cartilaginous fish, and already shows an extremely high homology with the sequence as we find it in humans. 5) That's why I can make a few very important statements about the "blue" sequence in Prickle1, and about all the proteins which behave in a similar way: a) A very specific AA sequence appears in the evolutionary window between pre-vertebrates (including the first chordata) and cartilaginous fish (or, more precisely, the common ancestor of cartilaginous and bony fish). IOWs in an evolutionary window of approximately 40 million years. b) That new sequence is conserved for more than 400 million years, up to humans. From that simple fact we can infer that it is functional, and that it is a highly constrained functional sequence. c) We can use the bitscore of the BLAST between the human form and the cartilaginous fish form as a very good approximate measure of the functional complexity of that particular sequence. 6) The conclusion in c), which could seem very strong, is indeed a direct consequence of the basic principles of evolutionary theory itself. 
In fact: a) I am assuming common descent of the protein b) I am considering the time split of different lineages in accord with what is known by evolutionary science c) I am assuming neutral random variation during the common descent for all the parts of the protein that are not functionally constrained d) I am assuming purifying negative selection for all the parts of the protein that are functionally constrained e) I am assuming that a time window of 400+ million years (what we have between cartilaginous fish and the human lineage) is more than enough to destroy any homology in all non-functional sites: that is proved by the simple fact that Ks reaches saturation for such an evolutionary split time. f) Therefore, the sequence information that is conserved between cartilaginous fish and humans is certainly highly functional and highly constrained by that function. IOWs, that sequence information appears in cartilaginous fish and cannot change any more under the effect of neutral variation, because negative purifying selection preserves it. More in next post. gpuccio
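The conservation logic in points c)-f) above can be sketched as a toy alignment check: positions still identical after a deep split (where Ks is saturated) are inferred to be functionally constrained, while unconstrained positions are expected to have drifted. A minimal illustration with made-up aligned fragments (hypothetical sequences, for arithmetic only):

```python
def conserved_positions(seq_a: str, seq_b: str) -> list:
    """Indices identical between two aligned homologues; after a deep
    split, these are inferred to be functionally constrained sites."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    return [i for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a == b]

# Toy aligned fragments (hypothetical, not real homologues)
human = "MQIFVKTLTG"
fish  = "MQIFVATLSG"
hits = conserved_positions(human, fish)
print(f"{len(hits)}/{len(human)} identical positions")  # 8/10
```

Real pipelines do this over full alignments and score it with BLAST bitscores rather than raw identity counts, but the inference step is the same: identity surviving a saturated split implies constraint.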
bill cole: Here we are. At TSZ there is a lot of trash, and I certainly cannot answer everything. So, I will stick for the moment to the comment by Entropy mentioned by you at #514. Here it is:
Entropy: He cannot know if information has increased or decreased over time unless he had access to all life existing at any given moment. Examining a few organisms, and comparing them to a few other, apparently less complex, ones, and concluding that information has increased, rather than reorganized, is quite a hasty conclusion.
What this guy is saying is really unclear. However, it is clear enough to understand that he understands nothing of functional information and of my biological arguments. It's really difficult to decide where to start. 1) In my recent OPs, I have never generically stated that "information has increased" somewhere, sometime. My statements are much more precise. 2) I always refer to functional information. A concept of which I have given a very explicit definition here: Functional information defined https://uncommondesc.wpengine.com/intelligent-design/functional-information-defined/ In brief, functional information is the information necessary to implement some explicitly defined function, and is measured in relation to the defined function. Any function can be defined, and the functional information in some object can be measured as linked to that function. This concept is very different from some generic concept of information, and it is to this concept that we must refer in all my reasonings. If "Entropy" does not understand this simple idea, he is hopeless. By the way, many other discussants at TSZ have clearly shown that they do not understand this simple idea in their recent interventions. But we will stick to "Entropy" for the moment. 2) In all my recent biological OPs, I have never measured a generic form of functional information. I have measured a very specific form: human conserved functional information. I have explained my procedures in some detail here: Bioinformatics tools used in my OPs: some basic information. https://uncommondesc.wpengine.com/intelligent-design/bioinformatics-tools-used-in-my-ops-some-basic-information/ Now, if you look at Fig. 4 and Fig. 5, you can see that the quantity measured (on the y axis) is "Human conserved functional information". The x axis in Fig. 5 reports instead the approximate time of split from the human lineage of the various groups of organisms tested. 
In this particular case, the human conserved functional information is expressed in bits per aminoacid site (baa), but it can also be expressed in absolute BLAST bitscore. OK, more in next post. gpuccio
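The "bits per aminoacid site (baa)" figure mentioned above can, on a plain reading, be taken as the absolute BLAST bitscore divided by the query length; that exact definitional form is an assumption here (the bioinformatics OP linked in the previous comment gives the details). Using the RNF41 numbers quoted elsewhere in this thread:

```python
def bits_per_aa(bitscore: float, query_length: int) -> float:
    """Absolute BLAST bitscore normalized per aminoacid site
    (assumed form of the 'baa' measure)."""
    return bitscore / query_length

# RNF41: 624 bits conserved between cartilaginous fish and humans, 317 AAs
print(f"{bits_per_aa(624, 317):.2f} baa")
```

The normalization only changes the units: the jump arguments work the same whether expressed in absolute bits or in baa, since both are linear in the bitscore.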
bill cole: Wow! I had decided not to look at TSZ after the poor quality of their first comments, but your posts here convinced me to have another try. Well, I must say that I am touched by your commitment to defend my positions, and amazed at the predictable vacuum of their continued "arguments". The most amazing features seem to be: a) How much time and pretended intelligence they are dedicating to a poor guy like me, who apparently does not deserve it, being a complete ignorant of science affected by some ill-defined form of mental disease. Human generosity knows no bounds, especially when it is supported by a skeptical compassion! :) b) The complete absence of any biological argument in favor of neo-darwinism c) The absolute faith in neo-darwinism, in the absence of any produced argument in its favor d) The complete lack of understanding, coupled with tons of misunderstanding, of ID theory in general, and of my arguments in particular. That confirms my prior conviction that it's useless to debate with those people (and I am really sorry for that). I am a little disappointed: in the past there were better discussants at TSZ, but in some way natural selection must have worked hard there to purify fanaticism. I do not have the time to answer your questions now, but I will do that later. Thank you again for your goodwill! :) gpuccio
Gpuccio Here is another comment I made to him.
I have left him a post at UD. I think we are working with different definitions of information and will spin our wheels until we can sync up on a definition.
bill cole
Gpuccio Here is one of my comments and challenges at TSZ. Any thoughts would be appreciated :-)
colewd: ME: His argument is based on information jumps in proteins that correspond with the ages of animals in the fossil record. He is looking at DNA and protein sequences and how information has increased over time. Entropy: He cannot know if information has increased or decreased over time unless he had access to all life existing at any given moment. Examining a few organisms, and comparing them to a few other, apparently less complex, ones, and concluding that information has increased, rather than reorganized, is quite a hasty conclusion.
bill cole
Gpuccio, To review Cbl, BCR and TCR, I came across this paper from 2014. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4111751/ Published online 2014 May 14. doi: 10.4161/cc.29213 E3 ubiquitin ligase Cbl-b in innate and adaptive immunity Qingjun Liu, Hong Zhou, Wallace Y Langdon, and Jian Zhang Abstract
Casitas B-lineage lymphoma proto-oncogene-b (Cbl-b), a RING finger E3 ubiquitin-protein ligase, has been demonstrated to play a crucial role in establishing the threshold for T-cell activation and controlling peripheral T-cell tolerance via multiple mechanisms. Accumulating evidence suggests that Cbl-b also regulates innate immune responses and plays an important role in host defense to pathogens. Understanding the signaling pathways regulated by Cbl-b in innate and adaptive immune cells is therefore essential for efficient manipulation of Cbl-b in emerging immunotherapies for human disorders such as autoimmune diseases, allergic inflammation, infections, and cancer. In this article, we review the latest developments in the molecular structural basis of Cbl-b function, the regulation of Cbl-b expression, the signaling mechanisms of Cbl-b in immune cells, as well as the biological function of Cbl-b in physiological and pathological immune responses in animal models and human diseases.
Introduction
Over the last decade, accumulating evidence suggests that ubiquitination of proteins by E3 ligases is a novel and crucial regulation mechanism in innate and adaptive immunity.1,2 The gene of Casitas B-lineage lymphoma proto-oncogene-b (Cbl-b), an E3 ubiquitin-protein ligase and an adaptor protein, was initially cloned and characterized by Keane et al. in 1995.3 Cbl-b belongs to the Cbl family, which consists of c-Cbl and Cbl-3 in addition to Cbl-b and has a broad spectrum of biological functions. Recent studies using gene-targeting approaches have yielded convincing evidence that Cbl-b negatively regulates the signaling pathways derived from the T-cell receptor (TCR),4,5 B-cell receptor (BCR), CD40,6,7 and Fc-epsilon-R1 (high affinity immunoglobulin epsilon receptor).8 Because of the diversities of substrates of Cbl-b in different cell types, it appears that Cbl-b regulates various signaling pathways in a cell type-dependent manner.
The Cbl family of ubiquitin ligases in mammals share highly conserved regions in their N-terminal halves, which encompass their TKB (protein tyrosine-kinase-binding), linker (L), and RING (really interesting new gene) finger (RF) domains (Fig. 1). The unique feature of the TKB domain is that it recognizes specific substrates of Cbl-b, which is achieved by binding to proteins containing specific phosphorylated tyrosine-containing motifs, such as Syk and Zap-70, and a range of receptor tyrosine kinases.6,13 Interaction of proteins with the TKB domain of Cbl is mediated by 3 distinct subdomains consisting of a 4-helix bundle (4H), a calcium-binding EF hand, and a variant SH2 domain, all 3 of which are functionally required to form a unique PTB (phosphotyrosine-binding) module.14
It's a balancing act between B cells, T cells and degradation. Yet another highly regulated, tightly controlled system of delicate steps that must be maintained as an organized, collective whole. Or the consequences add up to disease or catastrophic failure. Also, it was fun reviewing the Antibody Affinity Maturation as an Engineering Process OP again :) There's a difference in GC B cells, variation, measured selection and Darwinism, but they refused to acknowledge the difference between a tightly controlled and measured system vs RM+NS. And so UB, UB UB, wherefore art thou UB? Answer: Because I am ;-) DATCG
Gpuccio, A bit off-topic, but interesting Engineering and Design. Found this while searching other areas on your original OP and #491 Engineers' Synthetic Immune Organ Produces Antibodies - Cornell
The immune organoid was created in the lab of Ankur Singh, assistant professor of mechanical and aerospace engineering, who applies engineering principles to the study and manipulation of the human immune system. The synthetic organ is bio-inspired by secondary immune organs like the lymph node or spleen. It is made from gelatin-based biomaterials reinforced with nanoparticles and seeded with cells, and it mimics the anatomical micro environment of lymphoid tissue. Like a real organ, the organoid converts B cells – which make antibodies that respond to infectious invaders – into germinal centers, which are clusters of B cells that activate, mature and mutate their antibody genes when the body is under attack.
Cool work, they can control the response and tune it.
The engineers have demonstrated how they can control this immune response in the organ and tune how quickly the B cells proliferate, get activated and change their antibody types. According to their paper, their 3-D organ outperforms existing 2-D cultures and can produce activated B cells up to 100 times faster.
Original published work(Paywall)... https://www.sciencedirect.com/science/article/pii/S0142961215005104 DATCG
Gpuccio @491, Just realized I missed comment 491 previously of yours! I'll give it a read and the previous OP you mention. And btw, the paper in the 2015 OP you reference is now Open Access :) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3947622/ DATCG
DATCG: "Hence, most USPs can be considered nonspecific with regard to the ubiquitin code but specific with respect to their substrates." That makes sense. Erasing the signal probably needs less specificity than building it. However, USPs must be able to recognize the substrate, and probably to receive information about the substrate's status and other variables, because of course their action must be carefully balanced against all other components of the system (and maybe of other systems). gpuccio
Here's the Ubiquitin Code you've posted in the OP for those who may not have accessed it yet in a PDF. For readers, there's some good reading, including sections on the UPS, structure, and writing, reading and erasing the code (housekeeping). The Ubiquitin Code David Komander1 and Michael Rape2 1Division of Protein and Nucleic Acid Chemistry, Medical Research Council Laboratory of Molecular Biology, Cambridge, United Kingdom; 2Department of Molecular and Cell Biology, University of California, Berkeley Besides the sections on writing and reading the code, thought I'd highlight DUBs - Deubiquitinating Enzymes, which cleave ubiquitin from proteins and other molecules. There's more here at wiki: https://en.wikipedia.org/wiki/Deubiquitinating_enzyme There are ~100 DUBs in humans. What ubiquitin does, DUBs undo or modify. Or, specifically, house cleaning. But why house cleaning? In a blind, unguided stuff-happens "mechanism?" 5. ERASING THE CODE
Any useful code should be carefully employed only at times of need. Indeed, to prevent ubiquitylation from being constitutively on, modifications are reversed by DUBs. Human cells contain 55 USPs, 14 ovarian tumor DUBs (OTUs), 10 JAMM family DUBs, 4 ubiquitin C-terminal hydrolases (UCHs) and 4 Josephin domain DUBs (96). To specifically control ubiquitin-dependent signaling, these enzymes have to deal with chains of distinct linkage, topology, and length. 5.1. Housekeeping and Substrate-Specific Deubiquitinating Enzymes Several DUBs, referred to as housekeeping enzymes, play important roles in establishing the ubiquitin code. For example, proteasome-bound DUBs, such as USP14, UCH37/UCHL5, and RPN11/POH1, protect ubiquitin from degradation (100). This process is vital for keeping sufficient levels of free ubiquitin that can be used for chain assembly. Similar functions might be performed by DUBs that interact with ubiquitin-processing complexes, such as the COP9 signalosome (USP15) (101), or the p97 segregase [YOD1 (102), VCIP135 (103), Ataxin-3 (104)]. Another large group of DUBs disassembles chains independently of the linkage, yet these enzymes gain specificity by being targeted to a select set of substrates. These DUBs include most members of the ubiquitin-specific protease (USP) family, which regulate many cellular reactions, including splicing, protein trafficking, or chromatin remodeling. Many USP DUBs are recruited to substrates through interaction domains (96) or adaptor subunits (105). Although a comprehensive analysis has not been reported, most USPs are active against all linkages (22, 32, 35) and also hydrolyze the isopeptide bond between the substrate and the first ubiquitin. An exception from this nonspecificity is CYLD, which prefers Met1- and Lys63-linked chains (35, 98, 106). Hence, most USPs can be considered nonspecific with regard to the ubiquitin code but specific with respect to their substrates.
DATCG
Gpuccio, FYI A previous "Paywall" Paper in your OP is now Open Access :) An Interaction Landscape of Ubiquitin Signaling Downloaded PDF last night, but HTML is available online as well. A few paragraphs after the Intro:
The complexity of ubiquitin signaling is augmented by polyUb chains with distinct topologies. Eight homotypic polyUb linkages are known to exist and are linked via the C terminus of donor ubiquitin and any of the seven lysine residues (Lys6, Lys11, Lys27, Lys29, Lys33, Lys48, and Lys63) or the amino terminal methionine residue (Met1) of the acceptor ubiquitin. Recent studies also revealed the in vivo existence of branched and mixed polyUb chains (Peng et al., 2003, Emmerich et al., 2013, Meyer and Rape, 2014). Another layer of complexity is added by post-translational modifications (PTMs) of ubiquitin, including acetylation and phosphorylation (Herhaus and Dikic, 2015). All of these structurally unique polyUb chains and ubiquitin PTMs make up a “ubiquitin code” that determines the function and fate of protein substrates. How do cells decode this ubiquitin code into proper cellular responses? Recent studies have indicated that members of a protein family, ubiquitin-binding proteins (UBPs), mediate the recognition of ubiquitinated substrates. UBPs contain at least one of 20 ubiquitin-binding domains (UBDs) functioning as a signal adaptor to transmit the signal from ubiquitinated substrates to downstream effectors (Husnjak and Dikic, 2012). Since many UBDs recognize the same hydrophobic binding patch on ubiquitin (Ile44-Leu8-Val70), the nature of UBP selective recognition of different ubiquitin linkages remains elusive. Nevertheless, accumulating evidence suggests that many UBDs selectively bind to particular ubiquitin linkages (Husnjak and Dikic, 2012, Komander and Rape, 2012). Linkage-selective interactions are achieved either by a single UBD that binds to a certain ubiquitin linkage with high affinity or by multiple UBDs that cooperatively bind with high avidity to a specific ubiquitin linkage. For different ubiquitin linkages, the selective recognition by UBDs depends on the spatial distribution of ubiquitin moieties (Husnjak and Dikic, 2012).
In addition, a linker region between ubiquitin moieties can determine ubiquitin linkage-selective interactions, as exemplified by the selective interaction between NEMO and Met1 linkages (Rahighi et al., 2009). Mutagenesis studies have revealed that the selective ubiquitin binding activity of UBPs regulates important cellular functions, as illustrated by several UBDs that are involved in regulating nuclear factor κB (NF-κB) signaling (Husnjak and Dikic, 2012). More importantly, mutations in UBDs of NEMO and ABIN1 have been found in patients with inflammatory diseases (Cohen, 2014). These examples emphasize that studying UBP-ubiquitin interactions on a proteome-wide scale would be of great value to decipher the functions of ubiquitin signaling in health and disease.
Much to read in that one paper alone and follow through on. Also the "Ubiquitin Code" paper in your OP at Code Biology is in PDF format, not paywalled, and Open Access as well. Not posted much since I've been reading several of these papers. There were a few things stated in previous posts by opponents of ID that make little sense today in light of ENCODE and non-Darwinian processes. But will leave that for another day. DATCG
bill cole: Fig. 4 from the "Ubiquitin modifications" paper linked at the previous comment is very good too. gpuccio
bill cole: Hi Bill, welcome here! :) A semiotic structure is any structure whose function includes a consistent use of a symbolic code. IOWs, if we have an arbitrary set of configurations that are consistently mapped to specific outcomes, and the mapping is arbitrary, that is a semiotic system. UB has done a lot of good work on that concept and its application to ID. When we say "arbitrary", we mean that the mapping is not due to any law of nature, but is established in the system, it is generated by the configuration itself of the system. IOWs, the system provides a translation system which recognizes the coded signal and maps it to the outcome. The translation system is independent from the signal and separated from it. Of course, the best known semiotic system in biology is the genetic code. But many other symbolic codes exist, for example the DNA methylation code. The ubiquitin system is another example. As I have argued in the OP, ubiquitin is a single molecule that can assume a lot of different configurations when it is linked to a target molecule. Apart from the obvious specificity linked to what residue is ubiquitinated in the target protein, we have a full range of different signals according to what type of ubiquitination takes place. a) single mono-ubiquitination b) multiple mono-ubiquitination (at different AA sites) c) ubiquitin chain or chains of: c1) different lengths c2) different structures The c2 point provides the greatest diversification of the signal, because a lot of different structures are possible, thanks to the 8 different "switches" at which a new ubiquitin molecule can be added to the previous one. So, we have a number of different signals, or tags, and each of them is linked to a different outcome or set of outcomes. As we have seen in the OP and in the long discussion. See for example here: The emerging complexity of ubiquitin architecture https://academic.oup.com/jb/article/161/2/125/2712554 Fig. 1 And here: Ubiquitin modifications.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4822133/ Fig. 1 and Fig. 2 The ubiquitin code is arbitrary. Even if there is some rare indication that ubiquitination can in some cases contribute to the final outcome because of some configuration change it can induce in the target protein, it is universally recognized that it acts almost exclusively as a tag. IOWs, it is not ubiquitin itself that determines the outcome, but rather the recognition of the specific ubiquitin signal by the translation system. The translation system, as argued in the OP, is implemented by a full range of ubiquitin binding proteins, or ubiquitin binding domains included in bigger structures, like the proteasome. Even in the proteasome, the recognition system is very complex and diversified. See here: Ubiquitin recognition by the proteasome https://academic.oup.com/jb/article/161/2/113/2871240 in particular the sections: "Ubiquitin Receptors in the Proteasome: More than Two" and: "Ubiquitin Signals for the Proteasome: K48 Is Not Everything" gpuccio
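The diversification described in point c2 can be made concrete with a toy count (my own back-of-the-envelope arithmetic, not a figure from the papers cited): if each ubiquitin added to an unbranched chain can be attached through any of the 8 linkage sites (Met1 plus the 7 lysines), then a chain of length n can encode 8^(n-1) distinct linkage sequences, so the signal space grows exponentially with chain length.

```python
# Toy count of unbranched ubiquitin-chain "signals" (illustrative only).
# Assumption: each added ubiquitin attaches via one of 8 linkage sites
# (M1, K6, K11, K27, K29, K33, K48, K63); branched/mixed chains and
# ubiquitin PTMs, which enlarge the space further, are ignored here.
LINKAGE_SITES = 8

def chain_signals(n):
    """Number of distinct linkage sequences for an unbranched chain of n Ub."""
    return LINKAGE_SITES ** (n - 1)

counts = {n: chain_signals(n) for n in range(1, 6)}
print(counts)  # {1: 1, 2: 8, 3: 64, 4: 512, 5: 4096}
```

Even before counting chain lengths, branches, or phosphorylation/acetylation of ubiquitin itself, the sketch shows why a single small protein can carry a rich set of distinguishable tags.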
Gpuccio, can you explain this semiotic structure to me and how you identify it?
The Ubiquitin system is a very important regulation network that shows two different signatures of design: amazing complexity and an articulated semiotic structure.
bill cole
They are a sad lot. I will not bring any more trash back from that site. ET
GPuccio @502 "I would really want that someone from the other side had the courage of addressing the real arguments. Someone who had the clarity of saying: no, you are wrong because our theory can explain the things that you describe and analyze, and I will show you the reasons why." Yes, indeed. If they believe that they are right, why do they not address the arguments? And if they sense that they cannot address the arguments, how can they believe they are right? Origenes
ET, Mung: Frankly, I am tired of those repeated pseudo-philosophical arguments from people who don't even understand the basics of philosophy of science. I would really want that someone from the other side had the courage of addressing the real arguments. Someone who had the clarity of saying: no, you are wrong because our theory can explain the things that you describe and analyze, and I will show you the reasons why. Instead of people who just say that molecular biology is not their specialty. And then go on defending a theory which can be easily falsified by molecular biology. Instead of people who just repeat that even if we presented the best arguments in the world to show that only design can explain facts, they would still reject design as a god-of-the-gaps argument. God-of-the-gaps: what an argument, indeed! If there is anything that they can't explain, and that design can explain, that's a god-of-the-gaps argument, not a scientific reason to prefer the only available explanation. Or, better still: an argument from incredulity! As if being incredulous in front of things that cannot be believed is a crime. I would really like to fight. But how can you fight with people who never address the real issues? Do they admit that RV + NS can never explain the 1.7 million bits of functional information that appear at the vertebrate transition? No. But do they try to explain that simple fact? Or to deny it? No. Because molecular biology is not their specialty. Do they admit that a semiotic system like the ubiquitin system, which controls and regulates the most different cell processes, is a huge problem for their theories? No. But do they try to explain why? No. Not even a word. So, god-of-the-gaps. And the usual elementary school bullying, camouflaged as smart sarcasm. And the usual "group dogma", camouflaged as skepticism. I love intellectual discussion, even intellectual fight. But I am afraid that the "intellectual" thing has been completely lost in this debate.
OK, I apologize for the harsh tone of this post. But when it's necessary, it's necessary. gpuccio
It's the "I don't have an argument" of the gaps argument. Mung
Science is foreign to the TSZ ilk. Forensic science must be a "criminal of the gaps" argument. Archaeology offers an "artisan of the gaps/ intentional agency of the gaps" argument. They call ID an argument from ignorance and yet the ignorance is all theirs. What is their justification for saying blind and mindless processes do it? The design inference is based on our knowledge of cause and effect relationships. Evolutionism is based on ignorance. ET
Oh my, now they are having issues with ID's falsification criteria because it forces them to do some actual work! Science really isn't their cup of anything. ET
Mung at #496: Very good question! OK, here is how I see things. a) Semiosis is an independent indicator of design, because it is a formal feature which, by its very nature, is incompatible with any non-design interpretation. That's because no system that has no understanding of the subjective experience of meaning can really generate a symbolic code. However, even codes have different levels of complexity, and in that sense, the more complex a code is, the stronger is its power as an indicator of design. So, let's say that semiosis has a double aspect, as an indicator of design: a1) A formal aspect, that is the presence of a symbolic code, which is common to all semiotic systems. a2) A quantitative aspect, that is the functional complexity linked to the implementation of the code (which is a specific subset of functional complexity), which can differ from one semiotic system to another. So, all symbolic codes are indicators of design, but the higher their specific functional complexity, the better. b) Functional complexity and irreducible complexity are more connected, and independent from semiosis. A protein can be (and usually is) functionally complex even if its function is not symbolic. The relationship between functional complexity and irreducible complexity is more subtle. Let's say that functional complexity usually refers to individual functional units, while irreducible complexity refers to some set of functional units, each of them functionally complex, which irreducibly cooperate to implement a function. So, let's say that a specific set of E1-E2-E3 enzymes contributes to ubiquitinate some specific target protein. Each of the three enzymes has a "local" function in relation to the ubiquitination process, and a functional complexity which can be measured in relation to that local function.
However, the individual local functions are useless if the whole process is not there, because the true utility of the process is the final ubiquitination of the target protein. And of course we can add the specific deubiquitinating enzyme which contributes to ensure the correct regulation of the target protein, and other possible factors involved (phosphorylation processes, and so on). The simple truth is that if any of those components is lacking, the regulation of the target protein is no longer a regulation. So, the regulation of the target protein is the true function which is useful (and therefore could be in principle the object of NS). So, let's say that we have the functional complexities of the following proteins in relation to their local function (these are just fictional numbers): E1 580 bits E2 600 bits E3 950 bits DUB 730 bits The functional complexity of the whole system, if it is irreducibly complex, will be the product of the corresponding probabilities (that is, the sum of the bit values). In this case, 2860 bits. That is a lot more than the individual functional complexities, because these are exponential values. Of course, some component can be shared between different systems. In the case of ubiquitin, for example, the E1 component is almost always the same. But we have seen that the E2 and E3 components provide great specificity, and can be rather unique for each system, or for a small subset of systems. So, irreducible complexity is a property which enhances exponentially the functional complexity of the individual components. Of course, the presence at the same time of all three features: a) High functional complexity of many individual proteins which b) form an irreducibly complex system which c) works at least in part by a semiotic code certainly adds greatly to the final design inference. That's why the ubiquitin system is such a treasure for ID! :) gpuccio
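The fictional numbers above can be checked with a short sketch (Python purely for illustration): since functional complexity in bits is -log2 of a probability, the probabilities of independent, jointly required components multiply, and their bit values therefore add.

```python
from fractions import Fraction

# The fictional bit values from the comment above.
components = {"E1": 580, "E2": 600, "E3": 950, "DUB": 730}

# Bits add...
total_bits = sum(components.values())

# ...because the underlying probabilities multiply (exact rationals are
# used here, since 2**-2860 underflows ordinary floating point).
total_prob = Fraction(1)
for bits in components.values():
    total_prob *= Fraction(1, 2 ** bits)

print(total_bits)                                   # 2860
print(total_prob == Fraction(1, 2 ** total_bits))   # True
```

The design choice of working in log space (bits) is exactly what makes these joint complexities manageable: astronomically small probabilities become simple sums.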
Mung at #494 and 495: You are a powerful opponent. I cannot try any active defense against you! :) gpuccio
This is only slightly off-topic so I hope you will forgive me. Do you see a) Functional complexity, b) Semiosis, and c) Irreducible complexity as being independent indicators of design such that when all found together they make a stronger cumulative case for design, or do you see them as all three always being present where design is present? Mung
gpuccio:
No active defense. Ever.
It's because you didn't post it at TSZ. Obviously. Mung
gpuccio @484. I see that you failed to answer at all the scientific objections I raised in my post @475. I perfectly understand if you cannot defend your silly "ID theory" and your OP. Your post doesn't intimidate me with all it's fancy words and pictures and any true scientist would just have a good laugh on reading it. Can't wait for the next one! Mung
ET: (quoting Alan Fox at #492)
I should have said “to attack evolutionary theory effectively, you need an alternative”.
According to Popper a theory must be falsifiable to be a scientific theory. Luckily, both neo-darwinism and ID can be falsified, and therefore are scientific theories. Falsification does not need any alternative theory: it can be accomplished by demonstrating that the mechanism on which the theory is built is logically or empirically inconsistent with the facts that the theory pretends to explain. ID can be falsified by showing that non-design systems can generate new original complex functional information. Of course, nobody has ever been able to show that. Neo-darwinism can be falsified by showing that RV + NS cannot empirically explain what we observe. That has been done in many ways. I have summarized those which are IMO the most valid arguments that falsify neo-darwinism in my two OPs, many times linked here: What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ and: What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ I am available to discuss in detail all the arguments presented there. If someone wants to try an active defense of the neo-darwinian theory. Good luck. Proposing an alternative theory is not a falsification of the existing theory. Many theories can compete, if they have not been falsified, and anyone is free to decide which of them is the best explanation. Competition and falsification are two different things, if one wants to keep a correct epistemological approach. gpuccio
Alan doubles down on his ignorance:
I should have said “to attack evolutionary theory effectively, you need an alternative”.
That is also false and in this case moot. There isn't any scientific theory of evolution to replace so no alternative is required. Also evolution by means of intelligent design is being used in genetic algorithms whereas no one uses evolution by means of blind and mindless processes for anything. ET
DATCG: This is specially interesting to me: Cbl Ubiquitin Ligases Control B Cell Exit from the Germinal-Center Reaction http://www.cell.com/immunity/fulltext/S1074-7613(18)30082-7
Summary Selective expansion of high-affinity antigen-specific B cells in germinal centers (GCs) is a key event in antibody affinity maturation. GC B cells with improved affinity can either continue affinity-driven selection or exit the GC to differentiate into plasma cells (PCs) or memory B cells. Here we found that deleting E3 ubiquitin ligases Cbl and Cbl-b (Cbls) in GC B cells resulted in the early exit of high-affinity antigen-specific B cells from the GC reaction and thus impaired clonal expansion. Cbls were highly expressed in GC light zone (LZ) B cells, where they promoted the ubiquitination and degradation of Irf4, a transcription factor facilitating PC fate choice. Strong CD40 and BCR stimulation triggered the Cbl degradation, resulting in increased Irf4 expression and exit from GC affinity selection. Thus, a regulatory cascade that is centered on the Cbl ubiquitin ligases ensures affinity-driven clonal expansion by connecting BCR affinity signals with differentiation programs.
So, ubiquitin and E3 ligases are directly involved in important regulation nodes of the antibody affinity maturation process! :) This is fascinating, because antibody affinity maturation is the best example we have of an embedded engineering process based on bottom-up strategies. I have discussed that scenario from an ID point of view here: Antibody affinity maturation as an engineering process (and other things) https://uncommondesc.wpengine.com/intelligent-design/antibody-affinity-maturation-as-an-engineering-process-and-other-things/ Good to know that ubiquitin has an important role there too! :) gpuccio
George Castillo: "How was the p-value calculated in Figure 4?" Wilcoxon test for two independent samples. I use R for all statistical analyses. gpuccio
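For readers without R: the two-sample Wilcoxon (Mann-Whitney) rank-sum test gpuccio mentions can be sketched as below. This is a minimal normal-approximation version with midranks for ties, not the exact algorithm behind R's wilcox.test (which also offers exact p-values and a continuity correction).

```python
import math

def wilcoxon_rank_sum(x, y):
    """Two-sided Wilcoxon rank-sum test, normal approximation.

    Returns (W, p), where W is the rank sum of the first sample.
    Minimal sketch: no continuity correction, no tie correction of
    the variance, and it assumes the variance is nonzero.
    """
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1          # average rank over a tied run
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = midrank
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])                    # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    z = (w - mean) / math.sqrt(var)
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return w, 2 * (1 - phi)

# Toy data (my own, purely to show the call): two fully separated samples.
w, p = wilcoxon_rank_sum([1.1, 2.3, 3.1], [4.0, 5.2, 6.5])
print(w)  # 6.0
```

For real analyses one would of course use R's wilcox.test or scipy.stats.mannwhitneyu rather than this sketch.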
This is an interesting read, as I work my way through it. How was the p-value calculated in Figure 4? George Castillo
ET: (at #486, quoting Alan Fox) "To attack evolutionary theory, you need an alternative." This is completely false. Of course a theory can be falsified even if there is no alternative available. Science is about the best explanation, but an explanation that does not work is not an explanation at all. The epistemology of Alan Fox is strange indeed! gpuccio
I missed this gem from Alan:
Dembski’s “explanatory filter” was a pretty diagram, entirely useless as a scientific tool.
Oh my- the EF is standard operating procedure for anyone trying to determine the cause of whatever they are investigating. It forces the user to follow Newton's four rules of scientific reasoning. It is useful as a scientific tool- that is for anyone who understands science and investigation. ET
Alan Fox is clearly a fool or a liar. He sez that Dr Behe has been:
Debunked on the bacterial flagellum, debunked on chloroqhine resistance.
Nonsense- pure unadulterated nonsense. No one has ever refuted Dr Behe on any of his claims involving evidence for ID. Then he sez:
To attack evolutionary theory, you need an alternative.
That is just more nonsense. And there isn't any scientific theory of evolution. You don't have any testable hypotheses pertaining to blind and mindless processes. If a theory is shown to be false there doesn't need to be a replacement before you can discard it. So the problem is our opponents are liars who truly believe their lies and no one will ever be able to convince them otherwise. A total waste of time and space ET
Perhaps it's our opponents that are simple and ID and science are too difficult for them. No active defense you say? It's all settled science they say. Simple. ET
ET, Mung: The problem is simple. We in ID know that functional complexity, semiosis and irreducible complexity are reliable markers of design. We know that empirical evidence supports that beyond any possible doubt. I have given my explicit definition of design, and there can be no possible doubt about what I mean by design in my reasonings. I have given my explicit definition of functional complexity and measured it in many contexts, with a methodology which is objective and reproducible, and that I can defend explicitly. There is a very clear definition of semiosis, and UB has written a lot about that. Behe has written clearly about Irreducible complexity. So, most of us in ID agree very well about what these concepts are. And we agree that they allow a safe design inference, if correctly applied. Now, an OP like this (ubiquitin) has not the purpose of putting all that in discussion again. It has the purpose to show a well described and clear example of a system in biology that exhibits huge amounts of: a) Functional complexity b) Semiosis c) Irreducible complexity Now, what one would expect from a commenter on the other side is some possible criticism about my arguments, IOWs some argument that shows that the ubiquitin system does not exhibit one or all of those features, according to the explicit definitions that have been given. Or, alternatively, some recognition that my arguments are correct, and that the ubiquitin system does exhibit the features that I described, but with a reminder that the basic objection remains that those features, for our opponents, do not allow a design detection. That would be a reasonable discussion, about the topic of the thread. Instead, Alan Fox, who recognizes that he did not know well the subject of ubiquitin, seems to criticize me for describing a system which was discovered by others (???), and for suggesting as a "subtext" that the system points to design. 
Then he and his colleagues go on dismissing the basics of ID, without any reference to the issues in this OP. OK guys, we know that you don't accept ID. No need to remind that each time. We have discussed the reasons when possible, and in the end it is clear that there are deep differences in our views about science, about philosophy of science, about scientific methodology, and so on. But really, if you cannot address the specific issues in this topic, if you cannot say if you agree or not that the ubiquitin system shows evidence of functional complexity, semiosis, and irreducible complexity, if you don't even understand what functional complexity or irreducible complexity are (I hope you understand at least what semiosis is), if even if you understood the concepts you would never accept that they are connected to design, if you go on quoting papers that have nothing to do with the issue, only because they include the words "ubiquitin" and "evolution" in their abstract, and so on and so on, then what discussion can we have? None at all. My position is different. I don't reject others' ideas out of prejudice or of vague and wrong ideas about the philosophy of science. I reject neo-darwinism for very precise reasons, and I have dedicated a lot of discussion to express those reasons, including my two recent posts about RV and NS and their limits, which are very detailed in terms of biological arguments. I don't reject neo-darwinism saying that it is a darwin-of-the-gaps theory (although it certainly is). I try to make a specific analysis of what it says, and of the reasons why what it says is wrong. But our interlocutors seem not to be interested even in that. Their discussions are always vague a priori philosophical rejections of ID, whatever its arguments may be. But, strangely, they are never a defense of their own theory: they never really defend neo-darwinism.
So, if I say that RV has severe limitations, I would expect from a convinced neo-darwinist an immediate reaction: no, you are wrong! And I will show you why you are wrong. Instead, nothing. I have published a table with a very generous computation of the probabilistic resources of our biological scenarios. No reaction. Am I wrong? Am I right? That does not seem to interest neo-darwinists. At most, we can expect something of the kind: but you have not demonstrated that what we say is impossible! No active defense. Ever. I have published a whole OP where I analyze in detail the known cases of NS, and I argue very specifically about what NS cannot do. No active defense. Ever. But NS is always invoked when one shows the limitations of RV. And neutral variation is always invoked when one shows the limitations of NS. And natural selection is invoked again when one shows that neutral variation has the same limitations as RV. And so on, and so on. Selectionists become neutralists when it is convenient, and neutralists invoke selection when only that option remains. What if someone just shows the limitations of both RV and NS? No active defense. Ever. After all, their theory is a dogma, and why should one actively defend a dogma? Any falsification of the dogma is, of course, a god-of-the-gaps argument. Because who can exclude that some day, in some place, some explanation compatible with the dogma will be found? No one. After all, it is possible. No active defense. Ever. Faith is more than enough, for those who proudly define themselves "skeptics". gpuccio
And another shameless comment:
I showed you bones that exist in early bird embryos that fuse into fewer bones, over and over again.
Umm development does not = evolution. And you and yours don't have any explanation for developmental biology in the first place. Given starting populations of bacteria you don't have a mechanism capable of producing anything else besides more bacteria. ET
No shame Alan Fox strikes back:
I don’t see much future in a discussion on ID as science. It fails the hypothesis test in not having one. First find your hypothesis, then get back to me!
That has been provided and you just hand-waved it away. And when asked to show the testable hypothesis for evolution by means of blind and mindless processes you failed to deliver. So it appears that you have a bad case of willful ignorance.
The niche has designed a resistant bacterium.
Question begging. Why couldn't it be that bacteria were designed with the ability to adapt and that is exactly what we observe?
There is no theory or hypothesis of ID that is scientific or testable. ID is not science.
Testing Intelligent Design. It's even on TSZ! And guess what? It is more than you and yours have for blind watchmaker evolution. Alan Fox strikes out, again. ET
Glen is incapable of carrying on a discussion. All Glen wants to do in pontificate. ET
gpuccio:
To GlenDavidson, what can I say? Some of his arguments are, again, trivial stereotypes, like those about the “limitations of mindless evolutionary processes”, and so on. The rest I really don’t understand: in particular, the supposed difference between Paley and ID, which would make him “honest”, and IDists “dishonest”.
Someone else recently posted here that Paley's argument was decidedly not by way of analogy. IF that is in fact the case then Glen's objection is horribly misguided. Mung
gpuccio:
I really don’t think that this is worth the while.
Glen loves to get up on his soapbox and preach to the choir and then complain that there is a lack of discussion. Mung
Allan Keith isn't here because this is a scientific evidence thread ET
Mung is channeling his inner Alan Fox... ET
Makes you wonder why "Allan Keith" isn't participating in this thread. Probably afraid of getting banned. Mung
The problem with ubiquitin is just that, it is ubiquitous. It's like finding out that grains of sand can fill in any number of holes and declaring you've discovered a semiotic system, therefore holes filled with sand are designed. But what about the complexity! Well, my measure is 500 grains of sand. 500 grains of functional sand complexity (FSC) is enough to infer design. This is why I just can't take ID arguments seriously. Nice try gpuccio. Mung
The TSZ ilk are clueless. Now we have to know how it was designed and who designed it BEFORE we can infer it was designed. Those people are so anti-science they are a pathetic lot. No amount of evidence will ever convince them. That is because evidence doesn't mean anything to them. They already have their minds made up and won't change until they die. And they will never be able to support their claims. But that doesn't matter to them because evolutionism is being taught to unsuspecting kids. ET
DATCG: By the way, I corrected the link to the second paper referenced at #449, which was wrong. Many thanks to Dionisio for signaling that! :) gpuccio
DATCG at #470: "It appears as Gpuccio stated in #465 “… but only for very simple micro-evolutionary events”. Which are ordinary and not disputed, but often held up as evidence for macro events." (Emphasis mine) Thank you. That's exactly the important point! :) gpuccio
ET:
What’s the range of RV? That would be the $64,000 question.
Not really. It can be rather easily computed, at least as a higher threshold. I have written a whole OP about that (with some detailed discussion following): What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ In brief, if you look at the first table in that OP, you will see that the theoretical limit for our whole planet, computed with extreme generosity, and which defines what is really empirically impossible, is 160 bits for the whole prokaryotic world, much less for other scenarios.
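The 160-bit figure comes from the table in the linked OP. As a back-of-envelope illustration of how such an upper bound on probabilistic resources is computed (the input numbers below are placeholders of my own, chosen only for illustration, not the actual values from that table):

```python
import math

# Rough upper bound on planetary probabilistic resources, in bits.
# All inputs are illustrative placeholders, NOT the values from the
# linked OP: assume ~1e30 prokaryotic cells at any time, ~1e4
# generations per year over 5e9 years, and ~1e-3 mutations per
# genome per generation.
cells = 1e30
generations_per_year = 1e4
years = 5e9
mutations_per_genome_per_generation = 1e-3

# Total number of mutational "attempts" ever made on the planet.
total_mutations = (cells * generations_per_year * years
                   * mutations_per_genome_per_generation)

# Expressed in bits: a functional target requiring more specific
# information than this is out of reach of random variation under
# these assumptions.
upper_bound_bits = math.log2(total_mutations)
print(round(upper_bound_bits, 1))  # 135.2 with these placeholder inputs
```

Even with deliberately generous inputs, the bound lands in the low hundreds of bits, which is the point of the argument.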
Is Dr Behe right and it is very limited?
Yes, Behe is right. It is extremely limited. His empirical threshold of two coordinated mutations is valid in most observable cases, and confirmed by all that is known. However, we have seen that the theoretical higher threshold for empirical impossibility (with great, great generosity in the computation) can reach 37 AAs for the whole system of all prokaryotes on our planet in 5 billion years. OK, let's say that the truth is somewhere in the middle. I would say that something between 3-5 coordinated AAs is probably the empirical threshold, for real cases. That is also supported by Axe's work. The simple truth is that no new function which has a starting complexity higher than a few AAs has any realistic probability of appearing spontaneously (I mean without any design intervention) on our planet. Indeed, the two-AA starting event of chloroquine resistance is still the best documented complex event that I am aware of. I would like to remind here that most proteins are easily beyond that threshold, and that therefore even one single complex protein (practically almost all of them) is safely beyond any realistic power of RV. Almost all proteins have a specific functional information in the range of hundreds or thousands of bits, which corresponds to hundreds of coordinated AAs. Can NS add to that? Yes, but only in a very limited way, and only by tweaking a function that has already appeared and is naturally selectable. I have discussed that in detail here: What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ In the discussion of that thread, I have also analyzed in great detail two of the best known scenarios of NS: a) Simple penicillin resistance b) Chloroquine resistance See for example comments 285 and following, and 311 and following.
In brief, in penicillin resistance the initial starting function has a complexity of 1 AA, and NS can add 3 - 5 further AAs to tweak it. In chloroquine resistance, the initial starting function has a complexity of 2 AAs, and NS can add about 3 further AAs to tweak it. The prerequisite for NS to act is that the initial new function must already be present, and efficient enough to be naturally selected, and that each new single AA variation can increase the already existing new function. That's exactly what happens in those two scenarios, which are the best documented cases of microevolution. And let's remember that they are also cases which have the best setting for NS to act: very high reproduction rates, very high population numbers, and above all an extreme environmental pressure (the antibiotic).
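The conversion between bit thresholds and "coordinated AAs" used in this discussion can be checked directly: a fully specified amino-acid position carries log2(20) bits (20 possible residues). A minimal sketch of the arithmetic:

```python
import math

# Bits per fully specified amino-acid position: log2(20) ~ 4.32.
bits_per_aa = math.log2(20)

# The 160-bit planetary upper bound quoted earlier, in AAs:
max_coordinated_aas = 160 / bits_per_aa
print(round(max_coordinated_aas))  # 37, matching the "37 AAs" figure

# The 500-bit design-inference threshold, in AAs:
print(round(500 / bits_per_aa))  # ~116 fully specified AAs
```

This treats every position as fully constrained, so it is an upper bound on the information per residue; partially constrained positions carry fewer bits.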
Is Dr. Spetner right and most mutations are not random?
I don't know Spetner's work in detail, so I don't know exactly what his point is. But the issue is rather simple after all. Random mutations do exist, and their power to generate new functional information is extremely limited, as said, and can never generate new complex functional information. On the other hand, new complex functional information has appeared all the time in the evolutionary history of our planet. Therefore, it is obvious that what we are observing is the result of designed variation. Tons of it. Of course, guided mutations are the most likely tool to achieve designed variation. The other main possibility is Intelligent Selection, which however has a more limited power. If that's what Spetner means, I fully agree. My preferred scenario, and the one most supported by known facts, is designed variation by guided transposon activity. As I have stated many times. In the end, I would like to copy again here my challenge, which has been offered many times in different threads, and which nobody has ever even started to answer. It is about a fundamental point for the whole NS scenario:
Will anyone on the other side answer the following two simple questions? 1) Is there any conceptual reason why we should believe that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases? 2) Is there any evidence from facts that supports the hypothesis that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases?
gpuccio
#467 ET, I suspect so far from what observable research is showing it might be ... a) limited yes, appears so b) prescribed and reactionary - epigenetics c) I think Spetner's insights are interesting It appears as Gpuccio stated in #465 "... but only for very simple micro-evolutionary events". Which are ordinary and not disputed, but often held up as evidence for macro events. DATCG
Gpuccio @449, Excellent! The first paper details what I suspected. (Correction: not the same paper I thought at first, will read it.) The second paper oooh... will need to read it and the 3rd! "IkB alpha is constantly synthesized and degraded in the cell independently from its link to the NF-kb TF." Yep! It must be readily available for any number of different interactions posted in Comment #447. Great stuff as usual :) Thanks! DATCG
and this from Glen:
You have to make the case for IC being a counter argument to Darwin’s mechanism, not just declare it to be so.
"Darwin's Black Box" by Dr Michael Behe, 1996. It does contain science, reasoning and evidence so it won't be of any interest to you. And they all still ignore the fact that ID is not anti-evolution. ET
No, an “evolutionary pathway” would be an explicit pathway where all the steps are in the range of RV, and each step can be shown to be naturally selectable.
What's the range of RV? That would be the $64,000 question. Is Dr Behe right and it is very limited? Is Dr. Spetner right and most mutations are not random? ET
To the discussants at TSZ: Thank you for the people who have had some kind words for me personally. I appreciate that. To Alan Fox, I have not much to add. He has his ideas, but I don't agree with him. We could engage in a long debate about the foundations of science, of ID and of epistemology, but frankly I don't see the intellectual basis for that, judging from his repeated statements. Just one simple note: if you look at my definition of design (which is in perfect accord with the general use of the word, and is the only one which makes sense for ID), the environment (the niche) cannot design. Indeed I define design as a process where specific forms are first represented subjectively in a consciousness, and then outputted to some material object. You can find that definition, and some more considerations, in my first OP here: Defining design https://uncommondesc.wpengine.com/intelligent-design/defining-design/ Therefore, environment cannot design, because environment is not a conscious agent and has no subjective representations. I also strongly disagree with this statement: "What is there to say about the science? You have to accept the primary research at face value, unless you can repeat experiments or question the methodology or conclusions." Not so. Even if you accept the experiments, the methodology and the conclusions can always be questioned. That is the most important role of scientists: to evaluate critically the methodology and conclusions of what is published. But these are only a few aspects about which I disagree with you. There are many more, even more important. So, Alan, I would say: let's stop it here. My idea was to discuss my ideas about the ubiquitin system, not to face the usual stereotypes about ID. To GlenDavidson, what can I say? Some of his arguments are, again, trivial stereotypes, like those about the "limitations of mindless evolutionary processes", and so on.
The rest I really don't understand: in particular, the supposed difference between Paley and ID, which would make him "honest", and IDists "dishonest". I really don't think that this is worth the while. gpuccio
ET: No, an "evolutionary pathway" would be an explicit pathway where all the steps are in the range of RV, and each step can be shown to be naturally selectable. Those kinds of pathways do exist, but only for very simple microevolutionary events. For example, I have discussed in detail known pathways for simple penicillin resistance and for chloroquine resistance, based on explicit papers from the literature, in my thread about NS: What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ But of course no such pathways exist for new complex functions. If Alan Fox goes on with his "default" idea, I don't know what to say. I have tried to explain why that is wrong in my post #456. The simple point is: if one rejects ID from the beginning, what's the point in detailing arguments in favor of ID, as I have tried to do here? They have produced nothing against my arguments, because they simply can't. So they just reject the whole ID theory! :) gpuccio
This is just sad- from Alan Fox:
I don’t accept the concept , to be honest but there seem two linked refutations: one that IC is not a barrier to evolution (Matzke and the bacterial flagellum, for instance) and two that an irreducibly complex system WRT to living organisms is an incoherent concept.
The concept is accepted by most, Alan. Perhaps you just don't understand it. As for Matzke he never demonstrated blind and mindless process could produce any bacterial flagellum. And the only people who think IC is incoherent are the willfully ignorant. So IC still stands unrefuted. By the way, even if you don't like the concept you still don't have a non-telic mechanism that can produce it.
But the essential IC argument is a false dichotomy. Accepting for the sake of argument there were some “irreducibly complex” system for which no evolutionary pathway could be found is merely a state of ignorance. We have no explanation rather than we default to “Design”. The “Design Inference” is just an empty concept.
The ignorance is all yours, though. The design inference is based on our knowledge of cause and effect relationships. And again your difficulty with the word "default" exposes your agenda of water muddying and not of an interest in a discussion. If the design inference is such an empty concept then why do we have archaeology and forensic science? We have them because it matters to an investigation how something came to be. And we study artifacts differently than we study natural rock formations. Methinks Alan has never conducted any investigation into the root cause of something. ET
This is almost priceless (from RodW):
If IR is a valid idea then any one system, such as the flagellum, is enough to prove the existence of the designer. Trotting out one biological phenomenon after another may be informative for many but it's superfluous. If IR is not valid then it doesn't matter how many complex phenomena he describes, it doesn't help his case.
I believe IR = IC. And to answer him I would say one person's flagellum is another's US. It also shows how deep the design goes. So yes pointing out there are many IC systems is always a good thing. The more the merrier. Again it shows just how much Intelligent Design went into living organisms. And it all slams the door on materialism. ET
But isn't even "evolutionary pathway" an equivocation? Does that make it a blind and mindless pathway? If the Intelligent Designer used a genetic algorithm of sorts, i.e. evolution by design, to produce this system, does it invalidate the design inference? ET
ET: However, Alan Fox stating: "I don’t know what is currently on the table as detailed evolutionary pathways." is priceless. When did you see the last detailed evolutionary pathway on the table? (I am not holding my breath! :) ) gpuccio
ET: But the fact is, I have an old history of debate with him, and I am a nostalgic! Moreover, I can recognize some obstinate consistency in his approach. :) gpuccio
Reading petrushka is usually for entertainment only. The way petrushka mangles ID, while amusing, is still nauseating. ET
The "martyr" speaks:
My OP was intending to answer the question often asked in that thread at UD “where are all the critics?” The fact I and others are banned there is one reason there are no critics.
Alan, you are an ID critic in the sense of a 5 year old judging broccoli. You and yours get banned for your insipid trolling, your equivocations and your willful ignorance of what ID is and what evolutionism entails. You are a phony. There are ID critics that are allowed to post here, who complain that UD doesn't post anything dealing with science and are noticeably absent. You have banned at least one person just because he could give as good as he gets. You made a claim about him and refused to provide anything to back it up. Then you wanted assurances it wouldn't happen again. What wouldn't happen again- you never said. And he was exposing you and yours as poseurs. Now you and yours get to comment without regard to facts. You don't have to face the refutations of your claims. You can just ignore them as you do everything else. ET
petrushka at TSZ: Hi, nice to see you again! :) I think you have a lot of confused ideas about Behe's concepts. However, I will not go into detail about that now, because as you can see I have other things that keep me busy. However, it is a pleasure to interact again with you, even briefly and indirectly! gpuccio
Alan Fox at TSZ:
So while evolutionary biologist and biochemists cannot more than propose plausible pathways currently, there are no alternative pathways that I’m aware of being proposed by ID theorists. Please explain how your argument is not as I have understood it and is more than “evolution fails to explain X, therefore design”.
It's not that you don't understand my argument. You simply don't understand ID. ID is about specific empirical markers that are, in all empirical data available, constantly linked to design. Design is not a default. Not at all. It is an empirical explanation, derived from available data and available understanding. Just for simplicity, I will briefly sum up the reasoning for the first marker: functional complexity. It is universally observed that the only safe examples of functional complexity are designed objects. And there is a specific rationale for that: systems which do not include the intervention of a conscious intelligent designer cannot harness information towards a specific function, because they can rely only on RV and, if there is reproduction, NS. Those mechanisms have severe limits, and cannot go beyond simple results in generating functional information. IOW, they can generate simple functional information, but never complex functional information (a general threshold of 500 bits will be more than enough in all cases). This connection between functional complexity and design is a positive empirical feature. And it has a perfectly understandable rationale, because we know very well that the conscious experiences of understanding meaning and of having purpose can easily overcome the probabilistic barriers implicit in non conscious systems. Demonstrating that the current dogma of RV + NS cannot do what it is believed to do is part of ID, because of course if that were true ID would be falsified. Like all scientific theories, ID can be falsified, and therefore we have to assess if neo-darwinism is a valid falsification of ID. Well, it is not. But ID is no default to anything. It is a positive and completely rational and completely empirical approach to the problem of functional information. About the problem of what RV and NS can and cannot do, you can find my arguments here: What are the limits of Natural Selection?
An interesting open discussion with Gordon Davisson https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ And here: What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ About my basic definitions of design and functional information you can look here: Defining Design https://uncommondesc.wpengine.com/intelligent-design/defining-design/ And here: Functional information defined https://uncommondesc.wpengine.com/intelligent-design/functional-information-defined/ You say:
I don’t know what is currently on the table as detailed evolutionary pathways.
Nothing, I would say. That's a very easy answer.
I hope others with expertise in the field might chime in.
I hope that too! :) gpuccio
gpuccio, It doesn't matter. Even if every evolutionist alive could post here not one would produce the evidence that ubiquitin evolved by means of blind and mindless processes. Alan is so dim he thinks that ID is anti-evolution even though it has been demonstrated that it is OK with evolution. Evolution by design is still evolution. Alan is also still confused by the word "default". He thinks the design inference is the default even though alternatives have been carefully considered (that is the antithesis of default, Alan). Alan also fails to realize that just eliminating necessity and chance is not all there is. Ubiquitin also has specified complexity- the positive signal for intelligent design. Now if only Alan could propose some way to test the claim that ubiquitin arose via blind and mindless processes. We all know that isn't going to happen but until then Alan doesn't have anything to criticize but his own lame position. ET
John Harshman at TSZ:
John Harshman, March 21, 2018 at 4:20 pm: TomMueller: "Still would like your take on the PNAS paper." Seems fine to me, but I don't see its relevance to the OP.
Neither do I! gpuccio
Alan Fox at TSZ:
But my invitation was consequent upon his complaint at not hearing from ID critics. I was pointing out that UD has banned a considerable number and arbitrarily deleted comments, actions which hardly encourage others who are not banned to comment there. Here, at least we try to provide a level playing field.
My "complaint" was that there are many ID critics who do post here (therefore are not banned) and are very active whenever there is some debate about religion, morality, politics and so on, and are very keen on saying in those debates that UD lacks some scientific discourse, and then never comment when a scientific thread is there. Of course I understand that if one has been banned he will not comment at my threads. I have also made clear that I don't post at TSZ because I have not the time: I should post there instead of posting here, and that's not what I want.
So what is his argument, in your view?
Well, Mung can certainly answer that. In my view, instead, my argument is that there are three different markers that are linked to a design origin and therefore empirically allow a design inference (that is the basic concept in ID, and I have discussed it many times in all its aspects). Those three features are: a) Functional complexity (the one I usually discuss, and which I have quantitatively assessed many times in detail) b) Semiosis (which has been abundantly discussed by UB) c) Irreducible complexity In my OP I have discussed in detail a specific biological system where all those three aspects are present. Therefore, a system for which a design inference is by far the only reasonable explanation. This is my argument. It is not a god-of-the-gaps argument (whatever you mean by that). It is an empirical and scientific argument. gpuccio
In its typical unscientific manner Glen D wants ID to show the Intelligent Designer- he needs absolute proof before he will accept ID. Which is strange seeing that his position doesn't have anything but whining for support. ET
TSZ is hopeless and clueless. They definitely don't have any idea how blind and mindless processes could have produced ubiquitin and they don't have any idea how to test the claim. But they sure can erect straw man after straw man and tear them down. I originally included a link to their discussion but then I figured I didn't even want to give them the traffic because they deserve to wallow in their ignorance and loathing. ET
ET: OK, TSZ seems to have taken the usual way of no discussion and self-referential arrogance. Fine, so I need not answer their non existing arguments. At least, Alan Fox and TomMueller had tried to say something. gpuccio
DATCG: This recent paper: NF-kappaB: Two Sides of the Same Coin https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5793177/ can add some more information about this important pathway. In particular, Fig. 1 is a (probably simplified) depiction of the idea of "horizontal" cross-talk between different pathways. Moreover, phosphorylation seems to be important also for the regulation of NF-kB TF subunits, as detailed in this paper: The Regulation of NF-kB Subunits by Phosphorylation https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4810097/ I have not found much about the inhibitor, IkB alpha. However, it seems that its dynamic state is important for the correct working of the whole pathway. IkB alpha is constantly synthesized and degraded in the cell independently from its link to the NF-kb TF. This paper shows that its degradation when it is in free form is different from its degradation when bound to the TF, and is not mediated by the IKK kinases: NF-kB dictates the degradation pathway of IkB alpha https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2374849/
Abstract IkB proteins are known as the regulators of NF-kB activity. They bind tightly to NF-kB dimers, until stimulus-responsive N-terminal phosphorylation by IKK triggers their ubiquitination and proteasomal degradation. It is known that IkB alpha is an unstable protein whose rapid degradation is slowed upon binding to NF-kB, but it is not known what dynamic mechanisms control the steady-state level of total IkB alpha. Here, we show clearly that two degradation pathways control the level of IkB alpha. Free IkB alpha degradation is not controlled by IKK or ubiquitination but intrinsically, by the C-terminal sequence known as the PEST domain. NF-kB binding to IkB alpha masks the PEST domain from proteasomal recognition, precluding ubiquitin-independent degradation; bound IkB alpha then requires IKK phosphorylation and ubiquitination for slow basal degradation. We show the biological requirement for the fast degradation of the free IkB alpha protein; alteration of free IkB alpha degradation dampens NF-kB activation. In addition, we find that both free and bound IkB alpha are similar substrates for IKK, and the preferential phosphorylation of NF-kB-bound IkB alpha is due to stabilization of IkB alpha by NF-kB. --- In the present study, we address these questions with new genetic tools and a mathematical model of the reactions that determine IkB alpha metabolism and nuclear NF-kB activity. We find that although free IkB alpha can be a good substrate of IKK in vivo, rapid degradation of free IkB alpha does not require IKK-mediated phosphorylation or lysine-targeted ubiquitination, and is instead regulated intrinsically by sequences in its C terminus. When the free IkB alpha degradation pathway is altered, NF-kB activation is severely dampened, proving the importance of a rapid free IkB alpha degradation pathway.
We address the functional significance of these differential degradation rates and pathways, and find that they are critical for allowing stimulus-responsive NF-kB activation, while ensuring a low basal level of NF-kB activity.
gpuccio
#445 Gpuccio, Nice, will be interesting to see where this goes. Have a good day! DATCG
Gpuccio, As a general question, if and when you have time. Curious what you think of my point for IkB-alpha gene enhancer? As a pre-formatted structure waiting in the Cytoplasm? An overview and note other interactions from wiki entry...
IkB-alpha (nuclear factor of kappa light polypeptide gene enhancer in B-cells inhibitor, alpha) is one member of a family of cellular proteins that function to inhibit the NF-kB transcription factor. IkB-alpha inhibits NF-kB by masking the nuclear localization signals (NLS) of NF-kB proteins and keeping them sequestered in an inactive state in the cytoplasm.[5] In addition, IkB-alpha blocks the ability of NF-kB transcription factors to bind to DNA, which is required for NF-kB's proper functioning.[6] Interactions IkB-alpha has been shown to interact with: BTRC, C22orf25, CHUK, DYNLL1, G3BP2, Heterogeneous nuclear ribonucleoprotein A1, IKK2, NFKB1, P53, RELA, RPS6KA1 SUMO4,[25] and Valosin-containing protein.[26]
The IkBalpha-P50-P65 protein complex waits in suspended animation. For a) phosphorylation, then b) Ubiquitination and removal before c) P50-P65 release to nucleus. I'm wondering how long a time period the IkBalpha complex can last before scheduling of degradation. The heterodimer P50/P65... https://users.soe.ucsc.edu/~pchan/projects/NFkB/Content/p50p65_fs.html In the paper re: Fig 1 it states:
Cellular responses to bacterial or viral infections and to stress require rapid and accurate transmission of signals from cell-surface receptors to the nucleus [1]. These signalling(sic) pathways rely on protein phosphorylation and, ultimately, lead to the activation of specific transcription factors that induce the expression of appropriate target genes. Among the activated transcription factors, the nuclear factor-kB (NF-kB) family proteins are essential for inflammation, immunity, cell proliferation and apoptosis.
Following sentence we see what I call a Pre-formatted "state" described as "latent" state. by authors of the paper...
NF-kB exists in a latent state in the cytoplasm and requires a signalling pathway for activation.
"Latent state?" OK, it "exists" in the cytoplasm. I think it's interesting from a Design perspective. Putting aside normal protein cycles, aggregates and degradation. The term "latent" hides possible design and forethought of a series of events. Looking at possible outcomes of mutation and disease from the wiki entry:
Disease linkage The gene encoding the IkB-alpha protein is mutated in some Hodgkin's lymphoma cells; such mutations inactivate the IkB-alpha protein, thus causing NF-kB to be chronically active in the lymphoma tumor cells and this activity contributes to the malignant state of these tumor cells.
Very interesting. IkBalpha Protein must remain as a continuous supply, "queuing" or waiting for signal cascades. Waiting in a "latent" or Pre-formatted state. From a design perspective, a "pre-formatted" ready-state to go, for a pre-programmed step awaiting ubiquitination.
Such NF-kB-activating pathways are triggered by a variety of extracellular stimuli and lead to the phosphorylation and subsequent proteasome-mediated degradation of inhibitory molecules, the inhibitor of NF-kB (IkB) proteins [2].
Activated after awaiting prescribed modification actions in the cytoplasm...
Activated NF-kB migrates into the nucleus to regulate the expression of multiple target genes. The NF-kB–IkB complex can also shuttle between the cytoplasm and the nucleus in unstimulated cells, but the nuclear export is more efficient and, therefore, the NF-kB–IkB complex is mainly cytoplasmic in resting cells.
Sometimes smaller elements intimate an engineered design. A sensor/receptor is considered small, but it's importance in Design is a highly specified detection system of alert(s). Same possibly for a "latent" or Pre-formatted Protein Structure in waiting? DATCG
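The cascade DATCG walks through above (latent cytoplasmic complex, then IKK phosphorylation, then ubiquitination, then proteasomal degradation of IkB-alpha releasing NF-kB to the nucleus) can be sketched as a toy state machine. This is purely illustrative; the class and state names are mine, and real regulation involves shuttling, resynthesis and feedback that this ignores:

```python
# Toy sketch (names and simplifications are mine) of the signalling
# sequence described above: the NF-kB dimer waits in a latent
# cytoplasmic complex with IkB-alpha until a stimulus drives the
# a) phosphorylation -> b) ubiquitination -> c) degradation steps.

class NFkBComplex:
    def __init__(self):
        self.state = "latent"  # IkB-alpha bound, sequestered in cytoplasm

    def stimulus(self):
        # a) IKK phosphorylates N-terminal serines of IkB-alpha
        if self.state == "latent":
            self.state = "phosphorylated"

    def ubiquitinate(self):
        # b) phospho-IkB-alpha is polyubiquitinated (the degradation "tag")
        if self.state == "phosphorylated":
            self.state = "ubiquitinated"

    def degrade(self):
        # c) the proteasome destroys IkB-alpha; the freed NF-kB dimer
        #    translocates to the nucleus to regulate target genes
        if self.state == "ubiquitinated":
            self.state = "nuclear"

c = NFkBComplex()
c.stimulus()
c.ubiquitinate()
c.degrade()
print(c.state)  # "nuclear"
```

Each transition fires only from the correct predecessor state, which is the "pre-formatted, prescribed series of events" reading DATCG is proposing.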
Gpuccio, I was responding, but two hours into research I've decided to postpone a larger comment. And now I see you have a possible "next post" appearing for a response. -------------------------- Generally, on the neural network concept, maybe so as a node-decision process. I agree in general with most of what you're saying, but need to think through certain aspects. Highly specified decisions are tightly regulated and controlled within these cellular processes. And therefore conditionally related for specific outcomes. Not sure I understand how far you're expanding on the neural networks comparison. DATCG
TomMueller at TSZ: I have explicitly mentioned the prokaryotic antecedents of ubiquitin in the OP. See the section: Evolution of the Ubiquitin system? You mention a paper of 2006 about that. In my OP, I mention a more recent paper (2012) about the same subject, and I quote its abstract in full. My point has never been that there are no "antecedents" in prokaryotes. I stick to my comment in the OP: "As usual, we are dealing here with distant similarities, but there is no doubt that the ubiquitin system as we know it appears in eukaryotes." Then you quote another paper: Purifying selection and birth-and-death evolution in the ubiquitin gene family http://www.pnas.org/content/97/20/10866 which is the one mentioned by ET at #442. I am not sure why you quote it. It is an interesting paper about the modalities of neutral variation in ubiquitin gene families, comparing two different theories about how ubiquitin genes have varied in their synonymous sites after the original appearance of the functional sequence, which of course has not varied throughout natural history, as clearly stated by the authors:
It is one of the most highly conserved proteins (1), and 72 of the 76 amino acids appear to be invariant among fungi, plants, and animals (2).
Therefore, that paper has absolutely no relevance for my OP and for the design inference. The effects of negative purifying selection on protein sequences, which can be measured by the Ks (which in the paper is called Ps), are a fundamental and undeniable fact which is the foundation of all my reasoning about sequence conservation in proteins as a measure of functional information. Indeed, I have often used here the Ks values as my main argument in favor of common descent. So, I don't see what that paper has to do with the inference of design for the ubiquitin system. Moreover, even if it is true that we could infer design from the mere conservation of the ubiquitin protein (because it shows, for example, 100% conservation from fungi to humans, for a total bitscore of 155 bits, which is more than enough to infer design), that is not the subject of my OP. My OP is about inferring design for the ubiquitin system, not for the single ubiquitin molecule. That's why I consider not only the huge functional information in the system (which is, of course, much much more than 155 bits), but also the semiotic nature of the system itself. Just to be precise. gpuccio
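The conservation figures quoted from the PNAS paper (72 of 76 AAs invariant across fungi, plants and animals) can be turned into a crude information estimate. The sketch below is a naive upper bound of my own, treating each invariant position as fully specified at log2(20) bits; it is NOT a BLAST bitscore, which is computed from substitution-matrix alignment scores and comes out lower (as with the 155-bit fungi-to-human figure cited above):

```python
import math

# Naive upper-bound estimate of functional information implied by
# position-level conservation: 72 of ubiquitin's 76 positions are
# reported invariant across fungi, plants and animals. Treating each
# invariant position as fully specified gives log2(20) bits apiece.
# This is an illustration only, not a BLAST bitscore.
invariant_positions = 72
bits_per_position = math.log2(20)

naive_bits = invariant_positions * bits_per_position
print(round(naive_bits))  # ~311 bits
```

The gap between this naive 311-bit bound and a BLAST bitscore reflects the fact that alignment scores penalize and reward residues via empirical substitution matrices rather than assuming every conserved position is maximally constrained.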
Alan Fox and other friends at TSZ: Alan, I will first address your arguments in the OP.
The subtext is that ubiquitin’s role is so widespread and diverse and conserved across all (so far known) eukaryotes, that it defies an evolutionary explanation. This appears to be yet another god-of-the-gaps argument.
It is an argument based on the amazing functional complexity of the ubiquitin system and its strong semiotic nature as a symbolic tagging system. It is an argument for an extremely strong design inference, according to the principles of ID theory. If your only objection is that it is a God-of-the-gaps argument, what can I say? I certainly cannot restate here all the basics of ID theory. I thought you could have some better arguments, but if that's all...
Take that, evolutionists! I’m not familiar with the ubiquitin system and thank gpuccio for his article (though I did note some similarities to the Wikipedia entry).
I often use Wikipedia as a useful summary guide to what is known about an issue. Of course, I always check and add further sources all the time. I often quote Wikipedia literally, too, and always with an explicit reference. But my main source, of course, is Pubmed and the scientific literature. Let's go to your reasons for the lack of comments by neo-darwinists:
1) In a sense, there’s little in gpuccio’s opening post to argue over. It’s a description of a biochemical system first elucidated in the late seventies and into the early eighties. The pioneering work was done by Aaron Ciechanover, Avram Hershko and Irwin Rose (later to win the Nobel prize for chemistry, credited with “the discovery of ubiquitin-mediated protein degradation”), all mainstream scientists.
Of course, I never suggested that I was the one who elucidated the system! :) I have only reviewed part of what is known from an intelligent design perspective. However, in my OP there are also some analyses made by me (see, for example, Figures 4 and 5).
2) Gpuccio hints at the complexity of the system and the “semiotic” aspects. It seems like another god-of-the-gaps argument. Wow, look at the complexity! How could this possibly have evolved! Therefore ID! What might get the attention of science is some theory or hypothesis that could be an alternative, testable explanation for the ubiquitin system. That is not to be found in gpuccio’s OP or subsequent comments.
Again, if any discussion about facts that cannot be explained by the neo-darwinist scenario, and that clearly point to information input by design, is by default rejected as a "god-of-the-gaps argument", what can I say? You are entitled to your own worldviews about science. Of course, you could at least acknowledge that the concept of God has absolutely no role in my scientific reasoning about biological ID.
3) Uncommon Descent has an unenviable history on treatment of ID skeptics and their comments. Those who are still able to comment at UD risk the hard work involved in preparing a substantive comment being wasted as comments may never appear or are subsequently deleted and accounts arbitrarily closed.
That has never happened in my threads, as far as I can say. I am always happy to discuss. At most, I have sometimes stopped answering some specific discussant, when his discussing style was really exasperating, and devoid of any new argument. But I treasure opposition, and the better it is, the greater the fun! :)
So I’d like to suggest to gpuccio that he should bring his ideas here if he would like them challenged. If he likes, he can repost his article as an OP here. I guarantee that he (and any other UD regulars who’d like to join in) will be able to participate here without fear of material being deleted or comment privileges being arbitrarily suspended.
I have done that in the past. And I have also had long parallel discussions with your site. I have said many times that debating at UD is already almost beyond my time and resources. That's why I cannot work on two sites. UD is my natural place, because here are those who share my ideas. So, I will go on posting here. I will also try to answer the ideas posted in your thread, if it does not become too exacting! :) Believe me, I have no fear of being deleted or anything like that. It's just a question of personal resources. gpuccio
ET: Oh no! Not another cross-talk with TSZ! :) OK, thanks for the notification. I checked TSZ and found the thread (rather short, for the moment, thank God!). I must say that the paper you link was actually mentioned by TomMueller, not Alan Fox (who is instead the author of the OP). Well, I am going to answer in the next post. :) gpuccio
Alan Fox has taken umbrage at the claiming of ubiquitin as evidence for ID. Of course, he doesn't have any idea how blind and mindless processes could have produced it, and he doesn't have any idea how to test the claim that they could. So there goes the alleged refutation of the concept of ubiquitin as evidence for ID. Alan talks about alternatives: alternative to what? Alan's alleged evolutionary theory doesn't have some theory or hypothesis that could be a testable explanation for the ubiquitin system, to begin with. ET
DATCG: OK, I would like to go on with the more general discussion, starting from what I have already said at #437. Why all the added complexity? I would like to start with an old friend, Michael Behe, and with one of his first metaphors (in Darwin's Black Box): the Rube Goldberg machine. Here is an example: https://en.wikipedia.org/wiki/Rube_Goldberg_machine#/media/File:Rube_Goldberg%27s_%22Self-Operating_Napkin%22_(cropped).gif Now, it is rather obvious that the scenario we have seen at #437 about the NFkB pathway (see Fig. 1 of the quoted paper) does resemble a Rube Goldberg machine. But there is more. Behe offers, in his fundamental book, a couple of important examples of irreducible complexity: the bacterial flagellum and the coagulation cascade. But those examples are a little different from our regulation overkill. In a sense, the irreducible complexity is more "understandable" there. For example, in the flagellum the various parts, stator, rotor, filament and so on, are parts of a machine. Their role, therefore, is immediately obvious. In the coagulation cascade, the linear cascade can be explained, in a way, by the need to amplify the signal linearly to get a wide final effect. But in our regulation scenario, the explanation is less obvious. There is also another feature which can help us in our approach: the regulation network we have seen is not linear. If we look at the famous Fig. 1 already quoted, while the main cascade can be considered linear, there are many "cross-talks". For example, some phosphorylation systems and the ubiquitination step act "at the side" of the cascade. Now, I would propose what seems to me the only reasonable explanation for the "added complexity".
The reason for the added complexity is that the pathway, like almost all similar pathways that work between the cell membrane and the nucleus, is not an isolated mechanism, but is part of a huge "neural network" which involves all the different pathways that transmit and integrate the communication between outward signals (the cell membrane) and the final transcription regulation in the nucleus. My idea is that the many proteins involved in the many "redundant" steps in the pathway are regulation nodes, and act as "sensors" which integrate the specific pathway with all that happens in the cytoplasm, receiving information from the other pathways and transmitting information to them. That also suggests an answer to a more specific question: Why are some apparently simple steps implemented by huge multi-protein complexes? For example, why is the double phosphorylation of IkB alpha performed by a structure made of 9 protein blocks, and not by a single kinase? One possible answer is: because it must receive and transmit information to other cell pathways, interacting with many other protein structures. IOWs, we see here something similar to what happens at the level of the nucleus with the combinatorial working of TFs in big multi-protein structures. Or, if we want to push the analogy even further, something similar to what happens in the synapses to integrate many different signals. OK, these are just a few tentative thoughts. But, if there is something true in these ideas, then the "neural network" of transmission pathways really deserves a lot of attention. And, of course, only a design perspective can help with this kind of issue. gpuccio
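[Editor's note] The "sensor node" idea above can be made concrete with a toy model: a regulatory step that fires only when its main upstream signal is present AND the integrated cross-talk from other pathways permits it. The function name, weights and threshold below are all hypothetical illustrations, not real biochemistry:

```python
# Toy sketch of a "regulation node as sensor": the step (think of the IKK
# complex) requires its primary signal, but its output is modulated by the
# state of other cross-talking pathways. All names/thresholds are invented.

def sensor_node(main_signal, cross_talk_inputs, threshold=0.5):
    """Integrate a primary signal with cross-talk from other pathways."""
    if not main_signal:
        return False
    # fraction of permissive cross-talk inputs acts as an integrated "vote"
    integration = sum(cross_talk_inputs) / len(cross_talk_inputs)
    return integration >= threshold

# Primary TNF-like signal present, but only one of three other pathways permissive:
print(sensor_node(True, [1, 0, 0]))  # False: cross-talk blocks the step
print(sensor_node(True, [1, 1, 0]))  # True: pathway proceeds
```

The point of the sketch is only that a multi-protein step behaves like a small integrating unit, not a simple relay, which is one way to read the "neural network" analogy.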
Note: @439 Bigger Picture: One question leads to another. Why does a blind, unguided series of events "decide" in the past, millions of years ago, to "evolve" a solution so that: a) a Pre-Formatted protein complex like NF-kB waits around in suspended animation for a cascade of events initiated by a signal event through a specific TNF pathway? Q: How many other Pre-Formatted Protein Complexes are waiting in Suspended Animation in the Cytoplasm? The irreducible complexity of such networked systems amounts to multiple interdependent systems coordinating organized interactions while "simultaneously" and "blindly evolving" working solution steps at just the right time for any of these pathways to function. OK, now I must go. Will check in later. (Edit) And this does not include other possible pathways for this specific NF-kB protein complex for other factors (signal processing) and decisions, either pre or post Phosphorylation and Ubiquitination. DATCG
Gpuccio, before I go, restating for my own clarity (and maybe others') what you have stated re: how much "simpler" it might be on another pathway.
"...how simpler it would have been in this other way: 1) The interaction between TNF and its receptor enzymatically modifies a TF (p50-p65) OK, that's bypassing a huge Complex and many phosphorylation steps. So, am I correct in speculating Phosphorylation is crucial in ways I do not understand yet? 2) ...which was before in some inactive state. This to me is an important Design concept, or at least a good question: inactive state. I questioned and expanded on this in a previous comment @438. Why have the NF-kB protein complex hanging around? I designate it as a "Pre-Formatted" state. Speed considerations? Quick response and deployment? Is there another Conditional element where different signals change the resulting pathway or proteolytic process? Why would a supposedly blind, unguided "process" do it? And how would it know the Protein Complex would ever be utilized, or "evolve" to be utilized, in such a pathway? I've not seen previous remarks on this by neo-Darwinists. So if any readers supporting a neo-Darwinian pathway of evolution for this scenario can comment, I'd be happy to see it. Why does a blind, unguided series of events decide in the past, millions of years ago even, that: a) I'm going to leave a Pre-Formatted protein complex like NF-kB waiting around in suspended animation? b) So a signal process can cause a chain reaction of a Lemony Snicket series of "FORTUNATE" events? This does not make sense even for Lemony Snicket. 3) The active TF (edit) then relocates to the nucleus, where it does what it has to do. After a series of unfortunate Lemony Snicket events! ;-) it proceeds to the Nucleus, where a whole other series of unfortunate Lemony Snicket events translate, transcribe, Post-Modify and transpire to send an immune response to the target of the alert.
Amazing! :) That Lemony Snicket!
Simpler, isn’t it? And beware, that simpler version could easily be controlled too, for example at the level where the TF is activated by the membrane receptor. So, why all the added complexity?
Because two kids showed up and created a new pathway for Lemony Snicket to go down, all by random walks, accidental events and "natural selection." DATCG
Ciao Gpuccio @436-437 :) And thanks! Especially on clarifications and this one...
b) The phosphorylation described in a) is done by a big molecular complex, which includes the two IKK1 (or alpha) and IKK2 (or beta) proteins, NEMO and at least 3 other proteins (Hsp90, Cdc37 and ELKS). Many of those proteins must be phosphorylated to be active, in particular IKK1 at Ser 176 and 180, IKK2 at Ser 177 and 181.
To see IKK2 (important role) at Ser 177 and 181 is interesting. I'd missed something and thought it was Ser 32, Ser 36. Thanks for recognizing the question (however uninformed about biochemical pathways) about the processes taking place, which seem, at an initial look, to be over-regulated, for lack of better terminology. So, taking the neo-Darwinist side, would a hodge-podge of pathways like this lend credence to their theory? Or, upon closer inspection, will we find purposes for: a) multiple pathways b) reasons for phosphorylation steps c) ubiquitinylation steps d) a reason for the Pre-Formatted NF-kB waiting in the Cytoplasm? I'd really like a neo-Darwinist to explain d): a pre-formatted TF with inhibitor awaiting signals. Why? Why is it there? It makes no sense for an unguided, blind process to pre-plan actions. Does it? Will neo-Darwinists appeal to bad design as an answer? So yes, the question again is Why? Why for many of these steps? I was in the middle of breaking down the two pathways of Figure 1, Classical vs Alternative (TNF vs CD40), and simply do not have enough time at this moment. But I will hopefully return later tonight and finish before posting. What we see is Directed Response pathways for TNF and CD40, plus a third pathway which I'm not planning to cover. I like how you are detailing what might happen in a simpler pathway, if only we knew all of these processing techniques and other components. I think the more we peer into different Signaling actions and Pathways, starting at the membrane, the more beneficial it is to a Design hypothesis. First, it must distinguish between TNF and CD40. These are conditional signals for appropriate, specified and targeted immune responses (i.e. viruses in the alternative pathway of CD40). OK, I have a very busy week ahead, but I'll respond when I can! Really enjoying this OP, Gpuccio! And I hope other readers appreciate all your efforts here. And I hope they feel OK to ask any questions. DATCG
DATCG at #432: Now, the more general discussion, for which I will use our NFkB as a model, but there are lots of similar systems in the cell. In your comment you have touched a very important point. An extremely important point, I would say. In brief, I will sum it up as follows: Why such complex, multiple, intertwined regulation systems to control one single pathway? I suggest that anyone who reads the following could refer to Fig. 1 in the already quoted paper: https://orbi.uliege.be/bitstream/2268/1280/1/21.%20Review%20phosphorylation%20NF-kB%20TIBS.pdf That will make the discussion simpler. Please, refer only to the left part of the Figure, which represents the canonical activation pathway. Now, the whole process could be summarized as follows: a) A signal arrives at the cell membrane (the TNF cytokine) b) A transcription factor is activated to convey the message to the nucleus. c) The activated TF interacts with the genome and causes a series of events there. In our Fig. 1, it is easy to identify the essential actors. a) The cytokine (TNF) and its membrane receptor (TNFR1) are well visible at the top of the Figure (left part). b) Our TF is the p50-p65 entity (p65 is the same as RelA, so this is the same as RelA-p50 in Fig. 5 of the OP). c) The TF interacting with DNA can be seen at the bottom of the Figure. OK, so I think that the spontaneous question that everyone is asking is: But there are a lot of other things there! Why? And there are! a) The protein complex which forms at the inner side of the cell membrane (3 proteins, one of which phosphorylated, + SODD, which has to be released for the activation) b) The big protein complex which phosphorylates the inhibitor to release the TF: 6 proteins, many of which phosphorylated. c) The inhibitor of the TF (IkB alpha) d) The SKP1 – beta TRC complex which ubiquitinates the inhibitor after its double phosphorylation (not shown in the figure, see Fig. 7 in the OP): 6 proteins, including ubiquitin.
e) And, of course, the proteasome. And I would bet that a lot of other components are not shown, maybe even not known (for example, the various kinase systems that phosphorylate many of the mentioned proteins). Each of those components has an active role in the regulation and control of the process. And again, the question is: why? Of course, anyone can see how much simpler it would have been in this other way: The interaction between TNF and its receptor enzymatically modifies a TF (p50-p65) which was before in some inactive state. The active TF then relocates to the nucleus, where it does what it has to do. Simpler, isn't it? And beware, that simpler version could easily be controlled too, for example at the level where the TF is activated by the membrane receptor. So, why all the added complexity? That is a difficult question. I will try to discuss some aspects in the next post. gpuccio
DATCG: First of all, I would like to offer some clarifications about the complex issue of phosphorylation in the NFkB pathway, just to avoid confusion. a) The protein which is phosphorylated at serines 32 and 36 is IkB alpha, which is a direct inhibitor of RelA p50 (in the canonical pathway of activation). The double phosphorylation leads to ubiquitination of the inhibitor and to its degradation in the proteasome, releasing the transcription factor (RelA p50, which is one form of the NFkB TF). b) The phosphorylation described in a) is done by a big molecular complex, which includes the two IKK1 (or alpha) and IKK2 (or beta) proteins, NEMO and at least 3 other proteins (Hsp90, Cdc37 and ELKS). Many of those proteins must be phosphorylated to be active, in particular IKK1 at Ser 176 and 180, IKK2 at Ser 177 and 181. c) Not much is understood of the processes that lead to the phosphorylations described in b). The TAK1 kinase seems to be involved. d) However, the formation of the protein complex described in b) is caused by another protein complex, which adheres to the cell membrane and includes TRADD, TRAF2 and RIP (this one, too, phosphorylated). e) The formation of the complex described in d) is caused by the reaction between a specific membrane receptor and a cytokine (usually Tumor Necrosis Factor, TNF). Simple, isn't it? And this is only the canonical pathway! :) OK, the above information is not extremely recent; it comes from the following two papers: Phosphorylation of NF-kB and IkB proteins: implications in cancer and inflammation https://orbi.uliege.be/bitstream/2268/1280/1/21.%20Review%20phosphorylation%20NF-kB%20TIBS.pdf See especially Fig. 1. And: The IKK Complex, a Central Regulator of NF-κB Activation https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2829958/pdf/cshperspect-NFK-a000158.pdf I just wanted to clarify these points to go on with some more general discussion in the next post. gpuccio
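[Editor's note] The chain of dependent events in a) through e) above reads naturally as a sequential cascade where each step enables the next. A purely illustrative sketch (step names paraphrase the comment; the model is schematic, not biochemical):

```python
# Schematic walk-through of the canonical NF-kB activation steps described
# above, modeled as a chain where a failure halts everything downstream.

CANONICAL_STEPS = [
    "TNF binds TNFR1 at the membrane",
    "TRADD/TRAF2/RIP complex assembles (SODD released)",
    "IKK complex (IKK1, IKK2, NEMO, ...) activated by phosphorylation",
    "IkB alpha phosphorylated at Ser 32 and Ser 36",
    "IkB alpha ubiquitinated and degraded by the proteasome",
    "Freed RelA-p50 relocates to the nucleus",
]

def run_cascade(steps, fail_at=None):
    """Execute steps in order; stopping at fail_at blocks all later steps."""
    completed = []
    for step in steps:
        if step == fail_at:
            break
        completed.append(step)
    return completed

# Knocking out the IKK step leaves the TF locked to its inhibitor:
done = run_cascade(CANONICAL_STEPS, fail_at=CANONICAL_STEPS[2])
print(len(done))  # 2: only the membrane-proximal events occur
```

The sketch only captures the dependency structure; it says nothing about the cross-talk discussed elsewhere in the thread, which is precisely what makes the real network non-linear.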
So, is that activity around serines 177 and 181 applicable to serines 32 and 36? I don't know if the same rules apply. It is from another report in the book linked in #434, which describes... "two potentially novel components of the IKK complex, namely Cdc37 and Hsp90. Apparently, formation of the core IKK complex with Cdc37/Hsp90 is required for TNF induced activation and recruitment of the core IKK complex from the Cytoplasm to the membrane." Different functional requirements may be unrelated. DATCG
Hmmmm... "Phosphorylation of IkBalpha on serines 32 and 36 is mediated by IkB kinases (IKKs), whose activity is induced by activators of the NFkB pathway. IKK activity exists as a large Cytoplasmic multi-subunit complex (700-900 kDa) containing two kinase subunits, IKK1 (IKKalpha) and IKK2 (IKKbeta), and a regulatory subunit, NEMO..." "Sequence analysis revealed that both IKK1 and IKK2 contain a canonical MAP kinase kinase (MAPKK) activation loop motif. This region contains specific sites whose phosphorylation induces a conformational change that results in kinase activation." Hmmm, is that Kinase activation in support of Tagging? OK, so this is interesting. The paragraphs above and the blockquote below are from a Google Books source. There may be a few typos: Regulation of Organelle and Cell Compartment Signaling: Cell Signaling... edited by Ralph A. Bradshaw, Edward A. Dennis (1st Edition, 2011).
Phosphorylation within the activation loop typically occurs through the action of an upstream kinase or through transphosphorylation enabled by regulated proximity between two kinase subunits. IKK2 activation loop mutations, in which serines 177 and 181 were replaced with alanine, render the kinase refractory to stimulus-dependent activation. In contrast, replacement of serines 177 and 181 with glutamic acid, to mimic phosphoserine, yielded a constitutively active kinase, active without cell stimulation. The corresponding mutations in IKK1 did not interfere with NFkB activation in response to IL-1 or TNF, providing the first data suggesting that IKK2 plays a more prominent role in NFkB activation in response to proinflammatory cytokines.
DATCG
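[Editor's note] The activation-loop logic quoted above (serine-to-alanine kills activation; serine-to-glutamate mimics phosphoserine and gives constitutive activity) can be written as a small rule table. The function and its encoding are a toy illustration of the quoted mutagenesis results, not a model of the real enzyme:

```python
# Toy encoding of the IKK2 activation-loop rules quoted from the book:
# S/S (wild type) needs an upstream stimulus; A/A is refractory;
# E/E (phosphomimetic) is constitutively active.

def ikk2_active(loop_residues, stimulus_present):
    """loop_residues: residues at positions 177 and 181, e.g. ('S', 'S')."""
    if all(r == 'E' for r in loop_residues):   # glutamate mimics phosphoserine
        return True                            # constitutively active
    if all(r == 'S' for r in loop_residues):   # wild type
        return stimulus_present                # needs upstream phosphorylation
    return False                               # e.g. S->A: refractory to stimulus

print(ikk2_active(('S', 'S'), stimulus_present=True))   # True
print(ikk2_active(('A', 'A'), stimulus_present=True))   # False
print(ikk2_active(('E', 'E'), stimulus_present=False))  # True
```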
Correction of sentence from 432 above: It must match the signal input. And in turn the pathway has to launch the correct response by a Directive or Rules based procedure code that has addresses or addressable-location mechanisms built-in to guide for eventual locations and active response(i.e. inflammatory and/or immune responses must be directed to correct area of inflammation and repair) DATCG
Hey guys, hope everyone had a great weekend, restful and invigorating :) Gpuccio @425 - 430! :) Wow! It's always fun to read all the spectacular details of what you've posted. I especially like the Spliceosome Ubiquitin connections, as expected. And I'd love to delve into that area you highlighted, especially a favorite area of introns for me :) My time is limited, however, so I will focus on the part of your OP on Phosphorylation --------------------- On #425... building upon an analogy of Design process and Markup Languages... The Phosphorylation Tag as a refined, or fine-tuning, condition step is very interesting. What is it accomplishing, and why precede UPS with a unique tag? What's the reason for this specific pathway? To release a transcription factor? For that matter, why is a pre-release state required, as a partial degradation, for release? Weird, huh? Is there a precedent in Design and Coding realms for an Information processing analogy? There's a reason Bill Gates said, "DNA is like a computer program but far, far more advanced than any software ever created." Well, why would he say that? Because he is intimately aware of how a CPU works with Coding languages. He recognizes Code when he sees it in an operating system. So yes, there is a precedent in information processing. Programmers maintain internal information, code and data, append internal tags or external tags in a table or database, and strip data or "inhibitors", if you like, as Pre-processing requirements are met, based upon different input signals. So, if we view the Cytoplasm as another method of large CPU-processing Memory in the Cell, or, let's call it an "aqueous MotherBoard*", this begins to make sense as an Information Processing analogy. Data, or in this case the NF-κB TF Structure, is Pre-formatted. That alone is a sign of Design.
The NF-kB Transcription Factor is ready and waiting for activation to instigate its release, based upon specific input signal(s), for its eventual modification and release into the Nucleus. One of my first questions is... Why have a pre-formatted structure waiting for use in a blind, unguided neo-Darwinian story? This Pre-formatted TF does not fire off for any reason. In fact, if it started firing off for any reason, it would cause chaos and possible damage downstream. So it's tightly regulated and only released upon the correct Signal(s). It must match the signal input. And in turn the pathway has to launch the correct response by a Directive or Rules based procedure code that has locations built in for eventual locations. It's like a data-table or structure preloaded into memory, ready for quick access and retrieval by a CPU-nucleus instruction set for transcription. This is not a single Input -> Modification -> Output process. These rapid processes are going on in parallel, especially for quick-reaction immune systems and/or repair mechanisms, and the millions of trillions of cellular transactions at any one time in our body, brain, heart, skin, immune system, gut, etc., etc. But simplifying, we have... Input to Aqueous Cytoplasm "Motherboard" for the Release of the Transcription Factor: -> Signal A ---> Structure Awaiting Signal A in Wet Memory -----> Tag for Modification -------> Alert UPS ---------> Strip Component Part, freeing TF -----------> Send NF-κB TF to Nucleus for processing An interesting question arises which I think is even more succinct and descriptive of a Design process. Why not ubiquitin Mono, Poly, or Branched tags? Why a requirement for a conditional, two-step Tagging process? Why not a one-step UPS solution or chaining of events? That would satisfy partial degradation and release of the TF to the nucleus. I need more time to review.
Unfortunately, I do not have enough background in biochemistry and atomic structures to know why these different Tagging procedures might be required. A few searches have not produced answers, though I may easily be missing them. Is it a conformational component, where Phosphorylation is required to adjust folds? If by Design, we should be able to qualify the reasons for the conditions of refinement specificity that you mention and the Tagging requirements. So much to review :) *Aqueous Motherboard - what else can we call it? Or designate the Cytoplasm as? Other than its aqueous structure of floating functions and organelles - it resembles a motherboard. The Cytoplasm functions as a motherboard for active, instant retrieval of Pre-formatted structures, specified functions and specialized processing units (1). These units and floating pre-formatted functions surround the CPU - the Core process-Nucleus instruction set for Eukaryotes. This enables high-speed throughput, as signals enter and ignite pre-programmed responses. (1) specialized processing units = organelles. What am I missing? What more might be added that I'm leaving out in symbolic tagging and semiosis? Or processing functionality? DATCG
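[Editor's note] The arrow diagram in the comment above describes a conditional, two-step tagging pipeline: the inhibitor must first carry the phosphorylation tag before the ubiquitin tag (and proteasomal release of the TF) is permitted. A toy state machine of that pipeline; all state labels are illustrative, not biochemical identifiers:

```python
# Toy two-step tagging pipeline: phospho-tag first, then ubiquitin tag,
# then degradation of the inhibitor frees the transcription factor.
# Purely a sketch of the logical ordering, not of real kinetics.

def release_tf(signal_ok, ikb_state):
    """Advance IkB one step; return (new_state, tf_released)."""
    if not signal_ok:
        return ikb_state, False                 # no signal: nothing moves
    if ikb_state == "unmodified":
        return "phospho-tagged", False          # step 1: kinase writes the tag
    if ikb_state == "phospho-tagged":
        return "ubiquitinated", False           # step 2: E3 ligase reads the tag
    if ikb_state == "ubiquitinated":
        return "degraded", True                 # proteasome frees the TF
    return ikb_state, False

state, tf_free = "unmodified", False
while not tf_free:
    state, tf_free = release_tf(True, state)
print(state, tf_free)  # degraded True
```

Note that the machine cannot skip from "unmodified" to "ubiquitinated": the conditionality DATCG asks about is exactly this enforced ordering of tags.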
UB "ES, I do not see any mail, my friend". That explains your silence :) I'll send another one. Thanks for letting me know. I always enjoy reading this kind of OP, as you know. But time is really tight just now. I hope to be able to devote more time to it in due course. EugeneS
DATCG: Here is another protein regulated by ubiquitin and involved in splicing, Sde2: Sde2 is an intron-specific pre-mRNA splicing regulator activated by ubiquitin-like processing. http://emboj.embopress.org/content/37/1/89.long
Abstract The expression of intron-containing genes in eukaryotes requires generation of protein-coding messenger RNAs (mRNAs) via RNA splicing, whereby the spliceosome removes non-coding introns from pre-mRNAs and joins exons. Spliceosomes must ensure accurate removal of highly diverse introns. We show that Sde2 is a ubiquitin-fold-containing splicing regulator that supports splicing of selected pre-mRNAs in an intron-specific manner in Schizosaccharomyces pombe. Both fission yeast and human Sde2 are translated as inactive precursor proteins harbouring the ubiquitin-fold domain linked through an invariant GGKGG motif to a C-terminal domain (referred to as Sde2-C). Precursor processing after the first di-glycine motif by the ubiquitin-specific proteases Ubp5 and Ubp15 generates a short-lived activated Sde2-C fragment with an N-terminal lysine residue, which subsequently gets incorporated into spliceosomes. Absence of Sde2 or defects in Sde2 activation both result in inefficient excision of selected introns from a subset of pre-mRNAs. Sde2 facilitates spliceosomal association of Cactin/Cay1, with a functional link between Sde2 and Cactin further supported by genetic interactions and pre-mRNA splicing assays. These findings suggest that ubiquitin-like processing of Sde2 into a short-lived activated form may function as a checkpoint to ensure proper splicing of certain pre-mRNAs in fission yeast.
And: Intron specificity in pre-mRNA splicing https://link.springer.com/article/10.1007%2Fs00294-017-0802-8
Abstract The occurrence of spliceosomal introns in eukaryotic genomes is highly diverse and ranges from few introns in an organism to multiple introns per gene. Introns vary with respect to their lengths, strengths of splicing signals, and position in resident genes. Higher intronic density and diversity in genetically complex organisms relies on increased efficiency and accuracy of spliceosomes for pre-mRNA splicing. Since intron diversity is critical for functions in RNA stability, regulation of gene expression and alternative splicing, RNA-binding proteins, spliceosomal regulatory factors and post-translational modifications of splicing factors ought to make the splicing process intron-specific. We recently reported function and regulation of a ubiquitin fold harboring splicing regulator, Sde2, which following activation by ubiquitin-specific proteases facilitates excision of selected introns from a subset of multi-intronic genes in Schizosaccharomyces pombe
Both of January 2018. Strangely, Sde2 is not linked to splicing in Uniprot:
Involved in both DNA replication and cell cycle control (PubMed:27906959). Unprocessed SDE2 interacts with PCNA via its PIP-box. The interaction with PCNA prevents monoubiquitination of the latter thereby inhibiting translesion DNA synthesis. The binding of SDE2 to PCNA also leads to processing of SDE2 by an unidentified deubiquitinating enzyme, cleaving off the N-terminal ubiquitin-like domain. The resulting mature SDE2 is degraded by the DCX(DTL) complex in a cell cycle- and DNA damage dependent manner (PubMed:27906959). Binding of SDE2 to PCNA is necessary to counteract damage due to ultraviolet light induced replication stress. The complete degradation of SDE2 is necessary to allow S-phase progression
So, its role in intron splicing seems to be a really recent discovery. By the way, human Sde2 is not highly conserved, but has a rather slow emergence in evolutionary history, with two discrete jumps at the vertebrate and mammal transitions. gpuccio
DATCG: Well, the issue is more complex than I thought. It seems that the NineTeen complex is involved in the many phases of the spliceosome assembly, with all the activities described in the paper linked at #428. However, the complex itself is formed by at least 8 core proteins (in yeast): Prp19, Cef1, Syf1, Syf2, Syf3, Snt309, Isy1 and Ntc20, plus about 18 associated proteins. See here: The function of the NineTeen Complex (NTC) in regulating spliceosome conformations and fidelity during pre-mRNA splicing. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4234902/
Abstract: The NineTeen Complex (NTC) of proteins associates with the spliceosome during pre-mRNA splicing and is essential for both steps of intron removal. The NTC and other NTC-associated proteins are recruited to the spliceosome where they participate in regulating the formation and progression of essential spliceosome conformations required for the two steps of splicing. It is now clear that the NTC is an integral component of active spliceosomes from yeast to humans and provides essential support for the spliceosomal snRNPs (small nuclear ribonucleoproteins). In the present article, we discuss the identification and characterization of the yeast NTC and review recent work in yeast that supports the essential role for this complex in the regulation and fidelity of splicing.
In particular, see Table 1. But the strange point is that:
The NTC is named after the splicing factor Prp19, which was first identified in 1993 as a splicing factor in the yeast S. cerevisiae [4]. Prp19 is essential for splicing but is not a constituent of any of the individual spliceosomal snRNPs [5,6]. Association of Prp19 with itself to form tetramers provides the basis for the hypothesis that Prp19 provides a scaffold for NTC organisation [7]. Prp19 contains a U-box domain which exhibits E3 ubiquitin ligase activity in vitro [8], however, a target for this activity in the spliceosome is still lacking.
Emphasis mine. IOWs, Prp19 is definitely an E3 ligase, but that activity is not documented in the spliceosome assembly (although it could certainly be present). However, the very recent paper referenced at #427 clearly documents the E3 ligase activity, but in relation to DNA repair, not to spliceosome assembly. Moreover, the 3 proteins listed at #427 are required for the E3 ligase activity in DNA repair, but apparently are not part of the NTC complex, as I had believed initially (I have corrected my comment in that sense). These proteins are really amazing, I would say. :) gpuccio
DATCG: And this is about the NineTeen Complex (NTC): The NineTeen Complex (NTC) and NTC-associated proteins as targets for spliceosomal ATPase action during pre-mRNA splicing. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4615276/
Abstract: Pre-mRNA splicing is an essential step in gene expression that removes intron sequences efficiently and accurately to produce a mature mRNA for translation. It is the large and dynamic RNA-protein complex called the spliceosome that catalyzes intron removal. To carry out splicing the spliceosome not only needs to assemble correctly with the pre-mRNA but the spliceosome requires extensive remodelling of its RNA and protein components to execute the 2 steps of intron removal. Spliceosome remodelling is achieved through the action of ATPases that target both RNA and proteins to produce spliceosome conformations competent for each step of spliceosome activation, catalysis and disassembly. An increasing amount of research has pointed to the spliceosome associated NineTeen Complex (NTC) of proteins as targets for the action of a number of the spliceosomal ATPases during spliceosome remodelling. In this point-of-view article we present the latest findings on the changes in the NTC that occur following ATPase action that are required for spliceosome activation, catalysis and disassembly. We proposed that the NTC is one of the main targets of ATPase action during spliceosome remodelling required for pre-mRNA splicing.
Look at Fig. 1 for a "simple" summary. :) gpuccio
DATCG: OK, it had to happen: Here are the ubiquitin system and the spliceosome joined together, with additional involvement in DNA repair. And in a very complex way. Mol Cell. 2018 Mar 15: Prp19/Pso4 Is an Autoinhibited Ubiquitin Ligase Activated by Stepwise Assembly of Three Splicing Factors
Abstract Human nineteen complex (NTC) acts as a multimeric E3 ubiquitin ligase in DNA repair and splicing. The transfer of ubiquitin is mediated by Prp19-a homotetrameric component of NTC whose elongated coiled coils serve as an assembly axis for two other proteins called SPF27 and CDC5L. We find that Prp19 is inactive on its own and have elucidated the structural basis of its autoinhibition by crystallography and mutational analysis. Formation of the NTC core by stepwise assembly of SPF27, CDC5L, and PLRG1 onto the Prp19 tetramer enables ubiquitin ligation. Protein-protein crosslinking of NTC, functional assays in vitro, and assessment of its role in DNA damage response provide mechanistic insight into the organization of the NTC core and the communication between PLRG1 and Prp19 that enables E3 activity. This reveals a unique mode of regulation for a complex E3 ligase and advances understanding of its dynamics in various cellular pathways. --- The Prp19/nineteen complex (NTC) is a multifunctional protein complex involved in very diverse biological processes, including pre-mRNA splicing and the DNA damage response (DDR)
Prp19 is one more Prp involved, among other things, in the spliceosome assembly. From Uniprot:
Ubiquitin-protein ligase which is a core component of several complexes mainly involved in pre-mRNA splicing and DNA repair. Core component of the PRP19C/Prp19 complex/NTC/Nineteen complex which is part of the spliceosome and participates in its assembly, its remodeling and is required for its activity. During assembly of the spliceosome, mediates 'Lys-63'-linked polyubiquitination of the U4 spliceosomal protein PRPF3. Ubiquitination of PRPF3 allows its recognition by the U5 component PRPF8 and stabilizes the U4/U5/U6 tri-snRNP spliceosomal complex (PubMed:20595234). Recruited to RNA polymerase II C-terminal domain (CTD) and the pre-mRNA, it may also couple the transcriptional and spliceosomal machineries (PubMed:21536736). The XAB2 complex, which contains PRPF19, is also involved in pre-mRNA splicing, transcription and transcription-coupled repair (PubMed:17981804). Beside its role in pre-mRNA splicing PRPF19, as part of the PRP19-CDC5L complex, plays a role in the DNA damage response/DDR. It is recruited to the sites of DNA damage by the RPA complex where PRPF19 directly ubiquitinates RPA1 and RPA2. 'Lys-63'-linked polyubiquitination of the RPA complex allows the recruitment of the ATR-ATRIP complex and the activation of ATR, a master regulator of the DNA damage response (PubMed:24332808). May also play a role in DNA double-strand break (DSB) repair by recruiting the repair factor SETMAR to altered DNA (PubMed:18263876). As part of the PSO4 complex may also be involved in the DNA interstrand cross-links/ICLs repair process (PubMed:16223718). In addition, may also mediate 'Lys-48'-linked polyubiquitination of substrates and play a role in proteasomal degradation (PubMed:11435423). May play a role in the biogenesis of lipid droplets (By similarity). May play a role in neural differentiation possibly through its function as part of the spliceosome.
So many proteins involved in this extraordinary multi-protein complex: Prp19 (504 AAs) (as a homotetramer), SPF27 (225 AAs), CDC5L (802 AAs), PLRG1 (514 AAs). All of them highly conserved! :) gpuccio
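As a back-of-the-envelope check, the subunit lengths quoted in the comment above can be tallied into a rough size for the NTC core. Note the one-copy-each stoichiometry assumed here for SPF27, CDC5L and PLRG1 is purely for illustration; the abstract only specifies that Prp19 is a homotetramer.

```python
# Rough tally of the NTC core, from the subunit lengths quoted above.
# Copy numbers other than the Prp19 tetramer are an assumption for
# illustration; the real stoichiometry may differ.
subunits = {
    "Prp19": (504, 4),  # homotetramer
    "SPF27": (225, 1),
    "CDC5L": (802, 1),
    "PLRG1": (514, 1),
}

total_aa = sum(length * copies for length, copies in subunits.values())
print(total_aa)  # 3557
```

Even under this minimal stoichiometry, the assembled core is well over 3,500 amino acids of highly conserved sequence.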
DATCG: By the way, have you seen that our private party here has got a lot of new visibility, thanks to this kind OP by Barry Arrington? :) https://uncommondesc.wpengine.com/intelligent-design/they-wont-dance-they-wont-mourn/ gpuccio
DATCG at #421: "This Ubiquitin post of yours sparked my interest more than usual." Mine too, I must say. "Precisely because it's a Tagging system. Or Markup Language as an analogy?" Exactly. The tagging/markup analogy is perfectly justified! I would like to add a few thoughts about the nature of the tag. Of course, ubiquitin is not the only "tag" here. We have seen that many other systems cooperate, and some of them just give the appropriate signal to the ubiquitin system. One good example is phosphorylation, which often serves as a tag to recruit the ubiquitin system to some specific target. So, we have a double specificity here: the ubiquitin system recognizes the target (usually by the E3 ligase), and it also recognizes the tag (phosphorylation). A good example of that mechanism can be found in the OP, where it is mentioned that phosphorylation of IκBα at serines 32 and 36 is the signal for the ubiquitination of the IκBα inhibitor (see Fig. 6). OK, so what is the main difference between, say, the phosphorylation tag and the ubiquitin tag? I would say that it is the fact that ubiquitin is a collection of different tags: IOWs, the system is much richer. Phosphorylation is a very powerful tag, but it is one tag, and therefore its symbolic meaning is linked essentially to the positions that are phosphorylated in the target protein. For example, serines 32 and 36 in the case of IκBα. The same would be true for ubiquitin if only mono-ubiquitination, single or multiple, existed. Then we would have one tag, which can assume different meanings according to the positions where it is added. But, as we well know, things are much more complex for ubiquitin. Much of the signaling, here, is made not by mono-ubiquitination, but by ubiquitin chains. So, while the position where the chain is added retains all its symbolic meaning, a new layer of coding is added: the length and nature of the chain. In that sense, ubiquitin is really a miraculous protein.
Its special fold provides 8 different switches that can be used to build chains. So we have the following combinatorial degrees of freedom: a) The length of the chain can vary b) Homogeneous chains can be built using each of the possible switches. c) Heterogeneous chains can be built by mixing different switches. That's simply outstanding! :) gpuccio
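The combinatorial point above can be made concrete with a small sketch (my own illustration, not from the comment): treating each of ubiquitin's 8 attachment switches (the N-terminal Met plus 7 lysines) as an independent choice for every link, an unbranched chain of length n admits 8^(n-1) distinct linkage patterns, of which only 8 are homogeneous.

```python
# Illustrative count of unbranched ubiquitin chain topologies.
# Each added ubiquitin attaches through one of 8 switches:
# the N-terminal methionine (M1) or one of 7 lysines.
LINKAGE_SITES = ["M1", "K6", "K11", "K27", "K29", "K33", "K48", "K63"]

def chain_patterns(n):
    """Distinct linear chains of length n: each of the n-1 links
    independently chooses one of the 8 attachment sites."""
    return len(LINKAGE_SITES) ** (n - 1)

for n in range(2, 6):
    print(f"length {n}: {chain_patterns(n)} patterns "
          f"({len(LINKAGE_SITES)} of them homogeneous)")
```

Even before counting branched or mixed-length chains, the signal space grows exponentially with chain length, which is exactly the "much richer" coding capacity being described.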
ES, I do not see any mail, my friend. Upright BiPed
DATCG at #421: Wow! What a tour de force! As usual, it is late now. I will comment on it tomorrow! :) gpuccio
EugeneS: Nice to see you here! :) I hope you will like the discussion about our friend ubiquitin. It has been much wider than each of us expected. And, of course, any comments from you will be greatly appreciated! :) gpuccio
Upright Biped, Gpuccio, Thanks guys, enjoy learning from you both! Gpuccio, your background in medicine helps so much. And I like your detailed analysis. This Ubiquitin post of yours sparked my interest more than usual. Precisely because it's a Tagging system. Or Markup Language as an analogy? Most of what I worked on dealt with conditional processing of language specific identifiers, imaging systems, document management and packaging. Input Processing, Tagging and Rules based systems were created to coordinate a tightly controlled decision tree of subroutines. Built on specific language requirements used across all 50 states. All of it controlled by municipalities, medical boards, state and federal regulations. Lots of legalese w/ medical and beneficiary enrollment plans - healthcare. For large corporations and government it was a lengthy process and a possible legal nightmare if a single mistake (mutation) was made. Every decision made revolved around Tagging and Rules Based language procedures for Identification and Information processing routines. It was seen as fairly revolutionary at the time by a small software startup. What traditionally took six months to a year, even two years in large cases, was reduced to mere days or weeks (and that only due to Human reviews). Simpler applications reduced it to mere seconds for requests to end users. We were doing Markup Language and Tagging before HTML was fully accepted. Looking back, I realize I worked with some of the brightest in the industry. Visionary developers at that time. It was a great experience. These were legacy systems that eventually crossed over to PC and Browsers. Fortunately, I was chosen to bridge the gap for a few clients. So I learned many different platforms and markup languages over the years. OK, all this to explain my interest in this post Gpuccio. Going back to your OP, you identified Ubiquitin as a Tagging Solution. Precisely!
In my view this Tagging solution and "Markup Language" or Ubiquitin Code Identification leaped out as Design from the start. Functional information processing systems or Cellular Processing cannot exist without Identification and Tagging or Marking Codes. A quick review of your initial post(note: edited)
The semiosis: the ubiquitin code The title of this OP makes explicit reference to semiosis. Let’s try to see why. The simplest way to say it is: ubiquitin is a tag. The addition of ubiquitin to a substrate protein marks that protein for specific fates, the most common being degradation by the proteasome. Nonproteolytic Functions of Ubiquitin in Cell Signaling
Abstract: In the past few years..., nonproteolytic functions of ubiquitin have been uncovered at a rapid pace. These functions include (Tagging of:) membrane trafficking, protein kinase activation, DNA repair, and chromatin dynamics. A common mechanism (Tagging) underlying these functions is that ubiquitin, or polyubiquitin chains, serves as a signal to recruit proteins harboring ubiquitin-binding domains, thereby bringing together ubiquitinated proteins and ubiquitin receptors to execute specific biological functions. Another important aspect is that ubiquitin is not one tag, but rather a collection of different tags. IOWs, a tag based code.
Bingo :) a "Tag Based Code" Or, Markup Language Identifier? One area of clarification. Gpuccio, you and Dionisio previously highlighted and discussed the missing procedures? Where are the governing Rules and procedures? Thinking from a Design interpretation: if we look at it through a combination of instantiated information processing (substrates, Tags and Markings), we see a series of different Markup Languages and Tagging Identifiers. Mono, Poly-ubiquitin and branched Ubiquitin, etc. Can we designate these as external Tags (markups)? Based upon the information tagging processes being researched and discovered today in regulatory UB systems. But, what is missing? Might it be an internal Rules based, Tagging system? Back to Language Markup principles and Design. We used internal Markup Languages and Rules for identification of external tag markers, conditions based or Contextual and systems based, including transformation across different coded networks and languages. We internally marked (Tagged) every bit of language in whole document packages by Document IDs, pages, sections, paragraphs, down to single words, even characters, and internal translations, including special post-processing modifications. All of these modular packaging systems clients could pick and choose from, for whatever best represented their requirements. There was an external Markup Language that End Users edited documents with, as identifiers to internal systems processing and tagging for eventual output destinations and other decision-required processing. All of it regulated by internal identifiers - Tags and Procedures only Developers could change or update. Over time we allowed more overrides by customers to speed up client specializations and lessen dependency on Developers. A more open, User friendly markup. Q: Are we on a similar threshold looking in as End Users today across cellular processes?
Am I applying too many Information Processing techniques of Markup languages and Tags to the Ubiquitin System? Does the analogy or application of Markup Language (Tags) make some rudimentary common sense? Or does it go too far afield from what you guys may be thinking? Can we state, for example: for a Semiotic Code of Life to be interpreted, and appropriate responses and actions of any kind to take place, it requires a Rules based, or Procedural Markup language? Both external and internal? For many different facets of: 1) Input 2) Identification, Tagging 3) Procedures and Rules based calls 4) Functional Operations - subsets of functions 5) Interactions and Communications (Bridges) between Functions, Systems and network subsystems 6) Error Checking, Maintenance, Stress Management 7) Final output or result Reflecting on Gpuccio's posting at #369,
Ube2V2 Is a Rosetta Stone Bridging Redox and Ubiquitin Codes, Coordinating DNA Damage Responses.
On the scale of Life we see Semiosis. Multiple Codes and "Markup Languages." MetaCodes, MetaLayers, and "forms" and/or Cellular processing techniques are conserved across millions of years in evolutionary terms, while some are plastic and vary across phyla, kingdoms and domains: Bacteria, Archaea, Eukaryota. This returns to the Epigenetic Regulatory Code of Life. It seems safe to say it's larger than the blueprint itself, upon which all core systems processing turns. Like any functionally organized, complex system, the blueprint must adhere to a large network of regulatory functions for initial design and, importantly, future maintenance. Poor Dan Graur, as Gpuccio rightly pointed out and turned Graur's words back on him: “If Evolution is wrong, ENCODE is right” I wish there was a way to track the artificial boundary set by Dan's 75% threshold. I wonder if the ENCODE project is tracking the areas and numbers of Functions, including percentages of formerly declared "JUNK" DNA regions that today show important, tightly controlled functions. Since they laid out 80%, it would be in their interest to do so in comparison to Graur's dogmatic response.
DATCG
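The "tag based code" idea discussed above can be caricatured as a lookup from tag type to cellular fate. The K48 → degradation and K63 → non-proteolytic signalling readings are well established in the literature; the rest of this toy table, and the helper function, are my own illustrative simplifications, not anything from the OP.

```python
# Toy "ubiquitin code" reader: maps a (tag kind, chain linkage) pair to a
# downstream fate. Grossly simplified: real readout also depends on chain
# length, branching, substrate, and which reader proteins are present.
UBIQUITIN_CODE = {
    ("mono", None):   "membrane trafficking / endocytosis",
    ("chain", "K48"): "proteasomal degradation",
    ("chain", "K63"): "non-proteolytic signalling (e.g. DNA repair, kinase activation)",
}

def interpret(tag_kind, linkage=None):
    """Return the fate associated with a tag, defaulting to context-dependent."""
    return UBIQUITIN_CODE.get((tag_kind, linkage), "unknown / context-dependent")

print(interpret("chain", "K48"))  # proteasomal degradation
print(interpret("chain", "K11"))  # unknown / context-dependent
```

The design point the analogy captures: the "rules" live in the readers (ubiquitin-binding domain proteins), not in the tag itself, much like markup is inert until a processor interprets it.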
Hello Evgeny! Good to see you back on UD. With your interest in semiosis, you will certainly enjoy this thread. hmm ... I do not see any mail !? Upright BiPed
GPuccio Just quickly passing by. Another great bookmark in the browser! I will try to read this OP as soon as I find time. I appreciate your efforts in laying out really hard-core ID stuff. I still would like to propose that readers have access to OPs by author on this blog! It would just be a lot more convenient! Upright Biped You have mail :) EugeneS
DATCG at #400: Great video! And here are the last two papers published by the 4D Nucleome Project: A pathway for mitotic chromosome formation http://science.sciencemag.org/content/early/2018/01/17/science.aao6135?rss=1
Abstract: Mitotic chromosomes fold as compact arrays of chromatin loops. To identify the pathway of mitotic chromosome formation, we combined imaging and Hi-C of synchronous DT40 cell cultures with polymer simulations. We show that in prophase, the interphase organization is rapidly lost in a condensin-dependent manner and arrays of consecutive 60 kb loops are formed. During prometaphase ~80 kb inner loops are nested within ~400 kb outer loops. The loop array acquires a helical arrangement with consecutive loops emanating from a central spiral-staircase condensin scaffold. The size of helical turns progressively increases during prometaphase to ~12 Mb. Acute depletion of condensin I or II shows that nested loops form by differential action of the two condensins while condensin II is required for helical winding.
and: Real-time imaging of DNA loop extrusion by condensin http://science.sciencemag.org/content/early/2018/02/21/science.aar7831.long
Abstract: It has been hypothesized that Structural Maintenance of Chromosomes (SMC) protein complexes such as condensin and cohesin spatially organize chromosomes by extruding DNA into large loops. Here, we provide unambiguous evidence for loop extrusion by directly visualizing the formation and processive extension of DNA loops by yeast condensin in real-time. We find that a single condensin complex is able to extrude tens of kilobase pairs of DNA at a force-dependent speed of up to 1,500 base pairs per second, using the energy of ATP hydrolysis. Condensin-induced loop extrusion is strictly asymmetric, which demonstrates that condensin anchors onto DNA and reels it in from only one side. Active DNA loop extrusion by SMC complexes may provide the universal unifying principle for genome organization.
Great work for condensins! :) gpuccio
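The numbers in the two abstracts above invite a quick sanity check (my own arithmetic, not from the papers): at the reported maximum rate of ~1,500 bp/s for a single condensin, the loop sizes mentioned would take on the order of seconds to minutes to extrude.

```python
# Time for a single condensin to extrude loops of the sizes quoted above,
# at the maximum rate of ~1,500 bp/s reported for yeast condensin in vitro.
RATE_BP_PER_S = 1_500

def extrusion_time_s(loop_bp):
    return loop_bp / RATE_BP_PER_S

# 60 kb prophase loops; ~80 kb inner and ~400 kb outer prometaphase loops.
for size_kb in (60, 80, 400):
    print(f"{size_kb} kb loop: ~{extrusion_time_s(size_kb * 1000):.0f} s")
```

This is of course an upper-bound speed measured in vitro; in vivo rates, and the division of labour between condensin I and II, could differ considerably.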
DATCG at #414: Thank you for sharing some of your background. You have really done a lot of brilliant work! :) I think you share some personal history with Dionisio, in terms of coming from Information Technology but having a deep love and understanding of biology. I have followed a different, but in a way mirror-image, path, coming from medicine (and therefore, indirectly, biology) but having always loved, and in some way practiced, informatics. My good experience in medical data analysis and statistics has certainly helped too. I think that ID, as a new and revolutionary scientific paradigm, is especially attractive to people like us, who in some way have an interdisciplinary attitude. Maybe it's also easier for us to be less conditioned by academic dogmas. Another thing that, IMO, unites people like you and UB and Dionisio and me is a genuine enthusiasm for ID as a scientific enterprise. I believe that, whatever our personal worldviews, we feel no particular need to overlap our more general beliefs with our scientific approach to facts. Whatever the reasons, I think we make a great team! :) gpuccio
DATCG at #414 Beautiful. - - - - - - - - - - - - - - - (I knew this wasn't your first rodeo in this area) Thanks for sharing. Upright BiPed
I did not get this post-edit in place for 414... Regulatory Network of Epigenetic Processes oversees the following: Input: Digestive, Environmental, Solar, etc. VDRPOL: Digestive Tract, etc. (note: correction) Output: cellular function of skin cell production, reproduction and repair mechanisms Which in this OP includes the mighty Ubiquitin System :) And anyone who can think programmatically through an input/output process understands that regulatory systems usually dwarf the core process, to ensure the core process never stops running. We are essentially a bunch of highly regulated, walking, talking consciousnesses of Organic Variable Data Reformatters ;-) As are plants, trees, leaves, reformatting photons and CO2 for growth and structure, etc. DATCG
#407 UB, First, I may have to call the Evergreen SJ Warriors on you for limiting me to a Binary choice Upright Bi-Ped! In fact, did you know many Uprights today consider themselves to be Poly-Peds! Oh my gosh, we need a meeting to review this and a sit in of UD! All Poly-Peds heed the call! I'm a guy. ;-) and I'm stumbling through code as usual ;-) OK, hope this does not bore you guys, I have a background in debugging production problems for large-scale, enterprise solutions years ago. Included print stream translations, hexadecimal to binary, etc., EBCDIC, ASCII, IBM, Xerox MetaCode, HP, and a myriad other solutions in imaging technology. After multiple input files were reformatted, merged, and post-processed, they were distributed to a multitude of different print streams, bar-coding, more post-processing and shipment, or imaging archival and viewing solutions for our clients. In the debugging process, often we had to go through reams of client code and reproduce it. Including bit and byte analysis of imaging and print streams to determine faults locally or upstream. As a result of client growth and expansion of many different streaming conventions, I developed a series of steps and solutions to quicken the debugging process. Much like the researchers, we might do "knock-outs" or obviously tracking and dumps, etc. We had 24hr windows to turnaround production for our clients. I formalized debugging solutions in a series of visual diagrams and checkpoints and gave out to our clients in technical presentations. Found out a decade later they were still using it. I guess that background helps. But not much different than most programmers would experience in debugging solutions. But mainly I'm fascinated how these cellular processes all work. And believe the Design heuristic holds the most promise going forward. And I'm a bit driven to find out how so much of it works together in such highly coordinated fashion. 
Every research paper we've seen posted here shows typical debugging steps to find problems in a multitude of Codes and branching steps or interactions. And researchers are getting better at debugging the different codes, so to speak. Not to trivialize Life too much, but we are a collection of functional groupings of input/output steps, right? As an example, we can think of our skin as the finalized output of input and a cellular process to reformat the input. Therefore to know why carcinoma may exist in various forms of skin cells, we must know the cellular steps of: Input Processing -> Variable Data Reformatting Process of Organic Life -> Output Processing Input: Digestive, Environmental, Solar, etc. VDRPOL: Digestive Tract, etc. Output: cellular function of skin cell production, reproduction and repair mechanisms I'm obviously leaving out a lot of steps and communication. This is way over-simplified, but if in fact we are designed, it's how I look at it from a design perspective. Start with Simplification of Top Down Structured thinking, then go to each branch, sub-branch, sub, sub, sub and loop backs. Throw in Modular concepts and OOP, networking communications, translations and/or transcribing and Post-Translation Modifications, etc., etc. But, the DNA code can be read backwards and forwards - come on! :) LOL! I mean, the compression algorithms blow away anything today by modern methods. Amazing stuff! Now, expand by how many Input/Output Cellular Processes there are? I mean literally, you can find Ubiquitin's role in the Gut ;-) I was going to post a research paper on Gut and Ubiquitin processing earlier, but ran out of time. I am enjoying this! Studying molecular biology and cellular processes. Deciphering Codes and OPs like this by Gpuccio! But I'm stumbling through it. Thankfully Gpuccio is patient. Thanks Gpuccio :) It's been years since chemistry and genetics courses in undergrad. There's so much I'm having to relearn.
I switched from mechanical engineering to CS and left any trace of biochemistry and genetics behind. But I've always loved this area of scientific research. Going through massive amount of new terminology on these different Ubiquitin interactions is reminding me just how much I do enjoy it! :) Have a great weekend guys. DATCG
#410 and #409 Gpuccio, Yes! It's fascinating "stuff" our neural capacity, plasticity and change factors. Including all forms of stress, embryonic development and formation. Long-term memory, epigenetic regulatory systems and well, our little ubiquitin friends. "I think that this field is in great expansion, and maybe we can see something more specific in a short time." I agree! Very interested in this area. Hope to devote more time to study these areas in future. I'm encouraged by the advancements in research being made at a rapid pace. DATCG
#411 Gpuccio, re: CK1alpha and Slimb Thanks! It seems UB E3 Slimb being a little "slower" makes sense due to species specific needs? The pesky human brain for example? On Condensins, Chromosomes, and beautiful DNA packaging and compression... "Let's put it among our future plans!" Ahaha! :) Your list is growing Gpuccio! DATCG
DATCG at #399: "Q: if CK1alpha is highly conserved, then is UBLigase-E3 Slimb highly conserved with it?" Yes, they are both highly conserved in metazoa. CK1alpha (337 AAs): the human protein shows 71% identities and 84% positives in Fungi (483 bits, 1.433 baa), and reaches a practically complete homology (99% identities, 100% positives) in Cartilaginous Fish. Amazing! Slimb (542 AAs) is just a little "slower": 40% identities and 62% positives in Fungi, 91% and 93% in Cartilaginous Fish. Fascinating data about the condensin complexes. This issue of chromosome and chromatin structure certainly deserves some in-depth analysis. Let's put it among our future plans! :) gpuccio
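For readers new to gpuccio's metric: "baa" here is simply the BLAST bit score divided by the protein length, i.e. bits per aligned amino acid. A one-liner reproduces the figure quoted above:

```python
# "baa" = bits per aligned amino acid, as used in the comment above:
# BLAST bit score divided by the length of the protein.
def baa(bit_score, length_aa):
    return bit_score / length_aa

# CK1alpha: human-vs-Fungi hit of 483 bits over a 337 AA protein.
print(round(baa(483, 337), 3))  # 1.433
```

Normalizing by length lets conservation be compared across proteins of very different sizes, which is the whole point of the metric.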
DATCG at #393: Nice stuff about ubiquitin and addiction. By the way, the linked Table 1 is really amazing! :) gpuccio
DATCG at #393: Fascinating facts about long term memory. I think that this field is in great expansion, and maybe we can see something more specific in a short time. The working of the brain and nervous system is probably to be explained, as far as that is possible, at two different levels: a) The network of connections between neurons (and other cell types). This is amazing, if we think that we have about 10^11 neurons, and maybe 10^15 neuron connections. Those are big numbers, indeed. Any expert in hardware and software engineering knows all too well how important it is to have the right connections. And neuronal connections are dynamic; they can change and be rewired. b) But even more amazing is the biochemical plasticity of all that happens in neurons, and especially in synapses. And our friend ubiquitin, as you have shown, is critically linked to all this. I think that the only luck for neo-darwinists, as far as the central nervous system is concerned, is that we really understand too little of how it works. At least for the moment. gpuccio
Upright BiPed: All of you, DATCG, Dionisio and you, have given great contributions! :) I was just focusing on DATCG because his comments have become really prominent in the last phase of the discussion... gpuccio
GP at 404 Agreed -- and Dio's contributions as well. I don't know about this DATCG character. We'll have to keep our eyes on him/her. He/she appears to be exceptionally bright. Clearly not his/her first rodeo in this area. :) :) :) Upright BiPed
#404 Gpuccio, and Dionisio contributions add even more! I've not come close to reviewing all of his papers! Simply not enough time. It's a cornucopia of Ubiquitin fruit ;-) DATCG
#401-403 Gpuccio, Thanks, that illuminates the field of Epigenetics. I'd not considered Transcription's dependence on Epigenetic factors. Then there's so many other networks dependent upon Epigenetics. The 75% threshold by Dan Graur was precarious from the start. But I think he built that artificial wall based upon what he must have for neo-Darwinian faith to continue. By creating this in his anger and stubborn attitude he's erected what appears to be a Humpty Dumpty Wall made of cards. and... “If Evolution is wrong, ENCODE is right” ah :) Now you have juxtaposed a good turnabout is fair play. Dan and his mirror... as ENCODE proceeds and non-coded regions are explored with new functions found every day around the world. Mirror, mirror on the wall, who is right after all? Is it Darwin, is it Dan? Do unguided lots make up plans? Mirror, mirror on the wall, is neo-Darwinism due to fall? Humpty Dumpty Darwin's game, does blind search fall in shame? DATCG
Upright BiPed at # 397: "I’m trying to catch up, but it seems almost impossible." I can understand you. Sometimes it seems almost impossible to me to catch up with myself! :) However, it seems that adding DATCG to myself works combinatorially in fully unexpected ways. The results are really scary! :) gpuccio
DATCG at #392: “If ENCODE is right, Evolution is wrong” What a pity! If it had been the other way round: "If Evolution is wrong, ENCODE is right" we could be certain that ENCODE is right! :) However, I agree with you that maybe some non-coding DNA could be non-functional, but it's certainly not 75%! This 4D Nucleome Project is extremely interesting. I am sure that the 3D dynamic structure of chromatin and its constant modifications in time are one of the most important keys to understanding something which goes beyond a mere accumulation of details. Hi-C (and its variants) is a really promising technique. The real primary aim is to understand how TFs work, their combinatorial nature, their ability to form chromatin loops and to connect distant parts of the genome in functional complexes. At that level, we are really just beginning to understand things. gpuccio
DATCG at #391: "Does it logically follow that Ubiquitin is fully dependent upon Epigenetic layers of meta-code to function in all these different areas covered so far in this one OP?" Well, probably almost everything is under the control of epigenetic layers, because all transcription is fully dependent upon them. But ubiquitin has a definite role on epigenetic layers, as shown for example by its many roles with histones. It's no accident that we find ever more often the term "cross-talk" in biological papers. One thing is astonishingly clear: the cell has many, many independent layers of regulation, and all of them are constantly influencing one another and exchanging information. I think this is unprecedented, even in human programming and engineering. gpuccio
Especially if that network works by coded symbols, like the different types of signals implemented by ubiquitin chains. Especially if the network is made by hundreds and hundreds of specific sub-networks. Especially if the network controls not one, but tons of different complex functions, practically every function we can imagine.
Terrifying -- if it’s your job to make sure there are never enough dissenters, that they might change the paradigm. :) Going to need some extra dogma. I suspect some shame, threats, and group enforcement will come in handy as well. Upright BiPed
Packing a Genome, Step-by-Step - Condensin II Just too cool. While the video or "steps" do not say it: somewhere, where there's regulation, there's Ubiquitination. Remember Condensin II and E3-ligase Slimb regulation in coordination with phosphorylation. - Ooops @399, sorry - missed a closing Bold Font highlight. DATCG
Walking thru time on the 4D Nucleome Project, I ventured out a bit to see if I could find Ubiquitin involvement in different areas. Here's an interesting related area to review... Drosophila Casein Kinase I Alpha Regulates Homolog Pairing and Genome Organization by Modulating Condensin II Subunit Cap-H2 Levels PLoS Genet. 2015 Feb;11(2): e1005014.Published online 2015 Feb 27. doi: 10.1371/journal.pgen.1005014 Huy Q. Nguyen, Jonathan Nye, Daniel W. Buster, Joseph E. Klebba, Gregory C. Rogers,and Giovanni Bosco, R. Scott Hawley, Editor Abstract
The spatial organization of chromosomes within interphase nuclei is important for gene expression and epigenetic inheritance. Although the extent of physical interaction between chromosomes and their degree of compaction varies during development and between different cell types, it is unclear how regulation of chromosome interactions and compaction relate to spatial organization of genomes. Drosophila is an excellent model system for studying chromosomal interactions including homolog pairing. Recent work has shown that condensin II governs both interphase chromosome compaction and homolog pairing and condensin II activity is controlled by the turnover of its regulatory subunit Cap-H2. Specifically, Cap-H2 is a target of the SCFSlimb E3 ubiquitin-ligase which down-regulates Cap-H2 in order to maintain homologous chromosome pairing, chromosome length and proper nuclear organization. Here, we identify Casein Kinase I alpha (CK1-alpha) as an additional negative regulator of Cap-H2. CK1alpha depletion stabilizes Cap-H2 protein and results in an accumulation of Cap-H2 on chromosomes. Similar to Slimb mutation, CK1alpha depletion in cultured cells, larval salivary gland, and nurse cells results in several condensin II-dependent phenotypes including dispersal of centromeres, interphase chromosome compaction, and chromosome unpairing. Moreover, CK1alpha loss-of-function mutations dominantly suppress condensin II mutant phenotypes in vivo. Thus, CK1alpha facilitates Cap-H2 destruction and modulates nuclear organization by attenuating chromatin localized Cap-H2 protein.
Introduction
Interphase genome organization in eukaryotic cells is non-random [1,2,3]. Indeed, organization of the genome is crucial because it influences nuclear shape and processes such as DNA repair and replication, as well as gene expression [4, 5, 6]. While chromosomes are highly organized within the nucleus, they must also remain extremely dynamic. Chromosome dynamics facilitate events that occur not only during cell division, but also during interphase, when cells respond to developmental and environmental cues that require changes in gene expression. Interphase events include trans-interactions such as homolog pairing, chromosome remodeling and compaction, and DNA looping. Although numerous studies using Fluorescent In-Situ Hybridization (FISH), live cell imaging, and chromosome conformation capture techniques have revealed the three-dimensional (3D) organization of genomes, much remains to be discovered regarding the factors that govern the overall conformation of interphase chromosomes. An equally important task is to identify the molecular mechanisms that regulate and maintain specific 3D genome organizational states. Condensin complexes are highly conserved from bacteria to humans [7,8,9] and have been identified as key drivers of genome organization [10]. Eukaryotes have two condensin complexes, condensin I and II, which share the core SMC2 and SMC4 (Structural Maintenance of Chromosomes) subunits but differ in their non-SMC Chromosome Associated Protein (CAP) subunits. Condensins have long been known to play vital roles in shaping mitotic chromosomes. While condensin I promotes lateral chromosome compaction, condensin II promotes axial compaction; both of which are necessary for faithful mitotic condensation and chromosome segregation [11]. 
Condensins also display different localization patterns: condensin I only associates with mitotic chromosomes, whereas condensin II is present in the nucleus, where it is bound to chromatin throughout the cell cycle [12,13,14,15]
Fascinating. So where is Ubiquitin? Here: the Slimb E3 ligase... Ubiquitin E3 coordination with Phosphorylation and Degradation
Moreover, Cap-H2 protein levels are controlled by the SCFSlimb ubiquitin-ligase, maintaining low levels of Cap-H2 in vivo and in cultured Drosophila cells [20]. Interestingly, Slimb (an E3) recognizes its target proteins through a phosphodegron motif [29], suggesting that one or more kinases must phosphorylate Cap-H2 before Slimb can target it for destruction. A Slimb-binding site consensus sequence (DSGXXS) exists in the extreme C-terminus of Cap-H2, and deletion of this region renders Cap-H2 non-degradable [20]. As expected for a Slimb substrate, Cap-H2 protein mobility on SDS-PAGE was sensitive to phosphatase treatment, suggesting that Cap-H2 is phosphorylated [20]. Given that Cap-H2 protein levels may be regulated by its phosphorylation state, we set out to identify kinases that target Cap-H2 for Slimb recognition and that lead to its degradation. We show that in Drosophila cultured S2 cells, Casein Kinase I alpha (CK1α) depletion results in the hypercondensation of interphase chromatin in a condensin II-dependent manner. We also found that CK1α and condensin II genetically interact in vivo, and that CK1α depletion leads to Cap-H2 protein enrichment on polytene and cultured cell chromosomes. Similar to Slimb (E3) depletion [20], CK1α depletion also results in stabilization of Cap-H2 protein in cultured cells. Our findings further elucidate the mechanism by which Cap-H2, and thus condensin II, is regulated and contribute significantly to our understanding of how interphase genome organization, homolog pairing, and chromosome compaction are modulated.
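The DSGXXS Slimb-binding consensus quoted above lends itself to a simple pattern scan. Here is a minimal sketch in Python; the sequence scanned is a made-up toy, not the real Cap-H2 sequence, and a match is only a candidate site, since SCF(Slimb) actually recognizes the phosphorylated degron:

```python
import re

# Slimb-binding (phosphodegron) consensus from the paper: D-S-G-X-X-S,
# where X is any residue. Phosphorylation of the serines is what the
# SCF(Slimb) E3 ligase recognizes in vivo; a sequence scan can only
# flag candidate sites.
SLIMB_DEGRON = re.compile(r"DSG..S")

def find_degron_candidates(seq: str):
    """Return (start, motif) pairs for every DSGXXS match in seq."""
    return [(m.start(), m.group()) for m in SLIMB_DEGRON.finditer(seq)]

# Toy sequence (hypothetical, for illustration only -- not real Cap-H2):
toy = "MKTAYIAKQRDSGATSLLVLRLRGG"
print(find_degron_candidates(toy))  # [(10, 'DSGATS')]
```

This is the same kind of check one would run against the Cap-H2 C-terminus to locate the degron the paper describes as deletable.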
Results: Casein Kinase I alpha is required for interphase chromatin reorganization
Previously, we discovered that the Cap-H2 subunit of condensin II is a SCFSlimb ubiquitination-target in Drosophila cells [20]. In a whole genome RNAi screen, Slimb was also identified as a homolog pairing-promoting factor, and it was shown to affect pairing in a Cap-H2 dependent manner [18]. In cultured S2 and Kc cells, depletion of SCFSlimb components Slimb, Cul-1 and SkpA prevents Cap-H2 degradation and leads to condensin II hyperactivation during interphase and the remodeling of each chromosome into a compact globular structure (Fig. 1A-C). Based on their overall appearance, we refer to these hypercondensed chromosomes as “chromatin-gumballs” (Fig. 1A). Overexpression of a GFP tagged wild type Cap-H2 also induces this phenotype [20]. Since phosphorylation of the Slimb-binding domain within its substrates is required for Slimb binding [29], we reasoned that depletion of a kinase involved in this pathway would also stabilize Cap-H2 and phenocopy the effect on chromatin remodeling observed after Slimb depletion.
CK1alpha (CK1α): a Highly Conserved Kinase
CK1α is a highly conserved serine/threonine kinase involved in Wnt signaling pathways, DNA repair, cell cycle progression, and mRNA metabolism [35,47,48]. Identification of CK1α furthers our understanding of the mechanisms by which condensin II is regulated. The chromodomain protein Mrg15 is involved in the loading of Cap-H2, while the E3 Ubiquitin ligase SCFSlimb ubiquitylates Cap-H2, removing it from chromatin and targeting it for proteasomal degradation [20,21]. Phosphorylation is known to be a prerequisite for Slimb recognition of its target proteins.
Q: if CK1alpha is highly conserved, then is the E3 ubiquitin ligase Slimb highly conserved along with it?
It is tempting to speculate that cytokine signaling could trigger the activation of a condensin II antagonist, leading to the decrease in condensin II activity. This would lead to decondensation of chromatin allowing STAT5 access to DNA. Our findings in the Drosophila model suggest that similar interphase condensin II functions may be at play, and CK1α along with Slimb are critical regulators of this condensin II activity. However, at present it is not known if mammalian condensin II activity is regulated by Slimb or CK1α, and it should be noted that mouse and human Cap-H2 do not have clear Slimb binding consensus sequences. It will be of great value to identify additional kinases that may collaborate with CK1α and Slimb to negatively regulate Drosophila condensin II activity, and to further elucidate the biological significance of this interphase condensin II function in Drosophila and other species.
OK, so Condensin II; next up, a cool video: Packaging Pathways. DATCG
Gpuccio @395: "So, how is it that no one from the other field has written one single word here to try to explain how random variation and natural selection can do this?" Good question: where are the neo-Darwinists? I suspect they stay away so as not to make your excellent OPs legitimate - as in recognized not only by themselves, but in the eyes of their own followers. Think of it. If they actually engage you - they can lose. And their followers might see your logic as correct. They cannot bear that possible outcome. And I suspect Hunt got a hint to back away. I could be wrong, but am surprised he'd back off for any other reason. Surely he can mount a defense of group II introns and spliceosome evolution, right? By not engaging, they hope Intelligent Design goes away. It's not going away; it's only growing. And more bright minds are learning every day a new way of seeing life as a result of Design. Discovery Institute Summer Seminars on Intelligent Design July 6-14 . DATCG
Simply outstanding work by the two of you. I'm trying to catch up, but it seems almost impossible. Great job. Upright BiPed
DATCG at #390: "How many future OPs are you entertaining now?" Indeed, I am thinking about 3 or 4 different possibilities. In the end, I will probably follow some sudden "inspiration"! :) The "prokaryote to eukaryote transition" is a fascinating issue. What a pity that we have no precise idea of when it happened, still less a reasonable early tree of eukaryotes! I think that both my OP on the spliceosome and this one about ubiquitin are good examples of highly specific eukaryotic machinery. But of course, there are many others! :) "This is where I disagreed with Arthur Hunt and thought BLASTing information was critical in reviewing informational jumps. Why he was critical of it remains a mystery. Perhaps because it hits close to home." I disagree with Arthur Hunt too, as much as it is possible to disagree with someone who has not really expressed his thoughts. :) BLAST is a wonderful tool for us IDists. Neo-Darwinists use it mostly to find vague distant homologies. But we can and do use it to detect functional information, which is much more interesting! OK, it's late now here in Italy. I will come back tomorrow. gpuccio
DATCG at #389: "Random mutations are the enemy, not the blind, unguided builder of such highly integrated, tightly regulated, Functionally Organized & Highly Coordinated Complex Systems." Of course. The idea that a complex regulation network may arise from random mutations and natural selection is ridiculous! Especially if that network works by coded symbols, like the different types of signals implemented by ubiquitin chains. Especially if the network is made by hundreds and hundreds of specific sub-networks. Especially if the network controls not one, but tons of different complex functions, practically every function we can imagine. So, how is it that no one from the other field has written one single word here to try to explain how random variation and natural selection can do this? :) "The material covered is an avalanche of specified information, overwhelming any highly trained team of scientists and lab techs trying to keep up with it." You bet! Just the 600+ E3 ligases are an example of thousands and thousands, maybe hundreds of thousands, of functional bits whose function is to recognize all the specific target proteins, thousands of them, and tag them in the correct way in each appropriate condition. Let's remember that about 5% of the whole protein coding genome is involved in the ubiquitin network! "OK, I just stated the obvious. But sometimes the obvious must be stated" Absolutely! :) When no one seems to have the courage to deny the absurd, stating the obvious is probably the only salvation. :) gpuccio
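The "coded symbols" point above can be illustrated with a toy lookup table. The linkage-to-outcome pairings below are the commonly cited ones (K48-linked chains mark proteins for proteasomal degradation, K63-linked chains act as non-degradative signals, and so on); this is a deliberate simplification, since real outcomes also depend on chain length, mixed linkages, and the reader proteins present:

```python
# A toy model of the "ubiquitin code": the lysine used to link the chain
# acts as a symbol, and the cell's reader proteins map each symbol to a
# fate. Simplified, commonly cited pairings -- not an exhaustive table.
UBIQUITIN_CODE = {
    "K48": "proteasomal degradation",
    "K63": "non-degradative signaling (e.g. DNA repair, NF-kB pathway)",
    "K11": "cell-cycle regulation / ERAD",
    "M1": "linear chains, immune signaling",
    "mono": "trafficking, histone regulation",
}

def read_tag(linkage: str) -> str:
    # An unrecognized symbol carries no assigned meaning -- the mapping
    # itself is the conventional, "semiotic" part of the system.
    return UBIQUITIN_CODE.get(linkage, "no known meaning")

print(read_tag("K48"))  # proteasomal degradation
```

The point of the sketch is that the same chemical tag (ubiquitin) means different things depending on an arbitrary-looking convention, which is exactly the property of a code.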
Part of Drug Abuse and Addiction is Conditioning: habit-forming over time. While certainly dealing with different aspects of neural development and different areas related to addiction, I'd now expect to find UB or the UPS, and DUBs, in some role. So I searched on UB and Addiction, and oddly enough found the Review Article below in the same journal... Roles of the ubiquitin proteasome system in the effects of drugs of abuse Front. Mol. Neurosci., 06 January 2015 | https://doi.org/10.3389/fnmol.2014.00099 Nicolas Massaly, Bernard Francès and Lionel Moulédous What I did not expect immediately to see, but makes absolute sense, is the overlap of Memory with addiction and UB.
Because of its ability to regulate the abundance of selected proteins the ubiquitin proteasome system (UPS) plays an important role in neuronal and synaptic plasticity. As a result various stages of learning and memory depend on UPS activity. Drug addiction, another phenomenon that relies on neuroplasticity, shares molecular substrates with memory processes. However, the necessity of proteasome-dependent protein degradation for the development of addiction has been poorly studied. Here we first review evidences from the literature that drugs of abuse regulate the expression and activity of the UPS system in the brain. We then provide a list of proteins which have been shown to be targeted to the proteasome following drug treatment and could thus be involved in neuronal adaptations underlying behaviors associated with drug use and abuse. Finally we describe the few studies that addressed the need for UPS-dependent protein degradation in animal models of addiction-related behaviors.
Interesting: regulation after Drug Exposure (B) in Figure 1. UPS components regulated (see B) after Drug Exposure. Figure 1. The Ubiquitin Proteasome System and its components regulated after drug exposure.
(A) Schematic representation of the Ubiquitin Proteasome System. The external and internal rings constitute the 20S proteasome. The lid and base constitute the 19S regulatory complex. In some cases, it can be replaced by the PA28 or 11S regulatory complex, constituted of a single ring of 7 subunits. (B) Classification of the UPS components found to be regulated after drug exposure.
All drugs of abuse can thus affect the expression and abundance of key UPS proteins.
However, the data reported above are only descriptive. Moreover, UPS components are affected differently depending on the drug type, its method of administration, the duration of the treatment and the cell type or brain region considered (Table 1). Complementary studies have also found that drugs of abuse modify the activity of the UPS in parallel with changes in the expression of its various components. Indeed morphine was demonstrated to inhibit the activity of the 20S proteasome in human neuroblastoma cells, with neuroprotective consequences (Rambhia et al., 2005). On the contrary, PKC-dependent inhibition of the UPS was linked to the autophagy-mediated toxicity of methamphetamine in dopaminergic neurons (Lin et al., 2012). In addition it has been proposed that the higher toxicity of methamphetamine compared to cocaine was due to its long inhibitory effect on proteasome activity (Dietrich et al., 2005). Finally, a recent study demonstrated that chronic ethanol induces toxicity in mice through a Toll-like receptor 4-dependent impairment of the UPS (Pla et al., 2014).
Balance between Protein Synthesis and Degradation again
This deleterious effect of UPS blockade on long term changes in neurons has been suggested to be due to an alteration in the balance between protein synthesis and degradation (Fonseca et al., 2006). Indeed the authors showed that the deleterious effects produced by inhibiting either protein synthesis or degradation on LTP can be reversed by inhibition of the two processes at the same time. In addition to synaptic proteins the UPS is also involved in the regulation of the activity of transcription factors, thus revealing a close relationship between protein synthesis and proteasome action. For example IκB and CREM (cAMP-responsive element modulator), repressors of the transcription factors NF-κB and CREB (cAMP response element binding) respectively, can be ubiquitinated and degraded by the UPS (Woo et al., 2010; Liu and Chen, 2011). In that sense the UPS clearly plays a major role in the regulation of protein turnover implicated in neuronal plasticity acting directly through the degradation of some proteins and indirectly through the modulation of transcriptional activity and protein synthesis.
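The Fonseca et al. "inhibit both at once" result has a simple caricature: with a synthesis rate k_syn and a degradation rate k_deg, the level P of a protein pool follows dP/dt = k_syn − k_deg·P and settles at P = k_syn/k_deg. Halving both rates (inhibiting the two processes at the same time) leaves the set point unchanged; only the speed of relaxation changes. A toy sketch with arbitrary, purely illustrative rate constants:

```python
def simulate_protein_level(k_syn, k_deg, p0=0.0, dt=0.01, steps=20000):
    """Euler integration of dP/dt = k_syn - k_deg * P."""
    p = p0
    for _ in range(steps):
        p += (k_syn - k_deg * p) * dt
    return p

# Baseline: steady state = k_syn / k_deg = 2.0 / 0.5 = 4.0
normal = simulate_protein_level(k_syn=2.0, k_deg=0.5)

# Inhibit synthesis AND degradation together (both rates halved):
# the set point k_syn / k_deg is unchanged, only relaxation is slower.
both_inhibited = simulate_protein_level(k_syn=1.0, k_deg=0.25)

print(round(normal, 2), round(both_inhibited, 2))  # 4.0 4.0
```

This one-variable model obviously ignores everything interesting about real neurons; it only shows why a balance of two opposing processes, rather than either process alone, fixes the protein level.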
Oh, the authors recognize Jarome et al.'s previous work on UPS regulation and memory (LTM), but I'm moving on to another paragraph.
The precise mechanisms underlying the involvement of the proteasome in memory are just beginning to be discovered but it is now clearly established that, in addition to protein synthesis, neuronal protein degradation by the UPS is a mandatory process to create, store and maintain memories and in that sense participates to adaptive behaviors of mammals. Since drug addiction shares common mechanisms with memory processes (Hyman et al., 2006; Milton and Everitt, 2012) it is important to question the role of the UPS in the long term effects of drugs of abuse such as opioids, stimulants, ethanol, nicotine and cannabinoids.
Indeed, very interesting material.
In the case of opioids, it was shown in a cellular model that a prolonged 72 h morphine treatment modifies the abundance of two proteasome subunits (Neasta et al., 2006). In vivo, intra-cerebro-ventricular (icv) infusion of morphine for 72 h results in an increase in the tyrosine-phosphorylated form of one proteasome subunit in the rat frontal cerebral cortex (Kim et al., 2005). A longer intermittent treatment (2 weeks) produces a decrease in the amount of the DUB Ubiquitin C-terminal hydrolase L-1 in the nucleus accumbens (Nacc) (Li et al., 2006). Four days after morphine withdrawal, the quantity of this enzyme, as well as that of another proteasome subunit, increases in rat dorsal root ganglia (Li et al., 2009). Similarly, chronic treatment (90 days) and drug withdrawal have been shown to have opposite effects on the amount of one proteasome subunit in the Nacc of rhesus monkeys (Bu et al., 2012). The levels of Ubiquitin-conjugating enzyme E2 and of Ubiquitin C-terminal hydrolase L-3 are also modulated in this model. Finally, in a morphine-induced conditioned place preference (CPP) paradigm which tests the rewarding properties of the drug, development, extinction and reinstatement are all accompanied by a down-regulation of several DUBs and of α and β proteasome subunits (Lin et al., 2011).
Hmmmm, now, what of legal pharmaceuticals? Not to beat up on Big Pharma, but what of psychotropic medicines intended for good that are suddenly removed from a patient? What happens with build-up and changes in different areas? How do the brain and neural network balance after sudden removal, including the UPS, DUBs and other regulatory systems involved in cognition, perception and memory? What remains unchanged? What inherited epigenetic changes are passed down to offspring? Table 1. UPS-related molecular and cellular consequences of the treatment with drugs of abuse. That's a load of UPS regulatory functions and consequences from drugs of abuse. I wonder if similar studies exist for legal drugs showing similar areas of changes and modifications, for public access. DATCG
#387 Gpuccio, back on topic: Memory and UPS, a research review by Timothy Jarome, published in 2014. Open Access. Ubiquitin's Role in Long-Term Memory Formation with Protein Degradation and Synthesis REVIEW ARTICLE Front. Mol. Neurosci., 26 June 2014 | https://doi.org/10.3389/fnmol.2014.00061 Timothy J. Jarome and Fred J. Helmstetter
Long-term memory (LTM) formation requires transient changes in the activity of intracellular signaling cascades that are thought to regulate new gene transcription and de novo protein synthesis in the brain. Consistent with this, protein synthesis inhibitors impair LTM for a variety of behavioral tasks when infused into the brain around the time of training or following memory retrieval, suggesting that protein synthesis is a critical step in LTM storage in the brain. However, evidence suggests that protein degradation mediated by the ubiquitin-proteasome system (UPS) may also be a critical regulator of LTM formation and stability following retrieval. This requirement for increased protein degradation has been shown in the same brain regions in which protein synthesis is required for LTM storage. (balance and trade-off of keeping memory retainable across time) Additionally, increases in the phosphorylation of proteins involved in translational control parallel increases in protein polyubiquitination and the increased demand for protein degradation is regulated by intracellular signaling molecules thought to regulate protein synthesis during LTM formation. (balance must be maintained again) In some cases inhibiting proteasome activity can rescue memory impairments that result from pharmacological blockade of protein synthesis, suggesting that protein degradation may control the requirement for protein synthesis during the memory storage process. Amazing, balance Results such as these suggest that protein degradation and synthesis are both critical for LTM formation and may interact to properly “consolidate” and store memories in the brain. Here, we review the evidence implicating protein synthesis and degradation in LTM storage and highlight the areas of overlap between these two opposing processes. (Opposing Processes = Balancing Act) We also discuss evidence suggesting these two processes may interact to properly form and store memories. 
LTM storage likely requires a coordinated regulation between protein degradation and synthesis at multiple sites in the mammalian brain.
Amazing stuff, this seesaw of regulation, synthesis and degradation.
Recently, attention has turned to the potential role of protein degradation in learning-dependent synaptic plasticity. Indeed, there is now convincing evidence that UPS-mediated protein degradation is likely involved in various different stages of memory storage. However, while some studies have suggested potential roles for protein degradation in long-term memory (LTM) formation and storage (Kaang and Choi, 2012), one intriguing question is whether protein degradation is linked to the well-known transcriptional and translational alterations thought to be critical for memory storage in the brain (Johansen et al., 2011). Here, we discuss evidence demonstrating a role for protein degradation and synthesis in the long-term storage of memories in the mammalian brain, highlighting instances in which a requirement for protein degradation correlates with a requirement for protein synthesis. Additionally, we discuss evidence suggesting that both protein degradation and synthesis may be regulated by CaMKII signaling during LTM formation. Collectively, we propose that LTM storage requires coordinated changes in protein degradation and synthesis in the brain, which may be primarily controlled through a CaMKII-dependent mechanism.
Had not read down this far but oh so cool... What Comes First, Degradation or Synthesis?
A majority of the studies discussed here reveal a strong correlation between protein degradation and synthesis during LTM formation. This leads to one important question: Which comes first? While the exact relationship between protein degradation and synthesis during memory formation currently remains equivocal, the available evidence suggests that protein degradation likely regulates protein synthesis. For example, fear conditioning leads to an increase in polyubiquitinated proteins being targeted for degradation by the proteasome (Jarome et al., 2011). While a majority of the proteins being targeted by the proteasome for degradation remain unknown, the RNAi-induced Silencing Complex (RISC) factor MOV10 has been identified as a target of the proteasome during increases in activity-dependent protein degradation in vitro (Banerjee et al., 2009) and following behavioral training and retrieval in vivo (Jarome et al., 2011). Increases in the degradation of MOV10 are associated with increased protein synthesis in vitro, suggesting that the proteasome could regulate protein synthesis during LTM formation through the removal of translational repressor proteins such as various RISC factors. However, it is currently unknown if the selective degradation of MOV10, or any RISC factor, is critical for memory formation in neurons. Nonetheless, studies such as these provide indirect evidence that protein degradation by the UPS could regulate protein synthesis during memory formation in the brain.
The balancing act and pre-programmed responses to conditioning may be factors in enhanced protein synthesis processing.
Some of the best evidence that protein degradation may be upstream of protein synthesis during memory storage comes from studies examining memory reconsolidation following retrieval. For example, inhibiting proteasome activity can prevent the memory impairments that normally result from post-retrieval blockade of protein synthesis in the hippocampus, amygdala, and nucleus accumbens (Lee et al., 2008; Jarome et al., 2011; Ren et al., 2013) as well as during LTF in aplysia (Lee et al., 2012), suggesting that protein degradation is upstream of protein synthesis during memory reconsolidation. This remains some of the best evidence directly linking protein degradation to protein synthesis during memory storage, but it is possible that the rescue of memory impairments in the face of protein synthesis inhibition may occur as an indirect consequence of blocking protein degradation rather than a direct effector.
Now, how is the ubiquitin code and protocol utilized (or inhibited, or modified) in addictions, drug abuse, or legal drugs for reducing pain? I'll move this to a new comment. DATCG
Continued from #391 ... off-topic a bit, but looking ahead. On possible links between Ubiquitin and Epigenetic roles - does this lead to the decimation of Dan Graur's argument for a large amount of "Junk" DNA still existing? According to his last paper? In his last retort he states at least 75% of our DNA must be JUNK. I'm not entertaining that the entire Genome is functional with Epigenomic regions. But I do think this is a key area where the bell tolls for Neo-Darwinian evolutionists like Graur, who said, "If ENCODE is right, Evolution is wrong." If you look carefully at what he's stated, much of it is still based upon neo-Darwinian assumptions that were in the past based largely upon ignorance, and today seem to be a stubborn adherence to antiquated beliefs. None of this includes recent projects in the last several years unfolding since ENCODE. 4D Nucleome Network Project Overview at Nature published September 2017 4D Nucleome Project Consortium North America 4D Nucleome European Initiative 1) Ubiquitin System-Wide Role 2) Epigenetic Role of the UPS (Ubiquitin Proteasome System) 3) Epigenetic research turns up new roles and functions every day 4) The 4D Nucleome Project will not help neo-Darwinists and can only hurt stubborn Darwinists like Graur From the EU initiative for 4D Nucleome research...
Recent technological advances in high resolution and live microscopy, high-throughput genomics/cell biology approaches and modelling, coupled with increased awareness of the importance of genome organization, will soon allow us to perform precision analysis of our genomic organization and its dynamic translations from one epigenome to another, as cells differentiate, age, and respond to the environment. This is a perfect time to launch a concerted effort towards characterizing the dynamic organization of the genome, the epigenome, and the rules that govern determination and maintenance of cell types in the face of both internal and external stress linked to disease. We can now envisage having a complete 3D atlas in time (4D) of nuclei within the many cell types that form our body. The huge challenge before us is to take the one-dimensional genome sequence provided by the Human Genome Project, decorated with the valuable annotations provided by the ENCODE project, and create an integrated 4D understanding of the complexity of this incredible, living, breathing machine that holds the secret of life.
Semiosis, Rules, Meta-Layers of Code upon Code, Dynamic Post-Translation Modifications, Organization and Functional Networked Systems of Tightly Controlled, Interdependent Systems, Coordination and Coherence. To have a Rule, there must be a Rule-maker. Recognition that Rules exist in symbolic representations is by definition a teleological argument for Design. Otherwise, stop calling them Rules. Yet, they cannot stop doing so because there's no other way to logically and coherently describe the process. DATCG
#387 Gpuccio, Another great find and aspect of the UPS regulatory network. I'd looked briefly at other papers on Parkinson's, Alzheimer's and other diseases and the roles of the UPS and aggregate protein accumulations. That paper's behind a paywall, but I found some previous papers by the author at ResearchGate. That paper is so new he's not listed yet on his Bio Page! ;-) Thought it interesting to look at his area of research as well. Dr. Timothy Jarome Bio - Research Area
Research in the Jarome lab is focused on elucidating the cellular and molecular mechanisms of memory formation and storage, with an emphasis on understanding how stressful or traumatic events alter brain chemistry and drive future behavioral and physiological responses. These future responses are often maladaptive, resulting in a variety of health concerns, and can be passed to future generations through “epigenetic” mechanisms. The lab focuses on mechanisms of initial memory storage and those involved in memory modification following retrieval (recall). Currently, the lab has several areas of interest: - An epigenetic role for the ubiquitin-proteasome system in fear memory formation - Epigenetic mechanisms of fear memory modification following retrieval To address these topics, we combine a traditional rodent behavioral paradigm (Pavlovian fear conditioning) with a variety of traditional and modern molecular biology and neuroscience techniques. This includes using in vivo pharmacology, siRNA-mediated gene knockdown, and CRISPR-dCas9 transcriptional editing to manipulate specific genes and/or cellular processes during learning or memory retrieval, and analyzing the effects of these manipulations on the cellular memory storage process using western blotting, qRT-PCR, chromatin immunoprecipitation, methylated DNA immunoprecipitation, bisulfite sequencing and other molecular biology methods. Students who join the lab will have the opportunity to learn these techniques and, as they advance, will have the opportunity to take projects in new directions or initiate new topics.
So what's happening here? Besides discovery of Ubiquitin and UPS mechanisms in a regulatory role, we see once again Epigenetic roles emerging at the forefront of knowledge on disease control, including in this case Memory Storage and Retrieval, associated with the FEAR complex, etc. I wish we had a way to measure all the latest Epigenetic Research and Discovery of Function in zones of DNA formerly labeled "JUNK" by the Darwinists. Is it fair to borrow the term Ubiquitous for Epigenetic Roles? As in: Epigenetic Roles are Ubiquitous throughout Eukaryotes? Or is that expanding the nature of the Epigenome too fast, before the evidence? This is a bit off-topic, but when we see a research scientist involved in this one area, of Ubiquitin and Epigenetic roles, then ... 1) We know Ubiquitin is Network-Wide across all areas of Function and space in the Human genome 2) We know of this area - Memory - and many others where Epigenetic regulatory roles work with Ubiquitin systems. Or as this scientist has stated, "an Epigenetic role for the UPS"!!! :) Does it logically follow that Ubiquitin is fully dependent upon Epigenetic layers of meta-code to function in all these different areas covered so far in this one OP? Is there an area where Ubiquitin functions without Epigenetic layers or Epigenetic regulatory systems involved? Just a thought. I'm not sure we can extrapolate this to mean that ENCODE's 80% functionality claim is strengthened by this, but it sure seems like a good, informed guess? DATCG
#386 Gpuccio, oooohhhhh... a future OP, OK, that would be really cool to dissect the Replisome :) How many future OPs are you entertaining now? I know Dionisio listed several, plus you discussed Missing Procedures.
Certainly, it's really surprising that such a basic function as DNA replication should be so different in eukaryotes as compared to prokaryotes. This is further confirmation of the all-round re-engineering that took place at the eukaryote transition!
"... really surprising" Surprising for Darwinists or Design Theorists, or both? I was thinking it might be expected, considering the enormous amount of Epigenetic information and regulatory systems. Maybe. Or would it be beneficial to have a future OP on Jumps in Functional Complexity across systems, from the prokaryote to eukaryote transition? A Summary OP of Information Jumps, if you will allow such a description, based on your past OP research and any others you'd like to include. This is where I disagreed with Arthur Hunt and thought BLASTing information was critical in reviewing informational jumps. Why he was critical of it remains a mystery. Perhaps because it hits close to home. I know you've covered this in other OPs and systems reviews. Really enjoyed learning that aspect of your OPs. Including learning to use BLAST for these types of searches. DATCG
#385 Upright BiPed, "So much to read here..." See you next year ;-) At least for me it will take that long, not including a specialized degree and a lifetime of research. Amazing material covered in this OP by Gpuccio. It's a large task for thousands of scientists :) I knew this OP would be fun and expansive, but had no idea the naming of Ubiquitin was so on target ;-) Or the amount of networked regulatory systems content we would be reviewing. To go from one area of specialization to another invites an expanding vocabulary of specific terminology for critical systems interactions of Ubiquitin targeting, tagging, recycling, and/or degradation by the UPS. Highly specialized researchers across so many areas of discipline are discovering fascinating areas of tightly controlled regulatory networks bound by or regulated by the UPS, DUBs, etc. If any of these "tightly controlled" regulatory systems experience "random mutations" or stress conditions in their Ubiquitin interactions, it leads to numerous diseases across a spectrum of human organs and networks, with immune system responses that depend upon Ubiquitin signal and recognition systems for conditions-based processing. Whew....... Random mutations are the enemy, not the blind, unguided builder of such highly integrated, tightly regulated, Functionally Organized & Highly Coordinated Complex Systems. The material covered is an avalanche of specified information, overwhelming for any highly trained team of scientists and lab techs to keep up with. Even specialists in their field must be experiencing an overload trying to keep up with the latest research and discovery, including epigenetic programs. Physicists are involved at the level of Quantum Mechanics as well, in inter-disciplinary talks on the Ubiquitin System. It's an intellectual smorgasbord of regulatory systems identification and reverse-engineering. ;-) OK, I just stated the obvious. But sometimes the obvious must be stated ;-) But I'm sure, given enough time, Darwin Did It! DATCG
DATCG: What about human Embryonic Stem Cells? A very hot topic, I would say. Insights into the ubiquitin-proteasome system of human embryonic stem cells https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5840266/
Abstract: Human embryonic stem cells (hESCs) exhibit high levels of proteasome activity, an intrinsic characteristic required for their self-renewal, pluripotency and differentiation. However, the mechanisms by which enhanced proteasome activity maintains hESC identity are only partially understood. Besides its essential role for the ability of hESCs to suppress misfolded protein aggregation, we hypothesize that enhanced proteasome activity could also be important to degrade endogenous regulatory factors. Since E3 ubiquitin ligases are responsible for substrate selection, we first define which E3 enzymes are increased in hESCs compared with their differentiated counterparts. Among them, we find HECT-domain E3 ligases such as HERC2 and UBE3A as well as several RING-domain E3s, including UBR7 and RNF181. Systematic characterization of their interactome suggests a link with hESC identity. Moreover, loss of distinct up-regulated E3s triggers significant changes at the transcriptome and proteome level of hESCs. However, these alterations do not dysregulate pluripotency markers and differentiation ability. On the contrary, global proteasome inhibition impairs diverse processes required for hESC identity, including protein synthesis, rRNA maturation, telomere maintenance and glycolytic metabolism. Thus, our data indicate that high proteasome activity is coupled with other determinant biological processes of hESC identity.
gpuccio
DATCG: Well, this is new, too. Memory formation. The Ubiquitin-Proteasome System and Memory: Moving Beyond Protein Degradation. http://journals.sagepub.com/doi/pdf/10.1177/1073858418762317
Abstract: Cellular models of memory formation have focused on the need for protein synthesis. Recently, evidence has emerged that protein degradation mediated by the ubiquitin-proteasome system (UPS) is also important for this process. This has led to revised cellular models of memory formation that focus on a balance between protein degradation and synthesis. However, protein degradation is only one function of the UPS. Studies using single-celled organisms have shown that non-proteolytic ubiquitin-proteasome signaling is involved in histone modifications and DNA methylation, suggesting that ubiquitin and the proteasome can regulate chromatin remodeling independent of protein degradation. Despite this evidence, the idea that the UPS is more than a protein degradation pathway has not been examined in the context of memory formation. In this article, we summarize recent findings implicating protein degradation in memory formation and discuss various ways in which both ubiquitin signaling and the proteasome could act independently to regulate epigenetic-mediated transcriptional processes necessary for learning-dependent synaptic plasticity. We conclude by proposing comprehensive models of how non-proteolytic functions of the UPS could work in concert to control epigenetic regulation of the cellular memory consolidation process, which will serve as a framework for future studies examining the role of the UPS in memory formation.
gpuccio
DATCG: The Replisome is another huge subject, and it would probably deserve an OP of its own. We'll see. :) Certainly, it's really surprising that such a basic function as DNA replication should be so different in eukaryotes as compared to prokaryotes. This is further confirmation of the all-round re-engineering that took place at the eukaryote transition! Just as an example, the Mcm 2-7 heterohexamer ring, which is an integral part of the CMG complex that serves as helicase to start DNA replication, is made of 6 different proteins, Mcm 2-7, about 700 - 900 AAs long, all of them highly conserved in eukaryotes, which at sequence level share only modest homology between them (about 300 bits). Although one homolog is described in Archaea, it is almost completely different at sequence level. And this is just part of the starting complex! :) gpuccio
So much to read here, and to catch up on. Excellent OP. Upright BiPed
DATCG: "Gee Gpuccio, I’m guessing you had some idea just how far reaching the “Ubiquitin” System was, but it must still be amazing how much is unfolding today, before us in research across multi-discipline areas of disease, functions and applications." You are perfectly right. While working at this OP and at the following discussion with you and the other friends, I have been constantly surprised and overwhelmed at the ever new complexity, scope and "omnipresence" in the cell of the molecular system I had chosen to study in some detail! I suppose that happens in some measure with all molecular systems in the cell, but this time the "measure" is really huge. :) gpuccio
More on Replisome and Ubiquitin regulation in Nature: Cell Death and Differentiation News and Commentary (Open Access) Two Paths to Let the Replisome Go Vincenzo D'Angiolella & Daniele Guardavaccaro Cell Death and Differentiation (2017) 24, 1140–1141; doi:10.1038/cdd.2017.75; published online 19 May 2017
Accurate DNA replication is essential for genome maintenance. Two recent reports have uncovered new molecular mechanisms controlling the termination phase of DNA replication in higher eukaryotes and established crucial roles for the CRL2LRR1 ubiquitin ligase and the p97 segregase in replisome unloading from chromatin. Eukaryotic DNA replication can be divided into three distinct steps. During licensing, pre-replication complexes (pre-RCs) assemble at DNA replication origins in the G1 phase of the cell cycle. This is followed by replication initiation at the G1-S phase transition, when CDKs (cyclin-dependent kinases) and DDKs (DBF4-dependent kinases) promote the recruitment of the GINS complex and CDC45 to assemble an active CMG (Cdc45-MCM-GINS) helicase that initiates bidirectional DNA synthesis.1 When DNA synthesis is completed, the CMG helicase is disassembled and unloaded from chromatin during replication termination.
"A multitude of studies have demonstrated that the early phases of DNA replication are regulated by ubiquitylation."
For instance, the E3 ubiquitin ligase complexes cullin-RING ligase-1 (CRL1) and cullin-RING ligase-4 (CRL4) prevent re-replication and the occurrence of genome instability by targeting pre-RC components for proteasomal degradation (reviewed in Truong et al.2). Cullin-RING ligases (CRLs) constitute a protein family of 200 modular E3s and are composed of eight distinct subfamilies containing different cullins, namely CUL1, CUL2, CUL3, CUL4A, CUL4B, CUL5, CUL7 and CUL9.3 Cullins work as molecular scaffolds assembling the different complex subunits, that is, a RING-finger protein (RBX1 or RBX2), which interacts with the ubiquitin-conjugating enzyme, an adaptor protein and one of many substrate-receptor subunits. The activity of CRLs is primarily controlled at the level of substrate recruitment. The direct recognition of the target protein by the substrate-receptor subunit and its recruitment to the core CRL platform are in fact regulated in response to specific stimuli. Moreover, all CRLs are activated through the covalent attachment of the ubiquitin-like protein Nedd8 to the cullin subunit.
Figure 1 Replisome Unloading - 2 Pathways Including Backup Mechanism
Replisome unloading is controlled by two pathways. (a) During DNA replication termination, the CRL2LRR1 ubiquitin ligase and the p97 segregase trigger replisome unloading from chromatin. (b) An additional backup mechanism that depends on the p97 adaptor UBXN3 drives replisome unloading from mitotic chromatin. See text for details. For the sake of clarity, the different components of the replisome are not shown.
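As an aside, the modular CRL architecture the commentary describes (a cullin scaffold, a RING-finger protein that recruits the E2 enzyme, an adaptor, and an interchangeable substrate receptor, all switched on by Nedd8 attachment) reads like a composable design. A purely illustrative Python sketch of that modularity, where the class and method names are my own hypothetical labels, not any real bioinformatics API:

```python
# Illustrative sketch only: the modular cullin-RING ligase (CRL)
# architecture described above. Class/field/method names are
# hypothetical labels chosen to mirror the prose.
from dataclasses import dataclass

@dataclass
class CRL:
    cullin: str              # molecular scaffold, e.g. "CUL1"
    ring_finger: str         # "RBX1" or "RBX2"; recruits the ubiquitin-conjugating (E2) enzyme
    adaptor: str             # links the cullin to the substrate receptor
    substrate_receptor: str  # selects which target protein is recruited
    neddylated: bool = False # covalent Nedd8 attachment activates the complex

    def can_ubiquitinate(self, substrate: str, recognized: set) -> bool:
        """Active only when neddylated AND the receptor recognizes the substrate."""
        return self.neddylated and substrate in recognized

# Assemble one hypothetical CRL1 (SCF-type) complex from its modules
crl1 = CRL("CUL1", "RBX1", "SKP1", "SKP2", neddylated=True)
print(crl1.can_ubiquitinate("p27", {"p27", "p21"}))
```

The point of the sketch is only the design pattern: the same scaffold/RING core is reused, and specificity comes from swapping the substrate-receptor module, which is how one protein family yields ~200 distinct E3s.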
Gee Gpuccio, I'm guessing you had some idea just how far reaching the "Ubiquitin" System was, but it must still be amazing how much is unfolding today, before us in research across multi-discipline areas of disease, functions and applications. DATCG
As a follow-up to #381, PCNA, and therefore ubiquitin (mono and poly), are involved in this paper's coverage below. But I do not have access to the full paper. It's behind a paywall... The Eukaryotic Replication Machine. Zhang D1, O'Donnell M2. Author information 1 The Rockefeller University, New York, NY, United States. 2 The Rockefeller University, New York, NY, United States; Howard Hughes Medical Institute, The Rockefeller University, New York, NY, United States. Electronic address: odonnel@rockefeller.edu. Abstract
The cellular replicating machine, or "replisome," is composed of numerous different proteins. The core replication proteins in all cell types include a helicase, primase, DNA polymerases, sliding clamp, clamp loader, and single-strand binding (SSB) protein. The core eukaryotic replisome proteins evolved independently from those of bacteria and thus have distinct architectures and mechanisms of action. The core replisome proteins of the eukaryote include:
- 11-subunit CMG helicase
- DNA polymerase alpha-primase
- leading strand DNA polymerase epsilon
- lagging strand DNA polymerase delta
- PCNA clamp
- RFC clamp loader
- and RPA SSB protein
There are numerous other proteins that travel with eukaryotic replication forks, some of which are known to be involved in checkpoint regulation or nucleosome handling, but most have unknown functions and no bacterial analogue. Recent studies have revealed many structural and functional insights into replisome action. Also, the first structure of a replisome from any cell type has been elucidated for a eukaryote, consisting of 20 distinct proteins, with quite unexpected results. This review summarizes the current state of knowledge of the eukaryotic core replisome proteins, their structure, individual functions, and how they are organized at the replication fork as a machine.
DATCG
DNA Replication, the Replisome, PCNA Ubiquitination and Quality Control Systems Limitations, or the ability of neo-Darwinian Magic to "evolve" Highly Organized Complex Functional Network Regulatory Systems to Halt, Decide, and designate different Repair Mechanisms. The Replication Fork: Understanding the Eukaryotic Replication Machinery and the Challenges to Genome Duplication Published 2013 Adam R. Leman and Eishi Noguchi
Abstract Eukaryotic cells must accurately and efficiently duplicate their genomes during each round of the cell cycle. Multiple linear chromosomes, an abundance of regulatory elements, and chromosome packaging are all challenges that the eukaryotic DNA replication machinery must successfully overcome. The replication machinery, the “replisome” complex, is composed of many specialized proteins with functions in supporting replication by DNA polymerases. Efficient replisome progression relies on tight coordination between the various factors of the replisome. Further, replisome progression must occur on less than ideal templates at various genomic loci. Here, we describe the functions of the major replisome components, as well as some of the obstacles to efficient DNA replication that the replisome confronts. Together, this review summarizes current understanding of the vastly complicated task of replicating eukaryotic DNA. Keywords: DNA replication, replisome, replication fork, genome stability, checkpoint, fork barriers, difficult-to-replicate sites, (PCNA Ubiquitination)
( ) emphasis mine The DNA Sliding Clamp: PCNA
DNA sliding clamps have evolved, promoting the processivity of replicative polymerases. In eukaryotes, this sliding clamp is a homotrimer known as Proliferating Cell Nuclear Antigen (PCNA), which forms a ring structure. The PCNA ring has polarity with a surface that interacts with DNA polymerases and tethers them securely to DNA. PCNA-dependent stabilization of DNA polymerases has a significant effect on DNA replication because it enhances polymerase processivity up to 1,000-fold. Various PCNA modifications regulate the replisome through specific circumstances during DNA replication. The modifications of PCNA have dramatic effects on its function. Although there are some species-specific modifications of PCNA throughout eukaryota, the principles remain conserved. Upon DNA damage, PCNA is monoubiquitinated, which changes PCNA's affinity from replicative polymerases to the damage-tolerant translesion synthesis (TLS) polymerases [61,62]. PCNA ubiquitination is dependent on the DNA damage checkpoint pathway and regulates dynamic changes in the replication fork. This process allows for bypass of bulky DNA damage that would otherwise prevent replication fork progression, although this method of damage bypass is error prone [62,63]. In contrast, polyubiquitination of the same site directs the cell towards DNA damage bypass by poorly characterized, but essentially error-free mechanisms [64,65,66]. PCNA can also be SUMOylated (small ubiquitin-like modifier) at the same site in yeast, and SUMOylated PCNA exists in vertebrates [67,68]. This modification is thought to suppress ubiquitination of PCNA, therefore inhibiting TLS and other DNA repair pathways, which are potentially harmful to the cell because they can introduce mutations and genome rearrangements [69,70,71]. These findings indicate that PCNA modifications play critical roles in controlling pathway selection for DNA damage management during DNA replication.
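Restated as logic, the PCNA-modification "switch" described in that passage is a simple pathway-selection table: the modification state of one site selects which damage-management route the fork takes. A toy Python sketch, purely illustrative (the function and key names are hypothetical labels, not any real tool):

```python
# Illustrative lookup only: the PCNA-modification switch described
# above, restated as pathway selection. Names are hypothetical.
def pcna_route(modification: str) -> str:
    """Map a PCNA modification state to the pathway the review says it selects."""
    routes = {
        "unmodified": "processive replication with replicative polymerases",
        "monoubiquitinated": "error-prone translesion synthesis (TLS)",
        "polyubiquitinated": "error-free damage bypass",
        "sumoylated": "TLS and other repair pathways suppressed",
    }
    return routes[modification]

print(pcna_route("monoubiquitinated"))  # error-prone translesion synthesis (TLS)
```

One modification site, several chemically distinct marks, several mutually exclusive downstream behaviors: that is the "pathway selection" role the paper's final sentence summarizes.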
Clamp Loaders evolved
Eukaryotes have evolved multiple clamp loading complexes, each of which appears to function in a separate pathway. The canonical clamp loader essential for DNA replication is RFC and includes Rfc1, Rfc2, Rfc3, Rfc4 and Rfc5. At least three RFC-like complexes exist in eukaryotic cells. RFCCtf18, which contains Ctf18 in place of Rfc1, promotes sister chromatid cohesion and regulates replication speed [78,79,80,81]. RFCElg1, which contains Elg1, is thought to unload SUMOylated PCNA in the presence of DNA damage to allow for replication progression through damaged DNA templates [71]. The RFCRad17/Rad24 clamp does not load PCNA, but loads the 9-1-1 complex at DNA damage sites during the replication checkpoint response [82]. Thus, DNA replication can be regulated at the level of PCNA clamp loading, in order to accommodate multiple processes that take place during DNA replication (Figure 4).
Sliding Clamps, Clamp Loaders, Processivity "evolved" along with all the checkpoint mechanisms, signals and ubiquitination for regulation of DNA replication and repair. Magic Darwin . DATCG
Looking at the Magic of Darwin... From the Intro of Previous posted Paper @377 on Quality Control Mechanisms and determination of Protein Degradation choices. Note: Edited sections for breakout of roles
Intracellular QC is regulated by several mechanisms:
- transcriptional
- translational
- posttranslational
Posttranslational modifications (PTMs) are:
- phosphorylation
- ubiquitination
- nitrosylation
- oxidation
- and more...
Posttranslational mechanisms:
- expand the size of the proteome exponentially
- are pivotal in the regulation of Protein:
  - stability
  - distribution
  - and function
Emerging evidence supports a major role of PTMs in regulating multiple pathways of intracellular Quality Control.
Protein Quality Control (PQC) is: a set of molecular mechanisms ensuring that misfolded and damaged proteins are:
- repaired or removed in a timely fashion
thereby minimizing the toxic effects of misfolded proteins (Figure 1 - see above figure in #377). The QC of proteins targeted for the secretory pathway (ie, proteins passing through the Endoplasmic Reticulum [ER]) is performed by ER-associated Protein Quality Control. This involves retrotranslocation, where:
1) misfolded proteins from the ER lumen are moved to the cytosol
2) they are degraded via ER-associated degradation pathways, such as the Proteasome, for instance
There is a different Protein Quality Control System for non-ER proteins. In both cases:
- PQC is performed by molecular chaperones and target protein degradation.
- Chaperones serve as the sensor of misfolded proteins
- and in some cases attempt to Repair misfolding by unfolding/refolding
IF repair fails: misfolded proteins, termed terminally misfolded proteins, are escorted by chaperones for:
- degradation primarily by the ubiquitin-proteasome system (UPS)
- and perhaps by chaperone-mediated autophagy (CMA).
When (IF) misfolded proteins escape the surveillance of chaperones and targeted degradation:
- they tend to form aberrant aggregates.
The intermediate, highly unstable, soluble species of aggregates are very toxic to the cell.4 Small aggregates assimilate into larger ones that are insoluble and perhaps less toxic to the cell.
Finally, with assistance from the microtubule transportation system:
- small aggregates may be translocated to the microtubule organizing center to fuse with one another to form large inclusion bodies, termed, by some, aggresomes.5
The insoluble aggregates and aggresomes are:
- unlikely to be accessible to the proteasome and CMA
- both of which can only degrade proteins individually
Hence, the removal of aggregated proteins requires a different mechanism that is capable of bulk degradation of substrates, a role filled by macroautophagy.
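The quality-control flow broken out above reads almost like a dispatch routine: attempt repair, fall back to individual degradation, and if surveillance is escaped, collect aggregates for bulk removal. A toy Python restatement of just that decision logic (all names hypothetical; this is not a simulation, only the prose as branches):

```python
# Toy restatement of the protein quality control (PQC) decision flow
# described above. Hypothetical names; the branches mirror the prose.
def pqc_fate(misfolded: bool, refolded_by_chaperones: bool,
             escaped_surveillance: bool) -> str:
    if not misfolded:
        return "functional protein"
    if refolded_by_chaperones:
        return "repaired (unfolded/refolded by chaperones)"
    if not escaped_surveillance:
        # terminally misfolded proteins degraded one at a time
        return "degraded individually (UPS or chaperone-mediated autophagy)"
    # escaped proteins aggregate; aggregates may be collected into
    # aggresomes and removed in bulk by macroautophagy
    return "aggregates -> aggresome -> bulk degradation by macroautophagy"

print(pqc_fate(True, False, True))
```

The last branch is the key point of the passage: the proteasome and CMA only handle proteins individually, so a separate bulk mechanism (macroautophagy) is needed once aggregation has occurred.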
And there's the reason for multiple recycling functions and larger-scale systems of degradation. What's interesting is the aggregation system - aggresomes. This cannot happen by accident either. It must be organized, or damaged, misfolded proteins would just wander all over the cytoplasm or ER. If individual processing breaks down, to relieve the Quality Control System's backlog, the Backlog Checking System springs into action, rerouting individual proteins targeted for recycling to Bulk Degradation - autophagy. Lovely! My gosh how well Darwin Magic works. ;-) . DATCG
From the previous paper on Quality Control I posted, from the Intro... "Hence, the cell has evolved intracellular quality control (QC) mechanisms at protein and organelle levels to minimize the level and toxicity of misfolded proteins and defective organelles in the cell." Wow, blind, unguided mutations and natural selection Did It. Just wham, bam, thank you mutation Ma'am magically created a Quality Control System. This blind, unguided mythology is sheer genius. In doing so, it created a semiotic system of post-translational communication systems and codes to bridge between two other systems' control features. Darwin is Magic and Magic is Darwin. DATCG
Gpuccio, "Redox code, degrons, recognins: ubiquitin is certainly offering us a lot of beautiful gifts!" Beautiful indeed :) Screaming Design once again. Wish I had more time to go through all the different papers you and Dionisio post here. Such a great treasure trove of Biological Function and FSOC (Functional Sequence Organized Complexity) and Irreducible Complexity! Let's take out Redox Codes, the Bridge of Semiotic Language, Degrons, and Recognins between the two systems of Proteasome and Autophagy, and what happens to Quality Control? DATCG
#371 Gpuccio, Dionisio, Upright BiPed, So, why have a conditional check and backup system for overload? Why have two pathway systems for degradation and recycling, Proteasome and Autophagy, at all? I think a typical Darwinist response would be: this is wasteful and shows no planning, a result of random mutations and natural selection. It's a "Bad" Design. OK, so back to the "bad" design argument for Darwinists? Those types of Darwinist arguments failed for Eye Design. Returning to PTM - Posttranslational Modification. A nice chart (2013) of pathways for Proteasome, CMA (chaperone-mediated autophagy) and Macroautophagy (or autophagy) follows below. Or, a pre-programmed Quality Control System for distribution and recycling? This answers (for me, clarifies) a question from many days ago on Protein Aggregation and balance. Essentially this is Quality Control Systems. And any good QCS has multiple checks and balances, especially during times of stress, or in this case aggregation of unwanted product backlogs. Posttranslational Modification and Quality Control - Figures and Tables Xuejun Wang, J. Scott Pattison, Huabo Su https://doi.org/10.1161/CIRCRESAHA.112.268706 Circulation Research. 2013;112:367-381 Originally published January 17, 2013
Figure 1: An illustration of protein quality control in the cell. Chaperones help fold nascent polypeptides, unfold misfolded proteins and refold them, and channel terminally misfolded proteins for degradation by the ubiquitin-proteasome system (UPS) or chaperone-mediated autophagy (CMA). When(IF) escaped from targeted degradation, misfolded proteins form aggregates via hydrophobic interactions. Aggregated proteins can be selectively targeted by macroautophagy to, and degraded by, the lysosome. hsc indicates heat shock cognate 70; and LAMP-2A, lysosome-associated membrane protein 2A.
Very interesting. Also note: Tables at bottom of page.
Examples of Posttranslational Modifications in Intracellular Quality Control
(table columns: Targets | PTMs | Regulating Enzymes | Biological Function)
The Research Paper at Circulation Research: Posttranslational Modification and Quality Control Some concluding remarks from the paper:
Dissection of the upstream pathways that regulate the PTM will be crucial to identify novel targets or strategies for developing pharmaceutics to improve QC in the cell. It is anticipated that comprehensive investigations into intracellular QC in cardiac physiology and pathology will give rise to new therapeutics to better battle heart diseases, the leading cause of death of humans. On macroautophagy, there is a good body of evidence supporting that activation of macroautophagy improves PQC and thereby protects the heart23; nonetheless, excessive macroautophagy on certain conditions such as reperfusion may be detrimental.94 Some studies, but not others, have shown that pharmacologically induced ubiquitous proteasome inhibition protects against I/R injury and pressure-overloaded cardiac hypertrophy.143,144 However, genetically induced moderate proteasome inhibition in cardiomyocytes was recently shown to exacerbate acute I/R injury in mice.18 Furthermore, administration of bortezomib, a proteasome inhibitor, to multiple myeloma patients can cause reversible heart failure.145 Conversely, recent genetic studies reveal that proteasome function enhancement in the cardiomyocytes of diseased hearts can slow down the progression of a bona fide cardiac proteinopathy and minimize acute I/R injury in mice.2 Hence, it is envisioned that enhancing proteasome proteolytic function may be a potential new strategy to treat heart diseases with increased proteolytic stress. Question - what's causing the need for enhanced "proteasome proteolytic function"? Should not the CAUSE of the increased proteolytic function be the target of research?
People eat bad processed foods, then researchers must "fix" the problematic results, the corrosive downside of abnormal functionality, instead of correcting the input side: our eating habits. GIGO = Garbage In, Garbage Out. Or: you can't have your cake and eat it too. Correct bad eating habits, correct the outcome. Correct the soaring cost of health care. This is only one area. Obviously stress conditions can arise unrelated to eating habits. . DATCG
DATCG: Redox code, degrons, recognins: ubiquitin is certainly offering us a lot of beautiful gifts! :) gpuccio
Upright BiPed: I thought you would like it! :) This redox code is really interesting, maybe we will have to deepen the issue sometime. gpuccio
#371 Gpuccio, Another great find, those little Degrons are important. Also, Recognins.
It has been a mystery, however, why studies for the past five decades identified only a handful of Nt-arginylated substrates in mammals, although five of 20 principal amino acids are eligible for arginylation.
Here, we show that the Nt-Arg functions as a bimodal degron that directs substrates to either the ubiquitin (Ub)-proteasome system (UPS) or macroautophagy depending on physiological states.
Check Physiological State - Stress Table
IF Condition-State = Normal:
    Select Proteasome Pathway
ELSE IF Condition-State = Perturbed:
    Select ChangePath: MacroAutophagy Pathway
In normal conditions, the arginylated forms of proteolytic cleavage products, D101-CDC6 and D1156-BRCA1, are targeted to UBR box-containing N-recognins and degraded by the proteasome. However, when(IF) proteostasis by the UPS is perturbed, their Nt-Arg redirects these otherwise cellular wastes to macroautophagy through its binding to the ZZ domain of the autophagic adaptor p62/STQSM/Sequestosome-1. Upon binding to the Nt-Arg, p62 acts as an autophagic N-recognin that undergoes self-polymerization, facilitating cargo collection and lysosomal degradation of p62-cargo complexes. A chemical mimic of Nt-Arg redirects Ub-conjugated substrates from the UPS to macroautophagy and promotes their lysosomal degradation. Our results suggest that the Nt-Arg proteome of arginylated proteins contributes to reprogramming global proteolytic flux under stresses.
(IF) emphasis mine This is a pre-programmed backup response to a stress-conditions overload checkpoint. If overload conditions exist, tag, reroute for destruction until conditions change back to normal. Remember, it has to 1) signal overload or recognize stress conditions, 2) tag and mark for rerouting, 3) reroute and destruct, 4) once stress conditions or physiological conditions are no longer true, turn off the tagging and marking signal for rerouting and resume normal condition posture. . DATCG
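The four steps above (sense stress, tag, reroute, reset) can be sketched as a tiny state machine. This is only an illustrative analogy of the Nt-Arg bimodal degron switch the paper describes; the class and method names are hypothetical, not any real software:

```python
# Hypothetical sketch of the stress-conditioned rerouting described
# above: sense stress, tag/reroute, and reset when conditions
# normalize. Names are illustrative only.
class DegradationRouter:
    def __init__(self):
        self.ups_perturbed = False

    def sense(self, ups_perturbed: bool):
        # Step 1: signal overload / recognize stress conditions
        # (and Step 4: clear the signal when conditions normalize)
        self.ups_perturbed = ups_perturbed

    def route(self, substrate: str) -> str:
        # Steps 2-3: the Nt-Arg tag redirects cargo when the UPS is in trouble
        if self.ups_perturbed:
            return f"{substrate}: Nt-Arg -> p62 ZZ domain -> macroautophagy"
        return f"{substrate}: UBR-box N-recognin -> proteasome"

router = DegradationRouter()
print(router.route("D101-CDC6"))  # normal conditions: proteasome route
router.sense(True)                # stress / UPS perturbation detected
print(router.route("D101-CDC6"))  # rerouted to macroautophagy
router.sense(False)               # conditions back to normal
print(router.route("D101-CDC6"))  # proteasome route resumes
```

The same substrate takes different routes depending only on the sensed state, which is exactly the conditional, reversible behavior listed in steps 1-4.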
#369 Gpuccio, Wow! Great find! Ahaha, Upright BiPed is dancing the Post-Edit, Semiotic dance ;-) From wiki... Lingua Franca:
A lingua franca, also known as a bridge language, common language, trade language, vehicular language, or link language is a language or dialect systematically used to make communication possible between people who do not share a native language or dialect, particularly when it is a third language that is distinct from both native languages.
Darwinist, Darwinist, whatcha you gonna do, whatcha gonna do when Design comes for you? . DATCG
Posttranslational modifications (PTMs) are the lingua franca of cellular communication.
BOOM! - - - - - - - - - - - - EDIT: But hey, there's no evidence of design in biology. :) Upright BiPed
DATCG, Dionisio: The degron scenario is not complete, yet. Here comes the role of N terminal arginylation: N-terminal arginylation generates a bimodal degron that modulates autophagic proteolysis. http://www.pnas.org/content/early/2018/02/27/1719110115.long
Abstract The conjugation of amino acids to the protein N termini is universally observed in eukaryotes and prokaryotes, yet its functions remain poorly understood. In eukaryotes, the amino acid l-arginine (l-Arg) is conjugated to N-terminal Asp (Nt-Asp), Glu, Gln, Asn, and Cys, directly or associated with posttranslational modifications. Following Nt-arginylation, the Nt-Arg is recognized by UBR boxes of N-recognins such as UBR1, UBR2, UBR4/p600, and UBR5/EDD, leading to substrate ubiquitination and proteasomal degradation via the N-end rule pathway. It has been a mystery, however, why studies for the past five decades identified only a handful of Nt-arginylated substrates in mammals, although five of 20 principal amino acids are eligible for arginylation. Here, we show that the Nt-Arg functions as a bimodal degron that directs substrates to either the ubiquitin (Ub)-proteasome system (UPS) or macroautophagy depending on physiological states. In normal conditions, the arginylated forms of proteolytic cleavage products, D101-CDC6 and D1156-BRCA1, are targeted to UBR box-containing N-recognins and degraded by the proteasome. However, when proteostasis by the UPS is perturbed, their Nt-Arg redirects these otherwise cellular wastes to macroautophagy through its binding to the ZZ domain of the autophagic adaptor p62/STQSM/Sequestosome-1. Upon binding to the Nt-Arg, p62 acts as an autophagic N-recognin that undergoes self-polymerization, facilitating cargo collection and lysosomal degradation of p62-cargo complexes. A chemical mimic of Nt-Arg redirects Ub-conjugated substrates from the UPS to macroautophagy and promotes their lysosomal degradation. Our results suggest that the Nt-Arg proteome of arginylated proteins contributes to reprogramming global proteolytic flux under stresses.
IOWs, ubiquitinated proteins are usually degraded by the proteasome, but if the proteasome is in trouble, they are shifted to the macroautophagy pathway, and the switch seems to be N-terminal arginylation. This is a comprehensive review about this "alternative" degradation pathway: p62/SQSTM1/Sequestosome-1 is an N-recognin of the N-end rule pathway which modulates autophagosome biogenesis
Abstract: Macroautophagy mediates the selective degradation of proteins and non-proteinaceous cellular constituents. Here, we show that the N-end rule pathway modulates macroautophagy. In this mechanism, the autophagic adapter p62/SQSTM1/Sequestosome-1 is an N-recognin that binds type-1 and type-2 N-terminal degrons (N-degrons), including arginine (Nt-Arg). Both types of N-degrons bind its ZZ domain. By employing three-dimensional modeling, we developed synthetic ligands to p62 ZZ domain. The binding of Nt-Arg and synthetic ligands to ZZ domain facilitates disulfide bond-linked aggregation of p62 and p62 interaction with LC3, leading to the delivery of p62 and its cargoes to the autophagosome. Upon binding to its ligand, p62 acts as a modulator of macroautophagy, inducing autophagosome biogenesis. Through these dual functions, cells can activate p62 and induce selective autophagy upon the accumulation of autophagic cargoes. We also propose that p62 mediates the crosstalk between the ubiquitin-proteasome system and autophagy through its binding Nt-Arg and other N-degrons.
p62, or Sequestosome 1, is a ubiquitin-binding protein. gpuccio
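The bimodal-degron behavior described in these abstracts amounts, in essence, to a conditional routing rule: proteasome when the UPS is healthy, macroautophagy when it is perturbed. Here is a toy sketch of that logic; the function name, arguments, and return strings are all illustrative, not from the papers:

```python
# Toy model of the Nt-Arg "bimodal degron": arginylated substrates
# go to the proteasome when the UPS is functional, and are redirected
# to macroautophagy (via the ZZ domain of p62/SQSTM1) when
# proteostasis is perturbed. All names and strings are illustrative.

def route_substrate(nt_arginylated: bool, ups_functional: bool) -> str:
    """Return the degradation route for a proteolytic cleavage product."""
    if not nt_arginylated:
        return "no N-degron: not targeted by this pathway"
    if ups_functional:
        # Nt-Arg recognized by UBR-box N-recognins (UBR1/2/4/5)
        return "ubiquitination -> proteasome (UPS)"
    # UPS perturbed: Nt-Arg binds p62's ZZ domain, p62 self-polymerizes
    # and delivers the cargo to the autophagosome, then the lysosome
    return "p62 binding -> macroautophagy -> lysosome"

print(route_substrate(True, True))
print(route_substrate(True, False))
```

The point of the sketch is only that a single N-terminal mark acts as the switch condition between two otherwise independent degradation machineries.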
DATCG, Dionisio: This is about ROS (Reactive oxygen species) and RES (Reactive electrophile species). RES seem to be specially important in signaling. ROS-mediated lipid peroxidation and RES-activated signaling. https://www.annualreviews.org/doi/abs/10.1146/annurev-arplant-050312-120132?rfr_dat=cr_pub%3Dpubmed&url_ver=Z39.88-2003&rfr_id=ori%3Arid%3Acrossref.org&journalCode=arplant
Abstract: Nonenzymatic lipid oxidation is usually viewed as deleterious. But if this is the case, then why does it occur so frequently in cells? Here we review the mechanisms of membrane peroxidation and examine the genesis of reactive electrophile species (RES). Recent evidence suggests that during stress, both lipid peroxidation and RES generation can benefit cells. New results from genetic approaches support a model in which entire membranes can act as supramolecular sinks for singlet oxygen, the predominant reactive oxygen species (ROS) in plastids. RES reprogram gene expression through a class II TGA transcription factor module as well as other, unknown signaling pathways. We propose a framework to explain how RES signaling promotes cell “REScue” by stimulating the expression of genes encoding detoxification functions, cell cycle regulators, and chaperones. The majority of the known biological activities of oxygenated lipids (oxylipins) in plants are mediated either by jasmonate perception or through RES signaling networks.
4-Hydroxynonenal (HNE) is the RES product quoted in the paper linked at my previous comment. gpuccio
DATCG, Dionisio: This is really interesting. It introduces us to a new code, the redox code, and a new cross-talk with the ubiquitin code. Ube2V2 Is a Rosetta Stone Bridging Redox and Ubiquitin Codes, Coordinating DNA Damage Responses. https://pubs.acs.org/doi/pdf/10.1021/acscentsci.7b00556
Abstract: Posttranslational modifications (PTMs) are the lingua franca of cellular communication. Most PTMs are enzyme-orchestrated. However, the reemergence of electrophilic drugs has ushered mining of unconventional/non-enzyme-catalyzed electrophile-signaling pathways. Despite the latest impetus toward harnessing kinetically and functionally privileged cysteines for electrophilic drug design, identifying these sensors remains challenging. Herein, we designed "G-REX"-a technique that allows controlled release of reactive electrophiles in vivo. Mitigating toxicity/off-target effects associated with uncontrolled bolus exposure, G-REX tagged first-responding innate cysteines that bind electrophiles under true kcat/Km conditions. G-REX identified two allosteric ubiquitin-conjugating proteins-Ube2V1/Ube2V2-sharing a novel privileged-sensor-cysteine. This non-enzyme-catalyzed-PTM triggered responses specific to each protein. Thus, G-REX is an unbiased method to identify novel functional cysteines. Contrasting conventional active-site/off-active-site cysteine-modifications that regulate target activity, modification of Ube2V2 allosterically hyperactivated its enzymatically active binding-partner Ube2N, promoting K63-linked client ubiquitination and stimulating H2AX-dependent DNA damage response. This work establishes Ube2V2 as a Rosetta-stone bridging redox and ubiquitin codes to guard genome integrity.
(Public access. Emphasis mine) Some interesting passages:
Through a phenomenal research effort we now understand much about complex post-translational regulation in cell signaling. Approximately 10% of the genome is involved in phosphorylation and ubiquitination --- Against the backdrop of these exquisite enzyme-regulated signaling subsystems, the cell has also harnessed reactive small-molecule signaling mediators to fine-tune responses. In this paradigm, reactive oxygen or electrophilic species (ROS/RES) directly modify a specific signal-sensing protein, preempting decision-making. --- These data indicate that redox signaling HNEylation of one regulatory protein (at a site with no “expected” reactivity) can affect ubiquitin signaling via a third-party enzyme containing a catalytic cysteine (Ube2N).
gpuccio
DATCG, Dionisio: In plants: E3 ubiquitin ligases: key regulators of hormone signaling in plants http://www.mcponline.org/content/early/2018/03/07/mcp.MR117.000476.full.pdf
Abstract Ubiquitin-mediated control of protein stability is central to most aspects of plant hormone signaling. Attachment of ubiquitin to target proteins occurs via an enzymatic cascade with the final step being catalyzed by a family of enzymes known as E3 ubiquitin ligases, which have been classified based on their protein domains and structures. While E3 ubiquitin ligases are conserved among eukaryotes, in plants they are well-known to fulfill unique roles as central regulators of phytohormone signaling, including hormone perception and regulation of hormone biosynthesis. This review will highlight up-to-date findings that have refined well-known E3 ligase-substrate interactions and defined novel E3 ligase substrates that mediate numerous hormone signaling pathways. Additionally, examples of how particular E3 ligases may mediate hormone crosstalk will be discussed as an emerging theme. Looking forward, promising experimental approaches and methods that will provide deeper mechanistic insight into the roles of E3 ubiquitin ligases in plants will be considered.
Now, I suppose that regulating hormone crosstalk in plants is something again completely different from all the functions we have already listed. It is really amazing how the ubiquitin system seems capable of regulating practically anything! :) gpuccio
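The enzymatic cascade the abstract refers to, ubiquitin activation (E1), conjugation (E2), and ligation (E3), can be written down as a sequence of hand-offs. This is a didactic toy only; the function name and step wording are mine, not from the paper:

```python
# Toy sketch of the E1 -> E2 -> E3 ubiquitination cascade.
# Step wording is illustrative. Note that substrate selection happens
# at the E3 step, which is why E3 ligases are the specificity factors
# (and why plants can use them to regulate so many hormone pathways).

def ubiquitinate(substrate: str) -> list[str]:
    """Return the ordered steps that attach ubiquitin to `substrate`."""
    return [
        "E1 activates ubiquitin (ATP-dependent thioester bond)",
        "E1 hands ubiquitin off to an E2 conjugating enzyme",
        f"an E3 ligase selects {substrate} and catalyzes the transfer",
        f"{substrate} now carries a ubiquitin mark",
    ]

for step in ubiquitinate("target-protein"):
    print(step)
```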
DATCG, Dionisio: This is new and interesting: Ubiquitin Modulates Liquid-Liquid Phase Separation of UBQLN2 via Disruption of Multivalent Interactions. http://www.cell.com/molecular-cell/pdf/S1097-2765(18)30102-3.pdf
Abstract: Under stress, certain eukaryotic proteins and RNA assemble to form membraneless organelles known as stress granules. The most well-studied stress granule components are RNA-binding proteins that undergo liquid-liquid phase separation (LLPS) into protein-rich droplets mediated by intrinsically disordered low-complexity domains (LCDs). Here we show that stress granules include proteasomal shuttle factor UBQLN2, an LCD-containing protein structurally and functionally distinct from RNA-binding proteins. In vitro, UBQLN2 exhibits LLPS at physiological conditions. Deletion studies correlate oligomerization with UBQLN2's ability to phase-separate and form stress-induced cytoplasmic puncta in cells. Using nuclear magnetic resonance (NMR) spectroscopy, we mapped weak, multivalent interactions that promote UBQLN2 oligomerization and LLPS. Ubiquitin or polyubiquitin binding, obligatory for UBQLN2's biological functions, eliminates UBQLN2 LLPS, thus serving as a switch between droplet and disperse phases. We postulate that UBQLN2 LLPS enables its recruitment to stress granules, where its interactions with ubiquitinated substrates reverse LLPS to enable shuttling of clients out of stress granules.
There seems to be almost everything here: ubiquitin chains, membraneless organelles, intrinsically disordered domains. UBQLN2 (Ubiquilin 2) is a strange object. It is a 624 AAs protein, and here is the Uniprot function section:
Plays an important role in the regulation of different protein degradation mechanisms and pathways including ubiquitin-proteasome system (UPS), autophagy and the endoplasmic reticulum-associated protein degradation (ERAD) pathway. Mediates the proteasomal targeting of misfolded or accumulated proteins for degradation by binding (via UBA domain) to their polyubiquitin chains and by interacting (via ubiquitin-like domain) with the subunits of the proteasome (PubMed:10983987). Plays a role in the ERAD pathway via its interaction with ER-localized proteins FAF2/UBXD8 and HERPUD1 and may form a link between the polyubiquitinated ERAD substrates and the proteasome (PubMed:24215460, PubMed:18307982). Involved in the regulation of macroautophagy and autophagosome formation; required for maturation of autophagy-related protein LC3 from the cytosolic form LC3-I to the membrane-bound form LC3-II and may assist in the maturation of autophagosomes to autolysosomes by mediating autophagosome-lysosome fusion (PubMed:19148225, PubMed:20529957). Negatively regulates the endocytosis of GPCR receptors: AVPR2 and ADRB2, by specifically reducing the rate at which receptor-arrestin complexes concentrate in clathrin-coated pits (CCPs) (PubMed:18199683).
It has a ubiquitin-like domain, but technically it does not seem to be an E3 ligase, nor a ubiquitin-binding protein. But it certainly regulates many ubiquitin-related pathways. Things get stranger and stranger! :) gpuccio
DATCG: "And what was interesting is the appeal to Wagner as if his wild imaginations about hyper-astronomical library eliminates all the steps and makes it easier to evolve." Frankly, I could never find anything credible in what Wagner says. "Though they refuse to admit Design to the table, Third Way is admitting failure of neo-darwinism to save Darwin." Yes, in a sense they are. But I must admit that it is more difficult for me to understand people who understand and don't admit. In a sense, I have more sympathy for Dawkins... "BTW, anyone else notice how Asian nations are not held down by Darwinism? They think outside the “Black Box?”" Yes, I noticed. They are doing a lot of good work. I think they are probably more competitive, and interested in the results. And truth is often needed to get results! :) "Darwin I think half the time remains due to England and Western influence of society, not due to scientific rigor." It's the lingering power of a static Academia, still conditioned by old philosophies and by ever young political feuds. gpuccio
From the video Dionisio posted @271, Denis Noble, @31.40 mark... Denis Noble @31.44 "I include this slide because it beautifully illustrates that vastly more must be transmitted to the next generation than just the DNA of the nucleus..." As he's showing the slide of the Goldfish, the Carp and the resulting combination leading to something aligned in the middle to a Gold-Carp ;-) He then goes on to quote cytoplasmic factors in the egg cell from a paper by a Chinese Scientist. BTW, anyone else notice how Asian nations are not held down by Darwinism? They think outside the "Black Box?" Darwin I think half the time remains due to England and Western influence of society, not due to scientific rigor. It's as if the old guard cannot let go of a 19th century failed paradigm. DATCG
Gpuccio #358, re: Dionisio's #271, Great video Find Dionisio! I love it when we see the other side admit the real issues are wide open and unsolved by Darwinian mechanisms! Excellent, it's another addition of information I had not reviewed yet. I quickly went to minute 30 Dionisio remarked on and very interesting to hear Denis Noble's remarks. Gold Fish, Carp -> nucleus replacment = -> Something in the Middle :) :) :) Thanks for a clear and sober look at state of genomics, epigenome and many other issues related to cellular organisms and evolution. Going back to a previous OP you recommended earlier, where Darwinist supporters kept trying to say it was easy to see evolution "did it" mantra. I see they would glance over these difficult issues in favor of story telling. Leaving out vast amounts of details and frankly, millions of gradual steps if they were to be honest about Random mutations and Natural Selection via gradual process. And what was interesting is the appeal to Wagner as if his wild imaginations about hyper-astronomical library eliminates all the steps and makes it easier to evolve. So, now they've eliminated Darwin? Haha... they really are in a Catch-22 these days. At least Noble of Third Way admits Darwin's dead and one of his earlier videos shreds Richard Dawkins. Though they refuse to admit Design to the table, Third Way is admitting failure of neo-darwinism to save Darwin. DATCG
DATCG at #357: DNA repair: another key issue! Just a curiosity: many of the proteins involved have a similar evolutionary history, more or less as can be seen in Fig. 5 in the OP for BRCA1, with a late development of the human sequence, and some important late jump in mammals. They are: RNF8 (485 AAs), RNF168 (571 AAs), BRCA1 (1863 AAs), BRCA2 (3418 AAs), MDC1 (2089 AAs), RAP80 (719 AAs), BARD1 (777 AAs), and CtIP (897 AAs). Other proteins in the process, instead, are older (highly conserved in eukaryotes), and a couple of them are mainly engineered (in human form) at the vertebrate transition. The presence of so many late-engineered proteins is maybe surprising, because DNA repair seems to be an old problem. But it seems that it requires new or more specific solutions, especially in mammals. It would be interesting to understand why. gpuccio
Dionisio: "Oops! You used a politically incorrect word in scientific discussions: “miracles”" It's an old problem. In my whole life, I have never been able to be politically correct! :) gpuccio
Byrne, Robert, Thomas Mund, and Julien D. F. Licchesi. “Activity-Based Probes for HECT E3 Ubiquitin Ligases.” ChemBioChem 18, no. 14 (July 18, 2017): 1415–27. https://doi.org/10.1002/cbic.201700006. Flack, Joshua E., Juliusz Mieszczanek, Nikola Novcic, and Mariann Bienz. “Wnt-Dependent Inactivation of the Groucho/TLE Co-Repressor by the HECT E3 Ubiquitin Ligase Hyd/UBR5.” Molecular Cell 67, no. 2 (July 2017): 181–193.e5. https://doi.org/10.1016/j.molcel.2017.06.009. Gabrielsen, Mads, Lori Buetow, Mark A. Nakasone, Syed Feroj Ahmed, Gary J. Sibbet, Brian O. Smith, Wei Zhang, Sachdev S. Sidhu, and Danny T. Huang. “A General Strategy for Discovery of Inhibitors and Activators of RING and U-Box E3 Ligases with Ubiquitin Variants.” Molecular Cell 68, no. 2 (October 2017): 456–470.e10. https://doi.org/10.1016/j.molcel.2017.09.027. Gorelik, Maryna, and Sachdev S. Sidhu. “Specific Targeting of the Deubiquitinase and E3 Ligase Families with Engineered Ubiquitin Variants: Gorelik and Sidhu.” Bioengineering & Translational Medicine 2, no. 1 (March 2017): 31–42. https://doi.org/10.1002/btm2.10044. Iimura, Akira, Fuhito Yamazaki, Toshiyasu Suzuki, Tatsuya Endo, Eisuke Nishida, and Morioh Kusakabe. “The E3 Ubiquitin Ligase Hace1 Is Required for Early Embryonic Development in Xenopus Laevis.” BMC Developmental Biology 16, no. 1 (December 2016). https://doi.org/10.1186/s12861-016-0132-y. Kai, Masatake, Naoto Ueno, and Noriyuki Kinoshita. “Phosphorylation-Dependent Ubiquitination of Paraxial Protocadherin (PAPC) Controls Gastrulation Cell Movements.” Edited by Jung Weon Lee. PLOS ONE 10, no. 1 (January 12, 2015): e0115111. https://doi.org/10.1371/journal.pone.0115111. Lorenz, Sonja. “Structural Mechanisms of HECT-Type Ubiquitin Ligases.” Biological Chemistry 399, no. 2 (January 26, 2018). https://doi.org/10.1515/hsz-2017-0184. Mlodzik, Marek. “Ubiquitin Connects with Planar Cell Polarity.” Cell 137, no. 2 (April 2009): 209–11. https://doi.org/10.1016/j.cell.2009.04.002. 
Mund, Thomas, Michael Graeb, Juliusz Mieszczanek, Melissa Gammons, Hugh R. B. Pelham, and Mariann Bienz. “Disinhibition of the HECT E3 Ubiquitin Ligase WWP2 by Polymerized Dishevelled.” Open Biology 5, no. 12 (December 2015): 150185. https://doi.org/10.1098/rsob.150185. Narimatsu, Masahiro, Rohit Bose, Melanie Pye, Liang Zhang, Bryan Miller, Peter Ching, Rui Sakuma, et al. “Regulation of Planar Cell Polarity by Smurf Ubiquitin Ligases.” Cell 137, no. 2 (April 2009): 295–307. https://doi.org/10.1016/j.cell.2009.02.025. Ramakrishnan, Aravinda-Bharathi, Abhishek Sinha, Vinson B. Fan, and Ken M. Cadigan. “The Wnt Transcriptional Switch: TLE Removal or Inactivation?” BioEssays 40, no. 2 (February 2018): 1700162. https://doi.org/10.1002/bies.201700162. Xie, Zhongdong, Han Liang, Jinmeng Wang, Xiaowen Xu, Yan Zhu, Aizhen Guo, Xian Shen, Fuao Cao, and Wenjun Chang. “Significance of the E3 Ubiquitin Protein UBR5 as an Oncogene and a Prognostic Biomarker in Colorectal Cancer.” Oncotarget 8, no. 64 (December 8, 2017). https://doi.org/10.18632/oncotarget.22531. Zhang, Wei, Maria A. Sartori, Taras Makhnevych, Kelly E. Federowicz, Xiaohui Dong, Li Liu, Satra Nim, et al. “Generation and Validation of Intracellular Ubiquitin Variant Inhibitors for USP7 and USP10.” Journal of Molecular Biology 429, no. 22 (November 2017): 3546–60. https://doi.org/10.1016/j.jmb.2017.05.025. Dionisio
DATCG at #356: Great video! The Kinetochore is certainly another incredible structure. From Wikipedia:
A 2010 study uses a complex method (termed multiclassifier combinatorial proteomics or MCCP) to analyze the proteomic composition of vertebrate chromosomes, including kinetochores.[32] Although this study does not include a biochemical enrichment for kinetochores, obtained data include all the centromeric subcomplexes, with peptides from all 125 known centromeric proteins. According to this study, there are still about one hundred unknown kinetochore proteins, doubling the known structure during mitosis, which confirms the kinetochore as one of the most complex cellular substructures. Consistently, a comprehensive literature survey indicated that there had been at least 196 human proteins already experimentally shown to be localized at kinetochores
gpuccio
gpuccio @358: "In the end, we know that the information, whatever its form, must be in some way connected to the physical organism that reproduces, because there is no doubt that it is transmitted: otherwise, we could not explain how the miracles of function and differentiation take place each time, with remarkable precision." Oops! You used a politically incorrect word in scientific discussions: "miracles" You better watch out! Next time, please refrain from using words like that. :) Dionisio
DATCG at #355: Very good thoughts. I really don't know how the problem of Big Data and of overwhelming details can be treated to get a glimpse of the controlling procedures. The problem in the end is to find where the information is. Let's take for example the fundamental problem of the transition from single celled organisms to multicellular. The problem could be summed up as follows: if I start with some single celled organism, let's say yeast, and I want to get a multicellular organism, let's say c. elegans, what kind of information do I need to add? Is it all genomic? And what is it exactly? Now, of course we know that yeast has "only" 6000 genes, while c. elegans has almost 20000. But the mere number of genes is not a good parameter, considering for example that some other fungus, like Phanerochaete chrysosporium, has almost 12000, and that drosophila has 13600, while humans themselves are at about 20000. The number of genes is not a good answer. Non coding DNA is probably better, but of course it is still difficult to understand the role of a great part of it. Epigenetic states can keep a lot of information, but are they dependent on genomic information? Or can they store supplementary information which is transmitted directly, and does not rely on genomic storing? In the end, we know that the information, whatever its form, must be in some way connected to the physical organism that reproduces, because there is no doubt that it is transmitted: otherwise, we could not explain how the miracles of function and differentiation take place each time, with remarkable precision. I think we should probably consider a living being as a whole system, that includes genetic and epigenetic information which, at any moment, is in a dynamic state of interaction. The whole system probably bears the necessary information for life and reproduction and cell differentiation.
But that system could store information in many different ways, some of which we certainly have to discover yet, some of which could be very different from what we usually think of. For example, I believe that we must go beyond the mere biochemical states, and look more deeply at what biophysics can tell us. We already know that for DNA biophysical states are complex and still poorly understood, and that they certainly have a major role in transcription regulation. The same can be said for proteins, especially considering what we have found about conditional folding, intrinsic disorder, and so on. The same can be said for cellular states, including organelles and compartments, both membrane-linked and membraneless. In a sense, the whole cell is a multi-faceted information system, that we still need to decode. Extremely interesting, from this point of view, is the goldfish-carp experiment referenced in the Denis Noble video linked by Dionisio at #271, starting at mark 31:46, and whose original paper can be found here: Cytoplasmic Impact on Cross-Genus Cloned Fish Derived from Transgenic Common Carp (Cyprinus carpio) Nuclei and Goldfish (Carassius auratus) Enucleated Eggs https://academic.oup.com/biolreprod/article/72/3/510/2666963 And here is a follow-up, with interesting (and unexpected) news about mitochondria: The carp–goldfish nucleocytoplasmic hybrid has mitochondria from the carp as the nuclear donor species https://www.sciencedirect.com/science/article/pii/S0378111913016855?via%3Dihub Here is a more general review: The egg and the nucleus: a battle for supremacy http://dev.biologists.org/content/140/12/2449.long#sec-10 And here, too: Interspecies Somatic Cell Nuclear Transfer: Advancements and Problems https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3787369/ gpuccio
here's another "simple" animation for Damaged DNA and Repair from 2011... DNA Damage Response to double-stranded DNA break -- Homologous Recombination And accompanying information summary of work flow. Check out all the ubiquitin tags and ubiquitin chains.
The DNA damage response (DDR) of the cell includes: (i) sensing, (ii) signalling, and (iii) repair of such damage. Double strand breaks (DSBs) are the most toxic DNA damage for the cell. They can be induced by ionizing radiation, laser beam, bleomycin, Topoisomerase II enzyme, endonucleases, or can also be produced during the repair of single-stranded breaks (SSBs) in DNA. It has been reported that approximately nine DSBs per cell per day are produced in physiological conditions. The Homologous Recombination (HR) pathway is in charge of DSB repair, in an error-free fashion, during the S or G2 phases of the cell cycle, by using the sister chromatid as template. Proficient sensing and signaling of DSBs is very important for the maintenance of genome and chromosomal stability. Recent research has stressed the key role of post-translational modifications of DDR proteins, such as phosphorylation, acetylation, methylation, ubiquitination and sumoylation, in regulating the DNA damage signalling and response. The HR DDR signalling is believed to act in the following order: first, the DSB lesion is recognized by the MRN complex (MRE11-RAD50-NBS1), which recruits the ATM (mutated in Ataxia Telangiectasia) kinase to the damage site. ATM phosphorylates serine 139 of H2AX (generating γH2AX) at the damage site, and also in a large number of nucleosomes around the DSB. ATM also phosphorylates MDC1 (mediator of DNA damage checkpoint protein 1). At this point the γH2AX-ATM-MDC1 connection generates a positive feedback loop, which contributes to amplifying the DSB signal throughout the whole nucleus. Following this, RNF8 (a ubiquitin ligase) modifies γH2AX. Then RNF168, an E3 ubiquitin ligase, detects the RNF8 signal on histones and amplifies it by creating poly-ubiquitin chains. At this time, the RAP80-Abraxas complex is recruited to the damage site, followed by the BRCA1-BRCC36 complex. BARD1 is believed to win the competition against the CtIP protein for coupling with the BRCA1-BRCC36 complex.
An important role is attributed to RAP80 ubiquitination by RNF8, and to BRCA1 SUMOylation by the PIAS1-UBC9 complex. At this time, CtIP couples to the MRN complex, displaces most of the DDR proteins at the DSB site, and catalyzes resection of the DSB ends using its exonuclease activity. Immediately, RPA binds to the ssDNA. Then BRCA2 displaces RPA and enhances RAD51 binding to the ssDNA filaments. This is the main signal for recruitment of the recombination machinery to complete the DSB repair.
Well, that's a load... for sure! DATCG
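Since the summary above spells out an explicit order of recruitment, it lends itself to being written down as a simple sequence. A minimal sketch, where the step granularity, descriptions, and function name are mine (and "gammaH2AX" stands in for γH2AX):

```python
# The HR-mediated DSB response summarized above, encoded as an
# ordered event list. This is a reading aid only, not a model:
# the real pathway has branches, feedback, and competition.

HR_DDR_STEPS = [
    ("MRN (MRE11-RAD50-NBS1)", "senses the DSB and recruits ATM"),
    ("ATM", "phosphorylates H2AX at Ser139 (gammaH2AX) and MDC1"),
    ("gammaH2AX-ATM-MDC1", "positive feedback loop amplifies the signal"),
    ("RNF8", "ubiquitin ligase, modifies gammaH2AX"),
    ("RNF168", "E3 ligase, extends poly-ubiquitin chains on histones"),
    ("RAP80-Abraxas", "recruited to the damage site"),
    ("BRCA1-BRCC36 + BARD1", "assemble at the break"),
    ("CtIP + MRN", "resect the DSB ends (exonuclease activity)"),
    ("RPA", "coats the single-stranded DNA"),
    ("BRCA2", "displaces RPA and loads RAD51"),
    ("RAD51", "forms filaments; recombination machinery completes repair"),
]

def downstream_of(factor: str) -> list[str]:
    """Factors acting after `factor` in this (simplified) ordering."""
    names = [name for name, _ in HR_DDR_STEPS]
    return names[names.index(factor) + 1:]

print(downstream_of("RPA"))
```

Even flattened this way, the sheer number of hand-offs makes the point of the comment: a lot of ordered machinery fires for every one of those ~nine daily breaks.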
And... now I jump to the Kinetochore :) Which Dionisio first posted in Comment #22, then subsequent comments with differing roles of ubiquitin. We live in amazing times where we enjoy all of this rich information available to us these days. Here's the original video from 2012, starting at the 6 min mark, of the Kinetochore broadcasting signals for eventual separation of chromosomes... https://www.youtube.com/watch?v=WFCvkkDSfIU Hope you guys had a great weekend. DATCG
Gpuccio, Thanks, it's a bit overwhelming, amazing and adventurous :) I guess what we do know or can infer is it's a Rules Based system, Modular and Conditional with massive amounts of parallel processing (billions of cells) and semiotic. The nervous system... whew.... back to being overwhelmed but in a good way :) But as you point out...
Because what we know is really amazing. And what we don’t know is exponentially more than what we know. However, I continue to believe that the procedures, those which are really essential, those which explain how things really work, are still missing. We know a lot of details, but we never understand what really controls the details.
Yes, many more details have come to light in just those few years since your 2015 post. And yet, so many more details remain to learn! This is the great architecture of all systems in the world combined. The Sensors and signals processing fascinates me, as that is part of control, procedural and rules based. The control systems are massive. Once a sensor detects an aberration, it's not enough to react. It must react, signal, monitor, then decide to keep reacting or stop the reaction - in the case of immune system defense. Or in any other case, as we discussed on the balance of protein synthesis and degradation and recycling, a delicate balance. I've been reading when I can about Sensors discovered in these or other processes. We know how strongly some systems are conserved. And we know why some are intentionally allowed to vary. We need a Big Table of Rules and Conditions for Big Data. Problem, Detection, Rules (If/Then, Boolean, etc.), Signal, Proteins, Actions, Do-Until ... Next Step Maybe that's just too simplified, but somehow all of these actions need to be codified, procedurally assigned and pattern checked. Before I saw you and Dionisio discuss Procedures, an idea came that maybe one method of realization and visualization would be to track a specific change, damage, or protein life from start to finish across all systems interaction. From birth to death or recycling (degradation) in a cell? And then codify the specific actions along the way, Step 1, 2, 3, 4... Detection, Alerts, Rules for Alerts, Actions to Take, etc. Maybe Protein Synthesis, damage, repair, splicing, modification and eventual targeting for recycling. So far, all these excellent posts by you from many different OPs have been looking at Modular Systems and Components. And those are overwhelming when we look at the multiplication of interactions like Ubiquitin.
But if we dial it down to one specific point of functional interaction, and then follow it, maybe that narrows down the thought process of uncovering procedures and rules? Is that fair to say? Along with a variety of interactions of course that grow and influence them. Again, I may have missed some other post in the past. Maybe it could be limited by intentional direction of steps. Leave out some conditional steps, maybe even whole steps at first and fill in those steps later. To keep it from being an overwhelming amount of information at once. Even mark areas "unknown" or TBD (to be discovered). Whatever we don't know - will be fun to discover! Really appreciate all the efforts you provide and explanations. Thanks again. Looking back on your 2015 post was good to review. DATCG
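The "Big Table of Rules" idea sketched in the comment above (Detection → Rule → Signal/Action → re-check, i.e. Do-Until) can be caricatured in a few lines of code. Everything here, names, rules, and state keys, is illustrative scaffolding for the idea, not a biological model:

```python
# A minimal rules-table engine: each Rule pairs a sensor (condition)
# with a response (action); rules fire repeatedly until the state is
# stable, like a "do-until" loop. All rules and state keys are toy
# examples of the mark-then-degrade cycle discussed in this thread.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # detection / sensor
    action: Callable[[dict], None]      # signal + response

def run_until_stable(state: dict, rules: list[Rule], max_rounds: int = 10) -> list[str]:
    """Apply rules until none fires; return the names fired, in order."""
    fired = []
    for _ in range(max_rounds):
        hits = [r for r in rules if r.condition(state)]
        if not hits:
            break
        for r in hits:
            r.action(state)
            fired.append(r.name)
    return fired

# Example: a damaged protein is detected, marked, then degraded.
rules = [
    Rule("mark-for-degradation",
         lambda s: s["damaged"] and not s["ubiquitinated"],
         lambda s: s.update(ubiquitinated=True)),
    Rule("degrade",
         lambda s: s["ubiquitinated"],
         lambda s: s.update(damaged=False, ubiquitinated=False)),
]

state = {"damaged": True, "ubiquitinated": False}
print(run_until_stable(state, rules))  # -> ['mark-for-degradation', 'degrade']
```

The design choice worth noting is that the rules never call each other: each only reads and writes shared state, which is roughly how the thread has been describing cellular signaling, sensors and effectors coupled only through the marks they leave.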
DATCG: Some time ago, I expressed my desire to write an OP about the problem of the "missing procedures". What I meant was (and is) that with all that we know about the genome, and I would add also about epigenetics, we see a wonderful display of intelligent coordination and differentiation of programs and of regulations, but in the end we don't know where and how the real procedures that control all that are written. In software, we have to write down the effectors, but we also need to write down the procedures that control the effectors. In biological software, we know much about the proteins (the effectors), but we understand too little about what controls the ordered and differentiated manifestations of proteins (and other components) in all the various engineered outcomes that we see in cells: above all, the different types of cells and cellular states in multicellular beings. Of course, today we know much more than yesterday. This same thread is evidence of that. And there is the huge field of epigenetics, which has added a lot of understanding. And we know a lot also about cell differentiation. A lot more than we knew, certainly. That's the reason why I have put off, for the moment, dealing with the problem of the missing procedures: I wanted to understand better what we know and what we don't know. And, as I said to Dionisio, that has been, and is, a very "intensive preparation". Because what we know is really amazing. And what we don't know is exponentially more than what we know. However, I continue to believe that the procedures, those which are really essential, those which explain how things really work, are still missing. We know a lot of details, but we never understand what really controls the details.
The fact remains that in that elusive genome, with its 20000+ protein coding genes, and its non coding DNA, plus all the possible information in the cytoplasm or other epigenetic markings, there must be the information sufficient to guide all the various cell differentiations which lead to tissues and organs. And there must be a satisfying answer to the problem of morphogenesis. And there must be an answer to the architecture of systems like the immune system or, even worse, the nervous system of humans, for those who are not satisfied with the blind belief that a minor bundle of nucleotide variations in a few genes is enough to guide and determine the structure of 10^11 interconnected neurons, with all the advanced functions that, undeniably, the human brain seems to possess. Those are the missing procedures. I don't think that we have any good idea of where and how that information is written. But I hope that, as we gather billions and billions of new details (the famous "Big data" problem), sooner or later some major breakthrough will take place. gpuccio
Dionisio, Gpuccio, "Procedures" and future OPs? What procedures? I've missed something after visiting hyper-astronomical dimensions. DATCG
#340 Gpuccio, After going back to the 2015 post, I must have landed in a Wagner hyper-astronomical subspace. Ha! grrr... I lost my original comment in Wagner's hypercube somewhere ;-) and no amount of steps to find it! That was some fun reading! :) I went back into dimensional subspace where everything was connected by a single step ;-) I see the light now, one step here, another there and voila, from pink kittens to pink unicorns! What was so funny is that using Wagner as a defense laid open the failures of neo-Darwinism. Yet the claim was that self-organization and hypothetical dimensions solve the very problem Darwinists claimed did not exist. Interesting! I'd forgotten about that OP of yours, Gpuccio! An interesting look back! And even that I participated in it. Must have dropped off before the talk of astronomical libraries poofed into existence. And today, here we are with more evidence that your positions are rock solid. And where is the blind, unguided hyper-astronomical solution? We know Darwinism is dead, and that the Extended Synthesis is now at play in attempts to hold on to materialist doctrine. And neo-Darwinism is dying on the vine as well, as some form of subset to whatever is next, hyper-astronomical libraries I guess. That is desperation for sure. But then, maybe from the beginning so was Darwin's attempt to write off Design by a series of gradual steps. DATCG
Chapard, C., P. Meraldi, T. Gleich, D. Bachmann, D. Hohl, and M. Huber. “TRAIP Is a Regulator of the Spindle Assembly Checkpoint.” Journal of Cell Science 127, no. 24 (December 15, 2014): 5149–56. https://doi.org/10.1242/jcs.152579. Hoffmann, Saskia, Stine Smedegaard, Kyosuke Nakamura, Gulnahar B. Mortuza, Markus Räschle, Alain Ibañez de Opakua, Yasuyoshi Oka, et al. “TRAIP Is a PCNA-Binding Ubiquitin Ligase That Protects Genome Stability after Replication Stress.” The Journal of Cell Biology 212, no. 1 (January 4, 2016): 63–75. https://doi.org/10.1083/jcb.201506071. Ma, Xingjie, Junjie Zhao, Fan Yang, Haitao Liu, and Weibo Qi. “Ubiquitin Conjugating Enzyme E2 L3 Promoted Tumor Growth of NSCLC through Accelerating P27kip1 Ubiquitination and Degradation.” Oncotarget 8, no. 48 (October 13, 2017). https://doi.org/10.18632/oncotarget.20449. Min, M., T. E. T. Mevissen, M. De Luca, D. Komander, and C. Lindon. “Efficient APC/C Substrate Degradation in Cells Undergoing Mitotic Exit Depends on K11 Ubiquitin Linkages.” Molecular Biology of the Cell 26, no. 24 (December 1, 2015): 4325–32. https://doi.org/10.1091/mbc.E15-02-0102. Nath, Somsubhra, Taraswi Banerjee, Debrup Sen, Tania Das, and Susanta Roychoudhury. “Spindle Assembly Checkpoint Protein Cdc20 Transcriptionally Activates Expression of Ubiquitin Carrier Protein UbcH10.” Journal of Biological Chemistry 286, no. 18 (May 6, 2011): 15666–77. https://doi.org/10.1074/jbc.M110.160671. Iimura, Akira, Fuhito Yamazaki, Toshiyasu Suzuki, Tatsuya Endo, Eisuke Nishida, and Morioh Kusakabe. “The E3 Ubiquitin Ligase Hace1 Is Required for Early Embryonic Development in Xenopus Laevis.” BMC Developmental Biology 16, no. 1 (December 2016). https://doi.org/10.1186/s12861-016-0132-y. Kai, Masatake, Naoto Ueno, and Noriyuki Kinoshita. “Phosphorylation-Dependent Ubiquitination of Paraxial Protocadherin (PAPC) Controls Gastrulation Cell Movements.” Edited by Jung Weon Lee. PLOS ONE 10, no. 1 (January 12, 2015): e0115111. 
https://doi.org/10.1371/journal.pone.0115111. Dionisio
DATCG at #337:
Decipher:
1 – decode
1a – decipher a secret message
3a – to make out the meaning of despite indistinctness or obscurity
3b – to interpret the meaning of

Code:
3a – a system of signals or symbols for communication
3b – a system of symbols (such as letters or numbers) used to represent assigned and often secret meanings
4 – genetic code
5 – instructions for a computer (as within a piece of software)
Wonderful clarification of terms which are often badly used. It's refreshing to see how the subjective experience of meaning is central even in simple definitions. And how the symbolic nature of codes is crystal clear in language. Codes and design are connected from the beginning by their very definitions: neither word makes sense unless we refer in some way to the subjective experience of understanding meanings! gpuccio
Dionisio: "It would definitely add some “spice” to the discussion to have a couple of serious opponents actively participating, but where have they all gone?" I would like to know. Some of them were pretty good! "Has anybody heard of professor Arthur Hunt lately?" Apparently not. "I’m willing to get off this thread if that’s the condition for professor Larry Moran to come back." I don't believe that it would work! :) I don't think that I have ever discussed directly with Larry Moran, even if I have commented about some of his statements a couple of times. "GP as the owner and moderator of this thread will ensure that all “tricky” words are written in bold font so nobody misses their presence in the text." Well, I have never "moderated" anything in my life, I would not like to begin with you! :) I confide in your self-discipline to ensure that all bolds are assigned in a politically correct way. gpuccio
Dionisio: "To illustrate the refreshingly funny assessment of this discussion and its effect on our knowledge, let’s add that around 120 papers have been referenced in this thread so far." Yes. Not bad. And, I would say, almost all rather pertinent. And many of them extremely recent. gpuccio
Dionisio: "BTW, are your OPs part of your intensive preparation for a potentially future OP on “procedures”?" I suppose they are. I must say that the "preparation" is much more "intensive" than I could imagine! :) gpuccio
gpuccio @338: To illustrate the refreshingly funny assessment of this discussion and its effect on our knowledge, let's add that around 120 papers have been referenced in this thread so far. Dionisio
gpuccio @339: That's encouraging. I look forward to reading it someday. Thanks. BTW, are your OPs part of your intensive preparation for a potentially future OP on "procedures"? As UB stated before, if you ever decide to write a book with all your OP + follow up comments, you'll make many happy campers around here and out there! :) Dionisio
gpuccio @338: Very refreshing sense of humor pointing to what's going on here. Thanks. I have learned (and still learning) much from this OP + discussion thread. Much more than I expected at the start, even though my expectations were high. Thanks. Dionisio
gpuccio @341:
Just to sum up, we have seen tons of examples of:
a) Huge functional complexity, and of the highest type, the regulatory type.
b) The ubiquitous presence of refined semiosis, everywhere.
c) Hundreds, maybe thousands, of individual systems exhibiting, each of them, irreducible complexity.
Excellent summary! Thanks. Dionisio
gpuccio @340: Where are DNA_Jock, sparc, and other politely dissenting interlocutors that were so active in your interesting 2015 OP and discussion thread that you pointed at? It would definitely add some "spice" to the discussion to have a couple of serious opponents actively participating, but where have they all gone? :) Has anybody heard of professor Arthur Hunt lately? I'm willing to get off this thread if that's the condition for professor Larry Moran to come back. At least that would reassure him that nobody will ask dishonest questions with "tricky" words like "exactly" subliminally embedded in the questions. GP as the owner and moderator of this thread will ensure that all "tricky" words are written in bold font so nobody misses their presence in the text. :) Dionisio
DATCG at #336: "All the papers you posted and Gpuccio commented on shows overwhelming evidence of Design and planning. At so many different levels of expertise, not only coding, but of engineering which as you've said, we've not seen nothing yet!"

Of course. We have been witnessing here, in this interesting thread, a perfect example of design of the highest kind, a kind that vastly outperforms anything we can yet try to conceive. Just to sum up, we have seen tons of examples of:
a) Huge functional complexity, and of the highest type, the regulatory type.
b) The ubiquitous presence of refined semiosis, everywhere.
c) Hundreds, maybe thousands, of individual systems exhibiting, each of them, irreducible complexity.

But our kind interlocutors seem not to be interested in all that. OK, but they miss a lot of fun! :) gpuccio
DATCG at #336: "Or internal factors with prescribed actionable network systems response." The subject of intelligent and functional algorithmic responses to the environment is fascinating. We certainly have many examples of that. One is well known, and I have written about it in a previous OP: https://uncommondesc.wpengine.com/intelligent-design/antibody-affinity-maturation-as-an-engineering-process-and-other-things/

Antibody affinity maturation is indeed a wonderful example of an algorithmic process which creates important functional information based on the acquisition of information from the environment (the contact with the antigen) and on a highly complex computational process (the maturation process), essentially of the bottom-up type. It is interesting that it has many times been pointed to as an example of "darwinian evolution", which is good evidence of how confused our kind interlocutors sometimes are.

Of course, such a refined computational process is outstanding evidence of design: designed objects can indeed compute new information about a pre-defined function, using new information inputs and their pre-programmed computing resources: that's what computers, or neural networks, do all the time. I think that another system designed to provide that kind of functionality is probably the plasmid system in prokaryotes.

However, computational systems, even computers, always have the same fundamental limit: they can only compute what they have been directly or indirectly programmed to compute, and nothing else. That's why the generation of really new complex functional information always requires a conscious designer. gpuccio
Dionisio: "Maybe the functional complexity of cellular* membranes could be a future topic for an OP? (*) including organelle membrane too" It's certainly a possibility. gpuccio
Dionisio at #131: "Evolution of our understanding of ubiquitin?" Well, our understanding of ubiquitin has certainly "evolved" from the beginning of this thread! :) I suppose it was not a completely unguided process, however. It took some specific work and attempts at understanding by a small group of rather insubordinate people (including me), a lot of not really "natural" selection of papers from the literature, and some effort to express relevant thoughts in the 300+ comments by 4+ commenters in the discussion. I would say that environmental pressure (comments from the other side) had no relevant role in shaping that evolution (indeed, no role at all!). RV certainly was present, mostly in the form of typos, even if our small structure has a very efficient proof checking system (you know what I mean! ;) ) That said, the results are not bad. And I can see a lot of convergent evolution all around! :) gpuccio
Ran a quick search on technology and Ubiquitin to determine how far science and scientists have "evolved" in their use of technology to decode the Ubiquitin Code.

Tracing down linear ubiquitination
New technology enables detailed analysis of target proteins
Date: March 20, 2017
Source: Goethe University Frankfurt
Summary: Researchers have developed a novel technology to decipher the secret ubiquitin code.
Scientists often refer to it as the secret ubiquitin code, which still needs to be fully deciphered. Recently, scientists discovered that ubiquitin molecules are not only assembled in a non-linear manner, but also build linear chains, in which the head of one ubiquitin is linked to the tail of another ubiquitin molecule. So far, only two highly specific enzymes are known capable of synthesizing and degrading such linear ubiquitin chains, and both are being extensively studied at the Institute of Biochemistry II at the Goethe University Frankfurt. However, target proteins of linear ubiquitination, as well as their specific cellular functions, have largely remained elusive. The novel technology developed by the team around Koraljka Husnjak from the Goethe University Frankfurt now enables the systematic analysis of linear ubiquitination targets. "The slow progress in this research area was mainly due to the lack of suitable methods for proteomic analysis of proteins modified with linear ubiquitin chains," explains Koraljka Husnjak whose native country is Croatia. Her team solved the problem by internally modifying the ubiquitin molecule in such a way that it maintains its cellular functions whilst at the same time enabling the enrichment and further analysis of linear ubiquitin targets by mass spectrometry.
This technique was introduced only a year ago. Amazing. Many, many more papers are sure to come in the future using this identification technique.
With this technology at hand, it is now possible to identify target proteins modified by linear ubiquitin, and to detect the exact position within the protein where the linear chain is attached. Scientists praise this highly sensitive approach as an important breakthrough that will strongly improve our understanding of the functions of linear ubiquitination and its role in diseases. Dr. Husnjak already provided the proof of this concept and identified several novel proteins modified by linear ubiquitin chains. Amongst them are essential components of one of the major pro-inflammatory pathways within cells. "Linear ubiquitin chains relay signals that play an important role in the regulation of immune responses, in pathogen defence and immunological disorders. Until now we know very little about how small slips in this system contribute to severe diseases, and how we can manipulate it for therapeutic purposes," says Husnjak of the potential of the new technology. Errors in the ubiquitin system have been linked to numerous diseases including cancer and neurodegenerative disorders such as Parkinson's disease, but also to the development and progression of infections and inflammatory diseases.
Great work by Dr. Husnjak and her team(s) at Goethe University Frankfurt. Katarzyna Kliza, Christoph Taumer, Irene Pinzuti, Mirita Franz-Wachtel, Simone Kunzelmann, Benjamin Stieglitz, Boris Macek & Koraljka Husnjak. Nature Methods volume 14, pages 504–512 (2017) doi:10.1038/nmeth.4228 Scientific paper behind paywall... Nature - Internally tagged ubiquitin: a tool to identify linear polyubiquitin-modified proteins by mass spectrometry

Decipher:
1 - decode
1a - decipher a secret message
3a - to make out the meaning of despite indistinctness or obscurity
3b - to interpret the meaning of

Code:
3a - a system of signals or symbols for communication
3b - a system of symbols (such as letters or numbers) used to represent assigned and often secret meanings
4 - genetic code
5 - instructions for a computer (as within a piece of software)

DATCG
Dionisio @ 327/331 Ha! :) As knowledge increases of Functional Sequence Complexity - inter-Dependent Organized Systems (FSC-iDOS), I think we find "evolve" is an over-hyped term in "evolutionary" biology. Gpuccio always points out Variation? Random Variation. There are deleterious mutations, and then there is the significantly controlled, programmatic, conditional logic of Allowed Variation. Based off environmental qu.... uh stimuli ;-) Or internal factors with prescribed actionable network systems responses. All the papers you posted and Gpuccio commented on show overwhelming evidence of Design and planning. At so many different levels of expertise, not only coding, but of engineering which, as you've said, we've not seen nothing yet! Like "junk" DNA, functions abound in places Darwinists once said had no function... Appendix Might Save Your Life - 2012 SciAm
You may have heard the appendix is vestigial, a relict of our past like the hind leg bones of a whale. Parker heard that too, he just disagrees. Parker thinks the appendix serves as a nature reserve for beneficial bacteria in our guts. When we get a severe gut infection such as cholera (which happened often during much of our history and happens often in many regions even today), the beneficial bacteria in our gut are depleted. The appendix allows them to be restored. In essence, Parker sees the appendix as a sanctuary for our tiny mutualist friends, a place where there is always room at the inn. If he is right, the appendix nurtures beneficial bacteria even as our conscious brains and cultures tell us to kill, kill, kill them with wipes and pills.
"Evolve" "Junk" "Vestigial" You keep using that word, I do not think it means what you think it means. DATCG
Poot, Stefanie A.H. de, Geng Tian, and Daniel Finley. “Meddling with Fate: The Proteasomal Deubiquitinating Enzymes.” Journal of Molecular Biology 429, no. 22 (November 2017): 3525–45. https://doi.org/10.1016/j.jmb.2017.09.015. Boutouja, Fahd, Rebecca Brinkmeier, Thomas Mastalski, Fouzi El Magraoui, and Harald Platta. “Regulation of the Tumor-Suppressor BECLIN 1 by Distinct Ubiquitination Cascades.” International Journal of Molecular Sciences 18, no. 12 (November 27, 2017): 2541. https://doi.org/10.3390/ijms18122541. Dionisio
Layman, Awo A. K., and Paula M. Oliver. “Ubiquitin Ligases and Deubiquitinating Enzymes in CD4 + T Cell Effector Fate Choice and Function.” The Journal of Immunology 196, no. 10 (May 15, 2016): 3975–82. https://doi.org/10.4049/jimmunol.1502660. Skieterska, Kamila, Pieter Rondou, and Kathleen Van Craenenbroeck. “Regulation of G Protein-Coupled Receptors by Ubiquitination.” International Journal of Molecular Sciences 18, no. 12 (April 27, 2017): 923. https://doi.org/10.3390/ijms18050923. Ohtake, Fumiaki, Hikaru Tsuchiya, Yasushi Saeki, and Keiji Tanaka. “K63 Ubiquitylation Triggers Proteasomal Degradation by Seeding Branched Ubiquitin Chains.” Proceedings of the National Academy of Sciences 115, no. 7 (February 13, 2018): E1401–8. https://doi.org/10.1073/pnas.1716673115. Grice, Guinevere L., and James A. Nathan. “The Recognition of Ubiquitinated Proteins by the Proteasome.” Cellular and Molecular Life Sciences 73, no. 18 (September 2016): 3497–3506. https://doi.org/10.1007/s00018-016-2255-5. Dionisio
Dionisio, "... queue" is a word I abuse quite frequently. Think it is a leftover from many visits to London and parts of England and Scotland where I had to stand in a queue :) For some reason my brain mistranslates cue to queue. Not the first time. Maybe I'll just use a different word entirely. Environmental Factors ;-) EFs. or Input. DATCG
Maybe the functional complexity of cellular* membranes could be a future topic for an OP? (*) including organelle membranes too https://www.ncbi.nlm.nih.gov/Structure/pdb/6BMF https://www.ncbi.nlm.nih.gov/Structure/pdb/6AP1 Dionisio
@327:
Since its discovery as a post-translational signal for protein degradation, our understanding of ubiquitin (Ub) has vastly evolved.
Perhaps this is a case where the meaning of the term "evolution" is unanimously accepted? :) Evolution of our understanding of ubiquitin? Dionisio
is this off-topic? not sure...
We present an atomic model of a substrate-bound inner mitochondrial membrane AAA+ quality control protease in yeast, YME1. Our ~3.4-angstrom cryo-electron microscopy structure reveals how the adenosine triphosphatases (ATPases) form a closed spiral staircase encircling an unfolded substrate, directing it toward the flat, symmetric protease ring. Three coexisting nucleotide states allosterically induce distinct positioning of tyrosines in the central channel, resulting in substrate engagement and translocation to the negatively charged proteolytic chamber. This tight coordination by a network of conserved residues defines a sequential, around-the-ring adenosine triphosphate hydrolysis cycle that results in stepwise substrate translocation. A hingelike linker accommodates the large-scale nucleotide-driven motions of the ATPase spiral relative to the planar proteolytic base. The translocation mechanism is likely conserved for other AAA+ ATPases.
Puchades, Cristina, Anthony J. Rampello, Mia Shin, Christopher J. Giuliano, R. Luke Wiseman, Steven E. Glynn, and Gabriel C. Lander. “Structure of the Mitochondrial Inner Membrane AAA+ Protease YME1 Gives Insight into Substrate Processing.” Science 358, no. 6363 (November 3, 2017): eaao0464. https://doi.org/10.1126/science.aao0464.
The hexameric AAA ATPase Vps4 drives membrane fission by remodeling and disassembling ESCRT-III filaments. Building upon our earlier 4.3 Å resolution cryo-EM structure (Monroe et al., 2017), we now report a 3.2 Å structure of Vps4 bound to an ESCRT-III peptide substrate. The new structure reveals that the peptide approximates a β-strand conformation whose helical symmetry matches that of the five Vps4 subunits it contacts directly. Adjacent Vps4 subunits make equivalent interactions with successive substrate dipeptides through two distinct classes of side chain binding pockets formed primarily by Vps4 pore loop 1. These pockets accommodate a wide range of residues, while main chain hydrogen bonds may help dictate substrate-binding orientation. The structure supports a 'conveyor belt' model of translocation in which ATP binding allows a Vps4 subunit to join the growing end of the helix and engage the substrate, while hydrolysis and release promotes helix disassembly and substrate release at the lagging end.
Han, Han, Nicole Monroe, Wesley I Sundquist, Peter S Shen, and Christopher P Hill. “The AAA ATPase Vps4 Binds ESCRT-III Substrates through a Repeating Array of Dipeptide-Binding Pockets.” ELife 6 (November 22, 2017). https://doi.org/10.7554/eLife.31324. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5716660/pdf/elife-31324.pdf
Dionisio
Cellular protein homeostasis is maintained by two major degradation pathways, namely the ubiquitin-proteasome system (UPS) and autophagy. Until recently, the UPS and autophagy were considered to be largely independent systems targeting proteins for degradation in the proteasome and lysosome, respectively. However, the identification of crucial roles of molecular players such as ubiquitin and p62 in both of these pathways as well as the observation that blocking the UPS affects autophagy flux and vice versa has generated interest in studying crosstalk between these pathways. Here, we critically review the current understanding of how the UPS and autophagy execute coordinated protein degradation at the molecular level, and shed light on our recent findings indicating an important role of an autophagy-associated transmembrane protein EI24 as a bridging molecule between the UPS and autophagy that functions by regulating the degradation of several E3 ligases with Really Interesting New Gene (RING)-domains.
Nam, Taewook, Jong Hyun Han, Sushil Devkota, and Han-Woong Lee. “Emerging Paradigm of Crosstalk between Autophagy and the Ubiquitin-Proteasome System.” Molecules and Cells 40, no. 12 (December 31, 2017): 897–905. https://doi.org/10.14348/molcells.2017.0226. http://www.molcells.org/journal/download_pdf.php?doi=10.14348/molcells.2017.0226
Dionisio
The eukaryotic 26S proteasome is a large multisubunit complex that degrades the majority of proteins in the cell under normal conditions. The 26S proteasome can be divided into two subcomplexes: the 19S regulatory particle and the 20S core particle. Most substrates are first covalently modified by ubiquitin, which then directs them to the proteasome. The function of the regulatory particle is to recognize, unfold, deubiquitylate, and translocate substrates into the core particle, which contains the proteolytic sites of the proteasome. Given the abundance and subunit complexity of the proteasome, the assembly of this ~2.5MDa complex must be carefully orchestrated to ensure its correct formation. In recent years, significant progress has been made in the understanding of proteasome assembly, structure, and function. Technical advances in cryo-electron microscopy have resulted in a series of atomic cryo-electron microscopy structures of both human and yeast 26S proteasomes. These structures have illuminated new intricacies and dynamics of the proteasome. In this review, we focus on the mechanisms of proteasome assembly, particularly in light of recent structural information.
Budenholzer, Lauren, Chin Leng Cheng, Yanjie Li, and Mark Hochstrasser. “Proteasome Structure and Assembly.” Journal of Molecular Biology 429, no. 22 (November 2017): 3500–3524. https://doi.org/10.1016/j.jmb.2017.05.027.
Dionisio
Since its discovery as a post-translational signal for protein degradation, our understanding of ubiquitin (Ub) has vastly evolved. Today, we recognize that the role of Ub signaling is expansive and encompasses diverse processes including cell division, the DNA damage response, cellular immune signaling, and even organismal development. With such a wide range of functions comes a wide range of regulatory mechanisms that control the activity of the ubiquitylation machinery. Ub attachment to substrates occurs through the sequential action of three classes of enzymes, E1s, E2s, and E3s. In humans, there are 2 E1s, ~35 E2s, and hundreds of E3s that work to attach Ub to thousands of cellular substrates. Regulation of ubiquitylation can occur at each stage of the stepwise Ub transfer process, and substrates can also impact their own modification. Recent studies have revealed elegant mechanisms that have evolved to control the activity of the enzymes involved. In this minireview, we highlight recent discoveries that define some of the various mechanisms by which the activities of E3-Ub ligases are regulated.
Vittal, Vinayak, Mikaela D. Stewart, Peter S. Brzovic, and Rachel E. Klevit. “Regulating the Regulators: Recent Revelations in the Control of E3 Ubiquitin Ligases.” Journal of Biological Chemistry 290, no. 35 (August 28, 2015): 21244–51. https://doi.org/10.1074/jbc.R115.675165. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4571856/pdf/zbc21244.pdf
Dionisio
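The stepwise E1 → E2 → E3 transfer described in the abstract above can be sketched as a toy pipeline. This is purely illustrative: the tier sizes (2 E1s, ~35 E2s, hundreds of E3s) come from the abstract, while the function names and the string bookkeeping below are hypothetical simplifications, not real biochemistry:

```python
# Toy model of the sequential E1 -> E2 -> E3 ubiquitin transfer cascade.
# The tier sizes (2 E1s, ~35 E2s, hundreds of E3s) are from the abstract
# above; everything else here is an illustrative simplification.

def ubiquitylate(substrate, e1, e2, e3):
    """Pass ubiquitin down the cascade: E1 activates it, E2 carries it
    as a thioester, E3 ligates it onto the substrate."""
    ub = e1("Ub")                       # E1: ATP-dependent activation
    ub_loaded_e2 = e2(ub)               # E2: E2~Ub thioester intermediate
    return e3(substrate, ub_loaded_e2)  # E3: ligation onto the substrate

# Minimal stand-ins for the three enzyme classes:
activate = lambda ub: f"{ub}*"                        # activated ubiquitin
conjugate = lambda ub: f"E2~{ub}"                     # E2~Ub thioester
ligate = lambda s, e2ub: f"{s}-{e2ub.split('~')[1]}"  # substrate-Ub conjugate

print(ubiquitylate("p53", activate, conjugate, ligate))  # -> p53-Ub*
```

The point of routing the substrate through three separate callables is that, as in the real system, specificity can be regulated independently at each tier of the cascade.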
Covalent, reversible, post-translational modification of cellular proteins with the small modifier, ubiquitin (Ub), regulates virtually every known cellular process in eukaryotes. The process is carried out by a trio of enzymes: a Ub-activating (E1) enzyme, a Ub-conjugating (E2) enzyme, and a Ub ligase (E3) enzyme. RING-in-Between-RING (RBR) E3s constitute one of three classes of E3 ligases and are defined by a RING-HECT-hybrid mechanism that utilizes a E2-binding RING domain and a second domain (called RING2) that contains an active site Cys required for the formation of an obligatory E3~Ub intermediate. Albeit a small class, RBR E3s in humans regulate diverse cellular processes. This review focuses on non-Parkin members such as HOIP/HOIL-1L (the only E3s known to generate linear Ub chains), HHARI and TRIAD1, both of which have been recently demonstrated to work together with Cullin RING E3 ligases. We provide a brief historical background and highlight, summarize, and discuss recent developments in the young field of RBR E3s. Insights reviewed here include new understandings of the RBR Ub-transfer mechanism, specifically the role of RING1 and various Ub-binding sites, brief structural comparisons among members, and different modes of auto-inhibition and activation.
Dove, Katja K., and Rachel E. Klevit. “RING-Between-RING E3 Ligases: Emerging Themes amid the Variations.” Journal of Molecular Biology 429, no. 22 (November 2017): 3363–75. https://doi.org/10.1016/j.jmb.2017.08.008.
Dionisio
Protein ubiquitylation is an important post-translational modification, regulating aspects of virtually every biochemical pathway in eukaryotic cells. Hundreds of enzymes participate in the conjugation and deconjugation of ubiquitin, as well as the recognition, signaling functions, and degradation of ubiquitylated proteins. Regulation of ubiquitylation is most commonly at the level of recognition of substrates by E3 ubiquitin ligases. Characterization of the network of E3-substrate relationships is a major goal and challenge in the field, as this is expected to yield fundamental biological insights and opportunities for drug development. There has been remarkable success in identifying substrates for some E3 ligases, in many instances using the standard protein-protein interaction techniques (e.g., two-hybrid screens and co-immunoprecipitations paired with mass spectrometry). However, some E3s have remained refractory to characterization, while others have simply not yet been studied due to the sheer number and diversity of E3s. This review will discuss the range of tools and techniques that can be used for substrate profiling of E3 ligases.
O’Connor, Hazel F., and Jon M. Huibregtse. “Enzyme–substrate Relationships in the Ubiquitin System: Approaches for Identifying Substrates of Ubiquitin Ligases.” Cellular and Molecular Life Sciences 74, no. 18 (September 2017): 3363–75. https://doi.org/10.1007/s00018-017-2529-6.
Dionisio
Ubiquitylation is a tightly regulated process that is essential for appropriate cell survival and function, and the ubiquitin pathway has shown promise as a therapeutic target for several forms of cancer. In this issue of the JCI, Kedves and colleagues report the identification of a subset of gynecological cancers with repressed expression of the polyubiquitin gene UBB, which renders these cancer cells sensitive to further decreases in ubiquitin production by inhibition of the polyubiquitin gene UBC. Moreover, inducible depletion of UBC in mice harboring tumors with low UBB levels dramatically decreased tumor burden and prolonged survival. Together, the results of this study indicate that there is a synthetic lethal relationship between UBB and UBC that has potential to be exploited as a therapeutic strategy to fight these devastating cancers.
Ubiquitin levels: the next target against gynecological cancers? Haakonsen DL, Rape M J Clin Invest. 2017 Dec 1;127(12):4228-4230. doi: 10.1172/JCI98262 https://www.jci.org/articles/view/98262/pdf
Dionisio
Posttranslational modification with ubiquitin chains controls cell fate in all eukaryotes. Depending on the connectivity between subunits, different ubiquitin chain types trigger distinct outputs, as seen with K48- and K63-linked conjugates that drive protein degradation or complex assembly, respectively. Recent biochemical analyses also suggested roles for mixed or branched ubiquitin chains, yet without a method to monitor endogenous conjugates, the physiological significance of heterotypic polymers remained poorly understood. Here, we engineered a bispecific antibody to detect K11/K48-linked chains and identified mitotic regulators, misfolded nascent polypeptides, and pathological Huntingtin variants as their endogenous substrates. We show that K11/K48-linked chains are synthesized and processed by essential ubiquitin ligases and effectors that are mutated across neurodegenerative diseases; accordingly, these conjugates promote rapid proteasomal clearance of aggregation-prone proteins. By revealing key roles of K11/K48-linked chains in cell-cycle and quality control, we establish heterotypic ubiquitin conjugates as important carriers of biological information.
Assembly and Function of Heterotypic Ubiquitin Chains in Cell-Cycle and Protein Quality Control. Yau RG1, Doerner K2, Castellanos ER3, Haakonsen DL1, Werner A2, Wang N4, Yang XW4, Martinez-Martin N5, Matsumoto ML6, Dixit VM7, Rape M Cell. 2017 Nov 2;171(4):918-933.e20. doi: 10.1016/j.cell.2017.09.040
Dionisio
Human gut Bacteroides species produce different types of toxins that antagonize closely related members of the gut microbiota. Some are toxic effectors delivered by type VI secretion systems, and others are non-contact-dependent secreted antimicrobial proteins. Many strains of Bacteroides fragilis secrete antimicrobial molecules, but only one of these toxins has been described to date (Bacteroidales secreted antimicrobial protein 1 [BSAP-1]). In this study, we describe a novel secreted protein produced by B. fragilis strain 638R that mediated intraspecies antagonism. Using transposon mutagenesis and deletion mutation, we identified a gene encoding a eukaryotic-like ubiquitin protein (BfUbb) necessary for toxin activity against a subset of B. fragilis strains. The addition of ubb into a heterologous background strain conferred toxic activity on that strain. We found this gene to be one of the most highly expressed in the B. fragilis genome. The mature protein is 84% similar to human ubiquitin but has an N-terminal signal peptidase I (SpI) signal sequence and is secreted extracellularly. We found that the mature 76-amino-acid synthetic protein has very potent activity, confirming that BfUbb mediates the activity. Analyses of human gut metagenomic data sets revealed that ubb is present in 12% of the metagenomes that have evidence of B. fragilis. As 638R produces both BSAP-1 and BfUbb, we performed a comprehensive analysis of the toxin activity of BSAP-1 and BfUbb against a set of 40 B. fragilis strains, revealing that 75% of B. fragilis strains are targeted by one or the other of these two secreted proteins of strain 638R.
Gut Symbiont Bacteroides fragilis Secretes a Eukaryotic-Like Ubiquitin Protein That Mediates Intraspecies Antagonism. Maria Chatzidaki-Livanis, Michael J. Coyne, Kevin G. Roelofs, Rahul R. Gentyala, Jarreth M. Caldwell, and Laurie E. Comstock. mBio. 2017 Nov-Dec; 8(6): e01902-17. doi: 10.1128/mBio.01902-17 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5705921/pdf/mBio.01902-17.pdf
Dionisio
gpuccio @320: "[...] this is certainly a variant of the ubiquitin concept, but it appears from the beginning and is different from the beginning, and it maintains its difference, because its difference is functional, is specific, and is therefore conserved." Interesting. Thanks. Dionisio
Dionisio at #317: Yes, ubiquitin-like proteins certainly add a lot to the complexity of the system. And the paper you linked is a very good and very recent review of what is known about them. SUMO is one of the most important in the group. SUMO1 is a 101 AAs long protein in humans. Strangely, it does not exhibit a great sequence homology with ubiquitin (13 identities, 33 positives, 29.3 bits, a weakly significant e-value of 9e-07). However, its sequence is highly conserved in eukaryotes. Not so much as ubiquitin, but highly conserved just the same. The human protein shows 47 identities and 66 positives with fungi (102 bits, e-value 2e-27). But those values of homology rapidly increase in metazoa. The protein is one of those which undergo important engineering in vertebrates, passing from 138 to 178 bits of homology, a 0.396 baa jump. The protein in cartilaginous fish shows 84% identities and 92% positives with the human form. This is very strong conservation. So, the obvious point is: SUMO is a ubiquitin-related protein, but it is different: different in sequence, different in functions and functional networks. It has its specific E1-E2-E3 systems. And this "different" protein is already present in single-celled eukaryotes, and is well conserved throughout the whole eukaryotic history. Much more conserved than it is similar to ubiquitin itself. So, what does that mean? It means that this is certainly a variant of the ubiquitin concept, but it appears from the beginning and is different from the beginning, and it maintains its difference, because its difference is functional, is specific, and is therefore conserved. gpuccio
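The 0.396 "baa" figure quoted above is easy to reproduce. This is a minimal Python sketch, assuming that "baa" means BLAST bit score divided by protein length in amino acids (101 for human SUMO1); the bit scores are the ones given in the comment.

```python
# Minimal sketch of the "baa" (bits per aligned amino acid) metric.
# Assumption: baa = BLAST bit score / protein length in AAs.

SUMO1_LENGTH = 101  # human SUMO1 is 101 AAs long

def baa(bit_score: float, length: int = SUMO1_LENGTH) -> float:
    """Bit score normalized by protein length."""
    return bit_score / length

# Homology of the human protein before and after the vertebrate "jump":
jump = baa(178) - baa(138)
print(f"baa jump: {jump:.3f}")  # baa jump: 0.396
```

With these numbers, (178 - 138) / 101 = 0.396, matching the value in the comment.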
The homeostasis of MCPH1 in association with the ubiquitin-proteasome system ensures mitotic entry independent of cell cycle checkpoint.
The E3 ubiquitin ligase APC/C-Cdh1 degrades MCPH1 after MCPH1-βTrCP2-Cdc25A-mediated mitotic entry to ensure neurogenesis. Liu X, Zong W, Li T, Wang Y, Xu X, Zhou ZW, Wang ZQ. EMBO J. 2017 Dec 15;36(24):3666-3681. doi: 10.15252/embj.201694443.
Dionisio
[...] protein turnover by the ubiquitin-proteasome system provides a vital mechanism for the regulation of centrosome protein levels.
APC/CFZR-1 Controls SAS-5 Levels To Regulate Centrosome Duplication in Caenorhabditis elegans. Jeffrey C. Medley, Lauren E. DeMeyer, Megan M. Kabara, and Mi Hye Song. G3 (Bethesda). 2017 Dec; 7(12): 3937–3946. doi: 10.1534/g3.117.300260 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5714490/pdf/3937.pdf
Dionisio
Ubiquitin-like proteins (Ubls) are conjugated to target proteins or lipids to regulate their activity, stability, subcellular localization, or macromolecular interactions. Similar to ubiquitin, conjugation is achieved through a cascade of activities that are catalyzed by E1 activating enzymes, E2 conjugating enzymes, and E3 ligases. In this review, we will summarize structural and mechanistic details of enzymes and protein cofactors that participate in Ubl conjugation cascades. Precisely, we will focus on conjugation machinery in the SUMO, NEDD8, ATG8, ATG12, URM1, UFM1, FAT10, and ISG15 pathways while referring to the ubiquitin pathway to highlight common or contrasting themes. We will also review various strategies used to trap intermediates during Ubl activation and conjugation.
Ubiquitin-like Protein Conjugation: Structures, Chemistry, and Mechanism. Laurent Cappadocia and Christopher D. Lima. Chem Rev. 2018 Feb 14; 118(3): 889–918. doi: 10.1021/acs.chemrev.6b00737 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5815371/pdf/cr6b00737.pdf
Dionisio
DATCG at #307: Great review of the known roles of K63 ubiquitin chains, "the second most abundant form of ubiquitylation"! This kind of ubiquitination is especially interesting because it is usually proteasome-independent (K48 and K11 being the proteasome-linked ubiquitinations). And look at the number of intriguing and complex functions implemented by K63 ubiquitination: modifications of plasma membrane proteins and cargoes, internalization of receptors, sorting to multivesicular bodies, other forms of cell trafficking, signaling pathways, selective autophagy, mitophagy, xenophagy. These are just the main titles of the various sections in the paper, where each of these complex subjects is well summarized according to our present understanding. And all these functions must be added to the multitude of specific functions that the ubiquitin system implements by K48 ubiquitination and proteasomal degradation, as we have discussed in detail previously! :) gpuccio
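The linkage types discussed above (K48, K11, K63) all refer to lysine residues of ubiquitin itself. As a quick sanity check, the seven lysines can be read directly off the 76-AA sequence given at the top of the post; a small Python sketch:

```python
# The seven lysines of ubiquitin, derived from its 76-AA sequence
# (given at the top of the post). Each lysine (K) is a potential
# chain-linkage site: K48 and K11 chains are proteasome-linked,
# while K63 chains are mostly proteasome-independent.

UBIQUITIN = (
    "MQIFVKTLTGKTITLEVEPSDTIENVKAKIQDKEGIPPD"
    "QQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
)

# 1-based positions of every lysine residue
lysines = [i + 1 for i, aa in enumerate(UBIQUITIN) if aa == "K"]

print(lysines)  # [6, 11, 27, 29, 33, 48, 63]
```

The output lists exactly the seven linkage sites (K6, K11, K27, K29, K33, K48, K63) that define the different chain types.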
DATCG at #308: Very interesting paper about TFs and their evolutionary rate! I think the really interesting data are in Fig. 2A, where it is shown that the evolutionary pattern of TFs, as referred to the whole molecule, is strongly related to the number of known TF-TF interactions. The analysis here is done with 1552 TFs, and it is a linear regression, but I suppose that a p value of 5e-36 can never be questioned by anybody! :) That is the true, strong point: TFs which have a high number of interactions with other TFs are highly constrained (IOWs, their whole sequence is strongly conserved). A few comments: a) Just as a clarification for possible readers, the parameter they are using to measure sequence conservation is the dN/dS ratio, which is nothing else than the Ka/Ks ratio (again, nomenclature!) that I have often used in my discussions, IOWs the ratio between non-synonymous mutations (per non-synonymous site) and synonymous mutations (per synonymous site). The lower this value, the higher the sequence conservation. The reference to synonymous mutations makes the measure relatively independent from evolutionary times (at least for evolutionary times which are not too long). b) I am not too sure that the number of TF-TF interactions can be interpreted only as a measure of pleiotropy, IOWs of multiple function. As the working of TFs for one single function is often combinatorial, with many TFs joining in very big protein complexes to achieve the fine tuning of the function itself, I would say that the number of known TF-TF interactions is also a measure of the complexity of the individual functions regulated by those TFs, and not only of the number of functions to which each TF contributes. c) The important point is: TFs are highly functional molecules, and their whole molecule contributes to their function, not only the DBD, or even the known protein interaction domains.
As we have seen, the sequences with "conditional folding" are probably the most important in the final regulatory functions. gpuccio
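Point (a) above can be made concrete with a toy calculation. This is a deliberately simplified sketch: the substitution and site counts below are hypothetical numbers chosen only for illustration, and real estimators (e.g. Nei-Gojobori) add multiple-hit corrections that are omitted here.

```python
# Toy dN/dS (Ka/Ks) calculation. All counts below are hypothetical.
# dN/dS << 1 means purifying selection, i.e. strong sequence conservation.

def dn_ds(nonsyn_subs: int, nonsyn_sites: float,
          syn_subs: int, syn_sites: float) -> float:
    """Non-synonymous substitutions per non-synonymous site, divided by
    synonymous substitutions per synonymous site (no multiple-hit correction)."""
    dN = nonsyn_subs / nonsyn_sites
    dS = syn_subs / syn_sites
    return dN / dS

# A highly constrained TF: very few non-synonymous changes
ratio = dn_ds(nonsyn_subs=4, nonsyn_sites=800.0, syn_subs=30, syn_sites=300.0)
print(f"dN/dS = {ratio:.2f}")  # dN/dS = 0.05 -> strong conservation
```

Dividing by the synonymous rate is what makes the measure relatively time-independent, as noted above: both rates grow with divergence time, so their ratio mostly reflects selective constraint.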
The anaphase-promoting complex (APC/C) is a multimeric RING E3 ubiquitin ligase that controls chromosome segregation and mitotic exit. Its regulation by coactivator subunits, phosphorylation, the mitotic checkpoint complex, and interphase inhibitor Emi1 ensures the correct order and timing of distinct cell cycle transitions. Here, we used cryo-electron microscopy to determine atomic structures of APC/C-coactivator complexes with either Emi1 or a UbcH10-ubiquitin conjugate. These structures define the architecture of all APC/C subunits, the position of the catalytic module, and explain how Emi1 mediates inhibition of the two E2s UbcH10 and Ube2S. Definition of Cdh1 interactions with the APC/C indicates how they are antagonized by Cdh1 phosphorylation. The structure of the APC/C with UbcH10-ubiquitin reveals insights into the initiating ubiquitination reaction. Our results provide a quantitative framework for the design of experiments to further investigate APC/C functions in vivo.
Atomic structure of the APC/C and its mechanism of protein ubiquitination. Chang L, Zhang Z, Yang J, McLaughlin SH, Barford D. Nature. 2015 Jun 25;522(7557):450-454. doi: 10.1038/nature14471. Epub 2015 Jun 15. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4608048/pdf/emss-65381.pdf
Dionisio
The anaphase promoting complex or cyclosome (APC/C) is a large multi-subunit E3 ubiquitin ligase that orchestrates cell cycle progression by mediating the degradation of important cell cycle regulators. During the two decades since its discovery, much has been learnt concerning its role in recognizing and ubiquitinating specific proteins in a cell-cycle-dependent manner, the mechanisms governing substrate specificity, the catalytic process of assembling polyubiquitin chains on its target proteins, and its regulation by phosphorylation and the spindle assembly checkpoint. The past few years have witnessed significant progress in understanding the quantitative mechanisms underlying these varied APC/C functions. This review integrates the overall functions and properties of the APC/C with mechanistic insights gained from recent cryo-electron microscopy (cryo-EM) studies of reconstituted human APC/C complexes.
Visualizing the complex functions and mechanisms of the anaphase promoting complex/cyclosome (APC/C). Claudio Alfieri, Suyang Zhang, and David Barford. Open Biol. 2017 Nov; 7(11): 170204. doi: 10.1098/rsob.170204 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5717348/pdf/rsob-7-170204.pdf
Dionisio
Correct segregation of the mitotic chromosomes into daughter cells is a highly regulated process critical to safeguard genome stability. During M phase the spindle assembly checkpoint (SAC) ensures that all kinetochores are correctly attached before its inactivation allows progression into anaphase. Upon SAC inactivation, the anaphase promoting complex/cyclosome (APC/C) E3 ligase ubiquitinates and targets cyclin B and securin for proteasomal degradation. Here, we describe the identification of Ribonucleic Acid Export protein 1 (RAE1), a protein previously shown to be involved in SAC regulation and bipolar spindle formation, as a novel substrate of the deubiquitinating enzyme (DUB) Ubiquitin Specific Protease 11 (USP11). Lentiviral knock-down of USP11 or RAE1 in U2OS cells drastically reduces cell proliferation and increases multipolar spindle formation. We show that USP11 is associated with the mitotic spindle, does not regulate SAC inactivation, but controls ubiquitination of RAE1 at the mitotic spindle, hereby functionally modulating its interaction with Nuclear Mitotic Apparatus protein (NuMA).
USP11 deubiquitinates RAE1 and plays a key role in bipolar spindle formation. Anna Stockum, Ambrosius P. Snijders, Goedele N. Maertens. PLoS One. 2018; 13(1): e0190513. doi: 10.1371/journal.pone.0190513 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5749825/pdf/pone.0190513.pdf
Dionisio
In the dividing eukaryotic cell the spindle assembly checkpoint (SAC) ensures each daughter cell inherits an identical set of chromosomes. The SAC coordinates the correct attachment of sister chromatid kinetochores to the mitotic spindle with activation of the anaphase-promoting complex/cyclosome (APC/C), the E3 ubiquitin ligase that initiates chromosome separation. In response to unattached kinetochores, the SAC generates the mitotic checkpoint complex (MCC), a multimeric assembly that inhibits the APC/C, delaying chromosome segregation. Conformational variability of the complex allows for UbcH10 association, and we show from a structure of APC/C-MCC in complex with UbcH10 how the Cdc20 subunit intrinsic to the MCC (Cdc20-MCC) is ubiquitinated, a process that results in APC/C reactivation when the SAC is silenced.
Molecular basis of APC/C regulation by the spindle assembly checkpoint. Claudio Alfieri, Leifu Chang, Ziguo Zhang, Jing Yang, Sarah Maslen, Mark Skehel, and David Barford. Nature. 2016 Aug 25; 536(7617): 431–436. doi: 10.1038/nature19083 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5019344/pdf/emss-69174.pdf
Dionisio
gpuccio @293:
That the same regulation system, which is in itself as complex and multi-faceted as we have seen in this thread, can at the same time be the key regulator in such different processes (together with all the others we have discussed) is IMO mind-boggling. I really cannot imagine any way to put such a system to work with any bottom-up strategy, even if designed. You really need a strict top-down engineering to get that kind of results. And even then, you need an unbelievable attention to details and connections between systems, and a perfect control of the symbolic system you are using. The cross-talk between inner compartments in the cell is really a fascinating issue: too often we think of the cell as some rather homogeneous environment, or we just acknowledge the separation between nucleus and cytoplasm in eukaryotes. But the cytoplasm is anything but homogeneous. Organelles are separated by membranes, and membranes, as we have seen, are dynamic tools which are almost magically shaped and reshaped by complex molecular systems. And, even where membranes are not present, a lot of functional sub-sections can be dynamically assessed, continuously created and destroyed, shaping a functional landscape of cytoplasmic events which we still have to start to understand: second messengers, signaling pathways, and so on. No part of cytoplasm is the same as any other, each landscape is unique and functional.
Agree. Beyond fascinating. Dionisio
DATCG @308: "[...] there are Constraints, conserved regions and hot spots meant for rapid evolution according to environmental queues, or stress, but very limited in novel forms." environmental queues? huh? did you mean "cues"? Dionisio
Gpuccio, a bit off topic again, but I thought it's related as well in so many ways, as we keep being amazed by all the intricate interactions and interdependency of so many systems working and coordinating together with Ubiquitin and multiple functions of different genes and proteins. When I came across this, it made me think of you and TFs. And this shows TFs constrain evolution. Transcription Factors, Pleiotropy and Constraints on Evolution hattip: Jeffery Tompkins PhD - ICR.org My overall thoughts are there are Constraints, conserved regions and hot spots meant for rapid evolution according to environmental queues, or stress, but very limited in novel forms. Like finch beaks. Sure, they get large or small based upon seasons, rain, droughts, but overall body plan, the bird is still a bird. And I simply cannot imagine the Ubiquitin System allowing for much more change. What level of constraint is the Ubiquitin System on evolution? And how would that begin to be measured? Cross-posted this here at UD post: Best guesses fail with plant evolution DATCG
Special Issue "Protein Ubiquitination" 14 papers... published 2014. http://www.mdpi.com/journal/cells/special_issues/protein_ubiquitination One of interest is: Versatile Roles of K63-Linked Ubiquitin Chains in Trafficking Zoi Erpapazoglou 1,2, Olivier Walker 3 and Rosine Haguenauer-Tsapis 1,* http://www.mdpi.com/2073-4409/3/4/1027 Abstract
Modification by Lys63-linked ubiquitin (UbK63) chains is the second most abundant form of ubiquitylation. In addition to their role in DNA repair or kinase activation, UbK63 chains interfere with multiple steps of intracellular trafficking. UbK63 chains decorate many plasma membrane proteins, providing a signal that is often, but not always, required for their internalization. In yeast, plants, worms and mammals, this same modification appears to be critical for efficient sorting to multivesicular bodies and subsequent lysosomal degradation. UbK63 chains are also one of the modifications involved in various forms of autophagy (mitophagy, xenophagy, or aggrephagy). Here, in the context of trafficking, we report recent structural studies investigating UbK63 chains assembly by various E2/E3 pairs, disassembly by deubiquitylases, and specifically recognition as sorting signals by receptors carrying Ub-binding domains, often acting in tandem. In addition, we address emerging and unanticipated roles of UbK63 chains in various recycling pathways that function by activating nucleators required for actin polymerization, as well as in the transient recruitment of signaling molecules at the plasma or ER membrane. In this review, we describe recent advances that converge to elucidate the mechanisms underlying the wealth of trafficking functions of UbK63 chains.
Keywords of the publication: - ubiquitin - ubiquitin chain - Sumo - 26S proteasome - protein stability - protein localization - E3 ligases - cellular regulation - signal transduction - development Example of Trafficking Steps involving UbK63 chains... http://www.mdpi.com/cells/cells-03-01027/article_deploy/html/images/cells-03-01027-g002.png
"It is now clear that an expanding list of mammalian membrane proteins are modified by UbK63 chains at the plasma membrane (Table S1)."
DATCG
#303, "Maybe we are in some kind of niche market…" not by accident, only by Design ;-) BTW, many readers might be searching for these papers we have all listed and come across this site as well. I've noticed several times that UD gets listed fairly high, sometimes even on the first page of organic search results for past references. Of course the searches are usually highly specific, long-tail SEO type searches. DATCG
#304, hahahaha... you caught that, did you? ;-) here's an image representation of our little friend Cdc48, ubiquitin, retrotranslocation complex, proteasome, ERAD-C & ERAD-L. No tinker toys here ;-) She would have to bring a bigger box! http://www.cell.com/cms/attachment/614723/4947446/gr3.jpg and interestingly, another representation from August 2014 of threading and Cdc48 regulation to the proteasome. Notice all the Question Marks at the end of each explanation. Not sure if that's an error or just a valid "we don't know for sure"... http://www.mdpi.com/cells/cells-03-00824/article_deploy/html/images/cells-03-00824-ag.png and the associated paper... Regulation of Endoplasmic Reticulum-Associated Protein Degradation (ERAD) by Ubiquitin http://www.mdpi.com/2073-4409/3/3/824/htm DATCG
DATCG at #302: The thing I like most in the Tinker toys video is how she tries to be as precise as possible in wrapping the string around the wooden nucleosome, so that it is more or less 1.67 turns! (OK, more or less...) :) gpuccio
Dionisio, DATCG: "FYI – the prolific Italian composer GP got another ‘song’ in the ‘hit parade’ top 5 in the first 3 weeks since its release!" Well, it would have been impossible without your constant support! :) "This is interesting because the ‘pure science’ genre doesn’t seem very popular in this world. Who are those anonymous readers?" Maybe we are in some kind of niche market... gpuccio
Ubiquitin Chain formation, simplified overview shown by Fun with Tinker toys, multiple Chain positions... https://www.youtube.com/watch?v=miZYmuDKO2s Lecture in the video below on Ubiquitin and Autophagy; you can go to the 4:20 mark for a quick look. The lecturer states about 90% of proteins are controlled by one of these two systems. Main message for good health? Don't stop exercising! Resting for too long activates the proteolytic systems of Ubiquitin and autophagy! Muscles become weaker... as Contractile Proteins are removed. https://www.youtube.com/watch?v=tliw477USx0 DATCG
#297 Dionisio, and congrats to the systems ID review to the Mastro Gpuccio for another Top 5 composition :) DATCG
#298-188... I think someone mentioned it before ;-) Might be a good job to get into, high demand ;-) and good pay for sure! DATCG
#297 Curious Bio-technophiles maybe? ;-) Woot, wooot... Which reminds me, I was going to post something the other day on ER and cellular structures - organelles - and your postings reminded me, I like pictures ;-) or videos. And I'm guessing some readers and lurkers do as well. There are many fine examples on youtube, but this is a good start and people can then see all the other choices should they like to learn more refined knowledge of each structure. This video shows Eukaryote and Prokaryote cells. Including a special guest performance by the irreducibly complex flagella ;-) https://www.youtube.com/watch?v=URUJD5NEXC8 Protein Synthesis... https://www.youtube.com/watch?v=kmrUzDYAmEI and maybe more later. DATCG
DATCG @294: "...is there time to review all of these ubiquitin-related networks and functions?" Good question. Have you heard of the "Big Data Problem in Biology"? Dionisio
DATCG, FYI - the prolific Italian composer GP got another 'song' in the 'hit parade' top 5 in the first 3 weeks since its release! This is interesting because the 'pure science' genre doesn't seem very popular in this world. Who are those anonymous readers? :)
Popular Posts (Last 30 Days)
News-watch: yet another incident of mass violence in FL, USA (1,980)
My conclusion (so far) on the suggested infinite past,… (1,692)
Stephen Hawking continues to talk widely celebrated nonsense (1,409)
Becky’s Lesson, a Viginette (1,321)
The Ubiquitin System: Functional Complexity and Semiosis… (1,282)
Dionisio
off-topic, lncRNA treatments, although I assume, somewhere ubiquitin is in the pudding ;-)
Through this approach, the team identified 570 lncRNA molecules that were expressed differently in healthy and cancerous tissues. Further, they were able to uncover 633 previously unknown biomarkers that could act as predictive tools for 14 cancer types. The team then used this knowledge to try to treat mice that had been grafted with human lung cancer tissue. They injected each mouse with an agent that blocked the activity of the relevant lncRNA (locked nucleic acid antisense oligonucleotides) twice a week and examined the effects to the tumours. They found that within 15 days, their treatment had led to a tumour size reduction of almost 50%.
Epigenetics, what once was thought to be Junk turns out to be crucial for optimized health and a key part of solving health issues. http://www.frontlinegenomics.com/news/20048/noncoding-rnas-implicated-lung-cancer/ DATCG
#293 Gpuccio, interesting... It is nevertheless emerging that cell compartmentalization is also achieved by steady-state membrane-less assemblies in the nucleus, such as nucleoli, Cajal bodies and nuclear speckles, and in the cytoplasm, such as RNA based C. elegans P-granules, P-bodies, ribosomes, as well as others that do not contain RNA, like centrosome, proteasome and aggresome (Rajan et al., 2001). In this case, what is "steady-state" referring to? Also, here is "intrinsically disordered domains":
They are flexible and have the propensity to adopt a large range of conformations. These proteins often display intrinsically disordered domains that have low complexity sequences (Huntley and Golding, 2002). Low complexity sequences are regions of poor amino-acid diversity, such as repeats of certain amino-acids (Q, N, S, G, Y, R) in prion-like domains (Alberti et al., 2009), repeats of alternating charges, such as RG, and other domains without regular sequences.
Then ubiquitination...
Last, it is also clear that stress specific post-translational modifications promote the formation of membrane-less compartments in vivo and phase separation in vitro (Han et al., 2012; Kato et al., 2012). This is case for SUMOylation and phosphorylation (Banani et al., 2016), ubiquitination for proteasome storage granules (Peters et al., 2013), poly-ADP ribosylation (Leung et al., 2011) and mono-ADP-ribosylation (Aguilera-Gomez et al., 2016). Conversely, arginine methylation by PRMT1 has been shown to be inhibitory (Jun et al., 2017; Nott et al., 2015), and phosphorylation by DYRK3 leads to stress granule dissolution (Wippich et al., 2013).
Reaction to stress, creation of stress-induced solutions, reversible(!) after the stress period is finished. Amazing stuff! And this is all done in a "compartment-less" area through what I assume is a unique solution? Not sure if this is replicated in prokaryotes, or unique to eukaryotes?
I really cannot imagine any way to put such a system to work with any bottom-up strategy, even if designed. You really need a strict top-down engineering to get that kind of results. And even then, you need an unbelievable attention to details and connections between systems, and a perfect control of the symbolic system you are using.
Great thing is, if it's designed, molecular engineers, communications and network engineers, coders, etc., can reverse engineer it :) Which is why Design Theory is a better heuristic going forward! Darwin is dead, neo-Darwinism is too as an overall solution guide, and only ancillary mutations it appears, mostly deleterious or weak form of survival mechanism. oh wow...
In Drosophila cells, the stress of amino-acid starvation also inhibits protein transport through the secretory pathway (Zacharogianni et al., 2011) and leads to the remodeling of the ERES components into a novel membrane-less stress assembly, the Sec body (Zacharogianni et al., 2014). During the period of stress, Sec bodies store and protect most of the COPII components and Sec16 from degradation. They are round and display FRAP properties compatible with having liquid droplet properties. Importantly, they are pro-survival and rapidly disassemble upon stress relief. When stress is relieved, Sec bodies rapidly dissolve releasing their functional components that resume protein transport (Zacharogianni et al., 2014).
Yep a system spontaneously generated with stress reaction mediation and then dispersion and back to normal. Sure... from abiogenesis to coordinated systems networking. DATCG
Dionisio, Gpuccio, When I began looking at ERAD, and translocation, or retrotranslocation, I was like, wow, wow, wow... when you posted on retroChaperones. Great papers Dionisio, now when is there time to review all of these ubiquitin-related networks and functions? :) DATCG
Dionisio at #278 and 279: These two processes of ERAD (ER-associated degradation, with associated retrotranslocation) and Mitochondrial fusion are really surprising. They are both critically dependent on an unexpected dynamin plasticity of membranes in inner organelles. I am not surprised at all, instead, that the related mechanisms and controls remain "poorly understood" or "elusive". But ubiquitin certainly plays a major role in both. Now, that's really, really weird! That the same regulation system, which is in itself as complex and multi-faceted as we have seen in this thread, can at the same time be the key regulator in such different processes (together with all the others we have discussed) is IMO mind-boggling. I really cannot imagine any way to put such a system to work with any bottom-up strategy, even if designed. You really need a strict top-down engineering to get that kind of results. And even then, you need an unbelievable attention to details and connections between systems, and a perfect control of the symbolic system you are using. The cross-talk between inner compartments in the cell is really a fascinating issue: too often we think of the cell as some rather homogeneous environment, or we just acknowledge the separation between nucleus and cytoplasm in eukaryotes. But the cytoplasm is anything but homogeneous. Organelles are separated by membranes, and membranes, as we have seen, are dynamic tools which are almost magically shaped and reshaped by complex molecular systems. And, even where membranes are not present, a lot of functional sub-sections can be dynamically assessed, continuously created and destroyed, shaping a functional landscape of cytoplasmic events which we still have to start to understand: second messengers, signaling pathways, and so on. No part of cytoplasm is the same as any other, each landscape is unique and functional.
See, for example, here: Membrane-bound organelles versus membrane-less compartments and their control of anabolic pathways in Drosophila https://www.sciencedirect.com/science/article/pii/S0012160617300131 (Public access)
Abstract Classically, we think of cell compartmentalization as being achieved by membrane-bound organelles. It has nevertheless emerged that membrane-less assemblies also largely contribute to this compartmentalization. Here, we compare the characteristics of both types of compartmentalization in term of maintenance of functional identities. Furthermore, membrane less-compartments are critical for sustaining developmental and cell biological events as they control major metabolic pathways. We describe two examples related to this issue in Drosophila, the role of P-bodies in the translational control of gurken in the Drosophila oocyte, and the formation of Sec bodies upon amino-acid starvation in Drosophila cells.
gpuccio
Gpuccio, I am reminded now of the Open Access paper you referenced at #89, Ubiquitin Enzymes in the Regulation of Immune Responses, and Figure 3... ;-) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5490640/figure/F0003/ So, yeah, whew... trying to impact these steps, rules and interactions is mind-boggling. So, trying to think through this, we have multiple checks and balances on disease-fighting systems, heavily regulated by Ubiquitin Systems and DUBs, etc. Maybe I've conflated the two in my rush to think of different solutions. Though I'm guessing SNPs can cause problems for E1, E2, and E3 steps breaking down, then the cascade of steps following. OK, apologies for going too far off topic. DATCG
Follow-up to #290... Recognizing different solutions may or may not be advantageous, dependent upon different mechanisms within cells and error correction features. I was thinking one reason to use Error Correction is that it already exists as a functional step. It would be like adding another Conditional Check? Maybe. But I recognize researchers do not know all the steps needed to simply add a new check feature (or override) at this time. Nor do researchers know all the rules, especially in human cells. But the Harvard article gives me hope. DATCG
#267 Gpuccio, follow-up to 287, I may be asking the wrong questions and be mistaken on pathways to tumor cells. If so, it explains my confusion about a Proteasome "lockout" solution to stop cancer cells from growing and to eliminate them. I do recognize it is a solution, but I was thinking there might be more efficient methods with fewer side effects in the treatment of multiple myeloma than Kyprolis. That is just an example; I am by no means trying to target any specific medication. Certainly it works in a certain percentage of patients. But it can lead to other consequences in patients. So it's knocking out one problem, but creating another. I don't know of better solutions, but I am thinking that if we were to look at the different pathways to the cancerous cells, what logical points along the way would we find the breakdown (deleterious mutations), and then see if there's an alternative to a blocking attempt in the proteasome. Maybe a recognition of the mutation prior to the signal for degradation is a way. Not easy, but maybe in the breakdown of systems immunity, there's a missing conditional check of mutations by error correction. Is it even feasible to think of adding such a new "check" for error correction? And then what would be the downside of doing so? If there's a SNP, point mutation, or... well, in searching I came across a Stop Codon mutation and a way to correct it in research done in yeast. It's a fairly good review of why this is so difficult as well. http://sitn.hms.harvard.edu/flash/2011/issue97/ These are broad and difficult questions, I know, or researchers would already have the answers. DATCG
Playing devil's advocate since we have no participation by opponents to Design in favor of a blind, unguided "process," I'm putting my Hunter-cap on. You may want to Google it. ;-) (ps: if this is considered too off-topic we can discuss another time) The challenge.. "The hslV protein has been hypothesized to resemble the likely ancestor of the 20S proteasome. HslV is generally not essential in bacteria, and not all bacteria possess it, while some protists possess both the 20S and the hslV systems." So, is hslV a possible ancestor to the 20S Proteasome? Could it be? Might it be? These scientists may have found a likely candidate which might be related to an ancestral gene, which could be a breakthrough in understanding the possible evolution of the Proteasome by a natural sequence of events through a gradual process of random mutations and natural selection. Evolution of Proteasome Regulators in Eukaryotes
The 20S (alpha) and (beta) subunits share structural similarity and likely originated from an ancestral gene that duplicated before the divergence of archaea and eukaryotes (Gille et al. 2003). In contrast to the 20S proteasome, the evolutionary history of PAs remains fragmentary and scattered. Here, we present a comprehensive view of the evolution of the three types of activators and of PI31 from archaeal to eukaryotic lineages, using the classification of eukaryotes recently revised by Adl et al. (2012). We examined genomic data available for a total of 17 clades, spreading over 3.5 billion years of evolution and covering archaea and most of the eukaryote supergroups, that is, Opisthokonta (including Metazoans, Choanoflagellida, Ichthyosporea, and Fungi), Amoebozoans, Excavates (including Metamonads [Diplomonadida and Parabasalia] and Discoba [Heterolobosea and Euglenozoa/Kinetoplastids]), Archaeplastida (Chloroplastida and Rhodophyceae), SAR (Stramenopiles, Alveolates, and Rhizaria), and two unclassified clades, Cryptophyta and Haptophyta, previously classified as Chromalveolates with the SAR group. We show that the full current repertoire of proteasome regulators was already present in the last eukaryotic common ancestor (LECA) and has subsequently evolved through independent duplication/loss events in specific lineages.
DATCG
ah, educational portal of PDB http://pdb101.rcsb.org/browse and NCBI's 3D viewer.. https://www.ncbi.nlm.nih.gov/Structure/icn3d/full.html?complexity=3&buidx=1&showseq=1&mmdbid=60755 DATCG
Dionisio, you are on a roll :) LOL @RollingStone reference. DATCG
#267 "Yes, but unfortunately sometimes it's easier to build something again than to repair it." Oh, like what you're pointing out. So, "naturally" speaking or by Design, we have multiple routes to organized redistribution and/or total destruction of proteins. The proteasome itself is not total destruction of all cellular matter, correct? It's not a garbage disposal per se, as an apt analogy? The proteins, misfolded, etc., go in and are broken down to component parts that can then be recycled for new parts, correct? I'm bypassing or leaving out the full spectrum. But there's apoptosis and other methods as well. To add, we are expected to believe that a system decision like this (to prevent proteolysis, or to allow it and then recycle) is by a blind, unguided RM & NS "process." "The problem with neoplastic cells is that, once the initial transformation takes place, a lot of further mutations or functional impairments are very likely to follow." Agree! "That's also the reason for resistance to therapy in relapsed neoplasias." Agree again. So my question is, what is the correct terminology in molecular biology? I used "upstream", meaning to a) detect, b) correct the problem prior to neoplasia. Is that too difficult? Does the current correction process miss critical points of mutation? And can it be... hmmm, helped to recognize them? I may be assuming too much to take on here from an overall systems perspective. DATCG
Deregulation of centriole duplication has been implicated in cancer and primary microcephaly. Accordingly, it is important to understand how key centriole duplication factors are regulated. E3 ubiquitin ligases have been implicated in controlling the levels of several duplication factors, including PLK4, STIL and SAS-6, but the precise mechanisms ensuring centriole homeostasis remain to be fully understood. Here, we have combined proteomics approaches with the use of MLN4924, a generic inhibitor of SCF E3 ubiquitin ligases, to monitor changes in the cellular abundance of centriole duplication factors. We identified human STIL as a novel substrate of SCF-βTrCP. The binding of βTrCP depends on a DSG motif within STIL, and serine 395 within this motif is phosphorylated in vivo. SCF-βTrCP-mediated degradation of STIL occurs throughout interphase and mutations in the DSG motif causes massive centrosome amplification, attesting to the physiological importance of the pathway. We also uncover a connection between this new pathway and CDK2, whose role in centriole biogenesis remains poorly understood. We show that CDK2 activity protects STIL against SCF-βTrCP-mediated degradation, indicating that CDK2 and SCF-βTrCP cooperate via STIL to control centriole biogenesis. Arquint, Christian & Cubizolles, Fabien & Morand, Agathe & Schmidt, Alexander & Nigg, Erich. (2018). The SKP1-Cullin-F-box E3 ligase βTrCP and CDK2 cooperate to control STIL abundance and centriole number. Open Biology. 8. 170253. 10.1098/rsob.170253. https://www.researchgate.net/profile/Alexander_Schmidt3/publication/323170017_The_SKP1-Cullin-F-box_E3_ligase_bTrCP_and_CDK2_cooperate_to_control_STIL_abundance_and_centriole_number/links/5aa1389da6fdcc22e2d10921/The-SKP1-Cullin-F-box-E3-ligase-bTrCP-and-CDK2-cooperate-to-control-STIL-abundance-and-centriole-number.pdf Dionisio
Ubiquitin-specific protease 15 (USP15) is a widely expressed deubiquitylase that has been implicated in diverse cellular processes in cancer. Here we identify topoisomerase II (TOP2A) as a novel protein that is regulated by USP15. TOP2A accumulates during G2 and functions to decatenate intertwined sister chromatids at prophase, ensuring the replicated genome can be accurately divided into daughter cells at anaphase. We show that USP15 is required for TOP2A accumulation, and that USP15 depletion leads to the formation of anaphase chromosome bridges. These bridges fail to decatenate, and at mitotic exit form micronuclei that are indicative of genome instability. We also describe the cell cycle-dependent behaviour for two major isoforms of USP15, which differ by a short serine-rich insertion that is retained in isoform-1 but not in isoform-2. Although USP15 is predominantly cytoplasmic in interphase, we show that both isoforms move into the nucleus at prophase, but that isoform-1 is phosphorylated on its unique S229 residue at mitotic entry. The micronuclei phenotype we observe on USP15 depletion can be rescued by either USP15 isoform and requires USP15 catalytic activity. Importantly, however, an S229D phospho-mimetic mutant of USP15 isoform-1 cannot rescue either the micronuclei phenotype, or accumulation of TOP2A. Thus, S229 phosphorylation selectively abrogates this role of USP15 in maintaining genome integrity in an isoform-specific manner. Finally, we show that USP15 isoform-1 is preferentially upregulated in a panel of non-small cell lung cancer cell lines, and propose that isoform imbalance may contribute to genome instability in cancer. Our data provide the first example of isoform-specific deubiquitylase phospho-regulation and reveal a novel role for USP15 in guarding genome integrity. Fielding, Andrew & Concannon, Matthew & Darling, Sarah & V. Rusilowicz-Jones, Emma & Sacco, Joseph & Prior, Ian & J. Clague, Michael & Urbé, Sylvie & Coulson, Judy. (2018). 
The deubiquitylase USP15 regulates topoisomerase II alpha to maintain genome integrity. Oncogene. 10.1038/s41388-017-0092-0. https://www.researchgate.net/publication/323127826_The_deubiquitylase_USP15_regulates_topoisomerase_II_alpha_to_maintain_genome_integrity/fulltext/5a81cb2aa6fdcc6f3ead658d/323127826_The_deubiquitylase_USP15_regulates_topoisomerase_II_alpha_to_maintain_genome_integrity.pdf Dionisio
DATCG and gpuccio, Please, be alert for repeated references. I may have messed up some required steps in the Zotero rules, causing some papers to get posted twice by mistake. Just raise a red flag if you notice such a case. Thanks. Dionisio
Post-translational modification of proteins by ubiquitylation is increasingly recognised as a highly complex code that contributes to the regulation of diverse cellular processes. In humans, a family of almost 100 deubiquitylase enzymes (DUBs) are assigned to six subfamilies and many of these DUBs can remove ubiquitin from proteins to reverse signals. Roles for individual DUBs have been delineated within specific cellular processes, including many that are dysregulated in diseases, particularly cancer. As potentially druggable enzymes, disease-associated DUBs are of increasing interest as pharmaceutical targets. The biology, structure and regulation of DUBs have been extensively reviewed elsewhere, so here we focus specifically on roles of DUBs in regulating cell cycle processes in mammalian cells. Over a quarter of all DUBs, representing four different families, have been shown to play roles either in the unidirectional progression of the cell cycle through specific checkpoints, or in the DNA damage response and repair pathways. We catalogue these roles and discuss specific examples. Centrosomes are the major microtubule nucleating centres within a cell and play a key role in forming the bipolar mitotic spindle required to accurately divide genetic material between daughter cells during cell division. To enable this mitotic role, centrosomes undergo a complex replication cycle that is intimately linked to the cell division cycle. Here, we also catalogue and discuss DUBs that have been linked to centrosome replication or function, including centrosome clustering, a mitotic survival strategy unique to cancer cells with supernumerary centrosomes. Darling, Sarah & Fielding, Andrew & Sabat-Pośpiech, Dorota & Prior, Ian & Coulson, Judy. (2017). Regulation of the cell cycle and centrosome biology by deubiquitylases. Biochemical Society Transactions. 45. BST20170087. 10.1042/BST20170087. 
http://www.biochemsoctrans.org/content/early/2017/09/07/BST20170087.full-text.pdf Dionisio
For over a century, the abnormal movement or number of centrosomes has been linked with errors of chromosomes distribution in mitosis. While not essential for the formation of the mitotic spindle, the presence and location of centrosomes has a major influence on the manner in which microtubules interact with the kinetochores of replicated sister chromatids and the accuracy with which they migrate to resulting daughter cells. A complex network has evolved to ensure that cells contain the proper number of centrosomes and that their location is optimal for effective attachment of emanating spindle fibers with the kinetochores. The components of this network are regulated through a series of post-translational modifications, including ubiquitin and ubiquitin-like modifiers, which coordinate the timing and strength of signaling events key to the centrosome cycle. In this review, we examine the role of the ubiquitin system in the events relating to centriole duplication and centrosome separation, and discuss how the disruption of these functions impacts chromosome segregation. Zhang, Ying, and Paul J. Galardy. “Ubiquitin, the Centrosome, and Chromosome Segregation.” Chromosome Research 24, no. 1 (January 2016): 77–91. https://doi.org/10.1007/s10577-015-9511-7. https://www.researchgate.net/profile/Paul_Galardy/publication/287971598_Ubiquitin_the_centrosome_and_chromosome_segregation/links/5759653208ae9a9c954ed1f7/Ubiquitin-the-centrosome-and-chromosome-segregation.pdf Dionisio
A conserved AAA+ ATPase, called Cdc48 in yeast and p97 or VCP in metazoans, plays an essential role in many cellular processes by segregating polyubiquitinated proteins from complexes or membranes. For example, in endoplasmic reticulum (ER)-associated protein degradation (ERAD), Cdc48/p97 pulls polyubiquitinated, misfolded proteins out of the ER and transfers them to the proteasome. Cdc48/p97 consists of an N-terminal domain and two ATPase domains (D1 and D2). Six Cdc48 monomers form a double-ring structure surrounding a central pore. Cdc48/p97 cooperates with a number of different cofactors, which bind either to the N-terminal domain or to the C-terminal tail. The mechanism of Cdc48/p97 action is poorly understood, despite its critical role in many cellular systems. Recent in vitro experiments using yeast Cdc48 and its heterodimeric cofactor Ufd1/Npl4 (UN) have resulted in novel mechanistic insight. After interaction of the substrate-attached polyubiquitin chain with UN, Cdc48 uses ATP hydrolysis in the D2 domain to move the polypeptide through its central pore, thereby unfolding the substrate. ATP hydrolysis in the D1 domain is involved in substrate release from the Cdc48 complex, which requires the cooperation of the ATPase with a deubiquitinase (DUB). Surprisingly, the DUB does not completely remove all ubiquitin molecules; the remaining oligoubiquitin chain is also translocated through the pore. Cdc48 action bears similarities to the translocation mechanisms employed by bacterial AAA ATPases and the eukaryotic 19S subunit of the proteasome, but differs significantly from that of a related type II ATPase, the NEM-sensitive fusion protein (NSF). Many questions about Cdc48/p97 remain unanswered, including how it handles well-folded substrate proteins, how it passes substrates to the proteasome, and how various cofactors modify substrates and regulate its function.
Bodnar, Nicholas, and Tom Rapoport. “Toward an Understanding of the Cdc48/P97 ATPase.” F1000Research 6 (August 3, 2017): 1318. https://doi.org/10.12688/f1000research.11683.1.
Dionisio
Mitochondrial integrity relies on homotypic fusion between adjacent outer membranes, which is mediated by large GTPases called mitofusins. The regulation of this process remains nonetheless elusive. Here, we report a crosstalk between the ubiquitin protease Ubp2 and the ubiquitin ligases Mdm30 and Rsp5 that modulates mitochondrial fusion. Ubp2 is an antagonist of Rsp5, which promotes synthesis of the fatty acids desaturase Ole1. We show that Ubp2 also counteracts Mdm30-mediated turnover of the yeast mitofusin Fzo1 and that Mdm30 targets Ubp2 for degradation thereby inducing Rsp5-mediated desaturation of fatty acids. Exogenous desaturated fatty acids inhibit Ubp2 degradation resulting in higher levels of Fzo1 and maintenance of efficient mitochondrial fusion. Our results demonstrate that the Mdm30-Ubp2-Rsp5 crosstalk regulates mitochondrial fusion by coordinating an intricate balance between Fzo1 turnover and the status of fatty acids saturation. This pathway may link outer membrane fusion to lipids homeostasis.
Cavellini, Laetitia, Julie Meurisse, Justin Findinier, Zoi Erpapazoglou, Naïma Belgareh-Touzé, Allan M. Weissman, and Mickael M. Cohen. “An Ubiquitin-Dependent Balance between Mitofusin Turnover and Fatty Acids Desaturation Regulates Mitochondrial Fusion.” Nature Communications 8 (June 13, 2017): 15832. https://doi.org/10.1038/ncomms15832.
Dionisio
The endoplasmic reticulum (ER) serves as a warehouse for factors that augment and control the biogenesis of nascent proteins entering the secretory pathway. In turn, this compartment also harbors the machinery that responds to the presence of misfolded proteins by targeting them for proteolysis via a process known as ER-associated degradation (ERAD). During ERAD, substrates are selected, modified with ubiquitin, removed from the ER, and then degraded by the cytoplasmic 26S proteasome. While integral membrane proteins can directly access the ubiquitination machinery that resides in the cytoplasm or on the cytoplasmic face of the ER membrane, soluble ERAD substrates within the lumen must be retrotranslocated from this compartment. In either case, nearly all ERAD substrates are tagged with a polyubiquitin chain, a modification that represents a commitment step to degrade aberrant proteins. However, increasing evidence indicates that the polyubiquitin chain on ERAD substrates can be further modified, serves to recruit ERAD-requiring factors, and may regulate the ERAD machinery. Amino acid side chains other than lysine on ERAD substrates can also be modified with ubiquitin, and post-translational modifications that affect substrate ubiquitination have been observed. Here, we summarize these data and provide an overview of questions driving this field of research.
The evolving role of ubiquitin modification in endoplasmic reticulum-associated degradation G. Michael Preston, Jeffrey L. Brodsky Biochemical Journal Feb 03, 2017, 474 (4) 445-469; DOI: 10.1042/BCJ20160582 http://biochemj.org/lookup/doi/10.1042/BCJ20160582
Dionisio
gpuccio @276:
If someone reading this thread is starting to believe that I am probably making up things, I am certainly not offended!
Yes, it seems like "fake news" indeed. :) Dionisio
DATCG, Dionisio: Again our friend TERA/VCP/p97/CDC48, in some new role! :) The following paper is of January 2018: Cdc48 regulates a deubiquitylase cascade critical for mitochondrial fusion https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5798933/ (Public access)
Abstract: Cdc48/p97, a ubiquitin-selective chaperone, orchestrates the function of E3 ligases and deubiquitylases (DUBs). Here, we identify a new function of Cdc48 in ubiquitin-dependent regulation of mitochondrial dynamics. The DUBs Ubp12 and Ubp2 exert opposing effects on mitochondrial fusion and cleave different ubiquitin chains on the mitofusin Fzo1. We demonstrate that Cdc48 integrates the activities of these two DUBs, which are themselves ubiquitylated. First, Cdc48 promotes proteolysis of Ubp12, stabilizing pro-fusion ubiquitylation on Fzo1. Second, loss of Ubp12 stabilizes Ubp2 and thereby facilitates removal of ubiquitin chains on Fzo1 inhibiting fusion. Thus, Cdc48 synergistically regulates the ubiquitylation status of Fzo1, allowing to control the balance between activation or repression of mitochondrial fusion. In conclusion, we unravel a new cascade of ubiquitylation events, comprising Cdc48 and two DUBs, fine-tuning the fusogenic activity of Fzo1.
Mitochondrial fusion? Yes, because we learn that:
Mitochondria are little compartments within a cell that produce the energy needed for most biological processes. Each cell possesses several mitochondria, which can fuse together and then break again into smaller units. This fusion process is essential for cellular health. --- Mitochondria are dynamic organelles constantly undergoing fusion and fission events, modulated by a variety of post-translational modifiers including ubiquitin --- The ubiquitin-specific chaperone Cdc48/p97 is required to maintain mitochondrial morphology (Esaki and Ogura, 2012). However, the underlying molecular mechanism of how Cdc48 regulates mitochondrial dynamics is not understood. --- Here, we identify a role of Cdc48 in mitochondrial fusion, as part of a novel enzymatic cascade consisting of Cdc48, Ubp12 and Ubp2. Cdc48 negatively regulates Ubp12, which negatively regulates Ubp2, explaining why these two DUBs exert opposite effects on their targets and on ubiquitin homeostasis.
If someone reading this thread is starting to believe that I am probably making up things, I am certainly not offended! :) gpuccio
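As a side note, the double-negative logic in that abstract (Cdc48 represses Ubp12, and Ubp12 represses Ubp2) can be captured in a toy truth-table sketch. This is purely illustrative pseudologic, not a biochemical model; only the protein names come from the cited paper, and the on/off simplification is my own assumption for clarity.

```python
# Toy boolean sketch of the cascade described in the Cdc48 paper:
# Cdc48 promotes proteolysis of Ubp12; loss of Ubp12 stabilizes Ubp2.
# Real regulation is quantitative and context-dependent; this only
# illustrates why the two DUBs end up with opposite activities.
def dub_cascade(cdc48_active: bool) -> dict:
    ubp12_stable = not cdc48_active   # Cdc48 drives Ubp12 degradation
    ubp2_stable = not ubp12_stable    # Ubp12 destabilizes Ubp2
    return {"Ubp12": ubp12_stable, "Ubp2": ubp2_stable}

print(dub_cascade(True))   # {'Ubp12': False, 'Ubp2': True}
print(dub_cascade(False))  # {'Ubp12': True, 'Ubp2': False}
```

The point of the sketch is just the sign-flipping: one repressor of a repressor yields two opposite outputs from a single upstream switch.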
DATCG at #259: I agree with you that protein nomenclature is often misleading. Proteins that are clear homologues in many organisms often receive many different names. You can usually find that multitude of names in their Uniprot record. For example, our much-discussed p97 is reported in Uniprot, for humans, as TERA_HUMAN (Transitional endoplasmic reticulum ATPase), but also as TER ATPase, VCP (Valosin-containing protein), and 15S Mg(2+)-ATPase p97 subunit; and we know that, in yeast, it is called CDC48 (Cell division control protein 48). In papers you can often find those different names, and it can sometimes be difficult to realize that two papers are referring to the same protein! And, in this case, we are talking of a very conserved protein: there can be no doubt that human TERA and yeast CDC48 are homologues, because they share 1178 bits, 68% identities and 83% positives. gpuccio
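For readers who want to play with sequence comparisons themselves: the bit scores, identities and positives quoted above come from BLASTP, but the bare notion of percent identity can be sketched in a few lines. This toy function assumes two already-aligned, equal-length sequences (which a real homology search does not); the ubiquitin sequence is the 76-AA one given in the OP.

```python
# Minimal sketch: percent identity between two pre-aligned, equal-length
# protein sequences, with '-' as the gap character. Gapped columns are
# skipped. This is NOT how BLAST computes its statistics; it only
# illustrates the quantity being reported.
def percent_identity(a: str, b: str) -> float:
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    cols = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    if not cols:
        return 0.0
    return 100.0 * sum(x == y for x, y in cols) / len(cols)

# The 76-AA human ubiquitin sequence from the OP, compared with itself:
UB = ("MQIFVKTLTGKTITLEVEPSDTIENVKAKIQDKEGIPPD"
      "QQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG")
print(percent_identity(UB, UB))  # 100.0
```

The extreme conservation of ubiquitin means this toy returns 100.0 (or very close to it) for almost any pair of eukaryotic ubiquitin sequences one cares to align.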
Dionisio at #273: Interesting. This strange E3 ligase, Psh1p, 406 AAs long, is practically taxonomically restricted to Saccharomycetes. This is really amazing. Indeed, it shares practically no homology (except for a few low hits limited to the RING domain) with any organism outside of fungi, and even in fungi the homology is rather low (100 - 200 bits) outside of Saccharomycetes. Its function remains elusive, even after reading the interesting paper you linked. Its only known target seems to be CSE4p, a strange Histone H3-like protein, 229 AAs long. From Uniprot:
Histone H3-like variant which exclusively replaces conventional H3 in the nucleosome core of centromeric chromatin at the inner plate of the kinetochore. Required for recruitment and assembly of kinetochore proteins, mitotic progression and chromosome segregation. May serve as an epigenetic mark that propagates centromere identity through replication and cell division. Required for functional chromatin architecture at the yeast 2-micron circle partitioning locus and promotes equal plasmid segregation.
This strange variant, too, seems essentially restricted to Saccharomycetes, except for the partial homology (about 130 bits) to histone H3 in the C-terminal part. So, this complex biological system linked to yeast plasmids seems to be a remarkable example of taxonomically restricted complexity. Involving, of course, ubiquitin! :) gpuccio
The Ubiquitin Ligase (E3) Psh1p Is Required for Proper Segregation of both Centromeric and Two-Micron Plasmids in Saccharomyces cerevisiae Meredith B. Metzger, Jessica L. Scales, Mitchell F. Dunklebarger and Allan M. Weissman G3: Genes, Genomes, Genetics November 1, 2017 vol. 7 no. 11 3731-3743; https://doi.org/10.1534/g3.117.300227 http://www.g3journal.org/content/7/11/3731.full.pdf Dionisio
@271 addendum: Isn't Professor Denis Noble one of the pioneers of systems biology and a founder of the Third Way (the raft some evolutionists are using to jump off the sinking neo-Darwinian ship)? Dionisio
Off topic.
CHARLEMAGNE DISTINGUISHED LECTURE SERIES with Prof. Denis Noble Ph.D. Title: From Pacing the Heart to the Pace of Evolution Abstract Multi-mechanism interpretations of cardiac pacemaker function reveal the extent to which many physiological functions are buffered against genomic change. Contrary to Schrodinger's claim in What is Life? (1944) which led to the Central Dogma of Molecular Biology (Crick 1970), biological functions at higher levels harness stochasticity at lower levels. This harnessing of stochasticity is a prerequisite for the processes by which the pace of evolution can be accelerated through guided control of mutation rates and of buffering by regulatory networks in organisms. Schrodinger E. 1944 What is life? Cambridge, UK: Cambridge University Press. Crick FHC. 1970 Central dogma of molecular biology. Nature 227, 561 – 563. (doi:10.1038/227561a0) Noble D. Dance to the Tune of Life. Biological Relativity. Cambridge University Press 2016 Noble D. Evolution viewed from physics, physiology and medicine. Interface Focus 2017, 7, 20160159. Noble R & Noble D. Was the Watchmaker Blind? Or was she One-eyed? Biology, 2017, 6, 47.
Check this out at your convenience. If you don't want to watch the whole video, just skip to around the mark 30:00 and listen to the last ten minutes. The visual part of the presentation is not very clear, which seems like a defect of the way the presentation was recorded. Maybe there is a better video of the same lecture? https://www.youtube.com/embed/XbS2dwn04fQ https://www.aices.rwth-aachen.de/en/ Dionisio
DATCG @254: “And agree the neo-darwinist would certainly show up if they had a rebuttal.” gpuccio @266: "Maybe they are simply shy!" No, they aren't shy at all! :) The most probable reason behind their conspicuous absence here is that most of what is discussed can be easily explained through RV+NS, hence this discussion is trivial and not worth their time. :) Dionisio
gpuccio @265:
After all, design is the tool to satisfy a desire. Maybe ubiquitin chains have a role in expressing satisfaction, too!
[emphasis added] That reminds me of some loud musicians that couldn't get no satisfaction, even though they kept trying (or at least that's what they claimed) since the mid 1960s. :) Maybe they didn't know much about ubiquitin back then? :) Dionisio
DATCG at #256: It's great to be ahead of Wikipedia! :) :) gpuccio
DATCG at #255: "It's a mouthful of networking semantic diagnosis and reverse engineering!" Yes, but unfortunately sometimes it's easier to build something again than to repair it. The problem with neoplastic cells is that, once the initial transformation takes place, a lot of further mutations or functional impairments are very likely to follow. That's also the reason for resistance to therapy in relapsed neoplasias. gpuccio
DATCG: "And agree the neo-darwinist would certainly show up if they had a rebuttal." Maybe they are simply shy! :) gpuccio
Dionisio @253: You too are not bad at picking papers which "just came out of the printing press"! :) I especially liked this phrase: "When the checkpoint is satisfied, anaphase is initiated by the disassembly of MCC." (Emphasis mine) After all, design is the tool to satisfy a desire. Maybe ubiquitin chains have a role in expressing satisfaction, too! :) gpuccio
DATCG @259, That's interesting. DATCG @261, That's interesting. DATCG @262, Thanks. Dionisio
DATCG @246:
What more is there to add? Anything we’re missing or have not covered? Or to highlight?
gpuccio @251:
So, a new function for our protein, as though the “old” functions were not enough! Retrochaperone. :) And, again, a critical role of ubiquitin chains.
Dionisio
Morning Dio :) Have a good day. I'm out for now. DATCG
Dionisio, Gpuccio, Question: do either of you have sources for images of proteins that you like to refer to? Or for any active process and genetic material? If so, please share. I would like to build up different resources for viewing. I came across a resource while trying to find images of ubiquitin proteins. This is of WWP1 (WW domain containing E3 ubiquitin protein ligase 1), which includes a HECT domain. Atlas of Genetics and Cytogenetics in Oncology and Haematology - WWP1 containing E3 ubiquitin ligase 1 - Alias AIP5 The images at the above link are of WWP1 expression in the 22Rv1 prostate cancer cell line. Descriptions, notations and info...
A: WWP1 protein B: Exogenous WWP1 expression in the 22Rv1 prostate cancer cell line was detected under confocal microscopy. The endosomes are indicated by GFP-Rab5. C: Protein structure of WWP1 Description: 922 amino acids; approximately 110 kDa protein; The C2 domain at the N-terminus is responsible for calcium-dependent phospholipid binding. The four WW domains in the middle are responsible for protein-protein interaction with PY motifs. The HECT domain at the C-terminus is responsible for the ubiquitin transfer. Cysteine 890 is the catalytic center. The underlined WWP1 substrates do not have a PY motif (PPXY). A smaller WWP1 protein isoform was detected in two prostate cancer cell lines, PC-3 and LAPC-4 (Chen C, 2007). Protein structure: The HECT domain of WWP1 (see Figure 2C) (Verdecia MA., 2003). Expression: The WWP1 protein is lowly expressed in normal prostate and breast but is frequently upregulated in prostate and breast cancers due to gene amplification. Localisation: Predominately on membrane structures in the cytoplasm and occasionally in the nucleus (see Figure 2B). Function: WWP1 is an E3 ubiquitin ligase. WWP1 negatively regulates transforming growth factor-beta (TGF-β) signaling by targeting its molecular components, including TGF-beta receptor 1 (TβR1) (Komuro A, 2004), Smad2 (Seo SR, 2004), and Smad4 (Moren A., 2005) for ubiquitin-mediated degradation. In addition, WWP1 has been reported to target the epithelial Na+ channel (ENaC) (Malbert-Colas L, 2003), Notch (Shaye DD, 2005), Runx2 (Jones DC, 2006; Shen R, 2006), KLF2 (Zhang X, 2004), and KLF5 (Chen C, 2005) for ubiquitin-mediated proteolysis. Recently, WWP1 has been demonstrated to inhibit p53 activity by exporting p53 from the nucleus after ubiquitination (Laine A, 2007). Overall, WWP1 may play a pro-survival role in several tumor types including breast (Chen C, 2007) and prostate (Chen C, 2007). WWP1 has also been shown to promote virus budding (Martin-Serrano J, 2005; Heidecker G, 2007). 
Homology: WWP1 belongs to the C2-WW-HECT E3 family which contains 8 other members (Chen C, 2007). The WWP1 gene is highly conserved among species (from human to C. elegans). Mutations Somatic: The WWP1 gene is rarely mutated in human prostate cancer (Chen C, 2007). Two sequence alterations were detected in prostate cancer xenografts. One was 2393A-->T (Glu798Val) in CWR91 and the other was 721A-->T (Thr241Ser) in LuCaP35. Additionally, some mutations in the HECT domain decrease the E3 ligase activity (Verdecia MA., 2003).
DATCG
@215 update the list - add the following item from gpuccio @251:: Neal, Sonya, Raymond Mak, Eric J. Bennett, and Randolph Hampton. “A Cdc48 ‘Retrochaperone’ Function Is Required for the Solubility of Retrotranslocated, Integral Membrane Endoplasmic Reticulum-Associated Degradation (ERAD-M) Substrates.” Journal of Biological Chemistry 292, no. 8 (February 24, 2017): 3112–28. https://doi.org/10.1074/jbc.M116.770610. Dionisio
This comment is a bit off-topic. My pet peeve: nomenclature and the chaotic naming conventions for functions. While searching open-access papers on chromosome segregation and ubiquitination, I came across a chapter by Mitsuhiro Yanagida. Besides the main subject, the basics of chromosome segregation (which mentions ubiquitin interplay and roles), Yanagida discusses nomenclature. He points out the problem of nomenclature in Chapter 2.4, pg 25. You may not be interested in this at all, but he's the second person I've found who is frustrated by it, detailing why it's important for easy identification of functions. Another reason I personally think this is important is from a Design perspective. The chapter automatically opens a PDF, btw, for download from Springer.com... Basics of Chromosome Segregation - Mitsuhiro Yanagida - 2009 I like the points he makes about recognizing functions across organisms!
The nomenclature used for genes involved in chromosome segregation is a serious problem in communicating results obtained in different organisms. Many genes are initially identified through the use of mutants, antibodies, or amino acid sequences of purified proteins and their molecular functions are not known. Thus, many of the gene names do not give functional clues and are difficult to remember. Although similar proteins exist in other organisms, researchers tend to use their own organism’s nomenclature, as it is often unclear whether these genes are functionally equivalent to similar genes in other organisms. Indeed, genes with analogous sequences but distinct functions are not uncommon. It is therefore very difficult for researchers in other fields and for newcomers to the field to understand the functions of a particular gene by reading the literature.
Thank you, as a newcomer! :) But hmmm, even from a systems molecular biology perspective it sure seems cumbersome, chaotic and unproductive as well.
A number of protein complexes essential for chromosome segregation, however, have been given common names across organisms. The presence of multiple subunits that all share sequence similarity in different organisms is convincing evidence of the functional similarity of these complexes, such as condensin, cohesin, anaphase-promoting complex (APC/C), and mitotic checkpoint complex (MCC). The use of a common nomenclature for these complexes promotes integrated studies. For example, condensin is a hetero-pentameric complex required for mitotic chromosome architecture. It consists of two subunits belonging to the structural maintenance of chromosome (SMC) ATPase protein family, and three non-SMC components (reviewed in Nasmyth and Haering 2005, Belmont 2006, Hirano 2006). Frog condensin contains XCAP-C (SMC4) and XCAP-E (SMC2), two heterodimeric coiled-coil SMCs and three non-SMC proteins: XCAP-H, -G, and -D2. In S. cerevisiae, the dimeric Smc2 and Smc4 associate with three nonSMC subunits, Ycg1, Ycs4, and Brn1. Similarly, two SMC proteins of S. pombe, Cut3 and Cut14, form a heterodimer and bind to three non-SMC subunits, Cnd1, Cnd2, and Cnd3 (Nasmyth and Haering 2005, Belmont 2006, Hirano 2006). The sequences of each of these sets of five subunits are similar from fungi to human, indicating that they are functionally conserved. Although different names remain for individual subunits, they are less important than those of complexes. Complexes required for chromosome segregation are often multifunctional. Condensin (see above) is also required for interphase activities, such as DNA-damage repair (Heale et al., 2006). Cohesin, the multiprotein complex that holds sister chromatids together following DNA replication, is also required for DNA-damage repair (Strom et al., 2007, Unal et al., 2007, Ball and Yokomori 2008) and developmental transcriptional regulation (Dorsett et al., 2005, Dorsett 2007, Gullerova and Proudfoot 2008, Wendt et al., 2008). 
The name, usually based on the initially discovered function, might only partially represent the functions mediated by the complex and could be misleading. Therefore, biologists and geneticists should use caution when naming a complex according to its originally discovered function. The anaphase-promoting complex/cyclosome (APC/C) has an instructive history with regard to the naming. The APC/C was discovered as a complex and called a cyclosome (Sudakin et al., 1995), as it is essential for the degradation of mitotic cyclin in vitro. This same complex was also called the APC, as it was defined as an anaphase-promoting complex (King et al., 1995). The APC/C, which contains 15 subunits (Passmore et al., 2005), is the E3 ubiquitin ligase that poly-ubiquitylates mitotic cyclin and securin for degradation in a destruction-box (DB)-dependent manner (reviewed in Sullivan and Morgan 2007). APC/C activation is inhibited by the spindle assembly checkpoint (also called the spindle checkpoint or mitotic checkpoint; see Chapter 11). Poly-ubiquitylated cyclin and securin are rapidly degraded by the 26S proteasome, leading to the activation of separase, the cleavage of cohesin, the separation of the sister chromatids, and the onset of anaphase (Morgan 2006, Fig. 2.1). Because the abbreviation APC also refers to the frequently cited tumor suppressor protein adenomatous polyposis coli, it is currently recommended that the abbreviation APC/C be used to avoid confusion. This distinction has become particularly necessary as the tumor suppressor APC interacts with the plus ends of the microtubules and is implicated in the spindle checkpoint (Draviam et al., 2006). While the APC/C regulates the exit of mitosis in dividing cells (Sullivan and Morgan 2007), it is also abundant in non-dividing cells such as neurons and muscles (van Roessel et al., 2004, Zarnescu and Moses 2004). 
The APC/C seems to have a postmitotic role at Drosophila neuromuscular synapses: in neurons, the APC/C controls synaptic size, and in muscles, it regulates synaptic transmission (van Roessel et al., 2004). The roles of the APC/C in non-dividing differentiated cells are elusive, but clearly different from its role in mitotic progression and exit. Thus, a new name, particularly one based on a single function, could cause misconceptions concerning the roles of these complexes.
Thank you Mitsuhiro Yanagida! He deserves an award for common sense :) Not much about Ubiquitin but thank you for elucidating the problems of naming conventions across disparate areas. DATCG
#252, 253, Agreed :) And thanks for the interesting abstract on chromosome segregation in mitosis and the MCC (Mitotic Checkpoint Complex). And the role of ubiquitylation. DATCG
A bit of humor to add to the mix, from some college students I'm guessing. The sad case of a misfolded protein and the UPR (Unfolded Protein Response) on the "Ugly Protein Network": https://www.youtube.com/watch?v=XYGlzNnHoTw DATCG
#251 Gpuccio, Congrats on finding a rarely used term, RetroChaperone! :) Haha, I checked and wiki still does not have it. It's used mainly in this paper and a few others. Maybe chaperone is good enough, but retro sounds cool to identify along with retrotranslocation to the cytosol. They do speak about regulation of retro-translocation of chaperones, but interestingly it's not updated with Cdc48 yet! Wiki is falling behind ;-) And wow... yeah, Cdc48 chaperoning misfolded ER-local proteins to the cytosol. Maintaining solubility by binding to ubiquitinated ERAD-M substrates - retrotranslocated. What can possibly go wrong? ;-) To degrade or not to degrade, that is the question of the protein life cycle. I'll review this new information you provided. And I have other papers I put on hold as I'm working thru them. Oh, during review of other papers, maybe my use of "upstream" is inappropriate? I've an old habit of thinking in terms of top-down structured programming. I'll search for an example. DATCG
#249 Gpuccio, Yep, agree with all you said. As I get an opportunity to peek inside how it's done, it's quite remarkable - and the explanation by Dr. Deshaies of how to actually do it was very eye-opening, and at the same time humbling at how far we have to go.
About going upstream: I don’t know, “detecting mutations that allow tumors to form in the first place and replacing those mutations” seems still rather far away. Of course detecting tumors when they are still at the beginning would be great, but the problem is that they are really a lot of different things, from a biological point of view, even when they have similar clinical manifestations.
I humbly have no idea where to start, just a vague understanding at best. I'm guessing openly here, much out of ignorance: if we're looking at a designed system, then we may find a pattern of weak links(?), so to speak, that may eventually be understood as hot spots for deleterious mutations.
And a lot of random events are probably implied in the initial phases of the disease. Here again complexity makes it difficult for us to really understand, and unfortunately it is here the complexity of possible random devastations of extremely complex functions.
Ohhh yes... agree at the difficulty. Maybe I'm reaching too far when I say upstream, but for some reason I think it's not out of the realm of possibility to discover in the future. Maybe a bit naive too. But I keep thinking: if designed, then, based upon environmental input, see how the branching of deleterious mutations forms patterns of failure that end in tumors. Then trace it back upstream to the regulatory functions, or even master regulators, and other coordinated interdependencies. It's a mouthful of networking semantic diagnosis and reverse engineering! :) DATCG
Gpuccio @248, Well OK then, more to follow :) Just wanted to make sure I was staying in the right theme of things. Sounds great! And agree the neo-darwinist would certainly show up if they had a rebuttal. DATCG
The mitotic checkpoint system ensures the fidelity of chromosome segregation in mitosis by preventing premature initiation of anaphase until correct bipolar attachment of chromosomes to the mitotic spindle is reached. It promotes the assembly of a mitotic checkpoint complex (MCC), composed of BubR1, Bub3, Cdc20, and Mad2, which inhibits the activity of the anaphase-promoting complex/cyclosome (APC/C) ubiquitin ligase. When the checkpoint is satisfied, anaphase is initiated by the disassembly of MCC. Previous studies indicated that the dissociation of APC/C-bound MCC requires ubiquitylation and suggested that the target of ubiquitylation is the Cdc20 component of MCC. However, it remained unknown how ubiquitylation causes the release of MCC from APC/C and its disassembly and whether ubiquitylation of additional proteins is involved in this process. We find that ubiquitylation causes the dissociation of BubR1 from Cdc20 in MCC and suggest that this may lead to the release of MCC components from APC/C. BubR1 in MCC is ubiquitylated by APC/C, although to a lesser degree than Cdc20. The extent of BubR1 ubiquitylation was markedly increased in recombinant MCC that contained a lysine-less mutant of Cdc20. Mutation of lysine residues to arginines in the N-terminal region of BubR1 partially inhibited its ubiquitylation and slowed down the release of MCC from APC/C, provided that Cdc20 ubiquitylation was also blocked. It is suggested that ubiquitylation of both Cdc20 and BubR1 may be involved in their dissociation from each other and in the release of MCC components from APC/C.
Role of ubiquitylation of components of mitotic checkpoint complex in their dissociation from anaphase-promoting complex/cyclosome. Sitry-Shevah D, Kaisari S, Teichner A, Miniowitz-Shemtov S, Hershko A Proc Natl Acad Sci U S A. 2018 Feb 20;115(8):1777-1782. doi: 10.1073/pnas.1720312115.
Dionisio
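Since this is a thread full of systems-level thinking, the checkpoint logic in the abstract above can be caricatured as a tiny state model: MCC inhibits APC/C until the checkpoint is satisfied, and release requires ubiquitylation of both Cdc20 and BubR1. This is only an illustrative sketch of the quoted logic; all names and the boolean simplifications are mine, not from the paper.

```python
# Toy model of the mitotic-checkpoint logic described in the abstract above
# (Sitry-Shevah et al. 2018). Illustrative only: real MCC disassembly is a
# graded, multi-step process, not two booleans.

class MitoticCheckpoint:
    def __init__(self):
        self.mcc_bound_to_apc = True    # assembled MCC inhibits APC/C
        self.cdc20_ubiquitylated = False
        self.bubr1_ubiquitylated = False

    def attach_all_chromosomes(self):
        """Checkpoint satisfied: APC/C ubiquitylates MCC components."""
        self.cdc20_ubiquitylated = True
        self.bubr1_ubiquitylated = True

    def apc_active(self):
        # Per the abstract, release of MCC from APC/C is slowed unless
        # ubiquitylation of BOTH Cdc20 and BubR1 can proceed.
        if self.cdc20_ubiquitylated and self.bubr1_ubiquitylated:
            self.mcc_bound_to_apc = False
        return not self.mcc_bound_to_apc

checkpoint = MitoticCheckpoint()
assert not checkpoint.apc_active()   # anaphase blocked before bipolar attachment
checkpoint.attach_all_chromosomes()
assert checkpoint.apc_active()       # MCC released, anaphase can proceed
```

The point of the toy is simply that the system behaves like an interlock: the output (anaphase) is gated on two independent ubiquitylation events, which is the redundancy the paper probes with lysine-less mutants.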
DATCG @246:
What more is there to add? Anything we’re missing or have not covered? Or to highlight?
gpuccio has covered (along with your contributions) a substantial area of important subtopics within the main theme of this thread. But note that the number of biology-related research papers seems to increase quite rapidly, revealing interesting things that had not been considered until now. I would refrain from thinking the discussion has been exhausted. As we saw, many details remain elusive at best. As outstanding questions get answered, new ones are raised. This gives the impression of a never-ending story. The complexity of the functionally specified informational organization keeps deepening with no end in sight yet. Dionisio
DATCG: Again about VCP/p97/CDC48 (February 2017): A Cdc48 “Retrochaperone” Function Is Required for the Solubility of Retrotranslocated, Integral Membrane Endoplasmic Reticulum-associated Degradation (ERAD-M) Substrates https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5336148/ Retrochaperone?
Endoplasmic reticulum (ER)-associated degradation (ERAD) refers to a group of quality control pathways that degrade damaged or misfolded ER-localized proteins (1, 2). ERAD occurs through the ubiquitin-proteasome pathway, by which ubiquitin is attached to ERAD substrates to cause proteasomal degradation (3–5). --- ERAD pathways present the cell with a spatial challenge. The 26S proteasome resides in the cytosol and connected compartments, along with the E1 ubiquitin-activating enzyme and most E2s. Accordingly, a unifying feature of all ERAD pathways is the requirement for movement of substrates from the ER membrane or lumen to the cytosol for degradation. This transport component of ERAD is broadly referred to as dislocation or retrotranslocation, and it has been known to occur since the earliest studies of ERAD --- In this work we sought to discover the retrochaperones that allow multispanning membrane proteins to remain soluble after retrotranslocation during ERAD. --- We then used unbiased proteomics to identify and confirm Cdc48 as the principal retrochaperone allowing multispanning membrane proteins to remain soluble in the cytosol. --- We explored the features of Cdc48-client interaction. The tripartite Cdc48 complex binds polyubiquitin chains. Proteolytic removal of the polyubiquitin from Hmg2-GFP abolished its Cdc48 association and rendered the retrotranslocated Hmg2-GFP insoluble. Similarly, addition of excess polyubiquitin chains to the assay supernatant resulted in complete loss of Cdc48 binding to retrotranslocated Hmg2-GFP, and again it caused drastic loss of Hmg2-GFP solubility. Thus, it was clear that polyubiquitin-mediated association of the Cdc48 complex is critical for the maintained solubility of the eight-spanning Hmg2-GFP, indicating that Cdc48 is a bona fide retrochaperone in addition to being a ubiquitin-dependent “dislocase.” --- The polyubiquitin binding of Cdc48 was a critical component of Cdc48 retrochaperone function.
--- Cdc48/p97 are AAA hexameric ATPases. They are thought to use ATP hydrolysis to generate the considerable conformational force needed for the many versions of ubiquitin-dependent extraction and dislocation for which they are well known (19). Especially in the case of ERAD-M retrotranslocation, prodigious energy would be required for substrate removal from the membrane. However, it is less clear if the retrochaperoning role of Cdc48 is also ATP-dependent. It will be important to evaluate the role of ATP in this novel holdase function, and a number of straightforward experiments are now possible with the assays and techniques developed herein. Taken together, these studies show that the Cdc48 complex has a critical and general function as an “ERAD holdase” or retrochaperone. It has been clear for a number of years that Cdc48/p97 accompanies ERAD substrates on their way to the proteasome, but this work demonstrates that the solubility of the ubiquitinated substrates is only possible due to the chaperoning functions of the complex. There are both functional and pathological consequences of this critical new action. It will be intriguing to understand the mechanics and structural aspects of Cdc48 chaperoning and to eventually understand the breadth of this function in cellular processes, including both degradation and possible refolding of damaged proteins that engage the AAA-ATPases in the course of proteostasis.
So, a new function for our protein, as though the "old" functions were not enough! Retrochaperone. :) And, again, a critical role of ubiquitin chains. gpuccio
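The solubility dependence described in the excerpt (Cdc48 binding requires the substrate's own polyubiquitin chain, and is outcompeted by excess free chains) can be sketched as a two-input condition. A toy sketch only; the function and parameter names are mine, and the real interaction is of course quantitative, not boolean.

```python
# Toy sketch of the Cdc48 "retrochaperone" dependence on polyubiquitin,
# per the JBC excerpt above. Illustrative names and boolean logic only.

def substrate_soluble(substrate_polyubiquitinated: bool,
                      excess_free_polyubiquitin: bool) -> bool:
    """A retrotranslocated ERAD-M substrate (e.g. Hmg2-GFP) stays soluble
    only while the Cdc48 complex is bound, which requires the substrate's
    own polyubiquitin chain and is lost when free chains compete it away."""
    cdc48_bound = substrate_polyubiquitinated and not excess_free_polyubiquitin
    return cdc48_bound

# Chain intact, no competitor: soluble.
assert substrate_soluble(True, False)
# Proteolytic removal of the chain: insoluble.
assert not substrate_soluble(False, False)
# Excess free polyubiquitin in the supernatant: insoluble.
assert not substrate_soluble(True, True)
```

The three assertions mirror the three experimental conditions quoted in the excerpt.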
Dionisio: Thank you for your updates! :) gpuccio
DATCG: The problem of a functional, or "pathogenetic" therapy of tumours is complex, and in general rather frustrating. For a lot of time the therapy of tumors and leukemias has been highly empirical, and based essentially on drugs which are toxic to all cells. Understanding the biological features of neoplastic cells has always been a great aim, but unfortunately our increase in understanding has not always provided really useful therapeutic strategies. However, things are probably changing, and maybe with time we can get better results. About going upstream: I don't know, "detecting mutations that allow tumors to form in the first place and replacing those mutations" seems still rather far away. Of course detecting tumors when they are still at the beginning would be great, but the problem is that they are really a lot of different things, from a biological point of view, even when they have similar clinical manifestations. And a lot of random events are probably implied in the initial phases of the disease. Here again complexity makes it difficult for us to really understand, and unfortunately it is here the complexity of possible random devastations of extremely complex functions. But understanding is always the foundation for all. After understanding, some power of intervention must come, sooner or later. gpuccio
DATCG: Well, I think we have certainly covered a lot of important and interesting issues. I certainly agree with your thoughts. Please, feel free to post something new if you think it is worthwhile, or not to post if you prefer so. I will do more or less the same. I think this whole subject is really good evidence for design, for a lot of different and intertwining reasons that we have tried to highlight in our "private party" here. But our kind interlocutors probably think that all this is trivial or irrelevant, otherwise they would certainly have joined the discussion to show us our serious errors! :) gpuccio
@217 updated list (added 3 references posted after #217): 1. Abel, David L., and Jack T. Trevors. “Three Subsets of Sequence Complexity and Their Relevance to Biopolymeric Information.” Theoretical Biology and Medical Modelling 2, no. 1 (August 11, 2005): 29. https://doi.org/10.1186/1742-4682-2-29. 2. Durston, Kirk K, David KY Chiu, Andrew KC Wong, and Gary CL Li. “Statistical Discovery of Site Inter-Dependencies in Sub-Molecular Hierarchical Protein Structuring.” EURASIP Journal on Bioinformatics and Systems Biology 2012, no. 1 (December 2012). https://doi.org/10.1186/1687-4153-2012-8. 3. Liu, Ke, Lei Lyu, David Chin, Junyuan Gao, Xiurong Sun, Fu Shang, Andrea Caceres, et al. “Altered Ubiquitin Causes Perturbed Calcium Homeostasis, Hyperactivation of Calpain, Dysregulated Differentiation, and Cataract.” Proceedings of the National Academy of Sciences 112, no. 4 (January 27, 2015): 1071–76. https://doi.org/10.1073/pnas.1404059112. 4. Meyer, H., and C. C. Weihl. “The VCP/P97 System at a Glance: Connecting Cellular Function to Disease Pathogenesis.” Journal of Cell Science 127, no. 18 (September 15, 2014): 3877–83. https://doi.org/10.1242/jcs.093831. 5. Pla, A, M Pascual, J Renau-Piqueras, and C Guerri. “TLR4 Mediates the Impairment of Ubiquitin-Proteasome and Autophagy-Lysosome Pathways Induced by Ethanol Treatment in Brain.” Cell Death & Disease 5, no. 2 (February 2014): e1066–e1066. https://doi.org/10.1038/cddis.2014.46. 6. Ravid, Tommer, and Mark Hochstrasser. “Diversity of Degradation Signals in the Ubiquitin–proteasome System.” Nature Reviews Molecular Cell Biology 9, no. 9 (September 2008): 679–89. https://doi.org/10.1038/nrm2468. 7. Rogers, J. M., V. Oleinikovas, S. L. Shammas, C. T. Wong, D. De Sancho, C. M. Baker, and J. Clarke. “Interplay between Partner and Ligand Facilitates the Folding and Binding of an Intrinsically Disordered Protein.” Proceedings of the National Academy of Sciences 111, no. 43 (October 28, 2014): 15420–25. 
https://doi.org/10.1073/pnas.1409122111. 8. Ruiz i Altaba, Ariel, Vân Nguyên, and Verónica Palma. “The Emergent Design of the Neural Tube: Prepattern, SHH Morphogen and GLI Code.” Current Opinion in Genetics & Development 13, no. 5 (October 2003): 513–21. https://doi.org/10.1016/j.gde.2003.08.005. 9. Savage, Kienan I., and D. Paul Harkin. “BRCA1, a ‘Complex’ Protein Involved in the Maintenance of Genomic Stability.” The FEBS Journal 282, no. 4 (February 2015): 630–46. https://doi.org/10.1111/febs.13150. 10. Srikanthan, S., W. Li, R. L. Silverstein, and T. M. McIntyre. “Exosome Poly-Ubiquitin Inhibits Platelet Activation, Downregulates CD36 and Inhibits pro-Atherothombotic Cellular Functions.” Journal of Thrombosis and Haemostasis 12, no. 11 (November 2014): 1906–17. https://doi.org/10.1111/jth.12712. 11. Uversky, Vladimir N. “Functional Roles of Transiently and Intrinsically Disordered Regions within Proteins.” FEBS Journal 282, no. 7 (April 2015): 1182–89. https://doi.org/10.1111/febs.13202. 12. Wang, Yi-Ting, and Guang-Chao Chen. “The Role of Ubiquitin System in Autophagy.” In Autophagy in Current Trends in Cellular Physiology and Pathology, edited by Nikolai V. Gorbunov and Marion Schneider. InTech, 2016. https://doi.org/10.5772/64728. Dionisio
Hey guys, Dionisio, Gpuccio, UB, etc., What more is there to add? Anything we're missing or have not covered? Or to highlight? I'm tempted to post more papers, but did not want to do so, Gpuccio, if you think the subject matter for this post has been fully expanded upon. For me, there's the overall picture, the big image of the systems control aspects, semiosis, then the conserved functions over time in eukaryotes you've highlighted. These systems are so large, complex, and integrated that it's hard to sit back and look upon them as well understood units in a larger frame of reference. Even with all the infographics, step-by-step processes, and videos, it's still hard to comprehend it all in formalized actions and conditions. There's one issue I do not understand in video 3, Dr. Deshaies' presentation on inhibiting cancerous tumors by blocking the proteasome. Looking at his commercial site, including some approved prescriptions, we can see that even targeted solutions can still have serious repercussions for people as side effects. It seems the methods utilized today, though much better, are a bit like using a hammer on a screw. I thought from a design perspective, it would be more upstream in detection systems, or cutting off supply to the tumor by a better method. Or, farther upstream, detecting mutations that allow tumors to form in the first place and replacing those mutations - maybe - not saying it's easy. Just thinking through the process. That would mean fully understanding the detailed circumstances that allowed the mutation upstream. If it could be done, it would eliminate the need for post-treatment of tumors after they've started, which is late in the process. Though I would not rule out better post-treatment methodology. Just some thoughts. DATCG
Gpuccio, Thanks for your detailed response, specifically on conservation in eukaryotes and humans, including the Blast stats! On the conformational changes, it's quite amazing what it goes through and this is yet again conditional.
By the way, it is an 806 AAs long protein (in humans), extremely conserved in all eukaryotes (almost as much as ubiquitin). 78% identities and 89% positives in fungi, 1313 bits, 1.63 baa.
Conserved together, as they would have to be from the beginning for any of this to make sense, correct? There is no (or very little) room for mutations here. Disease is the result if mutations impact these different systems working together on crucial time-dependent delivery. Thus all the quality control systems and constraints in place to clear out mutations and damaged goods. DATCG
DATCG: You have definitely found a very important actor in the scene we have been debating! :) VCP-p97 seems to be as elusive as its many names (TERA, Transitional endoplasmic reticulum ATPase, VCP, Valosin-containing protein, CDC48, and so on). The same name of its protein family is astounding: AAA+: extended family of ATPases associated with various cellular activities. And various it is! From Uniprot:
Necessary for the fragmentation of Golgi stacks during mitosis and for their reassembly after mitosis. Involved in the formation of the transitional endoplasmic reticulum (tER). The transfer of membranes from the endoplasmic reticulum to the Golgi apparatus occurs via 50-70 nm transition vesicles which derive from part-rough, part-smooth transitional elements of the endoplasmic reticulum (tER). Vesicle budding from the tER is an ATP-dependent process. The ternary complex containing UFD1, VCP and NPLOC4 binds ubiquitinated proteins and is necessary for the export of misfolded proteins from the ER to the cytoplasm, where they are degraded by the proteasome. The NPLOC4-UFD1-VCP complex regulates spindle disassembly at the end of mitosis and is necessary for the formation of a closed nuclear envelope. Regulates E3 ubiquitin-protein ligase activity of RNF19A. Component of the VCP/p97-AMFR/gp78 complex that participates in the final step of the sterol-mediated ubiquitination and endoplasmic reticulum-associated degradation (ERAD) of HMGCR. Involved in endoplasmic reticulum stress-induced pre-emptive quality control, a mechanism that selectively attenuates the translocation of newly synthesized proteins into the endoplasmic reticulum and reroutes them to the cytosol for proteasomal degradation (PubMed:26565908). Also involved in DNA damage response: recruited to double-strand breaks (DSBs) sites in a RNF8- and RNF168-dependent manner and promotes the recruitment of TP53BP1 at DNA damage sites (PubMed:22020440, PubMed:22120668). Recruited to stalled replication forks by SPRTN: may act by mediating extraction of DNA polymerase eta (POLH) to prevent excessive translesion DNA synthesis and limit the incidence of mutations induced by DNA damage (PubMed:23042607, PubMed:23042605). Required for cytoplasmic retrotranslocation of stressed/damaged mitochondrial outer-membrane proteins and their subsequent proteasomal degradation (PubMed:16186510, PubMed:21118995). 
Essential for the maturation of ubiquitin-containing autophagosomes and the clearance of ubiquitinated protein by autophagy (PubMed:20104022, PubMed:27753622). Acts as a negative regulator of type I interferon production by interacting with DDX58/RIG-I: interaction takes place when DDX58/RIG-I is ubiquitinated via 'Lys-63'-linked ubiquitin on its CARD domains, leading to recruit RNF125 and promote ubiquitination and degradation of DDX58/RIG-I (PubMed:26471729). May play a role in the ubiquitin-dependent sorting of membrane proteins to lysosomes where they undergo degradation (PubMed:21822278). May more particularly play a role in caveolins sorting in cells (PubMed:21822278, PubMed:23335559).
By the way, it is an 806 AAs long protein (in humans), extremely conserved in all eukaryotes (almost as much as ubiquitin). 78% identities and 89% positives in fungi, 1313 bits, 1.63 baa. Complex structure, complex interactions, and definitely a prima donna in the ubiquitin drama. The amazing thing is that, as usual, so many things are known about it (I am just starting to dig), and yet so little is really understood. Maybe just a look at Wikipedia for a brief summary:
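For readers puzzled by the "1.63 baa" figure: gpuccio's "baa" appears to be bits per aligned amino acid, i.e. the BLAST bit score divided by the protein length. A minimal sketch, assuming that definition (the helper name is mine; 1313 bits and 806 AAs are the numbers from the comment above):

```python
# "baa" = bits per aligned amino acid, assumed to be bit score / length.
# 1313 bits over 806 AAs reproduces the 1.63 quoted for human-fungi VCP/p97.

def bits_per_aa(bit_score: float, length_aa: int) -> float:
    """Normalize a BLAST bit score by the length of the protein compared."""
    return bit_score / length_aa

baa = bits_per_aa(1313, 806)
print(round(baa, 2))  # 1.63
```

The normalization makes conservation comparable across proteins of very different lengths, which is why a 76-AA protein like ubiquitin and an 806-AA protein like VCP/p97 can be ranked on the same scale.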
Function: p97/CDC48 performs diverse functions through modulating the stability and thus the activity of its substrates. The general function of p97/CDC48 is to segregate proteins from large protein assembly or immobile cellular structures such as membranes or chromatin, allowing the released protein molecules to be degraded by the proteasome. The functions of p97/CDC48 can be grouped into the following three major categories. Protein quality control: The best characterized function of p97 is to mediate a network of protein quality control processes in order to maintain protein homeostasis.[49] These include endoplasmic reticulum-associated protein degradation (ERAD) and mitochondria-associated degradation.[14][50] In these processes, ATP hydrolysis by p97/CDC48 is required to extract aberrant proteins from the membranes of the ER or mitochondria. p97/CDC48 is also required to release defective translation products stalled on ribosome in a process termed ribosome-associated degradation.[51][52][53] It appears that only after extraction from the membranes or large protein assembly like ribosome, can polypeptides be degraded by the proteasome. In addition to this ‘segregase’ function, p97/CDC48 might have an additional role in shuttling the released polypeptides to the proteasome. This chaperoning