Uncommon Descent Serving The Intelligent Design Community

The Ubiquitin System: Functional Complexity and Semiosis joined together.


This is a very complex subject, so as usual I will try to stick to the essentials to make things as clear as possible, while details can be dealt with in the discussion.

It is difficult to define exactly the role of the Ubiquitin System. It is usually considered mainly a pathway which regulates protein degradation, but in reality its functions are much wider than that.

In essence, the Ubiquitin System is a complex biological system which targets many different types of proteins for different final fates.

The most common “fate” is degradation of the protein. In that sense, the Ubiquitin System works together with another extremely complex cellular system, the proteasome. In brief, the Ubiquitin System “marks” proteins for degradation, and the proteasome degrades them.

It seems simple. It is not.

Ubiquitination is essentially one of many post-translational modifications (PTMs): modifications of proteins after their synthesis by the ribosome (translation). But, while most PTMs attach simpler biochemical groups to the target protein (for example, acetylation), in ubiquitination a whole protein (ubiquitin) is used as a modifier of the target protein.

 

The tool: Ubiquitin

Ubiquitin is a small protein (76 AAs). Its name derives from the simple fact that it is found in most tissues of eukaryotic organisms.

Here is its amino acid sequence:

MQIFVKTLTGKTITLEVEPSDTIENVKAKIQDKEGIPPD

QQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG

Essentially, it has two important properties:

  1. As said, it is ubiquitous in eukaryotes
  2. It is also extremely conserved in eukaryotes

In mammals, ubiquitin is not encoded by a single gene. It is encoded by 4 different genes: UBB, a polyubiquitin (3 Ub sequences); UBC, a polyubiquitin (9 Ub sequences); UBA52, a mixed gene (1 Ub sequence + the ribosomal protein L40); and RPS27A, again a mixed gene (1 Ub sequence + the ribosomal protein S27A). However, the basic ubiquitin sequence is always the same in all those genes.

Its conservation is among the highest in eukaryotes. Compared with the human sequence, single-celled eukaryotes show:

Naegleria: 96% conservation; Alveolata: 100% conservation; cellular slime molds: 99% conservation; green algae: 100% conservation; Fungi: best hit 100% conservation (96% in yeast).
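Since we will be quoting percent-identity figures throughout, here is a minimal Python sketch of how such a figure is computed for two pre-aligned, equal-length sequences. The three substitution positions in the variant are made up for the example; they are NOT the actual human/yeast differences:

```python
# Human ubiquitin sequence (76 AAs), as given above.
UBIQUITIN = (
    "MQIFVKTLTGKTITLEVEPSDTIENVKAKIQDKEGIPPD"
    "QQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
)

def percent_identity(a: str, b: str) -> float:
    """Naive percent identity for two equal-length, pre-aligned sequences."""
    assert len(a) == len(b)
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

# Three hypothetical substitutions (positions chosen arbitrarily for
# illustration only):
variant = list(UBIQUITIN)
for pos in (18, 23, 56):
    variant[pos] = "A"
variant = "".join(variant)

print(len(UBIQUITIN))                               # 76
print(round(percent_identity(UBIQUITIN, variant)))  # 96
```

Three substitutions out of 76 residues give 73/76, i.e. about 96%, which is exactly the level of divergence reported above between human and yeast ubiquitin.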

Ubiquitin and ubiquitin-like proteins (see later) are characterized by a special fold, called the β-grasp fold.

 

The semiosis: the ubiquitin code

The title of this OP makes explicit reference to semiosis. Let’s try to see why.

The simplest way to say it is: ubiquitin is a tag. The addition of ubiquitin to a substrate protein marks that protein for specific fates, the most common being degradation by the proteasome.

But not only that. See, for example, the following review:

Nonproteolytic Functions of Ubiquitin in Cell Signaling

Abstract:

The small protein ubiquitin is a central regulator of a cell’s life and death. Ubiquitin is best known for targeting protein destruction by the 26S proteasome. In the past few years, however, nonproteolytic functions of ubiquitin have been uncovered at a rapid pace. These functions include membrane trafficking, protein kinase activation, DNA repair, and chromatin dynamics. A common mechanism underlying these functions is that ubiquitin, or polyubiquitin chains, serves as a signal to recruit proteins harboring ubiquitin-binding domains, thereby bringing together ubiquitinated proteins and ubiquitin receptors to execute specific biological functions. Recent advances in understanding ubiquitination in protein kinase activation and DNA repair are discussed to illustrate the nonproteolytic functions of ubiquitin in cell signaling.

Another important aspect is that ubiquitin is not one tag, but rather a collection of different tags: IOWs, a tag-based code.

See, for example, here:

The Ubiquitin Code in the Ubiquitin-Proteasome System and Autophagy

(Paywall).

Abstract:

The conjugation of the 76 amino acid protein ubiquitin to other proteins can alter the metabolic stability or non-proteolytic functions of the substrate. Once attached to a substrate (monoubiquitination), ubiquitin can itself be ubiquitinated on any of its seven lysine (Lys) residues or its N-terminal methionine (Met1). A single ubiquitin polymer may contain mixed linkages and/or two or more branches. In addition, ubiquitin can be conjugated with ubiquitin-like modifiers such as SUMO or small molecules such as phosphate. The diverse ways to assemble ubiquitin chains provide countless means to modulate biological processes. We overview here the complexity of the ubiquitin code, with an emphasis on the emerging role of linkage-specific degradation signals (degrons) in the ubiquitin-proteasome system (UPS) and the autophagy-lysosome system (hereafter autophagy).

A good review of the basics of the ubiquitin code can be found here:

The Ubiquitin Code 

(Paywall)

It is particularly relevant, from an ID point of view, to quote the starting paragraph of that paper:

When in 1532 Spanish conquistadores set foot on the Inca Empire, they found a highly organized society that did not utilize a system of writing. Instead, the Incas recorded tax payments or mythology with quipus, devices in which pieces of thread were connected through specific knots. Although the quipus have not been fully deciphered, it is thought that the knots between threads encode most of the quipus’ content. Intriguingly, cells use a regulatory mechanism—ubiquitylation—that is reminiscent of quipus: During this reaction, proteins are modified with polymeric chains in which the linkage between ubiquitin molecules encodes information about the substrate’s fate in the cell.

Now, ubiquitin is usually linked to the target protein in chains. The first ubiquitin molecule is covalently bound through its C-terminal carboxylate group to a particular lysine, cysteine, serine or threonine residue, or to the N-terminus, of the target protein.

Then, additional ubiquitins are added to form a chain, and the C-terminus of the new ubiquitin is linked to one of seven lysine residues or the first methionine residue on the previously added ubiquitin.

IOWs, each ubiquitin molecule has seven lysine residues:

K6, K11, K27, K29, K33, K48, K63

And one N terminal methionine residue:

M1

And a new ubiquitin molecule can be added at each of those 8 sites in the previous ubiquitin molecule. IOWs, those 8 sites in the molecule are configurable switches that can be used to build ubiquitin chains.

Here are the 8 sites, in red, in the ubiquitin molecule:

MQIFVKTLTGKTITLEVEPSDTIENVKAKIQDKEGIPPD

QQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG

Fig 1 shows two ubiquitin molecules joined at K48.

Fig 1 A cartoon representation of a lysine 48-linked diubiquitin molecule. The two ubiquitin chains are shown as green cartoons with each chain labelled. The components of the linkage are indicated and shown as orange sticks. By Rogerdodd (Own work) [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

The simplest type of chain is homogeneous (IOWs, ubiquitins are linked always at the same site). But many types of mixed and branched chains can also be found.

Let’s start with the most common situation: a poly-ubiquitination of (at least) 4 ubiquitins, linearly linked at K48. This is the common signal for proteasome degradation.

By the way, the 26S proteasome is another molecular machine of incredible complexity, made of more than 30 different proteins. However, its structure and function are not the object of this OP, and therefore I will not deal with them here.

The ubiquitin code is not completely understood, at present, but a few aspects have been well elucidated. Table 1 sums up the most important and well known modes:

Code / Meaning:

  • Polyubiquitination (4 or more) with links at K48 or K11: proteasomal degradation
  • Monoubiquitination (single or multiple): protein interactions, membrane trafficking, endocytosis
  • Polyubiquitination with links at K63: endocytic trafficking, inflammation, translation, DNA repair
  • Polyubiquitination with links at K63 (or other links): autophagic degradation of protein substrates
  • Polyubiquitination with links at K27, K29, K33: non-proteolytic processes
  • Rarer chain types (K6, K11): under investigation
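The known part of the code in Table 1 can be restated compactly as a toy lookup table. The tuple keys and the `decode` helper are illustrative conventions of mine, not an established nomenclature, and the real code also depends on chain length, mixing, branching, and the interactors present in each context:

```python
# Keys are (chain kind, linkage); values are the meanings from Table 1.
UBIQUITIN_CODE = {
    ("poly", "K48"): "proteasomal degradation",
    ("poly", "K11"): "proteasomal degradation",
    ("mono", None): "protein interactions, membrane trafficking, endocytosis",
    ("poly", "K63"): "endocytic trafficking, inflammation, translation, DNA repair",
    ("poly", "K27"): "non-proteolytic processes",
    ("poly", "K29"): "non-proteolytic processes",
    ("poly", "K33"): "non-proteolytic processes",
    ("poly", "K6"): "under investigation",
}

def decode(kind, linkage):
    # Unlisted combinations are simply not (yet) understood.
    return UBIQUITIN_CODE.get((kind, linkage), "unknown / context-dependent")

print(decode("poly", "K48"))  # proteasomal degradation
print(decode("mono", None))   # protein interactions, membrane trafficking, endocytosis
```

The point of the sketch is that the tag alone, not the substrate, selects the meaning: the same table is consulted whatever protein carries the chain.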

 

However, this is only a very partial approach. A recent bioinformatics paper:

An Interaction Landscape of Ubiquitin Signaling

(Paywall)

has attempted for the first time a systematic approach to deciphering the whole code, using synthetic diubiquitins (all 8 possible variants) to identify the different interactors with those signals. With two different methodologies, they identified 111 and 53 selective interactors for linear polyUb chains, respectively; 46 of those interactors were identified by both methodologies.

The translation

But what “translates” the complex ubiquitin code, allowing ubiquitinated proteins to meet the right specific destiny? Again, we can refer to the diubiquitin paper quoted above.

How do cells decode this ubiquitin code into proper cellular responses? Recent studies have indicated that members of a protein family, ubiquitin-binding proteins (UBPs), mediate the recognition of ubiquitinated substrates. UBPs contain at least one of 20 ubiquitin-binding domains (UBDs) functioning as a signal adaptor to transmit the signal from ubiquitinated substrates to downstream effectors

But what are those “interactors” identified by the paper (at least 46 of them)? They are, indeed, complex proteins which recognize specific configurations of the “tag” (the ubiquitin chain), and link the tagged (ubiquitinated) protein to other effector proteins which implement its final fate, or in any case contribute in different ways to that final outcome.

 

The basic control of the procedure: the complexity of the ubiquitination process.

So, we have seen that ubiquitin chains work as tags, and that their coded signals are translated by specific interactors, so that the target protein may be linked to its final destiny, or contribute to the desired outcome. But we must still address one question: how is the ubiquitination of the different target proteins implemented? IOWs, what is the procedure that “writes” the specific codes associated with specific target proteins?

This is indeed the first step in the whole process. But it is also the most complex, and that’s why I have left it for the final part of the discussion.

Indeed, the ubiquitination process needs to realize the following aims:

  1. Identify the specific protein to be ubiquitinated
  2. Recognize the specific context in which that protein needs to be ubiquitinated
  3. Mark the target protein with the correct tag for the required fate or outcome

We have already seen that the ubiquitin system is involved in practically all different cellular paths and activities, and therefore we can expect that the implementation of the above functions must be a very complex thing.

And it is.

Now, we can certainly imagine that there are many different layers of regulation that may contribute to the general control of the procedure, specifically epigenetic levels, which are at present poorly understood. But there is one level that we can more easily explore and understand, and it is, as usual, the functional complexity of the proteins involved.

And, even at a first gross analysis, it is really easy to see that the functional complexity implied by this process is mind-blowing.

Why? It is more than enough to consider the huge number of different proteins involved. Let’s see.

The ubiquitination process is well studied. It can be divided into three phases, each of which is implemented by a different kind of protein. The three steps, and the three kinds of proteins that implement them, take the name of E1, E2 and E3.

 

Fig. 2 Schematic diagram of the ubiquitylation system. Created by Roger B. Dodd: Rogerdodd at the English language Wikipedia [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons
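The E1 → E2 → E3 relay shown in Fig. 2 can be sketched as a toy pipeline. The function names and the string-based "thioester" notation are my own illustrative conventions; this is a cartoon of the logic, not a biochemical simulation:

```python
from dataclasses import dataclass, field

@dataclass
class Protein:
    name: str
    ub_chain: list = field(default_factory=list)  # linkage types, in order

def e1_activate(ub: str) -> str:
    # ATP-dependent activation: ubiquitin ends up thioester-linked to the
    # E1 active-site cysteine (represented here as a plain string).
    return f"{ub}~E1"

def e2_conjugate(activated: str) -> str:
    # Trans-thioesterification: the activated ubiquitin moves to the E2.
    return activated.replace("E1", "E2")

def e3_ligate(loaded_e2: str, substrate: Protein, linkage: str) -> Protein:
    # The E3 recognizes the substrate and writes one more link of the tag.
    assert loaded_e2.endswith("~E2")
    substrate.ub_chain.append(linkage)
    return substrate

target = Protein("IkB-alpha")
for _ in range(4):  # the canonical K48 tetra-ubiquitin degradation signal
    target = e3_ligate(e2_conjugate(e1_activate("Ub")), target, "K48")

print(target.ub_chain)  # ['K48', 'K48', 'K48', 'K48']
```

Note how the specificity lives entirely in the last step: E1 and E2 handle every ubiquitin the same way, while the E3 chooses both the substrate and the linkage, which is exactly the division of labor described below.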

 The E1 step of ubiquitination.

This is the first thing that happens, and it is also the simplest.

E1 is the process of activation of ubiquitin, and the E1 protein is called the E1 ubiquitin-activating enzyme. To put it simply, this enzyme “activates” the ubiquitin molecule in an ATP-dependent process, attaching it to its active-site cysteine residue and preparing it for the following phases. It is not really so simple, but for our purposes that can be enough.

This is a rather straightforward enzymatic reaction. In humans there are essentially two forms of E1 enzymes, UBA1 and UBA6, each about 1000 AAs long and partially related at the sequence level (42% identity).

 

The E2 step of ubiquitination.

The second step is ubiquitin conjugation. The activated ubiquitin is transferred from the E1 enzyme to the ubiquitin-conjugating enzyme, or E2 enzyme, where it is attached to a cysteine residue.

This apparently simple “transfer” is indeed a complex intermediate phase. Humans have about 40 different E2 molecules. The following paper:

E2 enzymes: more than just middle men

details some of the functional complexity existing at this level.

Abstract:

Ubiquitin-conjugating enzymes (E2s) are the central players in the trio of enzymes responsible for the attachment of ubiquitin (Ub) to cellular proteins. Humans have ∼40 E2s that are involved in the transfer of Ub or Ub-like (Ubl) proteins (e.g., SUMO and NEDD8). Although the majority of E2s are only twice the size of Ub, this remarkable family of enzymes performs a variety of functional roles. In this review, we summarize common functional and structural features that define unifying themes among E2s and highlight emerging concepts in the mechanism and regulation of E2s.

However, I will not go into details about these aspects, because we have better things to do: we still have to discuss the E3 phase!

 

The E3 step of ubiquitination.

This is the last phase of ubiquitination, where the ubiquitin tag is finally transferred to the target protein, either as an initial mono-ubiquitination or to build a ubiquitin chain through subsequent ubiquitination events. The proteins which implement this final step are called E3 ubiquitin ligases. Here is the definition from Wikipedia:

A ubiquitin ligase (also called an E3 ubiquitin ligase) is a protein that recruits an E2 ubiquitin-conjugating enzyme that has been loaded with ubiquitin, recognizes a protein substrate, and assists or directly catalyzes the transfer of ubiquitin from the E2 to the protein substrate.

It is rather obvious that the role of the E3 protein is very important and delicate. Indeed it:

  1. Recognizes and links the E2-ubiquitin complex
  2. Recognizes and links some specific target protein
  3. Builds the appropriate tag for that protein (monoubiquitination, multiple monoubiquitination, or polyubiquitination with the appropriate type of ubiquitin chain).
  4. And it does all those things at the right moment, in the right context, and for the right protein.

IOWs, the E3 protein writes the coded tag. It is, by all means, the central actor in our complex story.

So, here comes the really important point: how many different E3 ubiquitin ligases do we find in eukaryotic organisms? And the simple answer is: quite a lot!

Humans are supposed to have more than 600 different E3 ubiquitin ligases!

So, the human machinery for ubiquitination is about:

2 E1 proteins  –  40 E2 proteins – >600 E3 proteins

A real cascade of complexity!

OK, but even if we look at single celled eukaryotes we can already find an amazing level of complexity. In yeast, for example, we have:

1 or 2 E1 proteins  –  11 E2 proteins – 60-100 E3 proteins

See here:

The Ubiquitin–Proteasome System of Saccharomyces cerevisiae

Now, a very important point. Those 600+ E3 proteins that we find in humans are really different proteins. Of course, they have something in common: a specific domain.

From that point of view, they can be roughly classified in three groups according to the specific E3 domain:

  1. RING group: the RING finger domain (Really Interesting New Gene) is a short domain of zinc finger type, usually 40 to 60 amino acids. This is the biggest group of E3s (about 600).
  2. HECT domain (homologous to the E6AP carboxyl terminus): this is a bigger domain (about 350 AAs), located at the C terminus of the protein. It has a specific ligase activity, different from the RING type. In humans we have approximately 30 proteins of this type.
  3. RBR domain (RING between RING fingers): this is a common domain (about 150 AAs) where two RING fingers are separated by a region called IBR, a cysteine-rich zinc finger. Only a subset of these proteins are E3 ligases; in humans we have about 12 of them.

See also here.

OK, so these proteins have one of these three domains in common, usually the RING domain. The function of the domain is specifically to interact with the E2-ubiquitin complex to implement the ligase activity. But the domain is only a part of the molecule, indeed a small part of it. E3 ligases are usually big proteins (hundreds, and up to thousands, of AAs). Each of these proteins has a very specific non-domain sequence, which is probably responsible for the most important part of the function: the recognition of the specific proteins that each E3 ligase processes.

This is a huge complexity, in terms of functional information at sequence level.

Our map of the ubiquitinating system in humans could now be summarized as follows:

2 E1 proteins  –  40 E2 proteins – 600+ E3 proteins + thousands of specific substrates

IOWs, each of hundreds of different complex proteins recognizes its specific substrates and marks them with a shared symbolic code based on ubiquitin and its many possible chains. The result of that process is that proteins are destined for degradation by the proteasome or other mechanisms, that protein interactions and protein signaling are regulated and made possible, and that practically all cellular functions are allowed to flow correctly and smoothly.

Finally, here are two further components of the ubiquitination system, which I will barely mention, to avoid making this OP too long.

Ubiquitin-like proteins (UBLs):

A number of ubiquitin like proteins add to the complexity of the system. Here is the abstract from a review:

The eukaryotic ubiquitin family encompasses nearly 20 proteins that are involved in the posttranslational modification of various macromolecules. The ubiquitin-like proteins (UBLs) that are part of this family adopt the β-grasp fold that is characteristic of its founding member ubiquitin (Ub). Although structurally related, UBLs regulate a strikingly diverse set of cellular processes, including nuclear transport, proteolysis, translation, autophagy, and antiviral pathways. New UBL substrates continue to be identified and further expand the functional diversity of UBL pathways in cellular homeostasis and physiology. Here, we review recent findings on such novel substrates, mechanisms, and functions of UBLs.

These proteins include SUMO, NEDD8, ISG15, and many others.

Deubiquitinating enzymes (DUBs):

The process of ubiquitination, complex as it already is, is additionally regulated by these enzymes, which can cleave ubiquitin from proteins and other molecules. In doing so, they can reverse the effects of ubiquitination, creating a delicately balanced regulatory network. In humans there are nearly 100 DUB genes, which can be classified into two main classes: cysteine proteases and metalloproteases.
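The reversibility that DUBs introduce can be added to the toy picture with one more helper. The names are hypothetical and this models no particular real DUB; it only illustrates that writing and erasing the tag are paired, opposing operations:

```python
# Hypothetical helper names; not a model of any real DUB.
def write_tag(chain, linkage, n=4):
    """An E3 writes n ubiquitins of the given linkage onto the chain."""
    return chain + [linkage] * n

def dub_cleave(chain, n=1):
    """A DUB removes up to n ubiquitins from the distal end of the chain."""
    return chain[:-n] if n <= len(chain) else []

chain = write_tag([], "K48")   # tagged for proteasomal degradation
chain = dub_cleave(chain, 4)   # the DUB erases the mark before degradation
print(chain)                   # []
```

Because every tag can in principle be trimmed or removed, the fate of a substrate is set by the running balance between ligases and DUBs, not by any single irreversible event.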

 

By the way, here is a beautiful animation of the basic working of the ubiquitin-proteasome system in degrading damaged proteins:

 

 

A summary:

So, let’s try a final graphic summary of the whole ubiquitin system in humans:

Fig 3 A graphic summary of the Ubiquitin System

 

Evolution of the Ubiquitin system?

The Ubiquitin system is essentially a eukaryotic tool. Of course, distant precursors for some of the main components have been “found” in prokaryotes. Here is the abstract from a paper that sums up what is known about the prokaryotic “origins” of the system:

Structure and evolution of ubiquitin and ubiquitin-related domains.

(Paywall)

Abstract:

Since its discovery over three decades ago, it has become abundantly clear that the ubiquitin (Ub) system is a quintessential feature of all aspects of eukaryotic biology. At the heart of the system lies the conjugation and deconjugation of Ub and Ub-like (Ubls) proteins to proteins or lipids drastically altering the biochemistry of the targeted molecules. In particular, it represents the primary mechanism by which protein stability is regulated in eukaryotes. Ub/Ubls are typified by the β-grasp fold (β-GF) that has additionally been recruited for a strikingly diverse range of biochemical functions. These include catalytic roles (e.g., NUDIX phosphohydrolases), scaffolding of iron-sulfur clusters, binding of RNA and other biomolecules such as co-factors, sulfur transfer in biosynthesis of diverse metabolites, and as mediators of key protein-protein interactions in practically every conceivable cellular context. In this chapter, we present a synthetic overview of the structure, evolution, and natural classification of Ub, Ubls, and other members of the β-GF. The β-GF appears to have differentiated into at least seven clades by the time of the last universal common ancestor of all extant organisms, encompassing much of the structural diversity observed in extant versions. The β-GF appears to have first emerged in the context of translation-related RNA-interactions and subsequently exploded to occupy various functional niches. Most biochemical diversification of the fold occurred in prokaryotes, with the eukaryotic phase of its evolution mainly marked by the expansion of the Ubl clade of the β-GF. Consequently, at least 70 distinct Ubl families are distributed across eukaryotes, of which nearly 20 families were already present in the eukaryotic common ancestor. These included multiple protein and one lipid conjugated forms and versions that functions as adapter domains in multimodule polypeptides. 
The early diversification of the Ubl families in eukaryotes played a major role in the emergence of characteristic eukaryotic cellular substructures and systems pertaining to nucleo-cytoplasmic compartmentalization, vesicular trafficking, lysosomal targeting, protein processing in the endoplasmic reticulum, and chromatin dynamics. Recent results from comparative genomics indicate that precursors of the eukaryotic Ub-system were already present in prokaryotes. The most basic versions are those combining an Ubl and an E1-like enzyme involved in metabolic pathways related to metallopterin, thiamine, cysteine, siderophore and perhaps modified base biosynthesis. Some of these versions also appear to have given rise to simple protein-tagging systems such as Sampylation in archaea and Urmylation in eukaryotes. However, other prokaryotic systems with Ubls of the YukD and other families, including one very close to Ub itself, developed additional elements that more closely resemble the eukaryotic state in possessing an E2, a RING-type E3, or both of these components. Additionally, prokaryotes have evolved conjugation systems that are independent of Ub ligases, such as the Pup system.

 

As usual, we are dealing here with distant similarities, but there is no doubt that the ubiquitin system as we know it appears in eukaryotes.

But what about its evolutionary history in eukaryotes?

We have already mentioned the extremely high conservation of ubiquitin itself.

UBA1, the main E1 enzyme, is rather well conserved from fungi to humans: 60% identity, 1282 bits, 1.21 bits per amino acid (baa).

E2s are small enzymes, extremely conserved from fungi to humans: 86% identity, for example, for UB2D2, a 147 AAs molecule.

E3s, of course, are the most interesting issue. This big family of proteins behaves in different ways, consistently with its highly specific functions.

It is difficult to build a complete list of E3 proteins. I have downloaded from Uniprot a list of reviewed human proteins including “E3 ubiquitin ligase” in their name: a total of 223 proteins.

The evolutionary behavior of this group in metazoa differs considerably from protein to protein. However, as a group these proteins exhibit an information jump in vertebrates which is significantly higher than the jump in all other proteins:

 

Fig. 4 Boxplots of the distribution of human conserved information jump from pre-vertebrates to vertebrates in 223 E3 ligase proteins and in all other human proteins. The difference is highly significant.

 

As we already know, this is evidence that this class of proteins is highly engineered in the transition to vertebrates. That is consistent with the need to finely regulate many cellular processes, most of which are certainly highly specific for different groups of organisms.

The highest vertebrate jump, in terms of bits per amino acid, is shown in my group by the E3 ligase TRIM62, also known as DEAR1 (Q9BVG3), a 475 AA long protein almost absent in pre-vertebrates (best hit 129 bits, 0.27 baa in Branchiostoma belcheri), which shows an amazing jump of 1.433684 baa in cartilaginous fish (810 bits, 1.705263 baa).

But what is this protein? It is a master regulator and tumor suppressor gene, implicated in immunity, inflammation, and tumorigenesis.

See here:

TRIM Protein-Mediated Regulation of Inflammatory and Innate Immune Signaling and Its Association with Antiretroviral Activity

and here:

DEAR1 is a Chromosome 1p35 Tumor Suppressor and Master Regulator of TGFβ-Driven Epithelial-Mesenchymal Transition

This is just to show what a single E3 ligase can be involved in!

An opposite example, from the point of view of evolutionary history, is SIAH1, an E3 ligase implicated in proteasomal degradation of proteins. It is a 282 AA long protein which already exhibits 1.787234 baa (504 bits) of homology in deuterostomes, and indeed already 1.719858 baa in cnidaria. In fungi, however, the best hit is only 50.8 bits (0.18 baa). So, this is a protein whose engineering takes place at the start of metazoa, and which exhibits only a minor further jump in vertebrates (0.29 baa), a jump which brings the protein practically to its human form already in cartilaginous fish (280 identities out of 282, 99%). Practically a record.

So, we can see that E3 ligases are a good example of a class of proteins which perform different specific functions, and which therefore exhibit different evolutionary histories: some, like TRIM62, are vertebrate quasi-novelties; others, like SIAH1, are metazoan quasi-novelties. And, of course, there are other behaviours, like that of BRCA1 (Breast cancer type 1 susceptibility protein), a protein 1863 AAs long which acquires part of its final human sequence configuration only in mammals.

The following figure shows the evolutionary history of the three proteins mentioned above.

 

Fig. 5 Evolutionary history in metazoa of three E3 ligases (human conserved functional information)

 

An interesting example: NF-kB signaling

I will discuss briefly an example of how the Ubiquitin system interacts with some specific and complex final effector system. One of the best models for that is the NF-kB signaling.

NF-kB is a transcription factor family that is the final effector of a complex signaling pathway. I will rely mainly on the following recent free paper:

The Ubiquitination of NF-κB Subunits in the Control of Transcription

Here is the abstract:

Nuclear factor (NF)-κB has evolved as a latent, inducible family of transcription factors fundamental in the control of the inflammatory response. The transcription of hundreds of genes involved in inflammation and immune homeostasis require NF-κB, necessitating the need for its strict control. The inducible ubiquitination and proteasomal degradation of the cytoplasmic inhibitor of κB (IκB) proteins promotes the nuclear translocation and transcriptional activity of NF-κB. More recently, an additional role for ubiquitination in the regulation of NF-κB activity has been identified. In this case, the ubiquitination and degradation of the NF-κB subunits themselves plays a critical role in the termination of NF-κB activity and the associated transcriptional response. While there is still much to discover, a number of NF-κB ubiquitin ligases and deubiquitinases have now been identified which coordinate to regulate the NF-κB transcriptional response. This review will focus the regulation of NF-κB subunits by ubiquitination, the key regulatory components and their impact on NF-κB directed transcription.

 

The following figure sums up the main features of the canonical activation pathway:

 

Fig. 6 A simple summary of the main steps in the canonical activation pathway of NF-kB

 

Here the NF-κB TF is essentially the heterodimer RelA – p50. Before activation, the NF-κB (RelA – p50) dimer is kept in an inactive state and remains in the cytoplasm because it is linked to the IkB alpha protein, an inhibitor of its function.

Activation is mediated by a signal-receptor interaction, which starts the whole pathway. A lot of different signals can do that, adding to the complexity, but we will not discuss this part here.

As a consequence of receptor activation, another protein complex, IκB kinase (IKK), accomplishes the phosphorylation of IκBα at serines 32 and 36. This is the signal for the ubiquitination of the IkB alpha inhibitor.

This ubiquitination targets IkB alpha for proteasomal degradation. But how is it achieved?

Well, things are not so simple. A whole protein complex is necessary, a complex which implements many different ubiquitinations in different contexts, including this one.

The complex is made by 3 basic proteins:

  • Cul1 (a scaffold protein, 776 AAs)
  • SKP1 (an adaptor protein, 163 AAs)
  • Rbx1 (a RING finger protein with E3 ligase activity, 108 AAs)

Plus:

  • An F-box protein (FBP), which changes with the context and confers substrate specificity.

In our context, the F-box protein is beta-TrCP (605 AAs).

 

Fig. 7 A simple diagram of the SKP1 – beta-TrCP complex

 

Once the IkB alpha inhibitor is ubiquitinated and degraded in the proteasome, the NF-κB dimer is free to translocate to the nucleus and implement its function as a transcription factor (which is another complex issue, that we will not discuss).

OK, this is only the canonical activation of the pathway.

In the non canonical pathway (not shown in the figure) a different set of signals, receptors and activators acts on a different NF-κB dimer (RelB – p100). This dimer is not linked to any inhibitor, but is itself inactive in the cytoplasm. As a result of the signal, p100 is phosphorylated at serines 866 and 870. Again, this is the signal for ubiquitination.

This ubiquitination is performed by the same complex described above, but the result is different. p100 is only partially degraded in the proteasome, being transformed into a smaller protein, p52, which remains linked to RelB. The RelB – p52 dimer is now an active NF-κB transcription factor, and it can relocate to the nucleus and act there.

But that’s not all.

  • You may remember that RelA (also called p65) is one of the two components of the NF-kB TF in the canonical pathway (the other being p50). Well, RelA is heavily controlled by ubiquitination after it binds DNA in the nucleus to implement its TF activity. Ubiquitination (a very complex form of it) helps detachment of the TF from DNA, and its controlled degradation, avoiding sustained expression of NF-κB-dependent genes. For more details, see section 4 in the above quoted paper: “Ubiquitination of NF-κB”.
  • The activation of IKK in both the canonical and non canonical pathway after signal – receptor interaction is not so simple as depicted in Fig. 6. For more details, look at Fig. 1 in this paper: Ubiquitin Signaling in the NF-κB Pathway. You can see that, in the canonical pathway, the activation of IKK is mediated by many proteins, including TRAF2, TRAF6, TAK1, NEMO.
  • TRAF2 is a key regulator of many signaling pathways, including NF-kB. It is an E3 ubiquitin ligase. From Uniprot:  “Has E3 ubiquitin-protein ligase activity and promotes ‘Lys-63’-linked ubiquitination of target proteins, such as BIRC3, RIPK1 and TICAM1. Is an essential constituent of several E3 ubiquitin-protein ligase complexes, where it promotes the ubiquitination of target proteins by bringing them into contact with other E3 ubiquitin ligases.”
  • The same is true of TRAF6.
  • NEMO (NF-kappa-B essential modulator) is also a key regulator. It is not a ubiquitinating enzyme; rather, it is heavily regulated by ubiquitination. From Uniprot: “Regulatory subunit of the IKK core complex which phosphorylates inhibitors of NF-kappa-B thus leading to the dissociation of the inhibitor/NF-kappa-B complex and ultimately the degradation of the inhibitor. Its binding to scaffolding polyubiquitin seems to play a role in IKK activation by multiple signaling receptor pathways. However, the specific type of polyubiquitin recognized upon cell stimulation (either ‘Lys-63’-linked or linear polyubiquitin) and its functional importance is reported conflictingly.”
  • In the non canonical pathway, the activation of IKK alpha after signal – receptor interaction is mediated by other proteins, in particular one protein called NIK (see again Fig. 1 quoted above). Well, NIK is regulated by two different types of E3 ligases, with two different types of polyubiquitination:
    • cIAP E3 ligase inactivates it by constant degradation using a K48 chain
    • ZFP91 E3 ligase stabilizes it using a K63 chain

See here:

Non-canonical NF-κB signaling pathway.

In particular, Fig. 3

These are only some of the ways the ubiquitin system interacts with the very complex NF-kB signaling system. I hope that’s enough to show how two completely different and complex biological systems manage to cooperate by intricate multiple connections, and how the ubiquitin system can intervene at all levels of another process. What is true for the NF-kB signaling pathway is equally true for a lot of other biological systems, indeed for almost all basic cellular processes.

But this OP is already too long, and I have to stop here.

As usual, I want to close with a brief summary of the main points:

  1. The Ubiquitin system is a very important regulation network that shows two different signatures of design: amazing complexity and an articulated semiotic structure.
  2. The complexity is obvious at all levels of the network, but is especially amazing at the level of the hundreds of E3 ligases, that can recognize thousands of different substrates in different contexts.
  3. The semiosis is obvious in the Ubiquitin Code, a symbolic code of different ubiquitin configurations which serve as specific “tags” that point to different outcomes.
  4. The code is universally implemented and shared in eukaryotes, and allows control over almost all of the most important cellular processes.
  5. The code is written by the hundreds of E3 ligases. It is read by the many interactors with ubiquitin-binding domains (UBDs).
  6. The final outcome is of different types, including degradation, endocytosis, protein signaling, and so on.
  7. The interaction of the Ubiquitin System with other complex cellular pathways, like signaling pathways, is extremely complex and various, and happens at many different levels and by many different interacting proteins for each single pathway.
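The “writer / tag / reader” structure summarized in points 3–5 can be caricatured in a few lines of code: a purely conventional lookup table that maps a ubiquitin tag to an outcome. This is an illustrative toy, with pairings simplified from examples mentioned in this OP (K48 chains marking for degradation, K63 chains for signaling/stabilization, and so on), not an authoritative rendering of the ubiquitin code.

```python
# Toy model of the "ubiquitin code": the mapping from tag to outcome is
# arbitrary in the semiotic sense discussed in the OP -- nothing about the
# chain topology itself entails the outcome; "readers" decode it by
# convention. The pairings below are simplified illustrations only.
UBIQUITIN_CODE = {
    "K48-polyubiquitin": "proteasomal degradation",
    "K63-polyubiquitin": "signaling / stabilization",
    "monoubiquitin": "endocytosis",
}

def read_tag(chain_type):
    """Stand-in for a 'reader' protein with a ubiquitin-binding domain."""
    return UBIQUITIN_CODE.get(chain_type, "unknown outcome")

print(read_tag("K48-polyubiquitin"))  # proteasomal degradation
print(read_tag("K63-polyubiquitin"))  # signaling / stabilization
```

The point of the toy is only that changing the table (the convention) changes every outcome while leaving the keys untouched, which is what the OP means by an arbitrary mapping.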

PS:

Thanks to DATCG for pointing to this video in three parts by Dr. Raymond Deshaies, who was Professor of Biology at the California Institute of Technology and an Investigator of the Howard Hughes Medical Institute, on the iBiology YouTube page:

A primer on the ubiquitin-proteasome system

 

Cullin-RING ubiquitin ligases: structure, mechanism, and regulation

 

Targeting the ubiquitin-proteasome system in cancer

Comments
gpuccio- I get it. Keep up your good work. They are useful for helping refine arguments. I will give them that. Unfortunately they aren't any good at forming coherent arguments. For example Glen Davidson brings up developmental biology without realizing his position doesn't have anything to account for it.ET
April 2, 2018 at 08:41 AM PDT
ET: "This is a discussion?" I would say it is. As you can see, some good refinement of the issues has emerged. :) "Sure you are working on your argument which is always good but don’t think that you will convince the TSZ ilk." But my purpose is never to convince. This is about trying to understand truth. Of course the other discussant must express ideas, good or bad that they are, for the discussion to go on. I think Entropy has done that. Others have not. The only bad discussant is one that does not express any personal ideas.gpuccio
April 2, 2018 at 08:35 AM PDT
Until someone comes up with a way to test Common Descent the concept is not scientific. There aren't any known mechanisms capable of the feat so that would be a major problem. And as I said above we don't even know what makes an organism what it is. From Chapter VI of “Why is a Fly not a Horse?” (the chapter carries the same title as the book):
”The scientist enjoys a privilege denied the theologian. To any question, even one central to his theories, he may reply “I’m sorry but I do not know.” This is the only honest answer to the question posed by the title of this chapter. We are fully aware of what makes a flower red rather than white, what it is that prevents a dwarf from growing taller, or what goes wrong in a paraplegic or a thalassemic. But the mystery of species eludes us, and we have made no progress beyond what we already have long known, namely, that a kitty is born because its mother was a she-cat that mated with a tom, and that a fly emerges as a fly larva from a fly egg.”
No one knows what determines formET
April 2, 2018 at 08:33 AM PDT
Entropy at TSZ: The last point: Semiosis. My statement: 3) Semiosis is a feature that by its very form is never found in non-design systems, and clearly points to design. First level:
“Semiosis” is your inability to understand the concept, the problem, and the subjectivity, of anthropomorphism. You look at a system described in human terms (naturally, since it’s humans doing the describing), and take the metaphors and analogies to heart: therefore semiosis! It’s like ET’s problem with “teleology in biology,” which is yet another example of anthropomorphism.
Second level:
3. You might have never thought of the problem of anthropomorphism, or you think that’s an obvious inference, rather than an inclination from the fact that you’re human. That might be your true handicap. I have to warn you though, that convincing someone like me that your anthropomorphisms are anything but, might be a titanic task.
OK, let's start from the titanic task: that's not a problem, because I don't want to convince anyone, least of all "someone like you". :) I just want to clarify my arguments. Then anyone can decide for himself. That's the true aim of a good discussion: to refine and clarify the arguments, not to convince.

Let's go to the "problem" of anthropomorphism. You say: "You look at a system described in human terms (naturally, since it’s humans doing the describing)" And here we agree. Of course, you will also agree that all science is about "looking at systems described in human terms". All science is human, as far as I know. So, if we agree on that, let's go on.

You go on: "and take the metaphors and analogies to heart: therefore semiosis!" I don't understand what metaphors and analogies you are talking about! A semiotic system is a system which uses some form of symbolic code. A symbolic code is a code where something represents something else by some arbitrary mapping. The genetic code is a symbolic code, because the mapping from codons to AAs is arbitrary. See also my comment #227 to Bob O'H in this other thread: https://uncommondescent.com/intelligent-design/how-some-materialists-are-blinded-by-their-faith-commitments/ The ubiquitin code is a symbolic code, because the mapping from different ubiquitin tags to different outcomes is arbitrary. These are objective properties of the systems we are considering, not metaphors or analogies. Yes, they are described in human terms. Like all science. But they are not metaphors or analogies.

Your second level does not add anything to your wrong argument about anthropomorphism, so I think we are done here. In the next post, I will answer your answers to my previous request.gpuccio
April 2, 2018 at 08:31 AM PDT
Entropy at TSZ: So, let's go on. Second point: Irreducible complexity My statement: 2) Irreducible complexity multiplies the functional complexity of the individual components of the system. GlenDavidson seems to agree (see #574) What do you have to say? First level:
So this is but insistence on point number 1.
Yes, it is. With exponential increase. Second level:
2. You really think that irreducible complexity adds to the bits “problem.” See above.
It certainly adds. And the bits problem is the fundamental problem, whatever you say. So, that was easy enough. More in next post.gpuccio
April 2, 2018 at 08:11 AM PDT
Corneel at TSZ: Let's try not to debate just for the sake of it, while we agree. We agree on CD. How I manage my discussion with others who are not sure about that is another matter. I have never been reluctant to defend CD, as you know. The idea that the origin of information remains the main point is true. It is connected with CD, but only in part. My arguments depend on CD. But it is true that if CD were false, then the whole neo-darwinian explanation would be false by default, because it needs CD, while a design hypothesis does not necessarily need it. That's the only "true and reasonable consideration" I was referring to.

That said, let's stop this useless discussion. I believe in CD, period. I don't expect to be commended by darwinists for that, but frankly I did not expect to be attacked for not defending CD, when that's the only thing I have ever done here about this issue! You say: "I am very happy that you promote common descent and (a limited form of) natural selection on UD, because if you were to succeed in persuading your ID friends, it would allow us to move on to more interesting discussions than “everything related to evolution must be false”." I agree. That's why I do it. However, let me remind you that many important IDists, like Behe, do believe in CD. And, of course, in a limited form of NS. My only criterion is facts. I only defend ideas that, IMO, are strongly supported by facts.

Then you ask: "And I have a question about your plots of human conserved functional information against the estimated time since divergence. You use them to infer that information jumps have taken place wherever there appears to be a steep increase in the bit score, is that correct? My question is: what would a plot look like from a protein neutrally evolving at a constant substitution rate? Could you generate such a plot with simulated data?" I am not sure I understand. Of course we can plot anything. Let's see if I understand.
Let's say that we have a jump of, say, 1 baa from pre-vertebrates to vertebrates. Of course, that appears like a steep increase, because it is a lot of information (about half the total information of the protein) and the time window is not so big. Now, let's say that the protein is 500 AAs long. Then 1 baa corresponds more or less to 250 new identities in the sequence. I could certainly simulate the appearance of 250 1-AA transitions in the same period. The plot would remain the same. Is that what you mean? Are you proposing that the transition happened, in 30 million years, by 250 single transitions of 1 AA? That would be the neo-darwinist idea, wouldn't it? And of course each single mutation gave a reproductive advantage, increasing the function of that particular protein, didn't it? And so each mutation was fixed (rather quickly, I would say), and completely obliterated the previous state, didn't it? Isn't that the neo-darwinian scenario? So, please, defend that scenario by even a trace of evidence. Or of credibility. Because I can see none. But please, don't repeat the ridiculous claim that "you have not demonstrated that it is really impossible, so we win". Because that's not how science works. In the meantime, please, could you try to answer my often repeated challenge, which no one has ever tried to answer until now? You will easily understand that it is very relevant to your discussion. I copy it here again:
Will anyone on the other side answer the following two simple questions? 1) Is there any conceptual reason why we should believe that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases? 2) Is there any evidence from facts that supports the hypothesis that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases?
gpuccio
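As a side note, the kind of simulated plot Corneel asks about above is easy to sketch. The toy model below is a hypothetical illustration under assumed parameters (a 500-AA protein, the 20-letter amino acid alphabet, a constant 100 random substitutions per arbitrary time unit); it only shows that neutral change at a constant rate produces a smooth decay of identity to the ancestor, rather than a step at a particular time point.

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible
AAS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def diverge(seq, n_subs):
    """Apply n_subs substitutions at random positions (neutral toy model)."""
    s = list(seq)
    for _ in range(n_subs):
        i = random.randrange(len(s))
        s[i] = random.choice(AAS)
    return "".join(s)

def identity(a, b):
    """Fraction of aligned positions that are identical."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

ancestor = "".join(random.choice(AAS) for _ in range(500))
seq = ancestor
for t in range(6):  # arbitrary time units; identity decays smoothly
    print(t, round(identity(ancestor, seq), 2))
    seq = diverge(seq, 100)
```

Plotting identity (or a bit-score proxy) against t gives a smooth decay toward the ~5% random-match baseline expected for a 20-letter alphabet.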
April 2, 2018 at 08:02 AM PDT
One has to wonder: Is the debate over? We continually see our side having all the arguments and the other side having none. Considering something like the Ubiquitin system, what is the best argument for unguided evolution? I have no idea. In fact, I am not aware of a good argument at all. The only way for them to stay in the discussion — to maintain the illusion that there is an ongoing balanced debate — is by misunderstanding the ID arguments. And surely, they will never stop doing so, because, once they have given up on that tactic, the whole atheistic edifice comes crashing down.Origenes
April 2, 2018 at 07:04 AM PDT
OMagain:
Well, ET, care to name a single person who needs their designs to be complex multilayered interlocking messes?
No one designed your brain, OM. But I have a change- your brain is simple and not complex. But it is an interlocking mess. I can't name a single person capable of designing a living organism. Can you, OM? Can you find any evidence that blind and mindless processes can produce the ubiquitin system? No- then stop whining and get to workET
April 2, 2018 at 07:01 AM PDT
This is a discussion? The other person is just hand-waving and poo-poo'ing. Sure you are working on your argument which is always good but don't think that you will convince the TSZ ilk. They don't understand science and how to assess evidence. And they definitely will never post anything in support of evolutionism. Heck to them it's all "settled science"- except it isn't settled and it isn't science. The point being is they can't even give us something so that we can compare. How cowardly is that?ET
April 2, 2018 at 06:56 AM PDT
Entropy: I have seen your answers, thank you. That's exactly what I mean by a "discussion". I will complete my comments on your previous post, and then I will answer your answers. Let's go on this way, until it is possible.gpuccio
April 2, 2018 at 06:24 AM PDT
Entropy: So, let's finally come to the discussion of your points. In your post at TSZ, you comment on my three points at two different levels, so I will join the two levels together, for better clarity. And the first point is: Functional information. My original statement is: 1) Functional complexity beyond some appropriate threshold (500 bits will do in all contexts) clearly allows a design inference. This is an old and fundamental point, I would say the foundation itself of ID. Let's see what you have to say: First level:
As I said, they just point to complexity and think that makes their absurd imaginary friend real. They compound it with misunderstood information theory and misapplied bit scores, but, in the end, it reduces to their inability to understand how could nature do something, therefore god-did-it. Same old god-of-the-gaps in disguise.
Second level:
1. You really think that talking about 500 bits is impressive and beyond nature. I can explain to you why I find that unconvincing. If it was impossible for nature to put that amount of information together, then it would be impossible for designers to put that amount of information together. How so? Well, in order for designers to put that amount of information together, energy flow is necessary. Putting information together consists on “transforming” energy flow into patterns. We don’t produce available energy. We’re completely dependent on nature for that. So, claiming that a designer is necessary to produce “information,” seems a lot like putting the cart before the horse. At the same time, enormous amounts of energy flow transforming into patterns happen all the time regardless of designers. So 500 bits? A joke for natural processes. Natural phenomena have energy flows to spare. So unimaginable, so unmanageable, so out of reach to any designers, that it makes the bits claim pathetic. Ants would be more justified in claiming that all the volcanoes in the planet cannot move as much material as a single ant colony.
Your "first level" is not so interesting. A God-of-the-gaps "argument" again, wholly unsubstantiated. With some mysterious reference to some "absurd imaginary friend", and to "misunderstood information theory" and to "misapplied bit scores" and to "nature". What a mess! I am afraid that you must be more specific.

a) Who is the "absurd imaginary friend"? Where did I mention such a concept?

b) What is misunderstood in my dealing with information, and with functional information in particular? Please, clarify the correct understanding.

c) In what sense are bit scores misapplied in my reasonings? Please, clarify how they should be correctly applied.

d) The problem of "nature" will be dealt with in more detail in the following discussion.

So, let's go to the second level, where you become a little more specific. "If it was impossible for nature to put that amount of information together, then it would be impossible for designers to put that amount of information together. How so? Well, in order for designers to put that amount of information together, energy flow is necessary. Putting information together consists on “transforming” energy flow into patterns. We don’t produce available energy. We’re completely dependent on nature for that. So, claiming that a designer is necessary to produce “information,” seems a lot like putting the cart before the horse." I can't follow your reasoning. Yes, designers use energy to create patterns. And so? The whole point is that non-design systems cannot create complex functional patterns, whatever the available energy. Conscious understanding and purpose are necessary to "put that amount of information together", as you say. Conscious systems can do that. Non-conscious systems cannot do that, even if the necessary energy is available. Because energy is not all that is needed. Functional information is needed, and that information derives from the subjective experiences of understanding meaning and having purpose.
You say: "At the same time, enormous amounts of energy flow transforming into patterns happen all the time regardless of designers." Patterns maybe, but never functional information. Please give one example where a flow of energy is transformed into more than 500 bits of functional information in a non-conscious system. I am not holding my breath. If you lack a clear definition of functional information, please look at this old OP of mine: Functional information defined https://uncommondescent.com/intelligent-design/functional-information-defined/ "So 500 bits? A joke for natural processes." Then show one single example of that. "Natural phenomena have energy flows to spare." Energy, but not functional information. "So unimaginable, so unmanageable, so out of reach to any designers, that it makes the bits claim pathetic." The only pathetic thing here is your unsupported statements. Look, just to be simple:

a) This comment is, at this point, almost 5000 characters long. That makes a total complexity of about 20000 bits. That means, certainly, more than 500 bits of functional information. To understand why, please read this OP of mine: An attempt at computing dFSCI for English language https://uncommondescent.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/ So, this is a clear demonstration that a simple conscious agent like me can easily generate more than 500 bits of functional information.

b) No non-conscious system has ever been observed to do that. Please, feel free to show counter-examples, if you can.

I will not comment about ants and volcanoes, just to be kind. More in next post.gpuccio
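For the record, the arithmetic behind point a) can be checked in one line. Note that “about 20000 bits for 5000 characters” implies roughly 4 bits per character, i.e. a 16-symbol alphabet; that figure, like the alternative 27-symbol alphabet shown below, is an assumption of this sketch, not a number taken from the linked OP.

```python
import math

def total_complexity_bits(n_chars, alphabet_size):
    """Search-space complexity of a string: n_chars * log2(alphabet_size)."""
    return n_chars * math.log2(alphabet_size)

# ~5000 characters at 4 bits/char (16-symbol alphabet) gives 20000 bits;
# a 27-symbol alphabet (26 letters + space) would give ~23774 bits.
print(round(total_complexity_bits(5000, 16)))  # 20000
print(round(total_complexity_bits(5000, 27)))  # 23774
```

Either way the total is far above the 500-bit threshold discussed in the comment; functional (as opposed to total) information is a separate question, treated in the linked OPs.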
April 2, 2018 at 06:19 AM PDT
As predicted OMagain failed to support its claim. It can only think of one system that can produce the ubiquitin system- yet it never says what that is nor provides any evidence for it. The only thing that will ever convince these people of the legitimacy of ID is a meeting with the Intelligent Designer(s). The TSZ ilk are useless, anti-science and angry. keiths has been refuted more times than anyone else ever and it still prattles on unabated. Not one of them can form a coherent argument.ET
April 2, 2018 at 06:08 AM PDT
Entropy:
That’s why I explained. If only you had read the whole comment you’d have some idea.
Again you misunderstand me. I just meant that I would answer with explicit references to your points (see the last phrase at #576). I have been delayed by lack of time and by some distractions from your colleagues.gpuccio
April 2, 2018 at 05:51 AM PDT
OMagain at TSZ: For the generic objections to ID, please see my future comments to Entropy, which are coming (if others like you do not distract me too much). You ask: "Please feel free to go into detail regarding these “severe limits” and how you have determined that they exist at all." I have dedicated two whole OPs and long following discussions to the limits of NS and RV, with a lot of detail. Here they are: What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson https://uncommondescent.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ And: What are the limits of Random Variation? A simple evaluation of the probabilistic resources of our biological world https://uncommondescent.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ Please, feel free to read them and to comment. I will answer.gpuccio
April 2, 2018 at 05:49 AM PDT
bornagain77: Hi BA, Welcome to the discussion! :)gpuccio
April 2, 2018 at 05:39 AM PDT
as to Glen's claim that IDists suffer from the Dunning–Kruger effect
“In the field of psychology, the Dunning–Kruger effect is a cognitive bias wherein people of low ability suffer from illusory superiority, mistakenly assessing their cognitive ability as greater than it is. The cognitive bias of illusory superiority derives from the metacognitive inability of low-ability persons to recognize their own ineptitude; without the self-awareness of metacognition, low-ability people cannot objectively evaluate their actual competence or incompetence.”
That is an interesting claim to be coming from what is essentially a 'neuronal illusion',,,
The Confidence of Jerry Coyne – January 2014 Excerpt: Well and good. But then halfway through this peroration, we have as an aside the confession that yes, okay, it’s quite possible given materialist premises that “our sense of self is a neuronal illusion.” At which point the entire edifice suddenly looks terribly wobbly — because who, exactly, is doing all of this forging and shaping and purpose-creating if Jerry Coyne, as I understand him (and I assume he understands himself) quite possibly does not actually exist at all? The theme of his argument is the crucial importance of human agency under eliminative materialism, but if under materialist premises the actual agent is quite possibly a fiction, then who exactly is this I who “reads” and “learns” and “teaches,” and why in the universe’s name should my illusory self believe Coyne’s bold proclamation that his illusory self’s purposes are somehow “real” and worthy of devotion and pursuit? (Let alone that they’re morally significant: But more on that below.) http://douthat.blogs.nytimes.com/2014/01/06/the-confidence-of-jerry-coyne/?_php=true&_type=blogs&_r=0
,,, a neuronal illusion who has the illusion of free will,,,
Sam Harris's Free Will: The Medial Pre-Frontal Cortex Did It - Martin Cothran - November 9, 2012 Excerpt: There is something ironic about the position of thinkers like Harris on issues like this: they claim that their position is the result of the irresistible necessity of logic (in fact, they pride themselves on their logic). Their belief is the consequent, in a ground/consequent relation between their evidence and their conclusion. But their very stated position is that any mental state -- including their position on this issue -- is the effect of a physical, not logical cause. By their own logic, it isn't logic that demands their assent to the claim that free will is an illusion, but the prior chemical state of their brains. The only condition under which we could possibly find their argument convincing is if they are not true. The claim that free will is an illusion requires the possibility that minds have the freedom to assent to a logical argument, a freedom denied by the claim itself. It is an assent that must, in order to remain logical and not physiological, presume a perspective outside the physical order. http://www.evolutionnews.org/2012/11/sam_harriss_fre066221.html
,,, a neuronal illusion who has illusory perceptions of reality,,,
Donald Hoffman: Do we see reality as it is? - Video - 9:59 minute mark Quote: “fitness does depend on reality as it is, yes.,,, Fitness is not the same thing as reality as it is, and it is fitness, and not reality as it is, that figures centrally in the equations of evolution. So, in my lab, we have run hundreds of thousands of evolutionary game simulations with lots of different randomly chosen worlds and organisms that compete for resources in those worlds. Some of the organisms see all of the reality. Others see just part of the reality. And some see none of the reality. Only fitness. Who wins? Well I hate to break it to you but perception of reality goes extinct. In almost every simulation, organisms that see none of reality, but are just tuned to fitness, drive to extinction that perceive reality as it is. So the bottom line is, evolution does not favor veridical, or accurate perceptions. Those (accurate) perceptions of reality go extinct. Now this is a bit stunning. How can it be that not seeing the world accurately gives us a survival advantage?” https://youtu.be/oYp5XuGYqqY?t=601
,,, a neuronal illusion who, since he has no real time empirical evidence substantiating his grandiose claims, must make up illusory "just so stories",,,
Sociobiology: The Art of Story Telling – Stephen Jay Gould – 1978 – New Scientist Excerpt: Rudyard Kipling asked how the leopard got its spots, the rhino its wrinkled skin. He called his answers “Just So stories”. When evolutionists study individual adaptations, when they try to explain form and behaviour by reconstructing history and assessing current utility, they also tell just so stories – and the agent is natural selection. Virtuosity in invention replaces testability as the criterion for acceptance. https://books.google.com/books?id=tRj7EyRFVqYC&pg=PA530
,,,, a neuronal illusion who makes up illusory just so stories with the illusory, and impotent, 'designer substitute' of natural selection,,,,
“Darwinism provided an explanation for the appearance of design, and argued that there is no Designer — or, if you will, the designer is natural selection. If that’s out of the way — if that (natural selection) just does not explain the evidence — then the flip side of that is, well, things appear designed because they are designed.” Richard Sternberg – Living Waters documentary Whale Evolution vs. Population Genetics – Richard Sternberg and Paul Nelson – (excerpt from Living Waters video) https://www.youtube.com/watch?v=0csd3M4bc0Q
,,, to 'explain away' the appearance (illusion) of design,,
"Organisms appear as if they had been designed to perform in an astonishingly efficient way, and the human mind therefore finds it hard to accept that there need be no Designer to achieve this" Francis Crick - What Mad Pursuit - p. 30 “Biologists must constantly keep in mind that what they see was not designed, but rather evolved.” Francis Crick – What Mad Pursuit - p. 138 (1990) living organisms "appear to have been carefully and artfully designed" Richard C. Lewontin - Adaptation,” Scientific American, and Scientific American book 'Evolution' (September 1978)
,,, a neuronal illusion who must make up illusory meanings and purposes for his life since the reality of the nihilism inherent in his atheistic worldview is too much to bear,,,
Do atheists find meaning in life from inventing fairy tales? - March 2018 Excerpt: The survey admitted the meaning that atheists and non-religious people found in their lives is entirely self-invented. According to the survey, they embraced the position: “Life is only meaningful if you provide the meaning yourself.” https://uncommondescent.com/culture/do-atheists-find-meaning-in-life-from-inventing-fairy-tales/
Other than all that I guess Glen may have a point that this 'low ability' person finds his Darwinian worldview to be completely insane. But at least this 'low ability' person has not 'lost his mind' and is thus still a real person, and is not a neuronal illusion who is under the delusion that he is mentally superior to real people.
"It is not enough to say that design is a more likely scenario to explain a world full of well-designed things. Once you allow the intellect to consider that an elaborate organism with trillions of microscopic interactive components can be an accident...you have essentially lost your mind." Jay Homnick - senior editor of The American Spectator - 2005
Verse:
Romans 1:22-23 Claiming to be wise, they became fools, and exchanged the glory of the immortal God for images resembling mortal man and birds and animals and creeping things.
bornagain77
April 2, 2018 at 04:18 AM PDT
Corneel at TSZ: Before going on with Entropy, I would like to give a couple of quick answers to your two posts.
The point where some non-universal descent could be found is in the last universal common ancestor. Sure thing, pal.
The idea that LUCA could have been not one organism, but a population of organisms, is not mine, but has been debated in the literature. From the Wikipedia “Last Universal Common Ancestor” page:
In 1998, Carl Woese proposed (1) that no individual organism can be considered a LUCA, and (2) that the genetic heritage of all modern organisms derived through horizontal gene transfer among an ancient community of organisms.[31] While the results described by the later papers Theobald (2010) and Saey (2010) demonstrate the existence of a single LUCA, the argument in Woese (1998) can still be applied to Ur-organisms. At the beginnings of life, ancestry was not as linear as it is today because the genetic code took time to evolve.
Theobald disagrees:
In 2010, based on "the vast array of molecular sequences now available from all domains of life,"[29] a formal test of universal common ancestry was published.[1] The formal test favored the existence of a universal common ancestor over a wide class of alternative hypotheses that included horizontal gene transfer. While the formal test overwhelmingly favored the existence of a single LUCA, this does not imply that the LUCA was ever alone. Instead, it was one of several early microbes.[1] However, given that many other nucleotides are possible besides those that are actually used in DNA and RNA today, it is almost certain that all organisms do have a single common ancestor. This is because it is extremely unlikely that organisms which descended from separate incidents where organic molecules initially came together to form cell-like structures would be able to complete a horizontal gene transfer without garbling each other's genes, converting them into noncoding segments. Further, many more amino acids are chemically possible than the twenty found in modern protein molecules. These lines of chemical evidence, taken into account for the formal statistical test by Theobald (2010), point to a single cell having been the LUCA in that, although other early microbes probably existed, only the LUCA's descendents survived beyond the Paleoarchean Era.[30] With a common framework in the AT/GC rule and the standard twenty amino acids, horizontal gene transfer would have been feasible and could have been very common later on among the progeny of that single cell.
I have no special preference about LUCA being one organism or a pool of organisms. I was just mentioning that both theories exist in the scientific literature. Then you say:
No, that is patently false. You are having your cake and eating it too. The “information jumps” that gpuccio introduces in his OP critically rely on the different genes he is comparing being homologs, i.e. on common descent being true. If he is unwilling to defend this, he must also drop that argument.
This is really funny! It is absolutely true that my argument here relies on common descent. I have clarified that I believe in common descent, and that I assume it in my biological reasonings. But there is more. I have defended Common Descent in detail and with the best arguments that I can think of: see my comments here, #525, 526, 529, 534, 538 and 546. What more can I do than that? If others, like Bill Cole, still have doubts, I can only respect their opinion, which is what I do with everyone after having clarified what I think. I have also declared that I keep an open mind, which is IMO a very good attitude in all cases. But I have always said that I believe in CD, universal or not, and I have always explicitly defended CD here, in detail, and always by the same argument (the pattern of Ks). So, how can you say that I am "unwilling to defend this"? I certainly believe that "explaining new genetic information" is the really important thing. And I use CD to demonstrate that only design can explain it. But it is also true that, if CD were not true (just a mental hypothesis, beware!), then the only explanation for the homologies in proteins would be common design. That's not what I believe, but it is a true and reasonable consideration.
gpuccio
April 2, 2018, 04:10 AM PDT
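The "pattern of Ks" argument mentioned above rests on comparing synonymous (Ks) and nonsynonymous (Ka) substitutions between homologous coding sequences. As a minimal sketch of the counting step only (the sequences and the tiny codon table below are hypothetical, chosen just to illustrate the classification; a real Ka/Ks estimate uses the full genetic code, proper site counting, and multiple-hit corrections):

```python
# Toy classification of codon differences as synonymous vs nonsynonymous.
# CODON_AA covers only the codons used in the example; a real analysis
# would use the full standard genetic code table.
CODON_AA = {
    "TTA": "L", "TTG": "L",   # synonymous pair (both leucine)
    "GCT": "A", "GCC": "A",   # synonymous pair (both alanine)
    "AAA": "K", "AAG": "K",
    "GAT": "D", "GAA": "E",   # nonsynonymous pair (Asp vs Glu)
}

def classify_differences(seq1, seq2):
    """Count synonymous and nonsynonymous codon differences
    between two pre-aligned, gap-free coding sequences."""
    syn = nonsyn = 0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 == c2:
            continue
        if CODON_AA[c1] == CODON_AA[c2]:
            syn += 1       # same amino acid encoded: synonymous change
        else:
            nonsyn += 1    # amino acid changed: nonsynonymous change
    return syn, nonsyn

# Two aligned hypothetical coding sequences (4 codons each).
s1 = "TTAGCTAAAGAT"
s2 = "TTGGCCAAAGAA"
print(classify_differences(s1, s2))  # (2, 1)
```

An excess of synonymous over nonsynonymous changes in such counts is the signature usually read as purifying selection on conserved homologs.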
Entropy (and others) at TSZ: Before going to your arguments, I would like to clarify an important aspect. My OP here, and almost all the following discussion, is about specific biological issues, with the purpose of showing that: The ubiquitin system is a biological system that exhibits huge amounts of complex functional information, semiosis and irreducible complexity. You, like all your colleagues, have not touched that point in any way, as far as I can understand. You have rather repeatedly discussed the more general issue: Are functional complexity, irreducible complexity and semiosis markers of design? Which is a completely different issue, one that I took for granted in the present OP, having discussed it many times and in great detail previously, even with you TSZ guys. OK, I will discuss it again here. But before that, I would like to ask you an explicit question about what you did not touch: Do you agree that my arguments here about the biology of the ubiquitin system do show that it is a system that exhibits huge amounts of complex functional information, semiosis and irreducible complexity? A simple "yes" will do. :) Or a simple "no", but possibly accompanied by some real arguments to explain why it is no, for you. OK, I will not wait for your answer (but I would definitely appreciate one). So, I will go on to comment on your points. In next post.
gpuccio
April 2, 2018, 03:37 AM PDT
Entropy (and others) at TSZ: OK, let's come to your arguments. Because you have expressed some arguments, and in an understandable pattern, and I must commend you for that, because none of your "colleagues" seems to have even tried to do that. You say it yourself:
I don’t know about others here, but I’m interested. Not in the way you’d wish though, since I understand the problems with those arguments. I’m interested more in the sense of wondering if you’d understand why I’m not impressed. I understand why you’re impressed though:
"I'm interested". That's very, very good. Interest is the foundation for a good discussion. "Not in the way you’d wish though, since I understand the problems with those arguments." But you misunderstand me here. If you were interested just because you agreed with me, I would appreciate it, but that would add nothing to the discussion. What I really need, and wish, and hope for, is someone who is interested but "understands the problems" with my arguments. Or at least believes so. And who has the goodwill to make those "problems" explicit. That can certainly lead to a good discussion, and that's all that I wish. "I’m interested more in the sense of wondering if you’d understand why I’m not impressed." I think I do, but it will certainly help to look at your explanations, rather than simply imagining them! :) "I understand why you’re impressed though:" That's good. Understanding is one thing, agreeing is another. I don't want to convince anyone, but understanding is certainly a precious achievement. More in next post.
gpuccio
April 2, 2018, 03:24 AM PDT
Entropy (and others) at TSZ: OK, I have nothing to say about GlenDavidson's "argument", because it is not an argument at all.
They really do learn very bad habits of thinking from their pseudoscience. In practice, ID is little more than a means of inducing the Dunning-Kruger effect for the sake of belief in ID.
For those who don't know it, here is a definition of the Dunning-Kruger effect, from Wikipedia: "In the field of psychology, the Dunning–Kruger effect is a cognitive bias wherein people of low ability suffer from illusory superiority, mistakenly assessing their cognitive ability as greater than it is. The cognitive bias of illusory superiority derives from the metacognitive inability of low-ability persons to recognize their own ineptitude; without the self-awareness of metacognition, low-ability people cannot objectively evaluate their actual competence or incompetence." It's really strange that GlenDavidson says that, because I have made my biological arguments very explicit, and everyone can easily check whether what I have said is true or not. Moreover, serious commenters from the other side, like Bob O'H, when invited to comment about them, have declined because molecular biology was not their specialty. Moreover, Arthur Hunt, who is certainly a competent molecular biologist, has commented at my spliceosome thread, but apparently he did not explicitly deny anything that I was saying. He just very reasonably pointed at some literature, which I have commented upon, and then promised to post about some aspects of his work that apparently denied the logic used in my thread. But he has never done that. My point is: in the presence of explicit arguments that can be easily checked, serious commenters will come and debate explicitly, or just avoid it if they feel that they don't understand the arguments well. "They really do learn very bad habits of thinking from their pseudoscience" does not sound like that. However, I will quote another statement by GlenDavidson:
We’re not interested in such delusions, true. But, more importantly, they’re not arguments, just IDist wishful thinking (OK, #2 is true, but banal).
#2 is this statement of mine: 2) Irreducible complexity multiplies the functional complexity of the individual components of the system. Well, admissions from the other side are so rare that we have to treasure them! So, for the future discussion I will use this admission by GlenDavidson that my #2 is true. We will see more in detail whether it is banal or not. More in next post (later).
gpuccio
April 2, 2018, 01:25 AM PDT
Entropy (and others) at TSZ: OK, I had decided not to look at the TSZ thread again, but I suppose that the last interventions there deserve some brief answer. I am grateful to ET for quoting a few statements from there so that I could have some idea of what was happening. I will start from this statement (by entropy):
It’s interesting that gpuccio was first begging for comments “from the other side,” then he won’t read the comments. Who can understand that kind of mentality?
But it's rather simple, after all. First, I was "begging" (more "hoping", I would say) for comments from the other side, but my hope was that someone of those who post here could intervene. I was not really hoping for another parallel debate with TSZ, because I have done that a couple of times in the past, and it was very tiresome for me, and not really worthwhile in the end. However, when I became aware that a thread had been opened at TSZ about this thread of mine, I accepted to give it a look and to try to give some answers here. After a brief time, as documented in the discussion above, I decided that it was even less worthwhile than in the past, and so I stopped looking at your thread. ET has continued to post some updates here, but I have not really commented on them. The reason why I decided that it was "even less worthwhile than in the past" is simple, too: in the past there were at least a few commenters who tried to make some real arguments about what I said (and, of course, the usual crowd of stupid nonsense). Now it seems that only the usual crowd has remained. In particular, nobody has really tried to comment on the content of this thread (which was, after all, the subject of your thread). Instead, the "best" among you have resorted to general denial of ID as a whole, which was not what I was discussing here. Now, it seems that my summary at #568 of the general principles of ID has caused a couple of you to give some readable answer, so I am trying to give a brief counter-answer. Of course, this is again about ID in general, and not about my arguments in this OP, which nobody has even touched in your thread, as far as I can see. Of course, I am checking your original posts to do that, and not relying on what ET quoted here, because that would not be appropriate. More in next post.
gpuccio
April 2, 2018, 01:10 AM PDT
The only thing that will ever convince these people of the legitimacy of ID is a meeting with the Intelligent Designer(s). The TSZ ilk are useless, anti-science and angry.
ET
April 1, 2018, 06:21 PM PDT
responding to gpuccio:
1. You really think that talking about 500 bits is impressive and beyond nature.
Absolutely. If you had any evidence to the contrary you would post it. But you choose to post the following trope:
I can explain to you why I find that unconvincing. If it was impossible for nature to put that amount of information together, then it would be impossible for designers to put that amount of information together. How so? Well, in order for designers to put that amount of information together, energy flow is necessary. Putting information together consists on “transforming” energy flow into patterns. We don’t produce available energy. We’re completely dependent on nature for that. So, claiming that a designer is necessary to produce “information,” seems a lot like putting the cart before the horse.
Utter gibberish. The whole point is that designers can do things with nature, using nature, that nature itself, i.e. operating freely, could not do. Intelligent designers are required to produce specified complexity. Otherwise archaeology, forensic science and SETI are all in a heap of trouble.
2. You really think that irreducible complexity adds to the bits “problem.”
Absolutely. See above
3. You might have never thought of the problem of anthropomorphism, or you think that’s an obvious inference, rather than an inclination from the fact that you’re human. That might be your true handicap. I have to warn you though, that convincing someone like me that your anthropomorphisms are anything but, might be a titanic task.
No one cares about the willfully ignorant. We can only make our case to those who can actually think for themselves and realize there is only one reality to our existence. And your position's sheer dumb luck really isn't an explanation. Yes, sheer dumb luck: it is all about chance events with you. Cosmic collisions, accidental genetic changes: all probability arguments. And nothing close to science. Just think of how many just-so cosmic collisions had to have occurred to give the earth just the right rotational speed to sustain life and give us the axis-stabilizing moon. Sheer dumb luck. ETA this gem:
“Semiosis” is your inability to understand the concept, the problem, and the subjectivity, of anthropomorphism.
The inability to understand the concept is all yours, entropy. You cannot grasp the fact that the genetic code is real and that other real codes are used within cells. You don't like it because you know it is absurd to think that nature can produce real codes.
ET
April 1, 2018, 06:14 PM PDT
1. Nobody has produced a way to demonstrate non-“materialistic” processes. So, of course, anything in science will be about “materialistic” processes.
Oh my. I told you this one is ignorant of what is being debated. Materialistic means blind and mindless, i.e. non-telic. And science has told us plenty about telic processes.
2. The philosophically appropriate approach to understand nature is to assume non-telic processes.
Yeah, as much as you can. Nature produces rocks and stones but not Stonehenges.
3. “Stochastic” is not the same as “non-telic.”
Of course it is
Natural phenomena can have directions without being “telic.” Gravitation is a very obvious example.
That doesn't have anything to do with stochastic not being the same as non-telic.
ET
April 1, 2018, 06:04 PM PDT
It doesn't matter cuz you don't know anything and clearly they know betta. :roll: They are wondering why I don't relay their alleged refutations, but the only one I can see that attempted one is the first comment, about the ubiquitous nature of ubiquitin. Everything else has just been whining and hand-waving, plus personal attacks, deflections, misconceptions, etc.
ET
April 1, 2018, 04:40 PM PDT
ET: I quote from my comment #453:
In my view, instead, my argument is that there are three different markers that are linked to a design origin and therefore empirically allow a design inference (that is the basic concept in ID, and I have discussed it many times in all its aspects). Those three features are: a) Functional complexity (the one I usually discuss, and which I have quantitatively assessed many times in detail); b) Semiosis (which has been abundantly discussed by UB); c) Irreducible complexity. In my OP I have discussed in detail a specific biological system where all those three aspects are present. Therefore, a system for which a design inference is by far the only reasonable explanation. This is my argument. It is not a god-of-the-gaps argument (whatever you mean by that). It is an empirical and scientific argument.
There is not much to be added. 1) Functional complexity beyond some appropriate threshold (500 bits will do in all contexts) clearly allows a design inference. 2) Irreducible complexity multiplies the functional complexity of the individual components of the system. 3) Semiosis is a feature that by its very form is never found in non-design systems, and clearly points to design. But I suppose that our friends at TSZ are not interested in those arguments.
gpuccio
April 1, 2018, 04:19 PM PDT
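The 500-bit threshold invoked in the comment above is a statement about functional information, commonly defined as -log2 of the fraction of sequences in the search space that perform the function. A minimal sketch of that arithmetic, using a purely hypothetical functional fraction (the 1-in-10^45 figure below is illustrative only, not a measured value):

```python
import math

def functional_bits(functional_fraction):
    """Functional information in bits: -log2 of the fraction of
    sequences in the search space that perform the function."""
    return -math.log2(functional_fraction)

# Hypothetical functional fraction of 1 in 10**45 gives ~149.5 bits,
# well below a 500-bit threshold. A threshold of exactly 500 bits
# corresponds to a functional fraction of 2**-500 (about 3 per 10**151).
print(round(functional_bits(1e-45), 1))   # 149.5
print(functional_bits(2 ** -500))         # 500.0
```

The design-inference debate in the thread is then about whether any natural process can realistically traverse a space that sparse, not about the arithmetic itself.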
By the way, you have all of the power to refute our design inference. All you have to do is step up, do the work and demonstrate that materialistic, stochastic (non-telic) processes can produce what we say is intelligently designed. Oh, but you say that you don't have to do anything but poo-poo the design inference with your ignorance. So much for science...
ET
April 1, 2018, 04:10 PM PDT
Entropy was too afraid to join this site. Now I understand why.
ET
April 1, 2018, 04:08 PM PDT
Do we infer design given any complexity? No. We have made that abundantly clear. Only people who are willfully ignorant make that claim.
If you want to be clear, then be clear, talk about evolution as understood by [most] scientists, or about evolution by natural means, for example.
I am. Most, if not all, evolutionary biologists say that natural selection and evolution proceed via blind and mindless processes. Mayr, who was one of the architects of the modern synthesis, supports that view in "What Evolution Is" and his other books. That is the point of "evolutionism": evolution by means of blind and mindless processes is an untestable claim. The best evidence for macroevolution doesn't even call upon any mechanism, most likely because of that. So it doesn't have anything to do with religious inclinations that I don't have. It has everything to do with your ignorance of what your position claims. I readily accept science. You don't seem to know what science is. Hopefully you have a towel handy to wipe your spit off of your monitor.
ET
April 1, 2018, 04:06 PM PDT
One last bit of ignorance from entropy. First it said:
You, however, think that just pointing to complexity will make your absurd imaginary friend into a reality.
To which I responded - No one claims the design inference from mere complexity. So what does entropy say?
I didn’t write “mere complexity.” That was you.
Right, uh-huh. You used the word "complexity" all by itself. And that, my ignorant opponent, is mere complexity. Entropy seems to think it is my fault that I know more about evolutionism than it does. Because I have read more evolutionary literature by more authors, that is somehow a knock against me. Wow. And yes, evolutionism is correct, as it is based on faith and not science. If you had the science you would just post it.
ET
April 1, 2018, 02:35 PM PDT
