Uncommon Descent Serving The Intelligent Design Community

Controlling the waves of dynamic, far from equilibrium states: the NF-kB system of transcription regulation.


I have recently commented on another thread about a paper that (very correctly) describes cells as dynamic, far from equilibrium systems, rather than as “traditional” machines.

That is true. But, of course, the cell implements the same functions as complex machines do, and much more. My simple point is that, to do that, you need much greater functional complexity than you need to realize a conventional machine.

IOWs, dynamic, far from equilibrium systems that can be as successful as a conventional machine, or more, must certainly be incredibly complex and amazing systems, systems that defy everything else that we already know and that we can conceive. They must not only implement their functional purposes, but they must do that by “harnessing” the constantly changing waves of change, of random noise, of improbability. I have commented on those ideas in the mentioned thread, at posts #5 and #8, and I have quoted at posts #11 and #12 a couple of interesting and pertinent papers, introducing the important concept of robustness: the ability to achieve reliable functional results in spite of random noise and disturbing variation.

In this OP, I would like to present in some detail a very interesting system that shows very well what we can understand, at present, of this kind of amazing system.

The system I will discuss here is an old friend: it is the NF-kB system of transcription factors (nuclear factor kappa-light-chain-enhancer of activated B cells). We are speaking, therefore, of transcription regulation, a very complex topic that I have already discussed in some depth here:

I will briefly remind readers here that transcription regulation is the very complex process that allows cells to be completely different while using the same genomic information: IOWs, each type of cell “reads” the genes in the common genome differently, and that is what allows the different types of cell differentiation and the different responses within the same cell type.

Transcription regulation relies on many different levels of control, which are summarized in the OP quoted above, but a key role is certainly played by Transcription Factors (TFs), proteins that bind DNA and act as activators or inhibitors of transcription at specific sites.

TFs are a fascinating class of proteins. There are a lot of them (1600 – 2000 in humans, almost 10% of all proteins), and they are usually medium sized proteins, about 500 AA long, containing at least one highly conserved domain, the DNA binding domain (DBD), and other, often less understood, functional components.

I quote again here a recent review about human TFs:

The Human Transcription Factors

The NF-kB system is a system of TFs. I have discussed it in some detail in the discussion following the Ubiquitin thread, but I will describe it in a more systematic way here.

In general, I will refer a lot to this very recent paper about it:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The NF-kB system relies essentially on 5 different TFs (see Fig. 1 A in the paper):

  1. RelA  (551 AAs)
  2. RelB  (579 AAs)
  3. c-Rel  (619 AAs)
  4. p105/p50 (968 AAs)
  5. p100/p52  (900 AAs)

Those 5 TFs work by forming dimers, homodimers or heterodimers, for a total of 15 possible combinations (5 homodimers plus 10 heterodimers), all of which have been found to work in the cell, even if some of them are much more common than others.

Then there are at least 4 inhibitor proteins, collectively called IkBs.

The mechanism is apparently simple enough. The dimers are inhibited by IkBs and therefore they remain in the cytoplasm in inactive form.

When an appropriate signal arrives at the cell and is received by a membrane receptor, the inhibitor (the IkB molecule) is phosphorylated by a protein complex called IKK, and then ubiquitinated and detached from the complex. The free dimer can then migrate to the nucleus and localize there, where it can act as a TF, binding DNA.

This is the canonical activation pathway, summarized in Fig. 1. There is also a non canonical activation pathway, that we will not discuss for the moment.


Mechanism of NF-κB action. In this figure, the NF-κB heterodimer consisting of Rel and p50 proteins is used as an example. While in an inactivated state, NF-κB is located in the cytosol complexed with the inhibitory protein IκBα. Through the intermediacy of integral membrane receptors, a variety of extracellular signals can activate the enzyme IκB kinase (IKK). IKK, in turn, phosphorylates the IκBα protein, which results in ubiquitination, dissociation of IκBα from NF-κB, and eventual degradation of IκBα by the proteasome. The activated NF-κB is then translocated into the nucleus where it binds to specific sequences of DNA called response elements (RE). The DNA/NF-κB complex then recruits other proteins such as coactivators and RNA polymerase, which transcribe downstream DNA into mRNA. In turn, mRNA is translated into protein, resulting in a change of cell function.

Attribution: Boghog2 at English Wikipedia [Public domain]

Now, the purpose of this OP is to show, in greater detail, how this mechanism, apparently moderately simple, is indeed extremely complex and dynamic. Let’s see.

The stimuli.

First of all, we must understand what the stimuli are that, arriving at the cell membrane, are capable of activating the NF-kB system. IOWs, what are the signals that work as inputs?

The main concept is: the NF-kB system is a central pathway activated by many stimuli:

  1. Inflammation
  2. Stress
  3. Free radicals
  4. Infections
  5. Radiation
  6. Immune stimulation

IOWs, a wide variety of aggressive stimuli can activate the system.

The extracellular signal usually arrives at the cell through specific cytokines, for example TNF or IL1, or through pathogen-associated molecules, like bacterial lipopolysaccharides (LPS). Of course there are different and specific membrane receptors, in particular IL-1R (for IL1), TNF-R (for TNF), and many TLRs (Toll-like receptors, for pathogen-associated structures). A special kind of activation is implemented, in B and T lymphocytes, by the immune activation of the specific receptors for antigen epitopes (the B cell receptor, BCR, and the T cell receptor, TCR).

The process through which the activated receptor can activate the NF-kB dimer is rather complex: it involves, in the canonical pathway, a macromolecular complex called the IKK (IkB kinase) complex, comprising two catalytic kinase subunits (IKKa and IKKb) and a regulatory protein (IKKg/NEMO), and involving the ubiquitin system in multiple and complex ways. The non canonical pathway is a variation of that. Finally, a specific protein complex (the CBM complex or CBM signalosome) mediates the transmission from the immune BCR or TCR to the canonical pathway. See Fig. 2:

From: NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy – Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Increased-activity-of-the-CARMA1-BCL10-MALT1-signalosome-drives-constitutive-NF-kB_fig2_324089636 [accessed 10 Jul, 2019]
Figure 3 – NF-κB Activation in Lymphoid Malignancies: Genetics, Signaling, and Targeted Therapy
available via license: Creative Commons Attribution 4.0 International

I will not go into further details about this part, but those interested can have a look at this very good paper:

TLR-4, IL-1R and TNF-R signaling to NF-kB: variations on a common theme

In particular, see Figs. 1, 2 and 3.

In the end, as a result of the activation process, the IkB inhibitor is degraded by the ubiquitin system, and the NF-kB dimer is free to migrate to the nucleus.

An important concept is that this is a “rapid-acting” response system: the dimers are already present, in inactive form, in the cytoplasm, and do not need to be synthesized de novo, so the system is ready to respond to the activating signal.

The response.

But what is the cellular response?

Again, there are multiple and complex possible responses.

Essentially, this system is a major regulator of innate and adaptive immune responses. As such, it has a central role in the regulation of inflammation, in immunity, in autoimmune processes, and in cancer.

Moreover, the NF-kB system is rather ubiquitous, and is present and active in many different cell types. And, as we have seen, it can be activated by different stimuli, in different ways.

So, the important point is that the response to activation must be (at least):

  1. Lineage-specific
  2. Stimulus-specific

IOWs, different cells must be able to respond differently, and each cell type must respond differently to different stimuli. That gives a wide range of possible gene expression patterns at the transcription level.

The following paper is a good review of the topic:

Selectivity of the NF-κB Response

For example, IL2 is induced by NF-kB activation in T cells, but not in B cells (lineage-specific response). Moreover, specific cell types can undergo specific, and often different, cell destinies after NF-kB activation: for example, NF-kB is strongly involved in the control and regulation of T and B cell development.

From:

30 years of NF-κB: a blossoming of relevance to human pathobiology

“B and T lymphocytes induce NF-κB in adaptive immune responses through the CARD11:Bcl10:MALT1 (CBM) complex (Hayden and Ghosh, 2008). Newly expressed genes promote lymphocyte proliferation and specific immune functions including antibody production by B cells and the generation of cytokines and other anti-pathogen responses by T cells.”

And, in the same cell type, certain promoters regulated by NF-kB require additional signaling (for example, in human dendritic cells the promoters for Il6, Il12b, and MCP-1 require additional p38-dependent histone phosphorylation to be activated), while others can be activated directly (stimulus-specific response).

So, to sum up:

  1. A variety of stimuli can activate the system in different ways
  2. The system itself has its complexity (different dimers)
  3. The response can be widely different, according to the cell type where it happens, and to the type of stimuli that have activated the system, and probably according to other complex variables.
  4. The possible responses include a wide range of regulations of inflammation, of the immune system, of cell specifications or modifications, and so on.

How does it work?

So, what do we know about the working of such a system?

I will ignore, for the moment, the many complexities of the activation pathways, both canonical and non canonical, the role of cytokines, receptors and IKK complexes, the many facets of NEMO, and the involvement of the ubiquitin system.

For simplicity, we will start with the activated system: the IkB inhibitor has been released from the inactive complex in the cytoplasm, and some form of NF-kB dimer is ready to migrate to the nucleus.

Let’s remember that the purpose of this OP is to show that the system works as a dynamic, far from equilibrium system, rather than as a “traditional” machine. And that such a way to work is an even more amazing example of design and functional complexity.

To do that, I will rely mainly on the recent paper quoted at the beginning:

Considering Abundance, Affinity, and Binding Site Availability in the NF-kB Target Selection Puzzle

The paper is essentially about the NF-kB Target Selection Puzzle. IOWs, it tries to analyze what we know about the specificity of the response. How are specific patterns of transcription achieved after the activation of the system? What mechanisms allow the selection of the right genes to be transcribed (the targets) to implement the specific patterns according to cell type, context, and type of stimuli?

A “traditional” view of the system as a machine would try to establish rather fixed connections. For example, some type of dimer is connected to specific stimuli, and evokes specific gene patterns. Or some other components modulate the effect of NF-kB, generating diversification and specificity of the response.

Well, those ideas are not completely wrong. In a sense, the system does work also that way. Dimer specificity has a role. Other components have a role. In a sense, but only in a sense, the system works as though it were a traditional machine, and uses some of the mechanisms that we find in the concept of a traditional biological machine.

But that is only a tiny part of the real thing.

The real thing is that the system really works as a dynamic, far from equilibrium system, harnessing huge random/stochastic components to achieve robustness and complexity and flexibility of behavior in spite of all those non finalistic parts.

Let’s see how that happens, at least for the limited understanding we have of it. It is important to consider that this is a system that has been studied a lot, for decades, because of its central role in so many physiological and pathological contexts, and so we know many things. But still, our understanding is very limited, as you will see.

So, let’s go back to the paper. I will try to summarize as simply as possible the main concepts. Anyone who is really interested can refer to the paper itself.

Essentially, the paper analyzes three important and different aspects that contribute to the selection of targets at the genomic level by our TFs (IOWs, our NF-kB dimers, ready to migrate to the nucleus). As the title itself summarizes, they are:

  1. Abundance
  2. Affinity
  3. Binding site availability

1. Abundance

Abundance here refers to two different variables: abundance of NF-kB binding sites in the genome and abundance of nucleus-localized NF-kB dimers. Let’s consider them separately.

1a) Abundance of NF-kB Binding Sites in the genome:

It is well known that TFs bind specific sites in the genome. For NF-kB TFs, the following consensus kB site pattern has been found:

 5′-GGGRNWYYCC-3′

where R, W, Y, and N respectively denote purine (A/G), adenine or thymine (A/T), pyrimidine (C/T), and any nucleotide.

That simply means that any sequence corresponding to that pattern in the genome can, in principle, bind NF-kB dimers.

So the problem is: how many such sequences do exist in the human genome?

Well, a study based on RelA has estimated about 10^4 consensus sequences in the whole genome, but as NF-kB dimers seem to bind even incomplete consensus sites, the total number of potential binding sites could be nearer to 10^6.
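
Just to make the counting exercise concrete, here is a minimal sketch (not the method of the studies cited, and run on a toy sequence rather than the real genome) of how the IUPAC consensus can be turned into a regular expression and matched:

    import re

    # IUPAC ambiguity codes used in the kB consensus, as regex character classes
    iupac = {"R": "[AG]", "W": "[AT]", "Y": "[CT]", "N": "[ACGT]"}

    consensus = "GGGRNWYYCC"
    pattern = "".join(iupac.get(base, base) for base in consensus)  # GGG[AG][ACGT][AT][CT][CT]CC

    def count_kb_sites(sequence: str) -> int:
        """Count (possibly overlapping) consensus matches on one strand."""
        return sum(1 for _ in re.finditer(f"(?={pattern})", sequence.upper()))

    # Toy usage; a real estimate would scan the full genome, both strands,
    # and also allow for the incomplete sites mentioned above.
    print(count_kb_sites("TTGGGAATTTCCAA"))  # -> 1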

1b) Abundance of Nucleus-Localized NF-kB Dimers:

An estimate of the abundance of dimers in the nucleus after activation of the system is that about 1.5 × 10^5 molecules can be found, but again that is derived from studies about RelA only. Moreover, the number of molecules and type of dimer can probably vary much according to cell type.

So the crucial variable, the ratio between binding sites and available dimers, which could help us understand the degree of site saturation in the nucleus, remains rather undetermined, and it seems very likely that it varies a lot in different circumstances.
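
A rough back-of-envelope comparison of the two abundances quoted above (RelA-derived figures, so purely indicative) shows why the saturation question stays open:

    # Orders of magnitude quoted above (illustrative, RelA-derived estimates)
    nuclear_dimers = 1.5e5
    strict_consensus_sites = 1e4   # full consensus matches
    potential_sites = 1e6          # including incomplete/degenerate sites

    print(nuclear_dimers / strict_consensus_sites)  # ~15 dimers per strict site
    print(nuclear_dimers / potential_sites)         # ~0.15 dimers per potential site

Depending on which figure is closer to reality, the nucleus goes from an apparent excess of dimers to a clear excess of sites, which fits the lack of saturation discussed below.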

But there is another very interesting aspect about the concentration of dimers in the nucleus. According to some studies, NF-kB seems to generate oscillations of its nuclear content in some cell types, and those oscillations can be a way to generate specific transcription patterns:

NF-kB oscillations translate into functionally related patterns of gene expression

For example, this very recent paper:

NF-κB Signaling in Macrophages: Dynamics, Crosstalk, and Signal Integration

shows in Fig. 3 the occupancy curve of binding sites at the nuclear level after NF-kB activation in two different cell types.

In fibroblasts, the curve is a periodic oscillation, with a frequency that varies according to various factors, and translates into different transcription scenarios accordingly:

Gene expression dynamics scale with the period (g1) and amplitude (g2) of these oscillations, which are influenced by variables such as signal strength, duration, and receptor identity.


In macrophages, instead, the curve is rather:

a single, strong nuclear translocation event which persists for as long as the stimulus remains and tends to remain above baseline for an extended period of time.

In this case, the type of transcription will probably be regulated by the area under the curve, rather than by the period and amplitude of the oscillations, as happens in fibroblasts.
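
A purely illustrative toy model (made-up parameters and time units, not taken from the paper) of the two qualitative behaviors might look like this:

    import math

    def fibroblast_nuclear_nfkb(t, period=100.0, amplitude=1.0, decay=0.005):
        """Oscillatory scenario: damped periodic nuclear localization."""
        return amplitude * math.exp(-decay * t) * max(0.0, math.sin(2 * math.pi * t / period))

    def macrophage_nuclear_nfkb(t, plateau=1.0, rise=10.0):
        """Sustained scenario: a single translocation that stays above baseline."""
        return plateau * (1.0 - math.exp(-t / rise))

    # In one case gene expression is said to track period and amplitude,
    # in the other the area under the curve; here we only compare the areas.
    times = range(0, 300, 10)
    auc_fibroblast = sum(fibroblast_nuclear_nfkb(t) for t in times) * 10
    auc_macrophage = sum(macrophage_nuclear_nfkb(t) for t in times) * 10
    print(auc_fibroblast, auc_macrophage)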

Interestingly, while in previous studies it seemed that the concentration of nuclear dimers could be sufficient to saturate most or all binding sites, that has been found not to be the case in more recent studies. Again from the paper about abundance:

in fact, this lack of saturation of the system is necessary to generate stimulus- and cell-type specific gene expression profiles

Moreover, the binding itself seems to be rather short-lived:

Interestingly, it is now thought that most functional NF-kB interactions with chromatin—interactions that lead to a change in transcription—are fleeting… a subsequent study using FRAP in live cells expressing RelA-GFP showed that most RelA-DNA interactions are actually quite dynamic, with half-lives of a few seconds… Indeed, a recent study used single-molecule tracking of individual Halo-tagged RelA molecules in live cells to show that the majority (∼96%) of RelA undergoes short-lived interactions lasting on average ∼0.5 s, while just ∼4% of RelA molecules form more stable complexes with a lifetime of ∼4 s.
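
The dwell-time figures quoted in this passage can be turned into a small numerical sketch (assuming, as a simplification, exponentially distributed dwell times):

    import random

    def rela_dwell_time():
        """One RelA-DNA dwell time from the two-population mixture quoted above."""
        if random.random() < 0.96:
            return random.expovariate(1 / 0.5)  # ~96%: short-lived, mean ~0.5 s
        return random.expovariate(1 / 4.0)      # ~4%: more stable, mean ~4 s

    samples = [rela_dwell_time() for _ in range(100_000)]
    print(sum(samples) / len(samples))  # expected mean ~ 0.96*0.5 + 0.04*4 = 0.64 s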

2. Affinity

Affinity of dimers for DNA sequences is not a clear cut matter. From the paper:

Biochemical DNA binding studies of a wide variety of 9–12 base-pair sequences have revealed that different NF-kB dimers bind far more sequences than previously thought, with different dimer species exhibiting specific but overlapping affinities for consensus and non-consensus kB site sequences.

IOWs, we have different dimers (15 different types) binding different DNA sequences with varying affinity (starting from the classical consensus sequence, but including also incomplete sequences). Remember that those sequences are rather short (the consensus sequence is 10 nucleotides long), and that there are thousands of such sequences in the genome.
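
One standard way to make the idea of “overlapping affinities” concrete (a generic bioinformatics sketch with invented weights, not the paper’s method) is a position weight matrix, which scores consensus and near-consensus sites on a continuous scale instead of a yes/no match:

    # Toy position weight matrix for the 10-bp kB-like motif; weights are invented.
    # Higher total score = higher (hypothetical) binding affinity.
    weights = [
        {"G": 2.0, "A": -1.0, "C": -1.0, "T": -1.0},  # G
        {"G": 2.0, "A": -1.0, "C": -1.0, "T": -1.0},  # G
        {"G": 2.0, "A": -1.0, "C": -1.0, "T": -1.0},  # G
        {"G": 1.0, "A": 1.0, "C": -1.0, "T": -1.0},   # R (A/G)
        {"G": 0.0, "A": 0.0, "C": 0.0, "T": 0.0},     # N (any)
        {"A": 1.0, "T": 1.0, "G": -1.0, "C": -1.0},   # W (A/T)
        {"C": 1.0, "T": 1.0, "G": -1.0, "A": -1.0},   # Y (C/T)
        {"C": 1.0, "T": 1.0, "G": -1.0, "A": -1.0},   # Y (C/T)
        {"C": 2.0, "A": -1.0, "G": -1.0, "T": -1.0},  # C
        {"C": 2.0, "A": -1.0, "G": -1.0, "T": -1.0},  # C
    ]

    def site_score(site: str) -> float:
        """Sum per-position weights for a 10-bp candidate site."""
        return sum(w[base] for w, base in zip(weights, site.upper()))

    print(site_score("GGGAATTTCC"))  # consensus-like site: high score (14.0)
    print(site_score("GGGAATTTCA"))  # near-consensus site: slightly lower (11.0)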

Moreover, different bindings can affect transcription differently. Again, from the paper:

How might different consensus kB sites modulate the activity of the NF-kB dimers? Structure-function studies have shown that binding to different consensus kB sites can alter the conformation of the bound NF-kB dimers, thus dictating dimer function. When an NF-kB dimer interacts with a DNA sequence, side chains of the amino acids located in the DNA-binding domains of dimers contact the bases exposed in the groove of the DNA. For different consensus kB site sequences different bases are exposed in this groove, and NF-kB seems to alter its conformation to maximize interactions with the DNA and maintain high binding affinity. Changes in conformation may in turn impact NF-kB binding to co-regulators of transcription, whether these are activating or inhibitory, to specify the strength and dynamics of the transcriptional response. These findings again highlight how the huge array of kB binding site sequences must play a key role in modulating the transcription of target genes.

Quite a complex scenario, I would say!

But there is more:

Finally, as an additional layer of dimer and sequence-specific regulation, each of the subunits can be phosphorylated at multiple sites with, depending on the site, effects on nearly every step of NF-kB activation.

IOWs, the 15 dimers we have mentioned can be phosphorylated in many different ways, and that changes their binding affinities and their effects on transcription.
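
To give a feel for the combinatorial scale implied here (the number of phosphorylation sites per subunit is left unspecified in the text, so k below is a purely hypothetical placeholder):

    n_dimers = 15      # 5 homodimers + 10 heterodimers
    k = 3              # hypothetical number of modifiable phosphosites per subunit

    # If each site is independently phosphorylated or not, each dimer (two subunits)
    # could in principle occupy up to 2^(2k) modification states.
    states_per_dimer = 2 ** (2 * k)
    print(n_dimers * states_per_dimer)  # 15 * 64 = 960 dimer/modification combinations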

This section of the paper ends with a very interesting statement:

Overall, when considering the various ways in which NF-kB dimer abundances and their affinity for DNA can be modulated, it becomes clear that with these multiple cascading effects, small differences in consensus kB site sequences and small a priori differences in interaction affinities can ultimately have a large impact on the transcriptional response to NF-kB pathway activation.

Emphasis mine.

This is interesting, because in some way it seems to suggest that the whole system acts like a chaotic system, at least at some basic level. IOWs, small initial differences, maybe even random noise, can potentially have deep effects on the general working of the whole system.

Unless, of course, there is some higher, powerful level of control.

3. Availability of high affinity kB binding sequences

We have seen that there is a great abundance and variety of binding sequences for NF-kB dimers in the human genome. But, of course, those sequences are not necessarily available. Different cell types will have different scenarios of binding site availability.

Why?

Because, as we know, the genome and chromatin are a very dynamic system, which can exist in many different states, continuously changing in different cell types and, within the same cell type, in different conditions.

We know rather well the many levels of control that affect DNA and chromatin state. In brief, they are essentially:

  1. DNA methylation
  2. Histone modifications (methylation, acetylation, etc)
  3. Chromatin modifications
  4. Higher levels of organization, including nuclear localization and TADs (Topologically Associating Domains)

For example, from the paper:

The promoter regions of early response genes have abundant histone acetylation or trimethylation prior to stimulation [e.g., H3K27ac, (67) and H4K20me3, (66)], a chromatin state “poised” for immediate activation…  In contrast, promoters of late genes often have hypo-acetylated histones, requiring conformational changes to the chromatin to become accessible. They are therefore unable to recruit NF-kB for up to several hours after stimulation (68), due to the slow process of chromatin remodeling.

We must remember that each wave of NF-kB activation translates into the modified transcription of a lot of different genes at the genome level. It is therefore extremely important to consider which genes are available (IOWs, which promoters can be reached by the NF-kB signal) in each cell type and cell state.

The paper concludes:

Taken together, chromatin state and chromatin organization strongly influence the selection of DNA binding sites by NF-kB dimers and, most likely, the selection of the target genes that are regulated by these protein-DNA interaction events. Analyses that consider binding events in the context of three-dimensional nuclear organization and chromatin composition will be required to generate a more accurate view of the ways in which NF-kB-DNA binding affects gene transcription.

This is the main scenario. But there are other components, that I have not considered in detail for the sake of brevity, for example competition between NF-kB dimers and the complex role and intervention of other co-regulators of transcription.

Does the system work?

But does the system work?

Of course it does. It is a central regulator, as we have said, of many extremely important biological processes, above all immunity. This is the system that decides how immune cells, T and B lymphocytes, have to behave, in terms of cell destiny and cell state. It is of huge relevance in all inflammatory responses, and in our defense against infections. It works, it works very well.

And what happens if it does not work properly?

Of course, like all very complex systems, errors can happen. Those interested can have a look at this recent paper:

30 years of NF-κB: a blossoming of relevance to human pathobiology

First of all, many serious genetic diseases have been linked to mutations in genes involved in the system. You can find a list in Table 1 of the above paper. Among them, for example, some forms of SCID, Severe combined immunodeficiency, one of the most severe genetic diseases of the immune system.

But, of course, a dysfunction of the NF-kB system has a very important role also in autoimmune diseases and in cancer.

Conclusions.

So, let’s try to sum up what we have seen here in the light of the original statement about biological systems that “are not machines”.

The NF-kB system is a perfect example. Even if we still understand very little of how it works, it is rather obvious that it is not a traditional machine.

A traditional machine would work differently. The signal would be transmitted from the membrane to the nucleus in the simplest possible way, without ambiguities and diversions. The Transcription Factor, once activated, would bind, at the level of the genome, very specific sites, each of them corresponding to a definite cascade of specific genes. The result would be clear cut, almost mechanical. Like a watch.

But that’s not the way things happen. There are myriads of variations, of ambiguities, of stochastic components.

The signal arrives at the membrane in multiple ways, very different from one another: IL1, IL17, TNF, bacterial LPS, and immune activation of the B cell receptor (BCR) or the T cell receptor (TCR) are all possible signals.

The signal is translated to the NF-kB proteins in very different ways: canonical or non canonical activation, involving complex protein structures such as:

The CBM signalosome, intermediate between immune activation of BCR or TCR and canonical activation of the NF-kB. This complex is made of at least three proteins, CARD11, Bcl10 and MALT1.

The IKK complex in canonical activation: this is made of three proteins, IKK alpha, IKK beta, and NEMO. Its purpose is to phosphorylate IkB, the inhibitor of the dimers, so that it can be ubiquitinated and released from the dimer. Then the dimer can relocate to the nucleus.

The non canonical pathway: it involves the following phosphorylation cascade: NIK -> IKK alpha dimer -> RelB-p100 dimer -> RelB-p52 dimer (the final TF). It operates during the development of lymphoid organs and is responsible for the generation of B and T lymphocytes.

Different kinds of activated dimers relocate to the nucleus.

Different dimers, in varying abundance, interact with many different binding sites: complete or incomplete consensus sites, and probably others. The interaction is usually brief, and it can generate an oscillating pattern or a more stable pattern.

Completely different sets of genes are transcribed in different cell types and in different contexts, because of the interaction of NF-kB TFs with their promoters.

Many other factors and systems contribute to the final result.

The chromatin state of the cell at the moment of the NF-kB activation is essential to determine the accessibility of different binding sites, and therefore the final transcription pattern.

All these events and interactions are quick, unstable, far from equilibrium. A lot of possible random noise is involved.

In spite of that amazing complexity and the potentially stochastic nature of the system, reliable transcription regulation and results are obtained in most cases. Those results are essential to immune cell differentiation, the immune response, both innate and adaptive, inflammation, apoptosis, and many other crucial cellular processes.

So, let’s go back to our initial question.

Is this the working of a machine?

Of course it is! Because the results are purposeful, reasonably robust and reliable, and govern a lot of complex processes with remarkable elegance and efficiency.

But certainly, it is not a traditional machine. It is a lot more complex. It is a lot more beautiful and flexible.

It works with biological realities and not with transistors and switches. And biological realities are, by definition, far from equilibrium states, improbable forms of order that must continuously recreate themselves, fighting against the thermodynamic disorder and the intrinsic random noise that should apparently dominate any such scenario.

It is more similar to a set of extremely clever surfers who succeed in performing elegant and functional figures and motions in spite of the huge contrasting waves.

It is, from all points of view, amazing.

Now, Paley was absolutely right. No traditional machine, like a watch, could ever originate without design.

And if that is true of a watch, with its rather simple and fixed mechanisms, how much truer it must be for a system like NF-kB? Or, for that, like any cellular complex system?

Do you still have any doubts?

Added graphic: The evolutionary history, in terms of human conserved information, of the three proteins in the CBM signalosome.
On the y axis, homologies with the human protein as bits per amino acid (bpa). On the x axis, approximate time of appearance in millions of years.
The graphic shows the big information jump in vertebrates for all three proteins, especially CARD11.


Added graphic: two very different proteins and their functional history


Added graphic (for Bill Cole). Functional history of Prp8, collagen, p53.
Comments
"Mutations are random" means they are accidents, errors and mistakes. They were not planned and just happened to happen due to the nature of the process. Yes, x-rays may have caused the damage that produced the errors but the changes were spontaneous and unpredictable as to which DNA sequences, if any, would have been affected.ET
July 14, 2019, 01:13 PM PDT
gpuccio:
Most mutations are random. There can be no doubt about that.
I doubt it. I would say most are directed and only some are happenstance occurrences. See Spetner, "Not By Chance",1997. Also Shapiro, "Evolution: a view from the 21st Century". And:
He [the Designer] indeed seems to have “carefully crafted” information in His species giving them the ability to respond to environmental stimuli to alter their own genome to adapt to new environments. He then evidently let them wander where they will with the ability to adapt.- Dr. Lee Spetner “the Evolution Revolution” p 108
Just think about it- a Designer went through all of the trouble to produce various living organisms and place them on a changing planet in a changing universe. But the Designer is then going to leave it mostly to chance how those organisms cope with the changes? It just makes more sense that organisms were intelligently designed with the ability to adapt and evolve, albeit with genetic entropy creeping in.ET
July 14, 2019, 01:10 PM PDT
Good post at 56, gp. Also, it is my understanding that when someone says "mutations are random" they mean there is no causal connection between the mutation and whatever eventual effects and possible benefits it might have for the organism. "Mutations are random" doesn't refer to the causes of the mutations, I don't think.hazel
July 14, 2019, 12:52 PM PDT
I, of course, disagree with you. The third article,,, "According to the researchers, mutations of genes are not randomly distributed between the parental chromosomes. They found that 60 percent of mutations affect the same chromosome set and 40 percent both sets.,,, “It’s amazing how precisely the 60:40 ratio is maintained. It occurs in the genome of every individual – almost like a magic formula,” says Hoehe." That is fairly straightforward. And again, Directed mutations are ‘another possible explanation’. Your 'convoluted' model is not nearly as robust as you have presupposed.bornagain77
July 14, 2019, 12:34 PM PDT
Bornagain77: Most mutations are random. There can be no doubt about that. Of course, that does not exclude that some are directed. A directed mutation is an act of design. I perfectly agree with Behe that the level of necessary design intervention is at least at the family level. The three quotes you give have nothing to do with directed mutations and design. In particular, the author if the second one is frankly confused. He writes:
Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it. On the contrary, there's much evidence that genetic mutation vary in patterns. For instance it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism's predators and competition, and as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.
This is simple ignorance. The existence of patterns does not mean that a system is not probabilistic. It just means that there are also necessity effects. He makes his error clear saying: "Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern." Now, "a higher chance" is of course a probabilistic statement. A random distribution is not a distribution where all events have the same probability to happen. That is called a uniform probability distribution. If some events (like mutations near a place where mutations have already occurred) have a higher probability to occur, that is still a random distribution, one where the probability of the events is not uniform. Things become even worse. He writes: "While we can't say mutations are random, we can say there is a large chaotic component, just as there is in the throw of a loaded dice. But loaded dice should not be confused with randomness because over the long run—which is the time frame of evolution—the weighted bias will have noticeable consequences." But of course a loaded dice is a random system. Let's say that the dice is loaded so that 1 has a higher probability to occur. So the probabilities of the six possible events, instead of being all 1/6 (uniform distribution), are, for example, 0.2 for 1 and 0.16 for all the other outcomes. So, the dice is loaded. And so? Isn't that a random system? Of course it is. Each event is completely probabilitstic: we cannot anticipate it with a necessity rule. But the event one is more probable than the others. That article is simply a pile of errors and confusion. Whoever understands something about probability can easily see that. Unfortunately you tend to quote a lot of things, but it seems that not always you evaluate them critically. Again, I propose: let's leave it at that, This discussion does not seem to lead anywhere.gpuccio
July 14, 2019, 12:21 PM PDT
Gp states
I think that most mutations are random,
And yet the vast majority of mutations are now known to be 'directed'
:How life changes itself: the Read-Write (RW) genome. – 2013 Excerpt: Research dating back to the 1930s has shown that genetic change is the result of cell-mediated processes, not simply accidents or damage to the DNA. This cell-active view of genome change applies to all scales of DNA sequence variation, from point mutations to large-scale genome rearrangements and whole genome duplications (WGDs). This conceptual change to active cell inscriptions controlling RW genome functions has profound implications for all areas of the life sciences. http://www.ncbi.nlm.nih.gov/pubmed/23876611 WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? Fully Random Mutations – Kevin Kelly – 2014 Excerpt: What is commonly called “random mutation” does not in fact occur in a mathematically random pattern. The process of genetic mutation is extremely complex, with multiple pathways, involving more than one system. Current research suggests most spontaneous mutations occur as errors in the repair process for damaged DNA. Neither the damage nor the errors in repair have been shown to be random in where they occur, how they occur, or when they occur. Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it. On the contrary, there’s much evidence that genetic mutation vary in patterns. For instance it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism’s predators and competition, and as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern. http://edge.org/response-detail/25264 Duality in the human genome – November 28, 2014 Excerpt: According to the researchers, mutations of genes are not randomly distributed between the parental chromosomes. They found that 60 percent of mutations affect the same chromosome set and 40 percent both sets. Scientists refer to these as cis and trans mutations, respectively. Evidently, an organism must have more cis mutations, where the second gene form remains intact. “It’s amazing how precisely the 60:40 ratio is maintained. It occurs in the genome of every individual – almost like a magic formula,” says Hoehe. http://medicalxpress.com/news/2014-11-duality-human-genome.html
i.e. Directed mutations are 'another possible explanation'. As to, "do you believe that all relevant functional information is generated when “kinds” are created? And when would that happen?" I believe in 'top down' creation of 'kinds' with genetic entropy, as outlined by Sanford and Behe, following afterwards. As to exactly where that line should be, Behe has recently revised his estimate:
"I now believe it, (the edge of evolution), is much deeper than the level of class. I think is actually goes down to the level of family" Michael Behe: Darwin Devolves - video - 2019 https://www.youtube.com/watch?v=zTtLEJABbTw In this bonus footage from Science Uprising, biochemist Michael Behe discusses his views on the limits of Darwinian explanations and the evidence for intelligent design in biology.
I note that my model is Theologically modest in that I hold to traditional concepts of the omniscience of God and God creating 'kinds' that reproduce after themselves, whereas, humorously, your model is all over the place Theologically speaking. Your model, Theologically speaking, humorously reminds me of this old Johnny Cash song:
JOHNNY CASH - ONE PIECE AT A TIME - CADILLAC VIDEO https://www.youtube.com/watch?v=Hb9F2DT8iEQ
bornagain77
July 14, 2019, 11:38 AM PDT
Bornagain77: Moreover, the mechanisms described by Behe in Darwin Devolves are the known mechanisms of NS. They can certainly create some diversification, but essentially they give limited advantages in very special contexts, and they are essentially very simple forms of variation. They certainly cannot explain the emergence of new species, least of all the emergence of new complex functional information, like new functional proteins. So, do you believe that all relevant functional information is generated when "kinds" are created? And when would that happen? Just to understand.gpuccio
July 14, 2019, 10:18 AM PDT
Bornagain77: OK, it's too easy to be right in criticizing Swamidass! :) (Just joking, just joking... but not too much) Just to answer you observations about randomness: I think that most mutations are random, unless they are guided by design. I am not sure that I understand what your point is. Do you believe they are guided? I also believe that some mutations are guided, but that is a form of design. If they are not guided, how can you describe the system? If you cannot describe it in terms of necessity (and I don't think you can), some probability distribution is the only remaining option. Again, I don't understand what you really mean. But of course the mutations (if they are mutations) that generate new functional information are not random at all. they must be guided, or intelligently selected. As you know, I cannot debate God in this context. I can only do what ID theory allows is to do: recognize events where a design inference is absolutely (if you allow the word) warranted.gpuccio
July 14, 2019, 10:13 AM PDT
To all: As usual, the levels of regulation and crosstalk of this NF-kB system grow each time I make a Pubmed search. We are all interested, of course, in long non coding RNAs. Well, this paper is about their role in NF-kB signaling: Lnc-ing inflammation to disease https://www.ncbi.nlm.nih.gov/pubmed/28687714
Abstract: Termed 'master gene regulators', long ncRNAs (lncRNAs) have emerged as the true vanguard of the 'noncoding revolution'. Functioning at a molecular level, in most if not all cellular processes, lncRNAs exert their effects systemically. Thus, it is not surprising that lncRNAs have emerged as important players in human pathophysiology. As our body's first line of defense upon infection or injury, inflammation has been implicated in the etiology of several human diseases. At the center of the acute inflammatory response, as well as several pathologies, is the pleiotropic transcription factor NF-κB. In this review, we attempt to capture a summary of lncRNAs directly involved in regulating innate immunity at various arms of the NF-κB pathway that have also been validated in human disease. We also highlight the fundamental concepts required as lncRNAs enter a new era of diagnostic and therapeutic significance.
The paper, unfortunately, is not open access. It is interesting, however, that lncRNAs are now considered "master gene regulators".gpuccio
July 14, 2019, 10:05 AM PDT
No, I do not think Dr. Cornelius Hunter is ALWAYS right. But I certainly think he is right in his critique of Swamidass. Whereas I don't think you are always wrong. I just think you are, in this instance, severely mistaken in one or more of your assumptions behind your belief in common descent. Your model is, from what I can tell, severely convoluted. If you presuppose randomness in your model at any instance prior to the design input from God to create a new family of species.,, that is one false assumption that would undermine your claim. I can provide references if need be.bornagain77
July 14, 2019, 09:25 AM PDT
Bornagain77: I disagree with Cornelius Hunter when I think he is wrong. In that sense, I treat him like anyone else. You seem to believe that he is always right. I don't. Many times I have found that he is wrong in what he says. And no, my argument about neutral variation has nothing to do wiith the argument of shared errors. And with the idea that "lightning doesn’t strike twice". My argument is about differences, not similarities, I think you don't understand it. But that's not a problem.gpuccio
July 14, 2019, 08:27 AM PDT
"I would be cautious in accepting everything that Cornelius Hunter says, as you seem to do. I agree with him many times. But many other times I fully disagree." Like when he contradicts you? :) Though you tried to downplay it, your argument from supposedly 'neutral variations' is VERY similar to the shared error argument. As such, for reasons listed above, it is not nearly as strong as you seem to presuppose. It is apparent that you believe the variations were randomly generated and therefore you are basically claiming that “lightning doesn’t strike twice”, which is exactly the argument that Dr. Hunter critiqued. Moreover, If anything we now have far more evidence of mutations being 'directed' than we do of them being truly random. You said you could think of no other possible explanation, I hold that directed mutations are a 'other possible explanation' that is far more parsimonious to the overall body of evidence than your explanation of a Designer, i.e. God, creating a brand new species without bothering to correct supposed neutral variations and/or supposed shared errors.bornagain77
July 14, 2019, 07:26 AM PDT
Bornagain77: My argument is not about shared errors. It is about neutral mutations at neutral sites, grossly proportional to evolutionary split times. It is about the ka/ks ratio and the saturation of neutral sites after a few hundred million years. I have made the argument in great detail in the past, with examples, but I have no intention to repeat all the work now. By the way, I would be cautious in accepting everything that Cornelius Hunter says, as you seem to do. I agree with him many times. But many other times I fully disagree.gpuccio
July 14, 2019, 07:08 AM PDT
as to:
1) There are signatures of neutral variation in the conserved sequences, grossly proportional to the evolutionary time split, that demonstrate that they are physically passed on. This is the single most important argument in favour of descent, and I am aware of no possible explanation of this fact outside of physical descent of those sequences.,,, OK, I believe there are many arguments against that model, but I will just state here the simplest: it does not explain fact 1)
Again, the argument is not nearly as strong as you seem to think it is: Particularly You could say that the heart of this “shared error” argument is the idea that “lightning doesn’t strike twice.” The identical, harmful mutations, in different species, could not have arisen independently. Instead they must have arisen only once, and then were inherited from a common ancestor. The problem, of course, there is no reason to make this assumption. The logic made sense for written documents, but the species are not ancient manuscripts or homework assignments. They are species, and species are different. In fact repeated designs found in otherwise distant species are ubiquitous in biology. Listening to evolutionists one would think the species fall into an evolutionary pattern with a few minor exceptions here and there. But that is overwhelmingly false. From the morphological to the molecular level, repeated designs are everywhere, and they take on many different forms.
Shared Errors: An Open Letter to BioLogos on the Genetic Evidence, Cont. Cornelius Hunter - June 1, 2016 In recent articles (here, here and here) I have reviewed BioLogos Fellow Dennis Venema’s articles (here, here and here) which claimed that (1) the genomes of different species are what we would expect if they evolved, and (2) in particular the human genome is compelling evidence for evolution. Venema makes several confident claims that the scientific evidence strongly supports evolution. But as I pointed out Venema did not reckon with an enormous body of contradictory evidence. It was difficult to see how Venema could make those claims. Fortunately, however, we were able to appeal to the science. Now, as we move on to Venema’s next article, that will all change. In this article, Venema introduces a new kind of genetic evidence for evolution. Again, Venema’s focus is on, but not limited to, human evolution. Venema’s argument is that harmful mutations shared amongst different species, such as the human and chimpanzee, are powerful and compelling evidence for evolution. These harmful mutations disable a useful gene and, importantly, the mutations are identical. Are not such harmful, shared mutations analogous to identical typos in the term papers handed in by different students, or in historical manuscripts? Such typos are telltale indicators of a common source, for it is unlikely that the same typo would have occurred independently, by chance, in the same place, in different documents. Instead, the documents share a common source. Now imagine not one, but several such typos, all identical, in the two manuscripts. Surely the evidence is now overwhelming that the documents are related and share a common source. And just as a shared, identical, typos are a telltale indicator of a common source, so too must shared harmful mutations be proofs of a common ancestor. It is powerful and compelling evidence for common descent. It is, explains Venema, “one of the strongest pieces of evidence in favor of common ancestry between humans and chimpanzees (and other organisms).” There is only one problem. As we have explained so many times, the argument is powerful because the argument is religious. This isn’t about science. The Evidence Does Not Support the Theory The first hint of a problem should be obvious: harmful mutations are what evolution is supposed to kill off. The whole idea behind evolution is that improved designs make their way into the population via natural selection, and by the same logic natural selection (or purifying selection in this case) filters out the harmful changes. Therefore finding genetic sequence data that must be interpreted as harmful mutations weighs against evolutionary theory. Also, there is the problem that any talk of how a gene proves evolutionary theory is avoiding the problem that evolution fails to explain how genes arose in the first place. Evolution claiming proof in the details of gene sequences seems to be putting the cart before the horse. No Independent Changes You could say that the heart of this “shared error” argument is the idea that “lightning doesn’t strike twice.” The identical, harmful mutations, in different species, could not have arisen independently. Instead they must have arisen only once, and then were inherited from a common ancestor. The problem, of course, there is no reason to make this assumption. The logic made sense for written documents, but the species are not ancient manuscripts or homework assignments. 
They are species, and species are different. In fact repeated designs found in otherwise distant species are ubiquitous in biology. Listening to evolutionists one would think the species fall into an evolutionary pattern with a few minor exceptions here and there. But that is overwhelmingly false. From the morphological to the molecular level, repeated designs are everywhere, and they take on many different forms. The problem is that these repeated designs appear in species so distant that, according to evolutionary theory, their common ancestor could not have had that design. The human and squid have similar vision systems, but their purported common ancestor, a much simpler and more ancient organism, would have had no such vision system. Evolutionists are forced to say that incredibly complex designs must have arisen, yes, repeatedly and independently. And this must have occurred over and over in biology. It would be a challenge simply to document all of the instances in which evolutionists agreed to an independent origins. For evolutionists then to insist that similar designs in allied species can only be explained by common descent amounts to having it both ways. Bad Designs This “shared error” argument also relies on the premise that the structures in question are bad designs. In this case, the mutations are “harmful,” and so the genes are “broken.” And while that may well be true, it is a premise with a very bad track record. The history of evolutionary thought is full of claims of bad, inefficient, useless designs which, upon further research were found to be, in fact, quite useful. Simply from a history of science perspective, this is a dangerous argument to be making. Epicureanism The “shared error” argument is bad science and bad history, but it remains a very strong argument. This is because its strength does not come from science or history, but rather from religion. As I have explained many times, evolution is a religious theory, and the “shared error” argument is no different. This is why the scientific and historical problems don’t matter. Venema explains: The fact that different mammalian species, including humans, have many pseudogenes with multiple identical abnormalities (mutations) shared between them is a problem for any sort of non-evolutionary, special independent creation model. This is a religious argument, evolution as a referendum on a “special independent creation model.” It is not that the species look like they arose by random chance, it is that they do not look like they were created. Venema and the evolutionists are certain that God wouldn’t have directly created this world. There must be something between the Creator and creation — a Plastik Nature if you will. And if Venema and the evolutionists are correct in their belief then, yes, evolution must be true. Somehow, some way, the species must have arisen naturalistically. This argument is very old. In antiquity it drove the Epicureans to conclude the world must have arisen on its own by random motion. Today evolutionists say the same thing, using random mutations as their mechanism. Needed: An Audit Darwin’s book was loaded with religious arguments. They were the strength of his otherwise weak thesis, and they have always been the strength behind evolutionary thought. No longer can we appeal to the science, for it is religion that is doing the heavy lifting. Yet evolutionists claim the high ground of objective, empirical reasoning. 
Venema admits that some other geneticists do not agree with this “shared error” argument but, he warns, they do so “for religious reasons.” We have also seen this many times. Evolutionists make religious claims and literally in the next moment lay the blame on the other guy. This is the world according to the Warfare Thesis. We need an audit of our thinking. https://evolutionnews.org/2016/06/shared_errors_a/
and
In Arguments for Common Ancestry, Scientific Errors Compound Theoretical Problems Evolution News | @DiscoveryCSC May 16, 2016 (6) Swamidass points to pseudogenes as evidence for common ancestry, even though many pseudogenes show evidence of function, including the vitellogenin pseudogene that Swamidass cites. Swamidass repeatedly cites Dennis Venema’s arguments for common ancestry based upon pseudogenes. However, as we’ve discussed here in the past, quite a few pseudogenes have turned out to be functional, and we’re discovering more all the time. It’s only recently that we’ve had the technology to study the functions of pseudogenes, so we are just at the beginning of doing so. While it’s true that there’s a lot about pseudogenes we still don’t know, an RNA Biology paper observes, “The study of functional pseudogenes is just at the beginning.” And it predicts that “more and more functional pseudogenes will be discovered as novel biological technologies are developed in the future.” The paper concludes that functional pseudogenes are “widespread.” Indeed, when we carefully study pseudogenes, we often do find function. One paper in Annual Review of Genetics tellingly observed: “Pseudogenes that have been suitably investigated often exhibit functional roles.” One of Swamidass’s central examples mirrors Dennis Venema’s argument that the vitellogenin pseudogene in humans demonstrates we’re related to egg-laying vertebrates like fish or reptiles. But a Darwin-doubting scientist was willing to dig deeper. Good genetic evidence now indicates that what Dennis Venema calls the “human vitellogenin pseudogene” is really part of a functional gene, as one technical paper by an ID-friendly creationist biologist has shown. https://evolutionnews.org/2016/05/in_arguments_fo/
bornagain77
July 14, 2019, 06:59 AM PDT
Bornagain77: I quote myself: "That said, I am rather sure that you will stick to your model, model c). That’s fine for me. But I wanted to clarify as much as possible." The only thing in my model that explains biological form is design. Maybe it is not enough, but it is certainly necessary. I want to be clear: I agree with you about the importance of consciousness and of quantum mechanics. But what has that to do with my argument? Do you believe that functional information is designed? I do. Design comes from consciousness. Consciousness interacts with matter through some quantum interface. That's exactly what I believe. My model is not parsimonious and requires gargantuan jumps? Is it worse than the initial creation of kinds? However, for me we can leave it at that. As explained, I was not even implying CD in my initial discussion here.gpuccio
July 14, 2019, 06:50 AM PDT
correct time mark is 27 minute mark
How Quantum Mechanics and Consciousness Correlate – video (how quantum information theory and molecular biology correlate – 27 minute mark) https://youtu.be/4f0hL3Nrdas?t=1634
bornagain77
July 14, 2019 at 06:30 AM PDT
What is the falsification criterion of your model? It seems you are lacking a rigorous criterion, not to mention lacking experimental warrant that what you propose is even possible. "No descent at all. This is, I believe, your model." I do not believe in UCD, but I do believe in diversification from an initially created "kind" by devolutionary processes, i.e. Behe's "Darwin Devolves" and Sanford's "Genetic Entropy". I note that, especially in the Cambrian, we are talking about gargantuan jumps in the fossil record. Your model is not parsimonious with respect to such gargantuan jumps. Moreover, your genetic evidence is not nearly as strong as you seem to think it is. And even if it were, it is not nearly enough to explain 'biological form'. For that you need to incorporate recent findings from quantum biology:
How Quantum Mechanics and Consciousness Correlate - video (how quantum information theory and molecular biology correlate - 23 minute mark) https://www.youtube.com/watch?v=4f0hL3Nrdas
Darwinian Materialism vs. Quantum Biology – Part II - video https://www.youtube.com/watch?v=oSig2CsjKbg
bornagain77
July 14, 2019 at 06:07 AM PDT
Bornagain77 at #42: "All new information is 'designed in the process'???? Please elaborate on exactly what process you are talking about."

It should be clear. However, let's try again. Let's say that there are 3 main models for how functional information comes into existence in biological beings.

a) Descent with modifications generated by RV + NS: this is the neo-darwinian model. I absolutely (if you allow the word) reject it. So do you, I suppose.

b) Descent with designed modifications: this is my model. This is the process I refer to: a process of design, of engineering, which derives new species from what already exists. The important point, which justifies the term "descent", is that, as I have said, the old information that is appropriate is physically passed on from the ancestor to the new species. All the rest, the new functional information, is engineered in the design process.

So, to be more clear, let's say that species B appears in natural history at time T. Before it, there exists another species, A, which has some strong similarities to species B. Let's say that, according to my model, species B derives physically from the already existing species A. How does it happen?

Let's say that, just as an imaginary example, A and B share about 50% of protein coding genes. The proteins coded by these genes are very similar in the two species, almost identical, at least at the beginning. The reason for that is that the functions implemented by those proteins in the two species are extremely similar. But that is only part of the game. Of course, B has a lot of new proteins, or parts of proteins, or simply regulatory parts of the genome, that are not the same as A at all. Those sequences are absolutely functional, but they do things that are specific to B, and do not exist in A. In the same way, many specific functions of A are not needed in B, and so they are not implemented there.

Now, losing some proteins or some functions is not so difficult. We know that losing information is a very easy task, and requires no special ability. But how does all that new functional information arise in B? It did not exist in A, or in any other living organism that existed before time T. It arises in B for the first time, and approximately at time T. The obvious answer, in my model, is: it is newly designed functional information. If I did not believe that, I would be in the other field, and not here in ID. But the old information, the sequence information that retains its function from A to B? Well, in my model, very simply, it is physically passed on from A to B. That is the meaning of descent in my model. That's what makes A an ancestor of B, even if a completely new process of design and engineering is necessary to derive B from A.

Now, you may ask: how does that happen? Of course, we don't know the details, but we know three important facts:

1) There are signatures of neutral variation in the conserved sequences, grossly proportional to the evolutionary time split, that demonstrate that they are physically passed on. This is the single most important argument in favour of descent, and I am aware of no possible explanation of this fact outside of physical descent of those sequences.

2) The new functional information often arises in big jumps, and is almost always very complex. For the origin of vertebrates, I have computed about 1.7 million bits of new functional information, arising in at most 20 million years.
RV + NS could never do that, because it totally lacks the necessary probabilistic resources.

3) The fossil record and the existing genomes and proteomes show no trace of the many functional intermediates that would be necessary for RV + NS to even try something. Therefore, RV + NS did not do it, because there is no trace of what should absolutely be there.

So, how did design do it, with physical descent? Let's imagine ourselves doing it, if we were able. What would we do? It's very simple: we would take a few specimens of A, bring them to some lab of ours, and work on them to engineer the new species with our powerful means of genetic engineering, adding the new functional information to what already exists and can still be functional in the new project.

Where? And in what time? These are good questions. They are good questions in any case, even if you stick to your (I think) model, model c, soon to be described. Because species B does appear at time T. And that must happen somewhere. And that must happen in some time window. But the details are still to be understood. We know too little.

But one thing is certain: both space and time are somehow restricted. Space is restricted, because of course the new species must appear somewhere. It does not appear at once all over the globe. But there is more. Model a, the neo-darwinian model, needs a process that takes place almost everywhere. Why? Because it badly needs as many probabilistic resources as possible. IOWs, it badly needs big numbers. Of course, we know very well that no reasonable big number will do. The probabilistic resources simply are not there, even for bacteria crowding the whole planet for 5 billion years. But with small populations, any thought of RV and NS is blatantly doomed from the beginning. But design does not work that way. Design does not need big numbers, big populations, especially if it is mainly top down engineering. So, we could very well engineer B working on a relatively small sample of A. In our lab.

In what time? I really don't know, but certainly not too much. As you well know, those information jumps are rather sudden in natural history. This is a fact. So? 1 minute? 1 year? 1 million years? Interesting questions, but in the end it does not change much anyway. Not instantaneously, I would say. Not in model b, anyway. If it is an engineering process, it needs time, anyway.

So, what is important about this model? Simply that it is the best model that explains the facts.

1) The signatures of neutral variation in conserved sequences are perfectly explained. As those sequences have been passed on as they are from A to B, they keep those signatures. IOWs, if A has existed for 100 million years from some previous split, in those 100 million years neutral variation happens in the sequence, and differentiates that sequence in A from some homologous sequence in A1 (the organism derived from that old split). So, B inherits those changes from A, and if we compare B and A1, we find those differences, as we find them if we compare A and A1. The differences in B are inherited from A as it was 100 million years after the split from A1.

2) The big jumps in functional information are, of course, explained by the design process, the only type of process that can do those things.

3) There is no need for functional intermediates in the fossil record or in the genomes. What happens in the lab does not leave traces.
We do not need big intermediate populations to be expanded by positive NS, to gain new huge probabilistic resources (as in model a). We just need a few samples, a few intermediates, in a limited time and space. There is no reason to expect any relevant trace from that process. Of course, the new engineered species, when it is ready and working, is released into the general environment. IOWs, it is "published". That's what we observe in the fossil record, and in the genomes: the release of the new engineered species. Nothing else. So, model b, my model, explains all three types of observed facts.

c) No descent at all. This is, I believe, your model. What does that mean? Well, it can mean sudden "creation" (if the new species appears out of thin air, from nothing), or, more reasonably, engineering from scratch. I will not discuss the "creation" aspect. I would not know what to say, from a scientific point of view. But I will discuss the "engineering from scratch" model. However it is conceived (quick or slow, sudden or gradual), it implies one simple thing: each time, everything is re-engineered from scratch. Even what had already been engineered in previously existing species. From what? It's simple. If it is not creation ex nihilo, "scratch" here can mean only one thing: from inanimate matter. IOWs, it means re-doing OOL each time a new species originates.

OK, I believe there are many arguments against that model, but I will just state here the simplest: it does not explain fact 1). Moreover, I would definitely say that all your arguments against descent, however good (IMO, some are good, some are not), are always arguments against model a). They have no relevance at all against model b), my model. Once and for all, I absolutely (if you allow the word) reject model a).

That said, I am rather sure that you will stick to your model, model c). That's fine for me. But I wanted to clarify as much as possible.
gpuccio
July 14, 2019 at 04:30 AM PDT
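The probabilistic-resources point in the comment above turns on a simple piece of arithmetic: search resources scale logarithmically, so even very generous population numbers supply only tens of bits. Here is a minimal Python sketch of that arithmetic; the population, generation and mutation figures are purely illustrative assumptions, and only the 1.7-million-bit estimate comes from the comment itself.

```python
import math

# Illustrative assumptions (not data from the thread):
population_size   = 1e15   # assumed organisms alive at any one time in the lineage
generations       = 2e7    # assumed generations in ~20 million years
mutations_per_rep = 1e2    # assumed new variants tried per reproduction

# Total distinct variants RV + NS could ever sample under these assumptions:
total_attempts = population_size * generations * mutations_per_rep
available_bits = math.log2(total_attempts)   # probabilistic resources, in bits

required_bits = 1.7e6   # the comment's estimate for the vertebrate transition

print(f"search resources : ~{available_bits:.0f} bits")
print(f"claimed new FI   : {required_bits:.0f} bits")
print(f"shortfall        : {required_bits - available_bits:.0f} bits")
```

Whatever one makes of the 1.7-million-bit estimate itself, the sketch only shows the structure of the argument: multiplying the assumed resources by a further factor of a million adds only about 20 bits.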
"For the last time: CD, for me, just means that there is very strong evidence that the molecular information in DNA and proteins of already existing species is physically passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on. As proven by the differences in neutral sites, between species. I hope this is the last time I have to tell you that." To this in particular,,, "passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on." All new information is 'designed in the process"???? Please elaborate on exactly what process you are talking about. As to examples that falsify the common descent model: Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model.
New Paper by Winston Ewert Demonstrates Superiority of Design Model - Cornelius Hunter - July 20, 2018 Excerpt: Ewert’s three types of data are: (i) sample computer software, (ii) simulated species data generated from evolutionary/common descent computer algorithms, and (iii) actual, real species data. Ewert’s three models are: (i) a null model which entails no relationships between any species, (ii) an evolutionary/common descent model, and (iii) a dependency graph model. Ewert’s results are a Copernican Revolution moment. First, for the sample computer software data, not surprisingly the null model performed poorly. Computer software is highly organized, and there are relationships between different computer programs, and how they draw from foundational software libraries. But comparing the common descent and dependency graph models, the latter performs far better at modeling the software “species.” In other words, the design and development of computer software is far better described and modeled by a dependency graph than by a common descent tree. Second, for the simulated species data generated with a common descent algorithm, it is not surprising that the common descent model was far superior to the dependency graph. That would be true by definition, and serves to validate Ewert’s approach. Common descent is the best model for the data generated by a common descent process. Third, for the actual, real species data, the dependency graph model is astronomically superior compared to the common descent model. Where It Counts Let me repeat that in case the point did not sink in. Where it counted, common descent failed compared to the dependency graph model. The other data types served as useful checks, but for the data that mattered — the actual, real, biological species data — the results were unambiguous. Ewert amassed a total of nine massive genetic databases. In every single one, without exception, the dependency graph model surpassed common descent. Darwin could never have even dreamt of a test on such a massive scale. Darwin also could never have dreamt of the sheer magnitude of the failure of his theory. Because you see, Ewert’s results do not reveal two competitive models with one model edging out the other. We are not talking about a few decimal points difference. For one of the data sets (HomoloGene), the dependency graph model was superior to common descent by a factor of 10,064. The comparison of the two models yielded a preference for the dependency graph model of greater than ten thousand. Ten thousand is a big number. But it gets worse, much worse. Ewert used Bayesian model selection which compares the probability of the data set given the hypothetical models. In other words, given the model (dependency graph or common descent), what is the probability of this particular data set? Bayesian model selection compares the two models by dividing these two conditional probabilities. The so-called Bayes factor is the quotient yielded by this division. The problem is that the common descent model is so incredibly inferior to the dependency graph model that the Bayes factor cannot be typed out. In other words, the probability of the data set, given the dependency graph model, is so much greater than the probability of the data set given the common descent model, that we cannot type the quotient of their division. Instead, Ewert reports the logarithm of the number. Remember logarithms? Remember how 2 really means 100, 3 means 1,000, and so forth? 
Unbelievably, the 10,064 value is the logarithm (base value of 2) of the quotient! In other words, the probability of the data on the dependency graph model is so much greater than that given the common descent model, we need logarithms even to type it out. If you tried to type out the plain number, you would have to type a 1 followed by more than 3,000 zeros. That's the ratio of how probable the data are on these two models! By using a base value of 2 in the logarithm we express the Bayes factor in bits. So the conditional probability for the dependency graph model has a 10,064 advantage over that of common descent. 10,064 bits is far, far from the range in which one might actually consider the lesser model. See, for example, the Bayes factor Wikipedia page, which explains that a Bayes factor of 3.3 bits provides "substantial" evidence for a model, 5.0 bits provides "strong" evidence, and 6.6 bits provides "decisive" evidence. This is ridiculous. 6.6 bits is considered to provide "decisive" evidence, and when the dependency graph model case is compared to the common descent case, we get 10,064 bits.

But It Gets Worse

The problem with all of this is that the Bayes factor of 10,064 bits for the HomoloGene data set is the very best case for common descent. For the other eight data sets, the Bayes factors range from 40,967 to 515,450. In other words, while 6.6 bits would be considered to provide "decisive" evidence for the dependency graph model, the actual, real, biological data provide Bayes factors of 10,064 on up to 515,450. We have known for a long time that common descent has failed hard. In Ewert's new paper, we now have detailed, quantitative results demonstrating this. And Ewert provides a new model, with a far superior fit to the data. https://evolutionnews.org/2018/07/new-paper-by-winston-ewert-demonstrates-superiority-of-design-model/

Response to a Critic: But What About Undirected Graphs? - Andrew Jones - July 24, 2018
Excerpt: The thing is, Ewert specifically chose Metazoan species because "horizontal gene transfer is held to be rare amongst this clade." Likewise, in Metazoa, hybridization is generally restricted to the lower taxonomic groupings such as species and genera — the twigs and leaves of the tree of life. In a realistic evolutionary model for Metazoa, we can expect to get lots of "reticulation" at lower twigs and branches, but the main trunk and branches ought to have a pretty clear tree-like form. In other words, a realistic undirected graph of Metazoa should look mostly like a regular tree. https://evolutionnews.org/2018/07/response-to-a-critic-but-what-about-undirected-graphs/

This Could Be One of the Most Important Scientific Papers of the Decade - July 23, 2018
Excerpt: Now we come to Dr. Ewert's main test. He looked at nine different databases that group genes into families and then indicate which animals in the database have which gene families. For example, one of the nine databases (Uni-Ref-50) contains more than 1.8 million gene families and 242 animal species that each possess some of those gene families. In each case, a dependency graph fit the data better than an evolutionary tree. This is a very significant result. Using simulated genetic datasets, a comparison between dependency graphs and evolutionary trees was able to distinguish between multiple evolutionary scenarios and a design scenario. When that comparison was done with nine different real genetic datasets, the result in each case indicated design, not evolution.
Please understand that the decision as to which model fit each scenario wasn’t based on any kind of subjective judgement call. Dr. Ewert used Bayesian model selection, which is an unbiased, mathematical interpretation of the quality of a model’s fit to the data. In all cases Dr. Ewert analyzed, Bayesian model selection indicated that the fit was decisive. An evolutionary tree decisively fit the simulated evolutionary scenarios, and a dependency graph decisively fit the computer programs as well as the nine real biological datasets. http://blog.drwile.com/this-could-be-one-of-the-most-important-scientific-papers-of-the-decade/ Why should mitochondria define species? - 2018 Excerpt: The particular mitochondrial sequence that has become the most widely used, the 648 base pair (bp) segment of the gene encoding mitochondrial cytochrome c oxidase subunit I (COI),,,, The pattern of life seen in barcodes is a commensurable whole made from thousands of individual studies that together yield a generalization. The clustering of barcodes has two equally important features: 1) the variance within clusters is low, and 2) the sequence gap among clusters is empty, i.e., intermediates are not found.,,, Excerpt conclusion: , ,The simple hypothesis is that the same explanation offered for the sequence variation found among modern humans applies equally to the modern populations of essentially all other animal species. Namely that the extant population, no matter what its current size or similarity to fossils of any age, has expanded from mitochondrial uniformity within the past 200,000 years.,,, https://phe.rockefeller.edu/news/wp-content/uploads/2018/05/Stoeckle-Thaler-Final-reduced.pdf Sweeping gene survey reveals new facets of evolution – May 28, 2018 Excerpt: Darwin perplexed,,, And yet—another unexpected finding from the study—species have very clear genetic boundaries, and there’s nothing much in between. “If individuals are stars, then species are galaxies,” said Thaler. “They are compact clusters in the vastness of empty sequence space.” The absence of “in-between” species is something that also perplexed Darwin, he said. https://phys.org/news/2018-05-gene-survey-reveals-facets-evolution.html
bornagain77
July 13, 2019 at 06:57 PM PDT
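To make the unit conversion in the quoted article explicit: a log2 Bayes factor of B bits corresponds to a plain probability ratio of 2^B, which has roughly B × log10(2) decimal digits. A small Python check of that arithmetic; the bit values are the ones quoted above, and everything else is just unit conversion:

```python
import math

def bits_to_decimal_digits(bits: float) -> float:
    """Decimal exponent of a Bayes factor given as a log2 value (bits):
    log10(2**bits) = bits * log10(2)."""
    return bits * math.log10(2)

# 6.6 bits is the "decisive" threshold mentioned in the quote;
# 10,064 and 515,450 are the smallest and largest values reported for the real datasets.
for bits in (6.6, 10_064, 515_450):
    print(f"{bits:>9} bits  ->  ratio of about 10^{bits_to_decimal_digits(bits):.0f}")
```

For 10,064 bits this gives about 10^3030, which matches the article's "a 1 followed by more than 3,000 zeros."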
Bornagain77: It's amazing how much you misunderstand me, even if I have repeatedly tried to explain my views to you.

1) "Interesting claim of absolute certainty from you given the discontinuous nature of the fossil record, the discontinuous nature of the genetic evidence, and the fact that no one has ever changed the basic body plan of an organism into another body plan."

Interesting claims, which have nothing to do with my belief in CD, and about which I can absolutely agree with you. I absolutely believe that the fossil record is discontinuous, that genetic evidence is discontinuous, and that no one has ever changed the basic body plan of an organism into another body plan. And so?

2) "Perhaps, given your poverty of empirical warrant, a bit more modest measure of certainty would be wise on your part?"

I don't believe that scientific certainty is ever absolute. I use "absolutely" to express the strength of my certainty that there is empirical warrant for CD. And I have explained why, many times, even to you. As I have explained many times to you what I mean by CD. But I am not sure that you really listen to me. That's OK, I believe in free will, as you probably know.

3) "For instance, it seems you are holding somewhat to a reductive materialistic framework in your 'absolute' certainty about CD, and yet, the failure of reductive materialism to be able to explain the basic form and/or body plan of any particular organism occurs at a very low level. Much lower than DNA itself."

I am in no way a reductionist, least of all a materialist. My certainty about CD derives only from scientific facts, and from what I believe to be the most reasonable way to interpret them. As I have tried to explain many times. Essentially, the reasons why I believe in CD (again, the type of CD that I believe in, and that I have tried to explain to you many times) are of the same type as the reasons for which I believe in Intelligent Design. There is nothing reductionist or materialist in them. Only my respect for facts. For example, I do believe that we do not understand at all how body plans are implemented. You seem to know more. I am happy for you.

4) "In other words, even with a complete microscopic description of an organism, it is impossible for you to have 'absolute' certainty about the macroscopic behavior of that organism much less to have 'absolute' certainty about CD."

I have just stated that IMO we don't understand at all how body plans are implemented. Moreover, I don't believe at all that we have any complete microscopic description of any living organism. We are absolutely (if you allow the word) distant from that. OK. But I still don't understand what that has to do with CD.

For the last time: CD, for me, just means that there is very strong evidence that the molecular information in DNA and proteins of already existing species is physically passed on to new species that by design derive from them. All the new information is designed in the process, but the old information is physically passed on. As proven by the differences in neutral sites between species. I hope this is the last time I have to tell you that.
gpuccio
July 13, 2019 at 05:44 PM PDT
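The "differences in neutral sites" point above rests on a simple expectation: at positions under no functional constraint, substitutions accumulate roughly in proportion to the time since two lineages stopped sharing a physical line of descent. Below is a toy Python simulation of that expectation only; the sequence length and per-site rate are arbitrary illustrative assumptions.

```python
import random

def diverge(seq, split_my, rate_per_site_per_my=0.002):
    """Return a copy of `seq` after `split_my` million years of neutral change,
    using a crude at-most-one-substitution-per-site model at an assumed rate."""
    out = list(seq)
    for i in range(len(out)):
        if random.random() < min(1.0, rate_per_site_per_my * split_my):
            out[i] = random.choice([b for b in "ACGT" if b != out[i]])
    return "".join(out)

random.seed(1)
ancestor = "".join(random.choice("ACGT") for _ in range(10_000))

for split_my in (10, 50, 100, 200):          # million years since the split
    a = diverge(ancestor, split_my)          # lineage A drifts independently...
    b = diverge(ancestor, split_my)          # ...and so does lineage B
    frac_diff = sum(x != y for x, y in zip(a, b)) / len(a)
    print(f"split {split_my:>3} My  ->  neutral-site divergence {frac_diff:.3f}")
```

Older splits show proportionally more neutral-site differences, which is the signature the comment appeals to; the sketch says nothing, one way or the other, about how any functional differences arose.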
"I absolutely believe in CD" Interesting claim of absolute certainty from you given the discontinuous nature of the fossil record, the discontinuous nature of the genetic evidence, and the fact that no one has ever changed the basic body plan of an organism into another body plan. Perhaps, given your poverty of empirical warrant, a bit more modest measure of certainty would be wise on your part? For instance, it seems you are holding somewhat to a reductive materialistic framework in your 'absolute' certainty about CD, and yet, the failure of reductive materialism to be able to explain the basic form and/or body plan of any particular organism occurs at a very low level. Much lower than DNA itself. In the following article entitled 'Quantum physics problem proved unsolvable: Gödel and Turing enter quantum physics', which studied the derivation of macroscopic properties from a complete microscopic description, the researchers remark that even a perfect and complete description of the microscopic properties of a material is not enough to predict its macroscopic behaviour.,,, The researchers further commented that their findings challenge the reductionists' point of view, as the insurmountable difficulty lies precisely in the derivation of macroscopic properties from a microscopic description."
Quantum physics problem proved unsolvable: Gödel and Turing enter quantum physics - December 9, 2015 Excerpt: A mathematical problem underlying fundamental questions in particle and quantum physics is provably unsolvable,,, It is the first major problem in physics for which such a fundamental limitation could be proven. The findings are important because they show that even a perfect and complete description of the microscopic properties of a material is not enough to predict its macroscopic behaviour.,,, "We knew about the possibility of problems that are undecidable in principle since the works of Turing and Gödel in the 1930s," added Co-author Professor Michael Wolf from Technical University of Munich. "So far, however, this only concerned the very abstract corners of theoretical computer science and mathematical logic. No one had seriously contemplated this as a possibility right in the heart of theoretical physics before. But our results change this picture. From a more philosophical perspective, they also challenge the reductionists' point of view, as the insurmountable difficulty lies precisely in the derivation of macroscopic properties from a microscopic description." http://phys.org/news/2015-12-quantum-physics-problem-unsolvable-godel.html
In other words, even with a complete microscopic description of an organism, it is impossible for you to have 'absolute' certainty about the macroscopic behavior of that organism, much less to have 'absolute' certainty about CD.
bornagain77
July 13, 2019 at 04:12 PM PDT
Bornagain77: No. As you know, I absolutely believe in CD, but that is not the issue here. Homology is homology, and divergence is divergence, whatever the model we use to explain them. I just wanted to show an example of a protein (RelA), indeed a TF, where both homology (in the DBD) and divergence (in the TADs) are certainly linked to function. When I want to "push" for CD, I know how to do that.
gpuccio
July 13, 2019 at 03:12 PM PDT
Silver Asiatic and all:

OK, a few words about the myth of "self organization". You say: "But we don't have enough data on how he (and others) believe self-organization really works as a substitute for Darwinian mechanisms, and that weakens support for Design in some ways."

It is perfectly true that we "don't have enough data" about that. We don't have them because there is none: "self organization" simply does not work as a substitute for Darwinian mechanisms. IOWs, it explains absolutely nothing about functional complexity (not that Darwinian mechanisms do, but at least they try).

Let's see. I would say that there is a correct concept of self-organization, and a completely mythological expansion of it to realities that have nothing to do with it. The correct concept of self-organization comes from physics and chemistry, essentially. It is the science behind systems that present some unexpected "order" deriving from the interaction of random components and physical laws. Examples:

a) Physics: Heat applied evenly to the bottom of a tray filled with a thin sheet of viscous oil transforms the smooth surface of the oil into an array of hexagonal cells of moving fluid called Bénard convection cells.

b) Chemistry: A Belousov–Zhabotinsky reaction, or BZ reaction, is a nonlinear chemical oscillator, including bromine and an acid. These reactions are far from equilibrium and remain so for a significant length of time and evolve chaotically, being characterized by a noise-induced order.

And so on. Now, the concept of self-organization has been artificially expanded to almost everything, including biology. But the phenomenon is essentially derived from this type of physical model. In general, in these examples, some stochastic system tends to achieve some more or less ordered stabilization towards what is called an attractor.

Now, to make things simple, I will just mention a few important points that show how the application of those principles to biology is completely wrong.

1) In all those well known physical systems, the system obeys the laws of physics, and the pattern that "emerges" can very well be explained as an interaction between those laws and some random component. Snowflakes are another example.

2) The property we observe in these systems is some form of order. That is very important. It is the most important reason why self-organization has nothing to do with functional complexity.

3) Functional complexity is the number of specific bits that are necessary to implement a function. It has nothing to do with a generic "order". Take a protein that has an enzymatic activity, for example, and compare it to a snowflake. The snowflake has order, but no complex function. Its order can be explained by simple laws, and the differences between snowflakes can be explained by random differences in the conditions of the system. Instead, the function of a protein strictly depends on the sequence of AAs. It has nothing to do with random components, and it follows a very specific "recipe" coming from outside the system: the specific sequence in the protein, which in turn depends on the specific sequence of nucleotides in the protein coding gene. There is no way that such a specific sequence can be the result of "self-organization". To believe that it is the result of Natural Selection is foolish, but at least it has some superficial rationale. But to believe that it can be the result of self-organization, of physical and chemical laws acting on random components, is total folly.
4) The simple truth is that the sequence of AAs generates function according to chemical rules, but to find which sequence among all possible sequences will have the function requires deep understanding of the rules of chemistry, and extreme computational power. We are still not able to build functional proteins by a top down process. Bottom up processes are more efficient, but still require a lot of knowledge, computational power, and usually strictly guided artificial selection. Even so, we are completely unable to engineer anything like ATP synthase, as I have discussed in detail many times. Nor could RV + NS ever do that. But, certainly, no amount of "self-organization" in the whole of reality could even begin to do such a thing.

5) Complex networks like the one I have discussed here certainly elude our understanding in many ways. But one thing is certain: they do require tons of functional information at the level of the sequences in proteins and other parts of the genome to work correctly. As we have seen in the OP, mutations in different parts of the system are connected to extremely serious diseases. Of course, no self-organization of any kind can ever correct those small errors in digital functional information.

6) The function of a protein is not an "emerging" quality of the protein any more than the function of a watch is an emerging quality of the gears. The function of a protein depends on a very precise correspondence between the digital sequence of AAs and the laws of biochemistry, which determines the folding and the final structure and status (or statuses) of the protein. This is information. The same information that makes the code for Excel a functional reality. Do we see codes for software emerging from self-organization? We should maybe inform video game programmers of that; they could spare a lot of work and time.

In the end, all these debates about self-organization, emerging properties and snowflakes have nothing to do with functional information. The only objects that exhibit functional information beyond 500 bits are, still, human artifacts and biological objects. Nothing else. Not snowflakes, not viscous oil, not the game of life. Only human artifacts and biological objects. Those are the only objects in the whole known universe that exhibit thousands, millions, maybe billions of bits strictly aimed at implementing complex and obvious functions. The only existing instances of complex functional information.
gpuccio
July 13, 2019 at 03:09 PM PDT
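The working definition of functional complexity in the comment above (the number of specific bits necessary to implement a function) is usually written as minus the log2 of the fraction of sequences that can perform the function. A toy Python sketch of that bookkeeping; the domain length and the count of functional sequences are made-up assumptions, while the 500-bit threshold is the one the comment cites:

```python
import math

def functional_bits(functional_sequences: float, total_sequences: float) -> float:
    """Functional information in bits: FI = -log2(functional / total)."""
    return -math.log2(functional_sequences / total_sequences)

aa_length  = 150                 # assumed length of a toy protein domain
total      = 20.0 ** aa_length   # all possible amino-acid sequences of that length
functional = 1e20                # assumed (generously large) number that perform the function

fi = functional_bits(functional, total)
print(f"Functional information: {fi:.0f} bits "
      f"({'above' if fi > 500 else 'below'} the 500-bit threshold)")
```

The sketch only shows how the quantity is counted; it says nothing about how many sequences are actually functional for any real protein, which is the empirically contested part.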
"What is the problem? What am I missing?" Could be me missing something. I thought you might, with your emphasis on conservation, be pushing for CD again.bornagain77
July 13, 2019 at 02:18 PM PDT
Bornagain77 at #35: I am not sure that I understand what you mean. My theory? Falsification? Counterexamples?

At #12 you quote a paper that says: "Similarity regression inherently quantifies TF motif evolution, and shows that previous claims of near-complete conservation of motifs between human and Drosophila are inflated, with nearly half of the motifs in each species absent from the other, largely due to extensive divergence in C2H2 zinc finger proteins." OK?

At #14 I agree with the paper, and add a comment: "Indeed, divergence in TF sequences and motifs is certainly one of the main tools of specific transcription control in different organisms. A lot of superficial ideas about TFs are probably due to the rather strong conservation of known DNA binding domains (DBDs). However, DBDs are only part of the story. The most interesting part of TF sequences is certainly to be found in the less conserved sequences and domains, even in intrinsically disordered sequences." OK?

At #29 I reference a paper about RelA, one of the TFs discussed in this OP, that shows a clear example of what I said at #14: homology of the DBD and divergence of the functional TADs between humans and cartilaginous fishes. Which is exactly what was stated in the paper you quoted.

What is the problem? What am I missing?
gpuccio
July 13, 2019 at 02:04 PM PDT
Per GP at #32: for falsification, it is not enough to find examples that support your theory. In other words, I can find plenty of counterexamples.
July 13, 2019 at 11:25 AM PDT
Silver Asiatic at #31: Very good points. Yes, my argument is exactly that, as the cell is more than a machine, and yet it implements the same type of functions as traditional machines do, only with much higher flexibility and complexity, it does require a lot more intelligent design and engineering to be able to work. So, it is absolutely true that the researcher in that paper has made a greater point for Intelligent Design. But, of course, he (or they) will never admit such a thing! And we know very well why. Hence the call to "self-organization", or to "stochastic systems". Of course, that's simply mystification. And not even a good one. I will comment on the famous concept of "self-organization" in my next post.
gpuccio
July 13, 2019 at 10:56 AM PDT
Silver Asiatic at #30: I absolutely agree with what you say here! :)
gpuccio
July 13, 2019 at 10:49 AM PDT
Bornagain77 at #12: I believe that my comment at #29 is strictly connected to your observations. It also expands, with a real example, the simple ideas I had already expressed at #14. So, you might like to have a look at it! :)
gpuccio
July 13, 2019 at 10:48 AM PDT
GP
What do you think? Do my arguments in this OP, about harnessing stochastic change to get strict function, favour the design hypothesis? Or are they perfectly compatible with a neo-darwinian view of reality?
I think you did a great job, but just a thought … You responded to the notion that supported our view: the researcher says that the cell is not merely an engineered machine but something more dynamic. So, we support that, and you showed that the cell is far more than a machine. However, in supporting that researcher's view, has the discussion changed? In this case, the researcher is actually saying that deterministic processes cannot explain these cellular functions. He says it's all about self-organization, etc. Now, what you have done is amplified his statement very wonderfully. However … what remains open are a few things: 1. Why didn't the researcher, who states what you (and we) would and did state, just conclude Design? 2. The researcher is attacking Darwinism (subtly) while accepting some of it:
This familiar understanding grounds the conviction that a cell's organization can be explained reductionistically, as well as the idea that its molecular pathways can be construed as deterministic circuits. The machine conception of the cell owes a great deal of its success to the methods traditionally used in molecular biology. However, the recent introduction of novel experimental techniques capable of tracking individual molecules within cells in real time is leading to the rapid accumulation of data that are inconsistent with an engineering view of the cell.
… so, hasn't he already conceded the game to us on that point? Could we now show how self-organization is not a strong enough answer for this type of system? I believe we could simply use Nicholson's paper to discredit Darwinism (as he does himself), and our amplification of his work does "favor a design view". But we don't have enough data on how he (and others) believe self-organization really works as a substitute for Darwinian mechanisms, and that weakens support for Design in some ways.
Silver Asiatic
July 13, 2019 at 10:32 AM PDT