Uncommon Descent Serving The Intelligent Design Community

Karsten Pultz: Why random processes cannot produce information: A new approach to the argument


Our Danish friend, Karsten Pultz, author of Exit Evolution, tackles the big question:


Why can random processes not produce information? This is the important question ID raises against the neo-Darwinian claim that life came about by random processes. Here I offer some thoughts on the connection between information in the form of digital codes and the products those codes specify. I thus hope to support the argument for ID and for what seems to me the inescapable fact of teleology in nature.

If a random number generator were set to produce eight-digit numbers and it coincidentally spat out 87958007, which happens to be my phone number, would it then have produced information? Or if a prebiotic soup randomly produced a functional protein, or a string of DNA that coded for a protein (ignoring the low probability of such occurrences), would the soup have produced information?
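To make the thought experiment concrete, here is a minimal sketch in Python of such a generator. The directory contents and the eight-digit format are assumptions for illustration only; the point is that whether a randomly produced string "counts" is decided entirely by a rule set that sits outside the generator.

```python
import random

# Hypothetical "directory": stands in for the external system of rules
# that decides which digit strings count as phone numbers in service.
DIRECTORY = {"87958007"}  # assumed for illustration

def random_eight_digits(rng: random.Random) -> str:
    """Produce one random eight-digit string, with no reference to any rules."""
    return "".join(rng.choice("0123456789") for _ in range(8))

rng = random.Random(42)
for _ in range(5):
    candidate = random_eight_digits(rng)
    # The generator itself cannot say whether this is a phone number;
    # only the externally defined directory can.
    status = "in service" if candidate in DIRECTORY else "not a phone number"
    print(candidate, status)
```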

I would argue no.

The specific sequences that carry information (phone numbers, for example) can be defined as information only in relation to the whole system of which they are a part. There must be a translation system with defined rules that sorts out which sequences contain information and which do not.

To claim that information in DNA could arise by random chance is therefore nonsense. A functional sequence could not be regarded as information until we have a complete set of rules defining which sequences are functional and which are nonfunctional gobbledygook.

It is not reasonable, then, to consider it likely that information can arise by chance without the translation system that defines it as information.

A random process producing information is an oxymoron

No matter how many in-service phone numbers a random number generator happens to spit out, they would not be phone numbers, because the generator has not chosen them according to a defined set of rules. In the same way, proteins randomly produced in a prebiotic soup would not be proteins at all, because they would not have been produced in relation to the translation system that defines them as proteins. Equally important, they would not have been produced in relation to the specific task they would serve in an organism.


Phone numbers are only phone numbers because we, the intelligent designers, have made the rules that decide what is required in order for a sequence of digits to function as an actual phone number.
Talking about information detached from the translation system which defines it as information doesn’t make sense.

We don’t get information without the act of choice, a feature that is related only to intelligent agency. A random process producing information is therefore an oxymoronic (or just moronic) concept. The translation system that defines what sequences are functional is obviously intelligently designed because it involves the conscious selection of parameters.

The information-product connection

The problem is not only that there aren't enough probabilistic resources to produce the DNA code by random processes; the DNA code would in fact not be information at all without the translation system. And the code, together with the translation system, would still be worthless if it were not strictly tied to the last part of the overall system, namely the product, the organism.

In the end, it is the final product that determines whether the information on which the product is based can be considered information in the first place.

Code-translation-product

In a modern car factory we have a three-part production system consisting of computer code, the machinery (robots) that translates the code into specific movements, and finally the end product, the car. It is obvious that the intelligent designers did not start with the first two items, the code and the translation. It was the final product that was initially in the minds of the intelligent designers; the code and the translation tools were developed in coordination to realize the idea of a car.

Hence I will argue that information is only information if it is related to the end product. The idea (logos) comes first; then the coding and the translation tools are put together, simultaneously, in order to realize the idea.

I think it is reasonable to argue that information is necessarily tied to a product, an idea (logos) or message, and that the product is always the first thing in the mind of the intelligent designer. Information therefore cannot be the product of a mindless process.

Human-engineered factory production is characterized by initially having a desired object in mind. After the inception of the idea, robots are built that can produce the object, and lastly the coding that is going to operate the machinery is done. The important thing here is that the idea of the end product comes first; the translation (the machinery) and the coding come after the inception of the idea.

A code, which is what we call information, is nothing in itself, because it is a slave to the idea. I would argue that the same holds for living things: the DNA code was set up to realize an idea, an organism the intelligent designer already had in mind.

Identifying that something is wrong

It is the end product that defines whether a code sequence carries ("correct") information or not. If cars leave the production line with only three wheels or with just one headlight, we become aware that something is wrong with the underlying information. So that is how we can evaluate whether something can be regarded as information: simply by looking at the end product.
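The car analogy can be made concrete with a toy check, a minimal sketch whose Car fields and four-wheel, two-headlight specification are invented purely for illustration: the only way this code judges whether the upstream "information" was correct is by inspecting the end product.

```python
from dataclasses import dataclass

@dataclass
class Car:          # hypothetical end product
    wheels: int
    headlights: int

def product_is_correct(car: Car) -> bool:
    """Judge the underlying production information solely by the end product."""
    return car.wheels == 4 and car.headlights == 2

print(product_is_correct(Car(wheels=4, headlights=2)))  # True: nothing wrong upstream
print(product_is_correct(Car(wheels=3, headlights=1)))  # False: the underlying information is faulty
```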

The materialist cannot escape the fact of teleology in nature, because even he or she will recognize illness and defect as something that is “wrong” with an organism, thereby acknowledging the overall idea and purpose of an organism.

When it comes to information, the process always starts with an idea in a mind. The information that's needed to realize the idea is defined by the idea, not the other way around. This makes it unlikely that information could arise by mindless natural processes, because we need a mind with an idea before we come to the part we call information.

Because human engineers, the only intelligent designers we are familiar with, operate in the way described, with the idea as primary and the translation and coding as secondary, we have an empirical basis for arguing that life is the product of an idea in the mind of an intelligent designer.

The idea is primary

In human engineering you start with the end-product in mind, for instance a car. Then you set up the machinery that can produce the desired item, and lastly you program the machinery. In written language you also start with the end-product, namely the message you want to convey, and then you do the "programming", the sequencing of letters according to rules chosen in advance. It is not possible to consider information apart from the translation and the end product. The three parts, code-translation-product, are inherently connected, and the idea, product, or message in the mind of the intelligent designer is the first to arise.

So even if the chances of a protein arising in a prebiotic soup were not out of reach, it would still not be a protein because a protein is defined by its function in the end-product, the organism. A DNA sequence can only be defined as information if you already have the organism in view. Therefore it makes sense that in the beginning was the idea, the logos.

Conclusion

I think my argument shows that idealism is true and materialism is false, that random processes do not produce information, and that a mind with an idea is the primary means by which everything comes into existence. One can use it to argue for ID and for teleology in nature. I would also not hesitate to use it as argument in a theological debate.

See also: Karsten Pultz: The perils of talking about ID. He wonders, should he give up?

Comments
PavelU claims that,
this recent paper that explains how the gene regulatory networks evolved:
Yet the paper that PavelU cites does nothing of the sort. The paper is a 'just-so story' with no empirical support. It uses computer modelling to "show that fitness landscapes can be modified by the intrinsic properties of dynamical network self-organization, via a simple, biologically plausible mechanism that is compatible with conventional descriptions of evolution by natural selection." They simply have no empirical evidence that their computer model is feasible. As they themselves admit in the discussion section of their paper, "Attractor scaffolding offers a potential mechanism for genetic assimilation; by the gradual evolution of limit cycle dynamics towards point attractor dynamics. Thus, it might support a range of epigenetic phenomena, such as the Baldwin effect(s)." As should be needless to say, "offers a potential mechanism" and "it might support" are a FAR cry from actually empirically demonstrating how gene regulatory networks might have supposedly evolved. Moreover, for Darwinian computer programmers to claim Intelligent Design is not needed is ludicrous since, number one, computer algorithms don't write themselves but are instead the "outcome of thousands of human decisions."
- Greg Coppola to Tucker: "algorithms don't 'write themselves'" Excerpt: "Basically, any software launch reflects the outcome of thousands of human decisions. If you made different human decisions you would get a different result. And so, if you see a resulting end product that seems to encode a bias of one sort or another, there must have been that bias in the process that produced the end result." - Google Insider, Greg Coppola, Talks Political Bias at Google On Tucker Carlson https://www.youtube.com/watch?v=uu5-VQuFU_g
In short, the computer programs themselves are obviously the product of intelligent design. Secondly, as Robert Marks, William Dembski, and company, have demonstrated, “There exists no (computer) model successfully describing undirected Darwinian evolution.",, and "Those hoping to establish Darwinian evolution as a hard science with a (computer) model have either failed or inadvertently cheated." since they sneak information into the model which "can be measured, in bits, as active information.,,,”
Top Ten Questions and Objections to ‘Introduction to Evolutionary Informatics’ – Robert J. Marks II – June 12, 2017 Excerpt: “There exists no model successfully describing undirected Darwinian evolution. Hard sciences are built on foundations of mathematics or definitive simulations. Examples include electromagnetics, Newtonian mechanics, geophysics, relativity, thermodynamics, quantum mechanics, optics, and many areas in biology. Those hoping to establish Darwinian evolution as a hard science with a model have either failed or inadvertently cheated. These models contain guidance mechanisms to land the airplane squarely on the target runway despite stochastic wind gusts. Not only can the guiding assistance be specifically identified in each proposed evolution model, its contribution to the success can be measured, in bits, as active information.,,,”,,, “there exists no model successfully describing undirected Darwinian evolution. According to our current understanding, there never will be.,,,” https://evolutionnews.org/2017/06/top-ten-questions-and-objections-to-introduction-to-evolutionary-informatics/
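(For readers who want to see what "measured, in bits, as active information" means in practice, here is a minimal sketch based on the definition Marks and Dembski publish: active information I+ = log2(q/p), where p is the success probability of the blind, unassisted search and q the success probability of the assisted search. The example probabilities below are invented purely for illustration.)

```python
import math

def active_information(p_blind: float, q_assisted: float) -> float:
    """Active information in bits: how much the assistance boosts the search
    beyond blind chance, I+ = log2(q/p)."""
    return math.log2(q_assisted / p_blind)

# Illustrative numbers only: a blind search with a 1-in-2^20 chance of success
# versus an "assisted" search that succeeds 1 time in 2^4.
p = 1 / 2**20
q = 1 / 2**4
print(f"Active information smuggled in: {active_information(p, q):.1f} bits")  # 16.0 bits
```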
Thirdly and most importantly, all the empirical evidence we have to date (actual empirical evidence, not misleading computer models that were intelligently designed by Darwinian computer programmers) says that it is impossible for gene regulatory networks to gradually evolve. Specifically, "There is always an observable consequence if a dGRN (developmental gene regulatory network) subcircuit is interrupted. Since these consequences are always catastrophically bad, flexibility is minimal, and since the subcircuits are all interconnected, the whole network partakes of the quality that there is only one way for things to work. And indeed the embryos of each species develop in only one way."
A Listener's Guide to the Meyer-Marshall Debate: Focus on the Origin of Information Question - Casey Luskin - December 4, 2013 Excerpt: "There is always an observable consequence if a dGRN (developmental gene regulatory network) subcircuit is interrupted. Since these consequences are always catastrophically bad, flexibility is minimal, and since the subcircuits are all interconnected, the whole network partakes of the quality that there is only one way for things to work. And indeed the embryos of each species develop in only one way." - Eric Davidson - developmental biologist http://www.evolutionnews.org/2013/12/a_listeners_gui079811.html Stephen Meyer - Responding to Critics: Marshall, Part 2 (developmental Gene Regulatory Networks) - video https://www.youtube.com/watch?v=Cg8Mhn2EKvQ Developmental gene regulatory networks—an insurmountable impediment to evolution - by Jeffrey P. Tomkins and Jerry Bergman - August 2018 Excerpt: As Davidson has documented, a dGRN that regulates body-plan development “is very impervious to change” and usually leads to “catastrophic loss of the body part or loss of viability altogether”.12 This observable consequence virtually always occurs if even one dGRN subcircuit is interrupted. Because most of these changes are always “catastrophically bad, flexibility is minimal, and since the subcircuits are all interconnected … there is only one way for things to work. And indeed the embryos of each species can develop in only one way.”12 In his book, Intelligent Design proponent Stephen Meyer noted that “Davidson’s work highlights a profound contradiction between the neo-Darwinian account of how new animal body plans are built and one of the most basic principles of engineering—the principle of constraints.”26 As a result, “the more functionally integrated a system is, the more difficult it is to change any part of it without damaging or destroying the system as a whole”.26 Because this system of gene regulation controls animal-body-plan development in such an exquisitely integrated fashion, any significant alterations in its gene regulatory networks inevitably damage or destroy the developing animal. This now-proven fact creates critical problems for the evolution of new animal body plans and the new dGRNs necessary to produce them, preventing gradual evolution via mutation and selection from a pre-existing body plan and set of dGRNs. Developmental biologists openly recognize these clear problems for the standard evolutionary synthesis. The problem as elaborated by Davidson, noted that neo-Darwinian evolution erroneously assumes that all microevolutionary processes equate to macroevolutionary mechanisms, thus producing the false conclusion that the “evolution of enzymes or flower colors can be used as current proxies for study of evolution of the body plan”.12 Typical evolutionary research programs involve studying genetic variation within a species or genus involving inter-fertile natural populations or populations from controlled crosses. From a developmental systems biology perspective, the genes or regulatory features involved in such variability lie at the peripheral nodes and do not explain novel body plans associated with macroevolution. 
Davidson notes that the standard evolutionary synthesis “erroneously assumes that change in protein-coding sequence is the basic cause of change in [the] developmental program; and it [also] erroneously assumes that evolutionary change in body-plan morphology occurs by a continuous process”.12 Davidson also aptly notes that “these assumptions are basically counterfactual” because the “neo-Darwinian synthesis from which these ideas stem was a pre-molecular biology concoction focused on population genetics and adaptation natural history”.12 Neo-Darwinism in any form does not provide a mechanistic means of changing the genomic regulatory systems that drive embryonic development of the body plan. Alternating the peripheral differentiation process associated with observable variability is an entirely different scenario from building a new form of animal life by changing the fundamental structure of a resilient dGRN.,,, Summary At the very core of the validity of models for macroevolution is how organisms develop. Any form of Darwinian evolution requires that new developmental adaptations arise via random mutations that somehow provide a novel advantageous selectable trait. Decades of developmental genetics research in a wide variety of organisms has documented in detail the fact that once an embryo begins to develop along a certain trajectory, mutations in top and mid-level transcription factor genes in the hierarchy model of regulation described by Davidson cause fatal catastrophe in the program. This mutation-intolerant obstacle poses a complete barrier for the modern Darwinian synthesis, the neutral model, and saltational evolution. Another important aspect of the developmental genetics paradigm is the paradox of conserved protein sequence among top-level transcription factors combined with their intolerance of mutation. It is quite a quandary for the evolutionist—extreme conservation of sequence would seem to support common descent yet lack of mutability negates the fundamental requirement of evolutionary change. An Intelligent Design model, however, would predict that common code serving a general common purpose would be found among unrelated engineered systems that were the work of the same Creator—exactly as we find in man-made systems. https://creation.com/developmental-gene-regulatory-networks
Thus in conclusion, PavelU once again, far from explaining how gene regulatory networks supposedly evolved, has in actuality referenced a paper that demonstrates the necessity of intelligent design, in that computer programmers had to intelligently design a computer model that 'cheated' by sneaking active information into the simulation. In short, PavelU, by offering a biased computer simulation as proof for Darwinian evolution, is basically admitting that gene regulatory networks (GRNs) are an unresolved dilemma for Darwinian explanations, an unresolved dilemma for which they have no real-time empirical evidence (else they would not have needed to intelligently design the 'biased' computer simulation in the first place in order to try to 'explain away' the dilemma of GRNs).
bornagain77
January 5, 2020 at 03:58 AM PDT
Aarceng, Good point. To think or not to think, that's the question. :)
pw
January 4, 2020 at 09:32 PM PDT
Millions of monkeys randomly typing would eventually produce the text of Hamlet. BUT, the monkeys wouldn't recognise it as anything significant and would just carry on typing. It requires a separate intelligent observer to recognise that the significant event has occurred and act on it.
aarceng
January 4, 2020 at 07:27 PM PDT
PavelU, Not so fast, buddy. You may want to read this: https://en.wikipedia.org/wiki/Dynamical_system https://en.wikipedia.org/wiki/Limit_cycle https://en.wikipedia.org/wiki/Attractor At best they can model some adaptive changes in a given GRN, but that won't help to explain any putative "macro-evolutionary" differences. Biological systems are much more complex than physical oscillators or man-made circuits. Try again.
OLV
January 4, 2020 at 05:08 PM PDT
Pav said:
The Danish ID proponent cited in this OP should change his mind after reading this recent paper that explains how the gene regulatory networks evolved: https://www.nature.com/articles/s41598-019-53251-w
Don't bother. It's just another Pav citation bluff.
Latemarch
January 4, 2020 at 04:50 PM PDT
The Danish ID proponent cited in this OP should change his mind after reading this recent paper that explains how the gene regulatory networks evolved: https://www.nature.com/articles/s41598-019-53251-w
PavelU
January 4, 2020 at 02:27 PM PDT
Why is math useful in describing complex biological systems? Classification-Based Inference of Dynamical Models of Gene Regulatory Networks
Cell-fate decisions during development are controlled by densely interconnected gene regulatory networks (GRNs) consisting of many genes. Inferring and predictively modeling these GRNs is crucial for understanding development and other physiological processes. Gene circuits, coupled differential equations that represent gene product synthesis with a switch-like function, provide a biologically realistic framework for modeling the time evolution of gene expression. However, their use has been limited to smaller networks due to the computational expense of inferring model parameters from gene expression data using global non-linear optimization.
Development is controlled by gene regulatory networks (GRNs) that integrate extrinsic signals and intrinsic cell state to make decisions about cell fate
In summary, we have exploited features of the mathematical structure of gene circuits to break a difficult optimization problem into a series of two, much simpler, optimization problems. We have demonstrated that FIGR is effective on synthetic as well as experimental data from a biologically realistic GRN. We have validated the inferred gap gene model by comparing its parameters against models inferred with SA as well as comparing its output against experimental data. The improvement in computational efficiency and scalability should allow the inference of much larger GRNs than was possible previously.
OLV
January 4, 2020 at 02:13 PM PDT
Organelles of the Cell  Cell Structure The Structure of DNA Cell Cycle Terms Cell Cycle Multi Scale Modeling of Chromatin and Nucleosomes
pw
January 4, 2020 at 11:22 AM PDT
a phone number :)))) HOLD ON ! IT IS NOT THAT SIMPLE !!!! i am sure that biologists wish it would be that simple.... BUT IT IS NOT !!! in biology, the problem is much deeper.... in biology, a single 'phone number' represents several layers of information... in biology - the cell is reading 'the phone number' backward and forward and always reaches the 'right person' and it is getting worse .... this 'phone number' is spliced in various ways ... this way, a single 'phone number' can be used to reach several 'persons' multiple layers of information in a single phone number by random ? :)))) it seems, that biologists - natural science graduates - believes in miracles ... over and over again.... Someone should use this phone number and call the doctor ...
martin_r
January 4, 2020 at 11:20 AM PDT
Selfish ID? :) The CDK Pef1 and Protein Phosphatase 4 oppose each other for regulating cohesin binding to fission yeast chromosomes
Cohesin has essential roles in chromosome structure, segregation and repair. Cohesin binding to chromosomes is catalyzed by the cohesin loader, Mis4 in fission yeast. How cells fine tune cohesin deposition is largely unknown. Here we provide evidence that Mis4 activity is regulated by phosphorylation of its cohesin substrate. A genetic screen for negative regulators of Mis4 yielded a CDK called Pef1, whose closest human homologue is CDK5. Inhibition of Pef1 kinase activity rescued cohesin loader deficiencies. In an otherwise wild-type background, Pef1 ablation stimulated cohesin binding to its regular sites along chromosomes while ablating Protein Phosphatase 4 had the opposite effect. Pef1 and PP4 control the phosphorylation state of the cohesin kleisin Rad21. The CDK phosphorylates Rad21 on Threonine 262. Pef1 ablation, non phosphorylatable Rad21-T262 or mutations within a Rad21 binding domain of Mis4 alleviated the effect of PP4 deficiency. Such a CDK/PP4 based regulation of cohesin loader activity could provide an efficient mechanism for translating cellular cues into a fast and accurate cohesin response.
Cohesin is involved in a wide range of cellular functions at all stages of the cell cycle, implying a tight control by the cell machinery. The data presented here provide evidence that the CDK Pef1 and PP4 are part of this regulatory network.
Understanding how Pef1 regulates cohesin binding to DNA will require further knowledge of the biochemical activities of both cohesin and its loader.
OLV
January 4, 2020 at 10:25 AM PDT
ID exuberance ? :) DNA sequence encodes the position of DNA supercoils
The three-dimensional organization of DNA is increasingly understood to play a decisive role in vital cellular processes. Many studies focus on the role of DNA-packaging proteins, crowding, and confinement in arranging chromatin, but structural information might also be directly encoded in bare DNA itself.
the DNA sequence directly encodes the structure of supercoiled DNA by pinning plectonemes at specific sequences.
sequence-dependent intrinsic curvature is the key determinant of pinning strength
Analysis of several prokaryotic genomes indicates that plectonemes localize directly upstream of promoters
Our findings reveal a hidden code in the genome that helps to spatially organize the chromosomal DNA.
Control of DNA supercoiling is of vital importance to cells. Torsional strain imposed by DNA-processing enzymes induces supercoiling of DNA, which triggers large structural rearrangements through the formation of plectonemes
Recent biochemical studies suggest that supercoiling plays an important role in the regulation of gene expression in both prokaryotes and eukaryotes
In order to tailor the degree of supercoiling around specific genes, chromatin is organized into independent topological domains with varying degrees of torsional strain
Domains that contain highly transcribed genes are generally underwound whereas inactive genes are overwound
Furthermore, transcription of a gene transiently alters the local supercoiling, while, in turn, torsional strain influences the rate of transcription
For many years, the effect of DNA supercoiling on various cellular processes has mainly been understood as a torsional stress that enzymes should overcome or exploit for their function. More recently, supercoiling has been acknowledged as a key component of the spatial architecture of the genome
Here, bound proteins are typically viewed as the primary determinant of sequence-specific tertiary structures while intrinsic mechanical features of the DNA are often ignored. However, the DNA sequence influences its local mechanical properties such as bending stiffness, curvature, and duplex stability, which in turn alter the energetics of plectoneme formation at specific sequences
Our findings reveal how a previously unrecognized ‘hidden code’ of intrinsic curvature governs the localization of local DNA supercoils, and hence the organization of the three-dimensional structure of the genome.
A full statistical mechanical modeling of the plectonemic structures distributed across the DNA molecule should further improve the predictive power and accuracy, but will require significant computational resources and time.
Significant intrinsic curvatures are encoded in genomic DNA, as evident in our scans of both prokaryotic and eukaryotic genomes, which indicates its biological relevance.
In addition to this direct interaction of RNA polymerase and curved DNA, our results suggest an indirect effect, as the same curved DNA can easily pin a plectoneme that can further regulate the transcription initiation and elongation by structural re-arrangement of the promotor and coding regions.
promoter sequences have evolved local regions with highly curved DNA that promote the localization of DNA plectonemes at these sites. There may be multiple reasons for this. For one, it may help to expose these DNA regions to the outer edge of the dense nucleoid, making them accessible to RNAP, transcription factors, and topoisomerases. Plectonemes may also play a role in the bursting dynamics of gene expression, since each RNAP alters the supercoiling density within a topological domain as it transcribes, adding or removing nearby plectonemes In addition, by bringing distant regions of DNA close together, plectonemes may influence specific promoter-enhancer interactions to regulate gene expression Finally, plectoneme tips may help RNA polymerase to initiate transcription, since the formation of an open complex also requires bending of the DNA, a mechanism that was proposed as a universal method of regulating gene expression across all organisms
Our analysis of eukaryotic genomes showed a greater diversity of behavior. The spacing of the peaks suggests that plectonemes may play a role in positioning nucleosomes, consistent with proposals that nucleosome positioning may rely on sequence-dependent signals near promoters
The plectoneme signal encoded by intrinsic curvature could therefore indirectly position the promoter plectoneme tip by helping to organize these nearby nucleosomes.
The above findings demonstrate that DNA contains a previously hidden ‘code’ that determines the local intrinsic curvature and consequently governs the locations of plectonemes. These plectonemes can organize DNA within topological domains, providing fine-scale control of the three-dimensional structure of the genome
it will be interesting to explore how changes in this plectoneme code affect levels of gene expression and other vital cellular processes.
OLV
January 4, 2020 at 09:59 AM PDT
ID showoff? :) Wavelet-Based Genomic Signal Processing for Centromere Identification and Hypothesis Generation
Various ‘omics data types have been generated for Populus trichocarpa, each providing a layer of information which can be represented as a density signal across a chromosome.  We make use of genome sequence data, variants data across a population as well as methylation data across 10 different tissues, combined with wavelet-based signal processing to perform a comprehensive analysis of the signature of the centromere in these different data signals, and successfully identify putative centromeric regions in P. trichocarpa from these signals. Furthermore, using SNP (single nucleotide polymorphism) correlations across a natural population of P. trichocarpa, we find evidence for the co-evolution of the centromeric histone CENH3 with the sequence of the newly identified centromeric regions, and identify a new CENH3 candidate in P. trichocarpa.
Integrating data from multiple different sources is a task which is becoming more prevalent with the increased availability of systems biology data from high-throughput ‘omics technologies and phenotyping strategies (Gomez-Cabrero et al., 2014). Developing statistical and mathematical approaches to integrate this data in order to provide an increased understanding of the biological system is thus an important endeavor.
Chromosomal features including SNPs, genes, genome gaps and DNA methylation plotted as density signals across a chromosome result in signals that vary along the length of the chromosome
Identification of approximate centromere locations from gene density, SNP density and methylation wavelet landscapes requires knowledge of what patterns to look for.
Though centromeric/pericentromeric regions as a whole are highly methylated, it has been found in Maize that the active centromere consists of repeats associated with CENH3 (the modified histone found in the active centromere) and is usually less methylated when compared to the pericentromeric regions
The wavelet-based centromere identification through the use of multiple lines of evidence allows us to be more certain of centromeric regions, and also allows more specific locations to be identified than can be done by simply looking at repeat density, which map to broad regions of the genome. Layering multiple data types allows for the identification of putative centromere positions based on multiple lines of evidence, and thus, allows one to be more certain of their location.
The histone CENH3 epigenetically defines centromere position, and replaces normal histone H3 in the nucleosomes at the centromere
This study illustrates how through integrating multiple sources of data, one can arrive at a more comprehensive understanding of the system one is investigating. 
OLV
January 4, 2020 at 09:10 AM PDT
complex functional information Data Integration in Poplar: ‘Omics Layers and Integration Strategies
This research has generated several large ‘omics datasets. This review will summarize various ‘omics data layers network and signal processing techniques for the integration and analysis of these data types will be discussed
network and signal processing approaches to representing, analyzing, and integrating multiple ‘omics data layers
Different ‘omics layers each provide information on a different aspect of a complex biological system
 large-scale ‘omics data types, multi-omics studies, as well as network-based analysis/integration techniques and wavelet-based multi-scale analysis and comparisons
Integrated analysis of various ‘omics data layers will expand the system-wide knowledge
OLV
January 4, 2020 at 08:32 AM PDT
It's interesting -- significant, I'd say -- that one of the definitions of the Greek "logos" is "story," which is not really a definition of the English "word" except idiomatically (e.g. "What's the good word?" and "Is there word yet?"). Just as the debut of Big Bang cosmology gave thinkers pause at Genesis 1:1, surely the realization of Information Technology will be -- has been and is -- giving thinkers pause at John 1:1. Despite its defects, strictly speaking, as a translation, imagining the in-depth and inescapable rhetorical and linguistic impact inherent in translating "logos" as "story" there, we might get more fully the impact the word's use would have on a native speaker of Koine Greek: "In the beginning was the story, and the story was with God, and the story was God." No story = no beginning, no cosmos, no life, no man, no God.
jstanley01
January 4, 2020 at 05:49 AM PDT
To go a bit further, classical information is shown to be a subset of quantum information by the following method. Specifically, in the following 2011 paper, researchers ,,, show that when the bits (in a computer) to be deleted are quantum-mechanically entangled with the state of an observer, then the observer could even withdraw heat from the system while deleting the bits. Entanglement links the observer's state to that of the computer in such a way that they know more about the memory than is possible in classical physics.,,, In measuring entropy, one should bear in mind that (in quantum information theory) an object does not have a certain amount of entropy per se, instead an object's entropy is always dependent on the observer.
Quantum knowledge cools computers: New understanding of entropy - June 1, 2011 Excerpt: Recent research by a team of physicists,,, describe,,, how the deletion of data, under certain conditions, can create a cooling effect instead of generating heat. The cooling effect appears when the strange quantum phenomenon of entanglement is invoked.,,, The new study revisits Landauer's principle for cases when the values of the bits to be deleted may be known. When the memory content is known, it should be possible to delete the bits in such a manner that it is theoretically possible to re-create them. It has previously been shown that such reversible deletion would generate no heat. In the new paper, the researchers go a step further. They show that when the bits to be deleted are quantum-mechanically entangled with the state of an observer, then the observer could even withdraw heat from the system while deleting the bits. Entanglement links the observer's state to that of the computer in such a way that they know more about the memory than is possible in classical physics.,,, In measuring entropy, one should bear in mind that an object does not have a certain amount of entropy per se, instead an object's entropy is always dependent on the observer. Applied to the example of deleting data, this means that if two individuals delete data in a memory and one has more knowledge of this data, she perceives the memory to have lower entropy and can then delete the memory using less energy.,,, No heat, even a cooling effect; In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that "more than complete knowledge" from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, "This doesn't mean that we can develop a perpetual motion machine." The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what's known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says "We're working on the edge of the second law. If you go any further, you will break it." http://www.sciencedaily.com/releases/2011/06/110601134300.htm
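(To put a rough number on the Landauer limit discussed in the excerpt above, here is a minimal back-of-the-envelope sketch; the 300 K temperature is simply an assumed room-temperature figure.)

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum heat dissipated by irreversibly erasing one bit (Landauer's principle)."""
    return K_B * temperature_kelvin * math.log(2)

# At an assumed room temperature of 300 K:
print(f"{landauer_limit_joules(300.0):.3e} J per bit erased")  # ~2.871e-21 J
```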
And as the following 2017 article states: James Clerk Maxwell (said), “The idea of dissipation of energy depends on the extent of our knowledge.”,,, quantum information theory,,, describes the spread of information through quantum systems.,,, Fifteen years ago, “we thought of entropy as a property of a thermodynamic system,” he said. “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”,,,
The Quantum Thermodynamics Revolution – May 2017 Excerpt: the 19th-century physicist James Clerk Maxwell put it, “The idea of dissipation of energy depends on the extent of our knowledge.” In recent years, a revolutionary understanding of thermodynamics has emerged that explains this subjectivity using quantum information theory — “a toddler among physical theories,” as del Rio and co-authors put it, that describes the spread of information through quantum systems. Just as thermodynamics initially grew out of trying to improve steam engines, today’s thermodynamicists are mulling over the workings of quantum machines. Shrinking technology — a single-ion engine and three-atom fridge were both experimentally realized for the first time within the past year — is forcing them to extend thermodynamics to the quantum realm, where notions like temperature and work lose their usual meanings, and the classical laws don’t necessarily apply. They’ve found new, quantum versions of the laws that scale up to the originals. Rewriting the theory from the bottom up has led experts to recast its basic concepts in terms of its subjective nature, and to unravel the deep and often surprising relationship between energy and information — the abstract 1s and 0s by which physical states are distinguished and knowledge is measured.,,, Renato Renner, a professor at ETH Zurich in Switzerland, described this as a radical shift in perspective. Fifteen years ago, “we thought of entropy as a property of a thermodynamic system,” he said. “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”,,, https://www.quantamagazine.org/quantum-thermodynamics-revolution/
Again to repeat that last sentence,“Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”,,, That statement about entropy being a property of an observer who describes the system, for anyone involved in the ID vs. Darwinism debate, ought to send chills down their scientific spine. These experiments completely blow the reductive materialistic presuppositions of Darwinian evolution, (presuppositions about information being merely ’emergent’ from some material basis), out of the water, and tie the creation of information in an organism directly to the knowledge of the ‘observer’ in quantum theory. In other words, contrary to the reductive materialistic presuppositions of Darwinists, information, particularly this ‘thermodynamic positional information’, is now experimentally shown, via quantum information theory, to be its own distinct physical entity that, although it can interact with matter and energy, is its own independent, ‘non-local’ beyond space and time, entity that is separate from matter and energy. On top of all that, this ‘thermodynamic positional information’ is found, via quantum information theory, to be “a property of an observer who describes a system.” In other words, Intelligent Design, and a direct inference to God as the Intelligence behind life, (via the non-locality of quantum information and/or the non-locality of quantum entanglement ), has, for all intents and purposes, finally achieved experimental confirmation. One final note. The implication for us personally of finding ‘non-local’, beyond space and time, and ‘conserved’, quantum information in molecular biology on such a massive scale, in every important biomolecule in our bodies, is fairly, and pleasantly, obvious. That pleasant implication is, of course, the fact that we now have very strong empirical evidence suggesting that we do indeed have an eternal soul that is capable of living beyond the death of our material bodies. As Stuart Hameroff states in the following article, the quantum information,,, isn’t destroyed. It can’t be destroyed.,,, it’s possible that this quantum information can exist outside the body. Perhaps indefinitely as a soul.”
Leading Scientists Say Consciousness Cannot Die It Goes Back To The Universe – Oct. 19, 2017 – Spiritual Excerpt: “Let’s say the heart stops beating. The blood stops flowing. The microtubules lose their quantum state. But the quantum information, which is in the microtubules, isn’t destroyed. It can’t be destroyed. It just distributes and dissipates to the universe at large. If a patient is resuscitated, revived, this quantum information can go back into the microtubules and the patient says, “I had a near death experience. I saw a white light. I saw a tunnel. I saw my dead relatives.,,” Now if they’re not revived and the patient dies, then it’s possible that this quantum information can exist outside the body. Perhaps indefinitely as a soul.” – Stuart Hameroff – Quantum Entangled Consciousness – Life After Death – video (5:00 minute mark) https://www.disclose.tv/leading-scientists-say-consciousness-cannot-die-it-goes-back-to-the-universe-315604
Verse:
Mark 8:37 Is anything worth more than your soul?
supplemental notes:
Darwinism vs Biological Form - video https://www.youtube.com/watch?v=JyNzNPgjM4w How Quantum Mechanics and Consciousness Correlate – video (how consciousness, quantum information theory, and molecular biology correlate – 27 minute mark) https://youtu.be/4f0hL3Nrdas?t=1634 August 2019 - Experimentally tying God into the origin of life via quantum information theory https://uncommondescent.com/intelligent-design/if-you-can-reproduce-how-life-got-started-10-million-is-yours/#comment-681958 Darwinian Materialism vs. Quantum Biology – Part II - video https://www.youtube.com/watch?v=oSig2CsjKbg
bornagain77
January 4, 2020 at 04:32 AM PDT
as to:
The specific sequences that carry information, phone numbers, for example, can only be defined as information in relation to the whole system of which they are a part.
This is very similar to the argument that was put forth by Wiker & Witt in their book “A Meaningful World”. Specifically, "The whole is required to give meaning to the part."
A Meaningful World: How the Arts and Sciences Reveal the Genius of Nature – Book Review Excerpt: They focus instead on what “Methinks it is like a weasel” really means. In isolation, in fact, it means almost nothing. Who said it? Why? What does the “it” refer to? What does it reveal about the characters? How does it advance the plot? In the context of the entire play, and of Elizabethan culture, this brief line takes on significance of surprising depth. The whole is required to give meaning to the part. http://www.thinkingchristian.net/C228303755/E20060821202417/
In short, reductive materialism can never provide an explanation for overall 'context':
con·text noun the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed.
Pastor Joe Boot puts the irresolvable dilemma for reductive materialists as such,
“If you have no God, then you have no design plan for the universe. You have no prexisting structure to the universe.,, As the ancient Greeks held, like Democritus and others, the universe is flux. It’s just matter in motion. Now on that basis all you are confronted with is innumerable brute facts that are unrelated pieces of data. They have no meaningful connection to each other because there is no overall structure. There’s no design plan. It’s like my kids do ‘join the dots’ puzzles. It’s just dots, but when you join the dots there is a structure, and a picture emerges. Well, the atheists is without that (final picture). There is no preestablished pattern (to connect the facts given atheism).” Pastor Joe Boot – quote taken from 13:20 minute mark of the following video: Defending the Christian Faith – Pastor Joe Boot – video http://www.youtube.com/watch?v=wqE5_ZOAnKo
This irresolvable dilemma of 'context' for reductive materialists also plays out in mathematics. Specifically, Kurt Gödel's incompleteness theorem for mathematics can be succinctly stated as such, "Anything you can draw a circle around cannot explain itself without referring to something outside the circle—something you have to assume but cannot prove”.
"Gödel's incompleteness theorem (1931), proves that there are limits to what can be ascertained by mathematics. Kurt Gödel halted the achievement of a unifying all-encompassing theory of everything in his theorem that: “Anything you can draw a circle around cannot explain itself without referring to something outside the circle—something you have to assume but cannot prove”." Stephen Hawking & Leonard Miodinow, The Grand Design (2010)
Kurt Gödel also stated this,
“In materialism all elements behave the same. It is mysterious to think of them as spread out and automatically united. For something to be a whole, it has to have an additional object, say, a soul or a mind.” Kurt Gödel – Hao Wang’s supplemental biography of Gödel, A Logical Journey, MIT Press, 1996.
Moreover, Kurt Gödel's incompleteness theorem is not just some abstract mathematical and/or philosophical proof, but Kurt Gödel's incompleteness theorem has now been extended to physics. Specifically, as the following article states, "even a perfect and complete description of the microscopic properties of a material is not enough to predict its macroscopic behaviour.,,,"
Quantum physics problem proved unsolvable: Gödel and Turing enter quantum physics – December 9, 2015 Excerpt: A mathematical problem underlying fundamental questions in particle and quantum physics is provably unsolvable,,, It is the first major problem in physics for which such a fundamental limitation could be proven. The findings are important because they show that even a perfect and complete description of the microscopic properties of a material is not enough to predict its macroscopic behaviour.,,, “We knew about the possibility of problems that are undecidable in principle since the works of Turing and Gödel in the 1930s,” added Co-author Professor Michael Wolf from Technical University of Munich. “So far, however, this only concerned the very abstract corners of theoretical computer science and mathematical logic. No one had seriously contemplated this as a possibility right in the heart of theoretical physics before. But our results change this picture. From a more philosophical perspective, they also challenge the reductionists’ point of view, as the insurmountable difficulty lies precisely in the derivation of macroscopic properties from a microscopic description.” http://phys.org/news/2015-12-quantum-physics-problem-unsolvable-godel.html
In biology, Gödel's incompleteness theorem, and this irresolvable dilemma that 'context' presents to reductive materialists, plays out in that reductive materialists have no clue why the billions of trillions of protein molecules of any particular organism may cohere as a single unified whole "precisely for a lifetime, and not a moment longer?"
The Unbearable Wholeness of Beings - Stephen L. Talbott - 2010 Excerpt: Virtually the same collection of molecules exists in the canine cells during the moments immediately before and after death. But after the fateful transition no one will any longer think of genes as being regulated, nor will anyone refer to normal or proper chromosome functioning. No molecules will be said to guide other molecules to specific targets, and no molecules will be carrying signals, which is just as well because there will be no structures recognizing signals. Code, information, and communication, in their biological sense, will have disappeared from the scientist’s vocabulary. ,,, the question, rather, is why things don’t fall completely apart — as they do, in fact, at the moment of death. What power holds off that moment — precisely for a lifetime, and not a moment longer? Despite the countless processes going on in the cell, and despite the fact that each process might be expected to “go its own way” according to the myriad factors impinging on it from all directions, the actual result is quite different. Rather than becoming progressively disordered in their mutual relations (as indeed happens after death, when the whole dissolves into separate fragments), the processes hold together in a larger unity. http://www.thenewatlantis.com/publications/the-unbearable-wholeness-of-beings
Of course the Christian Theist has a ready answer to the question of "What power holds off that moment — precisely for a lifetime, and not a moment longer?" That answer is, of course, that it is the soul that must be providing the singular overall context to the billions of trillions of protein molecules in an organism, and this gives us a coherent explanation as to exactly "why things don’t fall completely apart — as they do, in fact, at the moment of death." Moreover, the Christian Theist does not have to rely only on the 'necessity of context' in order to argue for the reality of the soul; due to advances in quantum biology and quantum information theory, the Christian Theist can now also appeal directly to empirical evidence to support his foundational belief in a soul. Specifically, quantum coherence and/or quantum entanglement is now found to be ubiquitous within life. As the title of the following paper states, "Quantum criticality in a wide range of important biomolecules"
Quantum criticality in a wide range of important biomolecules – Mar. 6, 2015 Excerpt: “Most of the molecules taking part actively in biochemical processes are tuned exactly to the transition point and are critical conductors,” they say. That’s a discovery that is as important as it is unexpected. “These findings suggest an entirely new and universal mechanism of conductance in biology very different from the one used in electrical circuits.” The permutations of possible energy levels of biomolecules is huge so the possibility of finding even one (biomolecule) that is in the quantum critical state by accident is mind-bogglingly small and, to all intents and purposes, impossible.,, of the order of 10^-50 of possible small biomolecules and even less for proteins,”,,, “what exactly is the advantage that criticality confers?” https://medium.com/the-physics-arxiv-blog/the-origin-of-life-and-the-hidden-role-of-quantum-criticality-ca4707924552
As well, at 24:00 minute mark of the following video Dr Rieper remarks that practically the whole DNA molecule can be viewed as quantum information with classical information embedded within it.
"What happens is this classical information (of DNA) is embedded, sandwiched, into the quantum information (of DNA). And most likely this classical information is never accessed because it is inside all the quantum information. You can only access the quantum information or the electron clouds and the protons. So mathematically you can describe that as a quantum/classical state." Elisabeth Rieper – Classical and Quantum Information in DNA – video (Longitudinal Quantum Information resides along the entire length of DNA discussed at the 19:30 minute mark; at 24:00 minute mark Dr Rieper remarks that practically the whole DNA molecule can be viewed as quantum information with classical information embedded within it) https://youtu.be/2nqHOnVTxJE?t=1176
What is so devastating to Darwinian presuppositions with the finding of pervasive quantum coherence and/or quantum entanglement within molecular biology is that quantum coherence and/or quantum entanglement is a non-local, beyond space and time, effect that requires a beyond space and time cause in order to explain its existence. As the following paper entitled "Looking beyond space and time to cope with quantum theory" stated, "Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,"
Looking beyond space and time to cope with quantum theory – 29 October 2012 Excerpt: “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,” http://www.quantumlah.org/highlight/121029_hidden_influences.php
Moreover, as the following study found, the greater the number of particles in a quantum hypergraph state, (which is exactly the type of quantum coherence that we have with protein and DNA molecules), the more strongly it violates local realism, with the strength increasing exponentially with the number of particles.
Physicists find extreme violation of local realism in quantum hypergraph states - Lisa Zyga - March 4, 2016 Excerpt: Many quantum technologies rely on quantum states that violate local realism, which means that they either violate locality (such as when entangled particles influence each other from far away) or realism (the assumption that quantum states have well-defined properties, independent of measurement), or possibly both. Violation of local realism is one of the many counterintuitive, yet experimentally supported, characteristics of the quantum world. Determining whether or not multiparticle quantum states violate local realism can be challenging. Now in a new paper, physicists have shown that a large family of multiparticle quantum states called hypergraph states violates local realism in many ways. The results suggest that these states may serve as useful resources for quantum technologies, such as quantum computers and detecting gravitational waves.,,, The physicists also showed that the greater the number of particles in a quantum hypergraph state, the more strongly it violates local realism, with the strength increasing exponentially with the number of particles. In addition, even if a quantum hypergraph state loses one of its particles, it continues to violate local realism. This robustness to particle loss is in stark contrast to other types of quantum states, which no longer violate local realism if they lose a particle. This property is particularly appealing for applications, since it might allow for more noise in experiments. http://phys.org/news/2016-03-physicists-extreme-violation-local-realism.html
Darwinists, with their reductive materialistic framework, simply have no beyond space and time cause that they can appeal to so as to be able to explain the non-local quantum coherence and/or entanglement of the cell or of an entire organism in general. Whereas Christians readily do have a beyond space and time cause that they can appeal to. As Colossians 1:17 states, "He is before all things, and in him all things hold together."
Colossians 1:17 He is before all things, and in him all things hold together.
bornagain77
January 4, 2020 at 04:31 AM PDT
EDTA, It seems to be just a reshaping of ideas we're already familiar with. Another way to explain or illustrate it.
PeterA
January 3, 2020 at 11:43 PM PDT
Pultz is defining a particular context (that of a logos and then a mechanism to realize the logos, which then needs the information), and then saying that his particular type of information is the kind we should concern ourselves with. Let's call it p-info ("p" for Pultz). Then he's saying that p-info does not come about spontaneously, in the absence of a logos. The first criticism we might find of this idea is that information certainly exists in other forms. And then it will be argued that those other forms can indeed come about via randomness. But the person arguing for that is probably thinking of a different type of information. The next challenge for this p-info idea is to find a way to argue that it cannot come about via randomness--not even via some subtle bootstrapping means. One approach would be to argue that the von Neumann universal constructor mechanism is the simplest possible, and that to realize one with atoms takes more probabilistic resources than our universe can muster. Therefore, p-info isn't bootstrappable. But we already knew this. So is this idea truly new, or is it just a reshaping of ideas we're already familiar with?
EDTA
January 3, 2020 at 07:19 PM PDT
Code-sender-medium-receiver-translation-confirmation-feedback. "Choice" is definitely an aspect. It is mistaken to think of "information" merely as the code. It's an informational system or circuit.
Silver Asiatic
January 3, 2020 at 01:29 PM PDT
Exit Evolution - I'm sad to see there is no English-language edition yet ...
Silver Asiatic
January 3, 2020 at 01:26 PM PDT
