
Eric Holloway: ID as a bridge between Francis Bacon and Thomas Aquinas


Eric Holloway, an electrical and computer engineer, offers some thoughts on how to prevent science from devolving into “scientism.” For an example of scientism, see Peter Atkins’s claim that science can answer all the Big Questions. Here’s John Mark Reynolds’s outline of the general problem:

Sometimes a culture takes a right road, sometimes it passes the right way and ends up a bit lost. Western Europe had a chance at the start of seventeenth century to get a few things right, but by the eighteenth century most had taken a worse way: Enlightenment or reaction. Enlightenment lost the wisdom of the Middle Ages, creating the myth of a dark age, and the main enlightened nation, France, ended the seventeenth century in butchery and dictatorship. Instead of the development of an urbane Spain Cervantes might have prefigured, there was a mere reaction away from the new ideas, including the good ones. More.

Intelligent Design: The Bridge Between Baconian Science and Thomistic Philosophy

Imagine giving your friend a good book filled with beautiful pictures and stories. Instead of reading it, the friend begins to count the letters, theorize about which letters predict which pictures will come next, and analyze the types of ink used to print the pages. This does not make sense. Why doesn’t he just read the book? The reason, he claims, is that we do not want to bias ourselves by assuming the ink was arranged purposefully.

Thomas Aquinas

This story illustrates the difference in perspective between the medieval age and our modern scientific age. The medieval worldview was shaped by the voluminous philosophy of Thomas Aquinas (1224/6–1274). On that view, God is ultimate existence, and creation is ordered towards maximizing its existence in God. As such, there is a natural law that must be followed for humankind to flourish; deviation from the natural law results in cessation of existence and death. Because the human mind can rationally grasp changeless principles, the medievals thought there was something changeless and immortal about the human soul. Since other physical creatures lack this rational ability, they exist to a less perfect degree than human beings. This means that all humans inherently have a higher worth than the rest of physical creation, and at the same time all humans are equal, since it is of the nature of humankind to be rational, even if particular humans are incapable of rational thought.
But the intricate medieval tapestry begins to unravel. An expanding view of the globe, major diseases and wars, and internal criticisms lead to a breakdown of the Thomistic system. Francis Bacon (1561–1626), a leading popularizer of what we consider modern science, grows impatient with the monks’ philosophizing and debating. Demanding results, Bacon recommends carefully dissecting nature’s mysteries to heal the world’s suffering, instead of wondering about the meaning of it all. And thus was born the modern scientific age, in which the perception of meaning is only a biased illusion and truth must be empirically measurable.

Today, Bacon’s view is the dominant view, so much so that we take it for granted. Science and technology have led to a revolution in health, wealth and material happiness throughout the world. In the space of a few centuries they have lifted the majority of the earth’s booming population out of poverty. The rigorous vision of Bacon, spoken with the precision of math, has given us the gift of the gods, but it has also resulted in unprecedented death and destruction, horrific human experimentation, mass enslavement, cultural disintegration, and in general left us with a sense that we have lost something of great value that we cannot find again. The core reason for the aimlessness is that the building blocks of science are inert. They are like Legos in a box. You cannot shake the box of Legos and expect a spaceship to fall out. In the same way, mathematical proof and physical evidence cannot explain their own reason for being. Science cannot explain meaning. At the same time, the very inability of science to speak for itself says something of interest.

Francis Bacon

In medieval language this missing meaning is called function. Function cannot emerge from atoms in motion. It cannot emerge from shaking the Lego box. This claim can be proven mathematically. In information theory, function is a kind of mutual information. Mutual information is subject to the law of information non-increase, which means mutual information and thus function cannot be created by natural processes. Thus, without an organizing force, matter is functionless and void, and there is no meaning.
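The information-theoretic step in this argument is, in its Shannon form, the data processing inequality: if Z is produced from Y by any process that does not consult X, then I(X;Z) cannot exceed I(X;Y). (Holloway frames his own argument in terms of algorithmic mutual information and Levin's proof, as the comments below discuss; the Shannon version is simply the easiest to check numerically.) Here is a minimal Python sketch of the inequality; the binary source and the noise levels are made up for illustration, and nothing in it speaks to biology one way or the other.

# Illustrative sketch of the data processing inequality: for a Markov
# chain X -> Y -> Z, the estimated mutual information I(X;Z) should not
# exceed I(X;Y), no matter how Y is further "processed" into Z.
import random
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Estimate I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

random.seed(0)
xs = [random.randint(0, 1) for _ in range(100_000)]        # source X
ys = [x if random.random() < 0.9 else 1 - x for x in xs]   # noisy copy Y
zs = [y if random.random() < 0.9 else 1 - y for y in ys]   # further-processed Z

print("I(X;Y) ~", round(mutual_information(list(zip(xs, ys))), 3))
print("I(X;Z) ~", round(mutual_information(list(zip(xs, zs))), 3))
# Expected: I(X;Z) <= I(X;Y). Whether this mathematical fact supports the
# biological conclusion drawn above is what the comment thread disputes.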

The fundamental insight of the intelligent design movement is that we can empirically differentiate function from accidental patterns created by natural processes. This means we can describe the Thomistic system with Baconian empirical precision if we really want to. Fortunately, humans seem to be pretty good at identifying function without huge amounts of empirical justification, unless they are university trained. The empirical detection of function is a new pair of glasses that corrects Bacon’s vision and helps us again follow the path that winds back through the medieval monasteries of Thomas Aquinas, with the mathematical and empirical rigor of science.

But, after hearing this, Bacon will say, “It all sounds quite nice, but how is it useful? Function doesn’t feed children or cure cancer.” The answer to Bacon’s question is illustrated by the story of the book at the beginning. If we approach the natural world as if it were arbitrarily put together, then we miss many clues that could help us understand and use it better.

We are seeing the scientific importance of empirically detecting function now with the ENCODE project. Previously, scientists believed that since the human genome was produced by evolution, most of it would be random and functionless. However, the ENCODE project has shown the majority of the human genome is functional. Now that we understand the genome is mostly functional, we will be better able to decode how it works and programs our body. So, contrary to Bacon, being able to detect function in the human genome can help us improve our lives.

This raises a further question: how would science change if we broadened our detection of function to the rest of the world? Since things work better if they follow their function, does this mean there is a proper order for human flourishing, as the medievals believed? Furthermore, what does science have to say about the creators of function, such as humans? Since matter cannot create function, function creators cannot be reduced to matter. And being more than matter, human beings must be more valuable than any material good. While it is true we cannot go from is to ought, intelligent design does provide a scientific basis for human ontological and pragmatic worth, as well as justifying a natural law that must be followed in order for humanity to prosper. So, through the lens of intelligent design, science can indeed talk about the metaphysical realm of value and morals and explain the medieval worldview of function in the empirical language of modern science.

Note: This post also appeared at Patheos (August 30, 2018)

See also: Could one single machine invent everything? (Eric Holloway)

and

Renowned chemist on why only science can answer the Big Questions (Of course, he defines the Big Questions as precisely the ones science can answer, dismissing the others as not worth bothering with.)

Comments
This gives quantitative meaning to “uncertainty”: it is the number of yes/no questions it takes to guess a random variable, given knowledge of the underlying distribution and taking the optimal question-asking strategy . . . .
Note it is not merely counting yes/no questions, but also knowing the underlying distribution and using the optimal question-asking strategy.Mung
September 17, 2018 at 01:40 PM PDT
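The guessing-game picture Mung highlights can be made concrete: an optimal question-asking strategy is the same thing as an optimal binary prefix code, so the expected number of yes/no questions equals the expected Huffman codeword length, which always lies between H(X) and H(X) + 1. A small Python sketch, where the only assumption is the made-up distribution:

# Checks that the expected number of yes/no questions under an optimal
# (Huffman) questioning strategy lies between H(X) and H(X) + 1.
import heapq
from math import log2

p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}   # made-up distribution

entropy = -sum(q * log2(q) for q in p.values())

# Build a Huffman tree; each outcome's depth is the number of yes/no
# questions needed to pin it down.
heap = [(q, i, {sym: 0}) for i, (sym, q) in enumerate(p.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    q1, _, d1 = heapq.heappop(heap)
    q2, _, d2 = heapq.heappop(heap)
    counter += 1
    merged = {sym: depth + 1 for sym, depth in {**d1, **d2}.items()}
    heapq.heappush(heap, (q1 + q2, counter, merged))
depths = heap[0][2]

expected_questions = sum(p[sym] * depths[sym] for sym in p)
print(f"H(X) = {entropy:.3f} bits")
print(f"expected yes/no questions = {expected_questions:.3f}")
# For this dyadic distribution the two coincide exactly at 1.750; for
# non-dyadic distributions the question count falls strictly between
# H(X) and H(X) + 1.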
Thanks for all the feedback. The basic idea is not mine, but Salvador Cordova's. Basically, bad mutations can spell the end of an organism. As genomes grow in length the probability of a deadly mutation grows exponentially. So, the question is how does evolution cause genomes to grow? The simulations answer this question, and show we have to make some very unrealistic assumptions about evolution for it to cause genomes to grow. As I mentioned before, this applies to any form of evolution. To return to the topic of the article, since we can talk about these sorts of things empirically, this opens the door to Thomism.EricMH
September 13, 2018 at 10:23 AM PDT
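The arithmetic behind the point EricMH makes above is short. If each of an organism's L sites independently suffers a fatal corruption with probability p, the chance of escaping every hit is (1 - p)^L, which shrinks geometrically as L grows (equivalently, the chance of at least one fatal hit climbs toward 1). A quick illustrative calculation, using p = 0.01, the corruption probability mentioned later in the thread:

# Purely illustrative: with per-site corruption probability p, the chance
# an organism of length L escapes every fatal hit is (1 - p)**L.
p = 0.01  # per-site corruption probability, the value quoted later in the thread
for L in (10, 100, 1_000, 10_000):
    survival = (1 - p) ** L
    print(f"L = {L:>6}: P(no corruption) = {survival:.4f}")
# L =     10: P(no corruption) = 0.9044
# L =    100: P(no corruption) = 0.3660
# L =   1000: P(no corruption) = 0.0000   (about 4.3e-05)
# L =  10000: P(no corruption) = 0.0000   (about 2.2e-44)

By itself this says nothing about whether real genomes behave like this; that is the question the rest of the thread argues over.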
JDK @122, I ran out of time to edit my previous entry. You stated...
When we build a mathematical/logical model, we have to test the model against real-world, empirical results to see if it’s a good model. If the model gives us nothing we can test, or it gives testable results but they don’t agree with reality, we have to revise our model. Of course our models are intelligently designed (some more intelligently than others, I imagine), but how else can we study the world if we don’t use our intelligence?
First, no one is arguing about models needing to be intelligently designed. But do you not see any irony of your statements? That the models are built upon assumptions of macro-evolutionary events? And at best this is an inference? Not evidence? Claiming a "real-world" by unguided means is an assumption, not a fact. And that it takes intelligence to model this assertion. Because the models must utilize Intelligent Selection in order to create novel forms over time. This is not population genetics or models of variations among species and bacteria, or dogs and plants for example. I find this statement ironic... "...but how else can we study the world if we don’t use our intelligence?" We agree :) But the world you speak of is at best an Inference, and may not be real at all. In fact, there's a lot of story-telling in evolutionary science circles now since Darwinism was first promoted. All very informed, highly educated scientist proposing stories of the past. Those were all built upon assumptions and you might say highly informed, but still assumptions of blind, unguided events - random mutations - and natural selection. So building a model, inserting Intelligent Selection into the model to mold future events is still built upon assumptions. And as we see with the Epigenome and vast areas of new discovery now taking place, many assumptions were wrong. And the largest of them all might be "Junk" DNA. The more "Junk" found to have purpose, function and is transcribed, I think the farther away the "real-world" moves away from Darwinist assumptions in these models and towards Intelligent or Directed Evolution. I would hope you agree at least these models are using built-in assumptions of macro events in the past and at best are an Inference of the Darwinist framework and not evidence since they must use guided intelligent programs to model any aggregate formation of new forms. And that these models are not actually simulating what we know, but only what scientist think they know and assume. Yet still cannot get away from artificial selection by intelligent means. I think far from helping Darwinist models, it reinforces Intelligent and Guided Evolution models. There are natural areas of limited, protected and controlled randomness that makes complete sense in different life forms and in immune response systems. But again, I think the fall out of ENCODE, the continued discovery of functions within "Junk" DNA will be Darwinist undoing. As over time, a threshold might be reached where blind, random mutations do not have room or time to innovate macro events.DATCG
September 10, 2018 at 09:33 PM PDT
JDK @122, Guess we will agree to disagree. From everything I've read the attempts to model or simulation blind, unguided evolution include intelligent selection like Gpuccio stated. I remember maybe one was more true to random mutations and natural selection and showed no significant increase in novel forms of the kind that is required for diversity of life on earth. Cannot remember which study that was, been some time now. If you model or simulation an assumption it is likely you may recreate the assumption and not what is actually scientifically accurate depiction of evolution. But, designed or directed, or Prescribed Evolution is precisely what these studies, models and simulations did. So sorry, I think that scores one for Intelligent Design. Finally, I do wonder in the future, not so far from now how ENCODE ends up being the largest game changer of all. We are now finding many more functions across all areas of DNA in the Epigenome. As more functionality is discovered across more sections of previously written off areas of DNA, the noose tightens around the old "Junk" DNA claims, as well as on old assumptions. The very assumptions those models were built upon. It is why many scientist are openly questioning and searching for new answers to evolution. Answers they thought they already had discovered in the past, based upon inaccurate assumptions, largely drawn upon ignorance of DNA and not actual scientific research. Conclusions were drawn early, they are exposed now, and many Darwinist openly admitting there's a problem. They have been in fact wrong for decades, but today with the internet, blogs, and social media the word of these wrong conclusions and problems with neo-Darwinism are not in limited circles anymore. ENCODE is having a huge impact on how scientist are studying DNA, how we view diseases and mutations as well as how scientist themselves are now contemplating evolution. Dan Graur predicts 75% Junk DNA if I remember correctly. Yes, here's his prediction... https://evolutionnews.org/2017/07/dan-graur-anti-encode-crusader-is-back/ I'm not pretending to know how much he might be wrong, but my guess for now is it might be substantial as I keep reading different papers and sources on former regions of "Junk" DNA turning out to have functions. In fact, there are scientific databases full of functions being tracked today on these formerly unexplored regions that were long neglected by many. It's a new broad industry especially important for Regulatory Gene processing. Gpuccio covers many areas on this that include Epigenetic structures and functional protein complexes that depend on formerly named "Junk" regions. I think you might find them all very interesting. I am especially interested in Intronic regions once blown off as JUNK. Turns out they do have functionality - see the Spliceosome entry. And I think I posted recently in the Transcription post by Gpuccio on Intronic regions as well. Here's a post by Upright Biped with Gpuccio's last Post on Transcription linking all of Gpuccio's past post. Many have comments that include important functions once assumed to be Junk by Darwinist in the past. Post 165 by Upright Biped in Transcription Regulation Post by Gpuccio... Transcription regulation: a miracle of engineering As I currently see it, the more function we find, the less "chances" there are for random mutations and "time" to work for blind evolutionary macro events.DATCG
September 10, 2018 at 08:15 PM PDT
Erasmus, if you suddenly came across this, would you be able to hypothesize about its origin? Please answer 'yes' or 'no'.Eugene S
September 10, 2018 at 08:21 AM PDT
Genetic algorithms definitely use telic processes to actively search for solutions to the problems they were intelligently designed to solve. What we lack is a way to test natural selection being a designer mimic. Reality tells us that it isn't. So how do we proceed?ET
September 9, 2018 at 03:54 PM PDT
re 120: Thanks. DATCG. I just posted snippets of the prior thread to highlight some key things. One of the other points we made was that Eric's simulation just used the length of the string as the measure of "complexity" without any explanation as to why that would model the ability to survive and reproduce, so the only "selection" that occurs is if the organism (string) gets a "corrupt" bit (a deleterious mutation). So I agree with Gpuccio that modeling genetic change, changes in phenotype, and natural selection all in one simulation would be a very difficult task, and I also (who am no expert in this overall area) know of no model that does that. However, Gpuccio also says, "they are all examples of Intelligent selection, and therefore mean nothing." All simulations include intelligently designed decisions about how to proceed. I'm not sure that means much, unless we just don't want to count simulations as a legitimate tool for investigation. When we build a mathematical/logical model, we have to test the model against real-world, empirical results to see if it's a good model. If the model gives us nothing we can test, or it gives testable results but they don't agree with reality, we have to revise our model. Of course our models are intelligently designed (some more intelligently than others, I imagine), but how else can we study the world if we don't use our intelligence?jdk
September 9, 2018 at 10:36 AM PDT
DAT, I was clipping as testifying against interest, from Wikipedia. They in turn were referring to experts including G N Lewis. I note with concern from your link:
Fifteen studies from 1990 and 2002 were found. All 15 studies found that brain temperature was higher than all measures of core temperature with mean differences of 0.39 to 2.5 degrees C reported. Only three studies employed a t test to examine the differences; all found statistical significance. Temperatures greater than 38 degrees C were found in 11 studies. This review demonstrates that brain temperatures have been found to be higher than core temperatures; however, existing studies are limited by low sample sizes, limited statistical analysis, and inconsistent measures of brain and core temperatures.
Looks like the brain can begin to cook itself! This points to the significance of temperature regulation as a key body control process. KFkairosfocus
September 9, 2018 at 09:50 AM PDT
JDK @117, A quick glance, not all of Gpuccio's statements @166 were included. He goes on to say,
"But I would say the same thing of all computer simulations of evolution of which I am aware, because no computer simulation I am aware of even tries to simulate NS: they are all examples of Intelligent selection, and therefore mean nothing."
I agree with Gpuccio's quick summary on simulations of evolution in general. This includes Avida or examples like EV Ware as shown and discussed by Marks, et al. http://evoinfo.org/ev.html Out for rest of day. Have a good one.DATCG
September 9, 2018 at 09:06 AM PDT
JDK, Thanks, I'll let Eric respond as I catch up with it.DATCG
September 9, 2018 at 08:40 AM PDT
KF @114, You said or quoted,
For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . ."
Related? Increased blood pressure leads to vascular dementia (memory loss, confusion, etc.). On body temperature compared to brain temperature, a paper from 2004: https://www.ncbi.nlm.nih.gov/pubmed/14998103/ And from 2016: Measuring Entropy Change in a Human Physiological System, https://www.hindawi.com/journals/jther/2016/4932710/ Note how mechanical engineers are increasingly active in such areas of practical research and knowledge.DATCG
September 9, 2018 at 08:39 AM PDT
At 116, DATCG asks, "You stated Bob O'H had one objection you addressed. Was he satisfied after your modifications?" The thread in question is at https://uncommondescent.com/intelligent-design/how-some-materialists-are-blinded-by-their-faith-commitments/#comments. Eric addressed Bob's objection, but Bob didn't think much of it. See posts 87 and 93. However, Bob's objection was minor in comparison to the more fundamental flaws that were pointed out by JVL and me. I posted Eric's explanation of his code at #143. JVL responded at 150 and mentioned two key things.
Each organism is represented by a bitstring of length L. L is the organism’s complexity. How is complexity measured? Some ferns have a genome many times the size of humans. Each generation, the organism will create two offspring with lengths of L-1 and L+1. Name me one organism that fits this model. And, again, how do you measure complexity? Are you saying if I have two children then one will be more complex and the other will be less complex?
At 160, I added,
But, in addition to the obvious questions jvl asked (why is length a measure of complexity, what in the world is one child with L+1 and one with L-1 supposed to model, and why is corruption and instant death tied to one “bit flipping”), I’ll also note that the “corruption probability” is set to 0.01. What does this represent? I set p = 0.0001, and the size of the organism with the longest length (which doesn’t represent anything I can think of, but appears to be the number which the program claims shows that evolution is working or not) appears to get bigger and bigger, showing, presumably, that evolution works? This really all makes very little sense.
At 166, gpuccio said "OK, I don’t think I understand well what this “simulation” is meant to simulate, but for what I understand, as it is reported at #160, I would agree with jdk that it does not seem any realistic simulation of any model of evolution." And at 167, JVL wrote, "Anyway, I think it would be best to hear from the ‘designer’ of the code to find out what exactly they were trying to model before making any other assumptions or comments. Fair enough?" But EricMH never came back for further discussion. I recommend one read his explanation at #143 and try to figure out how his program models evolution, or how his results (which vary depending on the parameter p) show that "evolution doesn't work."jdk
September 9, 2018 at 08:26 AM PDT
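For readers who do not want to chase the repl.it link quoted further down the thread, here is a minimal reconstruction assembled only from the description quoted in the comment above: organisms of length L, two offspring of lengths L - 1 and L + 1, a per-site corruption probability p, and corruption is fatal. Everything beyond those quoted rules, including how the population is capped, is an assumption, so treat it as a paraphrase rather than as EricMH's program.

# Minimal reconstruction of the simulation as *described* in this thread,
# not EricMH's actual code. Assumed rules: each organism has a length L;
# every generation it produces offspring of lengths L-1 and L+1; each of
# an offspring's L sites is corrupted with probability p, and any
# corruption kills it. The random population cap is my own device to
# keep the run finite and is not part of the quoted description.
import random

def run(p=0.01, generations=200, cap=100, seed=0):
    rng = random.Random(seed)
    population = [1]   # start from a single length-1 organism
    record = 1         # longest length ever observed
    for _ in range(generations):
        offspring = []
        for L in population:
            for child in (L - 1, L + 1):
                if child < 1:
                    continue
                # the child survives only if none of its sites is corrupted
                if all(rng.random() > p for _ in range(child)):
                    offspring.append(child)
        if not offspring:
            break
        if len(offspring) > cap:
            offspring = rng.sample(offspring, cap)
        population = offspring
        record = max(record, max(population))
    return record

for p in (0.01, 0.001, 0.0001):
    print(f"p = {p}: longest length observed = {run(p=p)}")

Because the quoted description says nothing about population management, the numbers this prints should not be read as reproducing Eric's output; the sketch only makes the quoted rules concrete enough to experiment with.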
EricMH @100, Thanks for your response. Also, thanks for your service, and congrats on your appointment to the Walter Bradley Center for Natural and Artificial Intelligence. Must be fun to work with Dr. Robert Marks! You stated Bob O'H had one objection you addressed. Was he satisfied after your modifications? And this line...
"However, it seems clear that if we think of DNA as similar to bitstrings describing programs in computers, that there is an enormous amount of mutual information in DNA"
Understatement of the year ;-) How many variables are there in DNA Code(s) and the molecular interactions of cells? How many rules, for that matter, and how many sub-routines? On a side note: do you think it fair or analogous to say bitstrings = AA sequences? Anyone else can chime in if they like. Upright Biped, another contributor here at UD, cited Pattee and others like Barbieri in the past. http://codebiology.org An example of the collection of Codes that natural evolutionist Barbieri cites...
The signal transduction codes Signal transduction is the process by which cells transform the signals from the environment, called first messengers, into internal signals, called second messengers. First and second messengers belong to two independent worlds because there are literally hundreds of first messengers (hormones, growth factors, neurotransmitters, etc.) but only four great families of second messengers (cyclic AMP, calcium ions, diacylglycerol and inositol trisphosphate) (Alberts et al. 2007). The crucial point is that the molecules that perform signal transduction are true adaptors. They consists of three subunits: a receptor for the first messengers, an amplifier for the second messengers, and a mediator in between (Berridge 1985). This allows the transduction complex to perform two independent recognition processes, one for the first messenger and the other for the second messenger. Laboratory experiments have proved that any first messenger can be associated with any second messenger, which means that there is a potentially unlimited number of arbitrary connections between them. In signal transduction, in short, we find all the three essential components of a code: (1) two independents worlds of molecules (first messengers and second messengers), (2) a set of adaptors that create a mapping between them, and (3) the proof that the mapping is arbitrary because its rules can be changed (Barbieri 2003).
Linking this back to one of my points: comparing Order Sequence to Functional Sequence (Organized Coding and Networking) as stated in the paper by Abel and Trevors. Organization at the levels Trevors and Abel outline is semantically driven - Code. That leads to varying and wide rules-based coding, and a myriad of variables that can be utilized, preset and updated, defined and deciphered as part of the Code structure, as defined by Barbieri above. The incredible amount of interpretive processing taking place between different cellular systems and external networks strengthens your point on Mutual Information. And is it too big a leap to state that such a system expands to the Nth power without a design-centric approach? Following the wabbit down the wabbit hole... The task of Multivariate Mutual Information (or Information Interaction) states...
"These attempts have met with a great deal of confusion and a realization that interactions among many random variables are poorly understood.[citation needed]"
When we are discussing Code(s) in DNA, like the Ubiquitin Code as an example, then variables increase. And we begin to see built-in redundancy which is the case of Information Interaction(or Multivariate Mutual Information). Darwinist have long stated duplication and redundancy as evidence of unguided, blind evolution. Often laughing and guffawing that the Designer must be very bad due to redundant structures. On the contrary! That only shows their absolute ignorance on the subject. Redundancy is a hallmark of sophisticated design techniques. And unguided, random mutations have no power to generate novel forms that survive w/o some trade off and certainly not at the level of macro events. Like Behe has noted, the evidence is scant. Example Malaria and Sickle Cell Anemia. While the survivors survive, they do so at a cost to future generations. And there is no new novel forms. Note to blind, unguided evolutionist. Variations of species on the edge, like Darwin's finches are not evidence for macro events, just variation. They still interact, produce offspring and are still finches. Even Darwin's finches show the limits of blind, unguided random mutations and natural selection. And to add insult to injury, if I remember correctly the changes are now thought to be led by epigenetic features. LOL formerly thought to be "Junk" by Darwinist! Life is grand :)DATCG
September 9, 2018 at 08:02 AM PDT
EricMH, Regarding your Python simulations, have you sought any feedback from fellow researchers at your university (or elsewhere in academia)? I would guess you have access to quite a bit of expertise and perhaps could find a specialist to evaluate your programs.daveS
September 9, 2018 at 06:49 AM PDT
PPPS: For background, let me also tie in some thoughts in my always linked note, on the connexions between informational and thermodynamic entropy, given the informational thermodynamics school of thought:
let us consider a source that emits symbols from a vocabulary: s1,s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a "typical" long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M --> pj, and in the limit attains equality. We term pj the a priori -- before the fact -- probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori -- after the fact -- probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver: I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1 This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that: I total = Ii + Ij . . . Eqn 2 For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is: I = log [1/pj] = - log pj . . . Eqn 3 This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi *pj); so: Itot = log1/(pi *pj) = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4 So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is - log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see "wueen" it is most likely to have been "queen.") Further to this, we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer): - H = p1 log p1 + p2 log p2 + . . . + pn log pn or, H = - SUM [pi log pi] . . . Eqn 5 H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. 
Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information,. . . ):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the vary large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) For, as he astutely observes on pp. vii - viii:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
kairosfocus
September 8, 2018 at 11:38 PM PDT
PPS: I should note, random variables run the gamut from highest uncertainty for a value [flat random distribution] to certainty [probability 1 or 0 for the value] so using such variables is WLOG, again.kairosfocus
September 8, 2018 at 11:25 PM PDT
PS: On Mutual info (e.g. think, how much do we credibly know of info string X on observing string Y and apply to sent/stored and received/retrieved messages), Scholarpedia gives some useful thoughts:
Mutual information is one of many quantities that measures how much one random variables tells us about another. It is a dimensionless quantity with (generally) units of bits, and can be thought of as the reduction in uncertainty about one random variable given knowledge of another. High mutual information indicates a large reduction in uncertainty; low mutual information indicates a small reduction; and zero mutual information between two random variables means the variables are independent . . . . H(X) [--> entropy in the info context] has a very concrete interpretation: Suppose x is chosen randomly from the distribution PX(x) , and someone who knows the distribution PX(x) is asked to guess which x was chosen by asking only yes/no questions. If the guesser uses the optimal question-asking strategy, which is to divide the probability in half on each guess by asking questions like "is x greater than x0 ?", then the average number of yes/no questions it takes to guess x lies between H(X) and H(X)+1 (Cover and Thomas, 1991). This gives quantitative meaning to "uncertainty": it is the number of yes/no questions it takes to guess a random variables, given knowledge of the underlying distribution and taking the optimal question-asking strategy . . . . Mutual information is therefore the reduction in uncertainty about variable X , or the expected reduction in the number of yes/no questions needed to guess X after observing Y .
In context, I point to how the structured Y/N chain of q's strategy in effect quantifies info in bits, relevantly understood. Where also, as a 3-d functionally specific configuration (think, oriented nodes and arcs in a mesh similar to an exploded view diagram) may be described through such a chained, structured Y/N Q strategy (i.e. encoded in strings, cf AutoCAD etc) we see here, that discussion on strings is WLOG. Where, too, one may infer the underlying "assembly instructions/info" from observing the 3-d functionally specific structure. Proteins beg to be so analysed, and as we know, the 3-base codon pattern is tied to the significantly redundant AA sequences via the genetic code. Where, too, there is evidence that the redundancies are tied to robustness of function of proteins and perhaps to regulation of expression. Similarly, we may point to the sequences and how they link to the pattern whereby functionality of proteins is tied to key-lock fitting driven by folding patterns. (This also applies in a similar way to tRNA.) In short, the islands of function structure of AA sequence space is connected to the observed functional protein patterns in ways that point to purposeful selection and organisation of a complex organised system based on profound, systemic knowledge of possibilities. That's a smoking gun sign of very smart engineering, as Paley long ago recognised on comparing stumbling across a rock in a field vs finding a coherently and functionally organised watch. Where, too, in Ch 2 he rapidly went on to discuss the further significance of finding the additional functionality of a self-replicating subsystem. Yes, it was almost 150 years later before von Neumann gave us a functional analysis of the kinematic self replicator and then we saw how such worked out in the living cell. What remains is that we are looking at something that is antecedent to OoCBL, and it is clearly linguistic, purposeful, systemic and driven by deep knowledge of highly complex possibilities for C-chemistry in aqueous mediums. That then points onward to the observation that the physics of the cosmos is evidently fine tuned in ways that support such cell-based life. That is another whole domain of connected information where knowledge of the cell ties in with knowledge of the cosmos, giving a much broader scope to our design inferences. We are seeing design signatures connecting the world of life and the fine tuning of the physics of the cosmos.kairosfocus
September 8, 2018 at 11:22 PM PDT
Folks, let us not miss the forest due to looking at the trees. D/RNA expresses alphanumeric code that functions to create proteins stepwise, using molecular nanotech, all of which is by logic of process antecedent to functioning, self-replicating cell based life. This pattern strongly indicates language, purpose via goal directed system behaviour, and that such is present prior to the origin of cell based life [OoCBL]. Where, too, the CCA tool tips of tRNA's are universal, separated physically from the anticodon ends, and where specific loading with given AA's is based on general conformation of the folded tRNA. That is how the code is operationally introduced into the system. All of this is constrained by the need for high contingency to store information, so the mere dynamics of chemical bonding will not explain the system. The complexity joined to functional coherence, organisation and specificity i/l/o the further fact of deeply isolated protein-relevant fold domains in AA sequence space combine to make happy chance utterly implausible. All of these factors strongly point to design being at the heart of cell based life. So, design sits at the table as strong horse candidate right at OoCBL. This then extends to origin of body plans right down to our own. KFkairosfocus
September 8, 2018 at 10:36 PM PDT
EricMH:
It is not restricted to blind Darwinian evolution, but all forms of evolution.
That is just wrong, then. Evolution by means of intelligent design can definitely evolve increasingly complex organisms- genetic algorithms exemplify that.ET
September 8, 2018 at 09:18 PM PDT
The code shows how difficult it is to evolve increasingly complex organisms according to some simple mathematical models. It is not restricted to blind Darwinian evolution, but all forms of evolution. Bob O'H had one objection, which was easily added to the model, and continued to demonstrate the difficulty of evolving complex organisms. Since I'm not well versed in biological evolution, that's all I can do. Someone proposes a way the model is deficient, and I see if that's true with further refinement of the model. Seems a reasonable way to proceed. Since evolutionary theory is full of hand wavy ambiguity, such that much cannot even be falsified and is thus not science, we should seek to mathematically quantify as much as we can, and see what it takes to create a mathematical model that is as effective as evolution is claimed to be.EricMH
September 8, 2018 at 07:44 PM PDT
From the link:
This simulation shows random mutation and selection causes organism complexity to stop increasing
That said there isn't anything to disprove when it comes to evolution by means of blind and mindless processes because no one has ever demonstrated that type of evolution can do anything beyond cause genetic diseases and deformities. That's allET
September 8, 2018 at 07:14 PM PDT
re 106: You are reading too much into my comment. Eric's program is at a site called "https://repl.it/@EricHolloway/Evolution-Doesnt-Work". I was just referring to the claim made in the thread where he first brought this up that it doesn't in any way model any real biology, so it doesn't disprove evolution, as his title seems to imply. That's all.jdk
September 8, 2018 at 07:07 PM PDT
Earth to Jack- Intelligent Design is NOT anti-evolution, so it follows that IDists are not trying to disprove evolution. What IDists argue is against blind and mindless processes being able to produce certain structures, objects, events and systems. And to date no one has stepped up and shown that evolution by means of blind and mindless processes can do anything beyond cause genetic diseases and deformities. That means you make it clear that you are not well informed in biological mattersET
September 8, 2018 at 06:32 PM PDT
re 102: Eric, you posted that code once before, claiming it disproved evolution. Some of us, including gpuccio I believe, showed you why it did not model evolution at all, and proved or disproved nothing. In 100, you say, "I am not well informed in biological matters." Your program certainly makes that clear.jdk
September 8, 2018 at 05:29 PM PDT
Hi Eric
@DATCG I am not well informed in biological matters. I can only draw analogies with my field of expertise, computer science. However, it seems clear that if we think of DNA as similar to bitstrings describing programs in computers, that there is an enormous amount of mutual information in DNA. Also, a quick google pulls up a decent amount of material applying mutual information to bioinformatics. Of possible interest, I wrote a story illustrating the core reason why natural processes cannot create mutual information.
I just reviewed a paper comparing mutual information of genes with the environment. I then searched for mutual information in PubMed and got 6000 hits. This subject is front and center in biology right now.bill cole
September 8, 2018 at 03:28 PM PDT
It's not only the code. You need the parts and system to run the code.ET
September 8, 2018 at 02:13 PM PDT
@john_a_designer, the fact that reproduction provides such a fault-tolerant channel is incredible. The only reason this is possible is that DNA is a digital code. Analog signals do not have this error-correcting ability. I ran some simulations to demonstrate how incredible this is here: http://creationevolutionuniversity.com/forum/viewtopic.php?f=3&t=169EricMH
September 8, 2018 at 10:19 AM PDT
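EricMH's own simulations are at the forum thread he links above. The digital-versus-analog asymmetry he is pointing to can be illustrated independently with a toy comparison (not his code, and nothing in it is specific to DNA): a digital string copied through a noisy channel can be restored each generation by a repetition code and majority vote, while an analog value copied with additive noise has nothing to vote on, so its error only accumulates.

# Toy comparison, not EricMH's simulation: digital copying with a 5x
# repetition code and majority vote versus uncorrected digital copying
# versus analog copying with additive noise.
import random

random.seed(1)
GENERATIONS = 100
NOISE = 0.01  # per-bit flip probability; also the analog noise scale

def noisy_bit(b):
    """Copy one bit through a channel that flips it with probability NOISE."""
    return b if random.random() > NOISE else 1 - b

message = [random.randint(0, 1) for _ in range(100)]

# Digital with error correction: five noisy copies per bit, majority vote.
coded = list(message)
for _ in range(GENERATIONS):
    coded = [1 if sum(noisy_bit(b) for _ in range(5)) >= 3 else 0 for b in coded]

# Digital with no error correction, for contrast.
raw = list(message)
for _ in range(GENERATIONS):
    raw = [noisy_bit(b) for b in raw]

# Analog: every copy adds a little Gaussian noise that can never be undone.
analog = 1.0
for _ in range(GENERATIONS):
    analog += random.gauss(0, NOISE)

print("bits wrong, majority-vote copy:", sum(a != b for a, b in zip(message, coded)))
print("bits wrong, uncorrected copy:  ", sum(a != b for a, b in zip(message, raw)))
print("analog drift from 1.0:         ", round(abs(analog - 1.0), 3))

The connection to DNA proofreading and repair is EricMH's analogy; the toy only shows why discrete symbols admit this kind of correction while continuous signals do not.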
Here is a stunning claim from the Abel and Trevors paper cited by DATCG @ 97.
Genes are not analogous to messages; genes are messages. Genes are literal programs. They are sent from a source by a transmitter through a channel (Fig. 3) within the context of a viable cell. They are decoded by a receiver and arrive eventually at a final destination. At this destination, the instantiated messages catalyze needed biochemical reactions. Both cellular and extracellular enzyme functions are involved (e.g., extracellular microbial cellulases, proteases, and nucleases). Making the same messages over and over for millions to billions of years (relative constancy of the genome, yet capable of changes) is one of those functions. Ribozymes are also messages, though encryption/decryption coding issues are absent. The message has a destination that is part of a complex integrated loop of information and activities. The loop is mostly constant, but new Shannon information can also be brought into the loop via recombination events and mutations. Mistakes can be repaired, but without the ability to introduce novel combinations over time, evolution could not progress. The cell is viewed as an open system with a semi-permeable membrane. Change or evolution over time cannot occur in a closed system. However, DNA programming instructions may be stored in nature (e.g., in permafrost, bones, fossils, amber) for hundreds to millions of years and be recovered, amplified by the polymerase chain reaction and still act as functional code. The digital message can be preserved even if the cell is absent and non-viable. It all depends on the environmental conditions and the matrix in which the DNA code was embedded. This is truly amazing from an information storage perspective. (emphasis added)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1208958/ One of the key questions you have to answer, if you believe in a naturalistic dys-teleological origin for DNA or RNA, is how did chemistry create the code? Do you have any evidence of how an undirected and purposeless physical process created what we intelligent beings recognize as code? If you do please give us your explanation. Or, is it just your belief?john_a_designer
September 8, 2018 at 10:00 AM PDT
@DATCG I am not well informed in biological matters. I can only draw analogies with my field of expertise, computer science. However, it seems clear that if we think of DNA as similar to bitstrings describing programs in computers, that there is an enormous amount of mutual information in DNA. Also, a quick google pulls up a decent amount of material applying mutual information to bioinformatics. Of possible interest, I wrote a story illustrating the core reason why natural processes cannot create mutual information. https://uncommondescent.com/intelligent-design/could-one-single-machine-invent-everything/EricMH
September 8, 2018 at 09:06 AM PDT
@Quaesitor, you seem to have not carefully understood the proof in section 1.2 of Levin's paper, and are instead just quote mining. Perhaps you can carefully understand the proof first, and then respond to my argument. It is not very long and I can help.EricMH
September 8, 2018 at 08:58 AM PDT
