
Eric Holloway: ID as a bridge between Francis Bacon and Thomas Aquinas


Eric Holloway, an electrical and computer engineer, offers some thoughts on how to prevent science from devolving into “scientism.” For an example of scientism, see Peter Atkins’s claim that science can answer all the Big Questions. Here’s John Mark Reynolds’s outline of the general problem:

Sometimes a culture takes a right road, sometimes it passes the right way and ends up a bit lost. Western Europe had a chance at the start of the seventeenth century to get a few things right, but by the eighteenth century most had taken a worse way: Enlightenment or reaction. Enlightenment lost the wisdom of the Middle Ages, creating the myth of a dark age, and the main enlightened nation, France, ended the eighteenth century in butchery and dictatorship. Instead of the development of an urbane Spain Cervantes might have prefigured, there was a mere reaction away from the new ideas, including the good ones. More.

Intelligent Design: The Bridge Between Baconian Science and Thomistic Philosophy

Imagine giving your friend a good book filled with beautiful pictures and stories. Instead of reading it, the friend begins to count the letters, to make theories about which letters predict which pictures will come next, and to analyze the types of ink used to print the pages. This does not make sense. Why doesn’t he just read the book? The reason, he claims, is that we do not want to bias ourselves by assuming the ink was arranged purposefully.

Thomas Aquinas (painting by Carlo Crivelli)

This story illustrates the difference in perspective between the medieval age and our modern scientific age. The medieval worldview was marked by the voluminous philosophy of Thomas Aquinas (1224/6–1274): God is ultimate existence, and creation is ordered toward maximizing its existence in God. As such, there is a natural law that must be followed for humankind to flourish. Deviation from the natural law results in cessation of existence and death. Because the human mind can rationally grasp changeless principles, the medievals thought there was something changeless and immortal about the human soul. Since all other physical creatures lack this rational ability, they exist to a less perfect degree than human beings. This means that all humans inherently have a higher worth than all the rest of physical creation, and at the same time all humans are equal, since it is of the nature of humankind to be rational, even if particular humans are incapable of rational thought.

But the intricate medieval tapestry begins to unravel. An expanding view of the globe, major diseases and wars, and internal criticisms lead to a breakdown of the Thomistic system. Francis Bacon (1561–1626), a leading popularizer of what we consider modern science, grows impatient with the monks’ philosophizing and debating. Demanding results, Bacon recommends carefully dissecting nature’s mysteries to heal the world’s suffering, instead of wondering about the meaning of it all. And thus was born the modern scientific age, in which the perception of meaning is only a biased illusion and truth must be empirically measurable.

Today, Bacon’s view is the dominant view, so much so that we take it for granted. Science and technology have led to a revolution in health, wealth and material happiness throughout the world. In the space of a few centuries they have lifted the majority of the earth’s booming population out of poverty. The rigorous vision of Bacon, spoken with the precision of math, has given us the gift of the gods, but it has also resulted in unprecedented death and destruction, horrific human experimentation, mass enslavement, cultural disintegration, and in general left us with a sense that we have lost something of great value that we cannot find again. The core reason for the aimlessness is that the building blocks of science are inert. They are like Legos in a box. You cannot shake the box of Legos and expect a spaceship to fall out. In the same way, mathematical proof and physical evidence cannot explain their own reason for being. Science cannot explain meaning. At the same time, the very inability of science to speak for itself says something of interest.

Francis Bacon (portrait by Van Somer)

In medieval language this missing meaning is called function. Function cannot emerge from atoms in motion. It cannot emerge from shaking the Lego box. This claim can be proven mathematically. In information theory, function is a kind of mutual information. Mutual information is subject to the law of information non-increase, which means mutual information, and thus function, cannot be created by natural processes. Thus, without an organizing force, matter is functionless and void, and there is no meaning.
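To make this concrete, here is a minimal Python sketch (an illustrative toy with made-up distributions, not a biological model) of mutual information and of the data processing inequality, one standard form of the law of information non-increase:

import numpy as np

def mutual_information(joint):
    # I(X;Y) in bits for a joint probability table joint[x, y].
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# A correlated pair: Y is a noisy copy of a fair bit X.
joint_xy = np.array([[0.45, 0.05],
                     [0.05, 0.45]])
print(mutual_information(joint_xy))   # about 0.53 bits

# Post-process X into Z through a channel (Markov chain Y - X - Z).
channel = np.array([[0.9, 0.1],       # p(z | x=0)
                    [0.2, 0.8]])      # p(z | x=1)
joint_zy = channel.T @ joint_xy       # p(z, y) = sum_x p(z|x) p(x, y)
print(mutual_information(joint_zy))   # about 0.24 bits: strictly less

However X is shuffled, flipped or otherwise processed by chance and necessity, I(Z;Y) cannot exceed I(X;Y); that is the sense in which shaking the Lego box cannot create new mutual information.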

The fundamental insight of the intelligent design movement is that we can empirically differentiate function from accidental patterns created by natural processes. This means we can describe the Thomistic system with Baconian empirical precision if we really wanted to. Fortunately, humans seem to be pretty good at identifying function without huge amounts of empirical justification, unless they are university trained. The empirical detection of function is a new pair of glasses that corrects Bacon’s vision, and helps us again follow along the path that winds back through the medieval monasteries of Thomas Aquinas, with the mathematical and empirical rigor of science.

But, after hearing this, Bacon will say, “it all sounds quite nice, but how is it useful? Function doesn’t feed children or cure cancer.” The answer to Bacon’s question is illustrated by the story of the book at the beginning. If we approach the natural world as if it were arbitrarily put together, then we miss many clues that can help us to understand and use it better.

We are seeing the scientific importance of empirically detecting function now with the ENCODE project. Previously, scientists believed that since the human genome was produced by evolution, most of it would be random and functionless. However, the ENCODE project has shown the majority of the human genome is functional. Now that we understand the genome is mostly functional, we will be better able to decode how it works and programs our body. So, contrary to Bacon, being able to detect function in the human genome can help us improve our lives.

This raises the further question: how would science change if we broadened our detection of function to the rest of the world? Since things work better if they follow their function, does this mean there is a proper order for human flourishing, as the medievals believed? Furthermore, what does science have to say about the creators of function, such as humans? Since matter cannot create function, function creators cannot be reduced to matter. And being more than matter, human beings must be more valuable than any material good. While it is true we cannot go from is to ought, intelligent design does provide a scientific basis for human ontological and pragmatic worth, as well as justifying a natural law that must be followed in order for humanity to prosper. So, through the lens of intelligent design, science can indeed talk about the metaphysical realm of value and morals and explain the medieval worldview of function in the empirical language of modern science.

Note: This post also appeared at Patheos (August 30, 2018).

See also: Could one single machine invent everything? (Eric Holloway)

and

Renowned chemist on why only science can answer the Big Questions (Of course, he defines the Big Questions as precisely the ones science can answer, dismissing the others as not worth bothering with.)

Comments
@Mung & Bill Cole, when I talk about mutual information, it is not only between DNA strands. That is probably the weakest way to apply the argument, as we've seen in my interactions at PS. What I had more in mind is a case where X is a DNA strand, and Y is some way of describing X. Y could be a mathematical formula or information about the organism's environment. Alternatively, X could be some organs in the animal and Y could be a description of how they work together to perform a function. There are many ways to fill in the variables X and Y. The point is that if there is mutual information between the two, and the conditions for the LoING are met, then something other than chance and necessity must be involved in the generation of X. The bigger point is that this is what ID has been saying with regard to CSI. Many have claimed CSI is bogus math, but what I've found is that CSI can be coherently defined within information theory with the concepts of mutual information and the law of information non-growth. So, at the very least the math is not bogus, and we can potentially measure quantities that indicate intelligent design, even if it is unclear exactly how the theory is applied to areas of interest such as biology. EricMH
I publicly apologize for the unprofessional comment regarding Dr. Swamidass in comment #146. I should not have made accusations about his argumentation without specifics. EricMH
@Mung

> Even if Y has no effect on the probability distribution X can you not still calculate the mutual information?

If X (the DNA symbols) was generated by a uniform distribution, there wouldn't be any mutual information with Y. Basically, the mutual information is showing there is a better explanation for X than the uniform distribution. So,

> isn't mutual information the information one obtains about X by knowing Y, or conversely, the amount of information one obtains about Y if you know X?

is correct. In Durston's case, there is another distribution Y that provides much more information about X than the uniform distribution.

> What he has given us so far is the FI to perform a function.

The FI to perform a function is going to be less than the information required to create the FI. You cannot get more mutual information from less, due to the law of information non-growth. Swamidass in another thread explained FI is conditional mutual information, which in turn depends on non-conditioned mutual information, so there is no getting around the information non-growth law. Durston's FI is the Kullback-Leibler divergence between an empirical distribution and the uniform distribution, and when the divergence is typical it becomes mutual information, per my analysis a couple comments above. EricMH
EricMH:
Thus, Durston's functional information turns out to be mutual information, and the standard conservation laws apply. This means that whenever he detects functional information, it is a sign of intelligent design, since chance and necessity are provably incapable of generating mutual information.
Kirk points out that there is an important distinction to be made here, one which I don't think you're taking into account. We'll see where it leads.
5. There is a difference between the functional information required to perform a function, and creating that information.
What he has given us so far is the FI to perform a function. Mung
Thank you Eric. Even if Y has no effect on the probability distribution X can you not still calculate the mutual information? Further, isn't mutual information the information one obtains about X by knowing Y, or conversely, the amount of information one obtains about Y if you know X? So as it stands just now I don't see what you are describing as mutual information. I'll need to crack open the books and see if I can find some examples. Mung
@Mung and Bill Cole, Just to be extra clear, Durston's formula is H(U) - H(X), and this is the same as KLD(P(X|Y) || U) since H(X) < H(U) because some Y is affecting X. Since his measurements are pretty typical for DNA, his H(U) - H(X) is the mean KLD(P(X|Y) || U) and consequently is the same as I(X;Y). Thus, Durston's functional information turns out to be mutual information, and the standard conservation laws apply. This means that whenever he detects functional information, it is a sign of intelligent design, since chance and necessity are provably incapable of generating mutual information. EricMH
The above only works if KLD(P(X|Y) || P(X)) is the mean over all Y. However, by the law of large numbers, as the number of samples increases we approach the mean. EricMH
@Mung & Bill Cole, here is my claim about order being mutual information in mathematical detail. Say X is a symbol in a string S (e.g. DNA strand) made from N symbols. By the principle of maximum entropy, https://en.wikipedia.org/wiki/Principle_of_maximum_entropy we set the distribution P(X) to be uniform and P(X) = 1/N. Let's say there is a reason Y that makes X not uniform, i.e. there is a large amount of order in X. So, P(X|Y) is a non-uniform distribution. The Kullback-Leibler divergence measures the divergence between two distributions, and is always non-negative. So, KLD(P(X|Y) || P(X)) measures the distance of the observed symbol distribution from the uniform distribution. For clarity, we set P(X) to U, to indicate it is the uniform distribution: KLD(P(X|Y) || P(X)) = KLD(P(X|Y) || U). However, this is not yet mutual information, but it can be made so by taking the expectation. https://en.wikipedia.org/wiki/Mutual_information#Relation_to_Kullback%E2%80%93Leibler_divergence To turn the KLD into mutual information, we take the expectation over the probability distribution of possible Ys that could influence X: I(X;Y) = E[KLD(P(X|Y) || P(X))]. If we apply the principle of maximum entropy to Y, since we don't know a priori what explanation is most likely, then P(Y) is also uniform. In which case, I(X;Y) = E[KLD(P(X|Y) || P(X))] = KLD(P(X|Y) || U). This is what Kirk Durston is measuring with his functional information: the mutual information between a DNA strand and some cause that makes the symbol probability non-uniform, while applying the principle of maximum entropy, which we do to minimize our bias. EricMH
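As a quick numerical check of the identity in the comment above (made-up symbol frequencies, not Durston's data), H(U) - H(X) and KLD(P || U) agree exactly when U is uniform:

import numpy as np

p = np.array([0.5, 0.2, 0.2, 0.1])   # hypothetical observed symbol frequencies
N = len(p)
u = np.full(N, 1.0 / N)              # uniform (maximum entropy) distribution

H = lambda q: -np.sum(q * np.log2(q))
kld = np.sum(p * np.log2(p / u))     # Kullback-Leibler divergence KLD(P || U)

print(H(u) - H(p))   # 0.239... bits
print(kld)           # 0.239... bits -- identical, as the algebra requires

The identity holds because KLD(P || U) = sum_x p(x) log2(p(x) N) = log2 N - H(P) = H(U) - H(P); whether this quantity is also a mutual information is exactly the question of whether the expectation step above is justified.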
@Mung, I'm saying mutual information is the Shannon entropy of the uniform distribution minus the Shannon entropy of the actual distribution, as Kirk calculated. The uniform distribution is the maximum entropy distribution, so it represents the state of no a priori correlation with anything else. EricMH
Bill, it will be interesting to see whether Joshua allows Eric to participate in his exclusive thread. :) Mung
EricMH:
The only thing that is not mutual information is the output of a uniform distribution.
You seem to be saying that mutual information is the same as Shannon information except in the case of a uniform distribution. You agree that the Shannon measure of information is defined for any probability distribution, right, including the uniform distribution? Mung
Eric
@bill cole, I’ll check it out this weekend. Swamidass is annoying to debate with because he equivocates a whole lot and fudges his math. For someone who is an expert in information theory, his arguments are highly questionable.
I understand. I am hoping that Kirk's attendance and hopefully Mung's along with you can add some rationality to the discussion. bill cole
@bill cole, I'll check it out this weekend. Swamidass is annoying to debate with because he equivocates a whole lot and fudges his math. For someone who is an expert in information theory, his arguments are highly questionable. At any rate, FSC and functional information are mutual information. Any kind of order is mutual information. The only thing that is not mutual information is the output of a uniform distribution. EricMH
Eric Mung Kirk Durston has joined the discussion on Joshua's blog. It would be great for both of you to join. One of the discussion points is whether cancer can generate functional information. Joshua has put forward this hypothesis; however, at this point I disagree with him. One of my confusions is in understanding the relationships between FSC, mutual information, and functional information. A random generator appears to be able to create some of these but not all of these. bill cole
@Mung yeah, mutual information involves two probability distributions. It is more general than functional information, but functional information is a kind of mutual information, and consequently the law of non-growth applies. EricMH
Bill, Mutual Information has nothing to do with function. All you need are two random variables with their own probability distributions. That's it. For Shannon Information all you need is a probability distribution. Mutual Information just has a second one. Eric, am I wrong about that? Mung
@bill cole,

> Have you read Durston's paper on measuring functional specified complexity?

I just skimmed through it. His formula is not strictly mutual information, since the second term is not conditional, but there is a reduction of entropy subtracted from a baseline. So, it appears to be mutual information. EricMH
bill cole:
I agree with Mung that the issue in biology is functional information.
Or you could be like Neil Rickert and Alan Fox- deny that information has anything to do with it. And ignore all of the evolutionary literature to the contrary. ET
Eric
If Swamidass is right that functional information is conditional mutual information, then functional information is also limited by the conservation law. Thus, natural processes, insofar as they are reducible to randomness + determinism, cannot generate functional information.
I'm not sure he is right. Have you read Durston's paper on measuring functional specified complexity? bill cole
@Bill Cole Re: functional information At least according to Swamidass's account in the following thread: https://discourse.peacefulscience.org/t/swamidass-computing-the-functional-information-in-cancer/1646/40 it is a form of conditional mutual information. Conditional mutual information is the difference between absolute mutual information quantities, so it is a lower bound on the total absolute mutual information. Since the absolute mutual information is limited by the conservation law, so is the conditional mutual information. If Swamidass is right that functional information is conditional mutual information, then functional information is also limited by the conservation law. Thus, natural processes, insofar as they are reducible to randomness + determinism, cannot generate functional information. EricMH
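For reference, the standard chain rule behind that lower-bound step (a gloss on the comment above, not a quote from the thread) is I(X;Y,Z) = I(X;Z) + I(X;Y|Z), so that I(X;Y|Z) = I(X;Y,Z) - I(X;Z) <= I(X;Y,Z): the conditional mutual information is a difference of absolute mutual information quantities, and so cannot exceed the total that the conservation law constrains.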
Eric Mung
Swamidass claims CMI can be increased through naturalistic processes, and that since AMI is inaccessible to us, we should only concern ourselves with CMI. Thus, as far as we can tell, naturalistic processes can create mutual information.
I agree with Mung that the issue in biology is functional information. I think there is a connection with mutual information but I am not sure what it is. Perhaps functional information is a subset of mutual information. I have not seen the claim properly supported that natural processes can generate functional information at least in a sustainable way. bill cole
@Mung here is the current thread Swamidass and I are debating on. https://discourse.peacefulscience.org/t/wrap-up-experiment-with-mutual-information/ EricMH
@Mung and others, here's a short overview of the applications of Shannon & Kolmogorov's information theory. http://www.dp-pmi.org/uploads/3/8/1/3/3813936/6._figueiredo_2016.pdf It mentions Shannon's caveat that even though information theory is broader than just communications, its application should be approached in a methodical manner and not just by word association. So, Shannon agrees information theory applies to other domains, and encourages care in its application. EricMH
Thanks for the links, I was looking at two completely different threads, lol. Mung
@Mung here's a more concrete example: https://mindmatters.today/2018/09/meaningful-information-vs-artificial-intelligence/ EricMH
> Mutual information is still Shannon information and as such has nothing to do with either meaning or function.

True, it is just mathematical manipulation. But, my point is that it describes what meaningful/functional information is: namely a correlation between some entity and an independent description. In particular, the algorithmic mutual information makes this pretty precise.

> Would you kindly post the link?

Swamidass' main argument is that though he agrees with me that algorithmic mutual information (AMI) cannot be produced through naturalistic processes, in the real world we can never measure AMI exactly, because it is by definition not computable. Instead, we are always stuck with calculable mutual information (CMI). Swamidass claims CMI can be increased through naturalistic processes, and that since AMI is inaccessible to us, we should only concern ourselves with CMI. Thus, as far as we can tell, naturalistic processes can create mutual information. Though it is true we cannot measure AMI directly, I'm not convinced by his argument for a couple reasons. First, I think there are situations where we can get a decent approximation of AMI. For example, the Lempel-Ziv compression algorithm will approach the true AMI with a long enough ergodic sequence. Second, positive CMI always implies positive AMI, even if we cannot measure the true AMI. So, at the very least, we cannot go from 0 -> 1 CMI through naturalism. All the threads are quite long. Here is where Swamidass claims he can demonstrate MI comes about through naturalistic processes: https://discourse.peacefulscience.org/t/swamidass-computing-the-functional-information-in-cancer/1646/40 Here he discusses my argument that Levin's proof + existence of MI means a halting oracle must exist: https://discourse.peacefulscience.org/t/intelligence-and-halting-oracles/1124/29 EricMH
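For readers who want to see the compression idea in action, here is a toy Python sketch (an illustration using zlib as a stand-in compressor, not code from the thread) of the standard approximation AMI(x:y) ~ C(x) + C(y) - C(xy), where C is compressed length:

import zlib, random

def C(b):
    # Approximate description length by zlib-compressed size, in bytes.
    return len(zlib.compress(b, 9))

def approx_ami(x, y):
    return C(x) + C(y) - C(x + y)

random.seed(0)
x = bytes(random.getrandbits(8) for _ in range(4000))        # incompressible noise
y_copy = x                                                   # shares everything with x
y_indep = bytes(random.getrandbits(8) for _ in range(4000))  # shares nothing with x

print(approx_ami(x, y_copy))    # large: thousands of bytes of shared structure
print(approx_ami(x, y_indep))   # near zero: no shared structure to exploit

A real compressor only gives an upper bound on description length, so this is an estimate of the algorithmic quantity, not its exact value.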
EricMH:
@Mung what do you think meaningful and functional information are?
I think meaningless information is an oxymoron. Functional information would be information where meaning is defined by function. Shannon information simply does not take the meaning of a sequence of symbols into account. It is merely concerned with probability distributions. Mutual information is still Shannon information and as such has nothing to do with either meaning or function. Mutual information is therefore the reduction in uncertainty about variable X, or the expected reduction in the number of yes/no questions needed to guess X after observing Y. There is no connection here between what natural processes can or cannot accomplish. I thought I was following the conversation at PS but maybe I am looking at the wrong link. Would you kindly post the link? Mung
@Mung, and I'll work on more concrete examples. I've been arguing the point at length with Swamidass over at PS, and so far it hasn't been refuted. So, seems worthwhile to pursue. EricMH
@Mung what do you think meaningful and functional information are? EricMH
EricMH:
But I’m more than willing to do my best effort for sincere intentions to understand. So, Mung, if you are truly interested, let me know, and I’ll work on something more concrete.
If you want your argument to be useful to ID I think you would want to put in that effort. "Information theory proves that evolution is impossible" isn't going to go far in a debate if the person making that claim can't explain how or why. Mutual information is still information in the Shannon sense, which has nothing to do with meaning or function. Mung
This gives quantitative meaning to "uncertainty": it is the number of yes/no questions it takes to guess a random variable, given knowledge of the underlying distribution and taking the optimal question-asking strategy . . . .
Note it is not merely counting yes/no questions, but also knowing the underlying distribution and using the optimal question-asking strategy. Mung
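As a tiny check of that quoted point, here is a made-up dyadic example in Python where the optimal halve-the-probability strategy works out exactly:

import math

p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}   # toy dyadic distribution

H = -sum(q * math.log2(q) for q in p.values())           # entropy: 1.75 bits
questions = {s: -math.log2(q) for s, q in p.items()}     # questions to isolate s
expected = sum(p[s] * questions[s] for s in p)           # average questions asked

print(H, expected)   # 1.75 1.75 -- uncertainty equals average yes/no questions

For non-dyadic distributions the average number of questions lands between H(X) and H(X)+1, as the quoted passage says.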
Thanks for all the feedback. The basic idea is not mine, but Salvador Cordova's. Basically, bad mutations can spell the end of an organism. As genomes grow in length the probability of a deadly mutation grows exponentially. So, the question is how does evolution cause genomes to grow? The simulations answer this question, and show we have to make some very unrealistic assumptions about evolution for it to cause genomes to grow. As I mentioned before, this applies to any form of evolution. To return to the topic of the article, since we can talk about these sorts of things empirically, this opens the door to Thomism. EricMH
JDK @122, I ran out of time to edit my previous entry. You stated...
When we build a mathematical/logical model, we have to test the model against real-world, empirical results to see if it’s a good model. If the model gives us nothing we can test, or it gives testable results but they don’t agree with reality, we have to revise our model. Of course our models are intelligently designed (some more intelligently than others, I imagine), but how else can we study the world if we don’t use our intelligence?
First, no one is arguing about models needing to be intelligently designed. But do you not see any irony in your statements? That the models are built upon assumptions of macro-evolutionary events? And at best this is an inference? Not evidence? Claiming a "real-world" by unguided means is an assumption, not a fact. And it takes intelligence to model this assertion, because the models must utilize Intelligent Selection in order to create novel forms over time. This is not population genetics or models of variations among species and bacteria, or dogs and plants for example. I find this statement ironic... "...but how else can we study the world if we don't use our intelligence?" We agree :) But the world you speak of is at best an inference, and may not be real at all. In fact, there has been a lot of story-telling in evolutionary science circles since Darwinism was first promoted. All very informed, highly educated scientists proposing stories of the past. Those were all built upon assumptions, and you might say highly informed, but still assumptions of blind, unguided events - random mutations - and natural selection. So building a model, inserting Intelligent Selection into the model to mold future events, is still built upon assumptions. And as we see with the Epigenome and vast areas of new discovery now taking place, many assumptions were wrong. And the largest of them all might be "Junk" DNA. The more "Junk" found to have purpose and function and to be transcribed, the farther, I think, the "real-world" moves away from Darwinist assumptions in these models and towards Intelligent or Directed Evolution. I would hope you agree at least that these models use built-in assumptions of macro events in the past and at best are an inference of the Darwinist framework, not evidence, since they must use guided intelligent programs to model any aggregate formation of new forms. And these models are not actually simulating what we know, but only what scientists think they know and assume. Yet they still cannot get away from artificial selection by intelligent means. I think far from helping Darwinist models, this reinforces Intelligent and Guided Evolution models. There are natural areas of limited, protected and controlled randomness that make complete sense in different life forms and in immune response systems. But again, I think the fallout of ENCODE, the continued discovery of functions within "Junk" DNA, will be the Darwinists' undoing. As over time, a threshold might be reached where blind, random mutations do not have room or time to innovate macro events. DATCG
JDK @122, Guess we will agree to disagree. From everything I've read, the attempts to model or simulate blind, unguided evolution include intelligent selection, like Gpuccio stated. I remember maybe one was more true to random mutations and natural selection and showed no significant increase in novel forms of the kind that is required for the diversity of life on earth. Cannot remember which study that was, been some time now. If you model or simulate an assumption, it is likely you may recreate the assumption and not a scientifically accurate depiction of evolution. But designed, directed, or Prescribed Evolution is precisely what these studies, models and simulations did. So sorry, I think that scores one for Intelligent Design. Finally, I do wonder how, in the future not so far from now, ENCODE ends up being the largest game changer of all. We are now finding many more functions across all areas of DNA in the Epigenome. As more functionality is discovered across more sections of previously written-off areas of DNA, the noose tightens around the old "Junk" DNA claims, as well as around old assumptions. The very assumptions those models were built upon. It is why many scientists are openly questioning and searching for new answers to evolution. Answers they thought they had already discovered in the past, based upon inaccurate assumptions, largely drawn upon ignorance of DNA and not actual scientific research. Conclusions were drawn early, they are exposed now, and many Darwinists are openly admitting there's a problem. They have in fact been wrong for decades, but today with the internet, blogs, and social media, word of these wrong conclusions and problems with neo-Darwinism is not confined to limited circles anymore. ENCODE is having a huge impact on how scientists are studying DNA, how we view diseases and mutations, as well as how scientists themselves are now contemplating evolution. Dan Graur predicts 75% Junk DNA if I remember correctly. Yes, here's his prediction... https://evolutionnews.org/2017/07/dan-graur-anti-encode-crusader-is-back/ I'm not pretending to know how much he might be wrong, but my guess for now is it might be substantial, as I keep reading different papers and sources on former regions of "Junk" DNA turning out to have functions. In fact, there are scientific databases full of functions being tracked today in these formerly unexplored regions that were long neglected by many. It's a new broad industry, especially important for Regulatory Gene processing. Gpuccio covers many areas on this that include Epigenetic structures and functional protein complexes that depend on formerly named "Junk" regions. I think you might find them all very interesting. I am especially interested in Intronic regions once blown off as JUNK. Turns out they do have functionality - see the Spliceosome entry. And I think I posted recently in the Transcription post by Gpuccio on Intronic regions as well. Here's a post by Upright Biped with Gpuccio's last post on Transcription linking all of Gpuccio's past posts. Many have comments that include important functions once assumed to be Junk by Darwinists in the past. Post 165 by Upright Biped in Transcription Regulation Post by Gpuccio... Transcription regulation: a miracle of engineering As I currently see it, the more function we find, the less "chances" there are for random mutations and "time" to work for blind evolutionary macro events. DATCG
Erasmus If you suddenly came across this, would you be able to hypothesize about its origin? Please answer 'yes' or 'no'. Eugene S
Genetic algorithms definitely use telic processes to actively search for solutions to the problems they were intelligently designed to solve. What we lack is a way to test natural selection being a designer mimic. Reality tells us that it isn't. So how do we proceed? ET
re 120: Thanks. DATCG. I just posted snippets of the prior thread to highlight some key things. One of the other points we made was that Eric's simulation just used the length of the string as the measure of "complexity" without any explanation as to why that would model the ability to survive and reproduce, so the only "selection" that occurs is if the organism (string) gets a "corrupt" bit (a deleterious mutation). So I agree with Gpuccio that modeling genetic change, changes in phenotype, and natural selection all in one simulation would be a very difficult task, and I also (who am no expert in this overall area) know of no model that does that. However, Gpuccio also says, "they are all examples of Intelligent selection, and therefore mean nothing." All simulations include intelligently designed decisions about how to proceed. I'm not sure that means much, unless we just don't want to count simulations as a legitimate tool for investigation. When we build a mathematical/logical model, we have to test the model against real-world, empirical results to see if it's a good model. If the model gives us nothing we can test, or it gives testable results but they don't agree with reality, we have to revise our model. Of course our models are intelligently designed (some more intelligently than others, I imagine), but how else can we study the world if we don't use our intelligence? jdk
DAT, I was clipping as testifying against interest, from Wikipedia. They in turn were referring to experts including G N Lewis. I note with concern from your link:
Fifteen studies from 1990 to 2002 were found. All 15 studies found that brain temperature was higher than all measures of core temperature with mean differences of 0.39 to 2.5 degrees C reported. Only three studies employed a t test to examine the differences; all found statistical significance. Temperatures greater than 38 degrees C were found in 11 studies. This review demonstrates that brain temperatures have been found to be higher than core temperatures; however, existing studies are limited by low sample sizes, limited statistical analysis, and inconsistent measures of brain and core temperatures.
Looks like the brain can begin to cook itself! This points to the significance of temperature regulation as a key body control process. KF kairosfocus
JDK @117, A quick glance, not all of Gpuccio's statements @166 were included. He goes on to say,
"But I would say the same thing of all computer simulations of evolution of which I am aware, because no computer simulation I am aware of even tries to simulate NS: they are all examples of Intelligent selection, and therefore mean nothing."
I agree with Gpuccio's quick summary on simulations of evolution in general. This includes Avida or examples like EV Ware as shown and discussed by Marks, et al. http://evoinfo.org/ev.html Out for rest of day. Have a good one. DATCG
JDK, Thanks, I'll let Eric respond as I catch up with it. DATCG
KF @114, You said or quoted,
For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . ."
Related? Increased blood pressure leads to vascular dementia (memory loss, confusion, etc.). Body temperature compared to brain temperature, paper from 2004: https://www.ncbi.nlm.nih.gov/pubmed/14998103/ And from 2016... Measuring Entropy Change in a Human Physiological System https://www.hindawi.com/journals/jther/2016/4932710/ Note how Mechanical Engineers are increasingly active in such areas of practical research and knowledge. DATCG
at 116, DATCG asks, "You stated Bob'OH had one objection you addressed. Was he satisfied after your modifications?" Thread in question is at https://uncommondescent.com/intelligent-design/how-some-materialists-are-blinded-by-their-faith-commitments/#comments. Eric addressed Bob's objection, but Bob didn't think much of it. See posts 87 and 93. However Bob's objection was minor in comparison to more fundamental flaws that were pointed out by JVL and me. I posted Eric's explanation of his code at #143. JVL responded at 150, and mentioned two key things.
Each organism is represented by a bitstring of length L. L is the organism's complexity.

How is complexity measured? Some ferns have a genome many times the size of humans.

Each generation, the organism will create two offspring with lengths of L-1 and L+1.

Name me one organism that fits this model. And, again, how do you measure complexity? Are you saying if I have two children then one will be more complex and the other will be less complex?
At 160, I added,
But, in addition to the obvious questions jvl asked (why is length a measure of complexity, what in the world is one child with L+1 and one with L-1 supposed to model, and why is corruption and instant death tied to one "bit flipping"), I'll also note that the "corruption probability" is set to 0.01. What does this represent? I set p = 0.0001, and the size of the organism with the longest length (which doesn't represent anything I can think of, but appears to be the number which the program claims shows that evolution is working or not) appears to get bigger and bigger, showing, presumably, that evolution works? This really all makes very little sense.
At 166, gpuccio said "OK, I don't think I understand well what this "simulation" is meant to simulate, but for what I understand, as it is reported at #160, I would agree with jdk that it does not seem any realistic simulation of any model of evolution." And at 167, JVL wrote, "Anyway, I think it would be best to hear from the 'designer' of the code to find out what exactly they were trying to model before making any other assumptions or comments. Fair enough?" But EricMH never came back for further discussion. I recommend one read his explanation at #143 and try to figure out how his program models evolution, or how his results (which vary depending on the parameter p) show that "evolution doesn't work." jdk
EricMH @100, Thanks for your response. Also, thanks for your service and congrats on your appointment to the Walter Bradley Center for Natural and Artificial Intelligence. Must be fun to work with Dr. Robert Marks! You stated Bob'OH had one objection you addressed. Was he satisfied after your modifications? And this line...
"However, it seems clear that if we think of DNA as similar to bitstrings describing programs in computers, that there is an enormous amount of mutual information in DNA"
Understatement of the year ;-) How many variables are there in DNA Code(s) and molecular interactions of cells? How many rules for that matter, how many sub-routines? On a side note: Do you think it fair or analogous to say bitstrings = AA sequences? Anyone else can chime in if they like. Upright Biped, another contributor here at UD, cited Pattee and others like Barbieri in the past. http://codebiology.org An example of the collection of Codes that natural evolutionist Barbieri cites...
The signal transduction codes

Signal transduction is the process by which cells transform the signals from the environment, called first messengers, into internal signals, called second messengers. First and second messengers belong to two independent worlds because there are literally hundreds of first messengers (hormones, growth factors, neurotransmitters, etc.) but only four great families of second messengers (cyclic AMP, calcium ions, diacylglycerol and inositol trisphosphate) (Alberts et al. 2007). The crucial point is that the molecules that perform signal transduction are true adaptors. They consist of three subunits: a receptor for the first messengers, an amplifier for the second messengers, and a mediator in between (Berridge 1985). This allows the transduction complex to perform two independent recognition processes, one for the first messenger and the other for the second messenger. Laboratory experiments have proved that any first messenger can be associated with any second messenger, which means that there is a potentially unlimited number of arbitrary connections between them. In signal transduction, in short, we find all the three essential components of a code: (1) two independent worlds of molecules (first messengers and second messengers), (2) a set of adaptors that create a mapping between them, and (3) the proof that the mapping is arbitrary because its rules can be changed (Barbieri 2003).
Linking this back to one of my points, comparing Order Sequence to Functional Sequence (Organized Coding and Networking) stated in the paper by Abel and Trevors. Organization at the levels Trevors and Abel outline is semantic-driven - Code. That leads to varying and wide rules-based coding, and a myriad of variables that can be utilized, preset and updated, defined and deciphered as part of the Code structure. As defined by Barbieri above. The incredible number of interpretive processes taking place between different cellular systems and external networks strengthens your point on Mutual Information. And is it too big a leap to state that such a system expands to the Nth Power without a Design-centric approach? Following the wabbit down the wabbit hole... The article on Multivariate Mutual Information (or Information Interaction) states...
"These attempts have met with a great deal of confusion and a realization that interactions among many random variables are poorly understood.[citation needed]"
When we are discussing Code(s) in DNA, like the Ubiquitin Code as an example, the variables increase. And we begin to see built-in redundancy, which is the case with Information Interaction (or Multivariate Mutual Information). Darwinists have long cited duplication and redundancy as evidence of unguided, blind evolution. Often laughing and guffawing that the Designer must be very bad due to redundant structures. On the contrary! That only shows their absolute ignorance on the subject. Redundancy is a hallmark of sophisticated design techniques. And unguided, random mutations have no power to generate novel forms that survive w/o some trade-off, and certainly not at the level of macro events. Like Behe has noted, the evidence is scant. Example: Malaria and Sickle Cell Anemia. While the survivors survive, they do so at a cost to future generations. And there are no new novel forms. Note to blind, unguided evolutionists: variations of species on the edge, like Darwin's finches, are not evidence for macro events, just variation. They still interact, produce offspring and are still finches. Even Darwin's finches show the limits of blind, unguided random mutations and natural selection. And to add insult to injury, if I remember correctly, the changes are now thought to be led by epigenetic features. LOL formerly thought to be "Junk" by Darwinists! Life is grand :) DATCG
EricMH, Regarding your Python simulations, have you sought any feedback from fellow researchers at your university (or elsewhere in academia)? I would guess you have access to quite a bit of expertise and perhaps could find a specialist to evaluate your programs. daveS
PPPS: For background, let me also tie in some thoughts in my always linked note, on the connexions between informational and thermodynamic entropy, given the informational thermodynamics school of thought:
let us consider a source that emits symbols from a vocabulary: s1, s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a "typical" long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M --> pj, and in the limit attains equality. We term pj the a priori -- before the fact -- probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori -- after the fact -- probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver:

I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1

This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that:

I total = Ii + Ij . . . Eqn 2

For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is:

I = log [1/pj] = - log pj . . . Eqn 3

This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi * pj); so:

Itot = log 1/(pi * pj) = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4

So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is - log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see "wueen" it is most likely to have been "queen.") Further to this, we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer):

- H = p1 log p1 + p2 log p2 + . . . + pn log pn

or,

H = - SUM [pi log pi] . . . Eqn 5

H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form.
Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information . . . ):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the very large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) For, as he astutely observes on pp. vii - viii:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
kairosfocus
PPS: I should note, random variables run the gamut from highest uncertainty for a value [flat random distribution] to certainty [probability 1 or 0 for the value] so using such variables is WLOG, again. kairosfocus
PS: On Mutual info (e.g. think, how much do we credibly know of info string X on observing string Y and apply to sent/stored and received/retrieved messages), Scholarpedia gives some useful thoughts:
Mutual information is one of many quantities that measures how much one random variable tells us about another. It is a dimensionless quantity with (generally) units of bits, and can be thought of as the reduction in uncertainty about one random variable given knowledge of another. High mutual information indicates a large reduction in uncertainty; low mutual information indicates a small reduction; and zero mutual information between two random variables means the variables are independent . . . . H(X) [--> entropy in the info context] has a very concrete interpretation: Suppose x is chosen randomly from the distribution PX(x), and someone who knows the distribution PX(x) is asked to guess which x was chosen by asking only yes/no questions. If the guesser uses the optimal question-asking strategy, which is to divide the probability in half on each guess by asking questions like "is x greater than x0?", then the average number of yes/no questions it takes to guess x lies between H(X) and H(X)+1 (Cover and Thomas, 1991). This gives quantitative meaning to "uncertainty": it is the number of yes/no questions it takes to guess a random variable, given knowledge of the underlying distribution and taking the optimal question-asking strategy . . . . Mutual information is therefore the reduction in uncertainty about variable X, or the expected reduction in the number of yes/no questions needed to guess X after observing Y.
In context, I point to how the structured Y/N chain of q's strategy in effect quantifies info in bits, relevantly understood. Where also, since a 3-d functionally specific configuration (think: oriented nodes and arcs in a mesh, similar to an exploded view diagram) may be described through such a chained, structured Y/N Q strategy (i.e. encoded in strings, cf AutoCAD etc), we see here that discussion in terms of strings is WLOG. Where, too, one may infer the underlying "assembly instructions/info" from observing the 3-d functionally specific structure. Proteins beg to be so analysed, and as we know, the 3-base codon pattern is tied to the significantly redundant AA sequences via the genetic code. Where, too, there is evidence that the redundancies are tied to robustness of function of proteins and perhaps to regulation of expression. Similarly, we may point to the sequences and how they link to the pattern whereby functionality of proteins is tied to key-lock fitting driven by folding patterns. (This also applies in a similar way to tRNA.) In short, the islands-of-function structure of AA sequence space is connected to the observed functional protein patterns in ways that point to purposeful selection and organisation of a complex organised system based on profound, systemic knowledge of possibilities. That's a smoking gun sign of very smart engineering, as Paley long ago recognised on comparing stumbling across a rock in a field vs finding a coherently and functionally organised watch. Where, too, in Ch 2 he rapidly went on to discuss the further significance of finding the additional functionality of a self-replicating subsystem. Yes, it was almost 150 years later before von Neumann gave us a functional analysis of the kinematic self replicator and then we saw how such worked out in the living cell. What remains is that we are looking at something that is antecedent to OoCBL, and it is clearly linguistic, purposeful, systemic and driven by deep knowledge of highly complex possibilities for C-chemistry in aqueous mediums. That then points onward to the observation that the physics of the cosmos is evidently fine tuned in ways that support such cell-based life. That is another whole domain of connected information where knowledge of the cell ties in with knowledge of the cosmos, giving a much broader scope to our design inferences. We are seeing design signatures connecting the world of life and the fine tuning of the physics of the cosmos. kairosfocus
Folks, let us not miss the forest due to looking at the trees. D/RNA expresses alphanumeric code that functions to create proteins stepwise, using molecular nanotech, all of which is by logic of process antecedent to functioning, self-replicating cell based life. This pattern strongly indicates language, purpose via goal directed system behaviour, and that such is present prior to the origin of cell based life [OoCBL]. Where, too, the CCA tool tips of tRNA's are universal, separated physically from the anticodon ends, and where specific loading with given AA's is based on general conformation of the folded tRNA. That is how the code is operationally introduced into the system. All of this is constrained by the need for high contingency to store information, so the mere dynamics of chemical bonding will not explain the system. The complexity joined to functional coherence, organisation and specificity i/l/o the further fact of deeply isolated protein-relevant fold domains in AA sequence space combine to make happy chance utterly implausible. All of these factors strongly point to design being at the heart of cell based life. So, design sits at the table as strong horse candidate right at OoCBL. This then extends to origin of body plans right down to our own. KF kairosfocus
EricMH:
It is not restricted to blind Darwinian evolution, but all forms of evolution.
That is just wrong, then. Evolution by means of intelligent design can definitely evolve increasingly complex organisms- genetic algorithms exemplify that. ET
The code shows how difficult it is to evolve increasingly complex organisms according to some simple mathematical models. It is not restricted to blind Darwinian evolution, but applies to all forms of evolution. Bob O'H had one objection, which was easily added to the model, and the model continued to demonstrate the difficulty of evolving complex organisms. Since I'm not well versed in biological evolution, that's all I can do. Someone proposes a way the model is deficient, and I see if that's true with further refinement of the model. Seems a reasonable way to proceed. Since evolutionary theory is full of hand-wavy ambiguity, such that much cannot even be falsified and is thus not science, we should seek to mathematically quantify as much as we can, and see what it takes to create a mathematical model that is as effective as evolution is claimed to be. EricMH
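For concreteness, here is a minimal sketch of the kind of model under discussion, reconstructed from the descriptions quoted in this thread (bitstring organisms of complexity L, two offspring of lengths L-1 and L+1, death if any bit is corrupted); the population cap and parameter names are assumptions, not EricMH's actual repl.it code:

import random

P_CORRUPT = 0.01    # per-bit corruption probability (the 0.01 quoted above)
POP_CAP = 100       # arbitrary cap to keep the population finite
GENERATIONS = 200

population = [1] * 20   # organisms represented only by their lengths L

for _ in range(GENERATIONS):
    offspring = []
    for L in population:
        for child_len in (L - 1, L + 1):    # one simpler child, one more complex
            if child_len < 1:
                continue
            # The child survives only if none of its bits is corrupted.
            if all(random.random() > P_CORRUPT for _ in range(child_len)):
                offspring.append(child_len)
    population = sorted(offspring, reverse=True)[:POP_CAP]

print(max(population) if population else 0)   # maximum complexity reached

Since survival probability falls as roughly 0.99^L, growth in L stalls once the expected number of surviving offspring drops below one, which is the plateau the program's title asserts; whether this models biological evolution at all is precisely what jdk, JVL and gpuccio dispute above.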
From the link:
This simulation shows random mutation and selection causes organism complexity to stop increasing
That said, there isn't anything to disprove when it comes to evolution by means of blind and mindless processes, because no one has ever demonstrated that type of evolution can do anything beyond causing genetic diseases and deformities. That's all ET
re 106: You are reading too much into my comment. Eric's program is at a site called "https://repl.it/@EricHolloway/Evolution-Doesnt-Work". I was just referring to the claim made in the thread where he first brought this up that it doesn't in any way model any real biology, so it doesn't disprove evolution, as his title seems to imply. That's all. jdk
Earth to Jack- Intelligent Design is NOT anti-evolution, so it follows that IDists are not trying to disprove evolution. What IDists argue is against blind and mindless processes being able to produce certain structures, objects, events and systems. And to date no one has stepped up and shown that evolution by means of blind and mindless processes can do anything beyond causing genetic diseases and deformities. And that makes it clear that you are the one who is not well informed in biological matters ET
re 102: Eric, you posted that code once before, claiming it disproved evolution. Some of us, including gpuccio I believe, showed you why it did not model evolution at all, and proved or disproved nothing. In 100, you say, "I am not well informed in biological matters." Your program certainly makes that clear. jdk
Hi Eric
@DATCG I am not well informed in biological matters. I can only draw analogies with my field of expertise, computer science. However, it seems clear that if we think of DNA as similar to bitstrings describing programs in computers, that there is an enormous amount of mutual information in DNA. Also, a quick google pulls up a decent amount of material applying mutual information to bioinformatics. Of possible interest, I wrote a story illustrating the core reason why natural processes cannot create mutual information.
I just reviewed a paper comparing mutual information of genes with the environment. I then searched for mutual information in PubMed and got 6000 hits. This subject is front and center in biology right now. bill cole
It's not only the code. You need the parts and system to run the code. ET
@john_a_designer, the fact that reproduction provides such a fault-tolerant channel is incredible. The only reason this is possible is because DNA is a digital code. Analog signals do not have the same error-correcting ability. I ran some simulations to demonstrate how incredible this is here: http://creationevolutionuniversity.com/forum/viewtopic.php?f=3&t=169 EricMH
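The linked simulations are not reproduced here, but a minimal sketch of the underlying point uses a 3x repetition code, one of the simplest digital error-correcting schemes; the noise level is an illustrative assumption:

```python
import random

def transmit(bits, flip_prob):
    # Noisy channel: each bit is flipped independently with flip_prob.
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def encode(bits):
    # Repeat each bit three times.
    return [b for b in bits for _ in range(3)]

def decode(received):
    # Majority vote over each triple.
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [random.randint(0, 1) for _ in range(1000)]
noisy_raw = transmit(message, 0.05)
noisy_coded = decode(transmit(encode(message), 0.05))
print("raw errors:  ", sum(a != b for a, b in zip(message, noisy_raw)))
print("coded errors:", sum(a != b for a, b in zip(message, noisy_coded)))
```

With a 5% flip rate, the uncoded message typically takes about 50 errors per 1000 bits, while the repetition-coded message takes under 10; an analog signal has no comparable discrete vote to fall back on.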
Here is a stunning claim from the Abel and Trevors paper cited by DATCG @ 97.
Genes are not analogous to messages; genes are messages. Genes are literal programs. They are sent from a source by a transmitter through a channel (Fig. 3) within the context of a viable cell. They are decoded by a receiver and arrive eventually at a final destination. At this destination, the instantiated messages catalyze needed biochemical reactions. Both cellular and extracellular enzyme functions are involved (e.g., extracellular microbial cellulases, proteases, and nucleases). Making the same messages over and over for millions to billions of years (relative constancy of the genome, yet capable of changes) is one of those functions. Ribozymes are also messages, though encryption/decryption coding issues are absent. The message has a destination that is part of a complex integrated loop of information and activities. The loop is mostly constant, but new Shannon information can also be brought into the loop via recombination events and mutations. Mistakes can be repaired, but without the ability to introduce novel combinations over time, evolution could not progress. The cell is viewed as an open system with a semi-permeable membrane. Change or evolution over time cannot occur in a closed system. However, DNA programming instructions may be stored in nature (e.g., in permafrost, bones, fossils, amber) for hundreds to millions of years and be recovered, amplified by the polymerase chain reaction and still act as functional code. The digital message can be preserved even if the cell is absent and non-viable. It all depends on the environmental conditions and the matrix in which the DNA code was embedded. This is truly amazing from an information storage perspective. (emphasis added)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1208958/ One of the key questions you have to answer, if you believe in a naturalistic dys-teleological origin for DNA or RNA, is how did chemistry create the code? Do you have any evidence of how an undirected and purposeless physical process created what we intelligent beings recognize as code? If you do please give us your explanation. Or, is it just your belief? john_a_designer
@DATCG I am not well informed in biological matters. I can only draw analogies with my field of expertise, computer science. However, it seems clear that if we think of DNA as similar to bitstrings describing programs in computers, that there is an enormous amount of mutual information in DNA. Also, a quick google pulls up a decent amount of material applying mutual information to bioinformatics. Of possible interest, I wrote a story illustrating the core reason why natural processes cannot create mutual information. https://uncommondescent.com/intelligent-design/could-one-single-machine-invent-everything/ EricMH
@Quaesitor, you seem not to have carefully understood the proof in section 1.2 of Levin's paper, and are instead just quote-mining. Perhaps you can carefully work through the proof first, and then respond to my argument. It is not very long and I can help. EricMH
And came across another paper in Briefings in Bioinformatics: Susana Vinga, "Information theory applications for biological sequence analysis," Briefings in Bioinformatics, Volume 15, Issue 3, 1 May 2014, Pages 376–389, https://doi.org/10.1093/bib/bbt068 (published online 20 September 2013). https://academic.oup.com/bib/article/15/3/376/183705
Abstract Information theory (IT) addresses the analysis of communication systems and has been widely applied in molecular biology. In particular, alignment-free sequence analysis and comparison greatly benefited from concepts derived from IT, such as entropy and mutual information. This review covers several aspects of IT applications, ranging from genome global analysis and comparison, including block-entropy estimation and resolution-free metrics based on iterative maps, to local analysis, comprising the classification of motifs, prediction of transcription factor binding sites and sequence characterization based on linguistic complexity and entropic profiles. IT has also been applied to high-level correlations that combine DNA, RNA or protein features with sequence-independent properties, such as gene mapping and phenotype analysis, and has also provided models based on communication systems theory to describe information transmission channels at the cell level and also during evolutionary processes. While not exhaustive, this review attempts to categorize existing methods and to indicate their relation with broader transversal topics such as genomic signatures, data compression and complexity, time series analysis and phylogenetic classification, providing a resource for future developments in this promising area.
Introduction
Information theory (IT) addresses the analysis of communication systems, which are usually defined as connected blocks representing a source of messages, an encoder, a (noisy) channel, a decoder and a receiver. IT, generally regarded as having been founded by Claude Shannon (1948) [1, 2], attempts to construct mathematical models for each of the components of these systems. IT has answered two essential questions about the ultimate data compression, related with the entropy of a source, and also the maximum possible transmission rate through a channel, associated with its capacity, computed by its statistical noise characteristics. The fundamental theorem of IT states that it is possible to transmit information through a noisy channel (at any rate less than channel capacity) with an arbitrary small probability of error. This was a surprising and counter-intuitive result. The key idea to achieve such transmission is to wait for several blocks of information and use code words, adding redundancy to the transmitted information [3, 4].
and...
Compression is also related with Shannon’s entropy definitions and was also applied to biological sequences. There is a clear association between these concepts: a sequence with low entropy (high redundancy) will, in principle, be more compressible and the length of the compressed sequence gives an estimate of its complexity, and consequently, of its entropy [20]. The drawback of this method is its dependency on the compression procedures, which might fail to recognize complex organization levels in the sequences. Although data compression is closely related with IT applications, a complete review of this topic is out of the scope of this work; see other surveys on average mutual information (AMI) applications [21], Kolmogorov complexity-based features [22] and a comprehensive review by Giancarlo et al. [23] for more details.
DATCG
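As a side note on the compression passage in the excerpt above: the compression-as-entropy-estimate idea is easy to see with a general-purpose compressor, with the excerpt's own caveat that such a compressor is only a crude bound and can miss higher-level organization. A small sketch (the DNA-like test strings are our assumptions):

```python
import random
import zlib

random.seed(0)
repetitive = ("ACGT" * 2500).encode()  # low entropy: one period repeated
shuffled = "".join(random.choice("ACGT") for _ in range(10000)).encode()  # high entropy

for name, seq in [("repetitive", repetitive), ("random", shuffled)]:
    compressed = len(zlib.compress(seq, 9))
    print(f"{name}: {len(seq)} bytes -> {compressed} bytes compressed")
```

The highly ordered sequence compresses to a tiny fraction of its length, while the random sequence of the same length and alphabet barely compresses at all.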
EricMH, Curious if you've read ... Three subsets of sequence complexity and their relevance to biopolymeric information by David L Abel and Jack T Trevors https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1208958/ Often, it seems disagreements between Darwinists and Design Theorists tend to be about definitions of Order and Organization. Much like a snowflake, a hurricane or the Cell (manufacturing plant). If you have read the paper, curious how you see "Mutual Information" in relation to the 3rd Subset of Functional Sequence Complexity, or Organized - Functional Sequence Complexity in the paper, as opposed to Ordered Sequence and Random Sequence. From the paper...
Genetic sequence complexity is unique in nature "Complexity," even "sequence complexity," is an inadequate term to describe the phenomenon of genetic "recipe." Innumerable phenomena in nature are self-ordered or complex without being instructive (e.g., crystals, complex lipids, certain polysaccharides). Other complex structures are the product of digital recipe (e.g., antibodies, signal recognition particles, transport proteins, hormones). Recipe specifies algorithmic function. Recipes are like programming instructions. They are strings of prescribed decision-node configurable switch-settings. If executed properly, they become like bug-free computer programs running in quality operating systems on fully operational hardware. The cell appears to be making its own choices. Ultimately, everything the cell does is programmed by its hardware, operating system, and software. Its responses to environmental stimuli seem free. But they are merely pre-programmed degrees of operational freedom.
and FSC ...
Functional Sequence Complexity (FSC) A linear, digital, cybernetic string of symbols representing syntactic, semantic and pragmatic prescription; each successive sign in the string is a representation of a decision-node configurable switch-setting – a specific selection for function. FSC is a succession of algorithmic selections leading to function. Selection, specification, or signification of certain "choices" in FSC sequences results only from nonrandom selection. These selections at successive decision nodes cannot be forced by deterministic cause-and-effect necessity. If they were, nearly all decision-node selections would be the same. They would be highly ordered (OSC). And the selections cannot be random (RSC). No sophisticated program has ever been observed to be written by successive coin flips where heads is "1" and tails is "0." We speak loosely as though "bits" of information in computer programs represented specific integrated binary choice commitments made with intent at successive algorithmic decision nodes. The latter is true of FSC, but technically such an algorithmic process cannot possibly be measured by bits (-log2 P) except in the sense of transmission engineering. Shannon [2,3] was interested in signal space, not in particular messages. Shannon mathematics deals only with averaged probabilistic combinatorics. FSC requires a specification of the sequence of FSC choices. They cannot be averaged without loss of prescriptive information (instructions).
I hope to check back in tomorrow if you reply. Thanks. DATCG
jdk, Here's a thread with some of EricMH's posts on the BB function, relating it to ID. Link to the original BB function paper. Finally, the fascinating blog entry/paper that EricMH's posts led me to. daveS
Mung, EricMH
The expected mutual information produced by an arbitrary combination of random and deterministic processing is always non-positive.
This sounds like it might be interesting, but it's really not. Mutual information is just the mutual dependence between two random variables. The dependency is a property of the variables' probability distributions, i.e. the ranges within which they randomly vary. So randomly varying a random variable isn't going to change the nature of that dependency. I'm really not sure why you think mutual information means anything more than this ... Quaesitor
Link to the busy beaver, Dave? jdk
EricMH, Off-topic: It's only now that I realize you are the Eric Holloway of the OP. I recall some very interesting posts of yours having to do with the busy-beaver function, which has provided much food for thought for me recently. daveS
@EW, glad to know you are Tom English. Despite your invective and ad hominem attacks on ID people, which are not really called for, the actual math you do is pretty informative, and it's developed my thinking on the matter a fair amount. For example, connecting ASC to the Kolmogorov minimal sufficient statistic is very interesting, and a topic I've thought about for quite a while. They don't quite connect, since the context in KMSS is dependent on X, whereas in ASC it is not, but I hope to bridge that issue at some point. You are one of only a handful of ID skeptics (one or two?) that actually posts coherent rebuttals, which I greatly appreciate. You are right, there is a fair amount of information theory that corroborates ID claims, which we are unfortunately ignorant about. That is certainly embarrassing, and demonstrates a lack of due diligence, as you point out. It has been really eye-opening to me during my dissertation research to discover just how well supported ID is by much more credible sources. Your work has been a big help in that regard. @Mung, since all finite physical objects can be generated by a random process, there is a nonzero probability that any physical object, designed or not, can be generated by flipping a coin. So, that is not really the sort of claim we are interested in. We are interested in the expected value, which is something we can make an absolute claim about. The expected mutual information produced by an arbitrary combination of random and deterministic processing is always non-positive. This is because the expectation forms the negative Kullback-Leibler distance, which is always non-positive. I would be happy to get into more empirical demonstrations of this idea, but it will take time and effort on my part, which I'm only willing to dedicate if there is sincere interest on your part. I've got a busy life, and cannot throw away time on frivolous exchanges. But I'm more than willing to do my best effort for sincere intentions to understand. So, Mung, if you are truly interested, let me know, and I'll work on something more concrete. EricMH
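A hedged empirical sketch of the expectation claim above: estimate I(X;Y) from samples, randomly post-process Y, and observe that the estimated mutual information does not go up on average. The channel parameters are illustrative assumptions:

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

random.seed(1)
xs = [random.randint(0, 1) for _ in range(100_000)]
ys = [x if random.random() < 0.95 else 1 - x for x in xs]  # Y: noisy copy of X
zs = [y if random.random() < 0.75 else 1 - y for y in ys]  # random post-processing of Y

print("I(X;Y) ~", round(mutual_information(list(zip(xs, ys))), 3))  # ~0.71 bits
print("I(X;Z) ~", round(mutual_information(list(zip(xs, zs))), 3))  # ~0.15 bits, lower
```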
Mung
I still see no connection to function.
I see where you are going and I agree that he needs to develop the connection between mutual information and function. In the case of DNA coding for proteins we do see a direct case of mutual information, as the nucleic acid sequence directly translates to the amino acid sequence, although the nucleic acid sequence can vary to some degree and still translate to the same amino acid sequence. The amino acid sequence folds into a functional protein. The question in my mind is whether his proof covers this scenario. bill cole
WB, active info is indeed a manifestation of CSI, in the context of search challenge to find islands of function. Thus, we can estimate how much info had to be intelligently added to make the functional outcome plausible relative to blind chance and/or mechanical necessity. KF PS: I have cited definitions at the indicated points in NFL. There is also a diagram. kairosfocus
ET:
Tom English, yes, an idiot savant if there ever was one.
You're a lovely person. Erasmus Wiffball
PS: Orgel, in more extended form:
living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . . [HT, Mung, fr. p. 190 & 196:] These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure.
[--> this is of course equivalent to the string of yes/no questions required to specify the relevant J S Wicken "wiring diagram" for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, -- here and -- here -- (with here on self-moved agents as designing causes).]
One can see intuitively that many instructions are needed to specify a complex structure. [--> so if the q's to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions.  [--> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes [--> Orgel had high hopes for what Chem evo and body-plan evo could do by way of info generation beyond the FSCO/I threshold, 500 - 1,000 bits.] [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196.]
kairosfocus
F/N: For record,
CONCEPT: NFL, p. 148:“The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity [cf. p 144 as cited below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways
[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites: Wouters, p. 148: "globally in terms of the viability of whole organisms," Behe, p. 148: "minimal function of biochemical systems," Dawkins, pp. 148 - 9: "Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction." On p. 149, he roughly cites Orgel's famous remark on specified complexity from 1973, which exactly cited reads: " In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . ." And, p. 149, he highlights Paul Davis in The Fifth Miracle: "Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity."] . . .”
DEFINITION: p. 144: [Specified complexity can be more formally defined:] “. . . since a universal probability bound of 1 [chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [the cluster] (T, E) constitutes CSI because T [effectively the target hot zone in the field of possibilities] subsumes E [effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
KF kairosfocus
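As a quick arithmetic check on the quoted figures: a probability bound of 1 in 10^150 converts to 150 x log2(10), roughly 498 bits, i.e. the stated 500-bit complexity bound:

```python
import math

# 1 chance in 10^150, expressed in bits of information
print(150 * math.log2(10))  # ~498.3
```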
WB, your dismissal of one of the leading astrophysicists and cosmologists of the past 100 years, speaking to matters well within his technical knowledge base, tells us volumes and not in favour of the cause you seek to promote. KF kairosfocus
EricMH:
At any rate, I am really interested in any refutation of my argument
Not to be unkind, but I am not sure what your argument is. Is it that you have a mathematical proof, based on information theory, that we cannot apply random processing to a lump of matter and expect it to turn into an onion cutting knife? Is your argument a probabilistic one? If so, should you not say that we probably would not observe it, but that doesn't mean it is impossible? I still don't know what your random variables are and what trying to reproduce at the receiver an encoded signal sent from the source has to do with making an onion cutting knife. And how do you respond to my pointing out that your argument is actually an anti-ID argument? Perhaps you go too far. :) Look, you've made some claims, I'm just trying to figure out how you justify them. Those are the missing steps one would need in order to understand why your argument should prevail. I think you can get away with very general claims such as what appears in the OP here at UD because you are essentially "preaching to the choir." But I think it's a bad habit to get into because it only serves to make ID look silly. Do you think anyone else here understands your argument? Mung
Tom English, yes, an idiot savant if there ever was one. But go ahead and respond to Eric. I will leave you be seeing that you don't have anything to offer ET
ET:
The problem is with all the sock puppets you allow to pollute UD
My name is Tom English. My handle, "Erasmus Wiffball," is ugly. I admit to that. I've been banned from UD many times. Barry can ban me now, if he thinks that's actually a judicious response. I rarely post here anymore, and simply needed a break this morning from something I'd worked on through the night. I actually came to respond to Eric Holloway, and, being just a tad sleep-deprived, went off track. I do not know for a fact that the changes in Introduction to Evolutionary Informatics were in response to my post on algorithmic specified complexity. But the circumstantial evidence is pretty strong. Erasmus Wiffball
EW:
I’ve already asked you for help, and have acknowledged that promulgation of intelligent design theory is a fantastic way to go about bringing sinners like me to Jesus.
Except for the fact that ID has nothing to do with Jesus nor sinners. So perhaps you should start with an education. ET
The Apostle Paul would be an intelligent-design theorist with a Ph.D. in electrical and computer engineering, were he alive today. Obviously.
Well evolutionary biologists can't answer anything with respect to the claims of evolutionism. They can't even formulate a scientific theory of evolution. But I digress. In Approaching Biology From a Different Angle we read:
Systems biology is a loosely defined term, but the main idea is that biology is an information science, with genes a sort of digital code. Moreover, while much of molecular biology has involved studying a single gene or protein in depth, systems biology looks at the bigger picture, how all the genes and proteins interact. Ultimately the goal is to develop computer models that can predict the behavior of cells or organisms, much as Boeing can simulate how a plane will fly before it is built. But such a task requires biologists to team up with computer scientists, engineers, physicists and mathematicians.
Not that EW would understand what that means... ET
Umm, active information is a form of CSI, duh. ET
KF:
WB, kindly cf pp 144 and 148-9, NFL. KF
What in the world do you think you accomplish with little displays like this? I've reread NFL in the past year. How about you? I was amazed at the number of passages that are closely paralleled in Introduction to Evolutionary Informatics, but are attached to active information instead of complex specified information. Of course, Marks, Dembski, and Ewert never admit that Dembski fouled up. Erasmus Wiffball
ET:
Clearly you have other issues and should seek help
I've already asked you for help, and have acknowledged that promulgation of intelligent design theory is a fantastic way to go about bringing sinners like me to Jesus. I can tell that you actually love me, and are deeply concerned about the fate of my soul. The Apostle Paul would be an intelligent-design theorist with a Ph.D. in electrical and computer engineering, were he alive today. Obviously. Erasmus Wiffball
PS: Prior to that, kindly see Orgel and Wicken. kairosfocus
WB, kindly cf pp 144 and 148-9, NFL. KF kairosfocus
@EricMH #48 Quaesitor explained it better than I can, in post #49. What you have essentially done is what is called a "quote-mine" followed by a hasty generalization. It's like taking the principles of thermodynamics that apply to closed systems, and claiming that they apply also to open systems, so they can argue that local decreases in entropy are impossible. Creationists fall for that because they often don't understand thermo, and they often don't get the concept of scope of applicability. Or because the person using the argument CONVENIENTLY omits the discussion of scope. Have you taken your required class in thermo yet? Those are so much fun. Deputy Dog
Kairos focus- I am toned down. The problem is with all the sock puppets you allow to pollute UD ET
EW:
This is not an empty jibe, but instead a straight call on what I’ve seen over the past 15 years: an ID proponent is usually someone who has gained proficiency in some area other than science, and who somehow has failed to recognize that he’s as ignorant of science as scientists are of his area of proficiency.
Right gibberish, i.e., meaningless language
I really shouldn’t look too hard at the wording of “ID has a scientifically testable methodology.”
Of course not. You prefer to be willfully ignorant. ET
ET, please tone down. KF kairosfocus
WB, you have distorted the matter. Besides, the outline made above still obtains. It is noticeable that you are emphasising personalities rather than that substance on the merits. KF kairosfocus
ET:
And the fact that you quote-mined me proves that you are a clueless dolt on an agenda
So, in full context, it was clear that you really didn't mean "gibberish" with the word gibberish? I'm leaving out crucial context, just so I can embarrass you by invoking the dictionary definition of the term? Wow, I quote-mined. I hate quote miners. How could I have stooped so low? OK, here's what I wrote, and that you called gibberish:
This is not an empty jibe, but instead a straight call on what I’ve seen over the past 15 years: an ID proponent is usually someone who has gained proficiency in some area other than science, and who somehow has failed to recognize that he’s as ignorant of science as scientists are of his area of proficiency.
Now, I believe that the meaning of the passage is very clear, so I suggested that you have your very own meaning of gibberish, which is different from that associated with the word by most people in the English-speaking world. I suppose, when accused of quote-mining, I had better dump the mine from which I got the quote:
What a load of gibberish. This is what I have seen for the last 50+ years- an evolutionist is usually someone clueless about science and what it entails. They are usually cowardly equivocators with no shame but tons of belligerence. They refuse to say how to test their claims- they refuse because no one has a clue how to do so. On the other hand ID has a scientifically testable methodology. That alone is by far more than evos can muster. Erasmus Wiffball is just another clueless evo who thinks its raw spewage is some sort of argument
So, in full context, I should understand that you really just wanted to spit out something nasty sounding, and should not take you to task on the meaning of gibberish. And if I don't properly respect your need to emote, without excessive concern for the meaning of the words you use, then I am a quote-miner. And I can only dig my hole deeper by taking you to task on the meaning of quote-miner. I really shouldn't look too hard at the wording of "ID has a scientifically testable methodology." I simply should respect your feelings about the words, and not parse them. A close parse would be unfair. Yeah. UNFAIR. If you were the president, it would be treason. Erasmus Wiffball
Look, Erasmus, YOU made the claim. Either you support it or retract it. And Intelligent Design doesn't care about you. And stop quote-mining me. Clearly you have other issues and should seek help ET
ET:
Except he didn’t drop CSI.
Sorry, but I don't understand all of this well enough to know what you mean. It would help if you first gave me definitions of complex specified information and active information. Dembski uses neither term in "Conservation of Information Made Simple." We can't decide which "information" he's writing about unless we consider the definitions. So please, pretty please, with sugar on top, help me out with this. I'm intellectually challenged, and blinded by sin. Perhaps if you set me straight on evolutionary informatics, I will see the Light, and ultimately find my way to the Lord. Intelligent design obviously should be a priority for Christians who are deeply concerned with the salvation of lost souls like mine. Erasmus Wiffball
Erasmus Wiffball is just another clueless evo who thinks its raw spewage is some sort of argument.
Ask Bob Marks or Winston Ewert what accounted for the last delay of the publication of Introduction to Evolutionary Informatics, and why the title and contents of the chapter on algorithmic specified complexity changed, and why the book turned out to be 17 pages shorter than pre-publication purchasers (including me) were told it would be. I'm sure it had nothing to do with some critical analysis that I posted online. Absolutely sure. Nothing at all. Nope. I couldn't explain evolutionary informatics, let alone identify technical errors in it, if my life depended on it. Of course, you've studied evolutionary informatics, and are ready to discuss the topic with me now -- right? Otherwise there would be some question as to who is doing the spewing. I'm sure you would not spew. Nope. No chance of it. Erasmus Wiffball
EW:
Yeah, Dembski is so serious about setting matters straight in “Conservation of Information Made Simple” that he neglects to mention that he has dropped complex specified information — the putatively conserved quantity in No Free Lunch (2002) — and switched to active information — the putatively conserved quantity in Introduction to Evolutionary Informatics (2017).
Except he didn't drop CSI. ET
No, Erasmus, all you post is gibberish. You couldn't support what you post if your life depended on it. And the fact that you quote-mined me proves that you are a clueless dolt on an agenda ET
ET:
What a load of gibberish.
You evidently have a private meaning for the word gibberish. Is gibberish, for you, an expletive to emit when you don't like an observation that you understand perfectly well? Perhaps you just like the sound of the word. Gibberish. Gibberish. Gibberish. Yeah. It kind of grows on you. Erasmus Wiffball
Dembski here may be a useful link for those actually serious about the substantial issue: https://evolutionnews.org/2012/08/conservation_of/
Yeah, Dembski is so serious about setting matters straight in "Conservation of Information Made Simple" that he neglects to mention that he has dropped complex specified information -- the putatively conserved quantity in No Free Lunch (2002) -- and switched to active information -- the putatively conserved quantity in Introduction to Evolutionary Informatics (2017). But, hey, I'm not actually serious, so I couldn't possibly know anything about the subject. If you want to know how the formal definitions for complex specified information and active information are related to one another, just ask kairosfocus. He doesn't just spout a bunch of generalities about mathematical issues, you know. He truly does get down and dirty with the math, and readily will explain the details, if you ask him kindly, and if he has the time, and if the moon is in the right phase. Erasmus Wiffball
EW:
This is not an empty jibe, but instead a straight call on what I’ve seen over the past 15 years: an ID proponent is usually someone who has gained proficiency in some area other than science, and who somehow has failed to recognize that he’s as ignorant of science as scientists are of his area of proficiency.
What a load of gibberish. This is what I have seen for the last 50+ years- an evolutionist is usually someone clueless about science and what it entails. They are usually cowardly equivocators with no shame but tons of belligerence. They refuse to say how to test their claims- they refuse because no one has a clue how to do so. On the other hand ID has a scientifically testable methodology. That alone is by far more than evos can muster. Erasmus Wiffball is just another clueless evo who thinks its raw spewage is some sort of argument ET
Great, we have a new, ignorant sock puppet to deal with. Where do evoTARDs come up with these puppets to spew their ignorance and prove that they are a cowardly lot? Why even bother if they have nothing to offer beyond proving to be an embarrassment to humans? ET
PS: I seem to be having keyboard skip issues. kairosfocus
WB, it is notorious that convergent discussions are a dime a dozen in technical areas, as different minds may move to similar conclusions once a discipline has reached a relevant stage. Schrödinger's wave equation approach and the matrix mechanics approach come to mind, as does the parallel between Newton and Leibniz. Besides, the focal issue is inference of design on empirical sign, where complex functionally specific organisation and information do not, per observation on trillions of cases, come about by blind chance and/or mechanical necessity. Where the analysis of configuration spaces and blind needle-in-haystack search challenge will rapidly give an analytical basis as to why that is so. KF PS: Dembski here may be a useful link for those actually serious about the substantial issue: https://evolutionnews.org/2012/08/conservation_of/ kairosfocus
complex, alphanumerically coded, algorithmically functional information in DNA
The thought of someone who does not recognize that as bafflegab is deeply depressing. Erasmus Wiffball
KF:
Further to this, recall that cosmological design inferences on fine tuning follow a line of work tracing to Sir Fred Hoyle…
Great example of someone who had zero comprehension of the bounds of his expertise. Erasmus Wiffball
Kairosfocus:
EW, strawmannish, ill-founded stereotype.
No, the stereotype is that they're all insecure about the size of their penises. Erasmus Wiffball
EricMH:
@DD here is a link to the law [of information non-growth]: https://core.ac.uk/download/pdf/82092683.pdf
Like, wow, you guys finally noticed Levin!? How diligent you are in your literature reviews! (I vowed, about ten years ago, that I was not going to provide references that IDists ought to have found for themselves.) What an embarrassment it is that Dembski and followers have held forth, ever so confidently, on conservation of information for two decades without noticing Leonid Levin! What's truly stunning, however, is that the term conservation of information is commonly used in quantum mechanics, and that no ID proponent, to my knowledge, has ever said anything about it. The problem is that ID wants to appeal to quantum mechanics as some sort of sanctum for notions of "minded matter," when it in fact prohibits the sort of information creation that ID attributes to minds/souls. Erasmus Wiffball
EW, strawmannish, ill-founded stereotype. Many Engineers, Applied Scientists, Pure Scientists, Computer Scientists and Mathematicians do know considerable bodies of science and for cause see good reason to hold that we can cogently infer design on observable, reliable signs. Further to this, recall that cosmological design inferences on fine tuning follow a line of work tracing to Sir Fred Hoyle, 1953 on resonances and their fine tuning significance. Similarly, as early as March 19, 1953, Sir Francis Crick wrote to his son Michael concerning the presence of complex, alphanumerically coded, algorithmically functional information in DNA. That is direct indication of language, purpose and functional design in the heart of the living cell. The design inference is not founded on scientific ignorance. Just the opposite. KF kairosfocus
Mung, I would mark a distinction between ordering (often driven by lawlike necessity e.g. convection plus Coriolis virtual forces leading to tropical cyclones) and organisation that assembles parts into a functionally coherent entity requiring particular arrangements with complexity beyond 500 - 1,000 bits. KF kairosfocus
EricMH:
Have you conducted a poll or talked with engineers to see if your theory is valid?
You've been an engineering student for two years now, have you? This is not an empty jibe, but instead a straight call on what I've seen over the past 15 years: an ID proponent is usually someone who has gained proficiency in some area other than science, and who somehow has failed to recognize that he's as ignorant of science as scientists are of his area of proficiency. Erasmus Wiffball
EricMH:
I wonder why engineers tend to believe in intelligent design?
I wonder who believes that it is legal for you, a resident of the State of Texas, to refer to yourself as an engineer? Erasmus Wiffball
EricMH writes,
You can never disprove a mathematical proof with empirical evidence
True. You can never prove a mathematical assertion with evidence either. You can propose mathematical models for the world, and provide empirical evidence for or against the validity of the model. But proof or disproof of mathematical statements themselves takes place with logic and other accepted math, not empirical evidence. jdk
EricMH, Deputy Dog
What precisely do you mean by “math that applies to a very narrow set of conditions”?
Just as I stated earlier, the "law of information non-growth”, or conservation of independence, is a very simple statement that randomly varying one variable will not affect another variable if the two variables are independent. The law does not mean information cannot ever increase by deterministic and random processes; it only applies to the "narrow set of conditions" of two independent variables. In any case, as Levin notes in the quoted paper, the foundation for this law is the Independence Postulate, and "not being a mathematical assertion (the physical world is not chosen mathematically), the Independence Postulate (like, e.g., Church's thesis) cannot be proven." Quaesitor
@DD, can you explain what you mean by this? >As Quaesitor and Mung pointed out earlier, it would be foolish to draw broad conclusions about how order and information can arise in the universe from math that applies to a very narrow set of conditions. What precisely do you mean by "math that applies to a very narrow set of conditions"? Are you saying that the laws of probability and logarithms, which is all information theory is, only apply to communication systems and not to the rest of the universe? Or that in general math only applies to some areas of the universe and not to others? If so, I would like to open a bank account in the Land of No Math :D At any rate, I am really interested in any refutation of my argument, so it'd be great to get something more than the pseudo-intellectual "I doubt it" card. EricMH
EricMH: "we just have to figure out where the fault lies in the supposed counter evidence." Or.... figure out where the fault lies in our assumptions about where the math/theory is applicable, and where it is not. As Quaesitor and Mung pointed out earlier, it would be foolish to draw broad conclusions about how order and information can arise in the universe from math that applies to a very narrow set of conditions. Unless, of course, you are trying to get a grant from the Discovery Institute.... because they just might fall for it. Deputy Dog
@DD & ET, I see what you are saying. But, still there is no more order there than implicit in the initial conditions. Hurricanes don't pop into existence ex nihilo. The conditions have to be just right for them to happen, and interestingly it is provable there must always be a vortex somewhere due to the fact our world is a sphere. And examples aside, the result is proven mathematically. You can never disprove a mathematical proof with empirical evidence. Contradiction is always an illusion, and we just have to figure out where the fault lies in the supposed counter evidence. EricMH
Wow, devastating refutation. What a clueless clown you are Depudie: 1- YOU brought up STAR formation 2- I brought up our Sun and the FACT that more than mere gravity is needed to explain hydrogen gathering with only hydrogen, when heat causes expansion 3- You, like the coward that you are, switched to gas giants 4- I responded with facts 5- You, the ever petulant child, responded with belligerence You are so dense that you are a walking black hole. ET
ET: I will inform NASA of your theories. Maybe they will launch a probe to plumb the depths of your ignorance. Deputy Dog
Wow, so our Sun is now a gas giant? Really? And we don't know how the gas giants formed. We don't know if they have a core made out of the heavier elements. With their own gravity I would expect their cores to at least resemble something solid. But then again the Sun doesn't seem to follow that and we were talking about STARs ET
ET: It did. Those are called gas giants. We have four of them in our solar system. Deputy Dog
Deputy Dog- There needs to be more than gravity to gather hydrogen together, especially when the heat generated wants to stop that process and expand. Why didn't the hydrogen gather around all of the heavy elements that make up the rocky planets, if gravity was the sole motivator? ET
@EricMH: To add to ET's statement about hurricanes, another example is star formation. Gravity gathers hydrogen into a star, and the result is more ordered than it was previously, all through natural processes. Deputy Dog
No, hurricanes are very ordered. That is how we know they are hurricanes and not just wind and rain. Hurricanes are well organized storms. ET
@ET >Hurricanes increase their order beyond their initial conditions. Don't you mean disorder? EricMH
EricMH:
Correct, without the ability to create information, all natural processes cannot increase order beyond their initial conditions.
Hurricanes increase their order beyond their initial conditions. What are we missing? ET
@Mung, missed this question: > There are no organizing forces in nature? Correct, without the ability to create information, all natural processes cannot increase order beyond their initial conditions. EricMH
EricMH, Well, I have seen a lot of A/Mats whip out the "not convinced" card when presented with evidence that violates their worldview. It tends to come off as a very personal and subjective way to react. It's not a very scientific posture, IMO. It's almost like salesman vs customer language. Andrew asauber
@asauber, ah great. I'm very convinced by this sort of argument, and I would like to help anyone with whatever part doesn't make sense. Perhaps I can uncover what is wrong with the argument that way, too. In either case, everyone benefits. EricMH
EricMH, I was just saying that "not convincing" is one of those arm waving motions. I didn't intend to aim anything at you. Andrew asauber
@asauber, alright, fine by me. I cannot help you if I don't know what is unconvincing. It is just standard information theory, and proves the whole ID argument is correct. EricMH
not convincing
This line of argumentation is not convincing. Andrew asauber
@Mung, can you explain what is not convincing about my example? Is it just because the knife and onion are not a telecommunication system? Information theory is mathematics, which applies to everything, regardless of the context it was invented in. EricMH
EricMH:
The DPI and the like say that we cannot apply random processing to a lump of matter and expect it to turn into an onion cutting knife.
I disagree. Obviously. :) I think you are making claims that you have not yet justified and in fact cannot justify. You've just wrapped up nonsense in fancy language and expect people to blindly accept it. If I thought there was something to your claims I'd use them myself against the opponents of ID. But first they are going to require a bit more in depth work. I still see no connection to function. And following up on Quaesitor's post, what are your two random variables? And as my post shows, the DPI applies to trying to reproduce at the receiver an encoded signal sent from the source. IOW, communications theory. It says more about what design can do than what natural processes can or cannot do. Mung
@Quaesitor > It does not mean information cannot increase by deterministic and random processes ever; it only applies to the case of two independent variables. If you look at the proof in section 1.2 it exactly means that. It doesn't matter how independent the variables are at the beginning. They might be highly correlated. But still no amount of random and deterministic processing can increase the mutual information further. EricMH
Deputy Dog Regarding the "law of information non-growth", otherwise called conservation of independence, see: L. A. Levin “Laws of Information Conservation (Nongrowth) and Aspects of the Foundation of Probability Theory”, Probl. Peredachi Inf., 10:3 (1974), 30–35. This means that, if you have two independent random variables, varying one will not change the other. It's a definition in probability theory that two variables are independent if and only if they have no "mutual information". The word "information" appears because if variables are independent, then experimentally varying one will not reveal any "information" about the other. So, EricMH is correct that "mutual information cannot be increased by deterministic and random processes", but that just means that if two variables are truly independent, changing one will not reveal any information about the other. It does not mean information cannot increase by deterministic and random processes ever; it only applies to the case of two independent variables. Which is kind of obvious. Quaesitor
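The independence point above can be checked directly from exact joint distributions: independent variables carry zero mutual information, while dependent ones do not. A small sketch; the example distributions are our assumptions:

```python
import math

def mi(joint):
    """Exact mutual information (bits) from a dict mapping (x, y) -> probability."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
dependent   = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
print(mi(independent))  # 0.0
print(mi(dependent))    # ~0.531 bits
```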
@DD here is a link to the law: https://core.ac.uk/download/pdf/82092683.pdf @Mung Think of X as a lump of matter and Y something we want to do with that lump of matter, such as cut up onions. Knife shaped lumps of matter have much more correlation with the function chopping onions than cups of water. So, I(knife : cut onion) > I(cup of water : cut onion). The DPI and the like say that we cannot apply random processing to a lump of matter and expect it to turn into an onion cutting knife. EricMH
@EricMH #21 I can find nothing about a "law of information non-growth". Can you please provide a reference document of some kind that explains this? Thanks! Deputy Dog
Intuitively, the data processing inequality says that no clever transformation of the received code (channel output) Y can give more information about the sent code (channel input) X than Y itself.
I don't see how this has anything to do with function, meaning, or what natural processes can or cannot do. Sorry. From the OP:
Thus, without an organizing force, matter is functionless and void, and there is no meaning.
There are no organizing forces in nature? Mung
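For reference, the data processing inequality quoted above can be verified exactly on a small Markov chain X -> Y -> Z, where Z is computed from Y alone; the channel flip probabilities are illustrative assumptions:

```python
import math

def mi(joint):
    """Exact mutual information (bits) from a joint distribution dict."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# X uniform on {0,1}; Y flips X with prob 0.1; Z flips Y with prob 0.2.
joint_xy, joint_xz = {}, {}
for x in (0, 1):
    for y in (0, 1):
        pxy = 0.5 * (0.1 if x != y else 0.9)
        joint_xy[(x, y)] = joint_xy.get((x, y), 0.0) + pxy
        for z in (0, 1):
            pz = 0.2 if y != z else 0.8
            joint_xz[(x, z)] = joint_xz.get((x, z), 0.0) + pxy * pz

print("I(X;Y) =", round(mi(joint_xy), 3))  # ~0.531
print("I(X;Z) =", round(mi(joint_xz), 3))  # ~0.173, never exceeding I(X;Y)
```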
EricMH:
I(X;Y) = H(Y) – H(Y|X) Where H(Y) is the entropy of Y, and H(Y|X) is the conditional entropy. Basically, it is measuring how much uncertainty we have about Y, and then how much X is able to reduce the uncertainty.
Thank you for your reply. Let me try again. From the OP: In information theory, function is a kind of mutual information. In information theory, what kind of mutual information is function? Mung
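The decomposition quoted above can be computed directly for a small example: pick a joint distribution over two binary variables, compute H(Y) and H(Y|X), and take the difference. A minimal sketch; the joint distribution is our assumption:

```python
import math

def H(dist):
    """Shannon entropy in bits of a dict mapping outcome -> probability."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.15, (1, 1): 0.35}
px = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in (0, 1)}
py = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in (0, 1)}

H_y = H(py)
# H(Y|X) = sum over x of p(x) * H(Y | X=x)
H_y_given_x = sum(px[x] * H({y: joint[(x, y)] / px[x] for y in (0, 1)})
                  for x in (0, 1))
print("I(X;Y) =", H_y - H_y_given_x)  # ~0.191 bits of uncertainty removed by X
```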
How about if the cloud weasel is about half-way in between “amazing detail” and “a few cotton balls”?
Quaesitor, How bout if the cloud weasel moved its tail back and forth? The point is we all can see this is a Mungian ill-posed hypothetical that is lacking in crucial information. I'm sure he will step in soon to improve it. ;) Andrew asauber
@Mung thank you very much for the good questions. >>Function cannot emerge from atoms in motion. It cannot emerge from shaking the Lego box. This claim can be proven mathematically. In information theory, function is a kind of mutual information. Mutual information is subject to the law of information non-increase, which means mutual information and thus function cannot be created by natural processes. Thus, without an organizing force, matter is functionless and void, and there is no meaning. >I have no idea what you mean here. So many questions. >Information theory doesn't deal with meaning, and thus, it does not deal with function. So it doesn't prove anything in that respect. Certainly no mathematical proof. When X is meaningful, it tells us about some other thing Y. Mutual information is how much one random variable tells us about another random variable. This seems like a pretty good fit for what we call "meaning". The same with function. X is functional if it performs function Y, so X has information about Y. Again, this looks like mutual information. There are two main ways of talking about mutual information (that I know of), and they are asymptotically equivalent. The first is within the context of Shannon's information theory, and the amount of information X has about Y is expressed: I(X;Y) = H(Y) - H(Y|X), where H(Y) is the entropy of Y, and H(Y|X) is the conditional entropy. Basically, it is measuring how much uncertainty we have about Y, and then how much X is able to reduce the uncertainty. The second form is called algorithmic mutual information, and is measured very similarly to Shannon's version: I(X:Y) = K(Y) - K(Y|X*). Here, K(Y) is the Kolmogorov complexity of Y, which is the length of the shortest program that can generate Y. K(Y|X*) is the conditional Kolmogorov complexity, which is the length of the shortest program that generates Y when it is given the shortest program that generates X as an input. >What is the law of information non-increase? Does it come from information theory? And in information theory mutual information has nothing to do with any law of information non-increase. The law does come from information theory. Both variants of mutual information have their own law. The Shannon form has the data processing inequality. The algorithmic mutual information law is more powerful, and goes by a few different names: "law of information non-growth" and "independence conservation." Both laws say mutual information cannot be increased by deterministic and random processes. Deterministic and random processes seem to encompass all possible natural processes. Based on that premise, the laws prove that no natural process can generate meaning or function. >>The fundamental insight of the intelligent design movement is that we can empirically differentiate function from accidental patterns created by natural processes. >Are you saying that ID can tell us whether a cloud formation that looks like a weasel was designed that way as opposed to having been formed by a natural process? ID can only guarantee true positives, so it can only answer "Yes" or "I don't know" when asked "Is object X designed?" In the context of the article, Francis Bacon claimed the only answer science can give is "I don't know." However, ID has proven science can also give the "Yes" answer. EricMH
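K(Y) itself is uncomputable, but a real compressor gives a crude upper-bound proxy, enough for a rough illustration of the algorithmic form. This is only an approximation, not the quantity in the proofs; zlib and the sample strings are our assumptions:

```python
import os
import zlib

def C(data: bytes) -> int:
    """Compressed length: a crude upper-bound stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

x = b"the cell stores digital information " * 40
y = x.replace(b"digital", b"genetic")  # shares most of x's structure
z = os.urandom(len(x))                 # unrelated, incompressible bytes

# Shared structure shows up as C(x) + C(y) - C(x + y) > 0.
print("approx I(x:y):", C(x) + C(y) - C(x + y))  # clearly positive
print("approx I(x:z):", C(x) + C(z) - C(x + z))  # near zero
```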
Seversky: Has it occurred to you that the fact that a range of subjective human experiences can be elicited by psychotropic drugs or electrical or transcranial magnetic stimulation point to them being epiphenomena of the activities of the physical brain rather than any “spiritual” influence? See, you never acknowledge what is well known in the field, which is that these experiences elicit dramatic alleviation of mental symptoms, and personality changes. You ignore this because you are ignorant of the research, and it would tend to refute your view of what is going on. In short, mental illness and addiction can be obliterated, and materialist approaches are an abject failure in the same endeavor. What you propose doesn't have to occur to me because I have studied the literature for decades and you have not even for an hour, and so I have seen discussed in the literature these types of shallow attempts to "explain" away the phenomena, in order to 'save' materialist explanations of these phenomena. This is a type of research where the researchers typically also take on the role of subjects so as to understand these phenomena from the point of view of the subjects. In fact this is a required part of the training for the researchers. If you were not ignorant of the field, you would know that NONE of the hundreds of researchers over the decades say what you do to explain away the results of the research. Here you go, the FDA just made it a little easier for the research to be stepped up, BECAUSE of the spectacular benefits that the research activity has manifested: https://www.blacklistednews.com/article/68007/the-fda-just-approved-paypal-founders-project-to-use-magic-mushrooms-to-treat.html groovamos
Mung: Are you saying that ID can tell us whether a cloud formation that looks like a weasel was designed that way as opposed to having been formed by a natural process? asauber: Does your Hypothetical Cloud Weasel look like a weasel in amazing detail or does it look like a few cotton balls attached to each other?
How about if the cloud weasel is about half-way in between "amazing detail" and "a few cotton balls"? Quaesitor
Are you saying that ID can tell us whether a cloud formation that looks like a weasel was designed that way as opposed to having been formed by a natural process?
Mung, Does your Hypothetical Cloud Weasel look like a weasel in amazing detail or does it look like a few cotton balls attached to each other? Andrew asauber
Mung:
Are you saying that ID can tell us whether a cloud formation that looks like a weasel was designed that way as opposed to having been formed by a natural process?
What is the function of such a cloud formation, Mung? mutual information for Mung ET
I have some questions about the OP which I hope the author will address.
Function cannot emerge from atoms in motion. It cannot emerge from shaking the Lego box. This claim can be proven mathematically. In information theory, function is a kind of mutual information. Mutual information is subject to the law of information non-increase, which means mutual information and thus function cannot be created by natural processes. Thus, without an organizing force, matter is functionless and void, and there is no meaning.
I have no idea what you mean here. So many questions. Information theory doesn't deal with meaning, and thus it does not deal with function. So it doesn't prove anything in that respect. Certainly no mathematical proof. What is the law of information non-increase? Does it come from information theory? And in information theory, mutual information has nothing to do with any law of information non-increase.
The fundamental insight of the intelligent design movement is that we can empirically differentiate function from accidental patterns created by natural processes.
Are you saying that ID can tell us whether a cloud formation that looks like a weasel was designed that way as opposed to having been formed by a natural process? Mung
Seversky, an ad hominem distractor. Would you go to somebody indoctrinated in an ideology and locked into a system that punishes deviations from the party line, if the question at stake concerns the core commitments of that very ideology? That is, sadly, what we are dealing with. And, for 60+ years we have known that DNA contains a complex, in key part algorithmically functional, alphanumeric code backed up by molecular nanotech execution machinery. This directly implies language and purpose at the heart of cell-based life. It is evolutionary materialistic scientism, applying ideological lockout, that keeps the patent import of that off the table. Relevant engineers, applied scientists and mathematicians, as well as medical practitioners, are not under the domination of that lockout and are crying out that the emperor's new wardrobe is a massive malfunction. KF kairosfocus
Folks, the first big hole in the Baconian picture, as played out in modern empiricism, is the pivotal role of as abstract and objective a discipline as we get: Mathematics, a decidedly non-inductive study. To do Math, we need the rational, responsible freedom required for logical inference and for filling out logic model worlds, which is exactly what GIGO-limited computational substrates cannot do: rationally contemplate. So, we are dealing with something beyond the evolutionary materialist picture. The second hole is as EricMH pointed out: inductive studies imply provisionality, in which observations are more reliable than explanations -- where empirical observations include a given model's reliability so far over such and such a range. Where, too, engineering is far more often about workable and robust compromises and tradeoffs than about neatly mathematical optima. KF

PS: A reminder on the limitations of computational substrates when it comes to actual reasoning:
. . . let us suppose that brain state A [--> notice, state of a wetware, electrochemically operated computational substrate], which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief [--> conscious, perceptual state or disposition] that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.
kairosfocus
@Seversky Have you conducted a poll or talked with engineers to see if your theory is valid? EricMH
Seversky, show me exactly how the religion of Darwinism qualifies as a real science, instead of the pseudoscience that it actually is, and then I might start taking evolutionary biologists' opinions as seriously as I take the opinions of people who actually work in the "real" world in the hard sciences, doing "real" stuff that actually matters and that actually makes other people's lives better, i.e. Engineers, Doctors, Computer Programmers.
Darwin’s Theory vs Falsification – video: https://www.youtube.com/watch?v=8rzw0JkuKuQ

“There are five standard tests for a scientific hypothesis. Has anyone observed the phenomenon — in this case, Evolution — as it occurred and recorded it? Could other scientists replicate it? Could any of them come up with a set of facts that, if true, would contradict the theory (Karl Popper’s “falsifiability” tests)? Could scientists make predictions based on it? Did it illuminate hitherto unknown or baffling areas of science? In the case of Evolution… well… no… no… no… no… and no.” – Tom Wolfe – The Kingdom of Speech – page 17

Darwinian Evolution Fails the Five Standard Tests of a Scientific Hypothesis – video: https://www.youtube.com/watch?v=L7f_fyoPybw
bornagain77
Seversky:
Engineers work with known quantities towards optimal design the whole time.
Well, they try to do the best they can with what they have.
If they’re designing and building bridges or aircraft or skyscrapers that must work as intended, they have to.
But they don't have to be optimal.
To those whose work involves design at the most fundamental level, everything looks designed.
That is a load of crap. Those whose work involves design at the most fundamental level know intelligent design when they see it, and they also know when an intelligent agent isn't required to produce whatever it is. They best understand cause-and-effect relationships. ET
Seversky:
Has it occurred to you that the fact that a range of subjective human experiences can be elicited by psychotropic drugs or electrical or transcranial magnetic stimulation points to them being epiphenomena of the activities of the physical brain rather than any “spiritual” influence?
That doesn't follow. I can change the hardware of a computer such that it garbles the software. Does that mean that the software is really hardware? ET
Seversky:
No, but I give their views on the theory no more weight than yours or mine.
Why is that? It isn't as if your evolutionary biologists have any answers, or a clue.
So, let me ask you, suppose you or someone dear to you began to suffer a serious, possibly life-threatening, decline in health, would you consult a plumber or a lawyer to find out what is wrong?
That depends. I had 5 orthopedic surgeons, experts in their field, tell me that my right knee was fine, even though I, a non-expert, knew it was far from fine. The 6th surgeon also said it was fine, but he said he could get a look inside, as by that time a tear had occurred in the meniscus. During that surgery he discovered that my ACL, the one the first surgeon had replaced 2 years earlier, was so messed up that it wasn't really doing anything. So he had to schedule another ACL replacement surgery for the knee that he and 5 other experts had said was fine. The MRIs didn't catch it. The weak tests they conducted didn't catch it. I knew I had a problem, though. And the reason I pushed it was my knowledge of kinesiology: I pushed through 5 surgeons to get one who would finally listen, did something, and found and repaired the problem. So yes, I would definitely take advice from a kinesiologist if I couldn't find an orthopedic surgeon to diagnose my problem. But I digress. Yes, engineers have a place in biology: Approaching Biology From a Different Angle, to wit:
Systems biology is a loosely defined term, but the main idea is that biology is an information science, with genes a sort of digital code. Moreover, while much of molecular biology has involved studying a single gene or protein in depth, systems biology looks at the bigger picture, how all the genes and proteins interact. Ultimately the goal is to develop computer models that can predict the behavior of cells or organisms, much as Boeing can simulate how a plane will fly before it is built. But such a task requires biologists to team up with computer scientists, engineers, physicists and mathematicians.
Welcome to 21st century biology, Seversky ET
bornagain77 @ 5
So Seversky let me get this straight. You think that it is a point in favor of Evolution that a large percentage of Engineers, Computer Programmers and Medical Doctors think that your theory is bunk?
No, but I give their views on the theory no more weight than yours or mine. So, let me ask you, suppose you or someone dear to you began to suffer a serious, possibly life-threatening, decline in health, would you consult a plumber or a lawyer to find out what is wrong? After all, they've read some articles on the Internet and visited some medical websites, so you'd think they have some idea of what they're talking about. Or would you go to your family physician, and maybe subsequently an oncologist, because they might actually know more about medicine than your plumber or lawyer or engineers or computer programmers? Seversky
groovamos @ 3
BTW I have thrown up many times in answer to Seversky the research on these substances, which consistently demolishes the utility of scientific materialism in any study of the human mind, and I am about to supply another one; yet Seversky never is able to refute this slant on human experience or even acknowledge that this research exists in his responses on here. So here I will supply another such reference and dare Seversky to comment on its validity vis a vis his own particular philosophical commitment. Here is a review referencing a recent book and also a two volume set on this research:
Has it occurred to you that the fact that a range of subjective human experiences can be elicited by psychotropic drugs or electrical or transcranial magnetic stimulation points to them being epiphenomena of the activities of the physical brain rather than any "spiritual" influence? Seversky
EricMH @ 2
@Seversky I wonder why engineers tend to believe in intelligent design? What is your theory for the Salem Hypothesis? It is certainly mysterious, isn’t it?
Engineers work with known quantities towards optimal design the whole time. If they're designing and building bridges or aircraft or skyscrapers that must work as intended, they have to. Where many lives depend on them getting it right, unanswered questions - saying "I'm afraid I don't know" - don't cut it. Anything less than the highest degree of confidence - in effect, certainty - is rejected. In a sense, the messy uncertainties and unanswered questions, that are meat and drink to cutting-edge science, are anathema to the engineering mindset. But, the old adage, to the man with a hammer everything looks like a nail, applies. To those whose work involves design at the most fundamental level, everything looks designed. Seversky
So Seversky let me get this straight. You think that it is a point in favor of Evolution that a large percentage of Engineers, Computer Programmers and Medical Doctors think that your theory is bunk? Funny, I would definitely not consider that to be a point in favor of your theory. :) Only in the twisted pseudo-scientific world of Darwinian reasoning is such a devastating fact against a theory to be considered a point in its favor.
“Biologists must constantly keep in mind that what they see was not designed, but rather evolved.” - Francis Crick - co-discoverer of DNA helix
In the following "guided tour" video of a human cell, it is easy to see why Crick had to live in denial of the design that he saw
Cellscape VR (Virtual Reality) Biology - Guided Tour Final - video (2018): https://www.youtube.com/watch?v=0A56uOVluNM

Cross-section of DNA: https://i.pinimg.com/originals/6f/1e/91/6f1e91f1adb21fb9636f51e977eaf69d.jpg
bornagain77
@groovamos #3 You do realize that employers now research a prospective employee's online presence before hiring? Do you want evidence of your "substance" usage posted anywhere that they could find? Deputy Dog
Seversky: Another datapoint in support of the Salem Hypothesis

The source of the so-called Salem Hypothesis could be this: engineers are not as subject to the steel-trapped, lockstep regimentation as are academic scientists, who face ostracization in the workplace if they show any sympathy towards any philosophical stance that does not have an affinity for philosophical materialism or scientific materialism. This is why you have been reading of Silicon Valley engineers and mathematicians employing entheogens ("microdosing") in their work, when you never seem to hear of academic scientists or mathematicians doing so.

BTW I have thrown up many times in answer to Seversky the research on these substances, which consistently demolishes the utility of scientific materialism in any study of the human mind, and I am about to supply another one; yet Seversky never is able to refute this slant on human experience or even acknowledge that this research exists in his responses on here. So here I will supply another such reference and dare Seversky to comment on its validity vis a vis his own particular philosophical commitment.

Here is a review referencing a recent book and also a two volume set on this research: https://religionnews.com/2016/04/29/psychedelic-drugs-can-deepen-religious-experiences-commentary/

Here is a link to the details on the book published by Columbia University, and I dare Seversky to comment on this: https://cup.columbia.edu/book/sacred-knowledge/9780231174060

Here is a set of reviews from the two volume set by the U of California press. BTW one of the contributors to this anthology is Stanislav Grof, whom I have mentioned several times on UD: http://nr.ucpress.edu/content/20/2/117

BTW I am an engineer with an MSEE working on a Ph.D. thesis. You will not find many hard-core Darwinian partisans with an interest in the research into these substances; maybe that is why Seversky refuses to respond. groovamos
@Seversky I wonder why engineers tend to believe in intelligent design? What is your theory for the Salem Hypothesis? It is certainly mysterious, isn't it? EricMH
Another datapoint in support of the Salem Hypothesis. Seversky
