
Eric Holloway: ID as a bridge between Francis Bacon and Thomas Aquinas

Eric Holloway, an electrical and computer engineer, offers some thoughts on how to prevent science from devolving into “scientism.” For an example of scientism, see Peter Atkins’s claim that science can answer all the Big Questions. Here’s John Mark Reynolds’s outline of the general problem:

Sometimes a culture takes a right road, sometimes it passes the right way and ends up a bit lost. Western Europe had a chance at the start of seventeenth century to get a few things right, but by the eighteenth century most had taken a worse way: Enlightenment or reaction. Enlightenment lost the wisdom of the Middle Ages, creating the myth of a dark age, and the main enlightened nation, France, ended the seventeenth century in butchery and dictatorship. Instead of the development of an urbane Spain Cervantes might have prefigured, there was a mere reaction away from the new ideas, including the good ones. More.

Intelligent Design: The Bridge Between Baconian Science and Thomistic Philosophy

Imagine giving your friend a good book filled with beautiful pictures and stories. Instead of reading it, the friend begins to count the letters, make theories about which letters predict which pictures will come next, and analyze the types of ink used to print the pages. This does not make sense. Why doesn’t he just read the book? The reason, he claims, is that he does not want to bias himself by assuming the ink was arranged purposefully.

[Image: Thomas Aquinas]

This story illustrates the difference in perspective between the Middle Ages and our modern scientific age. The medieval worldview was marked by the voluminous philosophy of Thomas Aquinas (1224/6–1274). The worldview of that time was that God is ultimate existence, and creation is ordered towards maximizing its existence in God. As such, there is a natural law that must be followed for humankind to flourish. Deviation from the natural law results in cessation of existence and death. Due to the ability of the human mind to rationally grasp changeless principles, the medievals thought there was something changeless and immortal about the human soul. Since all other physical creatures lack this rational ability, they exist to a less perfect degree than human beings. This means that all humans inherently have a higher worth than all the rest of physical creation, and at the same time all humans are equal, since it is of the nature of humankind to be rational, even if particular humans are incapable of rational thought.
But the intricate medieval tapestry begins to unravel. An expanding view of the globe, major diseases and wars, and internal criticisms lead to a breakdown of the Thomistic system. Francis Bacon (1561–1626), a leading popularizer of what we consider modern science, grows impatient with the monks’ philosophizing and debating. Demanding results, Bacon recommends carefully dissecting nature’s mysteries to heal the world’s suffering, instead of wondering about the meaning of it all. And thus was born the modern scientific age, where the perception of meaning is only a biased illusion and truth must be empirically measurable.

Today, Bacon’s view is the dominant view, so much so that we take it for granted. Science and technology have led to a revolution in health, wealth and material happiness throughout the world. In the space of a few centuries they have lifted the majority of the earth’s booming population out of poverty. The rigorous vision of Bacon, spoken with the precision of math, has given us the gift of the gods, but it has also resulted in unprecedented death and destruction, horrific human experimentation, mass enslavement, cultural disintegration, and in general left us with a sense that we have lost something of great value that we cannot find again. The core reason for the aimlessness is that the building blocks of science are inert. They are like Legos in a box. You cannot shake the box of Legos and expect a spaceship to fall out. In the same way, mathematical proof and physical evidence cannot explain their own reason for being. Science cannot explain meaning. At the same time, the very inability of science to speak for itself says something of interest.

[Image: Francis Bacon]

In medieval language this missing meaning is called function. Function cannot emerge from atoms in motion. It cannot emerge from shaking the Lego box. This claim can be proven mathematically. In information theory, function is a kind of mutual information. Mutual information is subject to the law of information non-increase, which means mutual information and thus function cannot be created by natural processes. Thus, without an organizing force, matter is functionless and void, and there is no meaning.
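One standard form of this non-increase principle for Shannon mutual information is the data processing inequality: post-processing a variable by chance or necessity (a random or deterministic channel) cannot increase its mutual information with anything else. Below is a minimal numerical sketch of that inequality; the toy channel, distributions, and variable names are invented purely for illustration and are not part of the original argument.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_info(joint):
    """Mutual information (in bits) of a discrete joint distribution P(A, B)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

# A toy channel: Y is a noisy reading of X, so some "order" in Y traces back to X.
p_x = np.array([0.25, 0.25, 0.25, 0.25])           # uniform source over 4 symbols
p_y_given_x = np.array([[0.7, 0.1, 0.1, 0.1],
                        [0.1, 0.7, 0.1, 0.1],
                        [0.1, 0.1, 0.7, 0.1],
                        [0.1, 0.1, 0.1, 0.7]])
joint_xy = p_x[:, None] * p_y_given_x               # P(X, Y)

# "Shaking the box": post-process Y through an arbitrary random channel to get Z,
# giving the Markov chain X -> Y -> Z.
p_z_given_y = rng.dirichlet(np.ones(4), size=4)     # arbitrary noisy processing
joint_xz = joint_xy @ p_z_given_y                   # P(X, Z)

print("I(X;Y) =", round(mutual_info(joint_xy), 4))
print("I(X;Z) =", round(mutual_info(joint_xz), 4), "<= I(X;Y)  (data processing inequality)")
```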

The fundamental insight of the intelligent design movement is that we can empirically differentiate function from accidental patterns created by natural processes. This means we can describe the Thomistic system with Baconian empirical precision, if we really want to. Fortunately, humans seem to be pretty good at identifying function without huge amounts of empirical justification, unless they are university trained. The empirical detection of function is a new pair of glasses that corrects Bacon’s vision and helps us again follow the path that winds back through the medieval monasteries of Thomas Aquinas, with the mathematical and empirical rigor of science.

But, after hearing this, Bacon will say, “It all sounds quite nice, but how is it useful? Function doesn’t feed children or cure cancer.” The answer to Bacon’s question is illustrated by the story of the book at the beginning. If we approach the natural world as if it were arbitrarily put together, then we miss many clues that can help us to understand and use it better.

We are seeing the scientific importance of empirically detecting function now with the ENCODE project. Previously, scientists believed that since the human genome was produced by evolution, most of it would be random and functionless. However, the ENCODE project has shown the majority of the human genome is functional. Now that we understand the genome is mostly functional, we will be better able to decode how it works and programs our body. So, contrary to Bacon, being able to detect function in the human genome can help us improve our lives.

This raises the further question: how would science change if we broadened our detection of function to the rest of the world? Since things work better if they follow their function, does this mean there is a proper order for human flourishing, as the medievals believed? Furthermore, what does science have to say about the creators of function, such as humans? Since matter cannot create function, function creators cannot be reduced to matter. And being more than matter, human beings must be more valuable than any material good. While it is true we cannot go from is to ought, intelligent design does provide a scientific basis for human ontological and pragmatic worth, as well as justify a natural law that must be followed in order for humanity to prosper. So, through the lens of intelligent design, science can indeed talk about the metaphysical realm of value and morals, and explain the medieval worldview of function in the empirical language of modern science.

Note: This post also appeared at Patheos (August 30, 2018)

See also: Could one single machine invent everything? (Eric Holloway)

and

Renowned chemist on why only science can answer the Big Questions (Of course, he defines the Big Questions as precisely the ones science can answer, dismissing the others as not worth bothering with.)

Comments
@Mung & Bill Cole, when I talk about mutual information, it is not only between DNA strands. That is probably the weakest way to apply the argument, as we've seen in my interactions at PS. What I had more in mind is where X is a DNA strand, and Y is some way of describing X. Y could be a mathematical formula or information about the organism's environment. Alternatively, X could be some organs in the animal and Y could be a description of how they work together to perform the function. There are many ways to fill in the variables X and Y. The point is that if there is mutual information between the two, and the conditions for the LoING are met, then something other than chance and necessity must be involved in the generation of X.

The bigger point is that this is what ID has been saying with regard to CSI. Many have claimed CSI is bogus math, but what I've found is that CSI can be coherently defined within information theory with the concepts of mutual information and the law of information non-growth. So, at the very least, the math is not bogus: we can potentially measure quantities that indicate intelligent design, even if it is unclear exactly how the theory applies to areas of interest such as biology.

EricMH
October 16, 2018, 11:41 AM PDT
I publicly apologize for the unprofessional comment regarding Dr. Swamidass in comment #146. I should not have made accusations about his argumentation without specifics.

EricMH
October 5, 2018, 10:15 AM PDT
@Mung

> Even if Y has no effect on the probability distribution of X, can you not still calculate the mutual information?

If X (the DNA symbols) was generated by a uniform distribution, there wouldn't be any mutual information with Y. Basically, the mutual information is showing there is a better explanation for X than the uniform distribution. So,

> isn't mutual information the information one obtains about X by knowing Y, or conversely, the amount of information one obtains about Y if you know X?

is correct. In Durston's case, there is another distribution Y that provides much more information about X than the uniform distribution.

> What he has given us so far is the FI to perform a function.

The FI to perform a function is going to be less than the information required to create the FI. You cannot get more mutual information from less, due to the law of information non-growth. Swamidass in another thread explained FI is conditional mutual information, which in turn depends on non-conditioned mutual information, so there is no getting around the information non-growth law. Durston's FI is the Kullback-Leibler divergence between an empirical distribution and the uniform distribution, and when the divergence is typical it becomes mutual information, per my analysis a couple of comments above.

EricMH
September 27, 2018, 8:44 AM PDT
EricMH:
Thus, Durston's functional information turns out to be mutual information, and the standard conservation laws apply. This means that whenever he detects functional information, it is a sign of intelligent design, since chance and necessity are provably incapable of generating mutual information.
Kirk points out that there is an important distinction to be made here, one which I don't think you're taking into account. We'll see where it leads.

5. There is a difference between the functional information required to perform a function, and creating that information.

What he has given us so far is the FI to perform a function.

Mung
September 27, 2018, 7:04 AM PDT
Thank you Eric. Even if Y has no effect on the probability distribution of X, can you not still calculate the mutual information? Further, isn't mutual information the information one obtains about X by knowing Y, or conversely, the amount of information one obtains about Y if you know X? So as it stands just now, I don't see what you are describing as mutual information. I'll need to crack open the books and see if I can find some examples.

Mung
September 27, 2018, 6:59 AM PDT
@Mung and Bill Cole, just to be extra clear, Durston's formula is H(U) - H(X), and this is the same as KLD(P(X|Y) || U), since H(X) < H(U) because some Y is affecting X. Since his measurements are pretty typical for DNA, his H(U) - H(X) is the mean KLD(P(X|Y) || U) and consequently is the same as I(X;Y). Thus, Durston's functional information turns out to be mutual information, and the standard conservation laws apply. This means that whenever he detects functional information, it is a sign of intelligent design, since chance and necessity are provably incapable of generating mutual information.

EricMH
September 26, 2018, 9:31 PM PDT
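For readers who want to see the H(U) - H(X) quantity from the comment above computed on something concrete, here is a minimal toy sketch in Python. The aligned sequences are invented for illustration; this is not Durston's data or his full method.

```python
import numpy as np
from collections import Counter

def entropy_bits(probs):
    """Shannon entropy in bits of a discrete distribution."""
    probs = np.asarray([p for p in probs if p > 0], dtype=float)
    return float(-(probs * np.log2(probs)).sum())

# Toy "aligned" functional sequences over the DNA alphabet (invented data).
alphabet = "ACGT"
sequences = ["ACGAT", "ACGTT", "ACGAT", "ACGCT", "ACGAT"]

h_uniform = np.log2(len(alphabet))   # H(U) = 2 bits per site for a 4-symbol alphabet

# Per-site observed entropy, then the H(U) - H(X) difference summed over sites.
total = 0.0
for site in zip(*sequences):
    counts = Counter(site)
    freqs = [counts[a] / len(sequences) for a in alphabet]
    total += h_uniform - entropy_bits(freqs)

print("H(U) - H(observed), summed over sites:", round(total, 3), "bits")
```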
The above only works if KLD(P(X|Y) || P(X)) is the mean over all Y. However, by the law of large numbers, as the number of samples increases we approach the mean.

EricMH
September 26, 2018, 9:48 AM PDT
@Mung & Bill Cole, here is my claim about order being mutual information in mathematical detail.

Say X is a symbol in a string S (e.g. a DNA strand) made from N symbols. By the principle of maximum entropy (https://en.wikipedia.org/wiki/Principle_of_maximum_entropy) we set the distribution P(X) to be uniform, so P(X) = 1/N. Let's say there is a reason Y that makes X not uniform, i.e. there is a large amount of order in X. So, P(X|Y) is a non-uniform distribution.

The Kullback-Leibler divergence measures the divergence between two distributions, and is always non-negative. So, KLD(P(X|Y) || P(X)) measures the distance of the observed symbol distribution from the uniform distribution. For clarity, we write P(X) as U, to indicate it is the uniform distribution: KLD(P(X|Y) || P(X)) = KLD(P(X|Y) || U).

However, this is not yet mutual information, but it can be made so by taking the expectation (https://en.wikipedia.org/wiki/Mutual_information#Relation_to_Kullback%E2%80%93Leibler_divergence). To turn the KLD into mutual information, we take the expectation over the probability distribution of possible Ys that could influence X: I(X;Y) = E[KLD(P(X|Y) || P(X))]. If we apply the principle of maximum entropy to Y, since we don't know a priori what explanation is most likely, then P(Y) is also uniform. In which case, I(X;Y) = E[KLD(P(X|Y) || P(X))] = KLD(P(X|Y) || U).

This is what Kirk Durston is measuring with his functional information: the mutual information between a DNA strand and some cause that makes the symbol probability non-uniform, while applying the principle of maximum entropy, which we do to minimize our bias.

EricMH
September 26, 2018, 6:50 AM PDT
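For reference, the two standard definitions used in the comment above can be written out explicitly in textbook notation; nothing below is specific to DNA or to Durston's measurements.

```latex
D_{\mathrm{KL}}\bigl(P(X \mid Y=y) \,\|\, P(X)\bigr)
  = \sum_{x} P(x \mid y) \log \frac{P(x \mid y)}{P(x)} \;\ge\; 0

I(X;Y) = \mathbb{E}_{P(Y)}\Bigl[ D_{\mathrm{KL}}\bigl(P(X \mid Y) \,\|\, P(X)\bigr) \Bigr]
       = \sum_{y} P(y) \sum_{x} P(x \mid y) \log \frac{P(x \mid y)}{P(x)}
```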
@Mung, I'm saying mutual information is the Shannon entropy of the uniform distribution minus the Shannon entropy of the actual distribution, as Kirk calculated. The uniform distribution is the maximum entropy distribution, so it represents the state of no a priori correlation with anything else.

EricMH
September 25, 2018, 8:52 PM PDT
Bill, it will be interesting to see whether Joshua allows Eric to participate in his exclusive thread. :)

Mung
September 25, 2018, 1:31 PM PDT
EricMH:
The only thing that is not mutual information is the output of a uniform distribution.
You seem to be saying that mutual information is the same as Shannon information except in the case of a uniform distribution. You agree that the Shannon measure of information is defined for any probability distribution, right, including the uniform distribution?

Mung
September 25, 2018, 1:30 PM PDT
Eric
@bill cole, I’ll check it out this weekend. Swamidass is annoying to debate with because he equivocates a whole lot and fudges his math. For someone who is an expert in information theory, his arguments are highly questionable.
I understand. I am hoping that Kirk's attendance, and hopefully Mung's, along with yours can add some rationality to the discussion.

bill cole
September 25, 2018, 1:19 PM PDT
@bill cole, I'll check it out this weekend. Swamidass is annoying to debate with because he equivocates a whole lot and fudges his math. For someone who is an expert in information theory, his arguments are highly questionable. At any rate, FSC and functional information are mutual information. Any kind of order is mutual information. The only thing that is not mutual information is the output of a uniform distribution.

EricMH
September 25, 2018, 1:08 PM PDT
Eric, Mung: Kirk Durston has joined the discussion on Joshua's blog. It would be great for both of you to join. One of the discussion points is whether cancer can generate functional information. Joshua has put forward this hypothesis; however, at this point I disagree with him. One of my confusions is in understanding the relationships between FSC, mutual information, and functional information. A random generator appears to be able to create some of these but not all of these.

bill cole
September 25, 2018, 10:14 AM PDT
@Mung yeah, mutual information involves two probability distributions. It is more general than functional information, but functional information is a kind of mutual information, and consequently the law of non-growth applies.

EricMH
September 25, 2018, 7:00 AM PDT
Bill, Mutual Information has nothing to do with function. All you need are two random variables with their own probability distributions. That's it. For Shannon Information all you need is a probability distribution. Mutual Information just has a second one. Eric, am I wrong about that?

Mung
September 25, 2018, 6:54 AM PDT
@bill cole,

> Have you read Durston's paper on measuring functional specified complexity?

I just skimmed through it. His formula is not strictly mutual information, since the second term is not conditional, but there is a reduction of entropy subtracted from a baseline. So, it appears to be mutual information.

EricMH
September 25, 2018, 3:39 AM PDT
bill cole:
I agree with Mung that the issue in biology is functional information.
Or you could be like Neil Rickert and Alan Fox: deny that information has anything to do with it, and ignore all of the evolutionary literature to the contrary.

ET
September 24, 2018, 10:54 AM PDT
Eric
If Swamidass is right that functional information is conditional mutual information, then functional information is also limited by the conservation law. Thus, natural processes, insofar as they are reducible to randomness + determinism, cannot generate functional information.
I'm not sure he is right. Have you read Durston's paper on measuring functional specified complexity?

bill cole
September 24, 2018, 9:10 AM PDT
@Bill Cole, re: functional information. At least according to Swamidass' account in the following thread:

https://discourse.peacefulscience.org/t/swamidass-computing-the-functional-information-in-cancer/1646/40

it is a form of conditional mutual information. Conditional mutual information is the difference between absolute mutual information quantities, so it is a lower bound on the total absolute mutual information. Since the absolute mutual information is limited by the conservation law, so is the conditional mutual information. If Swamidass is right that functional information is conditional mutual information, then functional information is also limited by the conservation law. Thus, natural processes, insofar as they are reducible to randomness + determinism, cannot generate functional information.

EricMH
September 24, 2018, 7:56 AM PDT
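The relationship between conditional and unconditional mutual information that this comment relies on is the standard chain rule; in textbook notation, with Z standing for whatever is being conditioned on:

```latex
I(X; Y, Z) = I(X; Z) + I(X; Y \mid Z)
\quad\Longrightarrow\quad
I(X; Y \mid Z) = I(X; Y, Z) - I(X; Z) \;\le\; I(X; Y, Z)
```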
Eric, Mung:
. Swamidass claims CMI can be increased through naturalistic processes, and that since AMI is inaccessible to us, we should only concern ourselves with CMI. Thus, as far as we can tell, naturalistic processes can create mutual information.
I agree with Mung that the issue in biology is functional information. I think there is a connection with mutual information, but I am not sure what it is. Perhaps functional information is a subset of mutual information. I have not seen the claim properly supported that natural processes can generate functional information, at least in a sustainable way.

bill cole
September 24, 2018, 7:30 AM PDT
@Mung here is the current thread Swamidass and I are debating on: https://discourse.peacefulscience.org/t/wrap-up-experiment-with-mutual-information/

EricMH
September 24, 2018, 4:47 AM PDT
@Mung and others, here's a short overview of the applications of Shannon & Kolmogorov's information theory: http://www.dp-pmi.org/uploads/3/8/1/3/3813936/6._figueiredo_2016.pdf

It mentions Shannon's caveat that even though information theory is broader than just communications, its application should be approached in a methodical manner and not just by word association. So, Shannon agrees information theory applies to other domains, and encourages care in its application.

EricMH
September 21, 2018, 1:20 PM PDT
Thanks for the links, I was looking at two completely different threads, lol.

Mung
September 20, 2018, 1:40 PM PDT
@Mung here's a more concrete example: https://mindmatters.today/2018/09/meaningful-information-vs-artificial-intelligence/

EricMH
September 19, 2018, 12:28 PM PDT
> Mutual information is still Shannon information and as such has nothing to do with either meaning or function.

True, it is just mathematical manipulation. But my point is that it describes what meaningful/functional information is: namely, a correlation between some entity and an independent description. In particular, the algorithmic mutual information makes this pretty precise.

> Would you kindly post the link?

Swamidass' main argument is that though he agrees with me that algorithmic mutual information (AMI) cannot be produced through naturalistic processes, in the real world we can never measure AMI exactly, because it is by definition not computable. Instead, we are always stuck with calculable mutual information (CMI). Swamidass claims CMI can be increased through naturalistic processes, and that since AMI is inaccessible to us, we should only concern ourselves with CMI. Thus, as far as we can tell, naturalistic processes can create mutual information.

Though it is true we cannot measure AMI directly, I'm not convinced by his argument for a couple of reasons. First, I think there are situations where we can get a decent approximation of AMI. For example, the Lempel-Ziv compression algorithm will approach the true AMI with a long enough ergodic sequence. Second, positive CMI always implies positive AMI, even if we cannot measure the true AMI. So, at the very least, we cannot go from 0 -> 1 CMI through naturalism.

All the threads are quite long. Here is where Swamidass claims he can demonstrate MI comes about through naturalistic processes: https://discourse.peacefulscience.org/t/swamidass-computing-the-functional-information-in-cancer/1646/40

Here he discusses my argument that Levin's proof + existence of MI means a halting oracle must exist: https://discourse.peacefulscience.org/t/intelligence-and-halting-oracles/1124/29

EricMH
September 18, 2018, 3:29 PM PDT
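As a rough illustration of the kind of compression-based approximation mentioned above, here is a toy sketch that uses zlib as a stand-in compressor. It follows the common C(x) + C(y) - C(xy) heuristic from the compression-distance literature rather than any exact estimator, and all names and data are invented for the example.

```python
import os
import zlib

def c(data: bytes) -> int:
    """Approximate description length: compressed size in bits."""
    return 8 * len(zlib.compress(data, 9))

def approx_mutual_info(x: bytes, y: bytes) -> int:
    """Crude estimate of shared information, C(x) + C(y) - C(x,y), with a real
    compressor standing in for (uncomputable) Kolmogorov complexity."""
    return c(x) + c(y) - c(x + y)

x = os.urandom(1000)          # an arbitrary fixed string
y = x[100:] + x[:100]         # a rotated copy: shares almost all structure with x
z = os.urandom(1000)          # an independent string: shares essentially none

print("copy-like pair:   ", approx_mutual_info(x, y))  # large positive
print("independent pair: ", approx_mutual_info(x, z))  # small
```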
EricMH:
@Mung what do you think meaningful and functional information are?
I think meaningless information is an oxymoron. Functional information would be information where meaning is defined by function. Shannon information simply does not take the meaning of a sequence of symbols into account. It is merely concerned with probability distributions. Mutual information is still Shannon information and as such has nothing to do with either meaning or function. Mutual information is therefore the reduction in uncertainty about variable X, or the expected reduction in the number of yes/no questions needed to guess X after observing Y. There is no connection here to what natural processes can or cannot accomplish. I thought I was following the conversation at PS but maybe I am looking at the wrong link. Would you kindly post the link?

Mung
September 18, 2018, 1:28 PM PDT
@Mung, and I'll work on more concrete examples. I've been arguing the point at length with Swamidass over at PS, and so far it hasn't been refuted. So, it seems worthwhile to pursue.

EricMH
September 18, 2018, 10:21 AM PDT
@Mung what do you think meaningful and functional information are?

EricMH
September 18, 2018, 9:29 AM PDT
EricMH:
But I’m more than willing to do my best effort for sincere intentions to understand. So, Mung, if you are truly interested, let me know, and I’ll work on something more concrete.
If you want your argument to be useful to ID, I think you would want to put in that effort. "Information theory proves that evolution is impossible" isn't going to go far in a debate if the person making that claim can't explain how or why. Mutual information is still information in the Shannon sense, which has nothing to do with meaning or function.

Mung
September 17, 2018, 1:55 PM PDT
