
A Designed Object’s Entropy Must Increase for Its Design Complexity to Increase – Part 1


The common belief is that adding disorder to a designed object will destroy the design (like a tornado passing through a city, to paraphrase Hoyle). Now if increasing entropy implies increasing disorder, creationists will often reason that “increasing entropy of an object will tend to destroy its design”. This essay will argue mathematically that this popular notion among creationists is wrong.

The correct conception of these matters is far more nuanced and almost the opposite of (but not quite) what many creationists and IDists believe. Here is the more correct view of entropy’s relation to design (be it man-made or otherwise):

1. increasing entropy can increase the capacity for disorder, but it doesn’t necessitate disorder

2. increasing an object’s capacity for disorder doesn’t imply that the object will immediately become more disordered

3. increasing entropy in a physical object is a necessary (but not sufficient) condition for increasing the complexity of the design

4. contrary to popular belief, a complex design is a high entropy design, not a low entropy design. The complex organization of a complex design is made possible (and simultaneously improbable) by the high entropy the object contains.

5. without entropy there is no design

If there is one key point it is: Entropy makes design possible but simultaneously improbable. And that is the nuance that many on both sides of the ID/Creation/Evolution controversy seem to miss.

The notion of entropy is foundational to physics, engineering, information theory and ID. These essays are written to provide a discussion on the topic of entropy and its relationship to other concepts such as uncertainty, probability, microstates, and disorder. Much of what is said will go against popular understanding, but the aim is to make these topics clearer. Some of the math will be in a substantially simplified form, so apologies in advance to the formalists out there.

Entropy may refer to:

1. Thermodynamic (Statistical Mechanics) entropy – measured in Joules/Kelvin, dimensionless units, degrees of freedom, or (if need be) bits

2. Shannon entropy – measured in bits or dimensionless units

3. Algorithmic entropy or Kolmogorov complexity – measured also in bits, but deals with the compactness of a representation. A file that can be compressed substantially has low algorithmic entropy, whereas a file that can’t be compressed evidences high algorithmic entropy (Kolmogorov complexity). Both Shannon entropy and algorithmic entropy fall within the realm of information theory, but by default, unless otherwise stated, most people take Shannon entropy to be the entropy of information theory.

4. disorder in the popular sense – no real units assigned, often not precise enough to be of scientific or engineering use. I’ll argue the term “disorder” is a misleading way to conceptualize entropy. Unfortunately, the word “disorder” is used even in university science books. I will argue mathematically why this is so…

The reason the word entropy is used in the disciplines of Thermodynamics, Statistical Mechanics and Information Theory is that there are strong mathematical analogies. The evolution of the notion of entropy began with Clausius who also coined the term for thermodynamics, then Boltzmann and Gibbs related Clausius’s notions of entropy to Newtonian (Classical) Mechanics, then Shannon took Boltzmann’s math and adapted it to information theory, and then Landauer brought things back full circle by tying thermodynamics to information theory.

How entropy became equated with disorder, I do not know, but the purpose of these essays is to walk through actual calculations of entropy and allow the reader to decide for himself whether disorder can be equated with entropy. My personal view is that Shannon entropy and Thermodynamic entropy cannot be equated with disorder, even though the lesser-known algorithmic entropy can. So in general entropy should not be equated with disorder. Further, the problem of organization (which goes beyond simple notions of order and entropy) needs a little more exploration. Organization sort of stands out as a quality that seems difficult to assign numbers to.

The calculations that follow give an illustration of how I arrived at some of my conclusions.

First I begin with calculating Shannon entropy for simple cases. Thermodynamic entropy will be covered in Part II.

Bill Dembski actually alludes to Shannon entropy in his latest offering on Conservation of Information Made Simple:

In the information-theory literature, information is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy).

William Dembski
Conservation of Information Made Simple

To elaborate on what Bill said, if we have a fair coin, it can exist in two microstates: heads (call it microstate 1) or tails (call it microstate 2).

After a coin flip, the probability of the coin emerging in microstate 1 (heads) is 1/2. Similarly the probability of the coin emerging in microstate 2 (tails) is 1/2. So let me tediously summarize the facts:

N = Ω(N) = Ω = Number of microstates of a 1-coin system = 2

x1 = microstate 1 = heads
x2 = microstate 2 = tails

P(x1) = P(microstate 1)= P(heads) = probability of heads = 1/2
P(x2) = P(microstate 2)= P(tails) = probability of tails = 1/2

Here is the process for calculating the Shannon entropy of a 1-coin information system, starting with Shannon’s famous formula applied to the two equiprobable microstates:

I = -Σ P(xi) log2 P(xi) = -( 1/2 log2(1/2) + 1/2 log2(1/2) ) = 1 bit

where I is the Shannon entropy (or measure of information).

This method seems a rather torturous way to calculate the Shannon entropy of a single coin. A slightly simpler method exists if we take advantage of the fact that each microstate of the coin (heads or tails) is equiprobable (conforming to the fundamental postulate of statistical mechanics), so we can calculate the number of bits by simply taking the logarithm of the number of microstates, as is done in statistical mechanics:

I = log2 Ω = log2 2 = 1 bit

Now compare this equation for the Shannon entropy in information theory

I = log2 Ω

to the Boltzmann entropy from statistical mechanics and thermodynamics

S = kB ln Ω

and even more so using units whereby kB = 1:

S = ln Ω

The similarities are not an accident. Shannon’s ideas of information theory are a descendant of Boltzmann’s ideas from statistical mechanics and thermodynamics.

To explore Shannon entropy further, let us suppose we have a system of 3 distinct coins. The Shannon entropy measures the amount of information that will be gained by observing the collective state (microstate) of the 3 coins.

First we have to compute the number of microstates or ways the system of coins can be configured. I will lay them out specifically.

microstate 1 = H H H
microstate 2 = H H T
microstate 3 = H T H
microstate 4 = H T T
microstate 5 = T H H
microstate 6 = T H T
microstate 7 = T T H
microstate 8 = T T T

N = Ω(N) = Ω = Number of microstates of a 3-coin system = 8

So there are 8 microstates or outcomes the system can realize. The Shannon entropy can be calculated the torturous way:

I = -Σ P(xi) log2 P(xi) = -8 ( 1/8 log2(1/8) ) = 3 bits

or by simply taking the logarithm of the number of microstates:

I = log2 Ω = log2 8 = 3 bits

It can be shown that the Shannon entropy of a system of N distinct coins is equal to N bits. That is, a system with 1 coin has 1 bit of Shannon entropy, a system with 2 coins has 2 bits of Shannon entropy, a system of 3 coins has 3 bits of Shannon entropy, etc.
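
To make that claim concrete, here is a minimal Python sketch (mine, not part of the original post) that computes the Shannon entropy both the "torturous" way (summing over every equiprobable microstate) and the shortcut way (taking log2 of the number of microstates), confirming that N fair coins carry N bits:

```python
import math

def shannon_entropy_bits(probabilities):
    # The "torturous" way: I = -sum( p_i * log2(p_i) ) over all microstates.
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Small systems: enumerate every equiprobable microstate explicitly.
for n_coins in (1, 2, 3):
    omega = 2 ** n_coins                  # number of microstates of n fair coins
    probs = [1.0 / omega] * omega         # fair coins => equiprobable microstates
    print(n_coins, shannon_entropy_bits(probs), math.log2(omega))  # both equal n_coins

# Large system: the shortcut log2(Omega) still applies even when the
# 2^500 microstates cannot be enumerated one by one.
print(math.log2(2 ** 500))                # 500.0 bits
```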

Notice, the more microstates there are, the more uncertainty exists that the system will be found in any given microstate. Equivalently, the more microstates there are, the more improbable it is that the system will be found in a given microstate. Hence, sometimes entropy is described in terms of improbability or uncertainty or unpredictability. But we must be careful here: uncertainty is not the same thing as disorder. That is a subtle but important distinction.

So what is the Shannon Entropy of a system of 500 distinct coins? Answer: 500 bits, or the Universal Probability Bound.
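
As a quick sanity check on that figure, here is a short Python sketch (my own illustration, using Dembski's oft-cited 10^-150 bound) relating 500 bits to the probability of blindly hitting one particular microstate:

```python
import math

# Probability of landing on one specific microstate of 500 fair coins by blind chance:
print(2.0 ** -500)            # ~3.05e-151

# Dembski's universal probability bound of 10^-150, expressed in bits for comparison:
print(-math.log2(1e-150))     # ~498.3 bits, so 500 bits sits just beyond the bound
```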

By way of extension, if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain gigabits of Shannon entropy. This illustrates the principle that more complex designs require larger Shannon entropy to support the design. It cannot be otherwise. Design requires the presence of entropy, not absence of it.

Suppose we found that a system of 500 coins were all heads, what is the Shannon entropy of this 500-coin system? Answer: 500 bits. No matter what configuration the system is in, whether ordered (like all heads) or disordered, the Shannon entropy remains the same.

Now suppose a small tornado went through the room where the 500 coins resided (with all heads before the tornado), what is the Shannon entropy after the tornado? Same as before, 500-bits! What may arguably change is the algorithmic entropy (Kolmogorov complexity). The algorithmic entropy may go up, which simply means we can’t represent the configuration of the coins in a compact sort of way like saying “all heads” or in the Kleene notation as H*.
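
Since Kolmogorov complexity itself is uncomputable, a common rough proxy is the size a description shrinks to under a general-purpose compressor. The Python sketch below (my illustration, not from the post) encodes the 500 coins as a string of H's and T's and shows that, while the Shannon capacity is 500 bits either way, the "all heads" configuration compresses to a handful of bytes whereas the post-tornado configuration does not:

```python
import random
import zlib

ordered = "H" * 500                                            # all heads, before the tornado
scrambled = "".join(random.choice("HT") for _ in range(500))   # after the tornado

# Both configurations come from the same 2^500-microstate space: 500 bits of Shannon entropy.
# Compressed size is a crude stand-in for algorithmic entropy (Kolmogorov complexity).
print(len(zlib.compress(ordered.encode())))     # tiny: "all heads" has a short description (H*)
print(len(zlib.compress(scrambled.encode())))   # several times larger: no short description
```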

Amusingly, if in the aftermath of the tornado’s rampage, the room got cooler, the thermodynamic entropy of the coins would actually go down! Hence the order or disorder of the coins is independent not only of the Shannon entropy but also the thermodynamic entropy.

Let me summarize the before and after of the tornado going through the room with the 500 coins:

BEFORE : 500 coins all heads, Temperature 80 degrees
Shannon Entropy : 500 bits
Algorithmic Entropy (Kolmogorov complexity): low
Thermodynamic Entropy : some finite starting value

AFTER : 500 coins disordered
Shannon Entropy : 500 bits
Algorithmic Entropy (Kolmogorov complexity): high
Thermodynamic Entropy : lower if the temperature is lower, higher if the temperature is higher

Now to help disentangle concepts a little further, consider three computer files:

File_A : 1 gigabit of binary numbers randomly generated
File_B : 1 gigabit of all 1’s
File_C : 1 gigabit encrypted JPEG

Here are the characteristics of each file:

File_A : 1 gigabit of binary numbers randomly generated
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): high
Thermodynamic Entropy: N/A
Organizational characteristics: highly disorganized
inference : not designed

File_B : 1 gigabit of all 1’s
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): low
Thermodynamic Entropy: N/A
Organizational characteristics: highly organized
inference : designed (with qualification, see note below)

File_C : 1 gigabit encrypted JPEG
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov complexity): high
Thermodynamic Entropy: N/A
Organizational characteristics: highly organized
inference : extremely designed

Notice, one cannot ascribe high levels of improbable design based on the Shannon entropy or algorithmic entropy without some qualification. Existence of improbable design depends on the existence of high Shannon entropy, but is somewhat independent of algorithmic entropy. Further, to my knowledge, there is not really a metric for organization that is separate from Kolmogorov complexity, but this definition needs a little more exploration and is beyond my knowledge base.

Only in rare cases will high Shannon entropy and low algorithmic entropy (Kolmogorov complexity) result in a design inference. One such example is 500 coins all heads. The general method to infer design (including man-made designs) is to show that the object:

1. has High Shannon Entropy (high improbability)
2. conforms to an independent (non-postdictive) specification

In contrast to the design of coins being all heads where the Shannon entropy is high but the algorithmic entropy is low, in cases like software or encrypted JPEG files, the design exists in an object that has both high Shannon entropy and high algorithmic entropy. Hence, the issues of entropy are surely nuanced, but on balance entropy is good for design, not always bad for it. In fact, if an object evidences low Shannon entropy, we will not be able to infer design reliably.
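
The two criteria above can be put into a toy form. The sketch below is only my illustrative gloss on the 500-coin example (the function name and the fixed 500-bit threshold are mine, not a formal ID metric): it flags a configuration only when its microstate space is large enough to exceed the chosen bound and the observation matches an independently given specification:

```python
def infer_design(observed, specification, bound_bits=500):
    # Criterion 1: high Shannon entropy -- N fair coins span 2^N microstates, i.e. N bits.
    shannon_bits = len(observed)
    # Criterion 2: the observation matches an independently stated specification.
    matches_spec = observed == specification
    return shannon_bits >= bound_bits and matches_spec

print(infer_design("H" * 500, "H" * 500))   # True: 500 bits and matches the "all heads" spec
print(infer_design("H" * 20, "H" * 20))     # False: only 20 bits; far too probable by chance
print(infer_design("HT" * 250, "H" * 500))  # False: 500 bits but no match to the specification
```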

The reader might be disturbed at my final conclusion in as much as it grates against popular notions of entropy and creationist notions of entropy. But well, I’m no stranger to this controversy. I explored Shannon entropy in this thread because it is conceptually easier than its ancestor concept of thermodynamic entropy.

In Part II (which will take a long time to write) I’ll explore thermodynamic entropy and its relationship (or lack thereof) to intelligent design. But in brief, a parallel situation often arises: the more complex a design, the higher its thermodynamic entropy. Why? The simple reason is that more complex designs involve more parts (molecules), and more molecules in general imply higher thermodynamic (as well as Shannon) entropy. So the question of Earth being an open system is a bit beside the point, since entropy is essential for intelligent designs to exist in the first place.

[UPDATE: the sequel to this thread is in Part 2]

Acknowledgements (both supporters and critics):

1. Elizabeth Liddle for hosting my discussions on the 2nd Law at TheSkepticalZone

2. physicist Olegt who offered generous amounts of time in plugging the holes in my knowledge, particularly regarding the Liouville Theorem and Configurational Entropy

3. retired physicist Mike Elzinga for his pedagogical examples and historic anecdotes. HT: the relationship of more weight to more entropy

4. An un-named theoretical physicist who spent many hours teaching his students the principles of Statistical Mechanics and Thermodynamics

5. physicists Andy Jones and Rob Sheldon

6. Neil Rickert for helping me with Latex

7. Several others that have gone unnamed

NOTE:
[UPDATE and correction: gpuccio was kind enough to point out that in the case of File_B, the design inference isn’t necessarily warranted. It’s possible an accident or programming error or some other reason could make all the bits 1. It would only be designed if that was the designer’s intention.]

[UPDATE 9/7/2012]
Boltzmann

“In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two or more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.” (Final paragraph of #87, p. 443.)

That slight, innocent paragraph of a sincere man — but before modern understanding of q(rev)/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they?

Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy. Then you will understand what entropy ‘really is’.”

There is no basis in physical science for interpreting entropy change as involving order and disorder.

Comments
Surely you mean “if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain 32 bits or so of Shannon entropy”
Surely not. 32 bits (or 64 bits) refers to the number of bits available to address memory, not the actual amount of memory Windows-7 requires. 32 bits can address 2^32 bytes of memory, or 4 gigabytes, directly. From the Windows website describing Vista (and the comment applies to other Windows operating systems):
One of the greatest advantages of using a 64-bit version of Windows Vista is the ability to access physical memory (RAM) that is above the 4-gigabyte (GB) range. This physical memory is not addressable by 32-bit versions of Windows Vista.
Windows x64 occupies about 16 gigabytes. A byte being 8 bits implies 16 gigabytes is 16*8 = 128 gigabits. Thus the Shannon entropy required to represent Windows-7 x64 is on the order of 128 gigabits. Shannon entropy is the amount of information that can be represented, not the number of bits required to locate an address in memory.
scordova
September 5, 2012 at 04:59 PM PDT
"if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain gigabits of Shannon entropy" Surely you mean "if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain 32 bits or so of Shannon entropy"EndoplasmicMessenger
September 5, 2012 at 04:06 PM PDT
EA: Notice, I consistently speak of sampling a distribution of possibilities in a config space, where the atomic resources of solar system or observed cosmos are such that only a very small fraction can be sampled. For 500 bits, we talk of a one straw size sample to a cubical haystack 1,000 LY on the side, about as thick as the galaxy. With all but certainty, a blind, chance and necessity sample will be dominated by the bulk of the distribution. In short, it is maximally implausible that special zones will be sampled. KF PS: Have I been sufficiently clear in underscoring that in stat thermo-d the relevant info metric associated with entropy is a measure of the missing info to specify micro state given macro state?
kairosfocus
September 5, 2012 at 03:59 PM PDT
Their arguments I find worthwhile. I don’t have any new theories to offer. Such an endeavor would be over my head anyway. I know too little to make much of a contribution to the debate beyond what you have seen at places like UD. Besides, blogs aren’t really for doing science, laboratories and libraries are better places for that. The internet is just for fun…
Sorry, I have to bring it down a notch. Just something that has been on my mind a long time
butifnot
September 5, 2012 at 03:58 PM PDT
Trevors and Abel point out the necessity of Shannon entropy (uncertainty) to store information for life to replicate. Hence, they recognize that a sufficient amount of Shannon entropy is needed for life:
Chance and Necessity do not explain the Origin of Life: No natural mechanism of nature reducible to law can explain the high information content of genomes. This is a mathematical truism, not a matter subject to over-turning by future empirical data. The cause-and-effect necessity described by natural law manifests a probability approaching 1.0. Shannon uncertainty is a probability function (-log2 p). When the probability of natural law events approaches 1.0, the Shannon uncertainty content becomes miniscule (-log2 p = -log2 1.0 = 0 uncertainty). There is simply not enough Shannon uncertainty in cause-and-effect determinism and its reductionistic laws to retain instructions for life.
scordova
September 5, 2012 at 03:42 PM PDT
Sal:
Besides, blogs aren’t really for doing science, laboratories and libraries are better places for that. The internet is just for fun…
Spoken like a true academic elitist! :)
Eric Anderson
September 5, 2012 at 03:05 PM PDT
kf @16: You make a good point about breakdown. I'm just looking at the typical approach by abiogenesis proponents from a debating standpoint. I have rarely seen an abiogenesis proponent take careful stock of the many problems with their own preferred OOL scenario, including not only breakdown but also problems with interfering cross reactions, construction of polymers only on side chains, etc. The typical abiogenesis proponent, when they are willing to debate the topic, are almost wholly engrossed with the raw probabilistic resources -- amount of matter in the universe, reaction rates, etc. Rarely do they consider the additional probabilistic hurdles that come with things like breakdown. Indeed, one of the favorite debating tactics is to assert that because we don't know all the probabilistic hurdles that need to be overcome we can't therefore draw any conclusion about the unlikelihood of abiogenesis taking place. Despite the obvious logical failure of such an argument, this is a favorite rhetorical tactic of, for example, Elizabeth Liddle. This is of course absurd, to say the least, but it underscores the mindset. As a result, when we talk about increased energy, the only thing the abiogenesis proponent will generally allow into their head is the hopeful glimmer of faster reaction rates. That is all they are interested in -- more opportunities for chance to do its magic. The other considerations -- including things like interfering cross reactions and breakdown of nascent molecules -- are typically shuffled aside or altogether forgotten. The unfortunate upshot is that pointing out problems with additional energy (like faster breakdown), typically, will fall on deaf ears. That, coupled with the fact that any definitive answer on the point requires a detailed analysis of precisely which OOL scenario is being discussed, how dilute the solution is, what kind of environment is present, the operative temperature, the type of energy infused, etc., means that it is nearly impossible to convince the recalcitrant abiogenesis proponent that additional energy can in fact be worse. Thus, from a practical standpoint, we seem better off just focusing on real issue -- information -- and note that energy does nothing to help with that key aspect. Anyway, way more than you wanted to hear. I'm glad you shared your thoughts on additional energy. I think you have something there worth considering, including a potential hurdle for the occasional abiogenesis proponent who is actually willing to think about things like breakdown.Eric Anderson
September 5, 2012 at 02:59 PM PDT
Hi butifnot, I don't believe that evolutionists have proven their case. There are fruitful ways to criticize OOL and Darwinism, I just think that creationists will hurt themselves using the 2nd Law and Entropy arguments (for the reasons outlined in these posts). They need to move on to arguments that are more solid. What is persuasive to me are the cases of evolutionists leaving the Darwin camp or OOL camp:
Michael Denton
Jerry Fodor
Massimo Piattelli-Palmarini
Jack Trevors
Hubert Yockey
Richard Sternberg
Dean Kenyon
James Shapiro
etc.
Their arguments I find worthwhile. I don't have any new theories to offer. Such an endeavor would be over my head anyway. I know too little to make much of a contribution to the debate beyond what you have seen at places like UD. Besides, blogs aren't really for doing science, laboratories and libraries are better places for that. The internet is just for fun...
Sal
scordova
September 5, 2012 at 02:44 PM PDT
Sal, the time is ripe for a bold new thermo-entropy synthesis! Practically the sum of human knowledge is available in an instant for free. A continuing and wider survey, far wide of materialists, is needed before this endeavor can (should) be launched to fruition. Comments on Shannon
Shannon’s concept of information is adequate to deal with the storage and transmission of data, but it fails when trying to understand the qualitative nature of information. Theorem 3: Since Shannon’s definition of information relates exclusively to the statistical relationship of chains of symbols and completely ignores their semantic aspect, this concept of information is wholly unsuitable for the evaluation of chains of symbols conveying a meaning. In order to be able adequately to evaluate information and its processing in different systems, both animate and inanimate, we need to widen the concept of information considerably beyond the bounds of Shannon’s theory. Figure 4 illustrates how information can be represented as well as the five levels that are necessary for understanding its qualitative nature. Level 1: statistics Shannon’s information theory is well suited to an understanding of the statistical aspect of information. This theory makes it possible to give a quantitative description of those characteristics of languages that are based intrinsically on frequencies. However, whether a chain of symbols has a meaning is not taken into consideration. Also, the question of grammatical correctness is completely excluded at this level. http://creation.com/information-science-and-biology
The distinction (good question) between data and information (and much else) must be addressed to get to thermo-design-info theory.
butifnot
September 5, 2012 at 02:11 PM PDT
Part two is now available: Part II
scordova
September 5, 2012 at 02:08 PM PDT
Sal, something's missing, don't you think? Does it not 'feel' that when we get to thermo and information and design, there is *more* that will not be admitted from a basic rehash, which is where it looks like you're at. The bridge between thermo and 'information' is fascinating, but here is where it could become really interesting - [what if] actual information has material and non material components! Our accounting may, and may have to, meet this reality. The difference in entropy of a 'live' brain and the same brain dead with a small .22 hole in it is said to be very small, but is it? Perhaps something is missing.
butifnot
September 5, 2012 at 01:44 PM PDT
OT:
Amazing --- light filmed at 1,000,000,000,000 Frames/Second! - video (this is so fast that at 9:00 Minute mark of video the time dilation effect of relativity is caught on film) http://www.youtube.com/watch?v=SoHeWgLvlXI
bornagain77
September 5, 2012 at 11:36 AM PDT
The fact that Oleg and Mike went beyond their natural dislike of creationists and were generous to teach me things is something I'm very appreciative of. I'm willing to endure their harsh comments about me because they have scientific knowledge that is worth learning and passing on to everyone.
scordova
September 5, 2012 at 11:18 AM PDT
OlegT helped you? Is this the same olegt that now quote-mines you for brownie points? olegt's quote-mine earns him 10 points (out of 10) on the low-integrity scale
Joe
September 5, 2012 at 11:02 AM PDT
SC: Please note the Macro-micro info gap issue I have highlighted above. KF
kairosfocus
September 5, 2012 at 10:44 AM PDT
F/N; Please note how I speak of a sampling theory result on a config space, which is independent of precise probability calculations; we have only a reasonable expectation to pick up the bulk of the distribution. Remember we are sampling on the order of one straw to a cubical hay bale 1,000 light years on the side, i.e comparably thick to our Galaxy. KF
kairosfocus
September 5, 2012 at 10:43 AM PDT
EA: When the equilibria are as unfavourable as they are, a faster reaction rate will favour breakdown, as is seen from how we refrigerate to preserve. In effect around room temp, activation processes double for every 8 K increase in temp. And, the rate of state sampling used in the FSCI calc at 500 bits as revised is actually that for the fastest ionic reactions, not the slower rates appropriate to organic ones. For 1,000 bits, we are using Planck times which are faster than anything else physical. The limits are conservative. KF
kairosfocus
September 5, 2012 at 10:40 AM PDT
Shannon entropy
Joe
September 5, 2012 at 10:12 AM PDT
Regarding the "Add Energy" argument. Set off a source equal in energy and power to an atomic bomb -- the results are predictable in terms of the designs (or lack thereof) that will emerge in the aftermath. That is an example where Entropy increases, but so does disorder. The problem, as illustrated with the 500 coins, is that Shannon Entropy and Thermodynamic Entropy have some independence from the notions of disorder. A designed system can have 500 bits of Shannon entropy, but so can an undesigned system. Having 500 bits of Shannon entropy says little (in and of itself) about whether something is designed. An independent specification is needed to identify a design; the entropy score is only a part. We can have:
1. entropy rise and more disorder
2. entropy rise and more order
3. entropy rise and more disorganization
4. entropy rise and more organization
5. entropy rise and destroying design
6. entropy rise and creating design
We can't make a general statement about what will happen to a design or a disordered system merely because the entropy rises. There are too many other variables to account for before we can say something useful.
scordova
September 5, 2012 at 08:57 AM PDT
Also, kf, the rejoinder by the "just add energy" advocate will be that the energy typically increases the reaction rate. Therefore, even if there are more states possible, the prebiotic soup can move through the states more quickly. It is very difficult to analyze and compare the probabilities (number of states and increased reaction time of various chemicals in the soup) and how they would be affected by adding energy. Perhaps impossible, without making all kinds of additional assumptions about the particular soup and amount/type of energy, which assumptions would themselves be subject to debate. Anyway, I think you make an interesting point. The more I think about it, however, the more I think it could lead to getting bogged down in the 'add energy' part of the discussion. Seems it might be better to stick with a strategy that forcefully states that the 'add energy' argument is a complete red herring and not honor the argument by getting into a discussion of whether adding energy would decrease or increase the already terrible odds with specific chemicals in specific situations. Anyway, just thinking out loud here . . .Eric Anderson
September 5, 2012 at 08:47 AM PDT
kf:
What advocates of this do not usually disclose, is that raw injection of energy tends to go to heat, i.e. to dramatic rise in the number of possible configs, given the combinational possibilities of so many lumps of energy dispersed across so many mass-particles. That is, MmIG will strongly tend to RISE on heating.
Interesting thought and worth considering. I think it is a useful point to bring up when addressing the "open system" red herring put forth by some OOL advocates, but at the end of the day it is really a rounding error on the awful probabilities that already exist. Thus, it probably makes sense to mention it in passing ("Adding energy without direction can actually make things worse.") if someone is pushing the "just add energy" line of thought, but then keep the attention focused squarely on the heart of the matter.
Eric Anderson
September 5, 2012 at 08:21 AM PDT
mahuna I assume you have absolutely no experience with the specification and development of new systems.
Before becoming a financier I was an engineer. I have 3 undergraduate degrees in electrical engineering and computer science and mathematics and a graduate engineering degree in applied physics. Of late I try to minimize mentioning it because there are so many things I don't understand which I ought to with that level of academic exposure. I fumble through statistical mechanics and thermodynamics and even basic math. I have to solicit expertise on these matters, and I have to admit that I'm wrong many times or don't know something, or misunderstand something -- and willingness to admit mistakes or lack of understanding is a quality which I find lacking among many of my creationist brethren, and even worse among evolutionary biologists. I worked on aerospace systems, digital telephony, unmanned aerial vehicles, air traffic control systems, security systems. I've written engineering specifications and carried them out. Thus
I assume you have absolutely no experience with the specification and development of new systems.
is utterly wrong and a fabrication of your own imagination. Besides, my experience is irrelevant to this discussion. At issue are the ideas and calculations. Do you have any comment on my calculations of Shannon entropy or the other entropy scores for the objects listed?
scordova
September 5, 2012 at 06:57 AM PDT
Complete and utter nonsense. I assume you have absolutely no experience with the specification and development of new systems. A baseball's design is refined to eliminate every single ounce of weight or space that does not satisfy the requirements for a baseball. An airliner's design is refined to eliminate every single ounce of weight or space that does not satisfy the requirements for an airliner. But the airliner is much more complex than the baseball and didn't get that way by accident. I assume that you assume that an entropic design is launched by its designers like a Mars probe but expected to change/evolve after launch (by increasing its entropy). But as far as we know, most biologic systems are remarkably stable in their designs (um, the oldest known bat fossils are practically identical to modern bats). In "The Edge of Evolution", Behe in fact bases his argument against Evolution on the fact that there are measurably distinct levels of complexity in biologic systems, and that no known natural mechanism, most especially random degradation of the original design, will get you from a Level 2 system to a more complex Level 3 system.
mahuna
September 5, 2012 at 06:43 AM PDT
gpuccio, In light of your very insightful criticism, I amended the OP as follows:
inference : designed (with qualification, see note below) .... NOTE: [UPDATE and correction: gpuccio was kind enough to point out that in the case of File_B, the design inference isn't necessarily warranted. It's possible an accident or programming error or some other reason could make all the bits 1. It would only be designed if that was the designer's intention.]
scordova
September 5, 2012 at 05:07 AM PDT
Sal: Great post!
Thank you!
A few comments: a) Shannon entropy is the basis for what we usually call the “complexity” of a digital string.
In Bill Dembski's literature, yes. Some others will use a different metric for complexity, like Algorithmic complexity. Phil Johnson and Stephen Meyer actually refer to algorithmic complexity if you read what they say carefully. In my previously less enlightened writings on the net I used algorithmic complexity. The point is, this confusion needs a little bit of remedy. Rather than use the word "complexity" it is easier to say what actual metric one is working from. CSI is really based on Shannon Entropy, not algorithmic or thermodynamic entropy.
b) Regarding the example in: File_B : 1 gigabit of all 1's Shannon Entropy: 1 gigabit Algorithmic Entropy (Kolmogorov Complexity): low Organizational characteristics: highly organized inference : designed I would say that the inference of design is not necessarily warranted.
Yes, thank you. I'll have to revisit this example. It's possible a programmer had the equivalent of stuck keys. I'll update the post accordingly. That's why I post stuff like this at UD, to help clean up my own thoughts.
scordova
September 5, 2012 at 05:02 AM PDT
F/N: Let's do some boiling down, for summary discussion in light of the underlying matters above and in onward sources:
1: In communication situations, we are interested in information we have in hand, given certain identifiable signals (which may be digital or analogue, but can be treated as digital WLOG) 2: By contrast, in the thermodynamics situation, we are interested in the Macro-micro info gap [MmIG], i.e the "missing info" on the ultra-microscopic state of a system, given the lab-observable state of the system. 3: In the former, the inference that we have a signal, not noise, is based on an implicit determination that noise is not credibly likely to be lucky enough to mimic the signal, given the scope of the space of possible configs, vs the scope of apparently intelligent signals. 4: So, we confidently and routinely make that inference to intelligent signal not noise on receiving an apparent signal of sufficient complexity, and indeed define a key information theory metric signal to noise power ratio, on the characteristic differences between the typical observable characteristics of signals and noise. 5: Thus, we are routinely inferring that signals involving FSCO/I are not improbable on intelligent action (intelligently directed organising work, IDOW) but that they are so maximally improbable on "lucky noise" that we typically assign what looks like typical signals to real signals, and what looks like noise to noise on a routine and uncontroversial basis. 6: In the context of spontaneous OOL etc, we are receiving a signal in the living cell, which is FSCO/I rich. 7: But because there is a dominant evo mat school of thought that assumes or infers that at OOL no intelligence was existing or possible to direct organising work, it is presented as if it were essentially unquestionable knowledge, that without IDOW, FSCO/I arose. 8: In other words, despite never having observed FSCO/I arising in this way and despite the implications of the infinite monkeys/ needle in haystack type analysis, that such is essentially unobservable on the gamut of our solar system or the observed cosmos, this ideological inference is presented as if it were empirically well grounded knowledge. 9: This is unacceptable, for good reasons of avoiding question-begging. 10: By sharpest contrast, on the very same principles of inference to best current explanation of the past in light of dynamics of cause and effect in the present that we can observe as leaving characteristic signs that are comparable to traces in deposits from the past or from remote reaches of space [astrophysics], design theorists infer from the sign, FSCO/I to its cause in the remote past etc being -- per best explanation on empirical warranting grounds -- being design, or as I am specifying for this discussion: IDOW.
Let us see how this chain of reasoning is handled, here and elsewhere. KF
kairosfocus
September 5, 2012 at 04:55 AM PDT
It is interesting to note that in the building of better random number generators for computer programs, a better source of entropy is required:
Cryptographically secure pseudorandom number generator Excerpt: From an information theoretic point of view, the amount of randomness, the entropy that can be generated is equal to the entropy provided by the system. But sometimes, in practical situations, more random numbers are needed than there is entropy available. http://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
And Indeed we find:
Thermodynamics – 3.1 Entropy Excerpt: Entropy – A measure of the amount of randomness or disorder in a system. http://www.saskschools.ca/curr_content/chem30_05/1_energy/energy3_1.htm
And the maximum source of randomness in the universe is found to be,,,
Entropy of the Universe - Hugh Ross - May 2010 Excerpt: Egan and Lineweaver found that supermassive black holes are the largest contributor to the observable universe’s entropy. They showed that these supermassive black holes contribute about 30 times more entropy than what the previous research teams estimated. http://www.reasons.org/entropy-universe Roger Penrose - How Special Was The Big Bang? “But why was the big bang so precisely organized, whereas the big crunch (or the singularities in black holes) would be expected to be totally chaotic? It would appear that this question can be phrased in terms of the behaviour of the WEYL part of the space-time curvature at space-time singularities. What we appear to find is that there is a constraint WEYL = 0 (or something very like this) at initial space-time singularities-but not at final singularities-and this seems to be what confines the Creator’s choice to this very tiny region of phase space.”
,,, there is also a very strong case to be made that the cosmological constant in General Relativity, the extremely finely tuned 1 in 10^120 expansion of space-time, drives, or is deeply connected to, entropy as measured by diffusion:
Big Rip Excerpt: The Big Rip is a cosmological hypothesis first published in 2003, about the ultimate fate of the universe, in which the matter of universe, from stars and galaxies to atoms and subatomic particles, are progressively torn apart by the expansion of the universe at a certain time in the future. Theoretically, the scale factor of the universe becomes infinite at a finite time in the future. http://en.wikipedia.org/wiki/Big_Rip
Thus, though neo-Darwinian atheists may claim that evolution is as well established as Gravity, the plain fact of the matter is that General Relativity itself, which is by far our best description of Gravity, testifies very strongly against the entire concept of 'random' Darwinian evolution. also of note, quantum mechanics, which is even stronger than general relativity in terms of predictive power, has a very different 'source for randomness' which sets it as diametrically opposed to materialistic notion of randomness:
Can quantum theory be improved? – July 23, 2012 Excerpt: However, in the new paper, the physicists have experimentally demonstrated that there cannot exist any alternative theory that increases the predictive probability of quantum theory by more than 0.165, with the only assumption being that measurement (conscious observation) parameters can be chosen independently (free choice, free will assumption) of the other parameters of the theory.,,, ,, the experimental results provide the tightest constraints yet on alternatives to quantum theory. The findings imply that quantum theory is close to optimal in terms of its predictive power, even when the predictions are completely random. http://phys.org/news/2012-07-quantum-theory.html
Needless to say, finding ‘free will conscious observation’ to be ‘built into’ quantum mechanics as a starting assumption, which is indeed the driving aspect of randomness in quantum mechanics, is VERY antithetical to the entire materialistic philosophy which demands randomness as the driving force of creativity! Could these two different sources of randomness in quantum mechanics and General relativity be one of the primary reasons of their failure to be unified??? Further notes, Boltzmann, as this following video alludes to,,,
BBC-Dangerous Knowledge http://video.google.com/videoplay?docid=-8492625684649921614
,,,being a materialist, thought of randomness, entropy, as 'unconstrained', as would be expected for someone of the materialistic mindset. Yet Planck, a Christian Theist, corrected that misconception of his:
The Austrian physicist Ludwig Boltzmann first linked entropy and probability in 1877. However, the equation as shown, involving a specific constant, was first written down by Max Planck, the father of quantum mechanics in 1900. In his 1918 Nobel Prize lecture, Planck said:This constant is often referred to as Boltzmann's constant, although, to my knowledge, Boltzmann himself never introduced it – a peculiar state of affairs, which can be explained by the fact that Boltzmann, as appears from his occasional utterances, never gave thought to the possibility of carrying out an exact measurement of the constant. Nothing can better illustrate the positive and hectic pace of progress which the art of experimenters has made over the past twenty years, than the fact that since that time, not only one, but a great number of methods have been discovered for measuring the mass of a molecule with practically the same accuracy as that attained for a planet. http://www.daviddarling.info/encyclopedia/B/Boltzmann_equation.html
Related notes:
"It from bit symbolizes the idea that every item of the physical world has at bottom - at a very deep bottom, in most instances - an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that things physical are information-theoretic in origin." John Archibald Wheeler Zeilinger's principle Zeilinger's principle states that any elementary system carries just one bit of information. This principle was put forward by Austrian physicist Anton Zeilinger in 1999 and subsequently developed by him to derive several aspects of quantum mechanics. Some have reasoned that this principle, in certain ways, links thermodynamics with information theory. [1] http://www.eoht.info/page/Zeilinger%27s+principle "Is there a real connection between entropy in physics and the entropy of information? ....The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental..." Tom Siegfried, Dallas Morning News, 5/14/90 - Quotes attributed to Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin in the article In the beginning was the bit - New Scientist Excerpt: Zeilinger's principle leads to the intrinsic randomness found in the quantum world. Consider the spin of an electron. Say it is measured along a vertical axis (call it the z axis) and found to be pointing up. Because one bit of information has been used to make that statement, no more information can be carried by the electron's spin. Consequently, no information is available to predict the amounts of spin in the two horizontal directions (x and y axes), so they are of necessity entirely random. If you then measure the spin in one of these directions, there is an equal chance of its pointing right or left, forward or back. This fundamental randomness is what we call Heisenberg's uncertainty principle. http://www.quantum.at/fileadmin/links/newscientist/bit.html Is it possible to find the radius of an electron? The honest answer would be, nobody knows yet. The current knowledge is that the electron seems to be a 'point particle' and has refused to show any signs of internal structure in all measurements. We have an upper limit on the radius of the electron, set by experiment, but that's about it. By our current knowledge, it is an elementary particle with no internal structure, and thus no 'size'.
bornagain77
September 5, 2012 at 04:31 AM PDT
F/N 2: We should bear in mind that information arises when we move from an a priori state to an a posteriori one where with significant assurance we are in a state that is to some degree or other surprising. Let me clip my always linked note, here on:
let us now consider in a little more detail a situation where an apparent message is received. What does that mean? What does it imply about the origin of the message . . . or, is it just noise that "got lucky"? If an apparent message is received, it means that something is working as an intelligible -- i.e. functional -- signal for the receiver. In effect, there is a standard way to make and send and recognise and use messages in some observable entity [e.g. a radio, a computer network, etc.], and there is now also some observed event, some variation in a physical parameter, that corresponds to it. [For instance, on this web page as displayed on your monitor, we have a pattern of dots of light and dark and colours on a computer screen, which correspond, more or less, to those of text in English.] Information theory, as Fig A.1 illustrates, then observes that if we have a receiver, we credibly have first had a transmitter, and a channel through which the apparent message has come; a meaningful message that corresponds to certain codes or standard patterns of communication and/or intelligent action. [Here, for instance, through HTTP and TCP/IP, the original text for this web page has been passed from the server on which it is stored, across the Internet, to your machine, as a pattern of binary digits in packets. Your computer then received the bits through its modem, decoded the digits, and proceeded to display the resulting text on your screen as a complex, functional coded pattern of dots of light and colour. At each stage, integrated, goal-directed intelligent action is deeply involved, deriving from intelligent agents -- engineers and computer programmers. We here consider of course digital signals, but in principle anything can be reduced to such signals, so this does not affect the generality of our thoughts.] Now, it is of course entirely possible, that the apparent message is "nothing but" a lucky burst of noise that somehow got through the Internet and reached your machine. That is, it is logically and physically possible [i.e. neither logic nor physics forbids it!] that every apparent message you have ever got across the Internet -- including not just web pages but also even emails you have received -- is nothing but chance and luck: there is no intelligent source that actually sent such a message as you have received; all is just lucky noise: "LUCKY NOISE" SCENARIO: Imagine a world in which somehow all the "real" messages sent "actually" vanish into cyberspace and "lucky noise" rooted in the random behaviour of molecules etc, somehow substitutes just the messages that were intended -- of course, including whenever engineers or technicians use test equipment to debug telecommunication and computer systems! Can you find a law of logic or physics that: [a] strictly forbids such a state of affairs from possibly existing; and, [b] allows you to strictly distinguish that from the "observed world" in which we think we live? That is, we are back to a Russell "five- minute- old- universe"-type paradox. Namely, we cannot empirically distinguish the world we think we live in from one that was instantly created five minutes ago with all the artifacts, food in our tummies, memories etc. that we experience. We solve such paradoxes by worldview level inference to best explanation, i.e. 
by insisting that unless there is overwhelming, direct evidence that leads us to that conclusion, we do not live in Plato's Cave of deceptive shadows that we only imagine is reality, or that we are "really" just brains in vats stimulated by some mad scientist, or we live in a The Matrix world, or the like. (In turn, we can therefore see just how deeply embedded key faith-commitments are in our very rationality, thus all worldviews and reason-based enterprises, including science. Or, rephrasing for clarity: "faith" and "reason" are not opposites; rather, they are inextricably intertwined in the faith-points that lie at the core of all worldviews. Thus, resorting to selective hyperskepticism and objectionism to dismiss another's faith-point [as noted above!], is at best self-referentially inconsistent; sometimes, even hypocritical and/or -- worse yet -- willfully deceitful. Instead, we should carefully work through the comparative difficulties across live options at worldview level, especially in discussing matters of fact. And it is in that context of humble self consistency and critically aware, charitable open-mindedness that we can now reasonably proceed with this discussion.) In short, none of us actually lives or can consistently live as though s/he seriously believes that: absent absolute proof to the contrary, we must believe that all is noise. [To see the force of this, consider an example posed by Richard Taylor. You are sitting in a railway carriage and seeing stones you believe to have been randomly arranged, spelling out: "WELCOME TO WALES." Would you believe the apparent message? Why or why not?] Q: Why then do we believe in intelligent sources behind the web pages and email messages that we receive, etc., since we cannot ultimately absolutely prove that such is the case? ANS: Because we believe the odds of such "lucky noise" happening by chance are so small, that we intuitively simply ignore it. That is, we all recognise that if an apparent message is contingent [it did not have to be as it is, or even to be at all], is functional within the context of communication, and is sufficiently complex that it is highly unlikely to have happened by chance, then it is much better to accept the explanation that it is what it appears to be -- a message originating in an intelligent [though perhaps not wise!] source -- than to revert to "chance" as the default assumption. Technically, we compare how close the received signal is to legitimate messages, and then decide that it is likely to be the "closest" such message. (All of this can be quantified, but this intuitive level discussion is enough for our purposes.) In short, we all intuitively and even routinely accept that: Functionally Specified, Complex Information, FSCI, is a signature of messages originating in intelligent sources. Thus, if we then try to dismiss the study of such inferences to design as "unscientific," when they may cut across our worldview preferences, we are plainly being grossly inconsistent. Further to this, the common attempt to pre-empt the issue through the attempted secularist redefinition of science as in effect "what can be explained on the premise of evolutionary materialism - i.e. primordial matter-energy joined to cosmological- + chemical- + biological macro- + sociocultural- evolution, AKA 'methodological naturalism' " [ISCID def'n: here] is itself yet another begging of the linked worldview level questions. 
For in fact, the issue in the communication situation once an apparent message is in hand is: inference to (a) intelligent -- as opposed to supernatural -- agency [signal] vs. (b) chance-process [noise]. Moreover, at least since Cicero, we have recognised that the presence of functionally specified complexity in such an apparent message helps us make that decision. (Cf. also Meyer's closely related discussion of the demarcation problem here.) More broadly the decision faced once we see an apparent message, is first to decide its source across a trichotomy: (1) chance; (2) natural regularity rooted in mechanical necessity (or as Monod put it in his famous 1970 book, echoing Plato, simply: "necessity"); (3) intelligent agency. These are the three commonly observed causal forces/factors in our world of experience and observation. [Cf. abstract of a recent technical, peer-reviewed, scientific discussion here. Also, cf. Plato's remark in his The Laws, Bk X, excerpted below.] Each of these forces stands at the same basic level as an explanation or cause, and so the proper question is to rule in/out relevant factors at work, not to decide before the fact that one or the other is not admissible as a "real" explanation. This often confusing issue is best initially approached/understood through a concrete example . . . A CASE STUDY ON CAUSAL FORCES/FACTORS -- A Tumbling Die: Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance. But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes! This concrete, familiar illustration should suffice to show that the three causal factors approach is not at all arbitrary or dubious -- as some are tempted to imagine or assert. [More details . . .] . . . . The second major step is to refine our thoughts, through discussing the communication theory definition of and its approach to measuring information. A good place to begin this is with British Communication theory expert F. R Connor, who gives us an excellent "definition by discussion" of what information is:
From a human point of view the word 'communication' conveys the idea of one person talking or writing to another in words or messages . . . through the use of words derived from an alphabet [NB: he here means, a "vocabulary" of possible signals]. Not all words are used all the time and this implies that there is a minimum number which could enable communication to be possible. In order to communicate, it is necessary to transfer information to another person, or more objectively, between men or machines. This naturally leads to the definition of the word 'information', and from a communication point of view it does not have its usual everyday meaning. Information is not what is actually in a message but what could constitute a message. The word 'could' implies a statistical definition in that it involves some selection of the various possible messages. The important quantity is not the actual information content of the message but rather its possible information content. This is the quantitative definition of information and so it is measured in terms of the number of selections that could be made. Hartley was the first to suggest a logarithmic unit . . . and this is given in terms of a message probability. [p. 79, Signals, Edward Arnold, 1972. Bold emphasis added. Apart from the justly classical status of Connor's series, his classic work dating from before the ID controversy arose is deliberately cited, to give us an indisputably objective benchmark.]
To quantify the above definition of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the "Shannon sense" -- never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s_1, s_2, s_3, . . . s_n, with probabilities p_1, p_2, p_3, . . . p_n. That is, in a "typical" long string of symbols of size M [say this web page], the average number that are some symbol s_j, call it J, will be such that the ratio J/M --> p_j, and in the limit attains equality. We term p_j the a priori -- before the fact -- probability of symbol s_j.

Then, when a receiver detects s_j, the question arises as to whether this was what was actually sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If, on average, s_j will be detected correctly a fraction d_j of the time, then the a posteriori -- after the fact -- probability of s_j is, by a similar calculation, d_j. So, we now define the information content of symbol s_j as, in effect, how much it surprises us on average when it shows up in our receiver:

I = log [d_j / p_j], in bits [if the log is base 2, log2] . . . Eqn 1

This immediately means that the question of receiving information arises AFTER an apparent symbol s_j has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present.

Second, logs are used in the definition of I, as they give an additive property: the amount of information in independent signals s_i + s_j, using the above definition, is such that:

I_total = I_i + I_j . . . Eqn 2

For example, assume for the moment that d_j is 1, i.e. we have a noiseless channel, so what is transmitted is just what is received. Then, the information in s_j is:

I = log [1/p_j] = - log p_j . . . Eqn 3

This case illustrates the additive property as well, assuming that symbols s_i and s_j are independent. That means that the probability of receiving both messages is the product of the probabilities of the individual messages (p_i * p_j); so:

I_total = log [1/(p_i * p_j)] = [- log p_i] + [- log p_j] = I_i + I_j . . . Eqn 4

So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is - log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see "wueen" it is most likely to have been "queen.")

Further to this, we may average the information per symbol in the communication system as follows (giving it in terms of -H to make the additive relationships clearer):

- H = p_1 log p_1 + p_2 log p_2 + . . . + p_n log p_n

or, H = - SUM [p_i log p_i] . . . Eqn 5

H, the average information per symbol transmitted [usually measured as bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p. 81, emphasis added.]
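As a concrete cross-check of Eqn 3 and Eqn 5, here is a minimal Python sketch; the symbol probabilities used are assumed purely for illustration:

```python
import math

def surprisal_bits(p: float) -> float:
    """Information carried by a symbol of a priori probability p (Eqn 3): -log2 p."""
    return -math.log2(p)

def entropy_bits_per_symbol(probs) -> float:
    """Average information per symbol, H = -SUM p_i log2 p_i (Eqn 5)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two equiprobable symbols (a fair binary source): each carries 1 bit, H = 1 bit/symbol.
print(surprisal_bits(0.5), entropy_bits_per_symbol([0.5, 0.5]))

# A skewed source (illustrative probabilities): the rarer symbol carries more
# information, the commoner one less, and the average H falls below 1 bit/symbol.
print(surprisal_bits(0.1), surprisal_bits(0.9), entropy_bits_per_symbol([0.9, 0.1]))
```

For the fair binary source the sketch reproduces the 1 bit/symbol figure above, while skewing the probabilities pushes the rare symbol's surprisal up and the average H down, matching the E-versus-X remark about English text.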
Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form . . . [--> previously discussed]
A baseline for discussion. KF
kairosfocus
September 5, 2012 at 04:09 AM PDT
F/N: I have put the above comment up with a diagram here.
kairosfocus
September 5, 2012 at 03:47 AM PDT
Sal: Great post! A few comments:

a) Shannon entropy is the basis for what we usually call the "complexity" of a digital string.

b) Regarding the example in:

File_B : 1 gigabit of all 1s
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): low
Organizational characteristics: highly organized
inference: designed

I would say that the inference of design is not necessarily warranted. According to the explanatory filter, in the presence of this kind of compressible order we must first ascertain that no deterministic effect is the cause of the apparent order. IOWs, many simple deterministic causes could explain a series of 1s, however long. Obviously, such a scenario would imply that the system that generates the string is not random, or that the probabilities of 0 and 1 are extremely different. I agree that, if we have assurance that the system is really random and the probabilities are as described, then a long series of 1s allows the design inference.

c) A truly pseudo-random string, which has no formal evidence of order (no compressibility), like the jpeg file, but still conveys very specific information, is certainly the best scenario for design inference. Indeed, as far as I know, no deterministic system can explain the emergence of that kind of object.

d) Regarding the problem of specification, I paste here what I posted yesterday in another thread, as I believe it is pertinent to the discussion here:

"I suppose much confusion derives from Shannon's theory, which is not, and never has been, a theory about information, but is often considered as such. Contemporary thought, in the full splendor of its dogmatic reductionism, has done its best to ignore the obvious connection between information and meaning. Everybody talks about information, but meaning is quite a forbidden word. As if the two things could be separated!

I have discussed for days here with darwinists just trying to have them admit that such a thing as "function" does exist. Another forbidden word. And even IDists often are afraid to admit that meaning and function cannot even be defined if we do not refer to a conscious being. I have challenged everybody I know to give a definition, any definition, of meaning, function and intent without resorting to conscious experience.

How strange: the same concepts on which all our life, and I would say also all our science and knowledge, are based have become forbidden in modern thought. And consciousness itself, what we are, the final medium that cognizes everything, can scarcely be mentioned, if not to affirm that it is an unscientific concept, or, even better, a concept completely reducible to non-conscious aggregations of things (!!!).

The simple truth is: there is no cognition, no science, no knowledge, without the fundamental intuition of meaning. And that intuition is a conscious event, and nothing else. There is no understanding of meaning in stones, rivers or computers. Only in conscious beings. And information is only a way to transfer meaning from one conscious being to another, through material systems that carry the meaning but have no understanding of it.

That's what Shannon considered: what is necessary to transfer information through a material system. In that context, meaning is not relevant, because what we are measuring is only a law of transmission. The same is true in part for ID. The measure of complexity is a Shannon measure; it has nothing to do with meaning. A random string can be as complex as a meaningful string.
But the concept of specification does relate to meaning, in one of its many aspects, for instance as function. The beautiful simplicity of ID theory is that it measures the complexity necessary to convey a specific meaning. That is simple and beautiful, because it connects the quantitative concept of Shannon complexity to the qualitative aspect of meaning and function."
gpuccio
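To make points (b) and (c) of the comment above concrete, a general-purpose compressor can serve as a rough, upper-bound stand-in for algorithmic entropy (Kolmogorov complexity itself is not computable). Here is a minimal Python sketch, with zlib and the 100,000-byte string sizes assumed purely for illustration:

```python
import math
import os
import zlib
from collections import Counter

def observed_bits_per_byte(data: bytes) -> float:
    """Shannon entropy estimated from the observed byte frequencies (bits/byte).
    Note: this is an after-the-fact estimate, not the a priori carrying capacity
    of the source, which for an assumed fair binary source is 1 bit per bit."""
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size: a rough proxy for algorithmic entropy."""
    return len(zlib.compress(data, 9)) / len(data)

ordered = b"\xff" * 100_000        # analogous to File_B: "all 1s"
random_like = os.urandom(100_000)  # incompressible, pseudo-random bytes

print(observed_bits_per_byte(ordered), compression_ratio(ordered))
# ~0.0 bits/byte observed; compresses to a tiny fraction -> low Kolmogorov complexity
print(observed_bits_per_byte(random_like), compression_ratio(random_like))
# ~8.0 bits/byte observed; ratio ~1 (no real compression) -> high Kolmogorov complexity
```

The "all 1s" string collapses to a tiny fraction of its size while the pseudo-random string does not, even though both, regarded as transmissions from an assumed fair binary source, have the same Shannon carrying capacity of one bit per bit.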
September 5, 2012 at 03:20 AM PDT