Uncommon Descent: Serving the Intelligent Design Community

Does information theory support design in nature?


Eric Holloway argues at Mind Matters that design theorist William Dembski makes a convincing case, using accepted information theory principles relevant to computer science:

When I first began to look into intelligent design (ID) theory while I was considering becoming an atheist, I was struck by Bill Dembski’s claim that ID could be demonstrated mathematically through information theory. A number of authors who were experts in computer science and information theory disagreed with Dembski’s argument. They offered two criticisms: that he did not provide enough details to make the argument coherent and that he was making claims that were at odds with established information theory.

In online discussions, I pressed a number of them, including Jeffrey Shallit, Tom English, Joe Felsenstein, and Joshua Swamidass. I also read a number of their articles. But I have not been able to discover a precise reason why they think Dembski is wrong. Ironically, they actually tend to agree with Dembski when the topic lies within their respective realms of expertise. For example, in his rebuttal Shallit considered an idea which is very similar to the ID concept of “algorithmic specified complexity”. The critics tended to pounce when addressing Dembski’s claims outside their realms of expertise.

To better understand intelligent design’s relationship to information theory and thus get to the root of the controversy, I spent two and a half years studying information theory and associated topics during PhD studies with one of Dembski’s co-authors, Robert Marks. I expected to get some clarity on the theorems that would contradict Dembski’s argument. Instead, I found the opposite.

Intelligent design theory is sometimes said to lack any practical application. One straightforward application is that, because intelligence can create information and computation cannot, human interaction will improve computational performance.
More.

Also at Mind Matters:

Would Google be happier if America were run more like China? This might be a good time to ask. A leaked internal discussion document, the “Cultural Context Report” (March 2018), admits a “shift toward censorship.” It characterizes free speech as a “utopian narrative,” pointing out that “As the tech companies have grown more dominant on the global stage, their intrinsically American values have come into conflict with some of the values and norms of other countries.”

Facebook’s old motto was “Move fast and break things.” With the current advertising scandal, it might be breaking itself. A tech consultant sums up the problem: “Sadly Facebook didn’t realize is that moving fast can break things…”

AI computer chips made simple Jonathan Bartlett: The artificial intelligence chips that run your computer are not especially difficult to understand. Increasingly, companies are integrating “AI chips” into their hardware products. What are these things, what do they do that is so special, and how are they being used?

The $60 billion medical data market is coming under scrutiny As a patient, you do not own the data and are not as anonymous as you think. Data management companies can come to know a great deal about you; they just don’t know your name—unless, of course, there is a breach of some kind. Time Magazine reported in 2017 that “Researchers have already re-identified people from anonymized profiles from hospital exit records, lists of Netflix customers, AOL online searchers, even GPS data of New York City taxi rides.” One would expect detailed medical data to be even more revelatory.

George Gilder explains what’s wrong with “Google Marxism”
In discussion with Mark Levin, host of Life, Liberty & Levin on Fox TV: Marx’s great error, his real mistake, was to imagine that the industrial revolution of the 19th century -- all those railways and “dark, satanic mills” and factories and turbines and the beginning of electricity -- represented the final human achievement in productivity, so that in the future what would matter is not the creation of wealth but the redistribution of wealth.

Do we just imagine design in nature? Or is seeing design fundamental to discovering and using nature’s secrets? Michael Egnor reflects on the way in which the Nobel Prize in Chemistry, most recently in 2018, has so often gone to those who intuit or impose design, or seek out the purpose of things.

Comments
Antonin, it is not an assumption that configuration-based function will be rare in sequence space, or in configuration space in general, especially code-oriented function. First, it is a readily confirmed observation that it is exceedingly difficult to obtain long enough coded strings by blind processes, where by contrast unconstrained variation will as a rule produce non-functional gibberish. Likewise, in wider config spaces (which, on description languages, are reducible to strings; see AutoCAD etc.), requisites of functional configuration generally require assembly in accord with a wiring diagram of some type, implicit or explicit. This is contrasted with something as loose as scattered or clumped in any order, which leads straight to very large spaces indeed.

The example of a 6500 fishing reel is a simple, direct illustration, as is the million-monkeys type of simulation exercise. So, in effect, you are defying abundant observation and putting up "assumption" as a case of selectively hyperskeptical dismissal. Your observed evidence for complex code-functional configurations being common in sequence spaces, and for similarly any clumped or scattered-order wiring patterns having functionality that is not rare in the space of possibilities, is: ________ . Complex here is measured by requiring at least 500 to 1,000 bits of information to describe in a reasonably compact description language. KF
kairosfocus
November 8, 2018 at 01:38 PM PDT
Antonin: From “Your approach is not persuasive” to “Good grief!” to “Just, well, arbitrary!” is a remarkable escalation! :)
gpuccio
November 8, 2018 at 01:27 PM PDT
Antonin:
*Googles Durston* Ah, Dr Kirk Durston. I see he’s entered the lion’s den at The Skeptical Zone. He appears to make the same assumption (erroneous in my view and I’m far from alone) that function is rare in sequence space. You should compare notes.
We probably have similar ideas.
gpuccio
November 8, 2018 at 01:03 PM PDT
Antonin at #453
What I suspect you are objecting to is the evolvability of the current process from simpler precursors.
I simply laid out a well-recorded history of thought on the subject. Peirce modeled the physical capacity to specify something among alternatives, requiring a representation, a referent, and a separate interpretant to establish what was being represented. This model is dead center of any materialist view of the physical cosmos. Turing built that same triadic relationship into his 1936 machine -- indeed, it was the entire mechanism behind the machine’s ability to function. Von Neumann then used that machine to successfully predict the material requirements of an autonomous self-replicator. His predictions were wholly confirmed by Crick’s discovery of a reading-frame code in DNA and his further prediction that a set of Peircean constraints would be found at work in the system. These were later isolated by Hoagland and Zamecnik, while Nirenberg et al. went on to list the individual associations established by the constraints. And all of this has since been carefully and meticulously described in the physics literature as the only other example of a general purpose (sequential, multi-referent) language (other than human language) found anywhere in the cosmos. This also includes the additional organizational requirement of being semantically closed in order to function (i.e. to persist over time). You might ask yourself why you viewed the listing of these historical facts as an attack on “evolutionary theory”.
We can only surmise what the biochemistry was like in the earliest organisms that could both grow and replicate.
Well, okay. I imagine we can do a bit more than that. We can follow established scientific reasoning and infer that the forces and substances in nature acted (back then) just as they do today. Likewise, we can infer that any theory that requires those forces and substances to act differently (back then) than they do today (accomplishing effects they’ve never been known or recorded to accomplish) is less grounded in evidence and reason than theories that do not require such things -- in particular, theories that expect nature to have acted (back then) exactly as it does today, and propose causes that are known (and well-documented) to be adequate to explain the phenomena. My question to you is ‘what is it,’ empirically speaking, that motivates the idea that the gene system isn’t exactly what it was predicted to be (a semantically-closed, sequential, multi-referent symbol system, using a medium of information established by a set of physical constraints), confirmed by experiment and fully described by physics?

Alternatively, we can certainly assume that some form of biologically-relevant constituents came together at a common place on earth. We can further assume that these constituents all came together in the necessary quantities in some form of conducive environment. We can then assume that some unknown sequential dynamic replicator formed in that environment. We can further assume that while this dynamic replicator replicated, it also dynamically produced some form of discrete product that could serve as a constraint to establish a first unique informational relationship. We can further assume that while this sequence kept on replicating it produced a second discrete product to serve as a constraint. We can then assume, one by one, more discrete products were produced by the dynamics of this unknown sequence. We can go on to assume that along its length, it continued replicating dynamically as it produced all the constraints necessary to describe itself in a symbol system, along with all the additional products that would cause it to bring its symbols and constraints together in a hierarchy capable of transitioning from a dynamic replicator to a symbolic replicator. Or we can shuffle all those things around and come up with a whole new set of assumptions.

Likewise, we can simply ignore the logic that any organization which must maintain its function as it transitions from one system of description to another must (at some point in that transition) function in both systems. We can also ignore how many constraints are required to synthesize a constraint from memory. And we can certainly ignore the simultaneous organization required for semantic closure to occur.

Is this the bottom line for you? Do you believe that, given the documented physical evidence (with its genuine history of prediction and discovery), there is simply no room for dispute about the validity of materialists’ assumptions -- that is, there is no intellectual or scientifically-respectable dissent from materialism? Is this not where you are at? Or do you have a positioning statement to retain the line in practice while seeming otherwise?
Upright BiPed
November 8, 2018 at 12:31 PM PDT
S = -k_B [SUM on i] p_i ln p_i

This brings in the distribution of p_i across possible microstates i.
kairosfocus
November 8, 2018 at 12:12 PM PDT
F/N: For convenience, Wiki:
The macroscopic state of a system is characterized by a distribution on the microstates. The entropy of this distribution is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if E_i is the energy of microstate i, and p_i is the probability that it occurs during the system's fluctuations, then the entropy of the system is

S = -k_B [SUM on i] p_i ln p_i

Entropy changes for systems in a canonical state: A system with a well-defined temperature, i.e., one in thermal equilibrium with a thermal reservoir, has a probability of being in a microstate i given by Boltzmann's distribution. The quantity k_B is a physical constant known as Boltzmann's constant, which, like the entropy, has units of heat capacity. The logarithm is dimensionless.

This definition remains meaningful even when the system is far away from equilibrium. Other definitions assume that the system is in thermal equilibrium, either as an isolated system, or as a system in exchange with its surroundings. The set of microstates (with probability distribution) on which the sum is done is called a statistical ensemble. Each type of statistical ensemble (micro-canonical, canonical, grand-canonical, etc.) describes a different configuration of the system's exchanges with the outside, varying from a completely isolated system to a system that can exchange one or more quantities with a reservoir, like energy, volume or molecules. In every ensemble, the equilibrium configuration of the system is dictated by the maximization of the entropy of the union of the system and its reservoir, according to the second law of thermodynamics (see the statistical mechanics article).

Neglecting correlations (or, more generally, statistical dependencies) between the states of individual particles will lead to an incorrect probability distribution on the microstates and thence to an overestimate of the entropy. Such correlations occur in any system with nontrivially interacting particles, that is, in all systems more complex than an ideal gas. This S is almost universally called simply the entropy. It can also be called the statistical entropy or the thermodynamic entropy without changing the meaning. Note the above expression of the statistical entropy is a discretized version of Shannon entropy. The von Neumann entropy formula is an extension of the Gibbs entropy formula to the quantum mechanical case.
KF
kairosfocus
November 8, 2018 at 12:10 PM PDT
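To make the quoted formula concrete, here is a minimal Python sketch that evaluates S = -k_B [SUM on i] p_i ln p_i for a toy three-level system whose microstate probabilities follow the Boltzmann distribution mentioned in the excerpt; the energy values are invented purely for illustration.

    import math

    k_B = 1.380649e-23  # Boltzmann's constant, J/K

    def gibbs_entropy(probs):
        """S = -k_B * sum_i p_i ln p_i over a discrete set of microstate probabilities."""
        return -k_B * sum(p * math.log(p) for p in probs if p > 0)

    def boltzmann_probs(energies, T):
        """Canonical (Boltzmann) distribution over microstates with the given energies (J) at temperature T (K)."""
        weights = [math.exp(-E / (k_B * T)) for E in energies]
        Z = sum(weights)  # partition function
        return [w / Z for w in weights]

    # Hypothetical three-level system; the energy values are illustrative only.
    energies = [0.0, 1e-21, 2e-21]
    p = boltzmann_probs(energies, T=300.0)
    print("probabilities:", p)
    print("Gibbs entropy S =", gibbs_entropy(p), "J/K")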
@Bill Cole and Joe Felsenstein: here's an analogy for what I see with information theory and ID. In quantum physics there is something known as Bell's theorem, which has been used to disprove local realism. Bell derived from first principles a distribution that quantum experiments must follow if local realism is true. All experiments have shown that quantum experiments do not follow the distribution. Thus we know local realism is false.

In a similar manner, Levin's law gives a distribution that all physical phenomena must follow if they are purely the result of Turing-reducible processes and random oracles (natural causes). Thus, if what we actually observe does not follow the distribution, then we can eliminate what I am labeling "natural causes" as a complete explanation for the natural world. Levin's law also seems to let us go further and identify a distribution generated by "halting oracles," potentially giving us the ability to make positive identification of halting oracles in our universe.

https://www.am-nat.org/site/halting-oracles-as-intelligent-agents/
https://www.am-nat.org/site/law-of-information-non-growth/

This is equivalent to what Dembski's work shows. Hence my claim that Dembski's argument is supported by information theory.
EricMH
November 8, 2018 at 12:06 PM PDT
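Levin's law concerns algorithmic mutual information, which is not directly computable, but its classical Shannon-theory cousin, the data processing inequality, carries the same no-growth intuition and is easy to check numerically. A minimal sketch, with arbitrary toy probabilities (not anything taken from the linked papers): it builds a Markov chain X -> Y -> Z and confirms that processing Y into Z never increases the information Z carries about X.

    import itertools
    import math

    def mutual_information(joint):
        """I(A;B) in bits from a joint distribution given as a dict {(a, b): p}."""
        pa, pb = {}, {}
        for (a, b), p in joint.items():
            pa[a] = pa.get(a, 0.0) + p
            pb[b] = pb.get(b, 0.0) + p
        return sum(p * math.log2(p / (pa[a] * pb[b]))
                   for (a, b), p in joint.items() if p > 0)

    # Toy Markov chain X -> Y -> Z over binary alphabets.
    p_x = {0: 0.5, 1: 0.5}
    p_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # noisy channel from X to Y
    p_z_given_y = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.3, 1: 0.7}}   # further processing of Y alone

    joint_xy, joint_xz = {}, {}
    for x, y, z in itertools.product([0, 1], repeat=3):
        p = p_x[x] * p_y_given_x[x][y] * p_z_given_y[y][z]
        joint_xy[(x, y)] = joint_xy.get((x, y), 0.0) + p
        joint_xz[(x, z)] = joint_xz.get((x, z), 0.0) + p

    print("I(X;Y) =", mutual_information(joint_xy))
    print("I(X;Z) =", mutual_information(joint_xz))  # never exceeds I(X;Y)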
Antonin:
I’m sure Professor Felsenstein will address your points but let me just repeat you seem to have a fundamental misconception about how evolution is postulated to work. And the assumption of “islands of function” is an assertion that doesn’t bear scrutiny.
What misconception? Details, please. Why doesn't it bear scrutiny? I have given reasons for my "assumption". You haven't given any against it. More in next post.
gpuccio
November 8, 2018 at 11:58 AM PDT
Antonin:
I don’t think that’s the problem. Joe Felsenstein’s complaint is that definitions, not least Dembski’s, have morphed. He’s asking ID proponents to decide on a definitive (heh) definition.
Functional information is a true concept. It works, it is there. Definitions can change, but the concept remains true. I am rather sure that my definition works, so I stick to it. As I always say, ID is not a political party. We are here, all of us, to understand what is true. Including JF.
But a protein is not a text, is not a machine, is not software. Analogies may help in understanding, they can also mislead.
Wrong. A protein is certainly not a text, and it is somewhat different from software. But it is definitely a machine. Or at least, most of them are. This is not an analogy. It is a simple and obvious truth.
One thing I’ve learned from following scientific arguments is that math is a powerful analytical tool and mathematical modelling has served science well. But it has also led some to revere the model rather than be sure how well it fits reality. Map and territory!
Of course. I am a big fan of the map and territory idea. I got it from my limited acquaintance with NLP.
Felsenstein has issues there too. Dembski has changed his definition more than once.
Maybe. However, I haven't. Note: I am not saying in any way that I am any better than Dembski. Just stating that I have been sticking to my definition, probably for lack of creative thought! :)
I think you fall into that trap I just mentioned. Doesn’t matter how elegant the model if it doesn’t tally with reality.
It's you who are falling into a trap, in particular a bad analogy, or rather a good analogy badly used. Of course no model corresponds to reality. But there are good models and bad models, and science is about choosing among different models. Going back to the map and territory issue, you certainly understand that a map is a good map if it is useful for what we have to do. Complex functional information is a very good map for inferring design, and that is exactly what we want to do. I have never said that any map corresponds to reality, not even ID theory. But a good map is useful for understanding reality better. More in next post.
gpuccio
November 8, 2018 at 11:54 AM PDT
@Bob O'H, I missed a comment of yours:

> EricMH @ 272 – if ASC is a hypothesis test w.r.t. a specific hypothesis, then you should be able to provide the maths to show how you go from the hypothesis to ASC as a test statistic. I can see how you can do that for CSI (because of the relationship between Shannon’s entropy and the multinomial distribution), but not ASC.

The specific hypothesis would be addressed by the probability distribution used to measure self-information. Same idea as with the complexity term in CSI. I may be still misunderstanding you.
EricMH
November 8, 2018 at 11:51 AM PDT
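One common way a statistic like ASC is approximated in practice: ASC(x) = -log2 P(x) - K(x), with the uncomputable K(x) replaced by a compressed length, which can only overestimate K and therefore yields a lower bound on ASC. The uniform chance hypothesis and the zlib stand-in below are illustrative assumptions, not choices taken from the thread.

    import math
    import os
    import zlib

    def approx_asc_bits(x: bytes, p_char: float) -> float:
        """Approximate algorithmic specified complexity of a byte string.

        ASC(x) = -log2 P(x) - K(x). Here P is an i.i.d. chance hypothesis
        assigning probability p_char to each symbol, and K(x) is replaced by
        the zlib-compressed length in bits (an upper bound on K, so the
        returned value is a lower bound on ASC).
        """
        self_info = -len(x) * math.log2(p_char)       # -log2 P(x) under the chance hypothesis
        complexity = 8 * len(zlib.compress(x, 9))     # crude stand-in for K(x), in bits
        return self_info - complexity

    # A highly ordered string scores high; a typical random-looking one scores near or below zero.
    ordered = b"ABCD" * 250
    random_ish = os.urandom(1000)
    print("ordered:", approx_asc_bits(ordered, 1 / 256))
    print("random :", approx_asc_bits(random_ish, 1 / 256))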
Mung: "I almost agree with what you have said in your post. A caveat. If we are talking about “Shannon information” then we should say entropy and not information unless we are talking about “mutual information.”" I am very fine with that! :) "How about if we think of a probability distribution as a potential model? It may or may not model an empirical object. We can plug numbers into it that have no relation to anything. It may be entirely abstract." I think we essentially agree, except a little in terminology. A probability distribution is a mathematical object. It becomes a model when we use it to model some data. Now, those data (the numbers we try to model by the mathematical object) are usually derived from real empirical scenarios. Or they can, of course, be arbitrary: in that case, it would be a simulation of modeling. However, there is a subtle difference between the distribution, which is a mathematical object, and the model realized using that distribution. For example, the model has residues, and so on, because usually the distribution does not fit perfectly the data. The normal distribution is an invariable mathematical object. When we apply it to model real data, we have specific values for mean, standard deviation and so on. That said, I think we agree.gpuccio
November 8, 2018
November
11
Nov
8
08
2018
11:30 AM
11
11
30
AM
PDT
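The distinction between the distribution as a mathematical object and the fitted model, with its residuals, can be shown in a few lines. A minimal sketch with simulated data: the normal form is fixed, the model is the concrete mean and standard deviation estimated from the data, and the residuals measure the imperfect fit.

    import math
    import random

    random.seed(0)

    # Stand-in "empirical" data (simulated here, so purely illustrative).
    data = [random.gauss(10.0, 2.0) for _ in range(1000)]

    # The normal distribution is a fixed mathematical object; turning it into a
    # model of these particular data means estimating concrete parameters:
    mu = sum(data) / len(data)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))

    def normal_cdf(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    # Residuals: observed frequency in each interval minus what the fitted model predicts.
    edges = [4, 6, 8, 10, 12, 14, 16]
    for lo, hi in zip(edges, edges[1:]):
        observed = sum(lo <= x < hi for x in data) / len(data)
        predicted = normal_cdf(hi) - normal_cdf(lo)
        print(f"[{lo:2d},{hi:2d}): observed {observed:.3f}, model {predicted:.3f}, residual {observed - predicted:+.3f}")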
More cluelessness from Felsenstein:
If we have a population of DNA sequences, we can imagine a case with four alleles of equal frequency. At a particular position in the DNA, one allele has A, one has C, one has G, and one has T. There is complete uncertainty about the sequence at this position. Now suppose that C has 10% higher fitness than A, G, or T (which have equal fitnesses). The usual equations of population genetics will predict the rise of the frequency of the C allele. After 84 generations, 99.9001% of the copies of the gene will have the C allele. This is an increase of information: the fourfold uncertainty about the allele has been replaced by near-certainty. It is also specified information — the population has more and more individuals of high fitness, so that the distribution of alleles in the population moves further and further into the upper tail of the original distribution of fitnesses.
Unbelievable. The sad part is he really thinks that he has something there. The "information" in CSI pertains to a sequence or sequences, either of nucleotides or of amino acids, that perform specific functions with respect to biology. It also pertains to individuals within a population.
ET
November 8, 2018 at 10:53 AM PDT
Antonin, is info stored in a hard drive in the mind of an observer? If the drive were wiped with a random pattern, would that be anywhere likely to ever be functional, regardless of how many times it were done within sol-system-scale resources? Why or why not? KF
kairosfocus
November 8, 2018 at 09:55 AM PDT
Antonin:
The model is not the reality!
That's your opinion. You don't seem to be able to support anything that you post, though. And the fact that you couldn't see that Felsenstein is blowing smoke proves that you don't deserve to be in a discussion here.
ET
November 8, 2018 at 09:53 AM PDT
F/N: configurationally specific, complex function will be exceedingly rare and isolated in a space of possible configs in light of that threshold, as, say, shaking a bait bucket full of 6500 fishing reel parts and hoping to assemble the reel will illustrate. Do not let dismissive rhetoric against a Nobel-equivalent prize holder mislead you into thinking the problem is not real. Ponder the search challenge to find functional strings of 500 bits as just shown, to see what is being said. If you don't like coins, ponder a paramagnetic substance in a weak aligning field as a more obviously physical case.

And -- notice how confusions and dismissals compound -- the D/RNA string for a 300 AA protein will be about 900 bases long, each base having 4 possible states. That is, 1800 bits of raw information-storage capacity. The codes used and the synonyms etc., plus the patterns of what varies and what does not for a given protein across the world of life, will reduce info content, but not anywhere near enough to undermine the Fermi calc for a cell with 500 proteins. OoL by blind chance and/or mechanical necessity is a non-starter, never mind hopeful scenarios.

More to the point, we find a coherent code with algorithms reflecting a deep knowledge base of AA sequence possibilities, and associated with a complex molecular-tech cybernetic system. The evidence is clear enough that blind mechanisms are utterly implausible explanations for the world of life from its root. KF
kairosfocus
November 8, 2018 at 09:52 AM PDT
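The 1800-bit figure is simply the raw capacity of a 900-base string over a 4-letter alphabet; a short sketch makes the arithmetic explicit.

    import math

    # Raw storage capacity of a nucleotide string: each base has 4 possible states,
    # so it contributes log2(4) = 2 bits of capacity (an upper bound; as noted above,
    # codes, synonyms and tolerated variation reduce the actual info content).
    def raw_capacity_bits(n_bases: int, alphabet_size: int = 4) -> float:
        return n_bases * math.log2(alphabet_size)

    print(raw_capacity_bits(900))   # 1800.0 bits for a ~300-amino-acid coding region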
Felsenstein is clueless:
Starting in 2005, Dembski changed the definition of his CSI. He'd say that he clarified it, showing what he had meant all along. But after 2005, his original LCCSI is no longer discussed. Instead he has a new way of proving that CSI cannot be achieved by natural evolutionary forces: he simply refuses to define the population as having CSI if its state can be achieved by natural evolutionary forces! It's only CSI if evolution can't get there. So how do we know that? He doesn't say -- it's up to us to find a way to show that, in order to be able to call it CSI. Which reverses the whole effect of showing something has CSI. CSI formerly was being used to show evolution couldn't get there. Now we have to separately show (somehow) that evolution can't get there before we can call it CSI. Which makes CSI a useless add-on to the whole argument. Dembski's new argument will be found here. It was published in Philosophia Christi in 2005. Some further comments on the new argument by me will be found here (comments are missing as in many older PT threads -- we hope to restore them some day soon).
Wrong! In 2005 Wm Dembski revised/refined his views on "specification", and he never said that a population cannot have CSI if it arose via blind and mindless processes. That is total BS by Felsenstein. CSI exists regardless of how it came to be. The point is that materialistic processes cannot produce it. No one even knows how to go about demonstrating such a thing. Then Joe F mumbles something about fitness when that really isn't even the issue. Increased fitness can occur with a loss of functionality. We can easily dismantle Shallit and Elsberry, too.
ET
November 8, 2018 at 09:50 AM PDT
KF:
In short, entropy is an index of missing info to get the microstate, starting from the macro.
Careful with that, though. It leads to the idea that entropy is in the mind of the observer. Better-informed observers, reduction in entropy. Completely informed observer: zero entropy. Can that be right?
Antonin
November 8, 2018 at 09:43 AM PDT
This is about evolution by means of blind and mindless processes
Exactly! The model is not the reality! Thanks for confirming that. I might even revise my opinion of you as an interlocutor! :)
Antonin
November 8, 2018 at 09:40 AM PDT
PS: Observe from Wiki's admissions how entropy is informational (noting that in effect we can always round a continuous quantity to a discrete one):
in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.
In short, entropy is an index of missing info to get the microstate, starting from the macro.

PPS: I should add that the 500 to 1,000 bit complexity thresholds, as reasonable upper limits for the sol system or the observed cosmos, can readily be understood. Consider the 10^57 atoms of our sol system as observers, each updating observations of a tray of 500 coins every 10^-12 to 10^-15 s. Run this exercise for 10^17 s; let's use 10^-14 s per observation. We have a grand total of 10^(57 + 17 + 14) = 10^88 possible observations. The 2^500 possible states of 500 coins = 3.27*10^150. In short, a yardstick for negligible possible blind search of a config space. The 1000-bit version is even more generous for the solar system. Too much haystack, too little time and resources to search blindly hoping to come across a needle.
kairosfocus
November 8, 2018 at 09:39 AM PDT
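The arithmetic in the PPS is easy to reproduce; the sketch below uses exactly the figures stated above and prints the resulting exponents.

    from math import log10

    # Figures from the comment above: 10^57 atoms as observers, each inspecting a
    # tray of 500 coins every 10^-14 s, for 10^17 s.
    observers = 10 ** 57
    observations_per_second = 10 ** 14
    seconds = 10 ** 17

    total_observations = observers * observations_per_second * seconds  # 10^88
    config_space = 2 ** 500                                              # about 3.27 * 10^150

    print(f"total observations    ~ 10^{log10(total_observations):.0f}")
    print(f"500-coin config space ~ 10^{log10(config_space):.1f}")
    print(f"fraction searchable   ~ 10^{log10(total_observations) - log10(config_space):.1f}")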
Antonin:
But a protein is not a text, is not a machine, is not software.
They exist and their existence is due to a code and the components required to carry it out in the required manner. So to that end they need to be explained.
But evolution is not a tornado in a junkyard.
This is not about mere evolution. This is about evolution by means of blind and mindless processes.
There can be intermediates. This arbitrary claim of 500 bits is just, well, arbitrary!
Yes, it is arbitrary and way too high. There isn't any evidence for blind and mindless processes coming close to that figure. Dembski used it only because it allegedly breaches the upper probability bound.
He appears to make the same assumption (erroneous in my view and I’m far from alone) that function is rare in sequence space.
Where is the peer review or any science to support YOUR claim? Can YOU or anyone else show that function is abundant, or whatever your claim is, in sequence space? By the way, starting with living organisms means you are starting with the very things that need explaining. And that is all you, Joe F, or anyone else can do.
ET
November 8, 2018 at 09:35 AM PDT
@bill cole, thanks. I think Felsenstein misunderstood what I am saying.

> I was struck by Bill Dembski's claim that ID could be demonstrated mathematically through information theory.

I am merely pointing out that Dembski's claim that CSI cannot be generated by natural processes is supported by information theory. Whether biological entities have CSI or not is a separate matter. Perhaps they do, perhaps they don't. But at any rate, Dembski's theoretical framework is valid, and if CSI is detected then it indicates intelligent design.

I am currently looking into FI and other approaches to see if any qualify as the right sort of algorithmic mutual information. The "unreasonable effectiveness of mathematics in the natural sciences" strikes me in particular as mutual information with an independent variable, since mathematics is not produced by the physical world. https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html

Also, I saw that you only found Levin's Russian publication. Here is his English paper that talks about conservation of information in section 1.2: https://core.ac.uk/download/pdf/82092683.pdf

I have a high-level explanation of how halting oracles make an empirical difference: https://www.am-nat.org/site/halting-oracles-as-intelligent-agents/ and a mathematically detailed overview of Levin's proof, and of how a halting oracle can violate his law: https://www.am-nat.org/site/law-of-information-non-growth/
EricMH
November 8, 2018 at 09:29 AM PDT
EMH, that's a really weird twist! Entropy is informational, but it is not information (and for sure not functionally specific coded information), which is why Brillouin spoke of negentropy way back. Let me give the telling clip against interest from Wiki on entropy, again:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>])

Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments -- we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the very large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles, i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc.], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) For, as he astutely observes on pp. vii - viii:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
Observe again, from Gilbert Newton Lewis as echoed by Wikipedia:
in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.
Anyone so confused as to imagine the opposite is missing the heart of the matter. KF
kairosfocus
November 8, 2018 at 09:20 AM PDT
GP, 463: that's why I termed them, descriptively, loading enzymes. KF
kairosfocus
November 8, 2018 at 09:10 AM PDT
Eric, Gpuccio, Mung: this was posted yesterday at TSZ.
Eric Holloway needs our help (new post at Panda’s Thumb)
Posted on November 7, 2018 by Joe Felsenstein

Just a note that I have put up a new post at Panda’s Thumb in response to a post by Eric Holloway at the Discovery Institute’s new blog Mind Matters. Holloway declares that critics have totally failed to refute William Dembski’s use of Complex Specified Information to diagnose Design. At PT, I argue in detail that this is an exactly backwards reading of the outcome of the argument. Commenters can post there, or here — I will try to keep track of both. There has been a discussion of Holloway’s argument by Holloway and others at Uncommon Descent as well (links in the PT post). gpuccio also comments there trying to get someone to call my attention to an argument about Complex Functional Information that gpuccio made in the discussion of that earlier. I will try to post a response on that here soon, separate from this thread.
bill cole
November 8, 2018 at 08:39 AM PDT
The idea of a pdf is not bad. Maybe I could work at it, let’s see.
Well, good!
However, I cannot believe that, as EugeneS says, people, including JF, still have problems to understand the idea of specified information and functional information. It’s not so difficult.
I don't think that's the problem. Joe Felsenstein's complaint is that definitions, not least Dembski's, have morphed. He's asking ID proponents to decide on a definitive (heh) definition.
Anyone can understand that a machine, or a software, or a text, or a protein, needs a minimal level of bit complexity to do what it does. It’s such a simple and intuitive idea that it is a real mystery how people try to deny it.
But a protein is not a text, is not a machine, is not software. Analogies may help in understanding, they can also mislead.
I have not yet read JF’s article about Dembski. As I am not a mathematician, I try not to discuss details of general theories about information, information conservation, and so on.
One thing I've learned from following scientific arguments is that math is a powerful analytical tool and mathematical modelling has served science well. But it has also led some to revere the model rather than be sure how well it fits reality. Map and territory!
As I have said many times, I have problems with the famous Dembski paper about specification. Maybe it’s just my limited mathematical proficiency.
Felsenstein has issues there too. Dembski has changed his definition more than once.
But the basic idea of specified information, and in particular of functionally specified information, is simple, beautiful and universal. I have only tried to express it in empirical terms that can easily be applied to biology.
I think you fall into that trap I just mentioned. Doesn't matter how elegant the model if it doesn't tally with reality.
JF has explicitly criticized my arguments as if my idea that functional information must be computed for one explicitly defined function were some weird addition to ID theory. But that’s not true. All examples of functional information discussed in biological settings are of that kind. How can JF insist that many simple mutations that give reproductive advantage, happening “anywhere in the genome”, add up to generate 500 bits of functional information? What argument is this? Those are independent, simple events that can be naturally selected if they increase “fitness”, exactly as the thief can gain from different safes with simple keys. That has nothing to do with complex functional information.
I'm sure Professor Felsenstein will address your points, but let me just repeat: you seem to have a fundamental misconception about how evolution is postulated to work. And the assumption of "islands of function" is an assertion that doesn't bear scrutiny.
The alpha and beta chains of ATP synthase are complex functional information. A single object (indeed, just part of it) that requires hundreds of specific AAs to work. That’s the big safe. That’s what cannot be reached by RV + NS
(I sense some Texas sharpshooting coming up)
...because: a) RV cannot find anything that needs more than 500 bits of functional complexity to implement its function.
But evolution is not a tornado in a junkyard. There can be intermediates. This arbitrary claim of 500 bits is just, well, arbitrary! :)
b) There is no naturally selectable path to a complex function which cannot exist without at least 500 bits of information already there.
See above
The fact that complex proteins require a lot of specific information to work is absolutely incontrovertible: even those biologists who are honest enough to recognize the importance of functional information admit it. See Szostak. Even JF admits that the concept of functional information is important, even if immediately after he demonstrates that he has not understood it at all. My application of a simple procedure to compute FI in proteins is really simple too. The idea is simple and powerful. We know there is FI in proteins. How can we approximate its value? The answer is: functional conservation of the sequence through long evolutionary times. It’s an answer directly derived from the only part of the neo-darwinist paradigm that works: the idea of negative NS, aka purifying selection. The idea is not mine, of course. It’s there in all biological thought. Durston was probably the first to apply it in a definite bioinformatic procedure. My idea is essentially the same, even if the procedure is somewhat different. As I have explained in detail many times.
*Googles Durston* Ah, Dr Kirk Durston. I see he's entered the lion's den at The Skeptical Zone. He appears to make the same assumption (erroneous in my view, and I'm far from alone) that function is rare in sequence space. You should compare notes.
Antonin
November 8, 2018 at 08:37 AM PDT
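The general shape of the conservation-based estimate being discussed can be sketched briefly: score each alignment column by how far its observed Shannon entropy falls below the log2(20) ground state and sum the deficits. The four-sequence alignment below is invented and far too small for a real estimate, so treat it as an illustration of the idea rather than either author's actual procedure.

    import math
    from collections import Counter

    AA_ALPHABET_SIZE = 20  # standard amino acids

    def column_entropy(column):
        """Shannon entropy (bits) of the residues observed in one alignment column."""
        counts = Counter(column)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def functional_information_estimate(alignment):
        """Sum over columns of (ground-state entropy - observed entropy), in bits.

        Strongly conserved sites (low column entropy) contribute close to
        log2(20) bits each; unconstrained sites contribute close to zero.
        """
        h_ground = math.log2(AA_ALPHABET_SIZE)
        ncols = len(alignment[0])
        return sum(h_ground - column_entropy([seq[i] for seq in alignment])
                   for i in range(ncols))

    # Tiny made-up alignment, for illustration only (a real estimate needs many
    # diverged homologous sequences).
    alignment = [
        "MKVLA",
        "MKVIA",
        "MKVLG",
        "MKVMA",
    ]
    print(f"estimated FI ~ {functional_information_estimate(alignment):.1f} bits")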
Eric, thanks.
EugeneS
November 8, 2018 at 08:28 AM PDT
Mung: Thanks for the correction. Here it is on Shannon information: https://en.wikipedia.org/wiki/Information

This is how I understand the connection. The notions of information and entropy are closely related. For states with minimum entropy (e.g. at 0 K), the information the observer gets when such a state is realized is minimal. At high temperatures, the information provided by observations increases (because the states approach equiprobability). It is not without reason that information entropy H is measured in bits.
EugeneS
November 8, 2018 at 08:27 AM PDT
gpuccio:
I don’t think that we can “measure” a probability distribution. A probability distribution is a mathematical object. I think what you mean is that Shannon’s entropy uses a probability distribution to effect measures on empirical objects.
How about if we think of a probability distribution as a potential model? It may or may not model an empirical object. We can plug numbers into it that have no relation to anything. It may be entirely abstract. As long as values are assigned for the probabilities we can "measure" the "information" associated with that distribution. There is no necessary connexion with anything material or empirical, or any other object other than the mathematical object that is the distribution itself. That's what I think.
Mung
November 8, 2018 at 07:49 AM PDT
kairosfocus:
As for defining a probability, that is an exercise in itself in Mathematics and philosophy.
:) Worth the read: Probability Theory: The Logic of Science.

Kolmogorov developed an axiomatic approach to probability. I hear it works if you're willing to accept the axioms.
Mung
November 8, 2018 at 07:34 AM PDT
gpuccio:
It’s better to be precise, rather than looking for some universal definition of information that does not exist.
:) I almost agree with what you have said in your post. A caveat. If we are talking about "Shannon information" then we should say entropy and not information, unless we are talking about "mutual information." Using the term "entropy" is absolutely defensible from the literature on information theory. Cover and Thomas is a classic text. They barely use the word information at all.

Personally, after having read Arieh Ben-Naim for years, I go one step further and try to avoid using the term entropy, because that term has a specific meaning within thermodynamics and statistical mechanics and it is not the same as Shannon entropy. People get confused. Shannon entropy has a far broader scope, as it applies to any probability distribution. Thermodynamic entropy does not. So the alternative suggested by Ben-Naim is SMI, the Shannon Measure of Information. It's idiosyncratic, but clearly delineates the subject matter.
Mung
November 8, 2018 at 07:19 AM PDT
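That the Shannon measure applies to any probability distribution, thermodynamic or not, is easy to show; a minimal sketch using the SMI naming Ben-Naim suggests:

    import math

    def smi(probs):
        """Shannon Measure of Information (bits) of any probability distribution."""
        assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # It applies to any distribution at all, physical or purely abstract:
    print(smi([0.5, 0.5]))           # 1 bit: a fair coin
    print(smi([0.25] * 4))           # 2 bits: four equiprobable outcomes
    print(smi([0.9, 0.05, 0.05]))    # ~0.57 bits: a sharply peaked distribution
    print(smi([1.0]))                # 0 bits: no uncertainty at all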
