Uncommon Descent Serving The Intelligent Design Community

Biologist Wayne Rossiter on Joshua Swamidass’ claim that entropy = information


From Wayne Rossiter, author of Shadow of Oz: Theistic Evolution and the Absent God, at Shadow of Oz:

Swamidass surprises us with a very counter-intuitive statement:

“Did you know that information = entropy? This means the 2nd law of thermodynamics guarantees that information content will increase with time, unless we do something to stop it.”

Of course, he is arguing much more than just the observation that entropy = information. He’s arguing that the 2nd Law of Thermodynamics guarantees an increase in information. This seems very sloppy to me. Yes, if we consider time as a variable, then the universe will develop pockets of increased information (complexity?), while still experiencing a global loss of information and complexity. The universe putatively began as a maximally dense, maximally hot locus. As it expands, heat and potential energy will dissipate, even if, locally, structure and information emerge.

In some ways, Swamidass’s argument sounds similar to that of Lawrence Krauss (a devout nonbeliever). Swamidass is basically arguing that the universe begins at maximum energy, but minimum complexity (no information-rich structures, just the simplest atoms–hydrogen and helium). It then moves through stages of more complex (information-rich) structures, even as it loses available energy. It then ends largely as it began, and those complex structures reduce and decay back to a uniformly distributed universe, devoid of free energy, and the universe becomes a “sparse soup of particles and radiation.” It emerges as simplicity and ends as simplicity. More.

If Swamidass (or any similar person) is funded by Templeton, he can argue anything he wants, including any form of fake physics. Making sense is optional.

Sorry, gotta go fill up another trash bag full of suddenly appearing Boltzmann brains … Quite the plague this year.

See also: 2016 worst year ever for “fake physics”?

Follow UD News at Twitter!

Comments
Origenes: "I would like to note that being integral to a larger functional system* is a prerequisite for functional information." A very important point. And yet, I would say yes and no, for the reasons I am going to explain. Let's take, for example, a protein which has an enzymatic function. Of course, the enzymatic function is linked to many cellular contexts, and so the concept of functional coherence is very important. Moreover, there are some basic requisites without which no function can be performed: the protein must be located in the correct cellular environment, in the correct biochemical setting, pH, and so on. So, in that sense, it is dependent on the cellular system, not only because its function is linked to higher order functions, but also because its function could not work in any environment (say, in interplanetary space). In a sense, I would say that the concept of functional coherence is very similar (but not identical) to the concept of irreducible complexity. It is a pillar of ID theory, and I would never underestimate its importance. However, that said, it is also true that we can focus our attention, for practical reasons, on the "local" function of the protein, without considering, for the moment, the links to other levels of function in the cell. For example, for an enzyme, the local function can be easily defined as the ability to accelerate a specific biochemical reaction, provided that the basic environment is there (pH, temperature, substrate, and so on). This is useful, because we can usually demonstrate that the local function of a protein is already functionally complex enough to infer design. And, while higher level functions and connections certainly add to the functional complexity of that protein, they are more difficult to evaluate quantitatively. Therefore, it is often useful to compute the functional complexity of the local function in itself, which is certainly a lower threshold of the approximation of the whole functional complexity of that protein, and is much easier to compute. IOWs, you have to have a certain amount of functional complexity to be able to accelerate a specific reaction, even if you do not consider why that reaction is necessary in the cell, if it is part of a cascade, and so on. Finally, I generally use the word "system" in a generic sense, not connected to functionality, because that is the common use of the word in physics. See Wikipedia: https://en.wikipedia.org/wiki/Physical_system "In physics, a physical system is a portion of the physical universe chosen for analysis." I find the concept useful, because we can define any kind of physical system, and then analyze if there is any evidence of complex function and design inside that system.gpuccio
March 19, 2017 at 09:50 AM PDT
GPuccio: Functional information, the kind of information we deal with in language, software and biological strings, is not specially “ordered”. Its specification is not order or compressibility. Its specification is function. What you can do with it.
I would like to note that being integral to a larger functional system* is a prerequisite for functional information. This is what separates random pixels from pixels of a photograph, random letters from coherently arranged letters of the Apollo 13 manual and ice crystals from DNA. Axe on ‘functional coherence’:
What enables inventions to perform so seamlessly is a property we’ll call functional coherence. It is nothing more than complete alignment of low-level functions in support of the top-level function. Figure 9.3 illustrates this schematically for a hypothetical invention built from two main components, both of which can be broken down into two subcomponents, each of which can in turn be broken down into elementary constituents. Horizontal brackets group parts on a given level that form something bigger one level up, with the upward arrows indicating these compositional relationships. Notice that every part functions on its own level in a way that supports the top-level function. This complete unity of function is what we mean by functional coherence. [Douglas Axe, ‘Undeniable’, Ch. 9]
(*) IMHO the term ‘system’ should always be linked with functionality. I find the term ‘random system’ unhelpful.Origenes
March 19, 2017 at 03:07 AM PDT
EricMH: I am not sure I understand. How would that apply to a functional protein vs a random AA sequence?gpuccio
March 19, 2017 at 01:35 AM PDT
gpuccio @19:
I think that too much importance is often given to terms like “complexity” [...]
Agree, that's why I use "complex complexity" just to call it something, though it's still inaccurate. :)
Dionisio
March 18, 2017 at 04:21 PM PDT
If the enthalpy (number of microstates) of a system increases or the entropy (number of macrostates) decreases, then CSI increases. Enthalpy is the complexity part of CSI and entropy is the specification. If enthalpy is low and entropy is low, such as a crystal, then there is no CSI. If enthalpy is high and entropy is high, there is also no CSI. Free energy is enthalpy minus entropy. This means that free energy is CSI, and evidence of design.EricMH
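For readers who want the textbook quantities alongside EricMH's shorthand: in standard thermodynamic notation the Gibbs free energy keeps the temperature factor explicit, and the identification of these quantities with CSI above is the commenter's own proposal rather than a standard identity.

$$ G = H - TS, \qquad \Delta G = \Delta H - T\,\Delta S \quad \text{(at constant } T \text{ and } P\text{)} $$

So "free energy is enthalpy minus entropy" should be read with the factor of T multiplying the entropy term understood.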
March 18, 2017 at 02:52 PM PDT
Eric Anderson: Interesting thoughts, but still I think differently on some points.

There is no doubt that "a code of some kind is always required to convey information". I agree. But I think that, in discussions about Shannon entropy, that "code" is taken for granted, and is not part of the discussion.

Your example about pi is not appropriate, IMO. "Write the first 100 digits of pi" is not a way of conveying the first 100 digits of pi. IOWs, it is not a "compression" of the first 100 digits of pi. A compression of the first 100 digits of pi (or of the first 1000, or 10000) would be to transmit the algorithm to compute them. If the algorithm is shorter than the actual result, then it is a compression, in the sense of Kolmogorov complexity. But just giving the instruction "write the first 100 digits of pi" is not good, because the instruction does not itself contain the information needed to do what it requires. On the other hand, the instruction "write 01 50 times" is a true compression (provided that the simple instructions are understood as part of the basic transmission code).

Shannon's theory is exactly about "reduction of uncertainty", and according to Shannon's theory the values of entropy are different for different messages. That does not depend on external factors, but on intrinsic properties of the message sequence. However, that has something to do with order (ordered systems being more compressible, and easier to describe), but it has in itself no relationship with functional specification. As I have tried to argue, function is often independent of the intrinsic order of the system. The sequence of AAs for a functional protein may be almost indistinguishable from a random AA sequence, until we build the protein and see what it can do.

Now, I think that we should use "complexity", in a generic and basic sense, just as a measure of "the quantity of bits that are necessary to do something". So, in Shannon's theory, complexity can be used to designate Shannon's entropy, which is a measure related to compressibility and transmission of a message. I quote from Wikipedia: "Shannon entropy provides an absolute limit on the best possible average length of lossless encoding or compression of an information source." So, this kind of "complexity" has nothing to do with meaning or function.

On the contrary, the "complexity" related to functional information measures the bits that are necessary to implement a function. As I have said, this is a concept completely different from compressibility and transmission. As functional information is often scarcely compressible, the functional complexity of a system will often be similar to the total complexity of a random system. In this case, we are rather interested in finding how much of the potential complexity of the system is really necessary to implement its function. That's what we do by computing the ratio of the target space to the search space. But that has nothing to do with compression based on order.

I think that too much importance is often given to terms like "complexity" and "information". The only meaning of "information" that is interesting for ID is specified information, and in particular functionally specified information. All other meanings of "information" are irrelevant. And "complexity" just means, for me: "how many bits are necessary to do this?" "This" can be different things. In Shannon's theory, it is conveying a message. In ID, it is implementing a function.

"Complexity" is, in all cases, just a quantitative measure of the necessary bits.

gpuccio
March 18, 2017 at 02:08 PM PDT
Just as a final side note on the above, I should add that there is a general principle at work here regarding the relationship between codes, communication and transmission. Many people are confused about this, and some, even in the design community, have spoken too loosely about it. This confusion sometimes arises if we talk about sensical strings as though they existed in a vacuum.

The principle is this: When we use a coded message to convey a particular piece of information, it ultimately requires that more information be conveyed (or already have been conveyed) to the recipient, not less. The use of codes is related to speed, efficiency over time, and ease of communication, not the overall quantity of information that must be conveyed to make sense of a single communication.

The value of codes* (whether language grammar or otherwise) is that they allow us to set up a system beforehand that contains a great deal of underlying information. Then, in the moment of actual transmission, we can use our code to piggyback on that background information, vastly increasing our speed of communication. Also, as we repeatedly use our coded system, we gain tremendous efficiency over time.

I hope that makes sense, but if not let me know and I'd be happy to elaborate further.

-----

* It seems a code of some kind is always required to convey information, certainly at a distance without personal interaction. This deserves its own discussion another time. What I'm focusing on here is the implication for a single message or single string.

Eric Anderson
March 18, 2017 at 10:09 AM PDT
gpuccio @14: Thanks for the good thoughts. We still need to be careful, though, about the idea that entropy drives toward complexity (which, as you note, is sometimes poorly defined).
You are right about the “uniformity” of entropic disorder. But the point here is again that a word is ambiguous, and this time the ambiguous word is “complexity”. A random system, even if uniform for all practical purposes, still can be said to have a lot of “unspecified complexity”, because if you want to describe it exactly you have to give information about each random and contingent part of it. In that sense, gas particles are not so different from random strings. However, that complexity is completely uninteresting, because for all practical purposes random gas states and random string behave in the same way: that’s the “uniformity”.
What would make a uniform sample more complex than a non-uniform sample? Why would I have to describe each gas particle in a uniform sample, but not in a non-uniform sample? Take my container with water example. One could argue that if we wanted to fully and completely describe the uniform sample, we would -- as you say -- have to describe each particle, its composition and position within the sample. But this is also true of the non-uniform sample. *And* in the uniform sample, we have one less differentiating factor -- namely the heat content (or movement speed of the molecules). Thus, we have one less factor to describe. Similarly, if we do a higher-level description at the macro level, the same holds true: it is easier and shorter to describe the uniform sample than the non-uniform sample. In either case, the entropic drive toward uniformity leads to less complexity, not more.

-----

Again, I think in the case of strings this is harder to see, because information entropy doesn't drive toward uniformity in the sense of an actual physical reality. An "a" doesn't decay to a "b" and so on. Once something changes our "a" to a "b" it stays a "b" until the something changes it again. (BTW, we might do well to ask what the "something" is that is changing our string.) Information entropy drives toward uniformity in the sense of nonsense or randomness. So one could easily argue that a nonsense string can be described, for practical purposes, intents and real-world applications, as "a string of random characters of length x". That is a very simple way to describe it.

-----

Think of it this way: One issue is whether we believe we have to transmit or reproduce the precise string in question. In that case, one could argue that a purely random string is more complex because it takes more instructions to transmit or reproduce. This is true enough. This, unfortunately, is where most people stop in their analysis. However, and this is important:

1. The primary reason why we can describe a sensical string with a shorter description than a purely random nonsensical string is because the necessary background information has already been conveyed to the recipient. For example, if I have a string of digits that represents the first 100 digits of pi, I can just convey the message "write the first 100 digits of pi", which is much shorter than conveying the entire string (which I would have to do with a pure random string). But note well that my shorter conveyance is built upon, contingent upon, and only works because of the fact that the recipient already (i) understands English and the rules of grammar related to my conveyance, (ii) understands what pi is, (iii) understands how to compute the first 100 digits or where to look up the information elsewhere, and (iv) knows how to recreate the string. So while I think I have conveyed a tremendous amount of information with my short description, what I have really done is convey a small amount of information, which piggybacks on a whole background suite of information that was already previously conveyed to the recipient (by me or someone else). There is no free lunch. Substantively, one way or another the recipient has to receive these 100 digits of pi. It seems short and simple in the moment of transmission, but only because almost all of the necessary information was previously conveyed and already in place. Indeed, it is clear when we analyze what is really going on that, over time, much more information was required to be transmitted to the recipient in order to be able to deal with our short, cryptic transmission, than if we had just transmitted the actual string.

2. Shannon was concerned about transmission and channel capacity, not information. Regardless of the unfortunate term and confusion surrounding "Shannon information," Shannon himself said he wasn't dealing with information per se. I know we're on the same page here, but I mention this again because when we are talking about entropy and descriptions of strings and what might be required to convey a particular string, it is very important to keep in mind that we are not talking about information. Rather, we are talking about channel capacity and what is required for faithful transmission in a particular instance, given (a) a desire for precise and accurate replication of the string on the receiving end, and (b) the various protocols, grammar, and background knowledge of the recipient that are already in place.

Eric Anderson
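The pi example can be sketched in a few lines of Python. The recipient function below is hypothetical, and the third-party mpmath library merely stands in for the "background suite of information" the recipient must already possess.

from mpmath import mp, nstr   # assumed available; it plays the role of prior shared knowledge

instruction = "write the first 100 digits of pi"
print(len(instruction), "characters actually transmitted")   # about 32

def recipient(message):
    # Hypothetical recipient: it can honor the short request only because the
    # parsing rules and a way to compute pi are already installed on its side.
    if message == "write the first 100 digits of pi":
        mp.dps = 110                # working precision, with some headroom
        return nstr(+mp.pi, 100)    # 100 significant digits of pi, as a string
    raise ValueError("not understood without the shared background")

digits = recipient(instruction)
print(len(digits.replace(".", "")), "digits reproduced on the receiving end")  # 100

The short message only looks information-rich because the decoder, the definition of pi, and the algorithm were all conveyed earlier by other means.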
March 18, 2017 at 09:39 AM PDT
KF: Thank you for the very pertinent clarification! :)gpuccio
March 18, 2017 at 04:24 AM PDT
Folks, even so humble a source as Wiki has a useful insight, in its article on Informational Entropy, as I have cited several times:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
KFkairosfocus
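In standard textbook notation, the bridge described in the excerpt is the formal parallel between the Gibbs and Shannon expressions; this is background only.

$$ S = -k_B \sum_i p_i \ln p_i \quad \text{(Gibbs)}, \qquad H = -\sum_i p_i \log_2 p_i \quad \text{(Shannon, in bits)}, \qquad S = (k_B \ln 2)\, H, $$

with both sums taken over the same probability distribution on microstates. The conversion factor $k_B \ln 2 \approx 9.57 \times 10^{-24}$ J/K per bit is why thermodynamic entropies are "right off the scale" next to anything met in data compression, exactly as the quoted article notes.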
March 18, 2017 at 03:52 AM PDT
Eric Anderson: Thank you for your very good thoughts. I completely agree with you. You raise two important points, and I will comment on both:

1) You are right about the "uniformity" of entropic disorder. But the point here is again that a word is ambiguous, and this time the ambiguous word is "complexity". A random system, even if uniform for all practical purposes, still can be said to have a lot of "unspecified complexity", because if you want to describe it exactly you have to give information about each random and contingent part of it. In that sense, gas particles are not so different from random strings. However, that complexity is completely uninteresting, because for all practical purposes random gas states and random strings behave in the same way: that's the "uniformity". Ordered states are often more "compressible", and they can be described with shorter "information". So, Swamidass here is equivocating on the meaning of both "information" and "complexity". If you use those two concepts without any reference to some sort of specification, it is easy to say that random systems or random strings exhibit a lot of (basic, unspecified, useless) information and complexity. But who cares? All random strings are similar because they convey no useful meaning, just as all random states of a gas are similar because they behave in a similar way for all practical purposes. That's what characterizes all random states of any complex system: a) they behave in a similar way, and they convey no useful information linked to their specific configuration; b) they are common, extremely common, super-extremely common, while ordered or functional states are exceedingly rare. In a few words, that is really the main idea in the second law, whether applied to physics or, more generally, to all informational systems: "non-interesting" states are so much more common than any other that systems governed only by natural laws will ultimately tend to them.

2) Here you raise a very important point: order is different from function. Sometimes the two may coincide, but usually they don't. Functional information, the kind of information we deal with in language, software and biological strings, is not specially "ordered". Its specification is not order or compressibility. Its specification is function. What you can do with it. That is a much stronger concept than simple order. While you can have order from physical laws, you cannot have complex functional information from physical laws. The sequence of a protein is fully functional, but it is usually scarcely compressible. That's why most complex functional information is, apparently, "pseudo-random": you cannot easily derive the function simply from mathematical properties of the sequence itself, like compressibility. Indeed, the function is linked to some higher level knowledge: a) For language, that knowledge is understanding of the symbolic meaning of words. b) For software, that knowledge is understanding of how algorithms operate. c) For a protein sequence, that knowledge is understanding of biochemical laws, of how a protein folds, and of what it can do once it folds. None of that "knowledge" can be derived simply from properties of the sequence. All those forms of knowledge imply higher level understanding, the kind of understanding that only a conscious, intelligent and purposeful being can have.

That's all the "magic" of ID theory: it allows us, by means of the concept of complex functional information, to safely detect the presence of higher knowledge and purpose in the observed physical configuration of a system, a higher knowledge whose only possible origin is a conscious, intelligent, purposeful being.

gpuccio
March 18, 2017 at 01:34 AM PDT
gpuccio @4: Good thoughts and comments. I have no doubt that he is confused about "Shannon information" and that this is a large part of his mistake. This is an issue that has caused no small amount of confusion and has tripped up many a would-be traveler in these waters. May whoever first used the term "Shannon information" promptly apologize from the grave . . . I agree with you that this is a big part of the issue, as you well highlighted. Let me offer two clarifications, however, so that everyone is on the same page:

-----

Entropy (in the sense of the Second Law) does not necessarily mean that we end up with a system that is more complex. Indeed, entropy typically (and ultimately) drives toward uniformity. Entropy is often said to drive toward "disorder," but that is in the sense that it drives against functional organization, as well as against the preservation of gradients. Consider a container with a barrier in the middle, filled with hot water on one side and cold water on the other. When we remove the barrier, the gradient will quickly disappear and we will soon be left with a more uniform system, not a more complex one. (The same thing would happen with the barrier left in place, just much more slowly.) This issue is a little harder to see when we talk about strings of letters because they are not subject to the physical realities controlled by the Second Law. (The broad principles behind the Second Law apply to information, to functional constructions, etc., and we can properly talk about "entropy" in those contexts, but it sounds like Dr. Swamidass is talking about the Second Law in a more classical sense.)

-----

One other point, when considering information in the sense of strings of characters/symbols: Complex specified information typically lies between the extremes of uniformity and randomness. Sometimes we hear (and I have occasionally heard ID proponents incorrectly suggest) that more order = more information, or that less order = more information. Either may be true in a particular well-defined case, but neither is true as a general principle. Rather:

- On one end of the spectrum we have uniformity. This is the law/necessity side of the spectrum.
- On the other end we have randomness. This is the chance side of the spectrum.
- In the middle is where complex specified information resides. Neither the result of simple law-like necessity, nor the result of random chance. Rather, a careful balancing act of contingent organization and complexity.

Thus, just as design in the physical, three-dimensional world can be seen as a third real cause, juxtaposed against chance and necessity, so too design in the form of complex specified information can be seen juxtaposed against chance and necessity in a string.

Eric Anderson
March 17, 2017 at 11:18 PM PDT
Understanding has nothing to do with information. - J. SwamidassMung
March 17, 2017 at 07:34 PM PDT
Swamidass turns out to be a fool. An expert fool.Mung
March 17, 2017 at 06:27 PM PDT
Dr Swamidass, there are four people here now that are quite capable of conversing with you on this topic. If you do not believe you are equivocating on the issue, or do not understand why anyone would say that you are, then please jump in. More directly, you are misleading your readers. You are doing so as a scientist, and as a theist. We can quite easily set the theism aside, in that regard you are free to do as you wish. However, you are completely and quite carelessly wrong about the science. You should address it.Upright BiPed
March 17, 2017 at 06:15 PM PDT
Hello Eric, I followed the links and read Dr. Swamidass's paper. He does exactly as you say. It's quite a spectacle that he assumes the role of an authoritative scientific voice, and yet he completely misunderstands the material. Perhaps he'll stop by and defend his position.
Upright BiPed
March 17, 2017 at 05:56 PM PDT
GPuccio: It’s the old misunderstanding, all over again.
Why am I not surprised? Swamidass, who claims to have been an “intelligent design fanatic”, is nevertheless plagued by his misunderstandings about ID. It’s hard to keep up, but here are some of them: Swamidass holds that ID “invokes God.” Swamidass holds that ID and SETI use a completely different methodology, and that he could name “five material differences” — but refuses to tell us what they are. Swamidass holds that cancer evolves and casts serious doubts on intelligent design. Swamidass holds that Behe’s 2004 paper contains “two clear errors” — but refuses to tell us what they are.Origenes
March 17, 2017 at 05:52 PM PDT
Hi GP,
It’s the old misunderstanding, all over again.
Of course it is...or is it? I do not believe there is any way that Dr. Swamidass isn't aware of the profound equivocation he is promoting to the public. If he is not serving the public as a responsible voice for science, then what is his motive for so clearly misleading his readers?Upright BiPed
March 17, 2017 at 05:48 PM PDT
To Whom This May Concern: Please, read carefully gpuccio's comment @4 for real clarification of this issue.Dionisio
March 17, 2017 at 05:14 PM PDT
Thanks, UB. I'm just now seeing this thread. If Professor Swamidass actually said this:
Did you know that information = entropy? This means the 2nd law of thermodynamics guarantees that information content will increase with time, unless we do something to stop it.
then he does not understand the issues. I second your request @2 for him to clarify what he means. Hopefully he will stop by. My hunch, just based on the one paragraph quoted from him, is that he is conflating the general idea of increased "disorder" often discussed in the context of entropy with increased complexity. A common but mistaken notion. Add to that, he is likely mistakenly thinking that objects contain information by their mere existence. Also a very common misconception. If he has those two misconceptions firmly in mind, we can see how he might think that entropy => complexity => information. This is completely wrong, but that's my guess as to what he might have been thinking.

-----

More importantly, as noted elsewhere in these pages, even if we were to inappropriately grant his claim, whatever so-called "information" he is talking about has absolutely nothing to do with real, identifiable, transmittable, translatable, functional information -- the kind of information relevant in the real world and in biology in particular.

Eric Anderson
March 17, 2017 at 04:17 PM PDT
UB: It's the old misunderstanding, all over again. Swamidass is using "information" in the Shannon sense. That has nothing to do with the concept of specified information or functional information, which is the only concept relevant to ID theory.

It is obvious that a generic concept of "information", without any reference to a specification of any kind, can only mean the potential number of bits necessary to describe something correctly (or, as in Shannon, to convey that message). In that sense, a random sequence has more "information" than an ordered sequence. For example, if we have a random sequence of 100 bits, and we want to correctly describe it, we need all or almost all the bits to do that. But if we have a sequence made of 01 repeated 50 times, we can describe it just by saying: write "01" 50 times. That is shorter than giving 100 bits. In that sense, random systems are more "complex" than ordered systems. In that sense, the second law says that the entropy in a system can only stay the same or increase, and therefore random configurations are destined to increase, according to natural laws. In that sense, order is constantly eroded by natural laws.

But that has nothing to do with specified or functional information. Specified information is not random. That's why we call it "specified" or "functional": to clearly distinguish it from that kind of "information" which is neither specified nor functional, that kind of "information" which is not information in our human sense at all, just a form of entropy. While random configurations are destined to increase by the work of natural laws, functional configurations are destined to decrease by natural laws. For the same exact reason. Order and function are constantly eroded by the second law. Especially function. Because, while some simple form of order can arise by natural laws at the expense of other forms of entropy, complex function never arises by natural laws. Complex function is always the product of conscious design.

So, Swamidass and others can go on equivocating on the meaning of "information" (a word as ambiguous and abused as that other term, "love"). Those "reflections" have nothing to do with functional information, or with functional complexity. They have nothing to do with ID. They are meaningless word games.

gpuccio
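The "01 repeated 50 times" example can be checked directly with a general-purpose compressor. A minimal Python sketch, with random bytes standing in for an arbitrary unspecified configuration; the exact compressed sizes will vary a little from run to run.

import os
import zlib

ordered = b"01" * 50              # the 'write "01" 50 times' string: 100 bytes of pure repetition
random_config = os.urandom(100)   # 100 bytes with no structure to exploit

print(len(ordered), "->", len(zlib.compress(ordered)))              # collapses to a dozen or so bytes
print(len(random_config), "->", len(zlib.compress(random_config)))  # typically about 100 bytes or slightly more

The random configuration needs essentially all of its bits to be specified, the ordered one does not, and neither number says anything about whether the bytes do anything useful.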
March 17, 2017 at 02:49 PM PDT
Upright BiPed, Perhaps Dr S. meant anything out there except complex functional specified information? Maybe something like the firmament? Anyway, in any case, didn't somebody say that nonsense remains nonsense regardless of who says it?Dionisio
March 16, 2017 at 10:18 PM PDT
Dr Swamidass, I understand you stop by this site from time to time. I have a quick question for you. Can you please pick an example and tell me any of this "information content" that will "increase with time, unless we do something to stop it"? Thanks.Upright BiPed
March 16, 2017 at 07:07 PM PDT
Eric Anderson, Did you see this?!?
“Did you know that information = entropy? This means the 2nd law of thermodynamics guarantees that information content will increase with time, unless we do something to stop it.”
Utterly and totally clueless. An esteemed member of the scientific community completely misunderstands the category and subject matter he's talking about.Upright BiPed
March 16, 2017 at 05:23 PM PDT