Uncommon Descent Serving The Intelligent Design Community

But is this fair to Feynman?


From Simon Oxenham at BigThink:

How to Use the Feynman Technique to Identify Pseudoscience

Last week a new study made headlines worldwide by bluntly demonstrating the human capacity to be misled by “pseudo-profound bullshit” from the likes of Deepak Chopra, infamous for making profound sounding yet entirely meaningless statements by abusing scientific language.

The researchers correlate belief in pseudo-profundities with all kinds of things Clever People Aren't Supposed to Like, and one suspects the paper wouldn't survive replication. So why is this a job for Feynman?

Richard Feynman (1918-1988)

This is all well and good, but how are we supposed to know that we are being misled when we read a quote about quantum theory from someone like Chopra, if we don’t know the first thing about quantum mechanics?

Actually, one can often detect BS without knowing much about the topic at hand, because it often sounds deep but doesn’t reflect common sense. Anyway, from Feynman,

I finally figured out a way to test whether you have taught an idea or you have only taught a definition. Test it this way: You say, ‘Without using the new word which you have just learned, try to rephrase what you have just learned in your own language. Without using the word “energy,” tell me what you know now about the dog’s motion.’ You cannot. So you learned nothing about science. That may be all right. You may not want to learn something about science right away. You have to learn definitions. But for the very first lesson, is that not possibly destructive? More.

It won't work, because many people who read pop science literature do so for the same reason others listen to Deepak Chopra: they want to be reassured against their better judgement or the evidence, whether the claim is that there are billions of habitable planets out there, that chimpanzees are entering the Stone Age, that everything is a cosmic accident, or whatever the current schtick is.

And Feynman won’t help them, nor will a bucket of ice water. And it’s not fair to drag ol’ Feynman into it just because he said some true things like,

The first principle is that you must not fool yourself and you are the easiest person to fool.

Give the guy a break.

That said, Feynman (1918–1988) may have, through no fault of his (long-deceased) own, played a role in getting a science journalist dumped recently on suspicious grounds. See "Scientific American may be owned by Nature, but it is run by Twitter."

Follow UD News at Twitter!

Hat tip: Stephanie West Allen at Brains on Purpose

Comments
K-M: That is a pile-on tactic. I showed that the question being asked is irrelevant to the pivotal issue and so is a red herring led out to a strawman. Piling on by pouring ad hominems in and igniting to cloud, confuse, poison and polarise the atmosphere for discussion by making unwarranted accusations of cowardice simply tells us you are an enemy of the material truth or an enabler of such. The pivotal issue is config spaces and FSCO/I as definable clusters in such: how can we credibly get to same? Blind chance and necessity are not credible on needle in haystack, and FSCO/I routinely comes about by design -- trillions of cases in point. Indeed, it is a reliable sign of design. Now, look at cases where we do not have direct observation of causal process. It's not an orderly repetitious crystal like diamonds or salt. It's not a random tar or cluster of mineral particles; it is a wiring-diagram organised, complex, functionally specific system. Now, what is the most reasonable causal explanation, and why? (And BTW that is part of why the question is misdirected.) KF PS: The question Z is trying to distract with is in effect the same as that a perfectly ordered repetitive pattern has near zero information. A flat random distribution has Shannon info capacity at max, and meaningful, aperiodic, functional strings that are constrained to be specific by function have a higher Shannon info capacity metric than a periodic crystal-like string but a lower one than random noise. And I answer in this dual form to bring out the issue that the three types of sequence are fundamentally different. Discussion on strings is WLOG. Also, Shannon info capacity is distinct from functionally specific info strings.kairosfocus
January 4, 2016, 04:08 PM PDT
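The ordering asserted in the comment just above (periodic below functional below random, in Shannon info capacity per symbol) is easy to check for oneself. Here is a minimal Python sketch using made-up illustrative strings and a simple first-order frequency estimate; nothing in it comes from the thread itself:

```python
# Illustrative sketch: first-order Shannon entropy (bits per symbol) for a
# periodic string, an English-like "functional" string, and a random string.
# The example strings are arbitrary; only the qualitative ordering matters.
import math
import random
import string
from collections import Counter

def bits_per_symbol(s: str) -> float:
    """Estimate Shannon entropy from single-symbol frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

periodic = "ABAB" * 50                            # ordered, repetitive
functional = ("this sentence is constrained by english spelling and grammar "
              "so its symbol statistics are neither periodic nor uniform ") * 2
random.seed(0)
noise = "".join(random.choice(string.ascii_lowercase + " ") for _ in range(200))

for name, s in [("periodic", periodic), ("functional", functional), ("noise", noise)]:
    print(f"{name:10s} {bits_per_symbol(s):.2f} bits/symbol")
```

On strings like these the periodic case comes out lowest and the random case highest, with the English-like string in between, which is the qualitative ordering claimed.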
KF: "What is being studiously avoided and diverted from..." IS Z's QUESTION. Are you too cowardly (or incapable) to answer it? I can answer it for you, if you want. Or are you terrified of the idea of followers finding out that you don't know all the answers?Ken_M
January 4, 2016, 03:56 PM PDT
kairosfocus: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. So do that with an unused microprocessor sitting on a shelf and a diamond of equivalent mass. Which has lower entropy?Zachriel
January 4, 2016, 03:27 PM PDT
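For readers who want the Jaynes interpretation quoted just above in symbols: for a system with W equally probable microstates, the textbook bridge between the two entropies (a standard identity, not something argued in the exchange) is

```latex
S = k_B \ln W, \qquad
H = \log_2 W \ \text{bits (yes/no questions needed to pin down the microstate)},
\qquad\text{so}\quad
S = (k_B \ln 2)\, H \approx 9.57 \times 10^{-24}\ \mathrm{J\,K^{-1}} \times H .
```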
F/N: What is being studiously avoided and diverted from, from 200 above, as was addressed to Z: >>5: What part of the following clip from the Wiki article [on info thermo-d and its bridge to classic thermodynamics considerations] do you not understand? in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) 6: What part of this clip from Orgel, 1973, is it that you do not understand? . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . . [HT, Mung, fr. p. 190 & 196:] These vague idea can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [–> this is of course equivalent to the string of yes/no questions required to specify the relevant “wiring diagram” for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002.] One can see intuitively that many instructions are needed to specify a complex structure. [–> so if the q’s to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [–> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196. Of course, that immediately highlights OOL, where the required self-replicating entity is part of what has to be explained (cf. Paley here), a notorious conundrum for advocates of evolutionary materialism; one, that has led to mutual ruin documented by Shapiro and Orgel between metabolism first and genes first schools of thought, cf here. 
Behe would go on to point out that irreducibly complex structures are not credibly formed by incremental evolutionary processes and Menuge et al would bring up serious issues for the suggested exaptation alternative, cf. his challenges C1 – 5 in the just linked. Finally, Dembski highlights that CSI comes in deeply isolated islands T in much larger configuration spaces W, for biological systems functional islands. That puts up serious questions for origin of dozens of body plans reasonably requiring some 10 – 100+ mn bases of fresh genetic information to account for cell types, tissues, organs and multiple coherently integrated systems. Wicken’s remarks a few years later as already were cited now take on fuller force in light of the further points from Orgel at pp. 190 and 196 . . . ] 7: Likewise, what part of this from Wicken, do you not understand? ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.] 8: What part of, these are not design advocates but are frankly facing the issues at stake from OOL up through OO body Plans to OO the human one, do you not understand? 9: What part of, 10^80 atoms of the observed cosmos, acting as observers every 10^-14 s, for 10^17 s would be unable to blindly sample a fraction of the space for 1,000 coins or bits, larger than one straw to a haystack that dwarfs the said observed cosmos, do you not understand? 10: What part of, the search of a space would be a subset so that the set of possible searches is the power set of cardinality 2^C, C being that of the original set, rendering the search for a golden search much harder than the direct search, do you not understand? 11: What part of, on trillions of observed cases and the linked needle in haystack, blind search challenge, the only reliable explanation of FSCO/I is design, do you not understand? 12: Which part of, FSCO/I is thus an inductively reliable sign of design do you not understand? 13: Which part of the search of config spaces and blind transitions among such is at the heart of stat mech thinking, do you not understand?>>kairosfocus
January 4, 2016, 02:53 PM PDT
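Point 9 in the list above is a pure arithmetic claim, and its numbers can be checked directly. A back-of-the-envelope Python sketch, taking the figures exactly as kairosfocus states them (10^80 atoms, one observation per 10^-14 s, 10^17 s, a 1,000-bit space):

```python
# Back-of-the-envelope check of the "needle in a haystack" arithmetic in point 9,
# using the figures exactly as stated in the comment above.
from math import log10

atoms = 10**80          # stated atoms in the observed cosmos
rate = 10**14           # stated observations per atom per second (one per 10^-14 s)
duration = 10**17       # stated seconds available

observations = atoms * rate * duration   # total samples taken
space = 2**1000                          # configurations of 1,000 coins/bits

print(f"samples   ~ 10^{log10(observations):.0f}")   # ~10^111
print(f"space     ~ 10^{log10(space):.0f}")          # ~10^301
print(f"fraction  ~ 10^{log10(observations) - log10(space):.0f}")  # ~10^-190
```

On those inputs the sampled fraction works out to roughly 10^-190 of the space, which is the "one straw to a haystack" disparity being claimed; the sketch checks the arithmetic only, not the modelling assumptions behind it.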
kairosfocus: what part of being a commonly observed phenomenon, which per a like consequence of the underlying statistics that pose a needle in haystack challenge becomes maximally unlikely to be observed on blind chance and/or mechanical necessity We both agree it takes energy to rearrange matter into brains or diamonds. Mung: Is my brain at or near equilibrium? Your brain in particular? {Just kidding!} The typical brain is not at equilibrium, which brings us a bit closer to the point. There's no unambiguous measure of entropy for non-equilibrium systems, though it can be defined for the whole macroscopic system. Let's try a different example. Which has lower entropy, an unused microprocessor sitting on a shelf or a diamond of like mass?Zachriel
January 4, 2016, 02:21 PM PDT
Mung: Is equilibrium thermodynamics the same as non-equilibrium thermodynamics? Zachriel: No. The difference is time. Is my brain at or near equilibrium?Mung
January 4, 2016, 12:08 PM PDT
Z, a third time -- what part of being a commonly observed phenomenon, which per a like consequence of the underlying statistics that pose a needle in haystack challenge becomes maximally unlikely to be observed on blind chance and/or mechanical necessity (as are macro-fluctuations that would violate the second law . . . ), is it that is so hard for objectors of your ilk to acknowledge? Or is it, that FSCO/I’s origin is readily observed: not blind chance and mechanical necessity on the gamut of the observed cosmos — maximally implausible — but intelligently and purposefully directed configuration . . . so that we have an epistemic right to infer on best explanation that it is an empirically reliable sign of design as cause. KF PS: I am discussing the statistics of configuration spaces, which is foundational to how we get to the eqns.kairosfocus
January 4, 2016, 10:00 AM PDT
Mung: Is equilibrium thermodynamics the same as non-equilibrium thermodynamics? No. The difference is time. Notably, kairosfocus's comments concern equilibrium thermodynamics.Zachriel
January 4, 2016, 07:58 AM PDT
...if you are trying to change its PoV as nothing can do that. Are you saying its brain has hardened into diamond?Mung
January 4, 2016, 07:38 AM PDT
Is equilibrium thermodynamics the same as non-equilibrium thermodynamics?Mung
January 4, 2016, 07:34 AM PDT
kairosfocus- If you need the practice, then continue. I am just saying that attempting a discussion with Zachriel is useless if you are trying to change its PoV, as nothing can do that.Virgil Cain
January 4, 2016, 07:31 AM PDT
kairosfocus: Here I am trying to draw his attention to the material issue that he would duck by pretending that a diamond crystal is comparable to the functionally specific organisation of a brain. Never said or thought such a thing. kairosfocus: the issue is to account for the FSCO/I rich heat engines and/or energy conversion devices that carry out key metabolic processes in the cell. FSCO/I is not found in thermodynamic equations. From the viewpoint of thermodynamics, energy is required to rearrange atoms into brains or diamonds. The question then is which has lower entropy, a brain or a like mass of diamonds. So, do you give up?Zachriel
January 4, 2016, 06:47 AM PDT
Z, what part of being a commonly observed phenomenon, which per a like consequence of the underlying statistics that pose a needle in haystack challenge becomes maximally unlikely to be observed on blind chance and/or mechanical necessity (as are macro-fluctuations that would violate the second law . . . ), is it that is so hard for objectors of your ilk to acknowledge? Or is it, that FSCO/I’s origin is readily observed: not blind chance and mechanical necessity on the gamut of the observed cosmos — maximally implausible — but intelligently and purposefully directed configuration . . . so that we have an epistemic right to infer on best explanation that it is an empirically reliable sign of design as cause. KF PS: VC, you have a point. Here I am trying to draw his attention to the material issue that he would duck by pretending that a diamond crystal is comparable to the functionally specific organisation of a brain. As in handy side-track. Pretend that I have no answer to the entropy metric for a brain vs a crystal of similar mass, but Z et al do -- crystals are ordered and relatively low entropy; tissues are functionally organised and vary a fair amount but meet fairly hard limits as to what is workable. Does this change the stat underpinnings and needle in haystack implications coming from stat mech that ground FSCO/I as a reliable sign of design -- no. So, it is irrelevant but trotting out a distraction is all they got. You got a hammer everything must be a nail. That is how desperate the objectors now seem to be.kairosfocus
January 4, 2016, 06:37 AM PDT
We recommend that everyone ignore Zachriel. Trying to reason with it is useless and causes distress.Virgil Cain
January 4, 2016, 06:35 AM PDT
kairosfocus, Sorry you are unable to answer the question.Zachriel
January 4, 2016, 06:32 AM PDT
Z, what part of being a commonly observed phenomenon, which per a like consequence of the underlying statistics that pose a needle in haystack challenge becomes maximally unlikely to be observed on blind chance and/or mechanical necessity (as are macro-fluctuations that would violate the second law . . . ), is it that is so hard for objectors of your ilk to acknowledge? Or is it, that FSCO/I's origin is readily observed: not blind chance and mechanical necessity on the gamut of the observed cosmos -- maximally implausible -- but intelligently and purposefully directed configuration . . . so that we have an epistemic right to infer on best explanation that it is an empirically reliable sign of design as cause. KFkairosfocus
January 3, 2016, 09:47 AM PDT
kairosfocus: The upshot of that relevant issue is that on stat considerations, FSCO/I is not plausibly the product of blind chance and necessity FSCO/I is not in the laws of thermodynamics. Which has lower thermodynamic entropy, a brain or a like mass of diamond?Zachriel
January 3, 2016, 06:41 AM PDT
Z, when I again and again took time to point out the way stat thermo-d relates . . . cf 200 above most recently, you repeatedly ignored and reverted to distractions, irrelevancies (including those warned against by Orgel and Wicken, of 40 years standing: the mechanically necessary ordering of crystals has nothing to do with wiring diagram, functionally specific info-rich organisation of significant complexity . . . ) -- and dismissal. That tells us all we need to know. The upshot of that relevant issue is that on stat considerations, FSCO/I is not plausibly the product of blind chance and necessity, for reasons connected to needle in haystack search. As in, relative statistical weight of clusters of microstates in a config or phase space, relative to accessible resources to take up configs. Inductive evidence on a trillion member base shows FSCO/I a reliable sign of design as cause. From OOL to OO body plans to OO our own, we have huge FSCO/I to address, strongly pointing to design. That has been pointed out repeatedly for years and has met with rhetorical stunts such as you are attempting, revealing again what is really likely to be going on: dismissive conclusion is in hand, just find a talking point to trot it out and expect widespread indoctrination in evolutionary materialism to carry the rhetorical day. At this stage, I am in effect writing for record. KFkairosfocus
January 2, 2016, 04:34 PM PDT
kairosfocus: you have put up more dismissive and distractive rhetoric relative to the substantial matter. Not at all. You said it was a matter of statistical thermodynamics. Which has lower thermodynamic entropy, a brain or a like mass of diamond?Zachriel
January 2, 2016, 12:06 PM PDT
Z, you have put up more dismissive and distractive rhetoric relative to the substantial matter. Notice what Orgel and Wicken cautioned. KFkairosfocus
January 2, 2016, 11:55 AM PDT
kairosfocus: 1 ... 18 Eighteen bullet points and nearly 1500 words, yet did you answer? Which has lower thermodynamic entropy, a brain or a like mass of diamond? We even gave you a hint @194.Zachriel
January 2, 2016, 07:12 AM PDT
Mung, For excellent reason. KFkairosfocus
January 1, 2016, 04:00 PM PDT
I understand that I do not have the kind of blind faith required to believe the materialist origins story.Mung
January 1, 2016, 03:07 PM PDT
Z, 1: What part of a string of N-elements * -* -*- . . . * in S states defines a config space of scope S^N do you not understand, e.g. 2^500 = 3.27*10^150, 2^1000 = 1.07*10^301? (And if you don't, after so many years of ID objecting, what does that say of your collective and the circle of objector sites?) 2: Likewise, what part of anything reducible to a nodes-arcs wiring diagram that defines specific functional organisation (e.g. an exploded view assembly diagram for an ABU 6500 reel) can readily be reduced to a description language chaining Y/N elements such as AutoCAD etc do, do you not understand? 3: What part of, this means description on specific strings gives an index of complexity for FSCO/I and is WLOG, do you not understand? 4: What part of this gives a representation of a config space do you not understand, and if you don't what part of get thee a copy of LK Nash and his discussion of H/T coins as first example for stat mech do you not understand? (And in case you imagine this is unrepresentative consider a paramagnetic substance of 500 or 1,000 atoms in an array per Mandl, as an example. Start 1,000 such in a random state and allow to evolve at random 10^14 times/s for 10^17 s, what are the odds that we would plausibly come up with a code for say a functional protein of 232 AAs or lines in a program that must function or ascii code for a sentence in English of 143 characters etc? ) 5: What part of the following clip from the Wiki article do you not understand?
in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>])
6: What part of this clip from Orgel, 1973, is it that you do not understand?
. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . . [HT, Mung, fr. p. 190 & 196:] These vague idea can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [--> this is of course equivalent to the string of yes/no questions required to specify the relevant "wiring diagram" for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002.] One can see intuitively that many instructions are needed to specify a complex structure. [--> so if the q's to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [--> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196. Of course, that immediately highlights OOL, where the required self-replicating entity is part of what has to be explained (cf. Paley here), a notorious conundrum for advocates of evolutionary materialism; one, that has led to mutual ruin documented by Shapiro and Orgel between metabolism first and genes first schools of thought, cf here. Behe would go on to point out that irreducibly complex structures are not credibly formed by incremental evolutionary processes and Menuge et al would bring up serious issues for the suggested exaptation alternative, cf. his challenges C1 - 5 in the just linked. Finally, Dembski highlights that CSI comes in deeply isolated islands T in much larger configuration spaces W, for biological systems functional islands. That puts up serious questions for origin of dozens of body plans reasonably requiring some 10 - 100+ mn bases of fresh genetic information to account for cell types, tissues, organs and multiple coherently integrated systems. Wicken's remarks a few years later as already were cited now take on fuller force in light of the further points from Orgel at pp. 190 and 196 . . . ]
7: Likewise, what part of this from Wicken, do you not understand?
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.]
8: What part of, these are not design advocates but are frankly facing the issues at stake from OOL up through OO body Plans to OO the human one, do you not understand? 9: What part of, 10^80 atoms of the observed cosmos, acting as observers every 10^-14 s, for 10^17 s would be unable to blindly sample a fraction of the space for 1,000 coins or bits, larger than one straw to a haystack that dwarfs the said observed cosmos, do you not understand? 10: What part of, the search of a space would be a subset so that the set of possible searches is the power set of cardinality 2^C, C being that of the original set, rendering the search for a golden search much harder than the direct search, do you not understand? 11: What part of, on trillions of observed cases and the linked needle in haystack, blind search challenge, the only reliable explanation of FSCO/I is design, do you not understand? 12: Which part of, FSCO/I is thus an inductively reliable sign of design do you not understand? 13: Which part of the search of config spaces and blind transitions among such is at the heart of stat mech thinking, do you not understand? 14: Which part of, it is now a commonplace that there are thousands of protein fold clusters in AA sequence space, where a great many are of but few members and are deeply isolated, raising search space challenges, do you not understand? 15: Which part of, fold and fit of coded, assembled AA sequence strings, are key parts of typical protein function do you not understand? (where 20^300 for a typical protein gives a config space of 2.04*10^390 ) 16: Which part of, on the above no effective blind search of the space required to form proteins to make a first living cell by sudden or gradual means on the gamut of the observed cosmos is credible, do you not understand? 17: Which part of, there is no sound conflation of crystal formation and relevant protein assembly based on coded strings, do you not understand? (Or for that matter, formation in a Darwin's pond etc.) 18: Which part of, the search problem is compounded hopelessly when we see that new body plans require 10 - 100+mn bases and embryologically and ecologically feasible system composition, do you not understand? Including, our own. KFkairosfocus
January 1, 2016, 02:47 PM PDT
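The specific configuration-space figures quoted in points 1 and 15 above (2^500, 2^1000, 20^300), and the bits-per-element values cited elsewhere in the thread (log2 of 20 and of 4), are straightforward to verify. A short Python check, offered as an editorial aside:

```python
# Editorial check of the configuration-space figures quoted in the comment above.
from math import log2, log10

print(f"2^500   ~ {2**500:.3e}")               # comment says 3.27*10^150
print(f"2^1000  ~ {2**1000:.3e}")              # comment says 1.07*10^301
print(f"20^300  ~ 10^{300 * log10(20):.1f}")   # comment says ~2.04*10^390
print(f"bits per 20-state element: {log2(20):.2f}")   # ~4.32 bits
print(f"bits per 4-state element:  {log2(4):.2f}")    # 2 bits
```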
Zachriel: So you can't. That's fine. We recommend readers ignore your comments about statistical thermodynamics then. Mung: That bit of [il]logic hardly follows. Zachriel: Kairosfocus claims that statistical thermodynamics supports his claims concerning biological structures. Even if this is true, and even if statistical thermodynamics does not support his claims concerning biological structures, it does not follow that readers ought to ignore his comments about statistical thermodynamics.Mung
January 1, 2016, 01:03 PM PDT
Regular strings have low complexity and high specificity. Random strings have high complexity and zero specificity. Neither type of string is good at describing functional structures. Snowflakes, dunes, convection patterns and the like are regular non-functional structures. Function is a telic phenomenon that cannot be adequately described merely in terms of thermodynamics. On the axis from order to chaos, complex function (e.g. biological function) is closer to chaos. But that is only a projection. http://www.tbiomed.com/content/4/1/47 EugeneS
January 1, 2016, 12:35 PM PDT
kairosfocus: protein fold domains are on the whole deeply isolated in AA sequence space. So if we were to take, say, eighty random amino acids, we would never expect them to form a fold?Zachriel
January 1, 2016, 12:17 PM PDT
kairosfocus: And once we are past the 500 – 1,000 bit threshold, the only observed and needle in haystack plausible cause is, design. How many bits in a snowflake? You said it was a matter of statistical thermodynamics, so you should be able to answer. Which has lower thermodynamic entropy, a brain or a like mass of diamond?Zachriel
January 1, 2016, 12:08 PM PDT
Z, Let's go back to what Wiki was forced to admit on record, which you and others of your ilk keep ignoring or twisting:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
As has been highlighted, entropy is a state not a path function so its components can be partitioned. In the case of FSCO/I, that is a relevant component, one that is easily identifiable by it works or not, based on specificity of configuration. So, the cluster of possible states to remain in such a state or to get to it, is very small relative to the set of possibilities. And this extends to not only forming and organising original live molecules and cells, but to getting the DNA and proteins etc for new body plan components. A string of n components with 20 states per element has 20^n possibilities, and one with 4 states, 4^n. Those soon enough exceeded 500 - 1,000 bits at 4.32 and 2 bits per element respectively. A small part of the problem, but already we are beyond the threshold for just one typical 300 or so AA protein. We need hundreds and hundreds to get to life forms and to get onwards to body plans. Where, protein fold domains are on the whole deeply isolated in AA sequence space. FSCO/I is readily observed [works/fails], and comes in deeply isolated islands of function in the config spaces of all possible arrangements because of the need to assemble and couple the right bits and pieces in the right order and orientation, and the same for complementary bits and pieces, in a key-lock wiring diagram organised system -- not a simple ordered repetitive crystal, as Orgel and Wicken long ago pointed out -- and are being studiously ignored on. Hence the issue of getting to that from Darwin's pond etc, with only physics and chem available to blindly do the work. And once we are past the 500 - 1,000 bit threshold, the only observed and needle in haystack plausible cause is, design. That is what you are ducking. It would be funny, if in the end it were not so sad. KFkairosfocus
January 1, 2016, 12:04 PM PDT
Mung: That bit of [il]logic hardly follows. Kairosfocus claims that statistical thermodynamics supports his claims concerning biological structures. The standard molar entropy of various substances can be measured experimentally. Certainly, he should be able to give at least a qualitative answer, but he can't. Keeping in mind that the brain is about 3/4 water, the standard molar entropies are: diamond, 2.4 J/(mol·K); liquid water, 70 J/(mol·K). More complex molecules usually have a higher entropy. Does that help narrow down the answer? http://tinyurl.com/StandardMolarEntropies Zachriel
January 1, 2016, 11:47 AM PDT
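Zachriel's hint can be turned into a rough number. Here is a back-of-the-envelope Python sketch using the standard molar entropies cited just above; the 1.4 kg sample mass and the treatment of the brain as if it were pure water are illustrative assumptions, not anything claimed in the thread:

```python
# Rough comparison using the standard molar entropies cited above:
# diamond ~2.4 J/(mol*K), liquid water ~70 J/(mol*K). Treats a ~1.4 kg brain
# as if it were pure water -- a deliberate oversimplification for illustration.
MASS_G = 1400.0        # assumed mass of each sample, in grams

S_DIAMOND = 2.4        # J/(mol*K), standard molar entropy of diamond
S_WATER = 70.0         # J/(mol*K), standard molar entropy of liquid water
M_CARBON = 12.01       # g/mol
M_WATER = 18.02        # g/mol

entropy_diamond = (MASS_G / M_CARBON) * S_DIAMOND   # ~280 J/K
entropy_water = (MASS_G / M_WATER) * S_WATER        # ~5400 J/K

print(f"diamond, {MASS_G/1000:.1f} kg: ~{entropy_diamond:.0f} J/K")
print(f"water,   {MASS_G/1000:.1f} kg: ~{entropy_water:.0f} J/K")
```

On that crude estimate the water-dominated mass carries roughly twenty times the thermodynamic entropy of an equal mass of diamond, which is the direction Zachriel's hint points.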
