
Introducing “Sewell’s Law”


In an April 2, 2007 post, I noted the similarity between my second law argument ("the underlying principle behind the second law is that natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view") and Bill Dembski's argument (in "The Design Inference") that only intelligence can account for things that are "specified" (= macroscopically describable) and "complex" (= extremely improbable). I argued that the advantage of my formulation is that it is based on a widely recognized law of science: physics textbooks practically make the design argument for you. All you have to do is point out that the laws of probability do (contrary to common belief!) still apply in open systems; you just have to take the boundary conditions into account in the case of an open system (see A Second Look at the Second Law).

However, after making this argument for several years, with very limited success, I have come to realize that the biggest disadvantage of my formulation is that it is based on a widely recognized law of science, one that is very widely misunderstood. Every time I write about the second law, the comments go off on one of several tangents that sometimes have something vaguely to do with the second law, but that have in common only that they divert attention away from the question of probability.

So I have decided to switch tactics and introduce Sewell's Law: "Natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view." I still insist that this is indeed the underlying principle behind all applications of the second law, in fact the only thing that all applications have in common. But since even the mention of the "second law" draws such "kneejerk reactions" (as Phillip Johnson put it), let's forget about the second law of thermodynamics and focus on the underlying principle, Sewell's Law. My main point is still the same as before: natural forces cannot rearrange atoms into computers and spaceships and the Internet here, whether the Earth is an open system or not. But now you cannot avoid the question of probability by saying the second law doesn't really apply to computers and spaceships (although most physics textbooks do apply it to the breaking of glasses, the burning of libraries, etc.); whether the second law applies depends on which formulation you buy, but such a rearrangement certainly seems to violate Sewell's Law. Unless, of course, you believe that it is not really extremely improbable that the four forces of physics would rearrange the basic particles of physics into computers and TV sets and libraries full of novels and science texts; in that case I can't reach you.

Comments
Phevans, you stated: "FWIW I think we're on a similar page, inasmuch as we both agree that NS+RM can 'generate increasingly meaningful (subjective) information', and that this isn't in any way in violation of the 2LoT (though correct me if I'm wrong)." Now I don't know, and you may know something that I'm not privy to. Yet I point out that in all the countless millions of observations of mutations in DNA in the laboratory, NOT ONE has ever been shown to unambiguously increase information. The belief that information can increase in such a manner has no solid empirical evidence on which to rest. Like I said, I may be wrong. If I am, then I, and many other people, would please like to know the empirical evidence on which you base this belief.bornagain77
May 24, 2007 at 05:26 AM PDT
jack krebs: "Can you offer a definition of specification that would be useable: something mathematically feasible that could be applied to all sorts of different things, including those that we believe are not designed, and that would be replicable in that different people would get the same results when using the definition irrespective of their intuitive preconceptions? Such a definition is needed in order to test hypotheses about design." If I could, it would sure help out some of these forensic sciences. Just imagine how useful it would be in criminal investigations if intent could be determined by a mathematical formula, or for archeologists if a calculator could determine whether an object was designed or not. ID and evolution are both soft sciences. You wouldn't be the first to impose a double standard when it comes to ID, but it would still be a disappointment if you did.DaveScot
May 24, 2007 at 04:09 AM PDT
Dave, I understand that over large scales (hundreds of billions of years) we can equate mass and energy, and conclude that mass will tend to convert to energy and disperse (i.e. become less ordered). I can also see what you're saying about alphabet soup, although I'm not 100% convinced that this is strictly an application of the 2LoT (not saying you're wrong, just that I'm unconvinced!). However, if we take Hamlet spelled out in alphabet soup as the example, then this is one physical representation of some information. Of course, shaking it up will destroy this representation, and even leaving it for (m|b)illions of years will do the same. However, the representation of information in DNA isn't "shaken up" or destroyed; it's replicated again and again, so the arguments made around particular physical representations of information are tangential at best. FWIW I think we're on a similar page, inasmuch as we both agree that NS+RM can "generate increasingly meaningful (subjective) information", and that this isn't in any way in violation of the 2LoT (though correct me if I'm wrong).Phevans
May 24, 2007 at 01:48 AM PDT
Jerry 1. I envy you. Have fun in Greece. 2. Pornography and obscenity can be defined easily. Pornography is making images for the purpose of titillation. The difficulty comes in writing law -- was the image made for titillation, or to inform the public about the dress of native girls in the South Seas? Obscenity means offensive to the community. Again, not hard to define. What is hard to define is the degree to which a community can react to being offended without violating an individual's right to offend.tribune7
May 23, 2007 at 05:01 PM PDT
I would like to point out that the vast majority of extinctions in the fossil record were natural; that is, they were not brought about by cataclysm. I believe the figure is that 95% of animals went extinct by natural causes. The most likely cause of most of these "natural" extinctions is "Genetic Entropy". I think the average time to genetic meltdown is estimated to be 4 million years, though there are examples of species lasting far longer than that. It would be interesting to find out which animals lasted longer. I think a clear prediction of ID could be made to the effect that the more a species had to adapt, the quicker it would suffer genetic meltdown due to the accumulation of deleterious mutations. Likewise, ID would predict that the less selection pressure on an animal, the longer it will last in the fossil record, since it will accumulate less genetic entropy. This line of reasoning should produce results that are far more accurate than Darwin's fabled tree-of-life diagrams.bornagain77
May 23, 2007 at 05:00 PM PDT
"Can you offer a definition of specification that would be useable:"
How about a pattern in which the components can be organized in many ways according to chance but in which only one causes an event?
"something mathematically feasible that could be applied to all sorts of different things,"
But we aren't trying to make it mathematically feasible, remember? (Although I think I just did.) We are merely trying to define the word so it can better be used as part of the broader construct of CSI.
"including those that we believe are not designed, and that would be replicable in that different people would get the same results when using the definition irrespective of their intuitive preconceptions?"
But this has long occurred. Was the edge on the stone caused by natural forces, or was it put there by design? What's relatively new is applying these criteria to biology.tribune7
May 23, 2007 at 04:49 PM PDT
"So the challenge is to what is common about a 500 coin toss with all heads, Mt. Rushmore, a deck of cards sorted into suits, an English paragraph and DNA." Pardon my ramblings, but it sounds like a question about human psychology. Perhaps here is no common thread there in reality, just subjective classifications. (Based on what, is still a good question.) The thing is, if it turns out to be an illusion of the mind, then what is truly reliable about human intution? The sword seems to chop materialism (against human reason) as well as support it (against a designer.) But if reason is unreliable, why then, it is unreliable for everything except pragmatic survival function. At any rate, this kind of talk is very interesting to me as of late. Keep up the good work. I think someone is onto something.mike1962
May 23, 2007 at 02:48 PM PDT
Granville, Would the Design Corollary to Sewell's Law hold: "Designed systems do macroscopically describable things which are extremely improbable from the microscopic point of view"? Or would this have to be written in terms of Dembski's filter of chance, law, and complex specified information?DLH
May 23, 2007 at 01:26 PM PDT
tribune7, Haven't had much time to follow this discussion since my wife and I are leaving for Greece tomorrow for an intellectual holiday with some friends, learning about what started Western Civilization. I think the comment was made on an older thread about not being able to define "it" but recognizing it when we see it. Then "it" meant pornography or obscenity. You can substitute specificity for either one and be in the same conundrum. So the challenge is to say what is common to a toss of 500 coins landing all heads, Mt. Rushmore, a deck of cards sorted into suits, an English paragraph, and DNA. "That is the question," as one writer once said.jerry
May 23, 2007 at 01:19 PM PDT
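For concreteness, the first of jerry's examples can be put in numbers. Here is a minimal sketch in Python (my own illustration; the code and figures are not from the comment itself):

```python
from math import log10

# Probability of one specific sequence of 500 fair coin tosses,
# e.g. all heads:
p = 0.5 ** 500
print(p)                              # ~3.05e-151
print(f"~10^{500 * log10(0.5):.0f}")  # ~10^-151

# Note: every specific 500-toss sequence is equally improbable.
# What sets "all heads" apart is that it matches a short, independently
# given description -- which is exactly the "specification" question
# this thread is circling.
```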
Shannon entropy ("information") should be considered the capacity of the channel to hold information. It cannot distinguish between randomness and specified information with the same frequency of letters - e.g., between the digits of Pi, and Pi run through a one-way hash or used to seed a random sequence generator. When we know the information, it can be recognized or identified. The difficulty is identifying information with no a priori knowledge, especially when the information is complex. However, it still has meaning to the originator, even if the observer may not recognize it. It is "subjective" only to the extent that the knowledge is not shared. Once it is, then it can be viewed as "objective".DLH
May 23, 2007 at 01:18 PM PDT
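DLH's point that Shannon entropy cannot tell specified digits from random ones is easy to check. A minimal sketch, assuming per-symbol entropy over digit frequencies (the digits of Pi are hard-coded; everything else is illustrative):

```python
import random
from collections import Counter
from math import log2

def shannon_entropy(s):
    """Per-symbol Shannon entropy of a string, in bits."""
    n = len(s)
    return -sum(c / n * log2(c / n) for c in Counter(s).values())

# First 100 decimal digits of Pi: highly specified, yet statistically flat.
pi_digits = ("1415926535897932384626433832795028841971693993751058209749"
             "445923078164062862089986280348253421170679")
rand_digits = "".join(random.choice("0123456789") for _ in range(100))

print(shannon_entropy(pi_digits))    # ~3.3 bits/digit
print(shannon_entropy(rand_digits))  # ~3.3 bits/digit -- indistinguishable
```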
DaveScot: "Random change (which the four forces can generate) plus natural selection (preference or preservation of one change over another) coupled with a feedback mechanism (heredity) can indeed generate increasingly meaningful (subjective) information. But in order to defeat the increasing improbability of larger jumps in meaning it must add it in tiny (more probable) steps. The argument thus becomes one of discontinuities that must be bridged in small steps to get from inanimate matter to complex living systems. In theory there may exist a series of arbitrarily small steps but on the other hand there may be discrete transitions required that that the laws of probability make virtually impossible." This is how I have seen it also. Unfortunately, despite all the arguments, there seems to be no absolute principle such as a second law of thermodynamics as applied to information or complex specified information, that forbids the accumulation of modest amounts of complex specified information in small steps from random variations filtered by selection processes. Ultimately it comes down to the argument from improbability based on the recognition of "irreducibly complex" biological structures and systems, and on the known extremely large total amount of complex specified information in living organisms. Irreducible complexity is understood not to be absolute impossibility of having been generated by Darwinistic processes, but the extreme improbability of bridging the large gaps necessary for each step to be either adaptively advantageous or neutral. This is an extreme improbability based on the the relatively limited time and number of generations available to achieve these biological systems based on the fossil record. Unfortunately this works down to a debate over quantities, rates and probabilities rather than absolute principles and laws.magnan
May 23, 2007 at 12:03 PM PDT
Survival of the Likeliest?Chimera
May 23, 2007 at 12:00 PM PDT
Hi, a number of times I've tried to post a link to an article which argues that thermodynamics may be the driver of evolution rather than a hindrance to it - but it does not seem to want to appear!Chimera
May 23, 2007 at 11:57 AM PDT
DaveScot: "The key observation is that 2LoT still applies even to subjective information. If we order the letters in the alphabet soup into a subjectively meaningful pattern and then leave to up to nature the subjective information will diffuse into meaningless (objective) information. Theoretically the order still exists (information cannot be lost) but it definitely becomes more diffuse. Stephen Hawking fought for years to prove that information is lost (destroyed) in a black hole but he eventually conceded that it is not so the axiom that information cannot be created or destroyed still stands." It seems to me the "subjective order" you refer to is equivalent to the "complex specified information" defined by Dembski. It is the basic information content irrespective of any complex specified or subjective content to the information, that cannot be desroyed. The subjective complex specified information definitely is destroyed as in your example where the letters in a can of alphabet soup spilled on a table are laboriously arranged by an intelligent agent to spell out Lincoln's Gettysburg Address. Gathering the letter bits up and mixing them together again in the can will not just diffuse the complex specified information - it will destroy it. You can't put Humpty Dumpty together again. What is not destroyed is the Shannon information content of some n number of letters in a 26 letter alphabet.magnan
May 23, 2007 at 11:34 AM PDT
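magnan's alphabet-soup point can be demonstrated directly: shuffling destroys the specified arrangement while leaving the letter-frequency (Shannon) measure untouched. A minimal sketch (the sample text is my own choice, echoing the Gettysburg Address example above):

```python
import random
from collections import Counter
from math import log2

def shannon_entropy(s):
    n = len(s)
    return -sum(c / n * log2(c / n) for c in Counter(s).values())

text = "FOUR SCORE AND SEVEN YEARS AGO OUR FATHERS BROUGHT FORTH"
letters = list(text)
random.shuffle(letters)        # gather the letters up and mix them again
scrambled = "".join(letters)

print(shannon_entropy(text))       # same value...
print(shannon_entropy(scrambled))  # ...to the last decimal place
print(scrambled)                   # but the Gettysburg Address is gone
```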
I like the tack that Professor Sewell is taking here. One thing NDE has always lacked is an underpinning natural law or laws to support it. While the below may be a somewhat sophomoric attempt on my part (I'm an electrical engineer, and don't have a lot of formal training in information theory), it may perhaps spark interesting discussion. Sewell's Law would certainly belong in here, perhaps in substitution for the SLoT reference.

---------------------------------

Laws of Information

For Biological Intelligent Design (BID) to come fully into its own as a scientific theory, it needs to demonstrate predictive power. In the other physical sciences, natural laws describe with accurate and repeatable precision the outcome of processes or operation of systems, when complete knowledge of present conditions is known. Central to BID is the concept of Complex Specified Information, or CSI. Is it possible to postulate a set of laws to describe information and its interaction with matter? I attempt to posit some here.

What is "information"?
- Information is a purposeful arrangement of physical matter that can:
  o Be an input to a process (much like energy), to control said process
  o Provide a record of past processes
  o Describe processes not yet realized
- Information is required to direct a process to an outcome that would otherwise be prohibited by one or more natural laws. It is a counteracting agent to natural law (creates "counter-flow").
- Because information is composed of physical matter, it is subject to decay over time.

What is a "natural law"?
- A natural law describes the behavior of matter and energy under prescribed conditions.
- A natural law allows accurate predictions of future outcomes, when present conditions are known with certainty.
- The properties of matter and energy determine the law; the law does not control matter and energy.

Questions (answers to which are subject to discussion):
1. Is it possible to write a set of Information laws, comparable to the laws of thermodynamics, gravity, motion, etc.? Answer: Yes.
2. Is it possible for information to exist in the absence of an interpretive entity or process? Answer: No. Information only has meaning in the context of an interpretive entity or process.
3. If not, must the interpretive entity or process be "intelligent", i.e. sentient? Answer: No, but it could be argued that the interpretive entity must itself be the product of intelligence.
4. Are there any observed examples of spontaneous, undirected (by intelligent agents) generation of new (complex and specified) information in nature? Answer: I don't know. However, even a beneficial (from a natural selection standpoint) biological mutation represents an overall loss of information in all observed cases.

Proposed "Laws of Information"
1. An increase in the specified complexity of any system requires an increase in information.
2. Information entropy increases over time in a closed system (i.e. in the absence of intelligent input or importation of information from outside the system).
3. Randomness and information content are inverses of one another; as order increases, the amount of information necessary to describe the system tends to decrease. Example: Transition of a collection of water molecules from a liquid to a solid state (i.e. freezing) results in an increase in order and a decrease in the information needed to fully describe its state.
4. The more complex and specified the system, the greater the likelihood that random changes to its CSI will be deleterious to said system.sabre
May 23, 2007 at 11:01 AM PDT
ph_evans: "What does the 2LoT (thanks for the acronym!) apply to other than energy?"

Everything! Matter and energy are equivalent according to E=MC^2. Everything tends toward homogeneity. Even baryonic matter is thought to eventually decay into photons, although this has yet to be observed. The half-life of a proton is thought to be 10^35 years. When a proton decays it is thought to become a positron and a pion, which then very quickly decay into a flash of photons, which then diffuse in all directions at the speed of light.

This can be equated to order. Ordered systems tend toward disorder. Order can also be thought of as sortedness: sorted collections tend to become unsorted. Diffusion is the process by which order becomes disorder.

It gets a little dicier when 2LoT is applied to information. In Shannon information the maximum amount of information is the least amount of order. Take a serial bit stream - a sequence of binary states 0 or 1 (or true/false, on/off, black/white, whatever). If there's any ordering, the Shannon information content decreases. It takes far less information to describe a bit stream composed entirely of ones or zeroes than it does a stream containing mixtures of both. A totally random stream takes the most information to describe.

But that's objective information. In our context we're interested in subjective information. Imagine a bowl of alphabet soup. Stir it all you want, let it settle, and you might get a few bits of subjective information like CAT or DOG, but you're never, or very close to never, going to see it settle into a page of text from War and Peace. But the difference is entirely subjective. Without the specification of language (an independently given specification), which is in the eyes of the beholder, there's no difference.

The key observation is that 2LoT still applies even to subjective information. If we order the letters in the alphabet soup into a subjectively meaningful pattern and then leave it up to nature, the subjective information will diffuse into meaningless (objective) information. Theoretically the order still exists (information cannot be lost) but it definitely becomes more diffuse. Stephen Hawking fought for years to prove that information is lost (destroyed) in a black hole, but he eventually conceded that it is not, so the axiom that information cannot be created or destroyed still stands. But that's tangential. We're interested in the diffusion of information. Scatter a DNA molecule to the four winds and the information might still be there, but the living thing that needed the DNA will be deader than a doornail.

So let's apply this to living systems in more detail. Take a protein such as hemoglobin. Rearrange the monomer sequence so it no longer transports oxygen. The objective information content doesn't change, but subjectively it's the difference between life and death. The universe doesn't prefer one order or the other, but the organism that needs oxygen transport certainly has a preference. The objective forces of nature don't tend to build subjectively meaningful patterns. The four forces of physics can accidentally generate subjective meaning, but as the subjective complexity increases, the probability of accidental arrangement decreases.

Random change (which the four forces can generate) plus natural selection (preference or preservation of one change over another) coupled with a feedback mechanism (heredity) can indeed generate increasingly meaningful (subjective) information. But in order to defeat the increasing improbability of larger jumps in meaning, it must add it in tiny (more probable) steps. The argument thus becomes one of discontinuities that must be bridged in small steps to get from inanimate matter to complex living systems. In theory there may exist a series of arbitrarily small steps, but on the other hand there may be discrete transitions required that the laws of probability make virtually impossible.

Take the transition from a prokaryote to a eukaryote. In nature we observe only two discrete states - nucleate or anucleate. A single large step by chance appears to be prohibitively unlikely. An intelligent agency that can impose any physically possible order can certainly accomplish the transition. What series of small steps, each with a reasonable possibility and each with natural selection value, can bridge this discontinuity?

The burden of proof that chance & necessity can accomplish things that otherwise appear to require intelligent agency to overcome improbabilities lies, I believe, with whoever is making the claim that chance & necessity can bring about the observed outcome. It's already well established that intelligent agency can impose any physically possible order regardless of the improbability by chance alone. Making quantum leaps in order that would otherwise be prohibitively unlikely is the hallmark of intelligent agency. DaveScot
May 23, 2007 at 10:14 AM PDT
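The "tiny probable steps" dynamic DaveScot describes is the one Dawkins' well-known "weasel" toy illustrates. A minimal sketch (the target phrase and parameters are the textbook ones, chosen purely for illustration; note the toy smuggles in a fixed distant target, which is precisely the point in dispute):

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def mutate(parent, rate=0.05):
    # Random change: each character has a small chance of being replaced.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    generations += 1
    # Heredity + selection: keep the best of the parent and its offspring.
    brood = [parent] + [mutate(parent) for _ in range(100)]
    parent = max(brood, key=fitness)

print(generations)  # typically well under 200 generations, versus odds of
                    # roughly 1 in 27^28 (~10^40) for a single blind draw
```

The sketch shows why small selected steps defeat the improbability of a single jump; what it cannot show is whether a bridging series of steps exists in the biological cases DaveScot raises.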
This is exactly the line of reasoning that will crush materialistic evolution. Like many of you, I've tried to argue the second law and was given the tired rebuttal of open and closed systems. Yet we are in fact dealing with a second, more nuanced level of entropy that is completely separate from the material realm, though it is not yet treated as separate: "Genetic Entropy", to be precise. This level of entropy, found at the "spiritual" level of information, is proving to be more robust than the entropy of the "material" realm. In fact, a pure prediction of Theism would be that there will NEVER be a violation of "Genetic Entropy" without input from an intelligent source. This is of course in direct contradiction with materialism. This fact, which is becoming increasingly clear to science as time passes, should be written in the proper mathematical formula to overturn the proposed fourth-law equations that have been presented as the Onsager reciprocal relations. You may find the equations here: http://en.wikipedia.org/wiki/Laws_of_thermodynamics The new formula could then be used to support a NEW and proper fourth law of thermodynamics. Of course, this mathematical formulation has most likely already been accomplished and has not been properly accepted as the fourth law because of the materialistic paradigm that is hindering scientific progress right now. When a proper law of information is accepted across the board, it truly will be the crowning moment for the ID movement in the progress of science.bornagain77
May 23, 2007 at 09:07 AM PDT
Hey guys, what about Dembski's use of minimum description length and algorithmic compressibility as a measure of "specification"? I think you may be giving up prematurely on an objective measure of specification. It is my hunch that not all avenues have been developed.Atom
May 23, 2007 at 08:43 AM PDT
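Atom's suggestion can at least be prototyped with off-the-shelf compression: a compressor's output length is a crude, computable stand-in for description length (true Kolmogorov complexity is uncomputable). A minimal sketch, with the sample strings chosen by me for illustration:

```python
import random
import string
import zlib

def description_length(s):
    # Compressed size as a rough proxy for minimum description length.
    return len(zlib.compress(s.encode(), 9))

repetitive = "HEADS " * 170                      # ~1000 chars, trivially ordered
sentence = ("NATURAL FORCES DO NOT DO MACROSCOPICALLY DESCRIBABLE THINGS "
            "WHICH ARE EXTREMELY IMPROBABLE FROM THE MICROSCOPIC POINT "
            "OF VIEW ") * 8                      # ~1000 chars, specified text
noise = "".join(random.choice(string.ascii_uppercase + " ")
                for _ in range(1000))            # ~1000 chars, random

for label, s in [("repetitive", repetitive),
                 ("sentence", sentence),
                 ("noise", noise)]:
    print(f"{label:10s} raw={len(s):4d} compressed={description_length(s):4d}")
# Ordered and specified strings compress heavily; random noise barely at all.
```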
Can you offer a definition of specification that would be useable: something mathematically feasible that could be applied to all sorts of different things, including those that we believe are not designed, and that would be replicable in that different people would get the same results when using the definition irrespective of their intuitive preconceptions? Such a definition is needed in order to test hypotheses about design.Jack Krebs
May 23, 2007 at 08:26 AM PDT
"But how can you objectively measure specification? I don't believe you can. Specification is tangible and our brains use it (consciously or unconsciously) constantly in evaluation and decision. Specification is a product of mind, not nature." A good and interesting observation. I hope Jerry's reading. One thing: it should be possible to measure the effects of specification by defining it -- i.e. recognizing its reality -- then comparing events that match that definition with events that don't. I think the hangup -- as evident in recent discussions -- is that there has been a feeling that the reality of specification has to be proved. Rather than trying to prove it mathematically, we should accept its existence as axiomatic. Thank you, Jerry, for your stubbornness. Another point: someone will say this is a circular argument -- that treating specification as axiomatic means treating the design of life as axiomatic. Not so. Objects of known design exist. They meet certain criteria. Applying these criteria to life indicates life to be designed. It then becomes the responsibility of dissenters to show life not to be designed, rather than whine about how it is somehow unfair to apply the criteria of design to life.tribune7
May 23, 2007 at 07:49 AM PDT
Here is an interesting article from PLOS Biology. It argues that the laws of thermodynamics may advance evolution rather than being at odds with it. According to this reasoning even hurricanes and galaxies would be forms of life... Survival of the Likeliest?Chimera
May 23, 2007 at 06:35 AM PDT
pk4_paul: Sure, here's the experiment: do a computer simulation which starts with the initial (before life appeared) positions and velocities of every fundamental particle in our solar system (I think we can ignore the effects of other stars) and models the effects of the four known forces of physics (gravity, the electromagnetic force, and the strong and weak nuclear forces) on these particles; run the simulation out to the current date, and see if humans and computers and spaceships and the Internet form. Of course, the effects of the basic forces on the basic particles are not strictly deterministic, according to quantum mechanics; we can only state the "probabilities". Thus we would have to assume this "supernatural" (in the most literal sense of the word) component to be truly random and unintelligent, and simulate it using some sort of random number generator. Unfortunately, such simulations tend to require a lot of computer time and memory; I don't think I can do the experiment on my laptop.Granville Sewell
May 23, 2007 at 04:50 AM PDT
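No one can actually run the experiment Sewell proposes, but its structure is easy to sketch at a toy scale. Everything below (the particle count, the stand-in force, the noise scale) is invented for illustration and bears no resemblance to a real solar-system simulation:

```python
import random

random.seed(2007)  # the "truly random, unintelligent" component, simulated

# Absurdly reduced stand-in for the initial positions and velocities
# of every fundamental particle in the solar system:
particles = [{"x": random.uniform(-1.0, 1.0), "v": 0.0} for _ in range(10)]

def net_force(p):
    # Placeholder for the four known forces; here, a toy pull toward the origin.
    return -0.1 * p["x"]

dt = 0.01
for step in range(100_000):  # stand-in for running "out to the current date"
    for p in particles:
        kick = random.gauss(0.0, 1e-3)  # the quantum-randomness component
        p["v"] += (net_force(p) + kick) * dt
        p["x"] += p["v"] * dt

# Final step of the protocol (rhetorical, in the original): inspect the end
# state and see whether computers, spaceships and the Internet have formed.
print([round(p["x"], 3) for p in particles])
```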
Granville, a fall-back complaint against ID, when all else fails, is to point out the lack of experimental studies IDists can cite in support of their claims. There are two difficulties with this. One is that many claims of standard theories would require knocking down something that never took place anyway; not an easy task. Abiogenesis is the outstanding example. Second, there is much empirical data supporting ID claims, but the opposition clamors that IDists do not perform the work on their own. IOW, IDists don't do research to further their theories. My question is: are you able to suggest an approach that could be used to experimentally advance your 2LoT case?pk4_paul
May 23, 2007 at 04:22 AM PDT
In my 2001 Mathematical Intelligencer defense of my 2000 Math. Intelligencer article, I began: "Mathematicians are trained to value simplicity. When we have a simple, clear proof of a theorem, and a long, complicated counter-argument, full of hotly debated and unverifiable points, we accept the simple proof, even before we find the errors in the complicated argument." This is the advantage of the second law (or "Sewell's Law", or specified complexity) argument: evolutionary biologists have a long, complicated argument, with virtually no experimental confirmation, which claims to prove that natural forces created all the order we see on Earth today, but there is an extremely simple, direct proof that they couldn't have. As a mathematician, I prefer the simple, clear proof, and thus frankly don't believe you need to know much biology to reject the long, complicated argument. I think it is no coincidence that Dembski and I are both mathematicians, and that many other mathematicians share our views (though most are reluctant to express them publicly). But I haven't had much luck convincing biologists that the argument is this simple! I once had the honor of having Michael Behe sit in on a talk where I presented my second law argument. At the end he said that someone like Dawkins will argue that things change fundamentally once you have an organism which can reproduce. I asked, "Do you agree with that?" He laughed and said no. I said, neither do I. Most people need to find the errors in the long, complicated argument before rejecting it, and that is exactly what Behe and others are doing. For me, that isn't necessary, when we have such a simple, clear counter-argument.Granville Sewell
May 23, 2007 at 03:24 AM PDT
Sigh: I avoided links but the old spam filter got me anyway. I simply note that 1] Dr Sewell, thanks for trying again; I would adjust slightly: "Natural forces do not [spontaneously] do macroscopically [simply] describable things which are extremely improbable from the microscopic point of view." 2] PE needs to recognise that there is such a thing as configurational entropy, related to the degree of freedom of distribution of mass, not just energy. (Of this, diffusion is an iconic example -- why is it that a drop of dye in a vat of water will spread out but never spontaneously reform itself?) 3] Thence we see that there are many things that are not logically impossible, and are not subject to force/potential field barriers, but that, relative to alternatives at the micro level, are so utterly improbable that they do not happen based on undirected chance plus natural forces. 4] As TBO showed in their TMLO of 1984, following Brillouin et al, this can be linked to information -- information stored in the composition of strings of monomers in bio-molecules is spatially confined and vastly improbable spontaneously, relative to non-functional states. Biofunctionality is of course observable and simply describable at macro-level. For more cf my always linked through my handle above. Cheerio GEM of TKIkairosfocus
May 23, 2007 at 03:02 AM PDT
Hi Prof Sewell: Thanks for trying again! Now, without wishing to entertain the sort of long exchange I had last time around, with Pixie et al [cf onward link through my always linked, recently updated appendix A], I would comment:

1] I would adjust slightly, but significantly: "Natural forces do not [spontaneously] do macroscopically [simply] describable things which are extremely improbable from the microscopic point of view."

2] The word "spontaneous" is there to highlight that intelligent agents do intervene and use the available forces, phenomena and materials of nature to create things that are macroscopically simply describable.

3] There is yet another underlying trap, I am afraid: many do not understand just how a probability barrier can exist, as opposed to one directly based on a potential barrier. But just because there is no physical force or logical contradiction involved does not mean that a particular state is likely to happen spontaneously -- and in the cases we are looking at prospectively, the "unlikelihood" is of the order of 1 chance in 10^150 or worse, i.e. practically impossible within an observed cosmos typically estimated at 10^80 atoms and 13.7 BY to date. This is the same reasoning that underlies the force of statistical inference under Fisher's concept that if something is really unlikely to happen by chance among a set of possible outcomes, then the safe bet is that if it happened, it happened by intent, i.e. the basis for rejecting the null hypothesis. Thus, the error in the common "lottery" fallacy:

--> FYI, would-be objectors: Dr Sewell is speaking about the concept that a given macroscopically observable and simply specifiable state is, as a rule, associated with a great many microscopic configurations of matter and energy [yes, cf diffusion and free expansion for micro distributions of mass, not just energy -- there is a configurational form of entropy in physics].

--> On the basic statistical thermodynamic principle that all accessible microstates are equiprobable, stat thermo-D has been built and has had great success. The direct implication is that, though all microstates are equiprobable, some states are such that, absent imposed constraints on the system, they are utterly unlikely to emerge spontaneously on the gamut of the observed cosmos across its whole history.

--> WHY is that so? ANS: because there are so many other states available that such special states are overwhelmed. E.g. put a drop of dye in a vat of water and allow it to diffuse. You will never see the drop spontaneously reform, as the scattered macrostate has in it so many more microstates than the clumped one. [Therein lieth the concept of configurational entropy . . . highly relevant to the link between entropy and information. Cf my linked for an introductory discussion.]

4] I have put in the -- strictly unnecessary but clarifying -- word "simply" to underscore the point that macrostates compress [usually on a lossy basis] the descriptive information on the system in view. Thence, if we lack detailed information at the micro-level, we are forced to assume random behaviour relative to our uncertainties, and so can only harvest such work or functionality as is consistent with that want of information. [Cf Harry Robertson's Statistical Thermophysics, PHI, for the elaborating details of this observation. FYI, objectors, there is an informational school in statistical thermodynamics, tracing to the work of Gibbs and Brillouin etc.]

5] Finally, PE, there is such a thing as configurational entropy, linked to the degree of freedom of distribution/location of micro-elements in space [i.e. volume] [and not just freedom of distribution of energy]. So, as for instance Thaxton et al exploit brilliantly, following Brillouin, in their classic 1984 work, TMLO, ch 8 [cf my always linked appendix A for the link -- dodging that spam filter], we see that certain spatially constrained micro-level configurations of matter can store information and/or perform privileged functions that can then be observed and simply described at macro-level. It turns out that such configurations are utterly improbable on the scope of the integrated functional elements of, say, a prototypical cell requiring a DNA strand 300-500k base pairs long -- not to mention the underlying codes, algorithms and executing enzymes, ribosomes etc. [The lower end of the range just cited is the level where existing life forms functionally disintegrate once knockouts reach a certain threshold. 300k 4-state elements can take up a configuration space of 4^300k = 9.94*10^180,617. In short, the functional configurations are hopelessly isolated in a space dominated by non-functional ones, relative to spontaneous mechanisms relying only on chance plus undirected natural forces. THAT is why OOL research is more or less at a conundrum.] Cheerio GEM of TKI PS: For my always linked, click on my handle.kairosfocus
May 23, 2007 at 02:51 AM PDT
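kairosfocus's configuration-space figure checks out; a quick verification in Python:

```python
from math import log10

# 300,000 four-state (DNA) elements: size of the configuration space 4^300000.
exponent = 300_000 * log10(4)
mantissa = 10 ** (exponent - int(exponent))
print(f"4^300000 ~ {mantissa:.2f} * 10^{int(exponent)}")
# -> 4^300000 ~ 9.94 * 10^180617, matching the figure in the comment
```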
Dave:
"Subjective information, or specified complexity, appears to be subject to 2LoT but mind (intelligence) can violate 2LoT by routinely choosing to do what is almost impossible for nature such as making a gold watch from a gold nugget"
Why are you saying that specified complexity appears to be subject to 2LoT? Perhaps you refer only to the products of intelligence and not to whatever device applies intelligent decisions.kairos
May 23, 2007 at 02:34 AM PDT
If I can echo Phevans: having read 'A Second Look at the Second Law' and then the 'kneejerk reaction' on the PT, the only fair criticism I noted was that the relation between heat thermodynamics and information thermodynamics was not made clear. Given that the second law was formulated for heat transfer (hence THERMOdynamics), is there consensus that the second law's application to information is warranted? Is there a good summary on the net anyone can link to? Even better, Granville, how about an addendum to your article?antg
May 23, 2007 at 02:25 AM PDT
Dave, What does the 2LoT (thanks for the acronym!) apply to other than energy? It refers to entropy, but informational entropy is quite a different concept from thermodynamic entropy. I've not seen an argument (convincing or otherwise) for applying a law from one domain to another. I'd be interested to get some links.Phevans
May 23, 2007 at 12:55 AM PDT
continuing

I think a lot of people just skip right over all this and concede that only intelligence can defeat virtually impossibly long odds to produce specified outcomes. The argument then becomes a question of whether natural selection mimics intelligence in this capacity. Most of us concede that natural selection can work in a preferential manner, selecting outcomes that have long odds against them but not impossibly long odds. A random mutation in a gene that works to defeat an antibiotic, for instance, but not something like turning a scaled limb into a feathered wing in one fell swoop. I think most of us accept that it is theoretically possible for that virtually impossible instantaneous change to occur in a series of not-so-unlikely small changes.

Darwin skipped right up to this point. But he admitted that there may be discontinuities that cannot be bridged in small steps. And 150 years later we're still there. The living world is flush with discontinuities without plausible bridges, i.e. the "gaps" in evolution. The chance worshippers declare it's only our ignorance or lack of imagination at fault for there being gaps. The ID crowd declares the discontinuities are unbridgeable by natural selection in small steps, so it must be a higher order intelligence that can plan and execute the leap across the discontinuities. The thing about natural selection is it can't plan ahead for anything. It is reactive where higher order intelligence is proactive.

Thus for me it comes down to demonstration. If it can be demonstrated that a series of small steps subject to reactionary natural selection can bridge any given gap, then I'll accept it. Until then intelligent agency must remain at least a live possibility, and rejecting it out of hand when we already know intelligent agency exists in the universe today, without knowing it didn't exist in the past, is not science - it's dogma. I prefer to focus on just one gap - the giant leap from inanimate matter to a replicator with a symbolic inheritance mechanism such that natural selection can begin to work in the first place. DaveScot
May 22, 2007 at 10:54 PM PDT