Uncommon Descent Serving The Intelligent Design Community

Introducing “Sewell’s Law”

In an April 2, 2007 post, I noted the similarity between my second law argument (“the underlying principle behind the second law is that natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view”) and Bill Dembski’s argument (in “The Design Inference”) that only intelligence can account for things that are “specified” (= macroscopically describable) and “complex” (= extremely improbable). I argued that the advantage of my formulation is that it is based on a widely recognized law of science: physics textbooks practically make the design argument for you. All you have to do is point out that the laws of probability do (contrary to common belief!) still apply in open systems; you just have to take into account the boundary conditions in the case of an open system (see A Second Look at the Second Law).

However, after making this argument for several years, with very limited success, I have come to realize that the biggest disadvantage of my formulation is: it is based on a widely recognized law of science, one that is very widely misunderstood. Every time I write about the second law, the comments go off on one of several tangents that sometimes have something vaguely to do with the second law, but have in common only that they divert attention away from the question of probability.

So I have decided to switch tactics: I am introducing Sewell’s Law: “Natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view.” I still insist that this is indeed the underlying principle behind all applications of the second law; it is, in fact, the only thing that all applications have in common. But since even the mention of the “second law” draws such “kneejerk reactions” (as Philip Johnson put it), let’s forget about the second law of thermodynamics and focus on the underlying principle, Sewell’s Law. My main point is still the same as before: natural forces cannot rearrange atoms into computers and spaceships and the Internet here, whether the Earth is an open system or not. But now you cannot avoid the question of probability by saying the second law doesn’t really apply to computers and spaceships (although most physics textbooks do apply it to the breaking of glasses and burning of libraries, etc.); whether the second law applies or not depends on which formulation you buy. But such a rearrangement certainly seems to violate Sewell’s Law. Unless, of course, you believe that it is not really extremely improbable that the four forces of physics would rearrange the basic particles of physics into computers and TV sets and libraries full of novels and science texts; in that case I can’t reach you.

Comments
Re PE, no 55:
The point is that there is plenty of interesting debate to be had without blatantly inaccurate assertions being posted and debated. If you concede the point under debate that random mutations can add information, then there’s no need for this to continue . . .
1] Now, of course, long since, the fact that someone made an error was conceded and corrected. So at best this is like one who is sent to take down a mountain, turning aside to a molehill, then scraping it down with a shovel and saying that he has dealt with the mountain. In other words, we here can see the fallacy in the easy slide we are being “invited” to make: from “random mutations can add information [in trivial cases of a few bits of change, most often by disabling existing functions . . . e.g. in antibiotic resistance or the like]” to RM accounts for generating the scope of information relative to biofunction across the biological world.

--> Here I add that natural selection is simply a filter; unless we can FIRST generate the biofunctional DNA and express it in life systems, there can be no competition on differential reproduction. And such information is contingent, so the only credible dominant mechanisms are chance and purposeful agency.

--> In every directly observed case of generation of complex specified information beyond the Dembski bound, this has only happened by agent action.

--> The same holds for actual observed cases of irreducibly complex systems, i.e. multicomponent systems that break down in function if one or more core-functional parts fail to work. (I have not found alleged counterexamples to this particularly impressive once we go beyond the gleeful headlines and summary statements, BTW. If just one case in point holds from, say, Behe's 1996 presentation, the evolutionary materialist account for biodiversity collapses. Cf Loennig's excellent work on this, accessible through my always linked, as usual.)

2] In short, we are here gliding by the problem highlighted by Sewell, in a rush off to a strawman. Namely, Sewell pointed out that: “Natural forces do not do macroscopically [i.e. “simply”] describable things which are extremely improbable from the microscopic point of view.” (Nor am I overly impressed by the idea that the post that is being discussed is simply a springboard for us to debate and draw attention to our own ideas and agendas. That notion may hold for those who want to capture a discussion and divert it from a direction dangerous to their agenda, but such diversions have names in logic: red herrings, strawmen, and the like. If you have a substantial point or correction, that is different from distracting attention.)

3] Now, too, in that phrase “extremely improbable from the microscopic point of view” lurk all the substantial issues at stake. So to glide by it by substituting “add information” -- one bit of information is not at all in the same ball park as 500 bits, or the millions to billions of bits that are expressed in DNA molecules in life forms [not to mention in the underlying coding system and algorithms that lurk in the DNA's object code] -- is to tilt at a handy strawman set up by someone's sloppy phrasing. (Onlookers, observe how PE latched on to the single phrase, whilst insistently ignoring the substantial point. ID onlookers – here is a lesson on how important it is to be careful in how we speak, as we are dealing with people who will use any such error to get away from addressing the real problem on the merits. Of course, we will then be accused of being complex or long-winded, but there is never an end to possible rhetorical objections to ANY statement.)
4] The substitution of a strawman again crops up in the cases being cited from the literature: mutations that are based on one or a few base pairs, or on gene duplication accidents etc., or even on the re-emergence of deleted code through redundancy mechanisms. All of these have NOTHING to do with the origination of hundreds of kilobits to megabits to gigabits of novel biofunctional information at novel body-plan level (or the like) in the relevant beyond merely astronomical configuration spaces. We can therefore draw our own conclusions that Sewell's major point stands, if the best that can be done by those who oppose his main point is to major on minors, or to otherwise divert attention. GEM of TKI kairosfocus
phevans: “What evidence could possibly show that something is not a process of ‘front loading’ but is in fact a random mutation? Answer: nothing.” If you are willing to say that rm+ns is pseudo-science which can never be shown to be true, even in principle, then I guess you're right that nothing can falsify design. Personally I'm willing to give the NeoDarwinian theory of macroevolution more time to prove itself, keeping in mind that if it's false it will never be proven. ID has nothing to prove. We already know that intelligent agency can alter the course of evolution through purposeful changes to genomic information, i.e. genetic engineering. DaveScot
Bornagain77: "....nor did they consider the equally valid presumption of “Front Loading” that presumes a complex feedback control loop in the Genome that “mathematically or logically originated” the duplicated gene." "...#1 the (duplicated) gene is a spare tire gene that already had the ability in it. #2 the probability of totally random processes finding the correct mutations to modify the gene are fantastic thus the experiment actually demonstrates “front loading” that is preprogramed in the cell" You have made some good arguments for ID in this debate. I might mainly differ only in the preferred subhypothesis of how ID in evolution actually takes place. From the above quotes it appears that you favor the "front loading" hypothesis of ID among all the other versions. The first quote indicates that you consider that the duplication of the gene was itself a response by the organism to environmental stress, brought about by a complex feedback mechanism in the organism. The "front loading" was apparently in the existence from very early times of the feedback mechanism itself. The second quote implies that the subsequent exceedingly improbable (from a random standpoint) adaptive mutations to the duplicate gene were "front-loaded" in some way. One form of this would be a very complex built-in system which senses environmental stress, determines what genetic changes are necessary to respond to it, and modifies the appropriate gene(s) accordingly. Alternately in the front loading concept, the duplicate genes and adaptive mutations could have been stored in the genome in the beginning and somehow intelligently accessed as necessary. All the different hypotheses of ID of course have various pros and cons. It seems to me that this is less plausible than simply positing that (most of) the genomic changes including gene duplications and simpler mutations have somehow been induced directly at many times in evolution, by some unknown intelligent agent. magnan
The insults aren't necessary, thanks all the same. I'll be leaving the thread after this post. "the probability of totally random processes finding the correct mutations to modify the gene are fantastic thus the experiment actually demonstrates “front loading” that is preprogramed in the cell" Here's a thought experiment for you. What evidence could possibly show that something is not a process of "front loading" but is in fact a random mutation? Answer: nothing. Therefore you get to keep your bias and I get to keep mine. "your definition of information is divorced from the overall function of the organism" Your definition of information inexplicably has a nebulous concept of "function" tied up in it which makes it impossible to measure. Once you find out a way to measure it, please get in touch. Phevans
Phevans, I would also like to add that your definition of information is divorced from the overall function of the organism. Thus your definition is wanting in integrity to empirical validation. For the neo-darwinism scenario to be proven true, the union of information to increased functionality of an organism must remain unbroken. You want to claim an increase of information in organisms that have lost functionality when compared to the original organism in "normal" environments. This is clearly wishful speculation on your part if you claim this is proof for neo-darwinism. Your gun is quite empty, Phevans! bornagain77
Phevans, your gun is quite empty of bullets! I believe the following is what you're so excited about: the fact that the "new" galactosidase enzyme didn't evolve from scratch, but was produced by a small number of mutations in an existing gene, albeit in an operon far distant from the deleted galactosidase gene. In a similar way, the gene for the repressor of this newly-evolved galactosidase, a protein that controls its expression, was rendered lactose-sensitive by a simple mutation in its sequence. In other words, the "new" 2-part system was produced by a couple of rather minor mutations in two pre-existing genes. Now let me get this straight; you are claiming this experiment is a totally random process by which information was created and thus proves neo-darwinism true. Well: #1 the gene is a spare tire gene that already had the ability in it. #2 the probability of totally random processes finding the correct mutations to modify the gene is fantastic; thus the experiment actually demonstrates "front loading" that is preprogrammed in the cell. i.e. Was it just luck as required for proof of neo-darwinism, or is there a deeper control loop in the genome finding the correct response? The math clearly points to the latter. #3 the "evolved" irreducibly complex system is less robust than the system that was removed. So you are still at square one in generating novel information that increases functionality of an organism. Phevans, you have no bullets; I can assure you that each system you show will be found as such. I've been through these arguments too many times before. bornagain77
Oooh, here's a beaut I hadn't seen before http://www.evcforum.net/cgi-bin/dm.cgi?action=msg&f=10&t=186&m=1 Not only introduction of novel genetic information, but one which forms an irreducibly complex system! Very nice. Bornagain, you seem to be missing the point with this quote: "Gene duplication, polyploidy, insertions, etc. do not help — they represent an increase in amount of DNA, but not an increase in the amount of functional genetic information—these create nothing new. Macroevolution needs new genes" Gene duplication *creates* new genes. It's mutations on these genes that provide the variance that natural selection works on. Also, you need to be more careful about your terminology. An increase in information in the genome is a very different prospect to novel behaviour expressed at the "macro" level. On a side note, here's a great review from Nature on various ways that new genetic data can arise http://www3.uta.edu/faculty/betran/naturereviews.pdf Phevans
Phevans, I might add that, in regards to information, Gene duplication, polyploidy, insertions, etc. do not help — they represent an increase in amount of DNA, but not an increase in the amount of functional genetic information—these create nothing new. Macroevolution needs new genes (for making feathers on reptiles, for example). Thus you are at square one again in generating novel genes by a totally random process. As I stated before, all tests in inducing random mutations have been very disappointing to evolutionists. This is a very simple thing, Phevans. The genome is absolutely required to be proven to have a certain amount of flexibility to "totally" random mutations in order for the neo-Darwinist scenario to even be considered true in the first place. Extensive tests have failed to demonstrate any genome flexibility to totally unambiguous random mutations. This is a crushing fact that rules Neo-Darwinism out at the start of the debate! It is a truth that plain and simply cannot be overcome by any amount of wishful speculation! bornagain77
Phevans, IF you are defending the materialistic position, you are required to prove the duplication was a totally random event. Yet even the authors of the paper you cite state: "We propose a genetic mechanism to account for these changes and speculate as to their adaptive significance in the context of gene duplication as a common response of microorganisms to nutrient limitation." So even the authors of the paper admit the need for a "preexisting mechanism" to explain the phenomena. Phevans, this example you show me is screaming "front loading", not the purely random mutation that is absolutely required to prove neo-darwinism true!! Thus your need to prove a truly random and beneficial event has happened is still unsatisfied. This is a good example of a preexisting complex feedback control loop, by the way. In regards to your assertion that it is truly generating novel information, I point out two facts. #1 the gene that was duplicated was a preexisting gene, duplicated by a preexisting mechanism I might add, in response to stress from its environment. #2 The overall functionality of the yeast is decreased in its normal environment; thus when the stress is removed from the mutant yeast, the original yeast will be favored by selection over the mutant yeast. Clearly you have not demonstrated a gain in information since you have not "built a better overall yeast" that will be favored by selection. bornagain77
kairosfocus: "PE, you cannot turn a triviality that exploits a sloppy remark by a commenter into a major conclusion on the issue in the main." I'm not. I'm attempting to curb the use of sloppy and inaccurate generalisations which are parroted as being gospel when they are patently and provably false. These blog posts are starting points for discussion, and BA and I were discussing a side issue which naturally came up in the conversation. You have no right to dictate what is and is not valid debate. You'll notice that not once have I claimed that new genome information proves Darwinism or disproves ID. This is not my point. The point is that there is plenty of interesting debate to be had without blatantly inaccurate assertions being posted and debated. If you concede the point under debate that random mutations can add information, then there's no need for this to continue. Phevans
Okay, a couple of quick points:

1] BA 77: Good thoughts, and yes I had forgotten that side -- if an observation is common across several hypotheses, it cannot distinguish between them. [This is of course epistemology and logic as applied to science. Both philosophical sub-disciplines.]

2] PE: “I’m responding to that claim, and only that claim.” PE, you cannot turn a triviality that exploits a sloppy remark by a commenter into a major conclusion on the issue in the main. I have pointed out the balance of the matter and have focussed on the main issue. That you choose to ignore that and refocus insistently on a side issue that is at best trivial, is telling and not in your case's favour.

3] To BA 77: “I’m really not interested in playing the faux-philosophy game. You made a claim that mutations never add information. If you make empirically false claims, then I will show that you are mistaken. Please stick to this issue.” Here I again must beg to intervene. The substantial point in the thread is as I summarised yesterday, and the "addition of information" claim is a red herring and strawman relative to the issue in the main. You have yet to address the key question of 250+ base pairs of novel biofunctional information originating by chance, much less the aggregate of 500+ k of information, with associated molecular machines, and the underlying algorithms and computer language. To major on such minors while the key issues go a-begging is not a sign of a healthy position.

4] “faux-philosophy game”: Finally, the philosophically loaded roots of the matter cannot be so easily dismissed as you attempt by using this term. For instance, one of the reasons that it is held that inference to design is inherently unscientific is the assertion of so-called Methodological naturalism, which in effect asserts an attempted redefinition of "Science" that rules that scientific thought may only think in terms of entities permitted by the evolutionary materialist cascade for the origins of the world as we experience it: cosmological, chemical, biological and socio-cultural. That begs major phil of sci questions, and is historically inaccurate. [Cf my always linked and onward discussions such as in Peterson.] Further, so soon as we ask why it is we think in terms of chance, law-like and agency based causal forces and factors, we are in the province of phil. This is what underlies e.g. hypothesis testing and Fisher's approach; further to this, that the resulting empirical reasoning is defeatable thus provisional, and that this is a feature of scientific work, are philosophical and historical questions too. And, it would hardly be fair or accurate to say that such underlying issues are off-topic, as these are where the main issues have to be decided. GEM of TKI kairosfocus
Kairosfocus: The discussion I was involved in with DaveScot and BornAgain77 got to this point: bornagain77: "I point out that in all the countless millions of observations of mutations in DNA in the laboratory, NOT ONE has ever been shown to unambiguously increase information." I'm responding to that claim, and only that claim. If you're not interested, feel free to ignore us. BornAgain: if you want an example of observed gene duplication, here's one - http://mbe.oxfordjournals.org/cgi/content/abstract/15/8/931 From a Sewell information POV, there is clearly more information. From the point of view of CSI: - the genome is now more complex - the genome is now more specified (more bits) - there is more information Please show how this mutation does not add information to the genome. I'm really not interested in playing the faux-philosophy game. You made a claim that mutations never add information. If you make empirically false claims, then I will show that you are mistaken. Please stick to this issue. Phevans
Phevans: You claim proof for evolution by citing a study on gene duplication. Yet they infer the gene was duplicated first and then go about proving it. This is a common practice of evolutionists (and many other scientists) and is clearly the practice of "bad science". To commit to a prior philosophical bias and then set out to prove it is exactly the opposite of how science should work. Science should presuppose all reasonable philosophical presumptions are true and then set out to see which one is "most likely" true. Your study has several presumptions prior to investigation. It presumes that the origination of the gene is a totally random process, for one. Yet obviously they did not consider the equally valid ID presumption that the Gene was designed that way in the genome in the first place, nor did they consider the equally valid presumption of "Front Loading" that presumes a complex feedback control loop in the Genome that "mathematically or logically originated" the duplicated gene. But above all this, and indeed even before all this, for gene duplication to even be considered a viable random process by the evolutionary scientists they must first prove that the genome itself is indeed flexible enough to allow the "totally random" nature of mutations that the materialistic/evolutionary theory is absolutely required to have to be true in the first place. As I have stated before, all tests in these areas have been highly disappointing to the evolutionists (not one uncontestable mutation out of millions!). And even though this is absolutely required to be proven in the lab for evolution to be considered true, evolutionists blindly ignore this crushing fact of the overwhelming detrimental nature of purely random mutations to the DNA. Yet, in the same vein, the fact that the overwhelming majority of mutations to DNA are found to be slightly detrimental is absolutely crushing to the materialistic theory of evolution since the slightly negative mutations are never selected out and they therefore spread throughout the entire population before any "supposed" beneficial mutation ever even has a chance to occur. The principle is called "Genetic Entropy". I suggest you order the book from amazon. You see, Phevans, materialism and thus evolution are both constrained to satisfy the empirical evidence for blind chance before they can even be considered valid. They have not done so, and in my opinion from the evidence I've seen so far they will never satisfy this requirement. bornagain77
PE:
RE: The original discussion was to see information added to a genome . . .
The substantial and original discussion, as can easily be observed above, in Dr Sewell’s post is on:
“Natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view.”
1] What I have done is to provide a specific context for assessing just what “improbable” means, courtesy the Dembski probability bound; this is standard for discussions on the issues of CSI.

2] So, to try to revert to discussions of single-point information loss mutations or the like, to take rhetorical advantage of careless wording by a commenter, is to switch to a strawman, I am afraid. (Onlookers: Those who turn away from the real issue to attack strawmen, thereby reveal that they cannot properly address the issue in the main.)

3] Not surprisingly we then see a citation on exactly what was never the serious focus for discussion: “An example is this paper . . . . An example of a gene duplicated which then adapts for better functionality,” which illustrates the point aptly. This is a case of replication of existing information with slight modification [e.g. note Fig 1 in your linked paper, which speaks of 28 base pairs of difference . . . well within 250, and that is all the way across to human DNA . . . not exactly the immediate ancestral species!].

4] This is very far indeed from a realistic test, where for instance the Cambrian Revolution originated on the usual timelines, dozens of phyla within a few dozen MY. The jump from unicellular organisms to create dozens of phyla is huge, as can be seen in the gap highlighted by Meyer et al in their now famous paper. For instance, a modern arthropod has ~ 180 Mn base pairs, and unicellular species reasonably would have had say 1 Mn, or of that order. Where did all that additional information to create the new body plans come from, so fast?

5] As for your attempted dismissal of CSI, I note that we can clearly enough see whether something is functional/non-functional, and this is a good enough specification. We can then look at the information storage element involved, and see how many possible configs can be stored in that space, I^N: 250 4-state elements have ~ 3.27*10^150 possible configs. Of these, only a very small fraction, for any typical 250-element functional chain of DNA, will retain function against RANDOM changes. And of course this would have to be replicated for every 250-monomer block in a biofunctional string of DNA.

6] This is a very plausible metric of the quantity of information that has to be generated to get to that functionality [we are simply working the other way], and it reflects the well-known high degree of isolation in the overall config space for biofunctionality. In short the CSI issue does not go away so easily as you imagine.

7] As to the Talk Origin cites, the issue is not whether the papers exist, but over what is the substantive issue at stake, and what are the materially relevant items of evidence relative to that. TO is notorious for strawmen, elephant hurling tactics and literature bluffing. (EH, FYI, pretends to a consensus by papering over a major series of issues with a few smooth words as though that says it all [the CC debate is a classic, with the typical mantras on “consensus”]. Lit bluffs cite places where passing reference to the themes in an issue is made, but do not address the matter in the main on the substance [this is a common tactic, e.g. the notorious stack of journal articles presented to Behe by the lawyer on the other side in the Dover trial].)

If you have something serious on the merits, as per the proper focus of the post, let us hear that. Otherwise by tilting at strawmen you imply that you cannot address the issue on the merits. GEM of TKI kairosfocus
kairosfocus: "I have yet to see in particular the peer-reviewed, generally accepted documentation of an empirically observed significant novel biofunction originating by chance that requires say 250 base pairs of novel DNA information formed out of noise. That is, substantially, what BA 77, DS and others are asking for. Replication or gene transfer or reproduction through redundancies etc do not count. Nor, does hurling elephants nor literature bluffing." The original discussion was to see information added to a genome. I'll readily admit that there are no examples of the size you specify; it would be most surprising if there had been, with the timescales we're talking about. I fail to see how replication and mutation of genes is not a clear case of information being added to a genome, for any definition of information. Regardless of how people feel about TO, the references are valid. An example is this paper: http://www.nature.com/ng/journal/v30/n4/full/ng852.html An example of a gene duplicated which then adapts for better functionality. I'm quite happy to take the definition of CSI as the formal definition of information; however as there's no way of measuring CSI it renders the whole discussion moot, since we can't say whether CSI has increased or decreased pre- or post-mutation. Phevans
Hi BA 77:
it is a commonly know[n] fact that the further a bacteria deviates from its starting point the less fit for survival it quickly becomes.
This is of course a characteristic of isolated islands of functionality in a configurational space. Over the weekend, at a funeral for a relative, I happened to be sitting behind three generations of surviving women. As I observed the Grandma, the daughter and the grand-daughter generations, I thought about how our bodies wander across islands of functionality as we age, then what happens when we pass the edge of bio-function, lying just across the way in the casket. Then, in the yard outside, what happens thereafter as all active maintenance mechanisms fail. It led me to reflect on just how pervasive is the issue of entropy, and how persistently systems in functional macrostates tend to drift away in the absence of active measures to self-maintain, or maintenance interventions from outside. [Makes me wonder about the physics of the fall as described in Genesis . . . was there a supervisory control system based in a domain of reality that is not based on molecules (so is free of the constraint of 2 LoT), that can maintain the body in the face of the trends locked up in molecular dynamics? Has that been switched off for one reason or another?] In short, a funeral can be an interesting time for reflection. And, of course, there is the point in Clausius' very first example of 2 LoT at work, that raw injections of energy without configuring information tend to add to the degree of disorder at molecular levels. Then, this morning, you talk about genetic entropy -- multigenerational wandering away from islands of functionality leading to extinction. Maybe we need to use 2 LoT, molecular form, as Sewell discusses, to do a rethink of a lot of things . . . GEM of TKI kairosfocus
Phevans, since you posted I guess you sincerely believe there are examples out there that increase information/function. Most of the so-called beneficial mutations that you will be real excited to show me will ALL turn out to be of the bacteria/antibiotic adaptation variety. Dr. Sanford deals with this type of mutation in his book "Genetic Entropy". He says all mutations like this are the result of some type of loss of function in the bacteria, i.e. the antibiotic binding protein in the bacteria loses the ability to recognize the antibiotic molecule. He compares it to someone losing their car alarm. Many people would see the loss as beneficial. Yet in actuality it is still loss of function for the bacteria and thus, in my very pragmatic definition, loss of information in the bacteria. As Patrick stated, most people on this site have heard this type of argument many times before. You fail to fully appreciate the fact that there actually is no proof of any evolution where you think there is some. NEVER has a bacteria transformed into any other bacteria despite extensive experimentation trying to accomplish this. In fact it is a commonly known fact that the further a bacteria deviates from its starting point the less fit for survival it quickly becomes. Where is your proof, Phevans? You have none. Please show me any one thing that you believe is rock solid proof of evolution and I will clearly show you why it proves ID. This is a fair challenge to you. Until then, take care. bornagain77
phevans, I wouldn't bother posting links to talkorigins. Long-time UD members have all read it, and the updates, and we're not impressed at all. In fact, it's generally considered to be a bad source of information. Try PNAS or other sources.
You are mistaken, for any formal definition of the word information.
I take it that you refuse to accept the formal definition of CSI, etc as valid? Refuse the information category that is actually under consideration and force-fit another instead in order to make arguments work. We see this tactic all the time and it's not conducive to a good discussion. Patrick
PE: I took a look at the TO link, and as usual I am not impressed. (These are prime examples of the "anything can happen in an open system" fallacy.) I have yet to see in particular the peer-reviewed, generally accepted documentation of an empirically observed significant novel biofunction originating by chance that requires say 250 base pairs of novel DNA information formed out of noise. That is, substantially, what BA 77, DS and others are asking for. Replication or gene transfer or reproduction through redundancies etc do not count. Nor does hurling elephants or literature bluffing. Let's see specifics that pass the sort of hurdle that must have been passed to get OOL or body plan level evolution if NDT and linked OOL models are as well established as advertised. GEM of TKI kairosfocus
H'mm: I tried to post on the difference between info carrying capacity [so-called Shannon info], functionally specified [algorithm-implementing] information, FS COMPLEX info [FSI beyond 500 bits etc.], and semantic [symbolic, language-like] info, but the filter grabbed it away . . . GEM of TKI kairosfocus
Gentlemen: Can I suggest that we make a distinction here:

1] Information-carrying capacity [what so-called Shannon Information measures], vs:

2] Functionally specified information, which is like the object code in a microcontroller or the like -- information at work in an information-processing system. DNA fits in here. [Note my use of examples to specify what I am talking about]

3] Functionally specified COMPLEX information -- functional information that goes beyond the Dembski type bound in terms of requiring over 500 or so bits of storage capacity, or, more generally, that is so isolated in the relevant configuration space that it amounts to less than 1 in 10^150 of the configurations the storage capacity allows. [This is the threshold beyond which it is not credible that on the gamut of the observed cosmos, we could by chance-dominated processes get to the islands of functionality.] --> BTW, a good engineer can do a whole lot with 500 bits of storage.

4] Semantic information -- what we use when we interact using concepts, ideas and the like, i.e. language in the subjectively meaningful/symbolic sense, not just the functional/algorithm-controlling sense.

In doing this I am sort of following Berlinski. I think that it helps us keep out of ambiguities. On that too, I think I would like to see documented cases of empirically observed mutations that lead to biologically novel functions, as opposed to the usually met-with loss-of-function point mutations, simple breeding out of variability through the founder principle, or replication of existing functionality through redundancy mechanisms. Then, I want to see such innovation documented through recent replicable studies -- not inferences, extrapolations and assumptions on the deep past -- that exceed 500 bits, i.e. 250 base pairs of noise converted by chance into biofunctional DNA. Beyond that, I want to see the same lead to novel body plans, comparable to say a dog acquiring the ability to fly. Then, we can say that NDT style macroevolution has some serious empirical warrant. Until then, I remain for excellent reason profoundly skeptical that such has the required degree of warrant relative to the obvious inference that, based on experience of what agents do, FSCI in biofunctional systems -- starting with the molecular technologies of the cell -- is just that: artifacts of intelligent, purposefully acting agents. GEM of TKI kairosfocus
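(Editorial aside, not part of the comment above: the arithmetic behind the "500 bits / 250 base pairs" figure can be checked directly. The Python sketch below is my own illustration and simply assumes each base pair is a 4-state element, i.e. 2 bits of storage capacity.)

# Arithmetic behind the 500-bit / 250-base-pair threshold discussed above.
# Illustrative only: each DNA base is treated as one of 4 states (2 bits).
import math

bases = 250
bits = bases * math.log2(4)       # 2 bits per 4-state element -> 500.0 bits
configs = 4 ** bases              # number of distinct 250-base sequences

print(bits)                       # 500.0
print(f"{configs:.3e}")           # ~3.273e+150
print(configs > 10 ** 150)        # True: past the 1-in-10^150 bound cited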
A quick browse through the talkorigins faq will give you all the information you need. The relevant link is here: http://talkorigins.org/indexcc/CB/CB102.html How could "microevolution" occur without any novel abilities or information being generated? Before you say horizontal gene transfer, where do these new genes come from? Do they not count as new information? DS: my argument is that 2LoT isn't relevant to particular encodings of information. A given genome is a form of abstract information and isn't affected by the 2LoT, but one particular encoding of that genome in the form of DNA may be affected. However that particular encoding will be replaced long before this happens. Phevans
Phevans, since you seem to know of beneficial mutations in DNA that have unambiguously increased information/function that are not in any of the literature I've read, I would like for you to point them out for me. (As well, I know quite a few others who would like to know of these beneficial mutations you know of.) Until you point them out, I will continue to state my original claim! bornagain77
phevans Obviously we aren't discussing Shannon information. The background noise in a semiconductor is about as random as random gets. In terms of Shannon information it's packed as full of information as you can get but in terms of just about any other kind of information it is devoid of content unless it happens to be the universe telling us secrets and we don't have the decryption key. As I said we need to be clear it's subjective information we are discussing. Information coded or encrypted into a storage or transmission media. A modulated radio signal carrying an encoded voice is one example. DNA encoded with instructions for constructing proteins is another. DaveScot
bornagain77: "in all the countless millions of observations of mutations in DNA in the laboratory, NOT ONE has ever been shown to unambiguously increase information." You are mistaken, for any formal definition of the word information. DaveScot referred earlier to Shannon information, which is a measure of the information content of an arbitrary string. It's what is generally meant when discussing the information content of the genome, though if you are referring to a different sort of information, please let me know and point me at a formal definition. As DS mentioned, Shannon information is highest in a totally random string, and it increases as the probability of the string's occurrence decreases. That is, low information content means the string is highly likely to occur, and high information content means that it is unlikely to occur randomly. Random mutations in the genome can either increase or decrease the information content. It is this variation on which natural selection works. It has been mathematically and experimentally verified, and is not under serious debate from either the Darwinian or ID sides. Please stop peddling these misconceptions. Phevans
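(Editorial aside, not part of the thread: the quantity Phevans is describing can be computed directly. The Python sketch below is my own illustration; it estimates Shannon entropy from character frequencies and shows that a random string of the same length scores higher than ordinary English text. The sample strings are arbitrary.)

# Empirical Shannon entropy (bits per symbol) from character frequencies.
import math, random, string
from collections import Counter

def entropy_per_symbol(s):
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

english = "natural forces do not do macroscopically describable things " * 20
scrambled = "".join(random.choice(string.ascii_lowercase + " ")
                    for _ in range(len(english)))

print(entropy_per_symbol(english))    # lower: English letter frequencies are uneven
print(entropy_per_symbol(scrambled))  # near log2(27) ~ 4.75, the random maximum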
H'mm: Some interesting developments -- this thread is a spiral, not a deadlocked circle.

1] Jerry: “in the discussion of this in February, bFast made by far the best observation of specificity, namely that the sequence or object was closely correlated with functional processes and thus specified by them.” That is part of why, in my always linked, I specifically discuss FSCI -- FUNCTIONALLY specified, complex information. First observe a function [e.g. bio-function based on molecular energy converters], then discuss how informational elements are arranged to get there. Turns out that the functionality rests on complex digital codes and sequences of meaningful elements that are so isolated in the configuration space that they cannot be credibly accessed through chance plus necessity alone acting through known patterns of random behaviour at micro levels, i.e. Sewell's Law and its background in statistical thermodynamics. (Cf my own intro level discussion in Appendix A to my always linked, with onward links.)

2] “I think the coin toss and card deck examples are red herrings in this and while they indicate intelligence, they are qualitatively different processes.” It so happens that the 500-coin toss or the serving up of a suspicious hand at poker are illustrative of the underlying issues of digital configurational spaces and functionality. Both work because they are connected to the disreputable origins of probability theory, gambling games and the problem of cheating or which bets make sense to take. The mathematics so discovered or exemplified has great applicability and utility in many other cases of greater moment. Also, the cases are relatively easy to understand. For instance, if I were to come to you with 500 coins on a tray vigorously being tossed and dancing fast, then let them settle down -- lo, all heads -- and I say that I just tossed at random and this very special and specific and simply describable outcome happened just so, would you believe me? [Or would you suspect that for instance all the coins might just be double headed? You are not permitted to inspect the coins.] So, they are not red herrings but toy examples of a much bigger thing.

3] Mt Rushmore vs rocks in New Hampshire or Hawaii: Again, Mt Rushmore has semantic context and functionality, not just coincidence. Rocks may form vaguely head-like shapes and shadows, but they do not spontaneously form a cluster that resembles four historically important presidents of the US to the point of being a high-quality group portrait in accordance with principles of monumental sculpture.

4] PE to BA 77: “we both agree that NS+RM can ‘generate increasingly meaningful (subjective) information’, and that this isn't in any way in violation of the 2LoT.” The issue here is probabilities relative to accessible microstates and associated macroscopically observable outcomes, and in the question in view, the minimum degree of complexity and specification is well beyond the Dembski bound of what is remotely likely to happen on the gamut of the observed cosmos. Just look at the OOL issue and the issue of innovating body plans . . . [If a system is not beyond that bound, it is not FSCI in the meaning of the term we are discussing; 500 bits of information storage capacity is enough if the relevant state in question is unique, and 1 in 10^150 otherwise. I.e. we just described the relevant step size, and relative to getting 500 k or so DNA elements to begin life or 10's of millions to innovate new body plans, the "small" step is beyond the Dembski bound. Recall you have to get to a sufficient complexity and show it empirically, to get to reproducing life which can then undergo natural selection. Cf TBO's classic discussion in TMLO.]

RM + NS can explain micro-evolution in at least some cases -- as a general rule by LOSS of biofunctional information through scrambling [e.g. antibiotic resistance etc] -- but it simply cannot reasonably surmount the probabilistic hurdle Sewell is pointing to. THAT is what is being dodged in the "anything can happen in an open system" mantra. No, absent intelligently directed constraints, open systems migrate naturally towards more probable and less functional macro-configurations, i.e. the ones with overwhelmingly more microstates that fit in with them. That is what drives diffusion, i.e. increases configurational entropy, and it is what drives the disintegration and decay that we find ourselves fighting so often on all fronts on which we like to make progress. It so happens that evolutionary materialism is opposed to it, but that does not make it any less empirically well established as a physical principle. Nor does it change the fact that every case of FSCI that we directly know the cause of is, again and again, a case where intelligent action was brought into creative play. Therefore, on empirically anchored inference to best explanation -- screams from the evo mat advocates notwithstanding -- the FSCI in life forms etc is best explained by the same basic mechanism: intelligent action that intended to create such function, and succeeded. GEM of TKI kairosfocus
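(Editorial aside, not part of the comment above: a quick back-of-envelope check of the 500-coin illustration, using the figures already in the thread. The Python lines are mine and purely illustrative.)

# Probability of tossing 500 fair coins and getting all heads.
coins = 500
total_configs = 2 ** coins        # all distinct head/tail sequences
p_all_heads = 1 / total_configs   # exactly one all-heads outcome

print(f"{total_configs:.3e}")     # ~3.273e+150 possible sequences
print(f"{p_all_heads:.3e}")       # ~3.055e-151
print(p_all_heads < 1e-150)       # True: below the 1-in-10^150 bound discussed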
Rude: "Which brings up something I've wondered about: Is design written in the language of mathematics/logic and is that itself designed? If the laws of physics are contingent then those who see them as designed are correct. But does it stop somewhere? Or is it design all the way down?" A good twist on the "turtles all the way down" aphorism. That is a profound question I have wondered about myself. For instance the behavior of integers. Could there be a realm where the series of primes was different from what has been 'discovered' by mathematicians? If so, it would seem to violate some sort of fundamental rules governing anything for it to exist at all. In its simplest form this argument from the known behavior of numbers in mathematics could use as an example the simple multiplication tables. Could the insight that 2 x 2 = 4 be a limited human perception not necessarily applying to higher levels of consciousness or existence, in which the known behavior of numbers in mathematics is a design choice by a higher intelligence? I think this question is vastly beyond the capability of human intelligence to fathom, because the known structure of mathematics is inherent in the structure of the physical world and human rational thought itself. magnan
WmAD observes that there are archaeological objects that we are able to determine are designed but not what they were designed for. Dr. Egnor posted a bit on one such object (the Antikythera Mechanism) at http://www.evolutionnews.org/. Atom
Interesting stuff—no time to digest it all! But in one of his books—if my memory serves me—WmAD observes that there are archaeological objects that we are able to determine are designed but not what they were designed for. This dissociates the specification from function, though mostly I think it would be the same as the function. Linguistics in America splits into two camps, where what interests the one—the formalists—are structures with no inherent function and what interests the latter—the functionalists—is connecting those structures to function. Time has not been kind to the formalists in that little by little each structure and transformation has been shown to code for function. So can we say that linguistic function (semantic roles, discourse coherence …) is the specification that confirms the design (complex syntactic structures and grammatical processes)? And were the formalists to be successful in identifying a body of complex functionless linguistic forms, would we still identify them as designed? What would be the specification? Which brings up something I’ve wondered about: Is design written in the language of mathematics/logic and is that itself designed? If the laws of physics are contingent then those who see them as designed are correct. But does it stop somewhere? Or is it design all the way down? Dave Scott says, “But how can you objectively measure specification? I don’t believe you can. Specification is tangible and our brains use it (consciously or unconsciously) constantly in evaluation and decision. Specification is a product of mind, not nature.” Perhaps mathematics/computer savvy linguists should be working on this, for linguists distinguish between old and new information: Every clause advances the discourse with a small amount of new information (else it is redundant) but also ties into the discourse with a small bit of old information (else it is incoherent). But you’re right, this distinction is subjective, yet in the flow of language perhaps it could be measured. Words, you know, have meaning, but devoid of a context there is no information. The minimal unit of information is the clause (the proposition to logicians, the function to mathematicians). Dave Scott again, “Is it still science when it can’t be described objectively? Is specification excluded from science because there’s no way to scientifically or mathematically distinguish War and Peace from a book of gibberish?” Not, of course, if we’re not demarcationists. Also I might add that the tools of hard science—namely human language and its derivative mathematics—are subjective. Pardon my rambling, folks, just wanted to get in my two cents re specification—it’s the all important component of design. Rude
I made the following comment: "what is common about a 500 coin toss with all heads, Mt. Rushmore, a deck of cards sorted into suits, an English paragraph and DNA." The coin toss and the deck of cards are different from the other three examples. Mt. Rushmore, an English paragraph and DNA all become meaningful because they are associated with outside functional things. The coin toss and deck of cards are both based on the nature of their internal sequences, not some outside criteria. It is just a thought, and in the discussion of this in February, bFast made by far the best observation of specificity, namely that the sequence or object was closely correlated with functional processes and thus specified by them. I think the coin toss and card deck examples are red herrings in this and while they indicate intelligence, they are qualitatively different processes. Maybe specified is not the best word. This is just a thought. jerry
Phevans, you stated: “FWIW I think we're on a similar page, inasmuch as we both agree that NS+RM can ‘generate increasingly meaningful (subjective) information’, and that this isn't in any way in violation of the 2LoT (though correct me if I'm wrong).” Now I don't know, but you may know something that I'm not privy to. Yet, I point out that in all the countless millions of observations of mutations in DNA in the laboratory, NOT ONE has ever been shown to unambiguously increase information. For you to believe information can increase in such a manner has no solid empirical evidence on which to base the belief. Like I said, I may be wrong. If I am, I, and many other people, would like to know the empirical evidence that you base this belief on. bornagain77
jack krebs: “Can you offer a definition of specification that would be useable: something mathematically feasible that could be applied to all sorts of different things, including those that we believe are not designed, and that would be replicable in that different people would get the same results when using the definition irrespective of their intuitive preconceptions? Such a definition is needed in order to test hypotheses about design.” If I could it would sure help out some of these forensic sciences. Just imagine how useful it would be in criminal investigations if intent could be determined by a mathematical formula, or for archeologists if a calculator could determine if an object was designed or not. ID and evolution are both soft sciences. You wouldn't be the first to impose a double standard when it comes to ID, but it would still be a disappointment if you did. DaveScot
Dave I understand that over large scales (hundreds of billions of years) we can equate mass and energy, and conclude that mass will tend to convert to energy and disperse (i.e. become less ordered). I can also see what you're saying about alphabet soup, although I'm not 100% convinced that this is strictly an application of the 2LoT (not saying you're wrong, just that I'm unconvinced!) However if we take Hamlet spelled out in alphabet soup as the example, then this is one physical representation of some information. Of course, shaking it up will destroy this representation, and even leaving it for (m|b)illions of years will do the same. However the representation of information in DNA isn't "shaken up" or destroyed, it's replicated again and again, so the arguments made around particular physical representations of information are tangential at best. FWIW I think we're on a similar page, inasmuch as we both agree that NS+RM can "generate increasingly meaningful (subjective) information", and that this isn't in any way in violation of the 2LoT (though correct me if I'm wrong). Phevans
Jerry, 1. I envy you. Have fun in Greece. 2. Pornography and obscenity can be defined easily. Pornography is making images for the purpose of titillation. The difficulty comes in writing law -- was the image made for titillation or to inform the public about the dress of native girls in the South Seas? Obscenity means offensive to the community. Again, not hard to define. What is hard to define is the degree to which a community can react to being offended without violating an individual's right to offend. tribune7
I would like to point out that the great majority of extinctions in the fossil record were natural, that is, they were not brought about by cataclysm; I believe the figure is that 95% of animals go extinct by natural causes. The most likely cause of most of these "natural" extinctions is "Genetic Entropy". I think the average time for genetic meltdown is estimated to be 4 million years, though there are examples of species lasting far longer than that. It would be interesting to find which animals lasted longer. I think a clear prediction of ID could be made that would state something to the effect that the more a species had to adapt, the quicker it would suffer genetic meltdown due to the accumulation of deleterious mutations. Likewise ID will predict that the less selection pressure on an animal in the fossil record, the longer it will last in the fossil record, since it will have less genetic entropy. This line of reasoning should produce results that are far more accurate than Darwin's fabled tree of life diagrams. bornagain77
“Can you offer a definition of specification that would be useable:” How about a pattern in which the components can be organized in many ways according to chance but in which only one causes an event? “something mathematically feasible that could be applied to all sorts of different things,” But we aren't trying to make it mathematically feasible, remember? (although I think I just did). We are merely trying to define the word so it can better be used as part of the broader construct of CSI. “including those that we believe are not designed, and that would be replicable in that different people would get the same results when using the definition irrespective of their intuitive preconceptions?” But this has long occurred. Was the edge on the stone caused by natural forces or was it put there by design? What's relatively new is applying this criterion to biology. tribune7
"So the challenge is to what is common about a 500 coin toss with all heads, Mt. Rushmore, a deck of cards sorted into suits, an English paragraph and DNA." Pardon my ramblings, but it sounds like a question about human psychology. Perhaps here is no common thread there in reality, just subjective classifications. (Based on what, is still a good question.) The thing is, if it turns out to be an illusion of the mind, then what is truly reliable about human intution? The sword seems to chop materialism (against human reason) as well as support it (against a designer.) But if reason is unreliable, why then, it is unreliable for everything except pragmatic survival function. At any rate, this kind of talk is very interesting to me as of late. Keep up the good work. I think someone is onto something. mike1962
Granville: Would the Design Corollary to Sewell's law hold: "Designed systems do macroscopically describable things which are extremely improbable from the microscopic point of view"? Or would this have to be written in terms of Dembski's filter of chance, law and complex specified information? DLH
tribune7, Haven't had much time to follow this discussion since my wife and I are leaving for Greece tomorrow for an intellectual holiday with some friends learning about what started Western Civilization. I think the comment was made on an older thread about not being able to define "it" but we can recognize it when we see it. Then "it" was meant to be pornography or obscenity. You can substitute specificity for either one and be in the same conundrum. So the challenge is to what is common about a 500 coin toss with all heads, Mt. Rushmore, a deck of cards sorted into suits, an English paragraph and DNA. "That is the question" as one former writer once said. jerry
Shannon entropy ("Information") should be considered as the capacity of the channel to hold information. It cannot distinguish between randomness and specified information with the same frequency of letters, e.g., between Pi, and Pi run through a one-way hash or used as the seed to a random sequence generator. When we know the information, it can be recognized or identified. The difficulty is identifying information with no a priori knowledge, especially when the information is complex. However this still has meaning to the originator, even if the observer may not recognize it. This is "subjective" only to the extent that the knowledge is not shared. Once it is, then it can be viewed as "objective". DLH
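(Editorial aside, not part of DLH's comment: the point that a frequency-based measure cannot tell a specified sequence from a scrambled one can be checked directly. The Python sketch below is my own illustration; it compares the digit-frequency entropy of the first 50 decimal digits of Pi with a shuffled copy of the same digits.)

# Same digit frequencies => same Shannon entropy, whether or not the
# sequence "means" anything (illustration of the point above).
import math, random
from collections import Counter

def entropy_bits_per_symbol(s):
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

pi_digits = "14159265358979323846264338327950288419716939937510"  # first 50 decimals of Pi
shuffled = list(pi_digits)
random.shuffle(shuffled)
shuffled = "".join(shuffled)

print(entropy_bits_per_symbol(pi_digits))  # identical values:
print(entropy_bits_per_symbol(shuffled))   # the measure cannot tell them apart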
DaveScot: "Random change (which the four forces can generate) plus natural selection (preference or preservation of one change over another) coupled with a feedback mechanism (heredity) can indeed generate increasingly meaningful (subjective) information. But in order to defeat the increasing improbability of larger jumps in meaning it must add it in tiny (more probable) steps. The argument thus becomes one of discontinuities that must be bridged in small steps to get from inanimate matter to complex living systems. In theory there may exist a series of arbitrarily small steps but on the other hand there may be discrete transitions required that that the laws of probability make virtually impossible." This is how I have seen it also. Unfortunately, despite all the arguments, there seems to be no absolute principle such as a second law of thermodynamics as applied to information or complex specified information, that forbids the accumulation of modest amounts of complex specified information in small steps from random variations filtered by selection processes. Ultimately it comes down to the argument from improbability based on the recognition of "irreducibly complex" biological structures and systems, and on the known extremely large total amount of complex specified information in living organisms. Irreducible complexity is understood not to be absolute impossibility of having been generated by Darwinistic processes, but the extreme improbability of bridging the large gaps necessary for each step to be either adaptively advantageous or neutral. This is an extreme improbability based on the the relatively limited time and number of generations available to achieve these biological systems based on the fossil record. Unfortunately this works down to a debate over quantities, rates and probabilities rather than absolute principles and laws. magnan
Survival of the Likeliest? Chimera
Hi, a number of times I've tried to post a link to an article which argues that thermodynamics may be the driver of evolution rather than a hindrance to it - but it does not seem to want to appear! Chimera
DaveScot: "The key observation is that 2LoT still applies even to subjective information. If we order the letters in the alphabet soup into a subjectively meaningful pattern and then leave to up to nature the subjective information will diffuse into meaningless (objective) information. Theoretically the order still exists (information cannot be lost) but it definitely becomes more diffuse. Stephen Hawking fought for years to prove that information is lost (destroyed) in a black hole but he eventually conceded that it is not so the axiom that information cannot be created or destroyed still stands." It seems to me the "subjective order" you refer to is equivalent to the "complex specified information" defined by Dembski. It is the basic information content irrespective of any complex specified or subjective content to the information, that cannot be desroyed. The subjective complex specified information definitely is destroyed as in your example where the letters in a can of alphabet soup spilled on a table are laboriously arranged by an intelligent agent to spell out Lincoln's Gettysburg Address. Gathering the letter bits up and mixing them together again in the can will not just diffuse the complex specified information - it will destroy it. You can't put Humpty Dumpty together again. What is not destroyed is the Shannon information content of some n number of letters in a 26 letter alphabet. magnan
I like the tack that Professor Sewell is taking here. One thing NDE has always lacked is an underpinning natural law or laws to support it. While the below may be a somewhat sophomoric attempt on my part (I'm an electrical engineer, and don't have a lot of formal training in information theory), it may perhaps spark interesting discussion. Sewell's Law would certainly belong in here, perhaps in substitution for the SLoT reference.
---------------------------------
Laws of Information
For Biological Intelligent Design (BID) to come fully into its own as a scientific theory, it needs to demonstrate predictive power. In the other physical sciences, natural laws describe with accurate and repeatable precision the outcome of processes or operation of systems, when complete knowledge of present conditions is known. Central to BID is the concept of Complex Specified Information, or CSI. Is it possible to postulate a set of laws to describe information and its interaction with matter? I attempt to posit some here.
What is "information"?
- Information is a purposeful arrangement of physical matter that can:
  o Be an input to a process (much like energy), to control said process
  o Provide a record of past processes
  o Describe processes not yet realized
- Information is required to direct a process to an outcome that would otherwise be prohibited by one or more natural laws. It is a counteracting agent to natural law (creates "counter-flow").
- Because information is composed of physical matter, it is subject to decay over time.
What is a "natural law"?
- A natural law describes the behavior of matter and energy under prescribed conditions.
- A natural law allows accurate predictions of future outcomes, when present conditions are known with certainty.
- The properties of matter and energy determine the law; the law does not control matter and energy.
Questions (answers to which are subject to discussion):
1. Is it possible to write a set of Information laws, comparable to the laws of thermodynamics, gravity, motion, etc.? Answer: Yes.
2. Is it possible for information to exist in the absence of an interpretive entity or process? Answer: No. Information only has meaning in the context of an interpretive entity or process.
3. If not, must the interpretive entity or process be "intelligent", i.e. sentient? Answer: No, but it could be argued that the interpretive entity must itself be the product of intelligence.
4. Are there any observed examples of spontaneous, undirected (by intelligent agents) generation of new (complex and specified) information in nature? Answer: I don't know. However, even a beneficial (from a natural selection standpoint) biological mutation represents an overall loss of information in all observed cases.
Proposed "Laws of Information"
1. An increase in the specified complexity of any system requires an increase in information.
2. Information entropy increases over time in a closed system (i.e. in the absence of intelligent input or importation of information from outside the system).
3. Randomness and information content are inverses of one another; as order increases, the amount of information necessary to describe the system tends to decrease. Example: Transition of a collection of water molecules from a liquid to a solid state (i.e. freezing) results in an increase in order and a decrease in the information needed to fully describe its state.
4. The more complex and specified the system, the greater the likelihood that random changes to its CSI will be deleterious to said system. (A toy illustration of this point appears below.)
sabre
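As a toy illustration of proposed law 4 above (my own sketch, not part of sabre's comment; the ten-word list and the single-letter mutation model are arbitrary assumptions standing in for a "specification"):

import random
import string

# A tiny stand-in "specification": the set of functional sequences.
WORDS = {"cat", "cot", "cog", "dog", "dot", "dig", "fig", "fit", "bit", "bat"}

def random_point_mutation(word: str) -> str:
    # Replace one randomly chosen letter with a random lowercase letter.
    i = random.randrange(len(word))
    return word[:i] + random.choice(string.ascii_lowercase) + word[i + 1:]

trials = 10_000
still_functional = sum(
    1 for _ in range(trials)
    if random_point_mutation(random.choice(sorted(WORDS))) in WORDS
)
print(still_functional / trials)   # typically well under 0.1

Most random single-letter changes fall outside the word list, which is the qualitative behavior the proposed law asserts; nothing here depends on the particular words chosen.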
ph_evans: "What does the 2LoT (thanks for the acronym!) apply to other than energy?" Everything! Matter and energy are equivalent according to E=MC^2. Everything tends toward homogeneity. Even baryonic matter is thought to eventually decay into photons, although this has yet to be observed. The half-life of a proton is thought to be on the order of 10^35 years. When a proton decays it is thought to become a positron and a pion, which then very quickly decay into a flash of photons which then diffuse in all directions at the speed of light. This can be equated to order. Ordered systems tend toward disorder. This can also be thought of as sorted. Sorted collections tend to become unsorted. Diffusion is the process by which order becomes disorder. It gets a little dicier when 2LoT is applied to information. In Shannon information the maximum amount of information is the least amount of order. Take a serial bit stream - a sequence of binary states 0 or 1 (or true/false, on/off, black/white, whatever). If there's any ordering, the Shannon information content decreases. It takes far less information to describe a bit stream composed entirely of ones or zeroes than it does a stream containing mixtures of both. If the stream is totally random, that is where it takes the most information to describe it. But that's objective information. In our context we're interested in subjective information. Imagine a bowl of alphabet soup. Stir it all you want, let it settle, and you might get a few bits of subjective information like CAT or DOG, but you're never, or very close to never, going to see it settle into a page of text from War and Peace. But the difference is entirely subjective. Without the specification of language (an independently given specification), which is in the eyes of the beholder, there's no difference. The key observation is that 2LoT still applies even to subjective information. If we order the letters in the alphabet soup into a subjectively meaningful pattern and then leave it up to nature, the subjective information will diffuse into meaningless (objective) information. Theoretically the order still exists (information cannot be lost) but it definitely becomes more diffuse. Stephen Hawking fought for years to prove that information is lost (destroyed) in a black hole but he eventually conceded that it is not, so the axiom that information cannot be created or destroyed still stands. But that's tangential. We're interested in the diffusion of information. Scatter a DNA molecule to the four winds and the information might still be there, but the living thing that needed the DNA will be deader than a doornail. So let's apply this to living systems in more detail. Take a protein such as hemoglobin. Rearrange the monomer sequence so it no longer transports oxygen. The objective information content doesn't change but subjectively it's the difference between life and death. The universe doesn't prefer one order or the other, but the organism that needs oxygen transport certainly has a preference. The objective forces of nature don't tend to build subjectively meaningful patterns. The four forces of physics can accidentally generate subjective meaning, but as the subjective complexity increases the probability of accidental arrangement decreases. Random change (which the four forces can generate) plus natural selection (preference or preservation of one change over another) coupled with a feedback mechanism (heredity) can indeed generate increasingly meaningful (subjective) information. 
But in order to defeat the increasing improbability of larger jumps in meaning it must add it in tiny (more probable) steps. The argument thus becomes one of discontinuities that must be bridged in small steps to get from inanimate matter to complex living systems. In theory there may exist a series of arbitrarily small steps, but on the other hand there may be discrete transitions required that the laws of probability make virtually impossible. Take the transition from a prokaryote to a eukaryote. In nature we observe only two discrete states - nucleate or anucleate. A single large step by chance appears to be prohibitively unlikely. An intelligent agency that can impose any physically possible order can certainly accomplish the transition. What series of small steps, each with a reasonable probability and each also having natural selection value, can bridge this discontinuity? The burden of proof that chance & necessity can accomplish things that otherwise appear to require intelligent agency to overcome improbabilities lies, I believe, with whoever is making the claim that chance & necessity can bring about the observed outcome. It's already well established that intelligent agency can impose any physically possible order regardless of the improbability by chance alone. Making quantum leaps in order that would otherwise be prohibitively unlikely is the hallmark of intelligent agency. DaveScot
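To put rough numbers on the small-steps-versus-large-jumps point (a back-of-envelope sketch; the per-change probability and the number of required changes are illustrative assumptions of mine, not figures from the comment above):

# Assumed chance that one specific change occurs in a given trial.
per_change_probability = 1e-8
k = 5   # number of specific changes required for the hypothetical jump

simultaneous = per_change_probability ** k   # all k changes in a single trial
one_step = per_change_probability            # a single small step per trial

print(f"one specific change per trial: {one_step:.1e}")      # 1.0e-08
print(f"{k} specific changes at once:  {simultaneous:.1e}")  # 1.0e-40

Each small step is merely rare; the coordinated jump is effectively impossible, which is why the argument turns on whether selectable intermediate steps exist.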
This is exactly the line of reasoning that will crush materialistic evolution. Like many of you, I've tried to argue the second law and was given the tired rebuttal of open and closed systems. Yet we are in fact dealing with a second, more nuanced, level of entropy that is completely separate from the material realm - though this entropy is not yet treated as completely separate. "Genetic Entropy," to be precise. This level of entropy, found at the "spiritual" level of information, is proving to be more robust than the entropy of the "material" realm. In fact, Theism would predict that there will NEVER be a violation of "Genetic Entropy" without input from an intelligent source. This is of course in direct contradiction with materialism. This fact, which is becoming increasingly clear to science as time passes, should be written in the proper mathematical form to overturn the proposed fourth-law equations that have been presented as the Onsager reciprocal relations. You may find the equations here: http://en.wikipedia.org/wiki/Laws_of_thermodynamics The new formula could then be used to support a NEW and proper fourth law of thermodynamics. Of course, this mathematical formulation has most likely already been accomplished and has not been accepted as the fourth law because of the materialistic paradigm that is hindering scientific progress right now. When a proper law of information is accepted across the board, it truly will be the crowning moment for the ID movement in the progress of science. bornagain77
Hey guys, what about Dembski's use of minimum description length and algorithmic compressibility as a measure of "specification"? I think you may be giving up prematurely on an objective measure of specification. It is my hunch that not all avenues have been developed. Atom
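Along the lines Atom suggests, here is a rough sketch (mine, not Dembski's method; off-the-shelf compression is only a crude stand-in for minimum description length, since true Kolmogorov complexity is uncomputable, and the sample strings are arbitrary):

import os
import zlib

ordered = b"AB" * 125   # 250 bytes of simple repetition
english = (b"Natural forces do not do macroscopically describable things "
           b"which are extremely improbable from the microscopic point of view, "
           b"whether the system in question is open or closed to outside energy.")
random_bytes = os.urandom(250)   # 250 bytes of noise

for label, data in [("ordered", ordered), ("english", english), ("random", random_bytes)]:
    print(f"{label:8s} {len(data):4d} bytes -> {len(zlib.compress(data, 9)):4d} compressed")

Typically the repetitive string compresses to a handful of bytes, the English sentence compresses modestly, and the random bytes do not compress at all (they usually expand slightly). Compressibility alone separates order from randomness, though, as the discussion above notes, it does not by itself capture functional specification.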
Can you offer a definition of specification that would be useable: something mathematically feasible that could be applied to all sorts of different things, including those that we believe are not designed, and that would be replicable in that different people would get the same results when using the definition irrespective of their intuitive preconceptions? Such a definition is needed in order to test hypotheses about design. Jack Krebs
"But how can you objectively measure specification? I don't believe you can. Specification is tangible and our brains use it (consciously or unconsciously) constantly in evaluation and decision. Specification is a product of mind, not nature." A good and interesting observation. I hope Jerry's reading. One thing: it should be possible to measure the effects of specification by defining it -- i.e. recognizing its reality -- then comparing events that match that definition with events that don't. I think the hangup -- as evident in recent discussions -- is that there has been a feeling that the reality of specification has to be proved. Rather than trying to prove it mathematically, we should accept its existence as axiomatic. Thank you, Jerry, for your stubbornness. Another point: someone will say this is a circular argument -- treating specification as axiomatic means treating design of life as axiomatic. Not so. Objects of known design exist. They meet certain criteria. Applying these criteria to life indicates life to be designed. It then becomes the responsibility of dissenters to show life not to be designed rather than to whine about how it is somehow unfair to apply to life the criteria of design. tribune7
Here is an interesting article from PLOS Biology. It argues that the laws of thermodynamics may advance evolution rather than being at odds with it. According to this reasoning even hurricanes and galaxies would be forms of life... Survival of the Likeliest? Chimera
pk4_paul: Sure, here's the experiment: do a computer simulation which starts with the initial (before life appeared) positions and velocities of every fundamental particle in our solar system (I think we can ignore the effects of other stars) and models the effects of the four known forces of physics (gravity, the electromagnetic force, and the strong and weak nuclear forces) on these particles, run the simulation out to the current date, and see if humans and computers and spaceships and the Internet form. Of course, the effects of the basic forces on the basic particles are not strictly deterministic, according to quantum mechanics; we can only state the "probabilities". Thus we would have to assume this "supernatural" (in the most literal sense of the word) component to be truly random, unintelligent, and simulate it using some sort of random number generator. Unfortunately, such simulations tend to require a lot of computer time and memory, I don't think I can do the experiment on my laptop. Granville Sewell
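The full simulation described above is obviously out of reach, but a drastically scaled-down caricature (my own toy, with arbitrary numbers of cells, particles and trials) shows the flavor of the probability problem: a macroscopically simple target state that is microscopically improbable essentially never turns up by random sampling.

import random

CELLS = 100          # positions available to each "particle"
PARTICLES = 20       # number of particles
TRIALS = 1_000_000   # random configurations sampled

# Specified macrostate: every particle in the single pre-specified cell 0.
hits = 0
for _ in range(TRIALS):
    if all(random.randrange(CELLS) == 0 for _ in range(PARTICLES)):
        hits += 1

print("observed hits:", hits)                               # essentially always 0
print("per-trial probability:", (1 / CELLS) ** PARTICLES)   # 1e-40

This says nothing about what the four forces would actually do over four billion years; it only illustrates why Dr Sewell expects the answer to hinge on probabilities rather than on energy flow.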
Granville, a fallback complaint against ID when all else fails is to point out a lack of experimental studies that IDists can cite to support their claims. There are two difficulties with this. One is that many claims of standard theories would require knocking down something that never took place anyway; not an easy task. Abiogenesis is the outstanding example. Second, there is much empirical data supporting ID claims, but the opposition complains that IDists do not perform the work on their own. IOW, IDists don't do research to further their theories. My question is: are you able to suggest an approach that could be used to experimentally advance your 2LoT case? pk4_paul
In my 2001 Mathematical Intelligencer defense of my 2000 Math. Intelligencer article, I began "Mathematicians are trained to value simplicity. When we have a simple, clear proof of a theorem, and a long, complicated, counter-argument, full of hotly debated and unverifiable points, we accept the simple proof, even before we find the errors in the complicated argument." This is the advantage of the second law, or "Sewell's Law," or specified complexity argument: evolutionary biologists have a long complicated argument, with virtually no experimental confirmation, which claims to prove that natural forces created all the order we see on Earth today, but there is an extremely simple, direct proof that it couldn't have. As a mathematician, I prefer the simple, clear proof, and thus frankly don't believe you need to know much biology to reject the long complicated argument. I think it is no coincidence that Dembski and I are both mathematicians, and that many other mathematicians share our views (though most are reluctant to express them publicly). But I haven't had much luck convincing biologists that the argument is this simple! I once had the honor of having Michael Behe sit in on a talk where I presented my second law argument. At the end, he said, someone like Dawkins will argue that things change fundamentally once you have an organism which can reproduce, and I said, "Do you agree with that?" He laughed and said, no. I said, neither do I. Most people need to find the errors in the long, complicated argument before rejecting it, and that is exactly what Behe and others are doing. For me, that isn't necessary, when we have such a simple, clear, counter-argument. Granville Sewell
Sigh: I avoided links but the old spam filter got me anyway. I simply note that
1] Dr Sewell, thanks for trying again; I would adjust slightly: "Natural forces do not [spontaneously] do macroscopically [simply] describable things which are extremely improbable from the microscopic point of view."
2] PE needs to recognise that there is such a thing as configurational entropy, related to degree of freedom of distribution of mass, not just energy. (Of this, diffusion is an iconic example -- why is it that a drop of dye in a vat of water will spread out but not spontaneously re-form itself?)
3] Thence we see that there are many things that are not logically impossible, and are not subject to force/potential field barriers, that, relative to alternatives at the micro level, are so utterly improbable that they do not happen based on undirected chance plus natural forces.
4] As TBO in their TMLO of 1984 showed, following Brillouin et al, this can be linked to information -- information stored in the composition of strings of monomers in bio-molecules is spatially confined and vastly improbable spontaneously relative to non-functional states. Biofunctionality is of course observable and simply describable at macro-level.
For more cf my always linked through my handle above. Cheerio GEM of TKI kairosfocus
Hi Prof Sewell: Thanks for trying again! Now, without wishing to entertain the sort of long exchange I had last time around, with Pixie et al [cf onward link through my always linked, recently updated appendix A], I would comment:
1] I would adjust slightly, but significantly: "Natural forces do not [spontaneously] do macroscopically [simply] describable things which are extremely improbable from the microscopic point of view."
2] The word "spontaneously" is there to highlight that intelligent agents do intervene and use the available forces, phenomena and materials of nature to create things that are macroscopically simply describable.
3] There is yet another underlying trap, I am afraid: many do not understand just how a probability barrier can exist, as opposed to one directly based on a potential barrier. But, just because there is no physical force or logical contradiction involved does not mean that a particular state is likely to happen spontaneously -- and in the cases we are looking at prospectively, "unlikelihood" is of the order of 1 chance in 10^150 or worse, i.e. practically impossible within an observed cosmos typically estimated at 10^80 atoms, and 13.7 BY to date. This is the same reasoning that underlies the force of statistical inference under Fisher's concept that if something is really unlikely to happen by chance among a set of possible outcomes, then the safe bet is that if it happened, it happened by intent, i.e. the basis for rejecting the null hyp. Hence also the error in the common "lottery" fallacy.
--> FYI, would-be objectors, Dr Sewell is speaking about the concept that a given macroscopically observable and simply specifiable state as a rule is associated with a great many microscopic configurations of matter and energy [Yes, cf diffusion and free expansion for micro distributions of mass not just energy -- there is a configurational form of entropy in physics].
--> On the basic statistical thermodynamic principle that all accessible microstates are equiprobable, stat thermo-D has been built and has had great success. The direct implication is that, though all microstates are equiprobable, some states are such that, absent imposed constraints on the system, they are utterly unlikely to emerge spontaneously on the gamut of the observed cosmos across its whole history.
--> WHY is that so? ANS: because there are so many other states available that such special states are overwhelmed. E.g. put a drop of dye in a vat of water, and allow it to diffuse. You will never see the drop spontaneously re-form, as the scattered macrostate has in it so many more microstates than the clumped one. [Therein lieth the concept of configurational entropy . . . highly relevant to the link between entropy and information. Cf my linked for an introductory discussion.]
4] I have put in the -- strictly unnecessary but clarifying -- word "simply" to underscore the point that macrostates compress [usually on a lossy basis] the descriptive information on the system in view. Thence, if we lack detailed information at the micro-level, we are forced to assume random behaviour relative to our uncertainties, and so can only harvest such work or functionality as is consistent with that want of information. [Cf Harry Robertson's Statistical Thermophysics, PHI, for the elaborating details of this observation. FYI, objectors, there is an informational school in statistical thermodynamics, tracing to the work of Gibbs and Brillouin etc.] 
5] Finally, PE, there is such a thing as configurational entropy linked to the degree of freedom of distribution/location of micro-elements in space [i.e. volume] [and not just freedom of distribution of energy]. So, as for instance Thaxton et al exploit brilliantly following Brillouin in their classic 1984 work, TMLO, ch 8 [cf my always linked appendix A for the link -- dodging that spam filter], we see that certain spatially constrained micro-level configurations of matter can store information and/or perform privileged functions that can then be observed and simply described at macro-level. It turns out that such configurations are utterly improbable on the scope of the integrated functional elements of, say, a prototypical cell requiring a DNA strand 300-500 k base pairs long -- not to mention the underlying codes, algorithms and executing enzymes, ribosomes etc. [The lower end of the range just cited is the level where existing life forms functionally disintegrate once knockouts reach a certain threshold. 300k four-state elements define a configuration space of 4^300,000 ≈ 9.94*10^180,617 possibilities. In short the functional configurations are hopelessly isolated in a space dominated by non-functional ones, relative to spontaneous mechanisms relying only on chance plus undirected natural forces. THAT is why OOL research is more or less at an impasse.]
Cheerio GEM of TKI
PS: For my always linked, click on my handle. kairosfocus
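For the curious, the figure quoted in the comment above can be checked in a couple of lines (a quick sketch; the 300,000 four-state elements are simply the numbers taken from that comment):

import math

exponent = 300_000 * math.log10(4)              # ~180617.997
mantissa = 10 ** (exponent - int(exponent))
print(f"4^300000 = {mantissa:.2f} x 10^{int(exponent)}")   # 9.94 x 10^180617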
Dave
Subjective information, or specified complexity, appears to be subject to 2LoT but mind (intelligence) can violate 2LoT by routinely choosing to do what is almost impossible for nature such as making a gold watch from a gold nugget
Why are you saying that specified complexity appears to be subject to 2LoT? Perhaps you refer only to the product of intelligence and not to whatever device applies the intelligent decisions. kairos
If I can echo Phevans, Having read 'A Second Look at the Second Law' and then the 'kneejerk reaction' on the PT, the only fair criticism I noted was that the relation between heat and information thermodynamics was not made clear. Given that the second law was formulated for heat transfer (hence THERMOdynamics), is there consensus that the second law's application to information is warranted? Is there a good summary on the net anyone can link to? Even better, Granville, how about an addendum to your article? antg
Dave, what does the 2LoT (thanks for the acronym!) apply to other than energy? It refers to entropy, but informational entropy is a quite different concept from thermodynamic entropy. I've not seen an argument (convincing or otherwise) for applying a law from one domain to another. I'd be interested to get some links. Phevans
continuing I think a lot of people just skip right over all this and concede that only intelligence can defeat virtually impossibly long odds to produce specified outcomes. The argument then becomes a question of whether natural selection mimics intelligence in this capacity. Most of us concede that natural selection can work in a preferential manner selecting outcomes that have long odds against them but not impossibly long odds. A random mutation in a gene that works to defeat an antibiotic, for instance, but not something like turning a scaled limb into a feathered wing in one fell swoop. I think most of us accept that it is theoretically possible for that virtually impossible instantaneous change to occur in a series of not so unlikely small changes. Darwin skipped right up to this point. But he admitted that there may be discontinuities that cannot be bridged in small steps. And 150 years later we're still there. The living world is flush with discontinuities without plausible bridges i.e. the "gaps" in evolution. The chance worshippers declare it's only our ignorance or lack of imagination at fault for there being gaps. The ID crowd declares the discontinuities are unbridgeable by natural selection in small steps so it must be a higher order intelligence that can plan and execute the leap across the discontinuities. The thing about natural selection is it can't plan ahead for anything. It is reactive where higher order intelligence is proactive. Thus for me it comes down to demonstration. If it can be demonstrated that a series of small steps subject to reactionary natural selection can bridge any given gap then I'll accept it. Until then intelligent agency must remain at least a live possibility and rejecting it out of hand when we already know intelligent agency exists in the universe today without knowing it didn't exist in the past is not science - it's dogma. I prefer to focus on just one gap - the giant leap from inanimate matter to a replicator with a symbolic inheritance mechanism such that natural selection can begin to work in the first place. DaveScot
I had no trouble understanding A Second Look. In fact when I read it I thought "Wow. Someone else gets it!". But I couldn't get anyone else to get it. The first level of misunderstanding comes from those who don't understand that 2LoT applies to more than heat. The second level of misunderstanding comes from those who understand that 2LoT applies to things other than heat, like information, but they don't understand that information and heat aren't equivalent - thereby you get the argument that order (information) can increase in an open system i.e. the sun inputting energy to the earth. They mistakenly make energy from the sun equivalent to information. Things get a little more complicated from there. You run into Maxwell's Demon which equates information and energy in a way, even though it's still controversial. But that's a tangent that leads to more misunderstanding. Information, like energy, can be neither created nor destroyed, it only changes form. There isn't any more objective information in an automobile than in the raw materials that make it up. It takes just as much information to describe the state of the atoms in a pile of rust as it would to describe the atoms in the automobile the pile of rust once was. So what changes? It then dawned on me that we need to describe a new class of information. Subjective information. The universe doesn't distinguish a book as having more information in it whether it's War and Peace or random gibberish. The objective information content is equivalent. The difference is subjective, not objective. War and Peace has specified complexity. But how can you objectively measure specification? I don't believe you can. Specification is tangible and our brains use it (consciously or unconsciously) constantly in evaluation and decision. Specification is a product of mind, not nature. Subjective information, or specified complexity, appears to be subject to 2LoT but mind (intelligence) can violate 2LoT by routinely choosing to do what is almost impossible for nature such as making a gold watch from a gold nugget. And it all boils down to probability whether it's heat or information. Intelligent agency can make the improbable probable. Routinely. It's the hallmark of intelligence. But it's still subjective and hence, I think, impervious to mathematical discrimination. That then brings up the $64,000 question. Is it still science when it can't be described objectively? Is specification excluded from science because there's no way to scientifically or mathematically distinguish War and Peace from a book of gibberish? DaveScot
Granville, Your arguments and logic are so transparently obvious that they should not even have to be presented. They should be self-evident, but aren't (only to those with an antiquated philosophical pre-commitment to the spontaneous generation of everything). GilDodgen
How many things don't get a kneejerk reaction in these events, Dr. Sewell? Keep up the good work! jpark320
Dave Scot's recent (May 20) post on Specified Complexity got me started thinking about this again. Note that the objections to the specified complexity argument are very similar to those raised against "Sewell's Law", see the footnote of A Second Look at the Second Law . Granville Sewell
