
# Introducing “Sewell’s Law”


In an April 2, 2007 post, I noted the similarity between my second law argument ("the underlying principle behind the second law is that natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view") and Bill Dembski's argument (in "The Design Inference") that only intelligence can account for things that are "specified" (= macroscopically describable) and "complex" (= extremely improbable). I argued that the advantage of my formulation is that it is based on a widely recognized law of science: physics textbooks practically make the design argument for you; all you have to do is point out that the laws of probability do (contrary to common belief!) still apply in open systems; you just have to take the boundary conditions into account in the case of an open system (see A Second Look at the Second Law).

However, after making this argument for several years, with very limited success, I have come to realize the biggest disadvantage of my formulation: it is based on a widely recognized law of science, one that is very widely misunderstood. Every time I write about the second law, the comments go off on one of several tangents that sometimes have something vaguely to do with the second law, but have in common only that they divert attention away from the question of probability.

So I have decided to switch tactics and introduce Sewell's law: "Natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view." I still insist that this is indeed the underlying principle behind all applications of the second law; in fact, it is the only thing that all applications have in common. But since even the mention of the "second law" draws such "kneejerk reactions" (as Philip Johnson put it), let's forget about the second law of thermodynamics and focus on the underlying principle, Sewell's law. My main point is still the same as before: natural forces cannot rearrange atoms into computers and spaceships and the Internet here, whether the Earth is an open system or not. But now you cannot avoid the question of probability by saying the second law doesn't really apply to computers and spaceships (although most physics textbooks do apply it to the breaking of glasses, the burning of libraries, etc.); whether the second law applies or not depends on which formulation you buy, but such a rearrangement certainly seems to violate Sewell's law. Unless, of course, you believe that it is not really extremely improbable that the four forces of physics would rearrange the basic particles of physics into computers and TV sets and libraries full of novels and science texts; in that case I can't reach you.
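To put numbers on "extremely improbable from the microscopic point of view," here is a minimal Python sketch, assuming 500 fair coin tosses (an illustration added for this page, not part of the original argument):

```python
import math

n = 500  # fair coin tosses; 2**n equally likely microstates

# Macrostate "all heads" is simply describable but matches exactly 1 microstate.
p_all_heads = 1 / 2**n

# Macrostate "between 40% and 60% heads" matches almost all microstates.
near_half = sum(math.comb(n, k) for k in range(200, 301))
p_near_half = near_half / 2**n

print(f"P(all heads)    = {p_all_heads:.3e}")   # ~3.05e-151
print(f"P(40-60% heads) = {p_near_half:.6f}")   # ~1.0
```

Both outcomes are single draws from the same 2^500-member space; what distinguishes "all heads" is that the simply describable macrostate is compatible with exactly one microstate.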

Re PE, no 55:
The point is that there is plenty of interesting debate to be had without blatantly inaccurate assertions being posted and debated. If you concede the point under debate that random mutations can add information, then there's no need for this to continue . . .
1] Now, of course, the fact that someone made an error was long since conceded and corrected. So at best this is like one who is sent to take down a mountain turning aside to a molehill, then scraping it down with a shovel and saying that he has dealt with the mountain. In other words, we here can see the fallacy in the easy slide we are being "invited" to make: from "random mutations can add information [in trivial cases of a few bits of change, most often by disabling existing functions . . . e.g. in antibiotic resistance or the like]" to "RM accounts for generating the scope of information relative to biofunction across the biological world."
--> Here I add that natural selection is simply a filter: unless we can FIRST generate the biofunctional DNA and express it in life systems, there can be no competition on differential reproduction. And such information is contingent, so the only credible dominant mechanisms are chance and purposeful agency.
--> In every directly observed case of generation of complex specified information beyond the Dembski bound, this has only happened by agent action.
--> The same holds for actual observed cases of irreducibly complex systems, i.e. multicomponent systems that break down in function if one or more core-functional parts fail to work. (I have not found the alleged counterexamples to this particularly impressive once we go beyond the gleeful headlines and summary statements, BTW. If just one case in point holds, from say Behe's 1996 presentation, the evolutionary materialist account for biodiversity collapses. Cf Loennig's excellent work on this, accessible through my always linked, as usual.)

2] In short, we are here gliding by the problem highlighted by Sewell, in a rush off to a strawman. Namely, Sewell pointed out that: "Natural forces do not do macroscopically [i.e. 'simply'] describable things which are extremely improbable from the microscopic point of view." (Nor am I overly impressed by the idea that the post being discussed is simply a springboard for us to debate and draw attention to our own ideas and agendas. That notion may hold for those who want to capture a discussion and divert it from a direction dangerous to their agenda, but such diversions have names in logic: red herrings, strawmen, and the like. If you have a substantial point or correction, that is different from distracting attention.)

3] Now, too, in that phrase "extremely improbable from the microscopic point of view" lurk all the substantial issues at stake. So to glide by it by substituting "add information" -- one bit of information is not at all in the same ballpark as 500 bits, or the millions to billions of bits that are expressed in DNA molecules in life forms [not to mention the underlying coding system and algorithms that lurk in the DNA's object code] -- is to tilt at a handy strawman set up by someone's sloppy phrasing. (Onlookers: observe how PE latched on to the single phrase whilst insistently ignoring the substantial point. ID onlookers: here is a lesson on how important it is to be careful in how we speak, as we are dealing with people who will use any such error to get away from addressing the real problem on the merits. Of course, we will then be accused of being complex or long-winded, but there is never an end to possible rhetorical objections to ANY statement.)
4] The substitution of a strawman again crops up in the cases being cited from the literature: mutations that are based on one or a few base pairs, or on gene duplication accidents, etc., or even on the re-emergence of deleted code through redundancy mechanisms. All of these have NOTHING to do with the origination of hundreds of kilobits to megabits to gigabits of novel biofunctional information at the novel body-plan level (or the like), in the relevant, beyond merely astronomical, configuration spaces. We can therefore draw our own conclusions: Sewell's major point stands, if the best that can be done by those who oppose it is to major on minors, or to otherwise divert attention. GEM of TKI

kairosfocus
May 31, 2007, 03:35 AM PDT
phevans: "What evidence could possibly show that something is not a process of 'front loading' but is in fact a random mutation? Answer: nothing."

If you are willing to say that rm+ns is pseudo-science which can never be shown to be true, even in principle, then I guess you're right that nothing can falsify design. Personally I'm willing to give the neo-Darwinian theory of macroevolution more time to prove itself, keeping in mind that if it's false it will never be proven. ID has nothing to prove. We already know that intelligent agency can alter the course of evolution through purposeful changes to genomic information, i.e. genetic engineering.

DaveScot
May 29, 2007, 04:25 PM PDT
Bornagain77: ". . . nor did they consider the equally valid presumption of 'Front Loading' that presumes a complex feedback control loop in the Genome that 'mathematically or logically originated' the duplicated gene."

". . . #1 the (duplicated) gene is a spare tire gene that already had the ability in it. #2 the probability of totally random processes finding the correct mutations to modify the gene is fantastic; thus the experiment actually demonstrates 'front loading' that is preprogrammed in the cell"

You have made some good arguments for ID in this debate. I might mainly differ only in the preferred subhypothesis of how ID in evolution actually takes place. From the above quotes it appears that you favor the "front loading" hypothesis of ID among all the other versions. The first quote indicates that you consider that the duplication of the gene was itself a response by the organism to environmental stress, brought about by a complex feedback mechanism in the organism. The "front loading" was apparently in the existence, from very early times, of the feedback mechanism itself. The second quote implies that the subsequent exceedingly improbable (from a random standpoint) adaptive mutations to the duplicate gene were "front-loaded" in some way. One form of this would be a very complex built-in system which senses environmental stress, determines what genetic changes are necessary to respond to it, and modifies the appropriate gene(s) accordingly. Alternately, in the front-loading concept, the duplicate genes and adaptive mutations could have been stored in the genome in the beginning and somehow intelligently accessed as necessary. All the different hypotheses of ID of course have various pros and cons. It seems to me that this is less plausible than simply positing that (most of) the genomic changes, including gene duplications and simpler mutations, have somehow been induced directly at many times in evolution by some unknown intelligent agent.

magnan
May 29, 2007, 02:24 PM PDT
The insults aren't necessary, thanks all the same. I'll be leaving the thread after this post.

"the probability of totally random processes finding the correct mutations to modify the gene is fantastic; thus the experiment actually demonstrates 'front loading' that is preprogrammed in the cell"

Here's a thought experiment for you. What evidence could possibly show that something is not a process of "front loading" but is in fact a random mutation? Answer: nothing. Therefore you get to keep your bias and I get to keep mine.

"your definition of information is divorced from the overall function of the organism"

Your definition of information inexplicably has a nebulous concept of "function" tied up in it, which makes it impossible to measure. Once you find a way to measure it, please get in touch.

Phevans
May 29, 2007, 12:20 PM PDT
Phevans, I would also like to add that your definition of information is divorced from the overall function of the organism. Thus your definition is wanting in empirical validation. For the neo-Darwinian scenario to be proven true, the union of information with increased functionality of an organism must remain unbroken. You want to claim an increase of information in organisms that have lost functionality when compared to the original organism in "normal" environments. This is clearly wishful speculation on your part if you claim this is proof for neo-Darwinism. Your gun is quite empty, Phevans!

bornagain77
May 29, 2007, 09:52 AM PDT
Phevans, your gun is quite empty of bullets! I believe the following is what you're so excited about:

"the fact that the 'new' galactosidase enzyme didn't evolve from scratch, but was produced by a small number of mutations in an existing gene, albeit in an operon far distant from the deleted galactosidase gene. In a similar way, the gene for the repressor of this newly-evolved galactosidase, a protein that controls its expression, was rendered lactose-sensitive by a simple mutation in its sequence. In other words, the 'new' 2-part system was produced by a couple of rather minor mutations in two pre-existing genes."

Now let me get this straight: you are claiming this experiment is a totally random process by which information was created, and that it thus proves neo-Darwinism true. Well:

#1 the gene is a spare tire gene that already had the ability in it.

#2 the probability of totally random processes finding the correct mutations to modify the gene is fantastic; thus the experiment actually demonstrates "front loading" that is preprogrammed in the cell. I.e., was it just luck, as required for proof of neo-Darwinism, or is there a deeper control loop in the genome finding the correct response? The math clearly points to the latter.

#3 the "evolved" irreducibly complex system is less robust than the system that was removed. So you are still at square one in generating novel information that increases the functionality of an organism.

Phevans, you have no bullets, and I can assure you that each system you show will be found to be of this kind. I've been through these arguments too many times before.

bornagain77
May 29, 2007, 09:22 AM PDT
Oooh, here's a beaut I hadn't seen before: http://www.evcforum.net/cgi-bin/dm.cgi?action=msg&f=10&t=186&m=1 Not only the introduction of novel genetic information, but one which forms an irreducibly complex system! Very nice.

Bornagain, you seem to be missing the point with this quote: "Gene duplication, polyploidy, insertions, etc. do not help -- they represent an increase in amount of DNA, but not an increase in the amount of functional genetic information -- these create nothing new. Macroevolution needs new genes." Gene duplication *creates* new genes. It's mutations on these genes that provide the variance that natural selection works on. Also, you need to be more careful about your terminology. An increase in information in the genome is a very different prospect from novel behaviour expressed at the "macro" level.

On a side note, here's a great review from Nature on various ways that new genetic data can arise: http://www3.uta.edu/faculty/betran/naturereviews.pdf

Phevans
May 29, 2007, 08:48 AM PDT
Phevans, I might add that, in regard to information, gene duplication, polyploidy, insertions, etc. do not help -- they represent an increase in the amount of DNA, but not an increase in the amount of functional genetic information -- these create nothing new. Macroevolution needs new genes (for making feathers on reptiles, for example). Thus you are at square one again in generating novel genes by a totally random process. As I stated before, all tests in inducing random mutations have been very disappointing to evolutionists. This is a very simple thing, Phevans. The genome is absolutely required to be proven to have a certain amount of flexibility to "totally" random mutations in order for the neo-Darwinist scenario even to be considered true in the first place. Extensive tests have failed to demonstrate any genome flexibility to totally unambiguous random mutations. This is a crushing fact that rules neo-Darwinism out at the start of the debate! It is a truth that plainly and simply cannot be overcome by any amount of wishful speculation!

bornagain77
May 29, 2007, 07:14 AM PDT
Phevans, IF you are defending the materialistic position, you are required to prove the duplication was a totally random event. Yet even the authors of the paper you cite state: "We propose a genetic mechanism to account for these changes and speculate as to their adaptive significance in the context of gene duplication as a common response of microorganisms to nutrient limitation." So even the authors of the paper admit the need for a "preexisting mechanism" to explain the phenomenon. Phevans, this example you show me is screaming "front loading," not the purely random mutation that is absolutely required to prove neo-Darwinism true!! Thus your need to prove that a truly random and beneficial event has happened is still unsatisfied. This is a good example of a preexisting complex feedback control loop, by the way. In regard to your assertion that it is truly generating novel information, I point out two facts: #1 the gene that was duplicated was a preexisting gene, duplicated by a preexisting mechanism I might add, in response to stress placed on its environment. #2 the overall functionality of the yeast is decreased in its normal environment; thus when the stress is removed from the mutant yeast, the original yeast will be favored by selection over the mutant yeast. Clearly you have not demonstrated a gain in information, since you have not "built a better overall yeast" that will be favored by selection.

bornagain77
May 29, 2007, 06:42 AM PDT
kairosfocus: "PE, you cannot turn a triviality that exploits a sloppy remark by a commenter into a major conclusion on the issue in the main."

I'm not. I'm attempting to curb the use of sloppy and inaccurate generalisations which are parroted as gospel when they are patently and provably false. These blog posts are starting points for discussion, and BA and I were discussing a side issue which naturally came up in the conversation. You have no right to dictate what is and is not valid debate. You'll notice that not once have I claimed that new genome information proves Darwinism or disproves ID. This is not my point. The point is that there is plenty of interesting debate to be had without blatantly inaccurate assertions being posted and debated. If you concede the point under debate -- that random mutations can add information -- then there's no need for this to continue.

Phevans
May 29, 2007, 06:26 AM PDT
Okay, a couple of quick points:

1] BA 77: Good thoughts, and yes, I had forgotten that side -- if an observation is common across several hypotheses, it cannot distinguish between them. [This is of course epistemology and logic as applied to science -- both philosophical sub-disciplines.]

2] PE: "I'm responding to that claim, and only that claim." PE, you cannot turn a triviality that exploits a sloppy remark by a commenter into a major conclusion on the issue in the main. I have pointed out the balance of the matter and have focussed on the main issue. That you choose to ignore that and refocus insistently on a side issue that is at best trivial is telling, and not in your case's favour.

3] To BA 77: "I'm really not interested in playing the faux-philosophy game. You made a claim that mutations never add information. If you make empirically false claims, then I will show that you are mistaken. Please stick to this issue." Here I again must beg to intervene. The substantial point in the thread is as I summarised yesterday, and the "addition of information" claim is a red herring and strawman relative to the issue in the main. You have yet to address the key question of 250+ base pairs of novel biofunctional information originating by chance, much less the aggregate of 500+ k of information, with associated molecular machines and the underlying algorithms and computer language. To major on such minors while the key issues go a-begging is not a sign of a healthy position.

4] "faux-philosophy game": Finally, the philosophically loaded roots of the matter cannot be so easily dismissed as you attempt by using this term. For instance, one of the reasons it is held that inference to design is inherently unscientific is the assertion of so-called methodological naturalism, which in effect asserts an attempted redefinition of "science" ruling that scientific thought may only think in terms of entities permitted by the evolutionary materialist cascade for the origins of the world as we experience it: cosmological, chemical, biological and socio-cultural. That begs major philosophy-of-science questions, and is historically inaccurate. [Cf my always linked and onward discussions such as in Peterson.] Further, so soon as we ask why it is we think in terms of chance, law-like and agency-based causal forces and factors, we are in the province of philosophy. This is what underlies e.g. hypothesis testing and Fisher's approach; further to this, that the resulting empirical reasoning is defeatable, thus provisional, and that this is a feature of scientific work, are philosophical and historical questions too. And it would hardly be fair or accurate to say that such underlying issues are off-topic, as these are where the main issues have to be decided. GEM of TKI

kairosfocus
May 29, 2007, 04:03 AM PDT
Kairosfocus: The discussion I was involved in with DaveScot and BornAgain77 got to this point -- bornagain77: "I point out that in all the countless millions of observations of mutations in DNA in the laboratory, NOT ONE has ever been shown to unambiguously increase information." I'm responding to that claim, and only that claim. If you're not interested, feel free to ignore us.

BornAgain: if you want an example of observed gene duplication, here's one: http://mbe.oxfordjournals.org/cgi/content/abstract/15/8/931 From a Sewell information POV, there is clearly more information. From the point of view of CSI:

- the genome is now more complex
- the genome is now more specified (more bits)
- there is more information

Please show how this mutation does not add information to the genome. I'm really not interested in playing the faux-philosophy game. You made a claim that mutations never add information. If you make empirically false claims, then I will show that you are mistaken. Please stick to this issue.

Phevans
May 29, 2007, 03:31 AM PDT
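Bracketing who is right, the two sides of this exchange are using "adds information" in different senses, which a toy Python sketch makes explicit (the sequence and mutation are invented for illustration; this is not a model of the linked paper):

```python
# Editor's toy sketch (hypothetical 21-base sequence) separating the two
# senses of "adds information" at issue in this exchange.

gene = "ATGGCCATTGTAATGGGCCGC"        # made-up gene
copy = gene[:5] + "T" + gene[6:]      # duplicate picks up one point mutation

genome_before = gene
genome_after = gene + copy            # duplication followed by divergence

# Sense 1: raw storage capacity (2 bits per base) clearly grows.
print(2 * len(genome_before), "bits ->", 2 * len(genome_after), "bits")  # 42 -> 84

# Sense 2: whether the new bases are *biofunctional* is a separate empirical
# question that no string arithmetic can settle.
```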
Phevans: You claim proof for evolution by citing a study on gene duplication. Yet they infer the gene was duplicated first, then go about to prove it. This is a common practice of evolutionists (and many other scientists) and is clearly the practice of "bad science". To commit to a prior philosophical bias and then set out to prove it is exactly the opposite of how science should work. Science should presuppose all reasonable philosophical presumptions are true and then set out to see which one is "most likely" true. Your study has several presumptions prior to investigation. It presumes that the origination of the gene is a totally random process, for one. Yet obviously they did not consider the equally valid ID presumption that the gene was designed that way in the genome in the first place, nor did they consider the equally valid presumption of "Front Loading," which presumes a complex feedback control loop in the genome that "mathematically or logically originated" the duplicated gene. But above all this, and indeed even before all this, for gene duplication even to be considered a viable random process by the evolutionary scientists, they must first prove that the genome itself is flexible enough to allow the "totally random" nature of mutations that the materialistic/evolutionary theory is absolutely required to have in order to be true in the first place. As I have stated before, all tests in these areas have been highly disappointing to the evolutionists (not one uncontestable mutation out of millions!). And even though this is absolutely required to be proven in the lab for evolution to be considered true, evolutionists blindly ignore this crushing fact of the overwhelmingly detrimental nature of purely random mutations to the DNA. Yet, in the same vein, the fact that the overwhelming majority of mutations to DNA are found to be slightly detrimental is absolutely crushing to the materialistic theory of evolution, since the slightly negative mutations are never selected out, and they therefore spread throughout the entire population before any "supposed" beneficial mutation ever even has a chance to occur. The principle is called "Genetic Entropy". I suggest you order the book from Amazon. You see, Phevans, materialism and thus evolution are both constrained to satisfy the empirical evidence for blind chance before they can even be considered valid. They have not done so, and in my opinion, from the evidence I've seen so far, they will never satisfy this requirement.

bornagain77
May 28, 2007, 03:23 PM PDT
PE:
RE: The original discussion was to see information added to a genome . . .
The substantial and original discussion, as can easily be observed above in Dr Sewell's post, is on:

"Natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view."

1] What I have done is to provide a specific context for assessing just what "improbable" means, courtesy the Dembski probability bound; this is standard for discussions on the issues of CSI.

2] So, to try to revert to discussions of single-point information-loss mutations or the like, to take rhetorical advantage of careless wording by a commenter, is to switch to a strawman, I am afraid. (Onlookers: those who turn away from the real issue to attack strawmen thereby reveal that they cannot properly address the issue in the main.)

3] Not surprisingly, we then see a citation on exactly what was never the serious focus for discussion: "An example is this paper . . . . An example of a gene duplicated which then adapts for better functionality," which illustrates the point aptly. This is a case of replication of existing information with slight modification [e.g. note Fig 1 in your link, which speaks of 28 base-pair differences . . . well within 250, and that is all the way across to human DNA . . . not exactly the immediate ancestral species!].

4] This is very far indeed from a realistic test, where for instance the Cambrian revolution originated, on the usual timelines, dozens of phyla within a few dozen MY. The jump from unicellular organisms to dozens of phyla is huge, as can be seen in the gap highlighted by Meyer et al in their now famous paper. For instance, a modern arthropod has ~180 Mn base pairs, and unicellular species reasonably would have had say 1 Mn or of that order. Where did all that additional information to create the new body plans come from, so fast?

5] As for your attempted dismissal of CSI, I note that we can clearly enough see whether something is functional/non-functional, and this is a good enough specification. We can then look at the information storage element involved and see how many possible configs can be stored in that space, I^N: 250 4-state elements have ~3.27*10^150 possible configs. Of these, only a very small fraction, for any typical 250-element functional chain of DNA, will retain function against RANDOM changes. And of course this would have to be replicated for every 250-monomer block in a biofunctional string of DNA.

6] This is a very plausible metric of the quantity of information that has to be generated to get to that functionality [we are simply working the other way], and it reflects the well-known high degree of isolation of biofunctionality in the overall config space. In short, the CSI issue does not go away so easily as you imagine.

7] As to the Talk Origins cites, the issue is not whether the papers exist, but what is the substantive issue at stake, and what are the materially relevant items of evidence relative to that. TO is notorious for strawmen, elephant-hurling tactics and literature bluffing. (EH, FYI, pretends to a consensus by papering over a major series of issues with a few smooth words, as though that says it all [the CC debate is a classic, with the typical mantras on "consensus"]. Lit bluffs cite places where passing reference to the themes in an issue is made, but do not address the matter in the main on the substance [this is a common tactic, e.g. the notorious stack of journal articles presented to Behe by the lawyer on the other side in the Dover trial].)

If you have something serious on the merits, as per the proper focus of the post, let us hear that. Otherwise, by tilting at strawmen you imply that you cannot address the issue on the merits. GEM of TKI

kairosfocus
May 28, 2007, 06:18 AM PDT
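The arithmetic behind point 5 above (and the 500-bit threshold used throughout the thread) is easy to check; a minimal Python sketch, illustrative only:

```python
n_elements = 250           # 4-state DNA bases (A, C, G, T)
configs = 4 ** n_elements  # size of the configuration space, I^N with I=4, N=250

print(f"4^250 = {configs:.3e}")     # ~3.273e+150
print(f"bits  = {2 * n_elements}")  # 2 bits per base -> 500 bits
print(configs > 10**150)            # True: the space exceeds 10^150 states
```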
kairosfocus: "I have yet to see in particular the peer-reviewed, generally accepted documentation of an empirically observed significant novel biofunction originating by chance that requires say 250 base pairs of novel DNA information formed out of noise. That is, substantially, what BA 77, DS and others are asking for. Replication or gene transfer or reproduction through redundancies etc do not count. Nor, does hurling elephants nor literature bluffing."

The original discussion was to see information added to a genome. I'll readily admit that there are no examples of the size you specify; it would be most surprising if there had been, given the timescales we're talking about. I fail to see how replication and mutation of genes is not a clear case of information being added to a genome, for any definition of information. Regardless of how people feel about TO, the references are valid. An example is this paper: http://www.nature.com/ng/journal/v30/n4/full/ng852.html It shows a gene duplicated which then adapts for better functionality. I'm quite happy to take the definition of CSI as the formal definition of information; however, as there's no way of measuring CSI, it renders the whole discussion moot, since we can't say whether CSI has increased or decreased pre- or post-mutation.

Phevans
May 28, 2007, 04:09 AM PDT
Hi BA 77:
it is a commonly known fact that the further a bacteria deviates from its starting point the less fit for survival it quickly becomes.
This is of course a characteristic of isolated islands of functionality in a configurational space. Over the weekend, at a funeral for a relative, I happened to be sitting behind three generations of surviving women. As I observed the grandma, the daughter and the grand-daughter generations, I thought about how our bodies wander across islands of functionality as we age, then what happens when we pass the edge of bio-function, lying just across the way in the casket. Then, in the yard outside, what happens thereafter as all active maintenance mechanisms fail. It led me to reflect on just how pervasive the issue of entropy is, and how persistently systems in functional macrostates tend to drift away in the absence of active measures to self-maintain, or of maintenance interventions from outside. [Makes me wonder about the physics of the fall as described in Genesis . . . was there a supervisory control system based in a domain of reality that is not based on molecules (so is free of the constraint of 2 LoT), that could maintain the body in the face of the trends locked up in molecular dynamics? Has that been switched off for one reason or another?] In short, a funeral can be an interesting time for reflection. And, of course, there is the point in Clausius' very first example of 2 LoT at work: that raw injections of energy without configuring information tend to add to the degree of disorder at molecular levels. Then, this morning, you talk about genetic entropy -- multigenerational wandering away from islands of functionality, leading to extinction. Maybe we need to use 2 LoT, molecular form, as Sewell discusses, to rethink a lot of things . . . GEM of TKI

kairosfocus
May 28, 2007, 03:40 AM PDT
Phevans, since you posted, I guess you sincerely believe there are examples out there that increase information/function. Most of the so-called beneficial mutations that you will be really excited to show me will ALL turn out to be of the bacteria/antibiotic adaptation variety. Dr. Sanford deals with this type of mutation in his book "Genetic Entropy". He says all mutations like this are the result of some type of loss of function in the bacteria, i.e. the antibiotic-binding protein in the bacteria loses the ability to recognize the antibiotic molecule. He compares it to someone losing their car alarm: many people would see the loss as beneficial, yet in actuality it is still a loss of function for the bacteria and thus, by my very pragmatic definition, a loss of information in the bacteria. As Patrick stated, most people on this site have heard this type of argument many times before. You fail to fully appreciate the fact that there actually is no proof of any evolution where you think there is some. NEVER has a bacteria been observed to transform into any other bacteria, despite extensive experimentation trying to accomplish this. In fact it is a commonly known fact that the further a bacteria deviates from its starting point the less fit for survival it quickly becomes. Where is your proof, Phevans? You have none. Please show me any one thing that you believe is rock-solid proof of evolution and I will clearly show you why it proves ID. This is a fair challenge to you. Until then, take care.

bornagain77
May 26, 2007, 03:32 PM PDT
phevans, I wouldn't bother posting links to talkorigins. Long-time UD members have all read it -- and the updates -- and we're not impressed at all. In fact, it's generally considered to be a bad source of information. Try PNAS or other sources.
You are mistaken, for any formal definition of the word information.
I take it that you refuse to accept the formal definition of CSI, etc., as valid? Refusing the information category that is actually under consideration and force-fitting another instead in order to make the argument work: we see this tactic all the time, and it's not conducive to a good discussion.

Patrick
May 26, 2007, 10:32 AM PDT
PE: I took a look at the TO link, and as usual I am not impressed. (These are prime examples of the "anything can happen in an open system" fallacy.) I have yet to see in particular the peer-reviewed, generally accepted documentation of an empirically observed significant novel biofunction originating by chance that requires say 250 base pairs of novel DNA information formed out of noise. That is, substantially, what BA 77, DS and others are asking for. Replication or gene transfer or reproduction through redundancies etc. do not count. Nor does hurling elephants or literature bluffing. Let's see specifics that pass the sort of hurdle that must have been passed to get OOL or body-plan level evolution, if NDT and linked OOL models are as well established as advertised. GEM of TKI

kairosfocus
May 26, 2007, 03:54 AM PDT
H'mm: tried to post on the difference between info-carrying capacity [so-called Shannon info], functionally specified (algorithm-implementing) information, FS COMPLEX info [FSI, but beyond 500 bits etc.], and semantic [symbolic, language-like] info, but the filter grabbed it away . . . GEM of TKI

kairosfocus
May 26, 2007, 03:38 AM PDT
Gentlemen: Can I suggest that we make a distinction here:

1] Information-carrying capacity [what so-called Shannon information measures], vs:

2] Functionally specified information, which is like the object code in a microcontroller or the like -- information at work in an information-processing system. DNA fits in here. [Note my use of examples to specify what I am talking about.]

3] Functionally specified COMPLEX information -- functional information that goes beyond the Dembski-type bound, in terms of being over 500 or so bits in required storage capacity, or, more generally, being so isolated in the configuration space that would fit the storage capacity that it is less than 1 in 10^150. [This is the threshold beyond which it is not credible that, on the gamut of the observed cosmos, we could get to the islands of functionality by chance-dominated processes.] --> BTW, a good engineer can do a whole lot with 500 bits of storage.

4] Semantic information -- what we use when we interact using concepts, ideas and the like, i.e. language in the subjectively meaningful/symbolic sense, not just the functional/algorithm-controlling sense.

In doing this I am sort of following Berlinski. I think that it helps us keep out of ambiguities. On that too, I would like to see documented cases of empirically observed mutations that lead to biologically novel functions, as opposed to the usually met-with loss-of-function-oriented point mutations, or simple breeding out of variability through the founder principle, or replication of existing functionality through redundancy mechanisms. Then, I want to see such innovation documented through recent replicable studies -- not inferences, extrapolations and assumptions about the deep past -- that exceed 500 bits, i.e. 250 base pairs of noise converted by chance into biofunctional DNA. Beyond that, I want to see the same lead to novel body plans, comparable to say a dog acquiring the ability to fly. Then we can say that NDT-style macroevolution has some serious empirical warrant. Until then, I remain for excellent reason profoundly skeptical that such has the required degree of warrant, relative to the obvious inference that, based on experience of what agents do, FSCI in biofunctional systems -- starting with the molecular technologies of the cell -- is just that: an artifact of intelligent, purposefully acting agents. GEM of TKI

kairosfocus
May 26, 2007, 03:35 AM PDT
A quick browse through the talkorigins FAQ will give you all the information you need. The relevant link is here: http://talkorigins.org/indexcc/CB/CB102.html How could "microevolution" occur without any novel abilities or information being generated? Before you say horizontal gene transfer: where do these new genes come from? Do they not count as new information?

DS: my argument is that 2LoT isn't relevant to particular encodings of information. A given genome is a form of abstract information and isn't affected by the 2LoT, but one particular encoding of that genome in the form of DNA may be affected. However, that particular encoding will be replaced long before this happens.

Phevans
May 26, 2007, 03:17 AM PDT
Phevans, since you seem to know of beneficial mutations in DNA that have unambiguously increased information/function and that are not in any of the literature I've read, I would like for you to point them out for me. (As well, I know quite a few others who would like to know of these beneficial mutations.) Until you point them out, I will continue to state my original claim!

bornagain77
May 25, 2007, 12:54 PM PDT
phevans: Obviously we aren't discussing Shannon information. The background noise in a semiconductor is about as random as random gets. In terms of Shannon information it's packed as full of information as you can get, but in terms of just about any other kind of information it is devoid of content -- unless it happens to be the universe telling us secrets and we don't have the decryption key. As I said, we need to be clear that it's subjective information we are discussing: information coded or encrypted into a storage or transmission medium. A modulated radio signal carrying an encoded voice is one example. DNA encoded with instructions for constructing proteins is another.

DaveScot
May 25, 2007, 10:24 AM PDT
bornagain77: "in all the countless millions of observations of mutations in DNA in the laboratory, NOT ONE has ever been shown to unambiguously increase information."

You are mistaken, for any formal definition of the word information. DaveScot referred earlier to Shannon information, which is a measure of the information content of an arbitrary string. It's what is generally meant when discussing the information content of the genome, though if you are referring to a different sort of information, please let me know and point me at a formal definition. As DS mentioned, Shannon information is highest in a totally random string, and it rises as the probability of the string's occurrence falls. That is, low information content means the string is highly likely to occur, and high information content means that it is unlikely to occur randomly. Random mutations in the genome can either increase or decrease the information content. It is this variation on which natural selection works. It has been mathematically and experimentally verified, and is not under serious debate from either the Darwinian or ID sides. Please stop peddling these misconceptions.

Phevans
May 25, 2007, 07:35 AM PDT
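The measure being debated here can be pinned down. A minimal sketch of Shannon self-information under an i.i.d. symbol model, with probabilities estimated from the string itself (the function name and example strings are illustrative, not from the thread):

```python
import math
from collections import Counter

def shannon_bits(s: str) -> float:
    """Total self-information of s in bits: -sum over symbols of count * log2(p)."""
    n = len(s)
    return sum(-c * math.log2(c / n) for c in Counter(s).values())

print(shannon_bits("AAAAAAAAAA"))  # 0.0   -- fully predictable string
print(shannon_bits("ACGTACGTAC"))  # ~19.7 -- near-uniform use of 4 symbols
print(shannon_bits("AAAAAAAACG"))  # ~9.2  -- two "mutations" raised the measure
```

On this measure a maximally random string scores highest, which is DaveScot's point in his reply above: Shannon information by itself says nothing about meaning or function.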
H'mm: Some interesting developments -- this thread is a spiral, not a deadlocked circle.

1] Jerry: "in the discussion of this in February, bFast made by far the best observation of specificity, namely that the sequence or object was closely correlated with functional processes and thus specified by them." That is part of why, in my always linked, I specifically discuss FSCI -- FUNCTIONALLY specified, complex information. First observe a function [e.g. bio-function based on molecular energy converters], then discuss how informational elements are arranged to get there. It turns out that the functionality rests on complex digital codes and sequences of meaningful elements that are so isolated in the configuration space that they cannot be credibly accessed through chance plus necessity alone, acting through known patterns of random behaviour at micro levels, i.e. Sewell's Law and its background in statistical thermodynamics. (Cf my own intro-level discussion in Appendix A to my always linked, with onward links.)

2] "I think the coin toss and card deck examples are red herrings in this and while they indicate intelligence, they are qualitatively different processes." It so happens that the 500-coin toss, or the serving up of a suspicious hand at poker, is illustrative of the underlying issues of digital configurational spaces and functionality. Both work because they are connected to the disreputable origins of probability theory: gambling games and the problem of cheating, or of which bets make sense to take. The mathematics so discovered or exemplified has great applicability and utility in many other cases of greater moment. Also, the cases are relatively easy to understand. For instance, if I were to come to you with 500 coins on a tray vigorously being tossed and dancing fast, then let them settle down -- lo, all heads -- and I say that I just tossed at random and this very special and specific and simply describable outcome happened just so, would you believe me? [Or would you suspect that, for instance, all the coins might just be double-headed? You are not permitted to inspect the coins.] So they are not red herrings, but toy examples of a much bigger thing.

3] Mt Rushmore vs rocks in New Hampshire or Hawaii: Again, Mt Rushmore has semantic context and functionality, not just coincidence. Rocks may form vaguely head-like shapes and shadows, but they do not spontaneously form a cluster that resembles four historically important presidents of the US to the point of being a high-quality group portrait in accordance with principles of monumental sculpture.

4] PC to BA 77: "we both agree that NS+RM can 'generate increasingly meaningful (subjective) information', and that this isn't in any way in violation of the 2LoT." The issue here is probabilities relative to accessible microstates and associated macroscopically observable outcomes, and in the question in view, the minimum degree of complexity and specification is well beyond the Dembski bound of what is remotely likely to happen on the gamut of the observed cosmos. Just look at the OOL issue and the issue of innovating body plans . . . [If a system is not beyond that bound, it is not FSCI in the meaning of the term we are discussing; 500 bits of information storage capacity is enough if the relevant state in question is unique, and 1 in 10^150 otherwise. I.e., we just described the relevant step size, and relative to getting 500 k or so DNA elements to begin life, or tens of millions to innovate new body plans, the "small" step is beyond the Dembski bound. Recall, you have to get to a sufficient complexity, and show it empirically, to get to reproducing life which can then undergo natural selection. Cf TBO's classic discussion in TMLO.]

RM + NS can explain micro-evolution in at least some cases -- as a general rule by LOSS of biofunctional information through scrambling [e.g. antibiotic resistance etc.] -- but it simply cannot reasonably surmount the probabilistic hurdle Sewell is pointing to. THAT is what is being dodged in the "anything can happen in an open system" mantra. No: absent intelligently directed constraints, open systems migrate naturally towards more probable and less functional macro-configurations, i.e. the ones with overwhelmingly more microstates that fit in with them. That is what drives diffusion, i.e. increases in configurational entropy, and it is what drives the disintegration and decay that we find ourselves fighting so often on all fronts on which we like to make progress. It so happens that evolutionary materialism is opposed to it, but that does not make it any less empirically well established as a physical principle. Nor does it change the fact that every case of FSCI that we directly know the cause of is, again and again, a case where intelligent action was brought into creative play. Therefore, on empirically anchored inference to best explanation -- screams from the evo mat advocates notwithstanding -- the FSCI in life forms etc. is best explained by the same basic mechanism: intelligent action that intended to create such function, and succeeded. GEM of TKI

kairosfocus
May 25, 2007, 04:49 AM PDT
Rude: "Which brings up something I've wondered about: Is design written in the language of mathematics/logic and is that itself designed? If the laws of physics are contingent then those who see them as designed are correct. But does it stop somewhere? Or is it design all the way down?"

A good twist on the "turtles all the way down" aphorism. That is a profound question I have wondered about myself. For instance, the behavior of integers. Could there be a realm where the series of primes was different from what has been "discovered" by mathematicians? If so, it would seem to violate some sort of fundamental rules governing anything for it to exist at all. In its simplest form, this argument from the known behavior of numbers in mathematics could use the simple multiplication tables as an example. Could the insight that 2 x 2 = 4 be a limited human perception, not necessarily applying to higher levels of consciousness or existence, in which the known behavior of numbers in mathematics is a design choice by a higher intelligence? I think this question is vastly beyond the capability of human intelligence to fathom, because the known structure of mathematics is inherent in the structure of the physical world and in human rational thought itself.

magnan
May 24, 2007, 02:59 PM PDT
WmAD observes that there are archaeological objects that we are able to determine are designed but not what they were designed for. Dr. Egnor posted a bit on one such object (the Antikythera Mechanism) at http://www.evolutionnews.org/.

Atom
May 24, 2007, 02:15 PM PDT
Interesting stuff -- no time to digest it all! But in one of his books -- if my memory serves me -- WmAD observes that there are archaeological objects that we are able to determine are designed but not what they were designed for. This dissociates the specification from function, though mostly I think it would be the same as the function. Linguistics in America splits into two camps, where what interests the one -- the formalists -- are structures with no inherent function, and what interests the other -- the functionalists -- is connecting those structures to function. Time has not been kind to the formalists, in that little by little each structure and transformation has been shown to code for function. So can we say that linguistic function (semantic roles, discourse coherence . . .) is the specification that confirms the design (complex syntactic structures and grammatical processes)? And were the formalists to be successful in identifying a body of complex functionless linguistic forms, would we still identify them as designed? What would be the specification?

Which brings up something I've wondered about: Is design written in the language of mathematics/logic, and is that itself designed? If the laws of physics are contingent, then those who see them as designed are correct. But does it stop somewhere? Or is it design all the way down?

Dave Scott says, "But how can you objectively measure specification? I don't believe you can. Specification is tangible and our brains use it (consciously or unconsciously) constantly in evaluation and decision. Specification is a product of mind, not nature." Perhaps mathematics/computer-savvy linguists should be working on this, for linguists distinguish between old and new information: every clause advances the discourse with a small amount of new information (else it is redundant) but also ties into the discourse with a small bit of old information (else it is incoherent). But you're right, this distinction is subjective; yet in the flow of language perhaps it could be measured. Words, you know, have meaning, but devoid of a context there is no information. The minimal unit of information is the clause (the proposition to logicians, the function to mathematicians).

Dave Scott again: "Is it still science when it can't be described objectively? Is specification excluded from science because there's no way to scientifically or mathematically distinguish War and Peace from a book of gibberish?" Not, of course, if we're not demarcationists. Also I might add that the tools of hard science -- namely human language and its derivative mathematics -- are subjective. Pardon my rambling, folks; just wanted to get in my two cents re specification -- it's the all-important component of design.

Rude
May 24, 2007, 02:03 PM PDT
I made the following comment: "what is common about a 500 coin toss with all heads, Mt. Rushmore, a deck of cards sorted into suits, an English paragraph and DNA." The coin toss and the deck of cards are different from the other three examples. Mt. Rushmore, an English paragraph and DNA all become meaningful because they are associated with outside functional things. The coin toss and deck of cards are both based on the nature of their internal sequences, not some outside criteria. It is just a thought, and in the discussion of this in February, bFast made by far the best observation of specificity, namely that the sequence or object was closely correlated with functional processes and thus specified by them. I think the coin toss and card deck examples are red herrings in this, and while they indicate intelligence, they are qualitatively different processes. Maybe "specified" is not the best word. This is just a thought.

jerry
May 24, 2007, 06:13 AM PDT