Uncommon Descent: Serving The Intelligent Design Community

Media Mum about Deranged Darwinist Gunman

Categories: Culture, Darwinism

John West of the Discovery Institute reports:

But when a gunman inspired by Darwinism takes hostages at the offices of the Discovery Channel, reporters seem curiously uninterested in fully disclosing the criminal’s own self-described motivations. Most of yesterday’s media reports about hostage-taker James Lee dutifully reported Lee’s eco-extremism and his pathological hatred for humanity. But they also suppressed any mention of Lee’s explicit appeals to Darwin and Malthus as the intellectual foundations for his views. At least, I could find no references to Lee’s Darwinian motivations in the accounts I read by the New York Times, the Los Angeles Times, the Washington Post, ABC, CNN, and MSNBC.

Major Media Spike Discovery

Comments
gpuccio, Thank you for another detailed reply. Here, from that reply, is where I think we can have a productive conversation about quantifying CSI:
Now, if we are speaking of a transition where the change in functional information is, say, of two AAs, as it is likely in the Lenski model or in the nylonase model, we are in the range of less than 10 bits. I think nobody in his right mind has ever suggested that kind of threshold for CSI. So, it is very simple: a transition between two states where the change is of less than 10 bits can never be defined as a transition which has generated new CSI.
I'm somewhat confused by this, because you first estimate that the changes in Lenski's experiment constitute about 10 bits of CSI, then you claim that it is not CSI. It seems like you are using the term in two different ways. If my understanding of your argument is correct, you're claiming that evolutionary mechanisms cannot generate more than a certain amount of CSI as a single mutation becomes fixed in a population. However, evolution proceeds by many small changes. If each mutation generates 10 bits of CSI, it only takes 15 mutations to hit your 150-bit boundary.

You seem to be saying that evolutionary mechanisms can't generate large amounts of CSI in one fell swoop. Assuming that CSI can be rigorously defined, that would be an expected prediction of modern evolutionary theory. Evolution deals with incremental change based on what already exists. Your previous calculations that came out with large CSI values for certain proteins do so only by ignoring evolutionary mechanisms and assuming that the protein appeared complete and whole in its current form. That is not a biologically realistic scenario.

I would very much like to drive this quantification discussion to a conclusion. Could you please take me step by step through the calculation of CSI for Lenski's citrate-eating E. coli? There is more than enough information about that experiment online for us to make some reasonable estimates about the number of mutations required. This exercise will also allow us to address the ambiguity around what constitutes a valid specification.

MathGrrl
September 6, 2010 at 08:32 AM PDT
F/N: Pardon, but to return to the main area of focus for the thread, I think 28's alternatives to the desperate darwinism-tinged neo-malthusianism that helped Mr Lee go off the rails [cf 19] are a point to begin finding ways forward; and it allows us to get out of the poisonous atmosphere fostered by the sort of media bias the original post complains of. So, pardon my putting the following back on the table:

====================

>> Instead of a sterile debate in a poisonous rhetorical atmosphere, let us instead discuss possibilities for a positive future that uses the human capacity to intelligently analyse the possibilities of the forces and materials of the world, to create opportunities for a future that gets us out of the neo-Malthusian trap. As sparkers for thought:

1 –> Energy is the key resource for everything else. So, long term, fusion; shorter term, I think new-wave fission such as developments of pebble bed technology offers us a way forward.

2 –> Information technologies, though they are rooted in some of the dirtiest industries of all [look up what happens with Si chip fabrication . . . ], are a key intellectual power multiplier, so this technology should be given a priority, on both hard and soft sides.

3 –> The modularity principle would allow things to be localised, reducing the need for massive conurbations, which seem to have largely become ungovernable. Technologies should be as modular as possible, and as networked as possible, to take advantage of network economics.

4 –> Timber is the major construction resource, so we should look to sustainable timbers, especially the potential of processed bamboos based on species such as Guadua angustifolia [100', 5 - 7 y, higher growth density than pine forests]. Bamboo and the like can also make paper.

5 –> A lot of construction of relatively light structures can move to technologies such as bamboo bahareque, through a modern version of wattle-and-daub.

6 –> The automotive industry needs to go fuel cell long run; shorter run, I like things like algae oil [couple coal plants to feed bio-oils grown by algae, cutting emissions 50%], and I think if we can do butanol in a fermenter cost-effectively, we are looking at 1:1 for gasoline for Otto cycle engines.

7 –> That brings up biotech. We need a big thrust to get cells to manufacture as much of our chemistry as we can, industrial and pharmaceutical. Bugs will do it for us, on the cheap, once they are reprogrammed. (Remember, they are existing Von Neumann replicator technologies.)

8 –> Wind and solar will probably remain fringe but useful technologies. With one major exception: we need to look back to sailing schooners as small regional carriers in a post-oil world.

9 –> Rail is the most inherently efficient bulk-mover land transportation system, so we need to look to how that could be regenerated — subsidies, overt and hidden, killed rail.

10 –> We need to look to aquaculture and high-tech agriculture to feed ourselves.

11 –> We need to break out of Terra, using our moon as a staging base — 1/6 gravity makes everything so much easier to do — with Mars as the first main target. Beyond Mars, the Asteroid belt.

12 –> Transform these and we are looking at real estate to absorb a huge onward population.

13 –> As a long-shot, high-risk, high-payoff investment, physics needs to look at something that can get us an interstellar drive and transportation system.

14 –> So, investment in high-energy accelerators and related physics and astronomy should be seen as a long-term investment of high risk but potentially galaxy-scale [or bigger?] payoff.

15 –> Settling the Solar system takes the potential human population to the dozens of billions.

16 –> If we can break out and find terraformable star systems beyond, the sky is literally the limit, even if we are restricted to a habitable-zone band in our galaxy. (For, we are dealing with potentially millions of star systems.)

_____________________

Now, wouldn't it have made a big difference if we had been discussing these sorts of possibilities instead of the eco-collapse, climate-collapse and over-population themes that serve little purpose but to drive people to desperation — and into the arms of those who offer convenient "solutions"? >>

====================

I think finding a way forward is what we really need to discuss. And, by putting up something serious and discussing it, we can move beyond the limits set by media biases.

GEM of TKI

kairosfocus
September 6, 2010 at 03:15 AM PDT
GP: You are also very right. One way to see that is to look at the plausibility thresholds identified by Abel in his recent paper, for different scales:
The UPM from both the quantum (qΩ) and classical molecular/chemical (cΩ) perspectives/levels can be quantified by Equation 1. This equation incorporates the number of possible transitions or physical interactions that could have occurred since the Big Bang. Maximum quantum-perspective probabilistic resources qΩu were enumerated above in the discussion of a UPB [6,7] [[8] (pg. 215-217)]. Here we use basically the same approach with slight modifications to the factored probabilistic resources that comprise Ω. Let us address the quantum level perspective (q) first for the entire universe (u), followed by three astronomical subsets: our galaxy (g), our solar system (s) and earth (e). Since approximately 10^17 seconds have elapsed since the Big Bang, we factor that total time into the following calculations of quantum-perspective probabilistic resource measures. Note that the difference between the age of the earth and the age of the cosmos is only a factor of 3. A factor of 3 is rather negligible at the high order of magnitude of 10^17 seconds since the Big Bang (versus age of the earth). Thus, 10^17 seconds is used for all three astronomical subsets:

Universe: qΩu = 10^43 trans/s * 10^17 s to date * 10^80 p,n,e = 10^140
Galaxy: qΩg = . . . * 10^67 = 10^127
Solar system: qΩs = . . . * 10^57 = 10^117
Earth: qΩe = . . . * 10^42 = 10^102

These above limits of probabilistic resources exist within the only known universe that we can repeatedly observe--the only universe that is scientifically addressable. Wild metaphysical claims of an infinite number of cosmoses may be fine for cosmological imagination, religious belief, or superstition. But such conjecturing has no place in hard science. Such claims cannot be empirically investigated, and they certainly cannot be falsified. They violate Ockham's (Occam's) Razor [40]. No prediction fulfillments are realizable. They are therefore nothing more than blind beliefs that are totally inappropriate in peer-reviewed scientific literature. Such cosmological conjectures are far closer to metaphysical or philosophic enterprises than they are to bench science.
For chemical reactions with 10^-13 s as the speed limit, these fall to 10^108, 10^96, 10^85, and 10^70 respectively. Thus, the scope of search resources for chemistry is well within the limit you are proposing, even for the whole universe, much less my monkeys, keyboards and banana plantations model for digital text strings -- an upper limit on keyboarding would be like 10^-3 s, generous given the realities of keyboard bounce and the need for debouncing. [Even the use of a cross-coupled RS latch with a changeover switch, or a latching JK flipflop where the o/p switches the f/f to the store state on first contact in ~ 10 ns depending on IC technology, will not get us beyond that.]

So, 150 functional bits [2^150 ~ 1.43*10^45 . . . get XCalc, folks; my favourite convenience RPN simple calc] is fairly safe for slow reactions involving long-chain organic, endothermic molecules that have to be chaperoned step by step in observed cases. Note how there are typically hundreds of ribosomes, working in parallel to get cumulative speed.

The Durston paper then puts in the knife, starting with just the individual proteins, not even the level of organising the life form, where metabolism demands many times over multiplied dozens of proteins, plus self-replication. Then, to innovate new body plans . . .

In short, there are very good reasons why we do not see the Darwinian mechanisms doing much more than Behe's edge-of-evolution observations from malaria parasites. Organisms do adapt and vary, but the mechanisms by which the organisms and their body plans come about are far beyond what it is credible that chance variations and mechanical necessity can do, whether with chemicals in warm little ponds, or with genetic accidents. And we have just one empirically known mechanism that can routinely exceed random walk searches: active information injected by intelligences.

GEM of TKI

kairosfocus
September 6, 2010 at 03:00 AM PDT
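For readers who want to verify the numbers in the two comments above, here is a minimal Python sketch of the quantum-level resource counts quoted from Abel, plus the 2^150 figure. The rates and particle counts are the quoted paper's assumptions, not independent estimates, and the comment's chemical-level values involve further adjustments not reproduced here:

```python
# Exponent arithmetic for the quantum-level probabilistic resources
# quoted above: (transitions/s) * (seconds elapsed) * (particle count).
# All exponents are the quoted source's figures.

QUANTUM_RATE_EXP = 43  # ~10^43 quantum transitions per second
TIME_EXP = 17          # ~10^17 seconds since the Big Bang

for scope, particles_exp in [("universe", 80), ("galaxy", 67),
                             ("solar system", 57), ("earth", 42)]:
    total = QUANTUM_RATE_EXP + TIME_EXP + particles_exp
    print(f"{scope}: 10^{total}")  # 10^140, 10^127, 10^117, 10^102

# The 150-functional-bit threshold corresponds to a configuration space of:
print(f"2^150 = {2**150:.3e}")     # ~1.427e+45, the comment's 1.43*10^45
```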
KF: thank you for your always precise contributions. There is no doubt that 500 or 1000 bits is a very reasonable threshold for a universal case. My reason for proposing a lower threshold of 150 bits in the specific context of proteins and biological information is that here we are not in a general case, but in a very particular one. And we have a good knowledge of the available probabilistic resources, which, even in an extreme and very safe calculation, are much lower than those of the universe, taking into consideration a reasonable estimate of the most favourable case (bacteria), with the highest population number and replication rate, and as a timeframe the estimated useful age of earth, let's say 4 billion years. Moreover, when we apply the calculation to protein families, we can try a reasonable estimate of the functional space, and so we need not push our threshold higher for safety reasons.

For instance, if we look at the estimates of FSCI in Durston's paper, we can see that 28 protein families out of the 35 he has analyzed are well beyond my threshold of 150 functional bits. And 12 of them are beyond the 500-bit threshold, with 6 beyond 1000 bits. The 7 which are below the 150-bit threshold are very short proteins, of less than 100 AAs. But it is interesting to observe that, out of 12 protein families below 100 AAs in length, 5 do have a functional complexity higher than 150 bits, starting with insulin, which is 66 AAs long and has a functional complexity of 156 Fits. Therefore, the important point is that functional complexity is a function of length, but not only of length. The Durston method measures very well, and very naturally, how much of the AA length is really necessary for the function, which, together with length, is a very good estimate of the target space.

It is obvious, at least for me, that even the worst case of ankyrin, with its 33 AAs and 46 Fits, could not emerge in a random system (the probability for that to happen still being 1 in 7x10^13), which is reassuring enough. But we have to fix a threshold somewhere, and for protein complexity I believe that 150 Fits is very appropriate, because it can be derived by reasonable and very generous assumptions about possible random biological systems, and empirically it seems to detect almost all the cases of complexity in the existing proteome, leaving out only a minority of probable false negatives.

gpuccio
September 6, 2010 at 02:15 AM PDT
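As a quick check on the Durston-style figures gpuccio cites above: a functional complexity of N Fits implies a random-hit probability of 2^-N. A minimal sketch, using only the numbers quoted in the comment:

```python
import math

# Fits (functional bits) -> implied odds of a random draw landing in the
# functional target: P = 2^-Fits. Figures are those quoted above.

for name, length_aa, fits in [("insulin", 66, 156), ("ankyrin", 33, 46)]:
    odds = 2.0 ** fits  # a 1-in-odds chance
    print(f"{name}: {length_aa} AAs, {fits} Fits -> 1 in {odds:.1e}")
    # ankyrin: 46 Fits -> 1 in ~7.0e13, matching the comment's 1:7x10^13

# For comparison, a fully specified 33-AA chain would carry
# 33 * log2(20) ~ 143 bits, so 46 Fits says only part of the sequence
# is functionally constrained.
print(f"33 AAs, fully specified: {33 * math.log2(20):.0f} bits")
```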
PS: GP [and MG], the reason for going from 500 bits -- the number of Planck-time quantum states the atoms of the observed universe run through in the thermodynamically credible lifespan of the cosmos -- up to 1,000 bits is that it is often hard to specify how many states will be functional. So, taking the number of states of the observed universe as the practical upper limit on an island of function -- it is the largest functional object we observe! -- we isolate that to the degree of 1 in 10^150, by squaring this number. (Practically speaking, start with 500 bits and double it. The number of possible states for B bits is 2^B, so doubling B gives a squared number of states: 16 bits gives a 64 k address space, and 32 bits gives 4 gigs.)

At 1,000 bits, an island of function the scope of our whole universe will be utterly isolated. And no practically feasible search will be able to use up even a small fraction of 10^150 random-walk search steps. That means that we are utterly unlikely to get to the shores of an island of function by sheer dumb luck. Where, as Marks and Dembski show us, on average, search algorithms will be no better than random-walk searches. [Unless the algorithm is very well matched to the space through intelligent, active info, it will typically be WORSE than random search: if you take a multiple-choice test based on misinformation, you are MORE likely to pick wrong items than if you picked at random.]

In the real world -- as opposed to the world of selectively hyperskeptical objections -- we routinely recognise intelligent configuration of symbols from their complexity and organised functionality. That is why we know the posts above are intelligently designed, not random bursts of noise on the Internet. We have an Internet full of examples of FSCI being the product of intelligence, and ZERO cases of FSCI being produced by lucky noise. That is why objections are getting into ever more esoteric issues on what intelligence is, whether free will exists, and whether the matter can be quantified -- never mind that the quantifications are easily accessible [cf no 27, WACs top right at UD for summaries]; they too will be objected to in turn, again and again.

That is why I am picking the simplest, as it is what we routinely implicitly use when we go buy a hard drive or more memory for our PCs, or when we send out a file as an e-mail attachment and notice its size. In short, when we see objections to the sort of simple metric above, X = F*C*B, we have good reason to know the objections are selectively hyperskeptical, and we can see why, relative to very familiar examples. Indeed, as UB and TGP keep on pointing out, the objectors must intelligently produce samples of FSCI to make the objection to FSCI, underscoring that the objections are self-referentially incoherent. As in, reductio ad absurdum.

kairosfocus
September 6, 2010 at 01:40 AM PDT
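The go/no-go metric kairosfocus describes above is simple enough to state in a few lines of code. A minimal sketch, assuming a 1,000-bit threshold; note that the `functional` flag is an input, since whether something works must be assessed empirically, outside the metric itself:

```python
# X = F*C*B, with F and C as 1/0 dummy variables, as described above.

THRESHOLD_BITS = 1000  # the conservative complexity threshold from the comment

def fsci_metric(bits: int, functional: bool, threshold: int = THRESHOLD_BITS) -> int:
    """Return X = F * C * B: nonzero only for functional, complex-enough items."""
    F = 1 if functional else 0
    C = 1 if bits >= threshold else 0
    return F * C * bits

# A 7,000-character ASCII post at 7 bits/char carries 49,000 bits of capacity:
print(fsci_metric(bits=7000 * 7, functional=True))   # 49000 -> flagged as designed
print(fsci_metric(bits=7000 * 7, functional=False))  # 0 (no function, no FSCI)
print(fsci_metric(bits=500, functional=True))        # 0 (below the threshold)
```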
MathGrrl: Pardon. Above, you interjected a question on the quantification of CSI/[d]FSCI in a thread which is primarily about dealing with media bias and manipulation. (And since in comment no 1 you made an objection on the subject, which was answered, you are aware of the main subject.) By 41 - 46, you had several responses, INCLUDING DISCUSSIONS OF THE QUANTIFICATION OF CSI/[d]FSCI AND RELATED MATTERS, WITH ONWARD LINKS THAT DISCUSS THE GENERAL CONCEPT OF INFORMATION AND GO SO FAR AS ONWARD ISSUES IN THERMODYNAMICS. In response, you picked on 46, which was a "sauce for the goose" turnabout intended to show the inadequacy of objections on definition that demand mathematicisation where it is inappropriate. By 53 - 55, GP and the undersigned repeated the exercise, and UB pointed to the underlying incoherence again; GP pointing to a previous case where you apparently dropped out of a discussion when he answered the same basic question/objection. In addition, in my own responses, I highlighted the variety, limitations and problems associated with definitions. In your latest responses, I find little responsiveness to this substantial body of discussion. In particular, your repetition of demands on definitions does not seem to reflect any serious engagement of the issue of definition and the linked one of quantification.

I will go so far here as to point out that if you call for an inspection of the properties of your PC's hard drive [or even of a document written in Word or the like], you will see a listing of xxx bits. That listing gives the number of bits at work to store the functionally specific information that makes your PC work, or allows you to present the document in question. Shannon information is in effect a raw capacity or transmission measure, with some adjustment based on the redundancy of the alphabet of symbols used and some further allowance for the effects of noise, as is discussed in Section A of the always-linked through my handle in the LH column. (As a rule, some symbols from a set of such will be more often used, e.g. e vs x in English. [And that in turn answers the question of what symbols are, by pointing to a live example: the ASCII set, or the old-fashioned glyphs we studied in our first classes in school. Also, given long experience at UD, when we see one definition simply being the occasion for more and more demands for further definitions, in a context that refuses to engage the already linked serious discussions on the nature and limits of definition -- either you go circular [a dictionary] or you have primitive concepts that are not further defined than by pointing to cases and using family resemblance [the sciences, Mathematics] -- then the implication is that the objections are rhetorical and distractive, not substantial.])

However, the information listed for your hard drive is not simply Shannon info; it is information that functions, and in specific ways. As this PC reminded me the hard way a scarce week back, a very few corrupted bits in Config Sys are quite enough to kill function, and lose you the months of stored info on a hard drive when the drive has to be wiped and reloaded with the OS. So, we can see just how specific functionality can be. Believe you me, the "islands of function in a wider sea of non-functional configs" is NOT a mere empty metaphor. (Advice: back up all the files you would regret losing [at min, email them to yourself on an Internet mail account] . . . I got caught getting lazy on that, again. "It's a new PC . . . " and "I gotta rush off . . . " OUCH.)

So also, as noted, recognising that we can identify adequate vs inadequate function -- does it WORK? -- we may define a dummy variable on go/no go: F = 1/0. Similarly, on applying a threshold for complexity -- more or less a measure of how isolated and hard to find at random target zones will be in configuration spaces stipulated by the size of the body of information and the scope of the set of symbols -- we can have a go/no-go on whether it is complex enough that random walks are unlikely to encounter the island of function: 500 - 1,000 bits is more than good enough; usually we just use 1,000 bits, and again set C = 1/0. The simple FSCI metric is one step away: take the product of the number of bits at work [B] with F and C: X = F*C*B. (NB: This is actually presented in Section A, the always linked, and in a simple form in the Weak Argument Correctives, 25 - 28.) A bit rough and ready, but good enough for a lot of practical work. (BTW, grades in school are assessed and measured in a very similar way, as are job performance ratings. In short, the approach is quite good enough for serious real-world contexts.)

In short, pardon again: your behaviour (given several on-point responses that you did not address) raises a serious question: are you raising a serious burning question, or are you dragging a distractive red herring across the track of a subject that is going where you do not wish to see it go? In this case, highlighting how the major media [including nominally "conservative" houses] are so often dominated by an evolutionary materialistic agenda that turns that ideology into a sacred cow. So, hard questions that need to be asked about what evolutionary materialism is doing to us as a culture are being persistently ducked by those who man our microphones, cameras and editorial staff-rooms. Not to mention, our classrooms. It is therefore highly relevant to cite some choice words from Lord Keynes' peroration to his famous General Theory:
. . . Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas. Not, indeed, immediately, but after a certain interval; for in the field of economic and political philosophy there are not many who are influenced by new theories after they are twenty-five or thirty years of age, so that the ideas which civil servants and politicians and even agitators apply to current events are not likely to be the newest. But, soon or late, it is ideas, not vested interests, which are dangerous for good or evil.
And so, MG, it is not so much that Mr Lee was insane, but that the madman is the canary in the mine, informing us of the poisonous currents in the air by his unbalanced ravings. As was pointed out in 19 - 21 and 36 above. If we are to take this thread in a positive direction, why not pick up on the positive responses in 28 above, which suggest a way to solutions, short and long term, including a short discussion on why even investment in some truly esoteric particle/high-energy physics research may have long-term, incalculably positive payoffs? I think this can provide a mutually beneficial and relatively uncontentious focus for discussion that could help us turn a sad tragedy into a possibility for hope. Can we go for that?

GEM of TKI

kairosfocus
September 6, 2010 at 01:16 AM PDT
MathGrrl: "I still don't see how, according to your own definitions, the mutations that arose during Lenski's long running experiment did not create CSI."

I want to answer this point simply and explicitly, so that it is really clear. You certainly know that the role of complexity in any CSI definition is to rule out simple changes whose complexity is in the range of a random-model explanation. That requires a threshold of complexity to define CSI. The threshold is usually taken at a very high level of complexity (or at a very low level of probability, which is the same), just to be "on the safe side". The threshold can vary according to the context, and therefore to the possible probabilistic resources of the model we are considering. Dembski has spoken of a UPB, at about 500 bits of complexity. That's because he is reasoning at the level of the whole known universe, and even in that context he wants to stay "very safe", to avoid any possible false positive. Others, even more generously, have pushed that threshold to 1000 bits. I maintain that, for any realistic model of biological information, where we certainly have potential probabilistic resources in a much lower range, a threshold of, say, 150 bits is much more appropriate.

Now, if we are speaking of a transition where the change in functional information is, say, of two AAs, as is likely in the Lenski model or in the nylonase model, we are in the range of less than 10 bits. I think nobody in his right mind has ever suggested that kind of threshold for CSI. So, it is very simple: a transition between two states where the change is of less than 10 bits can never be defined as a transition which has generated new CSI. As far as I know, the threshold I have suggested (150 bits) for biological models is the lowest which has ever been suggested. You can stick to that in your discussions with me, if you want, but not to any lower threshold value.

So, just to be clear: to show me that a transition has generated CSI, it is not only necessary that the transition has generated a new function (which in the case of Lenski and nylonase can reasonably be assumed), but also that the transition itself implied a functional complexity of at least 150 bits. IOW, that at least 35 independent AA mutations were necessary for the new function to appear. As you can see, I am keeping the threshold very low. I like risk. But a change of two AAs has never been a complexity threshold for anyone.

gpuccio
September 5, 2010 at 10:56 PM PDT
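The per-amino-acid arithmetic behind gpuccio's "less than 10 bits" and "at least 35 AA mutations" figures can be checked directly. A minimal sketch, assuming each AA position is drawn uniformly from the 20 standard amino acids (a simplification that ignores the genetic code and mutation biases):

```python
import math

# Each fully specified amino-acid position carries log2(20) ~ 4.32 bits
# under a uniform-choice assumption.

BITS_PER_AA = math.log2(20)

print(f"2 AAs: {2 * BITS_PER_AA:.1f} bits")             # ~8.6, "less than 10 bits"
print(f"150 bits: {150 / BITS_PER_AA:.1f} AAs needed")  # ~34.7, the "at least 35" figure
```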
MathGrrl: Please, explain better what is not clear to you about the quantification of dFSCI. In the two posts I linked above, I have answered this point in detail. Would you please comment in some detail on them? Again, you speak of citrate digestion. I quote from my first post:

"Yes, but it is a multiple function, a system, and it depends on many different simple functions. Most of those functions have not varied in Lenski's experiment (probably). If Behe is right, the only function which has varied is in the transport system, and that variation is probably due to a tweaking of an existing transport protein. And that tweaking is probably due to very few mutations. We are, IOW, in the same scenario as nylonase: microevolution. In this scenario, we can reason about CSI/dFSCI only after we know in detail what has changed at the molecular level, what biochemical function exactly has appeared which was not present before, and in what protein. This is an important point. We cannot reason about CSI if we do not know the details of what we are observing (or, at least, the details of the model we are evaluating: more on that after). CSI is not philosophy: it is science."

So, again, to compute the variation in functional complexity in a transition, you must know in detail what has changed at the molecular level. Even if we probably don't know that in the case of Lenski's system, it is extremely likely that the mutations are only a few (maybe even one or two). That rules out that new CSI has been generated. And again, the new function which has emerged is probably permeability to citrate, and not the ability to digest it.

To compute CSI in a transition, we must know in some detail the starting state and the final state at the molecular level. That's why I ask you again what your model is for the emergence of a new protein domain in natural history, an event which certainly occurred at least a few thousand times independently. If you would offer at least one hypothetical model, such as: protein domain superfamily A emerged from the existing, different protein domain superfamily B at this point of natural history, in approximately this time, we can make a rough computation of the dFSCI variation implied in the transition. By the way, an explicit method to compute the functional information variation in a transition is outlined in the Durston paper, many times linked, on which I don't remember you ever having commented. So, welcome to the discussion again, but please be specific and answer our specific comments. In detail.

gpuccio
September 5, 2010 at 10:33 PM PDT
MathGrrl, I think the point we're all attempting to make here is that scientific questions often do not lend themselves to rigorous mathematical equations and definitions as you seem to require of ID. As I pointed out in an earlier post, Darwinists do not entertain that level of rigor on many (or most) of their arguments. FSCI is a useful term to a point, but perhaps not to the point you require. That does not render it scientifically useless. At the same time, neither is "evolution" or even "RV + NS," or most of the terminology by which Darwinists maintain the science of ToE. Earth and biological science is not necessarily mathematical. Math is a much more rigorous and exact discipline than science. Science attempts to arrive at the best explanation (especially when we're dealing with events which neither you nor I have personally witnessed). When math can be applied to scientific arguments, we are all the better for it, but quite often phenomena do not currently lend themselves to such precision. They may eventually, and we should attempt to be as precise as we can, but in order for you to understand ID's basic arguments, such precision is not "essential."

I will give you this much: quantification may be useful, as some of the current ID research attempts to do just that. It may be useful to be able to quantify the level of FSCI in DNA, but at this point, I don't believe it is "essential," as you seem to believe it is. We don't quantify the exact amount of FSCI in the posts we offer on this board, yet I don't think any of us would doubt that the FSCI is present.

From the upthread discussion:

BA @ 4: "The fact is that there is not one single instance of the purely material processes creating any functional information whatsoever!!!"

TGP: "This is true and I'll take a second to tell Mathgrrl why it will always be true."

MG: "from your post upthread are meaningless. We need a referent to objective reality for that claim to be assessed."

We have the referent objective reality available to us. BA stated exactly what that reality is. The problem we face is when the question is begged regarding the ability of RV + NS to produce CSI. The typical Darwinian answer is contrary to our experience; thus, it is reasonable to object. It seems to me that many Darwinists attempted to attack Behe's very basic definition of "irreducible complexity" not long ago with the same sort of rhetoric: "IC can't be quantified, therefore it is useless." Then they attempted to demonstrate that Behe's mousetrap is not irreducibly complex, thus negating their earlier objection. If IC can't be quantified, then their attempt to refute it is meaningless. Of course IC can be quantified. It is quantified in the precise definition Behe supplied for it. But they failed to apply Behe's precise definition in their attempted refutation, preferring to redefine what he meant so they could attack it. Then they applied a similar argument to the "objective reality" of the bacterial flagellum, and failed once again.

I understand your caution in wanting to know exactly what we mean by CSI or, further, FSCI, but those terms are defined as rigorously as can be expected for the moment. And if you look at the history of ID terminology, proponents attempt to redefine those terms more rigorously as reasonable objections occur. This is why we now speak of FSCI as more quantifiable than simply CSI. They may not be quantifiable as far as precise numeric equations as to what FSCI contains, but they are quantifiable as far as being able to distinguish FSCI from non-FSCI (Shannon information, for example).

Here's a thought experiment for you. What is complex information? What sort of information fits with that definition? Well, Shannon information fits as well as CSI fits. Both are forms of complex information. What is complex specified information? Well, Shannon information does not fit, because it is not specified. The information in this post fits because it meets all three criteria: it is information, it is complex, and it is specified. What is functionally specified complex information? Shannon? No. The information in this sentence? Yes. All printed information that forms sentences? Not necessarily. Not all sentences are necessarily functional. For example: "Judge bananas shut smoothly on mountain oceans." It's a complete sentence, is complex and specified, but it performs no function relevant to reality. Well, you could argue that it forms a function as an example of what I mean, but by itself, without reference to my argument here, it serves no function. You see then how the definition is quantified? It weeds out certain types of information. That may be as quantifiable as we presently can be. But the definition is useful because it relates to our experience.

In our experience, we encounter FSCI only in reference to purposeful conscious applications, and not in reference to non-purposeful, non-conscious applications such as RV + NS, or any sort of chance happening, such as throwing magnetic letters at a refrigerator, or my cat walking across my computer keyboard and producing Shannon information. Any argument that we encounter RV + NS producing FSCI in biological systems is simply question-begging at this point. This is not to say that you couldn't demonstrate how RV + NS can produce FSCI, but in our present experience, it has not been demonstrated.

CannuckianYankee
September 5, 2010 at 07:23 PM PDT
Mathgrrl, why do you not address this post here?,,, https://uncommondescent.com/darwinism/media-mum-about-deranged-darwinist-gunman/#comment-363043 ,,, In which the mathematical definition of functional information, and how it relates to molecular biology, is laid out in detail? That is exactly what you asked for, is it not? Do you truly believe Lenski's 'coddled' E. coli, which are kept isolated from the wild strain, are proof of a gain in functional information? It is not even close, Mathgrrl, even though it is sold as supposedly irrefutable proof of evolution! The following articles refute Richard E. Lenski's 'supposed evolution' of the citrate ability for the E. coli bacteria after 20,000 generations in his 'Long Term Evolution Experiment' (LTEE), which has been going on since 1988:

Multiple Mutations Needed for E. Coli - Michael Behe
Excerpt: As Lenski put it, "The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions." (1) Other workers (cited by Lenski) in the past several decades have also identified mutant E. coli that could use citrate as a food source. In one instance the mutation wasn't tracked down. (2) In another instance a protein coded by a gene called citT, which normally transports citrate in the absence of oxygen, was overexpressed. (3) The overexpressed protein allowed E. coli to grow on citrate in the presence of oxygen. It seems likely that Lenski's mutant will turn out to be either this gene or another of the bacterium's citrate-using genes, tweaked a bit to allow it to transport citrate in the presence of oxygen. (He hasn't yet tracked down the mutation.),,, If Lenski's results are about the best we've seen evolution do, then there's no reason to believe evolution could produce many of the complex biological features we see in the cell.
http://www.amazon.com/gp/blog/post/PLNK3U696N278Z93O

Lenski's e-coli - Analysis of Genetic Entropy
Excerpt: Mutants of E. coli obtained after 20,000 generations at 37°C were less "fit" than the wild-type strain when cultivated at either 20°C or 42°C. Other E. coli mutants obtained after 20,000 generations in medium where glucose was their sole catabolite tended to lose the ability to catabolize other carbohydrates. Such a reduction can be beneficially selected only as long as the organism remains in that constant environment. Ultimately, the genetic effect of these mutations is a loss of a function useful for one type of environment as a trade-off for adaptation to a different environment.
http://www.answersingenesis.org/articles/aid/v4/n1/beneficial-mutations-in-bacteria

Lenski's work actually did do something useful in that it proved that 'convergent evolution' is impossible, because it showed that evolution is 'historically contingent'. The following video and article make this point clear:

Lenski's Citrate E-Coli - Disproof of Convergent Evolution - Fazale Rana - video
http://www.metacafe.com/watch/4564682

The Long Term Evolution Experiment - Analysis
Excerpt: The experiment just goes to show that even with historical contingency and extreme selection pressure, the probability of random mutations causing even a tiny evolutionary improvement in digestion is, in the words of the researchers who did the experiment, "extremely low." Therefore, it can't be the explanation for the origin and variety of all the forms of life on Earth.
http://www.scienceagainstevolution.org/v12i11f.htm

Upon closer inspection, it seems Lenski's 'coddled' E. coli are actually headed for genetic meltdown instead of evolving into something better.

New Work by Richard Lenski
Excerpt: Interestingly, in this paper they report that the E. coli strain became a "mutator." That means it lost at least some of its ability to repair its DNA, so mutations are accumulating now at a rate about seventy times faster than normal.
http://www.evolutionnews.org/2009/10/new_work_by_richard_lenski.html

Is this your ace in the hole, Mathgrrl? And if your supposedly strongest piece of evidence falls completely apart upon cursory examination, what does this say about all the other evidence you have been brainwashed with?

Further notes: In fact, trying to narrow down an actual hard number for the truly beneficial mutation rate, one that would actually explain the massively integrated machine-like complexity of proteins we find in life, is what Dr. Behe did in the following book:

The Edge Of Evolution - Michael Behe - Video Lecture
http://www.c-spanvideo.org/program/199326-1

A review of The Edge of Evolution: The Search for the Limits of Darwinism
The numbers of Plasmodium and HIV in the last 50 years greatly exceed the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution. This suggests that mammals could have "invented" little in their time frame. Behe: 'Our experience with HIV gives good reason to think that Darwinism doesn't do much -- even with billions of years and all the cells in that world at its disposal' (p. 155).

Dr. Behe states in The Edge of Evolution, on page 135: "Generating a single new cellular protein-protein binding site (in other words, generating a truly beneficial mutational event that would actually explain the generation of the complex molecular machinery we see in life) is of the same order of difficulty or worse than the development of chloroquine resistance in the malarial parasite." That order of difficulty is put at 10^20 replications of the malarial parasite by Dr. Behe. This number comes from direct empirical observation.

Richard Dawkins' The Greatest Show on Earth Shies Away from Intelligent Design but Unwittingly Vindicates Michael Behe - Oct. 2009
Excerpt: The rarity of chloroquine resistance is not in question. In fact, Behe's statistic that it occurs only once in every 10^20 cases was derived from public health statistical data, published by an authority in the Journal of Clinical Investigation. The extreme rareness of chloroquine resistance is not a negotiable data point; it is an observed fact.
http://www.evolutionnews.org/2009/10/richard_dawkins_the_greatest_s.html

Thus, the actual rate for 'truly' beneficial mutations, which would account for the staggering machine-like complexity we see in life, requires far in excess of one hundred billion billion mutational events. So this one-in-a-thousand to one-in-a-million number for 'truly' beneficial mutations is actually far, far too generous an estimate for evolutionists to use for beneficial mutations. In fact, from consistent findings such as these, it is increasingly apparent that the principle of Genetic Entropy is the overriding foundational rule for all of biology, with no exceptions at all, and belief in 'truly' beneficial mutations is nothing more than wishful speculation on the materialist part which has no foundation in empirical science whatsoever.

Evolution vs. Genetic Entropy - video
http://www.metacafe.com/watch/4028086

The foundational rule of Genetic Entropy for biology, which can draw its foundation in science from the twin pillars of the Second Law of Thermodynamics and the Law of Conservation of Information (Dembski, Marks, Abel), can be stated something like this: "All beneficial adaptations away from a parent species for a sub-species, which increase fitness to a particular environment, will always come at a loss of the optimal functional information that was originally created in the parent species' genome."

bornagain77
September 5, 2010 at 06:24 PM PDT
Upright Biped,
Mathgirl, your response to TGPeeler in 51 is so completely self serving that it’s a little difficult to see it as anything but a matter of pure rhetoric. Which it is.
Asking for definitions in order to understand someone's argument is "pure rhetoric"? Actually, I am doing tgpeeler the courtesy of taking his points seriously and spending my time and intellectual effort to understand them better. You, on the other hand, are being remarkably uncivil by casting baseless aspersions on my intentions.
You are more than welcome to remedy that situation by applying your own objection to your own objection, and simply answering the question upthread from CY. Perhaps that will provide a certain amount of perspective to the rhetoric. “Could you please provide a mathematical definition of “definition,” so that any interested observer can objectively measure it, and thus know exactly what you’re referring to?” This question, by your own standards, is a completely legitimate question, and should be answered. So please answer it.
I could go down the rathole you're attempting to dig by explaining how mathematicians define terms, but that would simply distract from the main point of the discussion (which, I suspect, is your goal). The point is that terms of art like "functional information" must be rigorously defined before they are used. The inability of tgpeeler or yourself to do so means that any claims about functional information are quite literally meaningless. If you would like to respond to my courtesy with courtesy, I would be delighted to continue the discussion with you. If, instead, you want to persist in your attempts at distraction, I'll spend my time with the more civil participants here.

MathGrrl
September 5, 2010 at 05:20 PM PDT
gpuccio! Hello again! I did indeed drop out of our previous discussion due to the pressures of the first weeks of the new semester, for which I hope you will accept my apologies.
I think we had some discussions about functional information recently. I would suggest you look at my last answers to you here: https://uncommondescent.com.....ent-362414 and here: https://uncommondescent.com.....ent-362415 to which, I believe, you have never responded.
Thank you for the links and for the time you spent to write those posts.
And if there is any other aspect of the quantification of functional information in proteins which you would like to discuss, let’s do it.
Great! After reading your linked posts, it seems to me that quantification is exactly what is missing. I still don't see how, according to your own definitions, the mutations that arose during Lenski's long-running experiment did not create CSI. In fact, we seem to agree that understanding the evolutionary history of a particular function is essential to determining the amount of CSI in the underlying proteins and genome. That suggests that we should be in agreement that simply computing four to the power of the length of the genome, or twenty to the power of the length of the protein, is not relevant to real-world biological systems. However, we seem not to be in agreement on that point.

Quantification is, therefore, essential to making any progress in our discussion. If you could walk me through a step-by-step calculation of CSI for a real-world biological function (e.g. citrate digestion in E. coli or the ability to digest nylon), I think we could both learn a lot. That would enable me to independently calculate the CSI of other biological systems and determine if your claims about the inability of evolutionary mechanisms to create CSI are correct. Thanks again for coming back into the discussion.

MathGrrl
September 5, 2010 at 05:12 PM PDT
Parts 6 and 7 of the Norman Geisler video may be very interesting to many on UD, because he adds to the C.S. Lewis argument that God exists because there is an inherent need in man for God, quoting the leading atheists themselves, and as was amply demonstrated by the deranged Darwinist:

Norman Geisler - The New Atheism (6/8)
http://www.youtube.com/watch?v=d_8aGYz6PS8

Brooke Fraser - "C S Lewis Song"
http://www.youtube.com/watch?v=GHpuTGGRCbY

bornagain77
September 5, 2010 at 04:54 PM PDT
Cabal,
By replacing “Evolutionary theory” with “Intelligent Design”, his statement becomes more relevant with respect to reality.
By replacing "reality" with "fantasy" your statement becomes more relevant to reality. In my opinion, as far as the level of discourse in your comment goes, if that is the best evolutionists have to offer, I don't see any future for evolution. I don't even see the value of evolution in the present either. Except of course for its evolved purpose of being a wedge.

Clive Hayden
September 5, 2010 at 01:22 PM PDT
This video is very informative about 'New Atheists', which may be of interest to some, since we have dealt with so many Darwinists who are 'New Atheists' here on UD.

Norman Geisler - The New Atheism (1/8)
http://www.youtube.com/watch?v=fS6UL5BvIC0

bornagain77
September 5, 2010 at 11:19 AM PDT
Mathgirl, your response to TGPeeler in 51 is so completely self serving that it's a little difficult to see it as anything but a matter of pure rhetoric. Which it is. You are more than welcome to remedy that situation by applying your own objection to your own objection, and simply answering the question upthread from CY. Perhaps that will provide a certain amount of perspective to the rhetoric. "Could you please provide a mathematical definition of "definition," so that any interested observer can objectively measure it, and thus know exactly what you're referring to?" This question, by your own standards, is a completely legitimate question, and should be answered. So please answer it.

Upright BiPed
September 5, 2010 at 10:52 AM PDT
MG: I suggest that after you look at GP's links, you may find it helpful to then scroll back up to 42 - 45 above, where several posts did respond to your request. My own simple note on the easiest metric of functional information, and its significance in light of your own earlier post, is in that cluster. (But you will need to think a bit on the meaning of "definition." On this one, Wiki does a reasonable job here. NWE improves it, here. Pay particularly close attention to the concept of ostensive definition, as this comes closest to capturing how we form a concept by abstracting from exemplars, then refine its boundaries through precising descriptions.) Thereafter, I suggest 25 - 28 in the Weak Argument Correctives and relevant entries in the glossary. My own note, as linked in the LH column through my handle, will speak to several aspects of the information issues at an initial undergraduate level, and will also, in Appendix 1, tie in classical and statistical thermodynamics. Beyond that, you may wish to follow the links from App 1 in that note to the TMLO discussion. (Somewhere there is also a link to the whole of the Thaxton et al book online.) These will also direct you onward to some fairly serious discussions. And the publications at the Evo Info Lab by Marks and Dembski will be of help.

GEM of TKI

kairosfocus
September 5, 2010 at 10:43 AM PDT
MathGrrl: I think we had some discussions about functional information recently. I would suggest you look at my last answers to you here: https://uncommondescent.com/intelligent-design/intelligent-design-and-the-demarcation-problem/#comment-362414 and here: https://uncommondescent.com/intelligent-design/intelligent-design-and-the-demarcation-problem/#comment-362415 to which, I believe, you have never responded. And if there is any other aspect of the quantification of functional information in proteins which you would like to discuss, let's do it.

gpuccio
September 5, 2010 at 10:10 AM PDT
tgpeeler,
Mathgrrl @ 39 “Could you please provide a mathematical definition of “functional information” so that any interested observer can objectively measure it?” No. I can’t. Regrets. You are free to try
It's your term, so it's up to you to define it. Until you do, statements like this one
BA @ 4 “The fact is that there is not one single instance of the purely material processes creating any functional information whatsoever!!!” This is true and I’ll take a second to tell Mathgrrl why it will always be true.
from your post upthread are meaningless. We need a referent to objective reality for that claim to be assessed. I'm genuinely interested in understanding the information theory based arguments that are seen frequently here on UD. If you decide to define your terms, I'd love to continue the discussion.

MathGrrl
September 5, 2010 at 09:52 AM PDT
Cabal wrote: In my opinion, as far as the level of discourse in this thread goes; if that is the best ID proponents have to offer, I don’t see any future for ID. I don’t even see any value of ID in the present either.
First of all, thank you for your participation. Despite the fact that I intensely disagree, I think it is valuable for readers at UD to hear the best arguments the anti-ID side has to offer. I point out the irony of this claim:
Cabal wrote: In my opinion, as far as the level of discourse in this thread goes; if that is the best ID proponents have to offer, I don’t see any future for ID. I don’t even see any value of ID in the present either.
This thread is not the best ID has to offer. But the irony is that you say you don't see any value in ID in the present day, which implicitly suggests you see value in Darwinism. The irony is that if Darwinism (as defined by Dawkins and Coyne) is true, then there is no value in anything. To quote Dawkins, there is "no evil, no good, nothing but blind, pitiless indifference". Darwinists defend their view as if the universe will be better if Darwinism is true and if humanity accepts Darwinism as true. That is the height of non-sequiturs! If Darwinism is true, then there is inherently no metric for what is better or worse, and thus there is no logical reason to defend Darwinism. That's what I find astonishing about Dawkinsian Darwinists: they are not able to logically demonstrate why acceptance of Darwinism is good, since, by definition, Dawkinsian Darwinism implies the notion of "good" is only an illusion. And it is amazing to me that a philosophy premised on the pointlessness of human existence should be defended with such vigor, as if it were the holy grail. This is the height of irrationality. The value of ID is that if it is true it opens the possibility that there may be an Intelligent Designer, and though Intelligent Design does not necessarily imply God's existence, it certainly makes the possibility compelling. To quote Dawkins:
the presence of a creative deity in the universe is clearly a scientific hypothesis. Indeed, it is hard to imagine a more momentous hypothesis in all of science.
Even if the chance of ID being true is remote, the payoff could be infinite. If Darwinism is true, then humanity is screwed for sure, and there is no logical way to demonstrate that acceptance of Darwinism is a "good" thing! And that is the lack of logic the deranged Darwinist gunman demonstrated. If Darwinism is true, then even if the human race were to go extinct, how is this logically a bad thing, since extinction in the Darwinist world is a good thing? The lack of rationality by Lee is displayed in various incarnations by Darwinists like Dawkins defending the "value" of Darwinism. Finding value in Darwinism is like the search for square circles in Euclidean geometry. It is a logically contradictory concept; thus it is ironic that a Darwinist attacks ID as having no value, since Darwinism on its own terms is demonstrably valueless. One might appeal to theistic evolution as a compromise. But as Coyne pointed out, theistic evolution is really another form of creationism, and thus doesn't really help the cause of pure Darwinism. So, to hear a Darwinist use teleological terms like "value" is like Gunman Lee thinking he's doing "good" by taking people hostage to prevent human extinction. On what Darwinist grounds is any form of extinction a bad thing (even human extinction)? In Darwinism, extinction is a good thing!

scordova
September 5, 2010 at 09:39 AM PDT
btw, the issue with Wikipedia is that it purports to be an encyclopedia of factual information (which quite often it is, and quite often not, depending on the level of controversy on a subject). The problem is that the "factual information" can change from one day to the next. Anytime an ID advocate attempts to update an article concerning ID, it usually changes that very day to reflect the POV of Darwinists. So you're not getting all the facts on ID, nor on evolution. All you're getting is opinion and interpretation of facts. Hence, an extremely biased approach to fact-finding and reporting. I like Wiki for certain information, such as music history and the biographies of classical composers, for example, but when it comes to the life sciences, and the biographies of persons involved in controversial issues, I need to be much more cautious. In fact, I have two family members (a brother and an uncle) of some note, who have Wiki articles on them. I've contributed to those articles. Not once has any of my information been changed or disputed. Yet I can imagine that Dr. Dembski, for example, can't get a "fair and balanced" biographical article written about him without the nasty biases of Darwinists creeping in.

CannuckianYankee
September 4, 2010 at 10:18 AM PDT
mathgrrl (and tg, stephenb),

"ID advocates want clarity; Darwinist partisans want confusion. Clarity is Darwinism's greatest enemy. Once everyone understands that there is some evidence for evolution but no evidence at all for Darwinism, Darwinists will be out of business. Thus, they must be dishonest to survive. Wikipedia, which also seeks to obfuscate, can hardly be trusted to illuminate the issue. When in doubt, read our "frequently raised objections" section to understand what ID does and does not argue."

This is exactly right. mathgrrl, since even young earth creationists accept "evolution," perhaps you could be as demanding towards definitions when it comes to "evolution" as you seem to be towards ID concepts such as "functional information." I mean really, ID advocates are much more precise in what they mean than Darwinists, who think "evolution" can be used loosely to show "change over time," and that simply because organisms change over time, this means that such change occurred through unplanned natural processes. Are you satisfied that when Darwinists refer to "evolution" they mean the same thing as when young earth creationists refer to "evolution"? If not, then I think you can see why we use the terms "Darwinist" and "Darwinism" to refer to something specific; something more "mathematically definitional," to use your term. So we have two terms: "evolution" = biological change over time; "Darwinism" = evolution via random variation + natural selection. The two are not the same. Please be consistent when you demand that we be precise and clear.

CannuckianYankee
September 4, 2010, 09:56 AM PDT
---mathgrrl: "Wikipedia summarizes the problems with the term here [Darwinism]." You do not seem to appreciate what all the fuss is about. ID does not challenge [a] macro-evolution but argues that [b] unguided, naturalistic processes such as random variation and natural selection did not drive the process. Darwinists make claims for [b] but provide evidence only for [a], hoping that no one will notice the difference. Indeed, they have no evidence at all to support [b]. In order to promote this farce and obfuscate the issue, they purposely use the imprecise word "evolution," which can be taken either way. ID holds that, if macro-evolution occurred, it was, at least in part, designed or programmed to unfold according to the prior intent of a designer -- that it had "man [forgive the gender reference] in mind." Darwinism insists that no intent, design, or program is necessary -- "that evolution is a purposeless, mindless process that did NOT have man in mind." Thus, ID advocates use the word "Darwinism" to refer to the stronger claim of unguided evolution, as opposed to the weasel word "evolution," which can be shifted and morphed as needed. ID advocates want clarity; Darwinist partisans want confusion. Clarity is Darwinism's greatest enemy. Once everyone understands that there is some evidence for evolution but no evidence at all for Darwinism, Darwinists will be out of business. Thus, they must be dishonest to survive. Wikipedia, which also seeks to obfuscate, can hardly be trusted to illuminate the issue. When in doubt, read our "frequently raised objections" section to understand what ID does and does not argue.StephenB
September 4, 2010, 09:37 AM PDT
Mathgrrl @ 39: "Could you please provide a mathematical definition of 'functional information' so that any interested observer can objectively measure it?" No, I can't. Regrets. You are free to try. To take CY one step further, while you're at it, perhaps you could also provide a mathematical definition of each word in the phrase "Could you please provide ... so that any interested observer can objectively measure it?" That way I might be able to understand your question, because right now I think it's irrelevant. Thanks.tgpeeler
September 4, 2010, 09:20 AM PDT
Mathgrrl, as kairos pointed out, you have, by your own intelligence, generated far more information on this blog than can reasonably be expected from the material processes of the entire universe over its entire history. You may say, "Well, given enough time evolution can reach unmatched levels of functional information/complexity in a small-step-by-small-step fashion." Yet the small steps that you are trying to traverse in your evolutionary scenario are shown to be anything but 'small':

Evolution vs. Functional Proteins - Doug Axe - Video http://www.metacafe.com/watch/4018222

Estimating the prevalence of protein sequences adopting functional enzyme folds - Doug Axe - Excerpt: Starting with a weakly functional sequence carrying this signature, clusters of ten side-chains within the fold are replaced randomly, within the boundaries of the signature, and tested for function. The prevalence of low-level function in four such experiments indicates that roughly one in 10^64 signature-consistent sequences forms a working domain. Combined with the estimated prevalence of plausible hydropathic patterns (for any fold) and of relevant folds for particular functions, this implies the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77, adding to the body of evidence that functional folds require highly extraordinary sequences. http://www.ncbi.nlm.nih.gov/pubmed/15321723

Book Review - Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009. Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren't chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse -- almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions -- they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome. So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it's a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail. http://www.fourmilab.ch/documents/reading_list/indices/book_726.html

If this isn't enough to make you doubt the sufficiency of the power of "almighty" evolution to produce life on earth, Mathgrrl, the fact is that genetic reductionism is not even true in the first place, i.e. mutations to DNA are not solely responsible for body-plan morphogenesis:

Stephen Meyer - Functional Proteins And Information For Body Plans - video http://www.metacafe.com/watch/4050681

The Origin of Biological Information and the Higher Taxonomic Categories - Stephen Meyer: "Neo-Darwinism seeks to explain the origin of new information, form, and structure as a result of selection acting on randomly arising variation at a very low level within the biological hierarchy, mainly, within the genetic text. Yet the major morphological innovations depend on a specificity of arrangement at a much higher level of the organizational hierarchy, a level that DNA alone does not determine. Yet if DNA is not wholly responsible for body plan morphogenesis, then DNA sequences can mutate indefinitely, without regard to realistic probabilistic limits, and still not produce a new body plan. Thus, the mechanism of natural selection acting on random mutations in DNA cannot in principle generate novel body plans, including those that first arose in the Cambrian explosion." http://eyedesignbook.com/ch6/eyech6-append-d.htmlbornagain77
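[To make the arithmetic behind the figures quoted above explicit, here is a minimal Python sketch. Every number in it is taken from the excerpts themselves (Axe's 1-in-10^77 fold prevalence, the roughly 500-bit random-search limit, the 582,970 base-pair minimal genome); the variable names are mine and purely illustrative.]

import math

# Axe's estimate (from the excerpt above): roughly 1 in 10^77 sequences
# performs a specific function via any domain-sized fold.
p_functional_fold = 1e-77
bits_for_fold = -math.log2(p_functional_fold)  # about 255.8 bits

# The review's random-search limit: the universe's history allows roughly
# 10^150 trials, i.e. about 500 bits of structured, functional information.
universal_bound_bits = math.log2(10**150)      # about 498.3 bits

# Genome of the simplest known free-living organism, per the excerpt:
# 582,970 base pairs at 2 bits per nucleotide.
genome_bits = 582970 * 2                       # 1,165,940 bits

print(f"Bits implied by the 1-in-10^77 fold prevalence: {bits_for_fold:.1f}")
print(f"Random-search limit: {universal_bound_bits:.1f} bits")
print(f"Minimal genome capacity: {genome_bits} bits")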
September 4, 2010, 08:12 AM PDT
PS: oops, herekairosfocus
September 4, 2010, 07:42 AM PDT
MG: When you wrote the above post you supplied a sample of digitally coded, linguistically functional, complex information. That is an instance in point. 600 128-state ASCII characters' worth, or 128^600 possible configs, about 2.118*10^1264. That is vastly more states than the number of Planck-time states scannable by the roughly 10^80 atoms of our observed cosmos across its working life. In other words, if the universe were converted into impossibly fast monkeys, keyboards and banana plantations to support them, it could not produce that text even once in the thermodynamically reasonable lifespan of the cosmos. But you tossed it off -- by using intelligently directed contingency -- in a few minutes.

The specific function was recognisable and can be assigned a dummy variable of binary state. The complexity threshold, 1,000 bits of storage capacity, is even easier to assign a similar dummy variable. Once complex and specific function is identified, we multiply the two dummies by the bit storage and we have a simple metric. A similar metric would obtain for, say, the discs of data I just used to continue reloading this PC after a config sys wipeout, though the function there would be algorithmic. A similar metric would easily extend to integrated network systems, as is discussed here. Beyond that simple metric, you can look at various metrics that have been developed in recent years. But functional bits are as familiar as the PC industry and the internet.

As to the definitionitis game, we simply note that life itself -- a condition of our doing physics or information theory or whatever, and the subject of study of biology -- has no clean-cut definition. I suspect you remember the way basic physics definitions run in circles after a certain point [I recall my exchange with my 4th form physics teacher on that], until you resolve the matter by pointing to specific cases, describing them, and saying that whatever is sufficiently close to those cases, we accept. Such ostensive definition by example and family resemblance is the basis of practical definitions. Your last post is of course a case in point.

Having put this tangential issue to bed -- if such tangents keep up, they become a red herring rhetorical tactic -- we need to return to the main issue for the thread: media bias that is revealing of ideological commitments and issues. Issues that invite us to look into the Malthusian roots of Darwinism, and to examine why it would be helpful to deal with those issues so we can get on with real solutions -- why not look at (was it?) 27 above -- instead of playing somebody's create-a-perceived-crisis-to-get-an-agenda-mainstreamed game. G'day GEM of TKIkairosfocus
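[As a minimal sketch of the simple dummy-variable metric just described: the function name, the 7-bits-per-character encoding, and the default threshold below are my own illustrative choices, not a canonical definition.]

import math

def functional_bits(text, specific=True, threshold_bits=1000):
    # B: storage capacity, at 7 bits per 128-state ASCII character
    bits = len(text) * 7
    # S: dummy variable, 1 if the string is judged functionally specific
    S = 1 if specific else 0
    # C: dummy variable, 1 if capacity meets the complexity threshold
    C = 1 if bits >= threshold_bits else 0
    return S * C * bits  # the simple metric: S x C x B

# The 600-character post discussed above:
configs = 128 ** 600                                          # possible configurations
print(f"Config space: about 10^{int(math.log10(configs))}")  # ~10^1264
print(f"Metric value: {functional_bits('x' * 600)} bits")    # 4200 bits

[On this rough rubric, a string below the 1,000-bit threshold, or one not judged specific, scores zero; the 600-character post scores its full 4,200 bits and is flagged as complex and specified.]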
September 4, 2010, 07:39 AM PDT
Mathematically Defining Functional Information In Molecular Biology - Kirk Durston - short video http://www.metacafe.com/watch/3995236 Entire video: http://vimeo.com/1775160

Functional information and the emergence of bio-complexity - Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak - Abstract: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define 'functional information,' I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA-GTP binding energy), I(Ex) = -log2[F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function > Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of function. http://genetics.mgh.harvard.edu/szostakweb/publications/Szostak_pdfs/Hazen_etal_PNAS_2007.pdf

Measuring the functional sequence complexity of proteins - Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors - 2007 - Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families... http://www.tbiomed.com/content/4/1/47

Here is the fitness test that you (or any evolutionist) must pass to concretely establish that new functionality (complexity/information) which evolved by supposedly purely material processes was not in fact a beneficial adaptation derived from preexisting functional information already inherent in the genome; i.e., you must show that the new functionality of a beneficial adaptation did in fact violate the principle of genetic entropy. For a broad outline of the 'fitness test' required to show a violation of the principle of genetic entropy, please see the following video and articles:

Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video http://www.metacafe.com/watch/3995248

Testing the Biological Fitness of Antibiotic Resistant Bacteria - 2008 http://www.answersingenesis.org/articles/aid/v2/n1/darwin-at-drugstore

Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology - Excerpt: it (an antibiotic resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution. http://www.evolutionnews.org/2010/03/thank_goodness_the_ncse_is_wro.htmlbornagain77
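[The Hazen et al. formula quoted above reduces to a one-line calculation. A minimal Python sketch follows; the example fraction (1 in 10^11 of random RNA sequences binding GTP at the required energy) is my own illustrative assumption, not a figure from the paper.]

import math

def functional_information(fraction_functional):
    # I(Ex) = -log2 F(Ex), in bits, where F(Ex) is the fraction of all
    # possible configurations achieving at least the degree of function Ex
    return -math.log2(fraction_functional)

# Illustrative assumption: 1 in 10^11 random sequences meets the threshold
print(f"{functional_information(1e-11):.1f} bits")  # about 36.5 bits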
September 4, 2010, 07:37 AM PDT
sorry, I meant to say "mathematical definition of 'mathematical definition.'" :)CannuckianYankee
September 4, 2010, 07:36 AM PDT
mathgrrl, Could you please provide a mathematical definition of "definition," so that any interested observer can objectively measure it, and thus know exactly what you're referring to?CannuckianYankee
September 4, 2010, 07:27 AM PDT