
Oh, you mean, there really is a bias in academe against common sense and rational thought?


Jonathan Haidt recently decided, for some reason, to point out the obvious to a group of American academics: that they are overwhelmingly modern materialist statists (liberals).

He polled his audience at the San Antonio Convention Center, starting by asking how many considered themselves politically liberal. A sea of hands appeared, and Dr. Haidt estimated that liberals made up 80 percent of the 1,000 psychologists in the ballroom. When he asked for centrists and libertarians, he spotted fewer than three dozen hands. And then, when he asked for conservatives, he counted a grand total of three.

“This is a statistically impossible lack of diversity,” Dr. Haidt concluded, noting polls showing that 40 percent of Americans are conservative and 20 percent are liberal. In his speech and in an interview, Dr. Haidt argued that social psychologists are a “tribal-moral community” united by “sacred values” that hinder research and damage their credibility — and blind them to the hostile climate they’ve created for non-liberals.

Why anyone would bother pointing that out, I don’t know. It’s not a bias against conservatives, anyway; it’s a bias against rationality, which they don’t believe in. Our brains, remember, are shaped for fitness, not for truth. Indeed, these are the very people who channel Barney Rubble and Fred Flintstone for insights into human psychology, and anyone who doubts the validity of such “research” should just shut up and pay their taxes, right?

Well, his talk attracted the attention of John Tierney at the New York Times (February 7, 2011), who drew exactly the right conclusion (for modern statists and Darwinists):

“If a group circles around sacred values, they will evolve into a tribal-moral community,” he said. “They’ll embrace science whenever it supports their sacred values, but they’ll ditch it or distort it as soon as it threatens a sacred value.” It’s easy for social scientists to observe this process in other communities, like the fundamentalist Christians who embrace “intelligent design” while rejecting Darwinism.

[ … ]

For a tribal-moral community, the social psychologists in Dr. Haidt’s audience seemed refreshingly receptive to his argument. Some said he overstated how liberal the field is, but many agreed it should welcome more ideological diversity. A few even endorsed his call for a new affirmative-action goal: a membership that’s 10 percent conservative by 2020. The society’s executive committee didn’t endorse Dr. Haidt’s numerical goal, but it did vote to put a statement on the group’s home page welcoming psychologists with “diverse perspectives.” It also made a change on the “Diversity Initiatives” page — a two-letter correction of what it called a grammatical glitch, although others might see it as more of a Freudian slip.

I have friends here in Canada who make bets on when the Times will finally, mercifully shut down.

Meanwhile, Megan McArdle weighs in at Atlantic Monthly, driving home the shame:

It is just my impression, but I think what conservatives want most of all is simply recognition that they are being shut out. It is a double indignity to be discriminated against, and then be told unctuously that your group’s underrepresentation is proof that almost none of you are as good as “us”. Haidt notes that his correspondence with conservative students (anonymously) “reminded him of closeted gay students in the 1980s”:

He quoted — anonymously — from their e-mails describing how they hid their feelings when colleagues made political small talk and jokes predicated on the assumption that everyone was a liberal. “I consider myself very middle-of-the-road politically: a social liberal but fiscal conservative. Nonetheless, I avoid the topic of politics around work,” one student wrote. “Given what I’ve read of the literature, I am certain any research I conducted in political psychology would provide contrary findings and, therefore, go unpublished. Although I think I could make a substantial contribution to the knowledge base, and would be excited to do so, I will not.”
Beyond that, mostly they would like academics to be conscious of the bias, and try to counter it where possible. As the quote above suggests, this isn’t just for the benefit of conservatives, either.

All together now, class, spell W-I-M-P.

Someone else writes:

I have a good friend–I won't name or out him here, though–who is a tenured faculty member in a premier humanities department at a leading East Coast university, and he's . . . a conservative! How did he slip by the PC police? Simple: he kept his head down in graduate school and as a junior faculty member, practicing self-censorship and publishing boring journal articles that said little or nothing. When he finally got tenure review, he told his closest friend on the faculty, sotto voce, that "Actually, I'm a Republican." His faculty friend, similarly sotto voce, said, "Really? I'm a Republican, too!"

That's the scandalous state of things in American universities today. Here and there–Hillsdale College, George Mason Law School, and Ashland University come to mind–the administration is able to hire first-rate conservative scholars at below-market rates because they are actively discriminated against at probably 90 percent of American colleges and universities. Other universities will tolerate a token conservative, but having a second conservative in a department is beyond the pale.

All together now, class, spell the plural, W-I-M-P-S.

Oh, heck, let me be honest rather than snarky: nothing stops the Yanks from freeing themselves from this garbage, unless my British mentor is right, and I hope he isn't: Americans are happy to be serfs, but they don't like being portrayed in the media as hillbillies.

So whenever the zeroes they all gladly pay taxes for threaten to do just that, they promptly cave.

If I die tonight, I want this on the record: If I couldn’t be a Canuck and managed to bear the unbearable sorrow, I’d be a true Yankee hillbilly and proud of it. Do you think we Canucks have so far stood off the Sharia lawfare crowd, with all their money and threats, by worrying much what smarmy (and sometimes vicious) tax burdens think?

Comments
KS: You said:
A typewriter set up so that typing any key will hit a Play is simply moving the required information pre-loading up one level.
And I agree. When I say:
Bear in mind I am not suggesting an informational free lunch, nor even an information gain; merely that the language structure is fundamental to search success.
My point is that one cannot disprove the theory of evolution by positing an unachievable hypothetical search challenge. The most you can hope to accomplish is to push the "required information pre-loading up one level," as you observe. The net of this is to refocus the question back to the Original DNA system. Since everything points to an information-rich starting point, "survival of the fittest" and "mutations" seem unnecessary as information creators in the post-OOL world. As to where the typewriter came from: now that's the question, isn't it? JLS
JLS: A typewriter set up so that typing any key will hit a Play is simply moving the required information pre-loading up one level. Where do you think such a wonderful, functionally specific and complex typewriter would come from, and on what credible basis? [That is the problem with the idea that the laws of nature had C-chemistry, cell-based life using DNA and proteins written in; you have just converted the laws of our cosmos into a sophisticated computer program running on nature as the physical instantiating machine. And to move up to the next level, where we have a quasi-infinite array of programs generating sub-cosmi, simply points to the next level: how do you get a cosmos bakery that searches the local hot zone that finely, given that our own cosmos' parameters are very, very finely balanced on dozens of aspects, instead of making the cosmic equivalent of burned hockey pucks and half-baked masses of ill-mixed dough?] GEM of TKI kairosfocus
KS: I don't take issue with the general thrust of your comments, and I am definitely not asserting an "informational free lunch". I do continue to believe, however, that the chosen language is fundamental to the probability of search success. An example is the monkey theorem with a special typewriter/language. Assume that Shakespeare wrote 26 plays. Assume also that the typewriter had 26 keys, so that each key corresponded to one of the plays. With a single stroke the entire contents of a play could be communicated. This construct results in 100% search success for functionality and a 1 in 26 chance of writing Hamlet on the first try. Bear in mind I am not suggesting an informational free lunch, nor even an information gain; merely that the language structure is fundamental to search success. A language in this sense is a system of signs for encoding and decoding information. To my way of thinking, the "sign" serves as a pointer to an information library. Given that the "source" and "sink" share the same library, all that is required for communication is the exchange of a pointer (sign). I view the theory of evolution through this lens. Transcription errors (mutations) in the communication link can alter the pointer(s) and reveal previously unseen features in the information library. Note: this process of mutations can easily mimic an evolution from simple-appearing creatures to more complex ones without any information gain. It all depends on the underlying structure of the information and the nature of the mutation/error-correcting process. I agree that "lucky noise" isn't the creator of the information library; that is not a reasonable assumption. I do, however, allow for this luck to have a role in the unveiling of the genome. It all hinges on the starting point (Original DNA) and the underlying information structures (database design). These two dictate the language of biology. JLS
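A minimal sketch of the special-typewriter point above (a toy in Python; the 26 "plays", the play names, and the ~180,000-character length assumed for Hamlet are illustrative assumptions, not taken from any actual text): with a whole-play-per-key encoding every keystroke is functional and the target is found with probability 1/26, while a letter-by-letter encoding makes the same target effectively unreachable by random typing.

```python
import math
import random

# Toy illustration of the "special typewriter" point: search success depends on the
# encoding, not on any extra information appearing for free.
# Assumption (hypothetical): 26 complete "plays", one mapped to each typewriter key.
plays = [f"play_{i}" for i in range(26)]   # stand-ins for 26 complete play texts
target = plays[7]                          # stand-in for Hamlet

# Encoding 1: one keystroke = one whole play. Every keystroke yields a functional
# text, and the chance of hitting the target on the first stroke is 1 in 26.
print("random keystroke hit Hamlet this time?", random.choice(plays) == target)
print("P(first-try Hamlet, whole-play keys) =", 1 / 26)

# Encoding 2: type the play letter by letter from a 26-letter alphabet.
# For an N-character text the chance of an exact hit is 26**-N.
N = 180_000                                # rough character count of a play (assumption)
print("log10 P(first-try Hamlet, letter keys) =", -N * math.log10(26))  # about -255,000
```

The information has not appeared from nowhere; it has been moved into the key-to-play mapping, which is the "pre-loading up one level" discussed above.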
JLS: You could hook up a random text generator to the full corpus of the free ebooks published by Gutenberg, and the result would still be the same. The problem is that once we pass a reasonable threshold of complexity, linguistically functional text will be so isolated in the space of possible configs that a random-walk search will simply not be able to find anything that functions. Similarly, such a process will predictably fail to write functional code for execution by a processor. That is because the functional code will be specific and deeply isolated in the space of all possible configs. For just 1,000-bit strings (125 bytes), we are already talking about 10^301 possible configs, whereas the search resources of our whole cosmos run out at about 10^150 possible states, even when we very generously use the Planck time, which is about 10^20 times faster than strong nuclear force interactions. That is the real bite in the infinite monkeys result, and it is why that result is at the foundation of thermodynamics. For the same reason, it is why there is no informational free lunch to be had. It is only because the notion has been subtly planted that somehow lucky noise can give us an informational free lunch, and the assumption has been made that no intelligence is available to explain OOL etc., that the false impression arises that somehow code can write itself and find machinery to execute itself out of the resources of some warm little electrified pond with phosphoric salts in it. Not too long from now, people are going to shake their heads and wonder how people of our time could believe such patent absurdities, even as we shake our heads today at those who still propose perpetual motion machines. Then they will tut-tut over how we allowed science to be taken captive by materialistic ideologues. GEM of TKI kairosfocus
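The two figures in that paragraph are easy to check directly (the ~10^150 upper bound on states the observable cosmos could enumerate is kairosfocus's own estimate and is simply taken as given for the comparison):

```python
import math

# Check the two figures quoted above.
# A 1,000-bit string (125 bytes) has 2**1000 possible configurations:
log10_configs = 1000 * math.log10(2)
print(f"2^1000 is about 10^{log10_configs:.0f}")       # about 10^301

# Quoted upper bound on states the observable cosmos could enumerate
# (kairosfocus's estimate, taken as given for the comparison):
log10_cosmic_budget = 150
print(f"shortfall is about 10^{log10_configs - log10_cosmic_budget:.0f}")  # about 10^151
```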
KS: The monkey theorem illustrates the challenge of locating a functional configuration in a sea of non-functional alternatives.
The probability of a monkey exactly typing a complete work such as Shakespeare's Hamlet is so tiny that the chance of it occurring during a period of time of the order of the age of the universe is minuscule, but not zero.
I accept this, but suggest that this result is simply an artifact of the language (English) and the specificity of the target (Hamlet). The important relationship is the ratio of the total search space to the functionally specific space. With different language assumptions the ratio can vary from one to infinity. To illustrate: defining functional information (FI) as decisional or prescriptive, one bit of FI selects one of two states (or symbols or letters); two bits select among four, and so on. Assume as a thought experiment that original DNA had a complete genetic template for each of the possible species (assume 1,048,576 of them), and further that a 20-bit binary code had a one-to-one map between each state and one of these species. This sets up a situation where we have a 2^20 total search space and a corresponding 20 bits of functionally specific space, for a ratio of 1:1. In this situation we can communicate 20 bits of FSCI and be assured of a functional result. To carry the thought experiment one step further, assume we arrange it so that the binary code “zero” selects for the simplest creature and all ones (1,048,575) selects for man. Assuming we seed the original code at zero (the simplest creature), how long will it take for incremental mutations to evolve the creature to man? This concept can be extended and examined by assuming different architectures of information. The above is completely flat, with poor total storage efficiency but high information content per bit of functional specificity. With additional hierarchy one can improve total storage but lower functional specificity. This all leads me to think that the monkey theorem doesn’t shed much light, and that a rigorous definition of the “language” is fundamental. Have I missed something? JLS
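A minimal sketch of that closing question, under one specific and admittedly assumed reading: fitness is simply the 20-bit integer value, each mutation flips one random bit, and only value-increasing mutations survive. Under those assumptions the walk from all zeros to all ones behaves like a coupon-collector problem and takes on the order of 70 proposals, nothing like the 1,048,576 states in the space; the point being that, as JLS says, the answer is dictated almost entirely by the assumed encoding and acceptance rule.

```python
import random

BITS = 20               # 2**20 = 1,048,576 possible "species" in the thought experiment

def proposals_to_all_ones(bits=BITS):
    """Single-bit-flip mutations; a mutation survives only if the 20-bit value increases."""
    genome = 0
    target = (1 << bits) - 1    # all ones = 1,048,575 ("man" in the thought experiment)
    proposals = 0
    while genome != target:
        proposals += 1
        mutant = genome ^ (1 << random.randrange(bits))   # flip one random bit
        if mutant > genome:      # assumed selection rule: a higher value is "more advanced"
            genome = mutant
    return proposals

runs = [proposals_to_all_ones() for _ in range(1000)]
print("mean proposals from all zeros to all ones:", sum(runs) / len(runs))
# Roughly 72 on average (20 * H_20), versus 1,048,576 states in the full space.
```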
JLS: Actually, such has been done, and the answer is that a config space of order about 10^50 or so is searchable: functional strings of up to about 20 or so letters [ASCII] have been found. Look up the infinite monkeys theorem. Twenty ASCII characters is about 140 bits, which is not relevant to any serious computational exercise. The FSCI limit is not a strictly biological limit; it is an information limit. (The pretence that unless it has been shown that FSCI is unreachable by specifically biological means, it is presumably reachable by those means, is a rhetorical device, not a serious scientific proposition. The only known means of getting to FSCI is intelligence, which is precisely the problem for those who do not want to see FSCI as a signature of intelligence; but the infinite-monkeys type of search-space analysis is strong support for the empirical observation.) GEM of TKI kairosfocus
Kf: Thanks for the feedback. If you would indulge a basic question: you frame the issue as a resource-bound search that must produce some amount of functionally specified information in order to approximate what we observe in nature. If the boundary exceeds the limits of the cosmos, then we can assume a false hypothesis, and thus design becomes a consideration. Wouldn't it be better to frame the question in terms of the minimum starting point of FSCI required for various search algorithms to accomplish what we observe? Obviously, if one assumes a rich FSCI library as a starting point, finding a robust island of functionality is no problem. An additional advantage is that if a minimum can be discovered, it may be testable against OOL work. Thanks again; this blog is a great resource. JLS
JLS: Actually, a few proteins [avg. 300 AA] put us well beyond the threshold where the search capacity of the cosmos is swamped. A minimally complex functional life form turns out to be surprisingly complex: the minimal realistic DNA complement looks like 100 - 1,000 k bases [and the lower end are basically parasitic, i.e. they are too small to be first life], and that is 2 - 3 orders of magnitude beyond the 1,000-bit point where the observed cosmos is not big enough. That is why OOL is so important. Once it is reasonably clear that not only is there no idea, but no idea of where to get the idea, much less the evidence, then we see that design is a serious contender. And if design of life is on the table, there is no reason to revert to anything else to account for the 10 - 100+ million new bases needed for new body plans. GEM of TKI kairosfocus
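For scale, the raw configuration-space arithmetic behind those figures can be spelled out (20 amino acids per position and 2 bits per DNA base are the standard raw upper bounds; how much of that space is functionally constrained is, of course, exactly what the two sides dispute):

```python
import math

# Raw (unconstrained) configuration-space sizes behind the figures above.
# 20 possible amino acids per position in a 300-AA protein:
log10_protein_space = 300 * math.log10(20)
print(f"one 300-AA protein: 20^300 is about 10^{log10_protein_space:.0f}")   # about 10^390

# A minimal genome of 100k - 1,000k bases at 2 bits (4 possible bases) per position:
for kilobases in (100, 1000):
    bits = 2 * kilobases * 1000
    print(f"{kilobases}k bases = {bits:,} bits, about 10^{bits * math.log10(2):.0f} configs")
```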
kairosfocus, Thanks for the link above. Powerfully written. It seems obvious that the question resolves to this: "Functionally Specific Complex Information" was a prerequisite for, and present in, first life. Why haven't we seen more work in this direction? For example, in just the past few days I have come across an article identifying human DNA in a bacterium, and the fact that a sand flea has more genes (130,000) than a human. We seem to have assumed that Original DNA was primitive compared to the present. Am I correct on this point? JLS
KF 179...a very interesting article. Upright BiPed
F/N: Mrs O'Leary has made a great catch that aptly sums up much of the issue on OOL, here. (I note it makes an interesting philosophical case that turns Dawkins' infinite regress of complexity argument on its head. That is of course not a scientific argument, but pursuit of proof is no respecter of disciplinary boundaries.) This clip on OOL is interesting: ___________ >> In Dawkins' own words: What Science has now achieved is an emancipation from that impulse to attribute these things to a creator... It was a supreme achievement of the human intellect to realize there is a better explanation ... that these things can come about by purely natural causes ... we understand essentially how life came into being.20 (from the Dawkins-Lennox debate) "We understand essentially how life came into being"?! – Who understands? Who is "we"? Is it Dr. Stuart Kauffman? "Anyone who tells you that he or she knows how life started ... is a fool or a knave." 21 Is it Dr. Robert Shapiro? "The weakest point is our lack of understanding of the origin of life. No evidence remains that we know of to explain the steps that started life here, billions of years ago." 22 Is it Dr. George Whitesides? "Most chemists believe as I do that life emerged spontaneously from mixtures of chemicals in the prebiotic earth. How? I have no idea... On the basis of all chemistry I know, it seems astonishingly improbable." Is it Dr. G. Cairns-Smith? "Is it any wonder that [many scientists] find the origin of life to be utterly perplexing?" 23 Is it Dr. Paul Davies? "Many investigators feel uneasy about stating in public that the origin of life is a mystery, even though behind closed doors they freely admit they are baffled ... the problem of how and where life began is one of the great out-standing mysteries of science." Is it Dr. Richard Dawkins? Here is how Dawkins responded to questions about the Origin of Life during an interview with Ben Stein in the film Expelled: No Intelligence Allowed: Stein: How did it start? Dawkins: Nobody knows how it started, we know the kind of event that it must have been, we know the sort of event that must have happened for the origin of life. Stein: What was that? Dawkins: It was the origin of the first self replicating molecule. Stein: How did that happen? Dawkins: I told you I don't know. Stein: So you have no idea how it started? Dawkins: No, No, NOR DOES ANYONE ELSE. 24 “Nobody understands the origin of life, if they say they do, they are probably trying to fool you.” (Dr. Ken Nealson, microbiologist and co-chairman of the Committee on the Origin and Evolution of Life for the National Academy of Sciences) Nobody, including Professor Dawkins, has any idea "how life came into being!" It is only this self-deceiving view of reality that allows Dawkins to declare that science has emancipated him from the impulse to attribute the astounding wonders of the living world to a creator. There is no human intellect on the face of the earth that has achieved a "better explanation." We have shown conclusively that no chemist, physicist, biologist, nor any other type of scientist has any real clue how life could have come about through "natural processes." Scientists do not understand how life "essentially" (or non-essentially for that matter), came into being. Only a "fool," a "knave" could make such an outrageous claim. Perhaps it is time for these scientists to express not awe, not admiration ... but humility. >> _____________ Worth a thought or two kairosfocus
Mathgrrl, So...you do not wish to establish the particulars of what is observed in symbol systems, and you will not allow yourself to be questioned on the subject. For instance, if I ask whether or not symbols and the objects they are mapped to are discrete, then that is not a question that you intend to answer. I wonder what it is about these observed facts you wish to avoid. Could it be that even the non-controversial observations regarding digitally-encoded information work to undermine your railing against ID? If that is the case, then you are certainly not alone. Materialists often absolutely refuse to even discuss the topic sans their assumptions. By that I mean, a simple walk through the collectively observed facts - withholding all conclusions from either side - is quite often more than can be tolerated. In fact, I do believe that this intolerance was the basis for your original comment. I asked a completely intelligible question without adding a single controversial assumption whatsoever, and your response was "Could you please rephrase to avoid loading them with what appear to be your assumptions". You see, you weren't objecting to my assumptions (because there weren't any); you were objecting to the untainted observations themselves. Of course, despite the convenient diversions to follow, the facts of the matter remain. What are we to do with the observed fact that meaning has been instantiated into matter (long before we humans came along and "invented" symbol systems)? Upright BiPed
MathGrrl speaking of 'modeling' reality, let's look at reality itself and see if Sanford's (Mendel's Accountant or Schneider's (ev) is more faithful to what reality is actually telling us; Random Mutations Destroy Information - Perry Marshall - video http://www.metacafe.com/watch/4023143/ Inside the Human Genome: A Case for Non-Intelligent Design - Pg. 57 By John C. Avise Excerpt: "Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease causing mutations identified to date in Homo-sapiens." http://books.google.com/books?id=M1PRvkPBKfQC&pg=PA57&lpg=PA57&dq=human+75,000+different+disease-causing+mutations&source=bl&ots=gkjosjq030&sig=gAU5AfzMehArJYinSxb2EMaDL94&hl=en&ei=kbDqS_SQLYS8lQfLpJ2cBA&sa=X&oi=book_result&ct=result&resnum=6&ved=0CCMQ6AEwBQ#v=onepage&q=human%2075%2C000%20different%20disease-causing%20mutations&f=false I went to the mutation database website and found: HGMD®: Now celebrating our 100,000 mutation milestone! http://www.biobase-international.com/pages/index.php?id=hgmddatabase I really question their use of the word "celebrating". This following study confirmed the detrimental mutation rate for humans, of 100 to 300 per generation, estimated by John Sanford in his book 'Genetic Entropy' in 2005: Human mutation rate revealed: August 2009 Every time human DNA is passed from one generation to the next it accumulates 100–200 new mutations, according to a DNA-sequencing analysis of the Y chromosome. (Of note: this number is derived after "compensatory mutations") http://www.nature.com/news/2009/090827/full/news.2009.864.html Waiting Longer for Two Mutations - Michael J. Behe Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. I then wrote that 'for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years' (1 quadrillion years)(Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 1507) retort that my number ‘is 5 million times larger than the calculation we have just given’ using their model (which nonetheless "using their model" gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model. http://www.discovery.org/a/9461 The Frailty of the Darwinian Hypothesis "The net effect of genetic drift in such (vertebrate) populations is “to encourage the fixation of mildly deleterious mutations and discourage the promotion of beneficial mutations,” http://www.evolutionnews.org/2009/07/the_frailty_of_the_darwinian_h.html#more High genomic deleterious mutation rates in hominids Excerpt: Furthermore, the level of selective constraint in hominid protein-coding sequences is atypically (unusually) low. A large number of slightly deleterious mutations may therefore have become fixed in hominid lineages. http://www.nature.com/nature/journal/v397/n6717/abs/397344a0.html High Frequency of Cryptic Deleterious Mutations in Caenorhabditis elegans ( Esther K. Davies, Andrew D. Peters, Peter D. Keightley) "In fitness assays, only about 4 percent of the deleterious mutations fixed in each line were detectable. 
The remaining 96 percent, though cryptic, are significant for mutation load...the presence of a large class of mildly deleterious mutations can never be ruled out." http://www.sciencemag.org/cgi/content/abstract/285/5434/1748 "The likelihood of developing two binding sites in a protein complex would be the square of the probability of developing one: a double CCC (chloroquine complexity cluster), 10^20 times 10^20, which is 10^40. There have likely been fewer than 10^40 cells in the entire world in the past 4 billion years, so the odds are against a single event of this variety (just 2 binding sites being generated by accident) in the history of life. It is biologically unreasonable." Michael J. Behe PhD. (from page 146 of his book "Edge of Evolution") The GS (genetic selection) Principle - David L. Abel - 2009 Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.” http://www.bioscience.org/2009/v14/af/3426/fulltext.htm etc.. etc.. etc.. Perhaps MathGrrl, you might want to address a few of these questions? bornagain77
MathGrrl, I look forward to reading your peer-reviewed refutation of the Dembski-Marks paper; until then I really don't care to address your superfluous 'molehill' objections, especially since you refuse to honestly address kairosfocus's 'mountain' objections (not to mention the few objections I brought forth). bornagain77
kairosfocus, I am not up to date with the current abiogenesis literature, but your comments have piqued my interest. If I find any simulations I'll let you know. MathGrrl
bornagain77, Once again, the paper does not support your claim that ev is goal directed. I have provided links to the description of ev and the source code itself. Please reference those sources to show exactly how ev can be construed to be goal directed. If you wish to continue to reference the Bio Complexity paper as well, please first address the flaw I found in a cursory reading. MathGrrl
MG: Apology acknowledged. My objection to claimed simulations of evolution of that ilk was and is that ev etc. are already in intelligently set-up target zones when they begin. They may model some varieties or aspects of micro-evo [but note my concern about gradual degradation and embrittlement], but they beg the big questions on macro-evo, starting with the root of the tree of life and going onward to accounting for the source of major body plans. GEM of TKI kairosfocus
I must say I found the conclusion a bit bizarre; take a look at what they are saying:
The success of ev is largely due to active information introduced by the Hamming oracle and from the perceptron structure. It is not due to the evolutionary algorithm used to perform the search.
and:
Schneider [16] claims that ev demonstrates that naturally occurring genetic systems gain information by evolutionary processes ...
and then:
Our results show that, contrary to these claims, ev does not demonstrate “that biological information…can rapidly appear in genetic control systems subjected to replication, mutation, and selection” [16]. We show this by demonstrating that there are at least five sources of active information in ev.
Now take a look at this:
2. The Hamming Oracle [13]. When some offspring are correctly announced as more fit than others [27], external knowledge is being applied to the search and active information is introduced. ... we are being told with respect to the solution whether we are getting “colder” or “warmer”. ... 4. Optimization by Mutation. This process discards mutations with low fitness and propagates those with high fitness.
What they appear to be saying is: because genetic algorithms use a fitness function that is not binary (i.e. not just 'yes, you are fit' or 'no, you are not') but instead gives higher-fitness individuals a greater probability of reproducing, ev is not representative of biological evolution because external knowledge is applied. If you have NO criteria for assessing whether an individual is fitter or not, or, in the case of a targeted search, whether the individual is closer to or at the target, then you can't even perform a search. If you took D&M's criticisms on board and created a search algorithm with no criteria for success, and no mutation or any other method of moving about in the search space, you wouldn't have any kind of search algorithm at all! Recall this bit:
The success of ev is largely due to active information introduced by the Hamming oracle ... It is not due to the evolutionary algorithm used to perform the search.
One thing that DEFINES a GA is the use of graduated fitness evaluations, BECAUSE this is what appears to occur in biology. A Hamming oracle is part of an evolutionary algorithm; you can't claim that an algorithm doesn't work because it relies on part of the algorithm working!
As far as ev can be viewed as a model for biological processes in nature, it provides little evidence for the ability of a Darwinian search to generate new information. Rather, it demonstrates that preexisting sources of information can be re-used and exploited, with varying degrees of efficiency, by a suitably designed search process, biased computation structure, and tuned parameter set.
ev models selection, mutation and replication; D&M criticise it because they claim that selection injects information and so does not demonstrate biological information generation in action. Could their mistake perhaps be that they view the information in the search space as an invalid source of novelty, or perhaps as something that already exists, while the claim from biology is that it is finding something different, outside the search space? What do evolutionary algorithms do? They explore search spaces, and novelty is just functional areas of a search space that haven't, or have only just, been discovered. It's a baffling conclusion, to be sure! DrBot
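A toy illustration of the point in dispute (this is not ev itself, just a hypothetical 32-bit target search in Python, with every parameter invented for the example): with a graduated Hamming-distance fitness, single-bit mutation plus selection converges in roughly a hundred steps, while an all-or-nothing "fit / not fit" oracle turns the same procedure into blind sampling of 2^32 possibilities.

```python
import random

TARGET = [random.randrange(2) for _ in range(32)]    # hypothetical fixed 32-bit "binding site"

def hamming_fitness(genome):
    """Graduated oracle: reports how many positions already match (warmer/colder)."""
    return sum(g == t for g, t in zip(genome, TARGET))

def exact_match_fitness(genome):
    """All-or-nothing oracle: only says whether the genome is already the target."""
    return 1 if genome == TARGET else 0

def search(fitness, max_steps=100_000):
    genome = [random.randrange(2) for _ in range(32)]
    for step in range(1, max_steps + 1):
        mutant = genome[:]
        mutant[random.randrange(32)] ^= 1             # flip one random bit
        if fitness(mutant) >= fitness(genome):        # keep offspring that are no worse
            genome = mutant
        if genome == TARGET:
            return step
    return None                                       # gave up

print("graduated (Hamming) oracle:", search(hamming_fitness), "steps")
print("all-or-nothing oracle:", search(exact_match_fitness), "steps")   # almost always None
```

Whether that graduated feedback should be counted as a legitimate stand-in for natural selection or as externally supplied "active information" is precisely what DrBot and the paper's authors disagree about.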
MathGrrl, You say ev is not goal directed, and yet the peer-reviewed paper I cited says that ev is a goal directed 'search algorithm' that 'mines active information' from abstract: Search algorithms mine active information from these resources, with some search algorithms performing better than others. We illustrate these principles in the analysis of ev. The sources of knowledge in ev include a Hamming oracle and a perceptron structure that predisposes the search towards its target. The original ev uses these resources in an evolutionary algorithm. Although the evolutionary algorithm finds the target, we demonstrate a simple stochastic hill climbing algorithm uses the resources more efficiently. And MathGrrl exactly why do you get so excited about this ev search algorithm which is shown to be less efficient than a standard 'hill climbing' algorithm that is used to efficiently solve a limited class of problems in engineering??? In computer science we recognize the algorithmic principle described by Darwin - the linear accumulation of small changes through random variation - as hill climbing, more specifically random mutation hill climbing. However, we also recognize that hill climbing is the simplest possible form of optimization and is known to work well only on a limited class of problems. Watson R.A. - 2006 - Compositional Evolution - MIT Press - Pg. 272 MathGrrl, let's say we get really honest with what unfettered random mutations can really do in reality and open up the operating system itself the Random Mutations? Accounting for Variations - Dr. David Berlinski: - video http://www.youtube.com/watch?v=aW2GkDkimkE A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA – David J D’Onofrio1, Gary An – Jan. 2010 Excerpt: It is also important to note that attempting to reprogram a cell’s operations by manipulating its components (mutations) is akin to attempting to reprogram a computer by manipulating the bits on the hard drive without fully understanding the context of the operating system. (T)he idea of redirecting cellular behavior by manipulating molecular switches may be fundamentally flawed; that concept is predicated on a simplistic view of cellular computing and control. Rather, (it) may be more fruitful to attempt to manipulate cells by changing their external inputs: in general, the majority of daily functions of a computer are achieved not through reprogramming, but rather the varied inputs the computer receives through its user interface and connections to other machines. http://www.tbiomed.com/content/7/1/3 MathGrrl, if you ever decide to be honest about what neo-Darwinian evolution can really do in reality, instead of 'propaganda programs' such as weasel and ev, here is the proper computer program that is faithful to the task; Using Computer Simulation to Understand Mutation Accumulation Dynamics and Genetic Load: Excerpt: We apply a biologically realistic forward-time population genetics program to study human mutation accumulation under a wide-range of circumstances. Using realistic estimates for the relevant biological parameters, we investigate the rate of mutation accumulation, the distribution of the fitness effects of the accumulating mutations, and the overall effect on mean genotypic fitness. Our numerical simulations consistently show that deleterious mutations accumulate linearly across a large portion of the relevant parameter space. 
http://bioinformatics.cau.edu.cn/lecture/chinaproof.pdf MENDEL’S ACCOUNTANT: J. SANFORD†, J. BAUMGARDNER‡, W. BREWER§, P. GIBSON¶, AND W. REMINE http://mendelsaccount.sourceforge.net http://www.scpe.org/vols/vol08/no2/SCPE_8_2_02.pdf bornagain77
kairosfocus,
For, at no point did I do what you claimed I did; you put words in my mouth that do not belong there.
I just reviewed the thread and found the post where I mixed up my conversations with you and bornagain77. Indeed, you did not claim that ev or Tierra were goal directed. I apologize for my mistake. So, since you don't make that claim, will you be joining the argument on my side? ;-) MathGrrl
Joseph,
Here is the paper: A Vivisection of the ev Computer Organism: Identifying Sources of Active Information
Thank you for the link. I read the paper last night and it does not support the claim that the ev program is "goal directed." The closest it comes is a discussion of a Hamming Oracle, but there is a fatal flaw in that section. On the third page of the paper, the authors state:
In the search for the binding sites, the target sequences are fixed at the beginning of the search. The weights, bias, and remaining genome sequence are all allowed to vary.
This is not correct. Dr. Schneider's description of ev and the code itself make it clear that the target sequences coevolve with the rest of the genome, including the recognizer components. The only feature fixed for each run is the (randomly selected) location of each target, and even that can be eliminated without changing the results. Therefore, as I've maintained in this thread, neither ev nor Tierra are "goal directed" simulations. Lost in this discussion is the most interesting aspect of ev. Dr. Schneider wrote ev to check the results of his PhD thesis on the generation of information in real biological organisms. Using only simple evolutionary mechanisms, ev demonstrated exactly the same ability to generate Shannon information as Dr. Schneider observed in the lab. That's very strong supporting evidence that the mechanisms have been correctly identified. MathGrrl
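A minimal sketch of what "the target sequences coevolve" means (this is a drastically simplified stand-in written for this discussion, not Dr. Schneider's code; the lengths and site count are arbitrary assumptions): the genome carries both a recognizer pattern and the site sequences, mutation can hit either, and fitness measures only the internal agreement between them, so no externally specified target string appears anywhere.

```python
import random

SITE_LEN, N_SITES = 8, 4
GENOME_LEN = SITE_LEN * (1 + N_SITES)        # one recognizer pattern + N_SITES site sequences

def mismatches(genome):
    """Fitness is purely internal: compare each site against the genome's own recognizer."""
    recognizer = genome[:SITE_LEN]
    sites = [genome[SITE_LEN * (i + 1): SITE_LEN * (i + 2)] for i in range(N_SITES)]
    return sum(r != s for site in sites for r, s in zip(recognizer, site))

genome = [random.randrange(2) for _ in range(GENOME_LEN)]
for _ in range(5000):
    mutant = genome[:]
    mutant[random.randrange(GENOME_LEN)] ^= 1         # mutation may hit recognizer or sites
    if mismatches(mutant) <= mismatches(genome):      # selection on internal agreement only
        genome = mutant

print("remaining mismatches:", mismatches(genome))    # typically 0; no preset target sequence used
```

Critics would reply that scoring internal agreement is itself the injected information; the sketch only illustrates the narrower point that the evolved sequences are not compared against a preset target.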
MG: Pardon, but much more was at stake than you acknowledge. For, at no point did I do what you claimed I did; you put words in my mouth that do not belong there. Indeed, your remark just above materially misrepresents the objective situation and why I asked you to address the matter. GEM of TKI kairosfocus
Upright BiPed,
We can certainly explore whatever equivocation "is likely to slip in", but first we should establish the observation regarding symbols, or else we might not recognize equivocation from obfuscation. Discussing the observation itself seems to be what you wish to avoid, so let's go to it. Symbols and the things they are mapped to are discreet, are they not?
As I said before, I'm not going to play the Socratic game with you. If and when you decide to clearly state your position and claims, I will be more than happy to engage in a mutually respectful discussion. MathGrrl
kairosfocus,
1: Kindly, do please address the civility-strawman matter at 139, which also seems to have been developing with UB.
Pointing out an incorrect statement is not inherently uncivil. The claim was made that ev and Tierra are "goal directed" simulations. The simple fact is that they are not. Anyone is free to check the documentation for those two programs to confirm this. MathGrrl
Ah yes, thanks Pedant. :) Pedantry has its place, and my twice mistaken spelling of dicrete is one of them. Upright BiPed
Yes, thanks, Upright BiPed. I understand completely, although I still think that the correct spelling is discrete. (They don't call me Pedant for nothing.) Pedant
Pedant, "Dot-Dash" is a dicreet symbol mapped to the letter "A" in the English alphabet. Upright BiPed
Upright BiPed, when you asked,
Symbols and the things they are mapped to are discreet, are they not?
did you mean to say "symbols are discrete? And if that's what you meant to say, would you clarify what you were asking? Can you give examples of symbols that are discrete vs symbols that are not? Pedant
F/N 2: Marks, Dembski et al paper on ev, conclusion: ___________ >> CONCLUSIONS The success of ev is largely due to active information introduced by the Hamming oracle and from the perceptron structure. It is not due to the evolutionary algorithm used to perform the search. Indeed, other algorithms are shown to mine active information more efficiently from the knowledge sources provided by ev[13]. Schneider [16] claims that ev demonstrates that naturally occurring genetic systems gain information by evolutionary processes and that “information gain can occur by punctuated equilibrium”. Our results show that, contrary to these claims, ev does not demonstrate “that biological information...can rapidly appear in genetic control systems subjected to replication, mutation, and selection” [16]. We show this by demonstrating that there are at least five sources of active information in ev. 21 1. The perceptron structure. The perceptron structure is predisposed to generating strings of ones sprinkled by zeros or strings of zeros sprinkled by ones. Since the binding site target is mostly zeros with a few ones, there is a greater predisposition to generate the target than if it were, for example, a set of ones and zeros produced by the flipping of a fair coin. 2. The Hamming Oracle [13]. When some offspring are correctly announced as more fit than others [27], external knowledge is being applied to the search and active information is introduced. As with the child’s game, we are being told with respect to the solution whether we are getting “colder” or “warmer”. \ 3. Repeated Queries. Two queries contain more information than one. Repeated queries can contribute active information [1,2,5]. 4. Optimization by Mutation. This process discards mutations with low fitness and propagates those with high fitness. When the mutation rate is small, this process resembles a simple Markov birth process [27] that converges to the target [1,2,5]. 5. Degree of Mutation. As seen in Figure 3, the degree of mutation for ev must be tuned to a band of workable values. Our analysis highlights the importance of disclosing sources of knowledge in computer searches when measuring the ability of search mechanisms to generate novel information. As far as ev can be viewed as a model for biological processes in nature, it provides little evidence for the ability of a Darwinian search to generate new information. Rather, it demonstrates that preexisting sources of information can be re-used and exploited, with varying degrees of efficiency, by a suitably designed search process, biased computation structure, and tuned parameter set. This confirms that the conservation of information principle, as manifest in the No Free Lunch Theorems, is “very useful, especially in light of some of the sometimes-outrageous claims that had been made of specific optimization algorithms” [4]. >> ______________ MG, what is your rebuttal? And, remember, once yo0u have disposed of M, D et al, my own objections still remain, in light of the claims made by the authors of the programs. GEM of TKI kairosfocus
Oops :oops: bornagain77 (155) already posted the paper. :cool: Joseph
MathGrrl- Here is the paper: A Vivisection of the ev Computer Organism: Identifying Sources of Active Information
Abstract: ev is an evolutionary search algorithm proposed to simulate biological evolution. As such, researchers have claimed that it demonstrates that a blind, unguided search is able to generate new information. However, analysis shows that any non-trivial computer search needs to exploit one or more sources of knowledge to make the search successful. Search algorithms mine active information from these resources, with some search algorithms performing better than others. We illustrate these principles in the analysis of ev. The sources of knowledge in ev include a Hamming oracle and a perceptron structure that predisposes the search towards its target. The original ev uses these resources in an evolutionary algorithm. Although the evolutionary algorithm finds the target, we demonstrate a simple stochastic hill climbing algorithm uses the resources more efficiently.
How long do you think it will take you to get your refutation of that paper published? Joseph
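As a rough illustration of the abstract's last claim (a toy comparison invented for this comment thread, not the ev perceptron or Schneider's parameters): both searches below succeed only by querying the same Hamming oracle, and the single-parent stochastic hill climber typically needs far fewer oracle queries than a population-based evolutionary search over the same 64-bit target.

```python
import random

TARGET = [random.randrange(2) for _ in range(64)]     # hypothetical fixed target
queries = 0

def oracle(genome):
    """Hamming oracle: the only source of information about the target; count every query."""
    global queries
    queries += 1
    return sum(g == t for g, t in zip(genome, TARGET))

def hill_climb():
    """(1+1) stochastic hill climbing: one parent, one oracle query per offspring."""
    genome = [random.randrange(2) for _ in range(64)]
    best = oracle(genome)
    while best < 64:
        mutant = [b ^ (random.random() < 1 / 64) for b in genome]   # about 1 bit flipped on average
        score = oracle(mutant)
        if score >= best:
            genome, best = mutant, score

def evolutionary_search(pop_size=32):
    """Toy generational EA: same mutation and same oracle, but a whole population per generation."""
    pop = [[random.randrange(2) for _ in range(64)] for _ in range(pop_size)]
    while True:
        ranked = sorted(pop, key=oracle, reverse=True)
        if ranked[0] == TARGET:
            return
        parents = ranked[: pop_size // 2]
        pop = [parents[0]] + [                        # keep the best parent (elitism), mutate the rest
            [b ^ (random.random() < 1 / 64) for b in random.choice(parents)]
            for _ in range(pop_size - 1)
        ]

for name, algo in [("stochastic hill climbing", hill_climb), ("evolutionary search", evolutionary_search)]:
    queries = 0
    algo()
    print(f"{name}: {queries} oracle queries")
```

In typical runs of this toy the hill climber finishes with noticeably fewer oracle queries, which is the sense in which the paper says the evolutionary algorithm uses the oracle's information less efficiently; it is not a claim about either algorithm failing to find the target.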
F/N: __________ >> symbol [?s?mb?l] n 1. something that represents or stands for something else, usually by convention or association, esp a material object used to represent something abstract 2. (Literary & Literary Critical Terms) an object, person, idea, etc., used in a literary work, film, etc., to stand for or suggest something else with which it is associated either explicitly or in some more subtle way 3. (Mathematics) a letter, figure, or sign used in mathematics, science, music, etc. to represent a quantity, phenomenon, operation, function, etc. 4. (Psychoanalysis) Psychoanal the end product, in the form of an object or act, of a conflict in the unconscious between repression processes and the actions and thoughts being repressed the symbols of dreams 5. (Psychology) Psychol any mental process that represents some feature of external reality vb -bols, -bolling, -bolled US, -bols -boling, -boled (tr) another word for symbolize [from Church Latin symbolum, from Greek sumbolon sign, from sumballein to throw together, from syn- + ballein to throw] Collins English Dictionary – Complete and Unabridged © HarperCollins Publishers 1991, 1994, 1998, 2000, 2003 >> ____________ kairosfocus
Mathgrrl,
I also believe that symbols must be assigned meaning in order to function as symbols. Based on previous discussions that I’ve read here, this is one point where equivocation is likely to slip in.
We can certainly explore whatever equivocation "is likely to slip in", but first we should establish the observation regarding symbols, or else we might not recognize equivocation from obfuscation. Discussing the observation itself seems to be what you wish to avoid, so let's go to it. Symbols and the things they are mapped to are discreet, are they not? Upright BiPed
DrBot:
Joseph – we observe stochastic processes at work in living systems. Stones erode stochastically – even the ones at Stonehenge. I don’t understand why both of you seem to believe that we can’t determine if life is evolving – even by studying it directly – unless we can determine exactly how the first living things were created?
So much confusion; so little time. Yes, we can determine that living organisms are evolving without knowing how they originated. However, we cannot know if that evolution is telic or stochastic. That said, what stochastic processes do we observe at work in living systems, and how was it determined that they are stochastic? Stones do erode, but eroding stones do not account for Stonehenge, just its current condition. It wouldn't make any sense to study Stonehenge as anything other than an artifact. Joseph
MathGrrl methinks you are much too easily impressed by appearance,,, As I believe kairosfocus said a few days ago, paraphrase,, 'Now show me a evolutionary algorithm that can program a computer better than the original computer itself is programmed and then I will indeed be impressed.' Here is the paper you requested, which I had listed previously; A Vivisection of the ev Computer Organism: Identifying Sources of Active Information George Montañez, Winston Ewert, William A. Dembski, Robert J. Marks II Abstract: ev is an evolutionary search algorithm proposed to simulate biological evolution. As such, researchers have claimed that it demonstrates that a blind, unguided search is able to generate new information. However, analysis shows that any non-trivial computer search needs to exploit one or more sources of knowledge to make the search successful. Search algorithms mine active information from these resources, with some search algorithms performing better than others. We illustrate these principles in the analysis of ev. The sources of knowledge in ev include a Hamming oracle and a perceptron structure that predisposes the search towards its target. The original ev uses these resources in an evolutionary algorithm. Although the evolutionary algorithm finds the target, we demonstrate a simple stochastic hill climbing algorithm uses the resources more efficiently. http://evoinfo.org/publications/a-vivisection-of-ev/ bornagain77
MG:
1: Kindly, do please address the civility-strawman matter at 139, which also seems to have been developing with UB.
2: Kindly, do please address the evidence that both of the GAs addressed in 117 above are, by the words of their creators, in-island-of-function searches and crucially depend on a suitably friendly fitness landscape, instead of addressing the reality of far more complex config spaces with vast seas of non-function that deeply isolate islands of function.
3: Kindly address the matter that it seems a very high fraction of variations for ev et al. will be favourable, when the evidence from living systems is that the overwhelming majority of variations in genes are adverse, but not sufficiently so to be immediately fatal, leading to a progressive malfunction burden, embrittlement/loss of resilience, and vulnerability to population collapse triggered by environmental crisis.
4: Kindly address the need to first create a metabolising entity with a vNSR facility before the tree of life can have a root.
5: Kindly address the gap between the observation of islands of function for informational bio-molecules [e.g. protein fold domains, the implications of having a definite code for DNA linked to that, etc.] and the implicitly assumed smooth path to transformation of body plans accessible by fine gradations.
6: Kindly apply said solution to explain the origin of major body plans surrounding, say, the Cambrian, the origin of vertebrates, of land animals, of birds, of bats and of men.
7: Kindly provide observational evidence on the ground that substantiates the claim that macroevolution by a gradually branching tree-of-life pattern happened, and, linked to that, that ev et al. accurately model said pattern.
8: In that context, kindly explain remarks by Patterson et al. and Gould et al. on gaps in the fossil record.
For the above, brief point-form notes and links to substantiating evidence, perhaps at sites such as Wikipedia, Talk Origins, NCSE, NSTA, NAS, educational sites for college courses, etc., should be enough if there is empirical, observational evidence to be found there. Thanks GEM of TKI kairosfocus
bornagain77, The paper I was requesting is the one that you say shows that ev is "goal directed." I assure you, from personal review of both ev and Tierra, that neither of them can be so described. MathGrrl
Upright BiPed,
I also believe that symbols must be assigned meaning in order to function as symbols.
Based on previous discussions that I've read here, this is one point where equivocation is likely to slip in. "Assigning meaning" is a human activity. The risk of equivocation arises when the word "symbol" is used in another context, for example the genetic code, and the baggage of "assigning meaning" is dragged in, thereby attempting to prove ID correct by definition. I'm not saying that you are attempting to do this, only that I've seen it before and so I'm wary of the rhetorical approach you are using. In my experience it leads to more confusion than resolution. If you have a position you'd like to discuss with me, just state it. There is no need for all the questions -- you know what you want to say, I don't. Make your point as clearly and succinctly as possible. I am interested in it. MathGrrl
Hello Mathgrrl, Please allow me to show you what a “rhetorical device” is. This is my original question to you:
Do you know of any recorded information (information instantiated into matter) that came to exist by means of unguided processes?
The words I chose to use in that question (such as: any, came, exist, means, etc) are easily accessible to any English-speaking adult. The phrasing I used (such as: “do you know”) is also rather simple, and certainly not beyond the reach of even a typical child. This now leaves two objects in the sentence; the first object being “recorded information” and the second being “unguided processes”. In an information society, one would think that the general concept of “recorded information” is not too difficult to attain, yet even still, I added a parentheses with the additional clarification of “information instantiated into matter”. As for the second object, I added an entirely separate sentence in order to further clarify (by “unguided” I mean without the aid or input of a mind). So what we have is a rather simple and direct question: "Do you know of any recorded information that came to exist without the input of a mind?" Do you see? It’s all very simple. But what was your response?
Upright BiPed, I’m afraid I can’t make any sense out of your question
I am not afraid to admit it, Mathgrrl – I just don’t believe you. I don’t think there is a single word or phrase in that question that you don’t understand. I believe you use those words regularly and are familiar with each of them. Honestly, I think you understand the question, and quite frankly, I think you should have answered it. - - - - - - I then went on to ask you two other questions in that same post:
Do you know of any recorded information that doesn’t exist by means of an abstraction – symbolic representation?
and
Do you know of any symbols that were assigned meaning by means of unguided processes?
Here too we find very simple questions given in common English. Instead of addressing these questions, you went on to insinuate that I was making assumptions; in fact, you suggested that these questions were actually “loaded” with assumptions. What are these assumptions, Mathgrrl? From my questions you might gather that I obviously assume information can be recorded. Surely that is not what you are objecting to. You can also gather that I believe information is recorded by means of symbolic representation. I also believe that symbols must be assigned meaning in order to function as symbols. These are not terribly controversial assumptions. Are these the assumptions you wish to object to? I find that hard to believe. What is interesting, Mathgrrl, is that I made no controversial assumptions. I simply asked you a couple of questions based upon the general observations that anyone can make. Suddenly, I get the distinct feeling that if these observations cannot be discussed without inserting your own assumptions in them from the start, then you would prefer to act as if you don’t understand the question. Upright BiPed
MathGrrl here is the paper on PI, Prescriptive Information (PI) Excerpt: The informal adjective “prescriptive” has been used for decades, if not centuries, to describe functional information. But the formal term “Prescriptive Information (PI)” first appeared in scientific literature in 2004 (Trevors and Abel, 2004), although its unnamed uniqueness and importance was delineated earlier (Abel, 2000, 2002). The formal term of PI was further developed in “More than metaphor: Genomes are objective sign systems (Abel and Trevors, 2006a, 2007): "The “meaning” (significance) of Prescriptive Information (PI) is the function that information instructs or produces at its destintion." The definitive paper on prescriptive information, especially as it relates to genetic and epigenetic controls of living metabolism, was in press for nearly two and a half years. It finally appeared in peer-reviewed literature in April of 2009 (Abel, 2009a). A closely related and integral concept of prescriptive information is Functional Sequence Complexity (FSC) (Abel and Trevors, 2005) FSC addresses the unique ability of linear digital symbol systems to represent and provide integrative controls of physical systems. A major breakthrough in semantic and biosemiotic research was the development of a method to quantify FSC, including the FSC of nucleic acids and proteins (Durston, et al., 2007). Szostak et al have shared in emphasizing the need to further qualify the nature of functional information (Szostak, 2003). An alternative attempt to measure “functional information” has also been published (Hazen, et al., 2007). Important terms relating to PI include Choice Contingency, as opposed to mere Chance Contingency and law-like necessity (Abel and Trevors, 2006b, Abel, 2009c, Trevors and Abel, 2004). The Cybernetic Cut defines a seemingly infinitely deep ravine that divides mere physicodynamic constraints from formal controls (Abel, 2008a, b). The CS Bridge is the one-way bridge across The Cybernetic Cut made possible through instantiation of formal choices into physical configurable switch-settings (Abel, 2008a). No one has ever observed PI flow in reverse direction from inanimate physicodynamics to the formal side of the ravine—the land of bona fide formal pragmatic “control.” The GS Principle states that selection for potential function must occur at the molecular-genetic level of nucleotide selection and sequencing, prior to organismic existence (Abel, 2009b, d). Differential survival/reproduction of already-programmed living organisms (natural selection) is not sufficient to explain molecular evolution or life-origin (Abel, 2009b). Life must be organized into existence and managed by prescriptive information found in both genetic and epigenetic regulatory mechanisms. The environment possesses no ability to program linear digital folding instructions into the primary structure of biosequences and biomessages. The environment also provides no ability to generate Hamming block codes (e.g. triplet codons that preclude noise pollution through a 3-to-1 symbol representation of each amino acid) (Abel and Trevors, 2006a, 2007). The environment cannot decode or translate from one arbitrary language into another. The codon table is arbitrary and physicodynamically indeterminate. No physicochemical connection exists between resortable nucleotides, groups of nucleotides, and the amino acid that each triplet codon represents. 
Although instantiated into a material symbol system, the prescriptive information of genetic and epigenetic control is fundamentally formal, not physical. http://www.us.net/life/index.htm Dr. Don Johnson explains the difference between Shannon Information and Prescriptive Information, as well as explaining 'the cybernetic cut', in the following podcast: Programming of Life - Dr. Donald Johnson interviewed by Casey Luskin - audio podcast http://www.idthefuture.com/2010/11/programming_of_life.html It is very easy, MathGrrl: instead of very involved mathematics, all you, or any other neo-Darwinist, have to do to provide 'concrete' proof of material processes generating prescriptive information is to show the origination of any novel 'code' by purely material processes: "A code system is always the result of a mental process (it requires an intelligent origin or inventor). It should be emphasized that matter as such is unable to generate any code. All experiences indicate that a thinking being voluntarily exercising his own free will, cognition, and creativity, is required. ,,,there is no known law of nature and no known sequence of events which can cause information to originate by itself in matter. Werner Gitt 1997 In The Beginning Was Information pp. 64-67, 79, 107." (The retired Dr Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), the Head of the Department of Information Technology.) Here is the challenge put to you, MathGrrl, explained so simply that a child can understand it,,, The DNA Code - Solid Scientific Proof Of Intelligent Design - Perry Marshall - video http://www.metacafe.com/watch/4060532 further note: Biophysicist Hubert Yockey determined that natural selection would have to explore 1.40 x 10^70 different genetic codes to discover the optimal universal genetic code that is found in nature. The maximum amount of time available for it to originate is 6.3 x 10^15 seconds. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that is optimal. Put simply, natural selection lacks the time necessary to find the optimal universal genetic code we find in nature. (Fazale Rana, -The Cell's Design - 2008 - page 177) Moreover, the first DNA code in life had to be at least as complex as the current DNA code found universally in life: Shannon Information - Channel Capacity - Perry Marshall - video http://www.metacafe.com/watch/5457552/ “Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible” Donald E. Johnson – Bioinformatics: The Information in Life "In the last ten years, at least 20 different natural information codes were discovered in life, each operating to arbitrary conventions (not determined by law or physicality). Examples include protein address codes [Ber08B], acetylation codes [Kni06], RNA codes [Fai07], metabolic codes [Bru07], cytoskeleton codes [Gim08], histone codes [Jen01], and alternative splicing codes [Bar10]. Donald E. 
Johnson – Programming of Life – pg.51 - 2010 further notes: Besides multiple layers of 'classical information' embedded in overlapping layers throughout the DNA, there has now been discovered another layer of 'quantum information' embedded throughout the DNA: Quantum Information In DNA & Protein Folding - short video http://www.metacafe.com/watch/5936605/ The ‘Fourth Dimension’ Of Living Systems https://docs.google.com/document/pub?id=1Gs_qvlM8-7bFwl9rZUB9vS6SZgLH17eOZdT4UbPoy0Y The relevance of continuous variable entanglement in DNA – June 21, 2010 Abstract: We consider a chain of harmonic oscillators with dipole-dipole interaction between nearest neighbours resulting in a van der Waals type bonding. The binding energies between entangled and classically correlated states are compared. We apply our model to DNA. By comparing our model with numerical simulations we conclude that entanglement may play a crucial role in explaining the stability of the DNA double helix. http://arxiv.org/abs/1006.4053v1 Quantum entanglement holds together life’s blueprint Excerpt: “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford. http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/ bornagain77
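As a quick arithmetic check on the Yockey/Rana figures quoted in this comment: the 1.40 x 10^70 candidate codes and 6.3 x 10^15 seconds are the quoted inputs, and the snippet below only performs the division, taking no position on the underlying model.

```python
# Sanity check of the rate implied by the quoted Yockey/Rana figures.
codes_to_search = 1.40e70   # distinct genetic codes (quoted figure)
time_available = 6.3e15     # seconds available (quoted figure)

rate = codes_to_search / time_available
print(f"{rate:.1e} codes per second")   # about 2.2e+54
```

The quotient comes out near 2 x 10^54, so the excerpt's "roughly 10^55 codes per second" should be read as an order-of-magnitude rounding.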
bornagain77,
DRBot, by all means, please present a concrete example of life generating any functional prescriptive information whatsoever.
This requires a rigorous mathematical definition of "functional prescriptive information" and examples of how to calculate it. Is this another variant of CSI? gpuccio made an impressive, albeit ultimately unsuccessful, attempt to do so at the four part discussion set up by Mark Frank. I would be very interested if you could help us progress that effort. MathGrrl
Joseph,
They are targeted searches.
Read the documentation at the links I provided. Your claim is incorrect.
While I’m happy to discuss those simulators in more detail, my core point above is that neither ev nor Tierra has an explicit goal.
There was just a peer-reviewed paper that exposed EV as a targeted search.
Please provide a link. If the paper made that claim, it is wrong. MathGrrl
DRBot, by all means, please present a concrete example of life generating any functional prescriptive information whatsoever. bornagain77
KF: We can observe life and study genetics to establish if or how life evolves from one species to the next. These observations are not contingent on knowing the OOL. We can study how water turns into ice without knowing the OOW. Life's origin is not directly connected to how life works as we observe it today - we can study it directly! Joseph - we observe stochastic processes at work in living systems. Stones erode stochastically - even the ones at Stonehenge. I don't understand why both of you seem to believe that we can't determine if life is evolving - even by studying it directly - unless we can determine exactly how the first living things were created. DrBot
Joseph, is this the paper? A Vivisection of the ev Computer Organism: Identifying Sources of Active Information George Montañez, Winston Ewert, William A. Dembski, Robert J. Marks II Abstract: ev is an evolutionary search algorithm proposed to simulate biological evolution. As such, researchers have claimed that it demonstrates that a blind, unguided search is able to generate new information. However, analysis shows that any non-trivial computer search needs to exploit one or more sources of knowledge to make the search successful. Search algorithms mine active information from these resources, with some search algorithms performing better than others. We illustrate these principles in the analysis of ev. The sources of knowledge in ev include a Hamming oracle and a perceptron structure that predisposes the search towards its target. The original ev uses these resources in an evolutionary algorithm. Although the evolutionary algorithm finds the target, we demonstrate a simple stochastic hill climbing algorithm uses the resources more efficiently. http://evoinfo.org/publications/a-vivisection-of-ev/ further notes: Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism – Dembski – Marks – Dec. 2009 Excerpt: The effectiveness of a given algorithm can be measured by the active information introduced to the search. We illustrate this by identifying sources of active information in Avida, a software program designed to search for logic functions using nand gates. Avida uses stair step active information by rewarding logic functions using a smaller number of nands to construct functions requiring more. Removing stair steps deteriorates Avida’s performance while removing deleterious instructions improves it. http://evoinfo.org/publications/evolutionary-synthesis-of-nand-logic-avida/ Constraints vs. Controls – Abel – 2010 Excerpt: Classic examples of the above confusion are found in the faulty-inference conclusions drawn from many so-called “directed evolution,” “evolutionary algorithm,” and computer-programmed “computational evolutionary” experimentation. All of this research is a form of artificial selection, not natural selection. Choice for potential function at decision nodes, prior to the realization of that function, is always artificial, never natural. http://www.bentham.org/open/tocsj/articles/V004/14TOCSJ.pdf Yet MathGrrl and DrBot, does it not strike you in the least bit particular that you guys are squabbling over whether evolutionary algorithms generate any functional prescriptive information whatsoever, when the simplest life easily outclasses the best computer programs man has ever devised???? As well it strikes me as very peculiar that you guys are arguing so strenuously for 'proof of evolution' in the first place with what are clearly intelligently designed computer programs!?! Should not you guys, if you are truly trying to impress people with the overwhelming validity of neo-Darwinism, use concrete examples from life itself instead of using devised computer programs??? But then again perhaps you guys should stick with trying to fool people into believing in neo-Darwinism with devised computer programs instead of life since life itself offers you no relief from your poverty of evidence,,, The GS (genetic selection) Principle – David L. Abel – 2009 Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. 
Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.” http://www.scitopics.com/The_GS_Principle_The_Genetic_Selection_Principle.html As well it bothers me that you guys are so eager to divorce evolution from the origin of life issue,,, One Million Dollar Origin Of Life Prize; http://www.us.net/life/index.htm For if purely material processes have such a extremely difficult time accounting for any functionally coded information whatsoever, as is shown by extensive origin of life research, then what in blue blazes makes you think that when replication is thrown on top of material processes you will suddenly overcome this universe wide chasm that separates functional information and material processes? Don't you guys see the disconnect in your logic??? The Capabilities of Chaos and Complexity - David L. Abel - 2009 Excerpt: "A monstrous ravine runs through presumed objective reality. It is the great divide between physicality and formalism. On the one side of this Grand Canyon lies everything that can be explained by the chance and necessity of physicodynamics. On the other side lies those phenomena than can only be explained by formal choice contingency and decision theory—the ability to choose with intent what aspects of ontological being will be preferred, pursued, selected, rearranged, integrated, organized, preserved, and used. Physical dynamics includes spontaneous non linear phenomena, but not our formal applied-science called “non linear dynamics”(i.e. language,information). http://www.mdpi.com/1422-0067/10/1/247/pdf bornagain77
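The "Hamming oracle" that the Montañez et al. vivisection paper cited in this comment identifies in ev can be illustrated with a minimal sketch. Everything concrete below (the target string, the alphabet, the acceptance rule) is a hypothetical stand-in chosen only to show how a fitness function that reports distance to a fixed target lets a simple stochastic hill climber home in on that target, which is the "active information" the paper describes; it is not the ev code itself.

```python
import random

# Hypothetical target and alphabet; any fixed string would do.
TARGET = "METHINKS_IT_IS_LIKE_A_WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ_"

def hamming_oracle(candidate: str) -> int:
    """The 'oracle': report how many positions differ from the fixed target."""
    return sum(a != b for a, b in zip(candidate, TARGET))

def stochastic_hill_climb(seed: int = 0) -> int:
    """Mutate one random position at a time; keep the change only if the
    oracle says the candidate is no farther from the target. Returns the
    number of mutation steps needed to reach the target exactly."""
    rng = random.Random(seed)
    current = "".join(rng.choice(ALPHABET) for _ in TARGET)
    steps = 0
    while hamming_oracle(current) > 0:
        pos = rng.randrange(len(TARGET))
        mutant = current[:pos] + rng.choice(ALPHABET) + current[pos + 1:]
        if hamming_oracle(mutant) <= hamming_oracle(current):
            current = mutant
        steps += 1
    return steps

print(stochastic_hill_climb())  # typically a few thousand steps, versus 27**28 blind guesses
```

Whether one reads this as a search "generating information" or as information smuggled in through the oracle is exactly the dispute in this thread; the sketch only makes the mechanics explicit.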
Dr Bot: Your analogy breaks down. Water's origin is not inherently connected to how water is said to be evolving into something else. You are looking at a theory of the origin of species that first begs the question of the origin of the first species, and the same question of the origin of functional information is begged again at the origin of novel body plans. GEM of TKI kairosfocus
Wow DrBot- nice of you to ignore my explanation: Ya see, if living organisms did not arise from non-living matter via stochastic processes, then there would be no reason to infer stochastic processes are solely responsible for their subsequent diversity. Determining that Stonehenge was designed was a huge factor in how it was investigated. True, we don't know how it was designed and constructed, but we investigate it differently from rock formations nature creates. Just about everything changes when the answer to "how did it come to be that way?" switches from nature operating freely to intelligent design. Joseph
Joseph: I think the more fundamental point is that they are searches within target zones that have already been hit. You are right that the programs preload fitness functions and in effect start at a given point on the shoreline, then toss out a ring of random points. The direction of steepest ascent is then inferred from whichever points most improve matters, and this leads to hill-climbing. This builds in all sorts of targeting info, and leads to an implicit targeting that tracks functional peaks. What is not built in is the observed gradual degradation of function, ending in extinction into non-function through genetic degradation. If one good micro-mutation that may push up the hill a bit has to reckon with thousands of small degradations that gradually reduce overall function, the overwhelming trend will be sliding down the hill into the sea of non-function. That would especially happen if the functionality loses resilience as more and more sub-functions and flexibility are lost. Then push in an environmental catastrophe and, bingo, function vanishes: extinction of the gradually less resilient that then become unfit. Yet another way in which big questions are being begged. GEM of TKI kairosfocus
Yes, whenever you study how something works there is always a question of how it came to be. Studying water begs the question of the origin of water. The point, which you seem to have missed, is that you don't need to answer the question of OOW (Origin Of Water) in order to study how water works. Inferring a designer begs the same questions of course! Joseph:
You need to know HOW life came to be in order to figure out HOW it evolved.
No you don't - you only need to know how life came to be if you want to understand how it came to be. DrBot
F/N: Onlookers, in effect Darwinian-style theory, despite pretensions, is a theory of micro-adaptations, or what could be called microevo, which is not even disputed by modern Young Earth Creationists. That is then, without acknowledging the leap, simply extrapolated into the very different issue of macroevo, i.e. getting to novel body plans [requiring upwards of 10 mn bits of dFSCI, unaccounted for on the undirected search capacity of the observed universe], while the root question of getting to metabolising, self-replicating, cell-based life that implements a vNSR [100+ k bits of dFSCI] is simply begged quietly. kairosfocus
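For readers who want to see what the bit figures in this comment amount to, converting bits to configuration-space size is mechanical. The thresholds themselves are kairosfocus's figures; the snippet merely converts them and makes no claim about how much of each space is functional.

```python
from math import log10

# Convert a bit count into the base-10 exponent of the number of possible configurations.
for label, bits in [("100 k bits", 100_000), ("10 M bits", 10_000_000)]:
    exponent = bits * log10(2)          # 2**bits configurations = 10**exponent
    print(f"{label}: about 10^{exponent:,.0f} configurations")
```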
G: Pardon, but a matter of greater moment is now on the table for you, as 130 above highlights: civility. Are you now willing to withdraw your accusation that I have misrepresented the authors of ev etc. by claiming they have set explicit targets, since it should be plain that my point is that their intelligence set up the whole exercise within the target zone, i.e. the targeting exercise is already done by the time we see the programs? GEM of TKI kairosfocus
Dr Bot: Pardon, but that boils down to your accepting the begging of the question. GEM of TKI kairosfocus
MathGrrl:
Simulations like ev and Tierra are modeling evolution.
They don't model evolution according to the theory of evolution. They are targeted searches.
While I’m happy to discuss those simulators in more detail, my core point above is that neither ev nor Tierra has an explicit goal.
There was just a peer-reviewed paper that exposed EV as a targeted search. Go figure... Joseph
DrBot:
You don’t need to know how life came to be in order to discover if it can evolve – we can study life to see if it does.
You need to know HOW life came to be in order to figure out HOW it evolved. Ya see, if living organisms did not arise from non-living matter via stochastic processes, then there would be no reason to infer stochastic processes are solely responsible for their subsequent diversity. Joseph
MathGrrl: Thanks ;) DrBot
Upright BiPed,
Your characterization is your own; feel free to reject it. - - - - - What is necessary for a symbolic relationship to exist between two discrete objects?
Again with the rhetorical devices? If you have a position you think is strong, please just state it clearly so that we can discuss it. Thus far, the most I can glean from your posts in this thread, with the most generous interpretation I can manage, is: Humans use symbols. It seems like you're trying to get from that observation to some point about there being symbols in biological organisms, but that would be mere equivocation and word games. I can't see the argument from your statements to date in any case. Do you actually want to discuss your views or not? MathGrrl
DrBot, Very well put. You have summarized my position succinctly and eloquently. Thanks. MathGrrl
KF, Is the germ theory of disease similarly flawed? After all, it only looks at microorganisms and how their behaviour affects disease; this begs the question of their origin, but the theory avoids answering it.
n –> FYI, MG, I have nowhere stated that ev or Tierra have EXPLICIT target points that they move towards, but instead that they beg the real question by setting up a strawman issue: moving around in an island of function.
They are set up to explore islands of functionality - this is what the experiments are about - they are not set up to study how self replicators could arise by natural means - this is NOT what the experiments are about. If I set up an experiment to study how replicating organisms change from generation to generation when subject to selection pressures, then this is what my experiment is set up to do - it is not a strawman to study this aspect of evolutionary theory but not explain how self replicators could arise naturally - they are two different questions. Of course the existence of life begs the question of its origins, but so does the existence of oxygen and water, of planets and moons. We can study life, moons, atoms, and understand how they work - what rules govern their lives - the freezing and boiling points of water will NOT change if we discover that water was designed, so we don't need to know if it was designed in order to understand how it is affected by temperature. Life could have been designed, but it could also be evolving because it was DESIGNED to evolve - we can determine if life evolves empirically without having an answer to the question of life's origins. Your position that we can't study the evolution (or not) of life today because we don't know if it is the product of a mind can apply to all aspects of science, so it would render science impossible:
Evolutionary theory is focused on what happens once replicators exist. e –> TRANSLATION: having no cogent answer on OOL, the question is quietly begged, leaving the whole darwinian tree of life without a viable root
Tectonic plate theory is focussed on what happens once a planet with a molten core has formed -> TRANSLATION: having no cogent answer on OOP (Origin Of Planets), the question is quietly begged, leaving the whole drifting of continents without a viable root. You don't need to know how life came to be in order to discover if it can evolve - we can study life to see if it does. You don't need to know how a planet formed in order to discover if continents can drift - we can study continents to see if they do. DrBot
PS: I am sure you know that much of Darwin's theory was responding to Paley's watchmaker idea. What is significant to the above is that Paley, in Ch. 2, discussed the idea of a self-replicating watch, what would be needed for that, and where it would point. That is, he implied or at least foreshadowed the question of OOL (where an already functional entity, a watch, now has the added facility of self-replication), and of the vNSR. Darwin could not have been ignorant of this issue; he just ducked it with a few weasel words about a creator at the end of Origin. 150 years later, that cannot continue. (Not to mention, HS and college texts routinely discuss OOL in connexion with evolution, typically in an a priori, implicitly materialistic context. The question must now be un-begged, I am afraid.) kairosfocus
MG: Following up, let's do a bit of noting on points, re your 125 in rebuttal to my now 117: __________________ >> Abiogenesis research is focused on how the first replicators arose. a --> The root of Darwin's tree of life is the first cluster of ancestral organisms. b --> without viable dynamics on this the whole macro evolutionary edifice is a tree cut off from its root. (And, from outset as the close off at ch 15 of origin shows, in the tangled bank remarks, Darwin's theory was about Macro evo from 1859; just he was following Wilson's manipulative dictum in Arte of Rhetorique, that if you have a weak point studiously pass over it in silence and shift the focus elsewhere. It worked.) c --> so macroevo is a massive begging of the question of getting first the metabolising self replicator. Hence the relevance of the von Neumann Self Replicator [vNSR] issue. d --> Without a vNSR cell empirically justified as spontaneously coming from chance plus necessity in the still little electrified pond or wherever, there is no root to evolutionary materialistic theories. So, do let us know how a code based self replicator was added as a facility to a metabolising entity . . . Evolutionary theory is focused on what happens once replicators exist. e --> TRANSLATION: having no cogent answer on OOL, the question is quietly begged, leaving the whole darwinian tree of life without a viable root Simulations like ev and Tierra are modeling evolution. f --> Begged questions 2 - 5:
BQ 2: It is assumed a priori, a la Lewontin & Sagan et al., that materialistic evolution is the only credible explanation of origins:
To Sagan, as to all but a few other scientists, it is self-evident that the practices of science provide the surest method of putting us in contact with physical reality . . . . we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [[From: “Billions and Billions of Demons,” NYRB, January 9, 1997.]
BQ 3: Tierra, Ev and kin are then trotted out as showing how 'twerdun, but this is begging the question of actual empirical warrant for that assumption. BQ 4: Ev and Tierra [et al.], as I showed at 117, IMPLICITLY work from the start-point of incremental improvements within an island of function, i.e. the issue of getting to the shores of an island of function in the midst of a vast sea of non-functional configs is begged. (Put another way, you have no right to assume that a fitness landscape is so conveniently shaped. You need to be getting to the first set of islands of function at 100 - 1000 k bits of info, then onward to provide new body plans at 10 - 100 mn or more, before trying to play around within any such islands to get to peaks by incremental hill-climbing.) BQ 5: Such GAs also provide a designed context that purposefully sets up a working software package, instead of providing the software ab initio from chance plus necessity, as I suggested at what is now 119 above.
While I’m happy to discuss those simulators in more detail, g --> First, kindly answer to the excerpts at 117, where I summarise from the sites how these GA's beg the question of getting to shores of an island of function. h --> Incremental hill climbing or niche specialisation within such an island was never a serious question, as we have had breeds of dogs and varieties of fruit trees for thousands of years. That is partly through breeding out to specialisms from general blended forms, and partly by small mutations that usually destroy bits and pieces of genetic info, e.g. to get the smashed mouth of the English Bulldog, or the muts that gave rise to blond and red haired humans. my core point above is that neither ev nor Tierra has an explicit goal. i --> And, where did I ever say they had an EXPLICIT goal built into their algors? j --> Have I not, as just again, rather objected that hey are set up on the target already, i.e within an island of function, through a process that is purposefully designed and constructed? k --> Do you not see that this is a red herring led out to a strawman, by putting words in my mouth that simply do not belong there? That is clear from their readily available documentation. l --> More on the strawman, now being set up with the ad hominem that I have falsely characterised what they did, i.e the oil of ad hominems -- in turnabout accusation form, is being poured on the strawman. Your claim otherwise is incorrect. m --> Ignition, so now I am ignorant, stupid, insane or an outright liar. See how Dawkins et al have utterly poisoned the atmosphere? Did it ever occur to you that I might instead be pointing out something you have evidently overlooked, including when I cited and highlighted the problem at 117? Let me clip from that post, assuming that you will take time to read the highlighted web clips:
[Re tiera] See the core problem? You START in an island of function when the problem is to get TO an island of function. That begs question no 1. So, what we have is an optimisation program that depends on preloaded information about peaks of performance, and preloaded prestested functional information produced by purposeful intelligent design, to do some hill climbing. And of course one of the biggest already solved problems is the origin of symbolic, meaningful, functional messages and the executing machinery that makes it work. Which, is what UB was getting at . . . . [Re ev] In short, Ev, too, is operating within an island of existing function, using a symbolic, meaningful, coded entity set up and developed by highly intelligent designers, and is set up to reward fine increments in function. Oops, again. The problem, is to get TO such islands of function, whether we are talking origin of metabolising self-replicating life or origin of embryologically, environmentally and reproductively feasible novel body plans. No one disputes that within an island of fucntion, some variations may be =rewarded artificially or naturally., including Young Earth Creationists. The problem that repeatedly keeps getting lost in the excited discussions of moving around within islands of function — and the denunciations of those who challenge the problem — is the root problem: getting TO the islands of function in a sea of non-function, on chance plus necessity WITHOUT intelligence.
n --> FYI, MG, I have nowhere stated that ev or Tierra have EXPLICIT target points that they move towards, but instead that they beg the real question by setting up a strawman issue: moving around in an island of function. (Weasel is worse: it rewards non-functional configs on increments in proximity to a pre-loaded target. Ev, Tierra, Avida et al. are slightly more clever, without an explicit target, but they too are begging big questions loaded with the import that FSCI is the product of design, as only designers have the capacity, per observations, to use insight, purpose and knowledge to put complex systems at or near islands of function in vast seas of non-function. Throwing out rings of random sample points on the shores of an island of function, then climbing the steepest ascent or a steep ascent, does not answer the decisive issue of getting to the shores of such an island in the first place. Similarly, crispifying performance in a fuzzy control system or moving to optimal antenna designs are based on oodles of built-in knowledge.) o --> The real, and silently passed-by, issue is to get to such islands of function in the midst of vast seas of non-function, where the infinite monkeys theorem applies. >> ___________________ I trust we can now start over, on a more reasonable basis, with the atmosphere cleared of the poisonous, polarising Dawkinsian framing. GEM of TKI kairosfocus
Mathgrrl, Your characterization is your own; feel free to reject it. - - - - - What is necessary for a symbolic relationship to exist between two discrete objects? Upright BiPed
MG: I repeat, the problem is begging the question, by starting in an island of function, whereby what is being explained is no mystery: hill-climbing. (Way back in my first calculus class, I learned this as one way to find a maximum or minimum.) The root problem is to get to the island of function, for both the original problem, first life; and for novel body plans. GEM of TKI kairosfocus
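The calculus-class sense of hill-climbing mentioned in this comment can be shown in a few lines; the quadratic "fitness" and step size below are arbitrary illustration choices. Note that the climb presupposes a smooth slope to stand on, which is the island-of-function point being argued above.

```python
def fitness(x: float) -> float:
    return -(x - 3.0) ** 2          # a smooth toy 'fitness' with a single peak at x = 3

def slope(x: float) -> float:
    return -2.0 * (x - 3.0)         # derivative of the fitness, used for steepest ascent

x, step = 0.0, 0.1
for _ in range(100):
    x += step * slope(x)            # move a little in the uphill direction each iteration
print(round(x, 4))                  # converges to 3.0, the peak
```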
Eugen, I really appreciate Wheeler's work. As well I really appreciate Zeilinger's work which had built on Wheeler's insights: "It from bit symbolizes the idea that every item of the physical world has at bottom - at a very deep bottom, in most instances - an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that things physical are information-theoretic in origin." John Archibald Wheeler Why the Quantum? It from Bit? A Participatory Universe? Excerpt: In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of gospel according to John: "In the beginning was the Word." Anton Zeilinger - a leading expert in quantum teleportation: http://www.metanexus.net/Magazine/ArticleDetail/tabid/68/id/8638/Default.aspx Zeilinger's principle The principle that any elementary system carries just one bit of information. This principle was put forward by the Austrian physicist Anton Zeilinger in 1999 and subsequently developed by him to derive several aspects of quantum mechanics. http://science.jrank.org/pages/20784/Zeilinger%27s-principle.html#ixzz17a7f88PM In the beginning was the bit - New Scientist Excerpt: Zeilinger's principle leads to the intrinsic randomness found in the quantum world. Consider the spin of an electron. Say it is measured along a vertical axis (call it the z axis) and found to be pointing up. Because one bit of information has been used to make that statement, no more information can be carried by the electron's spin. Consequently, no information is available to predict the amounts of spin in the two horizontal directions (x and y axes), so they are of necessity entirely random. If you then measure the spin in one of these directions, there is an equal chance of its pointing right or left, forward or back. This fundamental randomness is what we call Heisenberg's uncertainty principle. http://www.quantum.at/fileadmin/links/newscientist/bit.html Quantum Entanglement and Teleportation - Anton Zeilinger - video http://www.metacafe.com/watch/5705317/ bornagain77
Upright BiPed,
In the definition of ‘symbol’ you posted above, each of the examples given centers on the phenomenon of two distinct and separate things having a semiotic relationship established between them (i.e., a limousine having a relationship to the idea of wealth and authority, a written notation on a sheet of music noting the manner in which the music is to be played, the abbreviated notations a chemist might make to denote certain chemicals). One could easily expand those examples to any number of other instances where one discrete thing is symbolically mapped to another discrete thing. How does that semiotic relationship become established?
I find your attempted use of Socratic dialog to be a transparent rhetorical device used in an attempt to demonstrate dominance -- you as the teacher and me as the student. I reject this relationship. I would like to understand your point and have a discussion between equals. Please just state your position and we can proceed. MathGrrl
kairosfocus,
You START in an island of function when the problem is to get TO an island of function.
Abiogenesis research is focused on how the first replicators arose. Evolutionary theory is focused on what happens once replicators exist. Simulations like ev and Tierra are modeling evolution. While I'm happy to discuss those simulators in more detail, my core point above is that neither ev nor Tierra has an explicit goal. That is clear from their readily available documentation. Your claim otherwise is incorrect. MathGrrl
Bornagain, thanks for "The foundation of reality itself is information!" Please read a bit about my favorite physicist, John Wheeler: http://suif.stanford.edu/~jeffop/WWW/wheeler.txt Eugen
Those are good, but I liked your clear point, at 114, about evolutionary algorithms starting out in islands of function instead of vice versa: https://uncommondescent.com/intelligent-design/oh-you-mean-there-really-is-a-bias-in-academe-against-common-sense-and-rational-thought/#comment-372414 bornagain77
BA: H'mm, that's a new thought, maybe I need to write something specific. Meanwhile, what do you think of my discussions here, here and here? GEM of TKI kairosfocus
Mathgrrl, In the definition of ‘symbol’ you posted above, each of the examples given centers on the phenomenon of two distinct and separate things having a semiotic relationship established between them (i.e., a limousine having a relationship to the idea of wealth and authority, a written notation on a sheet of music noting the manner in which the music is to be played, the abbreviated notations a chemist might make to denote certain chemicals). One could easily expand those examples to any number of other instances where one discrete thing is symbolically mapped to another discrete thing. How does that semiotic relationship become established? Upright BiPed
kf, thanks for the clarity, do you have a page I can reference on evolutionary algorithms? bornagain77
PS: If Ev and Tierra had first generated the base code for the programs by random generation filtered for function, we would have something to talk about here. We could even start by generating basic modules of scope 1,000 bits (= 125 bytes, small for a program) or so, then chaining these basic modules at random until higher-order functions emerge. But alas, that is not what they do. [Of course that would boil down to my infinite monkeys test of FSCI origin.] kairosfocus
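The "random generation filtered for function" test proposed in this PS can be sketched in miniature. The snippet below is a deliberately toy version: "function" is reduced to matching one of a few hypothetical accepted strings, and the search space is tiny compared with the 1,000-bit modules proposed above, but it shows how quickly a fixed sampling budget stops producing hits as the required specification grows.

```python
import random
import string

rng = random.Random(0)
ALPHABET = string.ascii_uppercase + "_"          # 27 symbols
BUDGET = 1_000_000                               # random draws allowed per test

# Hypothetical stand-ins for a functionality test at increasing specification lengths.
ACCEPTED = {4: {"GENE", "CODE"}, 8: {"GENETICS"}, 12: {"POLYMERASES_"}}

for length, accepted in ACCEPTED.items():
    hits = sum("".join(rng.choices(ALPHABET, k=length)) in accepted for _ in range(BUDGET))
    print(f"length {length}: space 27^{length} = {27 ** length:.1e}, hits in {BUDGET:,} draws: {hits}")
```

With this budget the shortest targets are typically found a few times, while the longer ones essentially never are; that exponential fall-off is the point being argued, whatever one concludes from it.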
MathGrrl, this is simply ludicrous, you innocently state; 'I thought we were discussing whether or not ev and Tierra are goal directed.' when I pointed out the fact that reality itself is founded on information. That Fact of how reality is actually constructed is of no trivial concern MathGrrl!!!! John 1:1-3 In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through him all things were made; without him nothing was made that has been made. and indeed MathGrrl, THAT FACT goes to the very heart of the issue, namely the failure of anyone to demonstrate a 'non-trivial' gain in information that would explain the monstrously complex information we find in the simplest of life.,,, Three Subsets of Sequence Complexity and Their Relevance to Biopolymeric Information - David L. Abel and Jack T. Trevors - Theoretical Biology & Medical Modelling, Vol. 2, 11 August 2005, page 8 "No man-made program comes close to the technical brilliance of even Mycoplasmal genetic algorithms. Mycoplasmas are the simplest known organism with the smallest known genome, to date. How was its genome and other living organisms' genomes programmed?" http://www.biomedcentral.com/content/pdf/1742-4682-2-29.pdf MathGrrl The only reason you are forced to play in your dream-world of evolutionary algorithms generating 'trivial goal-directed' information in the first place is because no one has demonstrated a violation of genetic entropy by passing this simple test; Is Antibiotic Resistance Evidence For Evolution? - The Fitness Test http://www.metacafe.com/watch/3995248/ much less has anyone demonstrated that purely material processes have the ability to generate even the simplest of life (even with all the help of experts trying to get life from non-life),,, Shoot as I pointed out before the problem is information! The foundation of reality itself is information! And as far as we can tell Information always comes from a mind, never from any material processes! bornagain77
MG: I see your 101:
BA: . . . but alas MathGrrl, THEY ARE ALL goal directed; MG: Nope, they’re not . . .
The bland declaration, of course, is not enough to answer the point. For instance, on the About Tierra page:
The Tierra C source code creates a virtual computer and its Darwinian operating system, whose architecture has been designed in such a way that the executable machine codes are evolvable. This means that the machine code can be mutated (by flipping bits at random) or recombined (by swapping segments of code between algorithms), and the resulting code remains functional enough of the time for natural (or presumably artificial) selection to be able to improve the code over time.
See the core problem? You START in an island of function when the problem is to get TO an island of function. That begs question no. 1. So, what we have is an optimisation program that depends on preloaded information about peaks of performance, and preloaded, pre-tested functional information produced by purposeful intelligent design, to do some hill climbing. And of course one of the biggest already-solved problems is the origin of symbolic, meaningful, functional messages and the executing machinery that makes it work. Which is what UB was getting at. Oops. As regards EV, we see on the FAQ page:
the input parameters define Rfrequency, which is determined by information put into the program, but that is not the information being measured from the organisms. Remember that we are measuring Rsequence from patterns in the genome, and this starts out near zero bits, as you can see from the green curve in the graph [NB: but this is set up so that even a few bits will create a rise in function -- when the threshold for life is ~ 100 - 1,000 kbits ab initio, and for major body plans 10 - 100 + Mbits a few bits is well within power of random walk to see quick results. Ab initio requirements of that scope are not within the capability of the observed cosmos -- oops!] . Also, the size of the genome and number of required sites can be set to a wide variety of values and yet Rsequence still evolves towards Rfrequency. This only happens by replication, mutation and selection, demonstrating that those factors are necessary and sufficient for information gain to occur [Goal post moving: the information that is required to get to the island of function is already built in long since, and the increments in info to gain function are unrealistically low, so we have basically a programmed search in a nice space for an optimum] . . . . so long as the recognition function gives a finely graded and ordered response to input sequences. [finely graded means that tiny steps will function, but we credibly need steps of order 100 kbits and more] In the Ev program, recognition is done using a numerical matrix of numbers, encoded in the genomes.
In short, Ev, too, is operating within an island of existing function, using a symbolic, meaningful, coded entity set up and developed by highly intelligent designers, and is set up to reward fine increments in function. Oops, again. The problem is to get TO such islands of function, whether we are talking about the origin of metabolising, self-replicating life or the origin of embryologically, environmentally and reproductively feasible novel body plans. No one disputes, including Young Earth Creationists, that within an island of function some variations may be rewarded, artificially or naturally. The problem that repeatedly keeps getting lost in the excited discussions of moving around within islands of function -- and the denunciations of those who challenge the problem -- is the root problem: getting TO the islands of function in a sea of non-function, on chance plus necessity WITHOUT intelligence. BA is right, and MG you are not. GEM of TKI kairosfocus
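For readers unfamiliar with the two quantities named in the ev FAQ excerpt quoted just above, Rfrequency and Rsequence come from Schneider's information-theoretic treatment of binding sites: Rfrequency is the number of bits needed to pick out the sites in a genome of a given size, and Rsequence is the information content measured from the aligned site sequences. The sketch below follows those published definitions but omits the small-sample correction, and the five-base alignment is made-up toy data, not ev output.

```python
from math import log2
from collections import Counter

def r_frequency(genome_length, n_sites):
    """Bits needed to locate n_sites among genome_length positions (Schneider's Rfrequency)."""
    return log2(genome_length / n_sites)

def r_sequence(sites):
    """Information content of an alignment of equal-length DNA sites:
    sum over positions of (2 - H_i) bits; small-sample correction omitted."""
    total = 0.0
    for column in zip(*sites):
        freqs = [count / len(column) for count in Counter(column).values()]
        entropy = -sum(p * log2(p) for p in freqs)
        total += 2.0 - entropy
    return total

sites = ["ACGTA", "ACGTT", "ACGGA", "ACGTA"]        # toy alignment, not real ev output
print(r_frequency(genome_length=256, n_sites=16))   # 4.0 bits to specify the site locations
print(round(r_sequence(sites), 2))                  # about 8.38 bits measured from the toy sites
```

The ev claim quoted above is that Rsequence, measured this way from the evolving genomes, rises until it matches Rfrequency; the dispute in this thread is over how much of that outcome is built into the setup.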
Eugen: I see you are having fun. We live on a privileged planet, around a privileged star, in a privileged zone of a privileged galaxy, in a highly privileged cosmos. The HR diagram is just the beginning! Dr Bot, I think, wanted to ask rather than answer. And the question he asked was on a tangent to the point I answered: getting dFSCI out of a noise generator is a test of the design inference, and it shows the problem of finding islands of function in vast seas of non-function by chance and/or necessity without intelligence. Until you have a self-replicating coded entity you cannot appeal to the idea that small changes in the system may improve its functionality and drive evolutionary change. Of course there is the neutral mutation issue and the questions of long-term drift and degradation, i.e. genetic entropy. But I need to pause on that to answer MG. GEM of TKI kairosfocus
bornagain77,
Something else for you to consider MathGrrl is the fact that information is shown to be separate from, foundational to, and thus greater than, matter or energy.
I thought we were discussing whether or not ev and Tierra are goal directed. MathGrrl
Upright BiPed, symbol |ˈsimbəl| noun a thing that represents or stands for something else, esp. a material object representing something abstract: the limousine was another symbol of his wealth and authority. See note at emblem. • a mark or character used as a conventional representation of an object, function, or process, e.g., the letter or letters standing for a chemical element or a character in musical notation. • a shape or sign used to represent something such as an organization, e.g., a red cross or a Star of David. verb ( -boled, -boling; Brit. -bolled, -bolling) [ trans. ] archaic symbolize. ORIGIN late Middle English (denoting the Apostles' Creed): from Latin symbolum ‘symbol, Creed (as the mark of a Christian),’ from Greek sumbolon ‘mark, token,’ from sun- ‘with’ + ballein ‘to throw.’ Amusing as it is to cut and paste from my electronic dictionary, I was actually hoping for you to clearly state your assumptions and rephrase your questions so that I could understand them. If you are using any terms in non-standard ways, please provide your definitions as well. Given that base, we ought to be able to communicate effectively, don't you think? MathGrrl
Mathgrrl, You need it more simple? Sure, no problem. Do you know what a symbol is? Upright BiPed
further note: Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH Excerpt: It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate. http://journals.witpress.com/journals.asp?iid=47 Quantum Information In DNA & Protein Folding - short video http://www.metacafe.com/watch/5936605/ 4-Dimensional Quarter Power Scaling In Biology - short video http://www.metacafe.com/watch/5964041/ The ‘Fourth Dimension’ Of Living Systems https://docs.google.com/document/pub?id=1Gs_qvlM8-7bFwl9rZUB9vS6SZgLH17eOZdT4UbPoy0Y "Information is information, not matter or energy. No materialism which does not admit this can survive at the present day." Norbert Weiner - MIT Mathematician - Father of Cybernetics Quantum entanglement holds together life’s blueprint - 2010 Excerpt: “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford. http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/ bornagain77
MathGrrl, I understand upright completely. Perhaps this short video will help you understand: The DNA Code - Solid Scientific Proof Of Intelligent Design - Perry Marshall - video http://www.metacafe.com/watch/4060532/ Something else for you to consider MathGrrl is the fact that information is shown to be separate from, foundational to, and thus greater than, matter or energy. Thus please tell me exactly why I should expect that which is less than to spontaneously give rise to that which is greater than itself? Ions have been teleported successfully for the first time by two independent research groups Excerpt: In fact, copying isn't quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable - it is enforced by the laws of quantum mechanics, which stipulate that you can't 'clone' a quantum state. In principle, however, the 'copy' can be indistinguishable from the original (that was destroyed),,, http://www.rsc.org/chemistryworld/Issues/2004/October/beammeup.asp Atom takes a quantum leap - 2009 Excerpt: Ytterbium ions have been 'teleported' over a distance of a metre.,,, "What you're moving is information, not the actual atoms," says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second. http://www.freerepublic.com/focus/news/2171769/posts How Teleportation Will Work - Excerpt: In 1993, the idea of teleportation moved out of the realm of science fiction and into the world of theoretical possibility. It was then that physicist Charles Bennett and a team of researchers at IBM confirmed that quantum teleportation was possible, but only if the original object being teleported was destroyed. --- As predicted, the original photon no longer existed once the replica was made. http://science.howstuffworks.com/teleportation1.htm Quantum Teleportation - IBM Research Page Excerpt: "it would destroy the original (photon) in the process,," http://www.research.ibm.com/quantuminfo/teleportation/ Unconditional Quantum Teleportation - abstract Excerpt: This is the first realization of unconditional quantum teleportation where every state entering the device is actually teleported,, http://www.sciencemag.org/cgi/content/abstract/282/5389/706 bornagain77
Upright BiPed, I'm afraid I can't make any sense out of your questions in 103 and 104. Could you please rephrase to avoid loading them with what appear to be your assumptions, or at least state those assumptions explicitly? Thanks. MathGrrl
Mathgrrl, What I am asking should be simple given the prevailing certainty. What exactly is the process whereby a chemical compound assigns meaning to other chemical compounds and then, using that meaning, organizes them into a function in order to serve a purpose? Upright BiPed
Mathgrrl, Do you know of any recorded information (information instantiated into matter) that came to exist by means of unguided processes? By "unguided" I mean without the aid or input of a mind. Do you know of any recorded information that doesn't exist by means of an abstraction - symbolic representation? Do you know of any symbols that were assigned meaning by means of unguided processes? Upright BiPed
MathGrrl, if you don't mind, I'll wait for your, or any other neo-Darwinists, peer-reviewed refutation of Abel's null hypothesis. The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf Can We Falsify Any Of The Following Null Hypothesis (For Information Generation) 1) Mathematical Logic 2) Algorithmic Optimization 3) Cybernetic Programming 4) Computational Halting 5) Integrated Circuits 6) Organization (e.g. homeostatic optimization far from equilibrium) 7) Material Symbol Systems (e.g. genetics) 8) Any Goal Oriented bona fide system 9) Language 10) Formal function of any kind 11) Utilitarian work http://mdpi.com/1422-0067/10/1/247/ag The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010 Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.” http://www.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html The GS (genetic selection) Principle – David L. Abel – 2009 Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. 
What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.” http://www.scitopics.com/The_GS_Principle_The_Genetic_Selection_Principle.html http://www.us.net/life/index.htm Dr. Don Johnson explains the difference between Shannon Information and Prescriptive Information, as well as explaining 'the cybernetic cut', in this following Podcast: Programming of Life - Dr. Donald Johnson interviewed by Casey Luskin - audio podcast http://www.idthefuture.com/2010/11/programming_of_life.html bornagain77
bornagain77,
‘Evolutionary algorithms such as ev and Tierra (neither of which is explicitly “goal directed”) do reflect those mechanisms, in simplified form, so it is not surprising that they work.’ but alas MathGrrl, THEY ARE ALL goal directed;
Nope, they're not. Please read the websites of each program (available here for ev and here for Tierra) for yourself if you don't believe me. By the way, both you and kairosfocus have written a lot in this thread about CSI and its variants. Mark Frank created a thread on his blog where gpuccio worked with me and several other people to quantify his dFSCI concept. We weren't successful, but perhaps you two could assist. I hope Mark doesn't think it presumptuous of me to post the link to the first of four threads here. MathGrrl
Hi Kairos, I'm reading your IOSE blog, where I started with my favorite: astronomy-cosmology. That reminded me of a recent article on M dwarfs; follow this http://www.centauri-dreams.org/?p=16327 for an interesting read. M dwarfs (red dwarfs) make up 76% of all stars in the galaxy. They produce frequent and powerful flares which would destroy life on any planet orbiting such a star. The implications are quite interesting. The original research paper can be found at http://arxiv.org/PS_cache/arxiv/pdf/1012/1012.0577v1 (I tried to embed links but it just would not take them). Looks like we can eliminate 76% of stars as life-friendly from the get-go. It would seem there are still plenty of stars left. Unfortunately, most of those are supergiants, bright giants, giants, subgiants, subdwarfs and white dwarfs. The most suitable are G main-sequence dwarfs like our Sun, if they are in the quite narrow galactic habitable zone. You talk about that around the Hertzsprung-Russell diagram in the IOSE article on cosmology. Hopefully a G main-sequence star would also be extremely stable like our Sun. Even if we find a Sun-like star with planets, we would prefer to see it as a single star and not part of a binary or any other multi-star system; a close binary or multi-star system would eventually disturb planet orbits and send them into the void. I think it will be very difficult to find a "privileged star" and even harder to find one with a "privileged planet" around it. Another recent article I found is slightly amusing but informative. It's about us living inside a massive star-forging "furnace". Here is the article: http://www.popsci.com/science/article/2011-01/fyi-what-does-space-smell Re Dr Bot: I just finished reading all the posts, and I'm wondering why Dr Bot didn't propose a revision to your experiment if he wasn't satisfied with it. Eugen
Eugen: Once the big 40 is on the list, one does not need to quit smoking to run risks on the old waistline. Just as with the glasses question. I suggest we should recall the older usage: what we call science was seen as natural philosophy, and natural history as reconstructed. Well-established findings were then seen as knowledge, "science" coming from scientia, the Latin word for knowledge. [The Greek equivalent is gnosis.] That is, we need to see that the issues of the epistemology of warranting knowledge claims are never far from the surface in science. And, when we set out to reconstruct the remote, unobserved past beyond generally accepted record, these issues are even more important. In particular, we are looking at inferences to the best current, empirically based, provisional explanation: empirically anchored abductive reasoning, as Peirce argued. And for that I argue that our methods -- the plural is deliberate -- should acknowledge the reality that causal patterns, processes and reliable signs point to chance, mechanical necessity and design as three distinct causal factors. GEM of TKI kairosfocus
Consider this variation on the zener diode challenge: assume that the information in a "life form string" is 1,000 bits and that the total number of viable combinations (species) is 1.7 million. How many trials would it take to construct one of the viable strings, assuming that you randomly selected each bit in an incremental manner and restarted the string at each dead end? Or, a similar experiment: discover how many bits a life form string would have to be limited to in order to have a 50/50 chance of constructing one of the viable strings, given the lifetime of the earth and the total computational resources available. JLS
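As a rough illustration of the arithmetic behind JLS's question above: the sketch below is only a back-of-envelope estimate. The 1,000-bit string length and 1.7 million viable targets are JLS's assumed figures, the trial budget is an invented placeholder, and the sketch treats each attempt as one blind draw of a whole string, which is simpler than the incremental "restart at each dead end" version.

```python
# A back-of-envelope sketch (not a model of any specific OOL scenario).
from math import log10, log2

bits = 1000          # assumed length of the "life form string"
viable = 1.7e6       # assumed number of viable strings (species)

# log10 of the per-trial hit probability and of the expected number of trials,
# kept in logarithms so nothing overflows.
log10_p = log10(viable) - bits * log10(2)
print(f"per-trial hit probability ~ 10^{log10_p:.0f}")       # about 10^-295
print(f"expected trials to first hit ~ 10^{-log10_p:.0f}")   # about 10^295

# Second question: the longest string for which a given trial budget still
# gives roughly 50/50 odds.  With N independent trials and per-trial
# probability p = viable / 2^n, P(at least one hit) ~ N*p for small p, so we
# need viable * N / 2^n >= 0.5, i.e. n <= log2(2 * viable * N).
trial_budget = 1e42  # purely illustrative stand-in for the available resources
n_max = log2(2 * viable * trial_budget)
print(f"string length for ~50/50 odds with that budget: about {n_max:.0f} bits")
```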
Kairos, I had to print post 91. Thanks for all the links. I definitely want to read, or rather print out, every link there. I'm a little overwhelmed with all the material. I think you are a scientist and a philosopher, by the way you write. I'm neither, so it will take me a week, maybe more, to absorb all that information. I read cosmology, quantum physics, astronomy and questions on the nature of reality, and at a very quick glance it seems there is some of each there, presented in a way I've never seen before. Dr Bot: Sorry, I didn't read much past 91. I noticed your remark on quitting smoking. Watch out, you may get chubby now, just like me. Eugen
I am a bit late to this thread, but if I might jump in and ask a question: I am interested in exploring the limits of chance processes to construct functional information. Current evolutionary theory assigns a relatively small role to information in the original DNA system and a rather larger role to subsequent chance construction. I am not sure why that is so, but I imagine it is sort of like the big bang: if things seem to be spreading apart, then if we reverse time they would necessarily come back together. I haven't seen it quantified, but it seems to me that the upper limit of development is set by the combination of irreducible complexity and the need for growing specificity as construction develops. A simple example: with any design task the first component selected has wide latitude, but with each subsequent selection the functionally effective choices narrow. It is the same narrowing you get as you construct a language string; each additional letter faces a compound probability limit forced by a requirement for meaningfulness (or functionality). My particular interest is in exploring the limits as we expand the available alphabet. For example, we could take the previous example of a language string but stipulate that our tool box started with a pixel. Clearly we could make every letter and many more, but could we construct functional information? What if words, sentences, and paragraphs were in our tool box? The ultimate question is: what information (alphabet) do we require in our tool box (original DNA) to go from primitive forms to the current state? Is there reason to believe that the original DNA had limited specific information? Thanks. JLS
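A toy illustration of the narrowing JLS describes, using a small made-up lexicon (the word list and the 26-letter alphabet are assumptions for illustration only): as strings get longer, the fraction of all possible strings that remain "functional" (i.e. are words) falls off sharply.

```python
# A toy illustration of compound-probability narrowing in a language string.
# The mini-lexicon below is invented purely for the example.
words = {"a", "an", "ant", "at", "cat", "can", "car", "care", "cart"}

def functional_fraction(n):
    """Fraction of all length-n lowercase strings that are words in the lexicon."""
    total = 26 ** n
    hits = sum(1 for w in words if len(w) == n)
    return hits / total

for n in range(1, 5):
    print(f"length {n}: functional fraction = {functional_fraction(n):.2e}")
```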
F/N: I am not at all claiming impossibility (and that is why Darwin's remarks in Origin Ch 6 as excerpted are deck-stacking), just that some things that on bare possibility of logic and physics can be so, face such a configuration space hurdle that they become empirically in praxis reliably unobservable. And, that is the foundation of the second law of thermodynamics, statistical form. [E.g. it is physically possible that all the O2 molecules in the room where you sit could spontaneously all go to one end, leaving you choking. But these states are so isolated in the space of configs, that we would on an infinite monkeys analysis, not expect to see this once in the lifespan of the observed universe.] PS: Acknowledged, and appreciated. kairosfocus
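For readers who want to see the scale of the oxygen example, here is a one-line version of the arithmetic; the 10^26 molecule count is my own rough assumption for a small room, not a figure from the comment.

```python
# Order-of-magnitude sketch of the "all the O2 at one end of the room" state.
from math import log10

o2_molecules = 1e26   # assumed order of magnitude for a small room

# If each molecule is independently equally likely to be in either half of the
# room, the chance that every one of them is in the same half at once is
# 2 * (1/2)^N; its log10 is astronomically negative.
log10_p = log10(2) - o2_molecules * log10(2)
print(f"log10(probability) ~ {log10_p:.2e}")   # roughly -3 x 10^25
```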
Dr Bot: Pardon, but in science, hypotheses, per the Galilean principle, are subject to the test of empirical observation. Moreover, in a world where we have laws like the second law of thermodynamics and repeated attempts to put up perpetual motion machines in violation of it, there is also an analysis of whether a proposal is plausible enough to be worth theoretical consideration in the absence of direct empirical support. (Cf Abel's discussion on plausibility. If you want to show us an informational and [self-]organisational free lunch, SHOW it. Just as, if you want to show us a work/energy free lunch, SHOW it. I can readily show that intelligences routinely produce FSCO/I, and I have repeatedly shown the analysis in brief as to why we should not reasonably expect the spontaneous origin of FSCO/I, absent strong direct observation that something is very wrong with the analysis on searching config spaces. Extraordinary claims require extraordinary [actually: ADEQUATE] evidence. In this case, observation.) GEM of TKI kairosfocus
Kindly, provide a single case where the progression from prebiotic conditions, to chemistry that produces replicating molecules and onwards to the sort of metabolising and von Neumann Replicators has been shown in the lab or observed on the ground. There simply are none. The above is wholly speculative, and is not science.
It is called an Hypothesis - They argue that it could be possible, and test their hypotheses experimentally. If it had already been demonstrated then there would be no need to use science to examine the question. You are claiming that it is impossible - but I find the evidence lacking for reasons already explained in great detail. I'm done here, but it has been stimulating. On a parting note before it slips my mind again - I was a little terse with you on an earlier thread (a few weeks ago I think) where I over-reacted to a comment of yours, but for which I ought to apologise - My excuse: I had just given up smoking! DrBot
F/N: if you want to specifically look at the Orgel-Shapiro exchange, cf e.g. here. (Please bear in mind that every comment I submit at UD links to this discussion, through my handle.) kairosfocus
Dr Bot, I see your workload calls, and indeed, I am taking a sneak-out on energy and development issues; I understand. However, a few responsive remarks are in order:

1: the problem is that these hypotheses don’t argue that fully formed replicators came together from a soup of random atoms, interacting via random laws, they argue, in various forms, that replicators formed through complex chemical processes – through the operation of many complex rules (only one of which is chaos).

Kindly, provide a single case where the progression from prebiotic conditions, to chemistry that produces replicating molecules and onwards to the sort of metabolising and von Neumann Replicators has been shown in the lab or observed on the ground. There simply are none. The above is wholly speculative, and is not science. Indeed, had you specifically addressed the points raised here, I would have taken this one more seriously. But this is simple speculation without serious warrant. Observe, for one instance, how Orgel and Shapiro mutually self-destruct on metabolism-first and genes-first theories.

Worse, the basic point is that this is all a red herring leading away to a strawman. The test I put forth above is on the question of whether we have good evidence that forces of chance and necessity, under highly favourable circumstances [relatively simple conditions and rapid generation of cases], can credibly give rise to functionally specific, complex and coded information, the class of information in DNA. As you have acknowledged, there is no good reason to see this as empirically possible. To now appeal to speculative forces of organisation to write the DNA code's language, to create the algorithms and machines for implementing such, and to write the substance of the codes, is piling speculation on speculation in the teeth of an obvious alternative. Designers are the ONLY routinely observed and analytically warranted source of FSCO/I. In fact, the reason why the observation of such complex FSCO/I in the living cell is not taken as morally certain evidence of design is that there is an imposition of Lewontinian a priori evolutionary materialism that assumes it is impossible for a designer to have been there. So, since chance can with logical possibility do just about anything [as in, it is logically possible to see the 2nd law of thermodynamics violated routinely], that is what "must" have given rise to the highly fortunate configurations that led to life, with a bit of help from laws of chemistry. I add: the relevant thermodynamics case was long since addressed here, and onward since 1984 in TMLO chs 7, 8 and 9.

2: Examples of these rules – the precursors of life – are the creation of order and complexity (not functionality in the first instance) like the examples I have given (crystals, layering and others like molecular chains) In other words they argue that first life is contingent on complex, rule following chemistry, not just randomness put through a filter.

This is a disappointing conflation of order and organisation, in the teeth of much opportunity to get this right, and an insistence on already corrected, irrelevant examples. Let us cite Orgel, in 1973, again:
In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added. Crystals, of course, would by extension include snow crystals, and order enfolds cases such as vortexes, up to and including hurricanes etc. Cf. here.]
As Thaxton et al summarise from Yockey and Wicken, in TMLO, Ch 8, in 1984:
Yockey7 and Wickens5 develop the same distinction, that "order" is a statistical concept referring to regularity such as could might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, "organization" refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wicken note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future.
3: Because your experiment – your proposed evidence – does not include these contingencies – it cannot produce order or complexity for the same reasons it cannot produce FCSI – it does not address any of the OOL hypotheses and so it does not refute them. Again, this is a case of setting the exercise in a different context from what it was intended to show and succeeds in showing. By moving goalposts, however, success can be made out to be failure. The key issue, ever since MF in 5 above and my response in 7, was the claim that he design theory is not empirically falsifiable. To that I replied that if ever we can show that with reasonable evidence directed forces of chance and/or mechanical necessity can spontaneously give rise to functionally specific, complex information, then the ID design inference would at once fall to the ground. So, I proposed a direct test on a million PCs with zener noise ckts, driving pseudorandom binary sequence circuits to flatten out the randomness. If in any reasonable time 1,000 bits (143 7-bit ASCII characters) could be generated spelling out a coherent message, that would be a direct empirical falsification of design theory. The infinite monkeys tests show that it is possible to find such sequences in a config space of about 10^50, but we are dealing with one of order 10^300, which is analytically (on pretty much the same grounds as the statistical form of thermodynamics is grounded) beyond the credible search resources of the observed cosmos. In several remarks above, you acknowledge that this test is passed, i.e the ID inference to design on FSCI is testable and on this class of tests so far stands supported by the evidence. That is all that is needed to dispose of the common rhetorical gambit that the design inference is empirically unfalsifiable. You then injected ideas about stratigraphic layers and the like hoping that forces of chance plus necessity would act together to give rise to order and complexity. They do, but they do not in our observation give rise to functionally specific complex organisation and associated information beyond 1,000 bits. That is, the only empirically known force capable of producing COUPLED functional specificity AND complexity is choice configuration, or design. In the case of the proposed origin of life by speculative scenarios, they lack empirical warrant and they run afoul of the known chemistry that has been investigated. Dawkins' speculative replicator is just that, speculative. Metab first and genes first theories shred each other, as the recent exchange between Orgel and Shapiro showed. Neither is credible. In addition, the FSCI in DNA for life as we OBSERVE it is not going to be explained on mechanical necessity [as that gives rise to low contingency, where information storage requires HIGH contingency, as the sequences of letters in messages in this thread show]. Nor, on free, stochastic contingency, or chance, as that will be dominated by the sea of non-functional configs, and the resources of the observed cosmos are not enough to move the scope of search beyond a practical zero. Nor will the low contingency tendency of mechanical necessity now supply the ability to configure towards functional configs. The order of atoms in a crystal has inherently low information, and a random cluster of crystals is chaos not information based function. a clay layer in a pond does not provide a template that creates DNA and the machines to make its function work. 
Nor is there any evidence of a chain from simple molecules to the organised structures of the living cell. Speculation is not evidence. The logical explanation is the obvious one: the only empirically and analytically well warranted causal factor that routinely produces FSCO/I is design. Intelligence guides contingency to fulfill purpose, and achieve function. Posts in this thread are a good example in point. So are algorithms, data structures and coded programs that work. GEM of TKI kairosfocus
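To make the "infinite monkeys" comparison in the exchange above concrete, here is a tiny-scale probe one can run. The target phrase, alphabet and trial count are arbitrary choices for illustration only; the point is simply that matched lengths grow very slowly with the number of trials, which is the intuition behind the 1,000-bit (roughly 143-character) threshold discussed in the thread.

```python
# A tiny "infinite monkeys" probe (illustrative parameters only).
import random, string

target = "methinks it is like a weasel"      # arbitrary target phrase
alphabet = string.ascii_lowercase + " "      # 27-symbol alphabet

best = 0
trials = 200_000
for _ in range(trials):
    s = "".join(random.choice(alphabet) for _ in range(len(target)))
    # length of the matching prefix between this random draw and the target
    match = 0
    while match < len(target) and s[match] == target[match]:
        match += 1
    best = max(best, match)

# With 27 symbols, a k-character prefix match has probability (1/27)^k per
# trial, so even 200,000 trials rarely get past 3 or 4 characters.
print(f"longest matched prefix after {trials:,} trials: {best}")
```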
KF, forgive me if I don't participate in this stimulating discussion for a while - I've got to be disciplined and avoid it so I can catch up with my overwhelming workload! I will leave with a few (not so short) summary comments: Firstly, you still haven't convinced me of the efficacy of your experiment. Eugen sums it up nicely:
Algorithm is a set of rules. These rules are more basic and are defined by logic. Is this what DrBot is looking for? Not pure randomness but some basic rules to guide randomness. Elementary particles of matter have basic properties like spin, charge, mass etc. which define them and set basic rules of their behavior.
Your argument is one attempting, gallantly, to refute OOL hypotheses that argue life emerges through the functioning of the laws of nature (call it chance + necessity). The problem is that these hypotheses don't argue that fully formed replicators came together from a soup of random atoms, interacting via random laws; they argue, in various forms, that replicators formed through complex chemical processes - through the operation of many complex rules (only one of which is chaos). Examples of these rules - the precursors of life - are the creation of order and complexity (not functionality in the first instance), like the examples I have given (crystals, layering and others like molecular chains). In other words, they argue that first life is contingent on complex, rule-following chemistry, not just randomness put through a filter. Because your experiment - your proposed evidence - does not include these contingencies - it cannot produce order or complexity for the same reasons it cannot produce FSCI - it does not address any of the OOL hypotheses and so it does not refute them. In other words, Darwinists, OOL researchers, would agree with you and me that your system will not produce FSCI in any reasonable time, but they would also argue, rightly in my opinion, that it excludes all of the natural mechanisms (the rules of chemistry) that they believe are required for life to emerge, and as such makes no argument against their claims - the experiment is, if you'll forgive the phrase, attacking a strawman. My critique is sincerely intended to help you strengthen your argument by directing it towards the actual claims being made by Darwinists. I pray that makes my position clear. Now before I depart, one other comment:
Actually, until one can show and account for the origin of a von Neumann type self replicating facility in a metabolising entity on chance plus necessity in a warm little pond or the like, one cannot properly resort to the claimed power of lucky noise rewriting the DNA software in a reproducing organism.
This is not strictly correct, is it? I hope you would not dispute the fact that life exists, that we can observe living things reproducing with variety, study DNA and cellular dynamics, etc. That life exists is not in dispute; we see it all around us and we can study it in detail, and this will allow us to determine, eventually, whether the claim that mutations can lead to diversity, new traits and ultimately to new species is true or not. These will be empirical facts, observed and tested; they will not change if we discover that life was designed, or that life came about by happenstance. (Note: I am not saying these claims are true, merely that they can be tested.) So KF, when you say "one cannot properly resort to the claimed power of lucky noise rewriting the DNA software in a reproducing organism" because we do not know what caused first life, you are incorrect - we can test the claims of the power of mutation + inheritance + selection + whatever other mechanisms might be at play, and the resulting observed facts about biology today are not directly dependent on knowing the origins of life. Look at it this way - we can determine, by studying and testing, how an internal combustion engine works if we have one in front of us. Its coming about by chance or by design will not alter the way it works - it just means we understand its origins.

A few footnotes: Your last post about probabilities, search and allowable time is slightly incorrect. Your description of the time it would take to search through all possible 1 kbit configs is couched in terms of an exhaustive or iterative search, not a random one. For example, a random search has a probability of searching the same location many times because it does not exclude past searched items from future searches (this actually makes the idea of randomness as one root of OOL worse, because the time required gets larger). On the other hand (and as a technical note), if we assume an iterative search then the claim that the universe is too short-lived to find functional configs only stands up if all functional configs are to be found at the end of the search. To understand the probabilities better we really need to know where any islands of functionality lie in the search space (because some could be in the first 10^50 iterative searches), how many islands there are and how broad an area they span. I'm afraid that, unless you know something the rest of us don't, we don't actually know what the simplest chemical replicator looks like, how much variety it would tolerate, or how many other types of replicator, of a similar or rising complexity, are possible. In other words, and put in terms of your examples: how many configurations of the 1 kbit string are functional, and what is the distribution? Although the shores-of-islands-of-function argument is a solid one, without a map of the ocean we can't be certain of the probabilities - and this is where OOL research may actually deliver some valuable results from an ID perspective - they are mapping that ocean, they just might not like what they find! DrBot
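DrBot's point about a blind random search revisiting locations can be illustrated with a toy simulation; the space size and target count below are made-up values, and the comparison is against a search that never repeats a location.

```python
# Toy comparison: random search with replacement vs a non-repeating search.
# Space size and target count are invented for illustration.
import random

space_size = 10_000
targets = set(random.sample(range(space_size), 10))   # 10 assumed "islands"

def search_with_replacement():
    """Blind random draws; past locations may be revisited."""
    draws = 0
    while True:
        draws += 1
        if random.randrange(space_size) in targets:
            return draws

def search_without_replacement():
    """Visit the space in a random order, never repeating a location."""
    for draws, x in enumerate(random.sample(range(space_size), space_size), start=1):
        if x in targets:
            return draws

trials = 2_000
avg_with = sum(search_with_replacement() for _ in range(trials)) / trials
avg_without = sum(search_without_replacement() for _ in range(trials)) / trials
print(f"average draws, with replacement:    {avg_with:.0f}")    # roughly 1000
print(f"average draws, without replacement: {avg_without:.0f}") # roughly 909
```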
I am fairly new to the debate, but I have formed a general overview of the question. Here's how I see it in thumbnail sketch. I know this isn't the normal formulation of the arguments, but it appears to sharpen the issues (at least in my mind). Your comments are appreciated. Thanks.

Theory of evolution:
1. Simple DNA system emerges (self-replicating life forms).
2. Life forms are shaped to their environment by inherited traits and natural selection (survival of the fittest).
3. Occasional series of chance events construct novel features which accumulate and transform simple DNA systems into information-rich DNA systems.
4. MAN is created.

Theory of ID:
1. Complex DNA system emerges (self-replicating life forms).
2. Life forms are shaped to their environment by inherited traits and natural selection (survival of the fittest).
3. Some limited information is added by chance events.
4. MAN emerges.

Did life start with a bucket full of information and then add a thimbleful through natural selection and chance events? Did life start with a thimble of information and fill a bucket through a series of incremental advances? Was MAN created, or did MAN emerge? JLS
Eugen: Now that the Darwin Day service drop-out is over . . . The laws of physics, whether of mechanical necessity or of random processes [and the latter require a set-up condition to have a specific distribution, Maxwell-Boltzmann (cf here how that is set up qualitatively and its implications), Gaussian, Poisson, Weibull, etc], do act like a programme of constraints that given an initial condition, set up the cosmos as a stage, and put players and dynamical processes playing out on it. Indeed, it turns out that the processes, constraining rules, parameters, brute givens etc are on many, many dimensions, ever so finely balanced or tuned in ways that facilitate the existence of C-chemistry, cell-based, complex life, through many, many integrated networks of dynamical interactions flowing from a properly set-up big bang singularity. This starts with the nuclear properties that have to be just so to get a cosmos in which per nuclear reaction paths in stars, the four most dominant atoms are H, He, C and O. The properties of these atoms are very, very special indeed, and facilitate ever so many processes crucial to life. E.g. C is the connector block atom, with H its most common companion. H and O go together to make H2O, which is the crucial and highly facilitating medium for cell based life on a terrestrial planet like ours. (And BTW, notice, I am emphasising stuff that is based on what we do know, with abundant empirical underpinnings [cf the full discussion in the linked page and watch the onward vids] about the key underpinnings for life such as we enjoy, and about fusion processes, stars and relativistic cosmology.) So powerful is just this initial cluster of fine-tuned parameters, that they are the basis for Sir Fred Hoyle's famous statement:
From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12 C to the 7.12 MeV level in 16 O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has "monkeyed" with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.Cited, Bradley, in "Is There Scientific Evidence for the Existence of God? How the Recent Discoveries Support a Designed Universe". Emphasis added.] . . . . I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [["The Universe: Past and Present Reflections." Engineering and Science, November, 1981. pp. 8–12]
Remember, this is a life-long agnostic/atheist speaking. And, a holder of a Nobel-equivalent prize for astronomy. (An astrophysicist of the first rank for decades, and a personal scientific hero.) Just on that alone, we would have good reason, even through a multiverse model, for inferring to a designed cosmos, with high confidence on inference to best explanation. So, an extra-cosmic, extremely knowledgeable and powerful designer is a credible, scientifically warranted view. Jooking in a bit of phil, the radical contingency of our cosmos strongly points to a necessary being as its causal root, a being that is sufficient in itself, i.e. has no causal dependence on external necessary factors. An eternal and indestructible, causally efficacious being capable of building a cosmos like we observe. With that in the background, and the evident going out of the way to set a stage for life, I do not strictly need any further evidence of design in our world. I could in principle, happily be a believer in a cosmos that has the additional property that through as yet undiscovered laws, it is programmed to set up terrestrial planets like ours [another set of fine tuning parameters!] and the chemistry of warm little electrified ponds, or undersea vents, or comets etc, is such that the components of life come to be, then they self assemble and evolve till we are here. Ultimate front loading in a designed cosmos. A cosmos that on the cosmological evidence I can comfortably conclude was designed by a necessary, intelligent and powerful being. But, once we turn to life on our planet, cell based life, lo and behold, we find ever so much FSCO/I, again pointing to direct design as its most credible explanation. That holds for its origin, and the advocates of spontaneous abiogenesis on laws of chance and necessity have never been able to convince me that that amount of functionally specific organisation, on the gamut of our observable cosmos, could credibly happen once. On thermodynamics and linked origin of information grounds backed up by direct induction from the source of FSCO/I that we routinely observe. Then, when I am invited to accept that huge increments in FSCO/I spontaneously arose and led to dozens of novel body plans, starting with the Cambrian life revolution [cf here], my informed incredulity meter pegs, so hard the needle gets bent. Life starts out with 100 - 1,000+ kbits worth of DNA, and the panoply of machinery that makes it work. It is credible that novel body plans need 10's - 100's+ of millions of bits of additional DNA information, and the isolation of the relevant islands of function is being documented by the way we find say huge chunks of essentially human genome in the kangaroo, a marsupial; a totally different order of mammal. Then, I see the sort of deck-stacking arguments that Darwin put on the table in Origin, Ch 6, in addressing FSCO and related irreducible complexity as strong indicators of design (a la Paley and teh watch, including the self-replicating watch of Ch 2 that somehow gets left off when Paley is routinely trashed):
IF it could be demonstrated [that's in context a demand for absolute proof, which is not in the gift of science] that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case. No doubt many organs exist of which we do not know the transitional grades, more especially if we look to much-isolated species, round which, according to the theory, there has been much extinction. Or again, if we take an organ common to all the members of a class, for in this latter case the organ must have been originally formed at a remote period, since which all the many members of the class have been developed; and in order to discover the early transitional grades through which the organ has passed, we should have to look to very ancient ancestral forms, long since become extinct. 1 We should be extremely cautious in concluding that an organ could not have been formed by transitional gradations of some kind . . .
Sorry, great bearded one, that is the injection of logical possibility, in a context where the proper standard is empirically anchored reasonable likelihood. As in, if you cannot show us an observed case of FSCO/I by chance plus necessity, then on inference to best explanation, the best explanation for FSCO/I remains what it was in Paley's day, and in Plato's day for that matter. Design. Which we routinely observe creating such FSCO/I and can back up with the infinite monkeys type analysis that grounds statistical thermodynamics. So, we have an inference to best, empirically anchored explanation, design, going up against deck-stacking question-begging a materialist priorism. If you doubt me on this last, cf Lewontin's infamous 1997 admission:
To Sagan, as to all but a few other scientists, it is self-evident that the practices of science provide the surest method of putting us in contact with physical reality, and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test . . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [[From: “Billions and Billions of Demons,” NYRB, January 9, 1997. ]
ID thinker Philip Johnson's rebuke to that is apt:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
GEM of TKI PS: Dr Bot, please note that I am not making offers of "proof" here (or further above in the thread) but am pointing to inference to best current empirically anchored explanation. And, that his is what is the point in my zener diode triggered random number generator. For, the generation of a coherently functional text in a random process [much faster than a biological one would be and entirely comparable to the claimed powers of Ev, Avida etc to illustrate how evolution works, except that I am saying, first you have to get to the shores of an island of function before you can start hill climbing to peaks of function] is an empirical test of the claim that within reasonable likelihood, FSCI can be originated by chance plus mechanical necessity. [If you cannot get to first base by hitting the ball and running, it is pointless discussing rounding the bases and heading for home plate. Just so, without an empirically anchored case that functionally specific, complex organisation and associated information relevant to origin of life and of novel body plans is reasonably likely on the gamut of our observed cosmos, I am entirely within my scientific epistemic rights to infer on best explanation to the known, empirically warranted source of such FSCO/I: design. And, to use the test of setting up the task of generating a 1-k bit text from the published corpus of free electronic books by random walk processes, as a reasonable test of the capacity of chance and mechanical necessity to get us to FSCO/I. The evidence is, that such processes can get us to islands of fucntion in config spaces of order 10^50 cells, but for the 1,000 bit threshold, we are looking at 10^301 cells, and for observed simplest life, we are looking at far larger yet spaces, 100 - 1,000 k bits. 2^100,000 = 9.99 *10^30,102. Origin of embryologically and environmentally feasible novel body plans goes well beyond that again, looking for islands of function in config spaces of order 10's to 100's of millions of bits. A space of 10 mn bits has 9.05*10^3,010,299 cells. The ~ 10^80 atoms of our whole observed cosmos, changing state 10^20 mn times faster than strong nuclear force interactions, for the thermodynamically credible lifespan of our cosmos (50 mn times the time said to have elapsed since the singularity), would go through about 10^150 states, a practical zero relative to such spaces.] kairosfocus
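For readers who want to check the orders of magnitude quoted in the comment above without overflowing anything, here is a small logarithm-based sketch. The decomposition in the last line (10^80 atoms, 10^45 state changes per second, 10^25 seconds) is one commonly cited set of round numbers used here only as an assumption, not a figure taken from the comment itself.

```python
# Order-of-magnitude checks using logarithms (nothing here overflows).
from math import log10

def pow2_as_pow10(bits):
    """Return (mantissa, exponent) with 2**bits ~ mantissa * 10**exponent."""
    e = bits * log10(2)
    exp = int(e)
    return 10 ** (e - exp), exp

for bits in (1_000, 100_000, 10_000_000):
    m, e = pow2_as_pow10(bits)
    print(f"2^{bits:>10,} ~ {m:.2f} x 10^{e:,}")

# One assumed decomposition of the ~10^150 cosmic state bound:
# ~10^80 atoms x ~10^45 state changes per second x ~10^25 seconds.
print("assumed cosmic state bound ~ 10^", 80 + 45 + 25)
```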
F/N: I believe this was accidentally cross-threaded by Frost 122585, and seems to best fit here, so pardon my doing that: ______________________ >> The objection that Mark is raising here is actually one for Theology. The central point of the objection is that ID is not scientific because it does not include within it a testable feature of the designer itself. There are several problems here. Firstly, the question of who the designer is- what its nature and purpose and limits of its prowess are etc – is not the focus or purpose of the theory of ID at all. ID is concerned with whether something IS or IS NOT designed. The definition of ID is “the theory that certain patterns in nature are best explained as the product of intelligence.” Period. So the objection about intention of a designer is arbitrary to begin with- however it does imply a real theological inquiry. Secondly, it is a very interesting admission when Mark says the purposes or intentions of a designer ARE testable- because to Mark one can judge the legitimacy of the nature of the designer by the effects that are attributed to it. However, if this is true then it is also true that you can judge the cause of the designed object in question by it’s nature as well -for if you cannot even discern if something is or is not designed to begin with, then you certainly cannot discern if something is designed for the presupposed purpose it is thought to be designed for. Next, the reason why this is a theological inquiry is that the objection concerning intention of the designer is really about the nature of the designer itself. The problem here just like with the question of evil in theology, and many thing in life, is that looks can be deceiving. For many Christians the fall of man is the reason why things that were originally meant to be good are now left flawed. But for many other more skeptical theists – or those who are not biblical literalists they see flaws in designs in nature as being there for a purpose- a challenge or a spiritual purpose- for man kind who needs to over come them in order to become spiritually stronger, enlightened, complete etc. Or in a nut shell as Kariosfocus likes to put it “for the purpose of building souls”- Beautifully put. So the issues concerning the intentions of the designer are actually not easily testable because the purpose/s of the designer can be equally and likely more complex than the purposes behind the most complex paintings, sculptures, works of literature and other forms of art. Only the artist themselves truly knows the intention behind the design of their work- yet many things can still be inferred and understood- just not all things. So I think Mark’s admission that the Designer’s motives can be testable is little more than a bait and switch tactic- one that seeks to put the designer into a small box made of straw that can perhaps be more easily knocked over. I am not saying this was Mark’s intention but it has long been one of the debating tactics of many before. >> ________________________ kairosfocus
Hi Kairos. Again, some quick ideas before pandemonium starts (i.e. the kids come home). An algorithm is a set of rules. These rules are more basic and are defined by logic. Is this what DrBot is looking for? Not pure randomness, but some basic rules to guide randomness. Elementary particles of matter have basic properties like spin, charge, mass, etc., which define them and set basic rules for their behavior. Basic rules that guide the behavior of particles can assemble nice repetitive patterns like crystals and snowflakes. Some other basic rules of nature, like gravity, can create repetitive patterns of matter, like sediment layers. Eugen
Eugen Your simple algor would do a random walk from initial location in Hamming distance space. Whether you do it by doing an add-to [or more likely an X-OR], or if you simply do a shift left game, dumping out the old bits and putting in the new ones in succession for a 1,000-bit shift register. To do our snowflake, we could do a digital cam game, where we program arc around a circle [or, to get real literal, a hexagon] and then do a random length variation in radius, which we can then do a real or virtual follower on. (Hardware and software are functionally equivalent.) The digital snowflake cam would be a combination of the programmed necessity and the chance variation. For that matter if you wanted to do a digital sim on layering, you can set up a layering sequence then do random variations on it. Or if you wanted to do a text string, you can program a set sequence of letters then do a random variation on them by adding to or subtracting from the ascii codes. (For onlookers: Yes, you can add and subtract ascii symbols, which would change the symbols.) One could even do the test of starting with a known 1,000 letter functional string then see if we can move to a different string while preserving function all the way. Works easily with three letter strings: rat --> cat --> sat --> sit --> pit --> pet --> pea --> tea --> ten --> men --> man. But once strings get significantly longer, there will be trouble. The hamming distance between functional 143 [isn't that the tweet length?] letter strings is a lot harder to bridge. (Message, micro evo is far more feasible than macro.) In neither case would you change the basic point that FSCI is not reachable by chance plus necessity. In short the basic test remains valid. GEM of TKI kairosfocus
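The word ladder in the comment above can be checked mechanically; the sketch below only verifies that each step is a single-letter change (Hamming distance 1), using exactly the ladder kairosfocus gives.

```python
# Verify that each step in the rat -> ... -> man ladder changes exactly one letter.
def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

ladder = ["rat", "cat", "sat", "sit", "pit", "pet",
          "pea", "tea", "ten", "men", "man"]

for prev, curr in zip(ladder, ladder[1:]):
    assert hamming(prev, curr) == 1, (prev, curr)

print("every step is a single-letter (Hamming distance 1) change")
```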
Maybe not a good idea, but just thinking out loud here. We can provide more regular output in the experiment to simulate polarity or affinity for characters. It could quickly get complicated, so we have to stick to the simplest algorithm possible. All that's needed are a few lines of code that simply look at the random generator output and add the next character of the same kind. It would act like a simple "attractor". If we are dealing with ones and zeroes, then if the random generator output is zero we add another zero to the string; if it is 1, we add a 1 to the string. Oops, looks like we are just doubling the randomness. If we tried to add more regularity in another dimension to create a "snowflake", the algorithm would quickly get too complicated. Would this make Dr Bot happier? Eugen
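Here is one minimal reading of Eugen's "attractor" rule in code; the bit-pairing interpretation is an assumption drawn from his description, and the parameters are arbitrary. Each random bit is simply echoed once, which adds regularity but no new contingency.

```python
# A minimal sketch of the "attractor" rule as described above: after each
# random bit, append another bit of the same kind (i.e. each bit is echoed).
import random

def attractor_string(n_random_bits):
    out = []
    for _ in range(n_random_bits):
        bit = random.choice("01")   # output of the random generator
        out.append(bit + bit)       # the "attractor": repeat the same symbol
    return "".join(out)

# Example: 16 random bits become a 32-character string of identical pairs,
# e.g. '0011110000110011...' - more regular, but no more contingent.
print(attractor_string(16))
```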
markf you state; 'That random variation that we observe between parents and offspring plus natural selection was responsible for the vast majority of diversity of life we see today.' and yet we find,,, “Whatever we may try to do within a given species, we soon reach limits which we cannot break through. A wall exists on every side of each species. That wall is the DNA coding, which permits wide variety within it (within the gene pool, or the genotype of a species)-but no exit through that wall. Darwin's gradualism is bounded by internal constraints, beyond which selection is useless." R. Milner, Encyclopedia of Evolution (1990) This following study is very interesting for the researcher surveyed 130 DNA-based evolutionary trees to see if the results matched what 'natural selection' predicted for speciation and found: Accidental origins: Where species come from - March 2010 Excerpt: If speciation results from natural selection via many small changes, you would expect the branch lengths to fit a bell-shaped curve.,,, Instead, Pagel's team found that in 78 per cent of the trees, the best fit for the branch length distribution was another familiar curve, known as the exponential distribution. Like the bell curve, the exponential has a straightforward explanation - but it is a disquieting one for evolutionary biologists. The exponential is the pattern you get when you are waiting for some single, infrequent event to happen.,,,To Pagel, the implications for speciation are clear: "It isn't the accumulation of events that causes a speciation, it's single, rare events falling out of the sky, so to speak." http://www.newscientist.com/article/mg20527511.400-accidental-origins-where-species-come-from.html?page=2 EXPELLED - Natural Selection And Genetic Mutations - video http://www.metacafe.com/watch/4036840 "...but Natural Selection reduces genetic information and we know this from all the Genetic Population studies that we have..." Maciej Marian Giertych - Population Geneticist - member of the European Parliament - EXPELLED ,,, so markf perhaps you would like to be the first neo-Darwinist to actually provide the empirical evidence to back up your grandiose claims???!!!! bornagain77
#63 Dala You cut out a rather important element of what I wrote: That random variation that we observe between parents and offspring plus natural selection was responsible for the vast majority of diversity of life we see today. You are right that if the theory was simply - completely random variation led to current diversity - then it would be unfalsifiable. It would allow for a buffalo to spring from a banana (to be totally absurd). That is just as magical (and meaningless) as a designer of unspecified powers and motives. However, that is not what Darwin or modern evolutionary theory propose. They are talking about the kind of reproduction and variation of the kind we see about us. And this does require enough time and particulate inheritance to succeed. I am glad we agree that the theory that RM+NS accounts for all diversity is false. markf
MathGrrl, look at it this way. The information in even the simplest of life is far, far more sophisticated than anything our best teams of software engineers have ever devised,, Three Subsets of Sequence Complexity and Their Relevance to Biopolymeric Information – David L. Abel and Jack T. Trevors – Theoretical Biology & Medical Modelling, Vol. 2, 11 August 2005, page 8 “No man-made program comes close to the technical brilliance of even Mycoplasmal genetic algorithms. Mycoplasmas are the simplest known organism with the smallest known genome, to date. How was its genome and other living organisms’ genomes programmed?” http://www.biomedcentral.com/content/pdf/1742-4682-2-29.pdf Simplest Microbes More Complex than Thought - Dec. 2009 Excerpt: PhysOrg reported that a species of Mycoplasma,, “The bacteria appeared to be assembled in a far more complex way than had been thought.” Many molecules were found to have multiple functions: for instance, some enzymes could catalyze unrelated reactions, and some proteins were involved in multiple protein complexes." http://www.creationsafaris.com/crev200912.htm#20091229a First-Ever Blueprint of 'Minimal Cell' Is More Complex Than Expected - Nov. 2009 Excerpt: A network of research groups,, approached the bacterium at three different levels. One team of scientists described M. pneumoniae's transcriptome, identifying all the RNA molecules, or transcripts, produced from its DNA, under various environmental conditions. Another defined all the metabolic reactions that occurred in it, collectively known as its metabolome, under the same conditions. A third team identified every multi-protein complex the bacterium produced, thus characterising its proteome organisation. "At all three levels, we found M. pneumoniae was more complex than we expected," http://www.sciencedaily.com/releases/2009/11/091126173027.htm Human DNA is like a computer program but far, far more advanced than any software we've ever created. Bill Gates, The Road Ahead, 1996, p. 188 3-D Structure Of Human Genome: Fractal Globule Architecture Packs Two Meters Of DNA Into Each Cell - Oct. 2009 Excerpt: the information density in the nucleus is trillions of times higher than on a computer chip -- while avoiding the knots and tangles that might interfere with the cell's ability to read its own genome. Moreover, the DNA can easily unfold and refold during gene activation, gene repression, and cell replication. http://www.sciencedaily.com/releases/2009/10/091008142957.htm ,, and yet MathGrrl, despite having information packed to the gills in the simplest of life,,, ‘The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.” Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894 ,,,There is not even one example of the evolution of even a 'simple' biological machine of system,,, ,,,we must concede that there are presently no detailed Darwinian accounts of the evolution of any biochemical or cellular system, only a variety of wishful speculations.’ Franklin M. Harold,* 2001. The way of the cell: molecules, organisms and the order of life, Oxford University Press, New York, p. 205. *Professor Emeritus of Biochemistry, Colorado State University, USA ,,,despite this MathGrrl, no one has ever violated Genetic Entropy''' The Law of Physicodynamic Insufficiency - Dr David L. 
Abel - November 2010 Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.” http://www.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html The GS (genetic selection) Principle – David L. Abel – 2009 Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.” http://www.scitopics.com/The_GS_Principle_The_Genetic_Selection_Principle.html http://www.us.net/life/index.htm Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video http://www.metacafe.com/watch/3995248 Evolution Vs Genetic Entropy - Andy McIntosh - video http://www.metacafe.com/watch/4028086 ,,,,it boils down to this,,, Prescriptive Information (PI) - Abel - 2009 Excerpt: Formal choices of mind can be recorded into physicality through the purposeful selection of unique physical objects called “tokens.” A different formal meaning and function is arbitrarily assigned to each token. Formal rules, not laws, govern the combinations and collective meaning of multiple tokens in a Material Symbol system (MSS) (Rocha, 1997 6069). The recordation of successive purposeful choices into a MSS allows formal PI (Prescriptive Information) to be instantiated into a physical matrix. http://www.scitopics.com/Prescriptive_Information_PI.html In summary: Material processes cannot create codes or axioms since codes and axioms are ‘transcendent rules’ which have no basis in a material substrate, but instead axioms and codes must always arise from a transcendent mind and are always imposed onto a material substrate: i.e. 
no known material process can form a transcendent symbolic representation of itself only a mind with ‘intent’ can order a material substrate to be ordered as such. etc.. etc.. etc.. bornagain77
MG: Ev is an intelligently designed algorithm that uses active information input by the designers to produce the appearance of evolutionary progress by chance plus necessity. The real issue posed by the FSCI challenge -- and BTW, does Ev progress by 1000+ bit increments of novel info by chance? -- is not the hill-climbing within an island of function, but getting to the shores of such an island in a sea of non-function, when your leaky raft drifting by chance has not got enough resources to last long enough to credibly drift to an island. GEM of TKI kairosfocus
MathGrrl states: 'Evolutionary algorithms such as ev and Tierra (neither of which is explicitly "goal directed") do reflect those mechanisms, in simplified form, so it is not surprising that they work.'

But alas MathGrrl, THEY ARE ALL goal directed;

Signature In The Cell - Review Excerpt: There is absolutely nothing surprising about the results of these (evolutionary) algorithms. The computer is programmed from the outset to converge on the solution. The programmer designed it to do that. What would be surprising is if the program didn't converge on the solution. That would reflect badly on the skill of the programmer. Everything interesting in the output of the program came as a result of the programmer's skill: the information input. There are no mysterious outputs. Software Engineer - quoted to Stephen Meyer http://www.scribd.com/full/29346507?access_key=key-1ysrgwzxhb18zn6dtju0

The Capabilities of Chaos and Complexity - David L. Abel Excerpt: "To stem the growing swell of Intelligent Design intrusions, it is imperative that we provide stand-alone natural process evidence of non trivial self-organization at the edge of chaos. We must demonstrate on sound scientific grounds the formal capabilities of naturally-occurring physicodynamic complexity. Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products. The latter intrusions into natural process clearly violate sound evolution theory." http://www.mdpi.com/1422-0067/10/1/247/pdf

LIFE'S CONSERVATION LAW - William Dembski - Robert Marks - Pg. 13 Excerpt: Simulations such as Dawkins's WEASEL, Adami's AVIDA, Ray's Tierra, and Schneider's ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them.,,, Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case. http://evoinfo.org/publications/lifes-conservation-law/

Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism - Dembski - Marks - Dec. 2009 Excerpt: The effectiveness of a given algorithm can be measured by the active information introduced to the search. We illustrate this by identifying sources of active information in Avida, a software program designed to search for logic functions using nand gates. Avida uses stair step active information by rewarding logic functions using a smaller number of nands to construct functions requiring more. Removing stair steps deteriorates Avida's performance while removing deleterious instructions improves it. http://evoinfo.org/publications/evolutionary-synthesis-of-nand-logic-avida/

And MathGrrl, the simulations CLEARLY DO NOT reflect reality:

Arriving At Intelligence Through The Corridors Of Reason (Part II) - April 2010 Excerpt: Summarizing the status quo, Johnson notes for example how AVIDA uses "an unrealistically small genome, an unrealistically high mutation rate, unrealistic protection of replication instructions, unrealistic energy rewards and no capability for graceful function degradation. It allows for arbitrary experimenter-specified selective advantages". Not faring any better, the ME THINKS IT IS LIKE A WEASEL algorithm is programmed to direct a sequence of letters towards a pre-specified target.
https://uncommondescent.com/intelligent-design/arriving-at-intelligence-through-the-corridors-of-reason-part-ii/ The Problem of Information for the Theory of Evolution – debunking Schneider's ev computer simulation Excerpt: In several papers genetic binding sites were analyzed using a Shannon information theory approach. It was recently claimed that these regulatory sequences could increase information content through evolutionary processes starting from a random DNA sequence, for which a computer simulation was offered as evidence. However, incorporating neglected cellular realities and using biologically realistic parameter values invalidate this claim. The net effect over time of random mutations spread throughout genomes is an increase in randomness per gene and decreased functional optimality. http://www.trueorigin.org/schneider.asp Constraints vs. Controls - Abel - 2010 Excerpt: Classic examples of the above confusion are found in the faulty-inference conclusions drawn from many so-called “directed evolution,” “evolutionary algorithm,” and computer-programmed “computational evolutionary” experimentation. All of this research is a form of artificial selection, not natural selection. Choice for potential function at decision nodes, prior to the realization of that function, is always artificial, never natural. http://www.bentham.org/open/tocsj/articles/V004/14TOCSJ.pdf etc... etc... etc... bornagain77
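For readers who want to see concretely why critics call these programs goal-directed, here is a minimal Python sketch of a Dawkins-style "weasel" search. This is my own toy reconstruction, not Dawkins's or Schneider's actual code; the population size and mutation rate are illustrative choices. The point it makes is simply that the target phrase is written into the program and selection measures each offspring against that stored target.

import random, string

TARGET = "METHINKS IT IS LIKE A WEASEL"   # the goal is hard-coded by the programmer
ALPHABET = string.ascii_uppercase + " "

def mutate(parent, rate=0.05):
    # copy the parent, changing each character with a small probability
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def score(candidate):
    # fitness = number of characters already matching the stored target
    return sum(a == b for a, b in zip(candidate, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while score(parent) < len(TARGET):
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=score)   # keep the best of the brood
    generation += 1

print("matched the built-in target after", generation, "generations")

Whether one reads that as a fair toy model of cumulative selection or as smuggled-in active information is exactly the dispute running through this thread.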
F/N: Onlookers, it is LOGICALLY POSSIBLE and empirically indistinguishable from the world we think we inhabit, that the cosmos was created five minutes ago in an instant, complete with the imagined memories and traces of the past, immediate and remote. kairosfocus
Dr Bot and MG: Dr Bot, thanks for an in-fairness remark. Actually, until one can show and account for the origin of a von Neumann type self-replicating facility in a metabolising entity on chance plus necessity in a warm little pond or the like, one cannot properly resort to the claimed power of lucky noise rewriting the DNA software in a reproducing organism. Darwin's evolutionary tree of life lacks a root.

Next, to account for novel body plans you have to account for increments of 10+ Mbits of new DNA bases that have to be functional from embryological development on. Until you can empirically show that chance plus necessity, without intelligent action, can credibly give rise to genetic bioinformation on that sort of scale, you have no right to claim that macroevolution is an empirical possibility. And in that context, the issue of getting to complex, specifically coded functional configs by chance and necessity without intelligent direction has to be shown.

We know routinely that intelligence is capable of generating FSCI. Simply asserting logical possibility, then refusing to allow the possibility of a designer at the times and places in question, is begging some very big questions. GEM of TKI kairosfocus
DrBot writes:
To be fair to KF, his experiment is aimed at abiogenesis (the origin of self-replicators), so it doesn't need to include many elements of evolutionary theory – no reproduction or inheritance, or possibly selection.
Ah, that's not how I was reading the discussion. Kairosfocus, could you please clarify if you are limiting your statements to abiogenesis or if you mean for them to apply to evolution as well? Thanks! MathGrrl
Dr Bot: The test for producing FSCI by lucky noise a la infinite monkeys does relate to many phenomena we observe in nature and as nature, starting with cases of coded information such as we see in DNA, and in similar text strings. We already have in hand an analysis -- cf here again -- that shows that it should not be possible to get functionally specific, coded text strings of significant length [1,000 bits is a useful cutoff] through undirected chance plus mechanical necessity. The tests suggested and carried out would allow us to see if we can break that analysis down by showing it to be counter-factual.

Just so, we test the second law of thermodynamics by trying to create a [macro-scale] perpetual motion machine of the second kind, over the threshold where statistical fluctuations are a significant issue. Maybe you will be able to see what I am discussing from the parallel case -- yes, it is parallel; the argument above is that there is no informational free lunch, just like the thermodynamics laws say there is no free work. Here is how Wiki puts the issue for this case; the second law forbids such:
A perpetual motion machine of the second kind is a machine which spontaneously converts thermal energy into mechanical work. When the thermal energy is equivalent to the work done, this does not violate the law of conservation of energy. However it does violate the more subtle second law of thermodynamics (see also entropy). The signature of a perpetual motion machine of the second kind is that there is only one heat reservoir involved, which is being spontaneously cooled without involving a transfer of heat to a cooler reservoir. This conversion of heat into useful work, without any side effect, is impossible, according to the second law of thermodynamics . . . . While the laws of physics are incomplete and stating that physical things are absolutely impossible is un-scientific, "impossible" is used in common parlance to describe those things which absolutely cannot occur within the context of our current formulation of physical laws.[6] The conservation laws are particularly robust from a mathematical perspective. Noether's theorem, which was proven mathematically in 1915, states that any conservation law can be derived from a corresponding continuous symmetry of the action of a physical system.[7] This means that if the laws of physics (not simply the current understanding of them, but the actual laws, which may still be undiscovered) and the various physical constants remain invariant over time — if the laws of the universe are fixed — then the conservation laws must hold. On the other hand, if the conservation laws are invalid, then much of modern physics would be incorrect as well.[8] . . . . The principles of thermodynamics are so well established, both theoretically and experimentally, that proposals for perpetual motion machines are universally met with disbelief on the part of physicists. Any proposed perpetual motion design offers a potentially instructive challenge to physicists: one is almost completely certain that it can't work, so one must explain how it fails to work.
Now, the point is that under the general macro-level constraints working on a system, there are in principle many specific micro-level distributions of mass and energy that comport with those states. These can be clustered in various interesting ways. But provided there is freedom to go to particular configs, the clusters with the overwhelmingly largest number of micro configs will be overwhelmingly most likely. So, a system that starts out in any arbitrary state and is looked at from time to time will overwhelmingly be likely to be in the large-cluster states. Such a high degree of freedom is of course a high entropy state. So, objects tend to move from low to high entropy states, as can be seen here.

Just so, when we take the set of configs that may be taken up by a system describable as a set of yes/no states, we have a large number of possible configs. Of these, the ones that correspond to linguistically or algorithmically functional states will be a very small fraction. Spontaneous changes, or states traceable to undirected chance and mechanical necessity, will overwhelmingly tend to be in non-functional configs. But that is a prediction. The infinite monkeys, lucky noise type test tests this. It is logically possible for the system to move to a functional config, but the overwhelming likelihood is that on spontaneous variations we will be in non-functional ones. For just 1,000 bits, the possibility of searching enough of the space to make a functional config explicable on chance plus necessity a superior explanation to intelligence (which is routinely observed to produce such configs) is so low that it is empirically unobservable.

But this is a case of trust but verify. So, we can test. So far the tests show that had the criterion been set at the old level of 1 in 10^50 or so, it would have been failed. It is instead set at 1 in 10^150 to 1 in 10^300 or so. And, the lucky noise experiment does test whether FSCI is credibly empirically accessible by lucky noise. The answer so far -- as expected -- is no.

And, yes, despite your denial, this is an empirical test relevant to natural phenomena and cases. The most obvious one is the digitally coded info in DNA, especially the origin of cell based life and the related origin of novel body plans. For, out of noise and the mechanical laws governing the chemistry, such long chain monomers would have to come from chance and necessity if they were not designed. Necessity would click the chain together and, in the case of proteins, fold them. But the sequence of monomers, if it were not controlled by choice, would have to be controlled by chance. And chance is utterly unlikely to access a functional sequence, as shown analytically and as supported by lucky noise, infinite monkeys experiments.

So, the conclusion is that one is empirically and analytically well warranted to infer to design, or choice, as the source of the relevant bio-information. Unless one can demonstrate instead that an intelligent designer at the relevant point is an impossibility. And that is the real issue: it is being assumed that no intelligent designer at the relevant points was possible. That is not on any specific empirical evidence or contradiction in the idea of an intelligence, but on prior question-begging assumptions. In short, the exact sort of bias mentioned in the original post. GEM of TKI kairosfocus
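To put numbers on the thresholds being quoted here, a back-of-envelope Python check of the usual estimate (a sketch only; the three inputs are the commonly cited round figures, not measurements from this thread):

from math import log10

particles_in_cosmos = 1e80   # commonly cited estimate of atoms in the observable universe
planck_events_per_s = 1e45   # upper bound on state changes per particle per second
seconds_available   = 1e25   # generous upper bound on available time

events_log10  = log10(particles_in_cosmos) + log10(planck_events_per_s) + log10(seconds_available)
configs_log10 = 1000 * log10(2)      # log10 of the number of distinct 1,000-bit configurations

print("upper bound on elementary events: about 10^%.0f" % events_log10)    # about 10^150
print("distinct 1,000-bit configs:       about 10^%.0f" % configs_log10)   # about 10^301
print("fraction samplable at one config per event: about 10^%.0f" % (events_log10 - configs_log10))

That difference of roughly 150 orders of magnitude is where the "1 in 10^150" cutoff and the 1,000-bit threshold in these comments come from.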
MathGrrl, to be fair to KF, his experiment is aimed at abiogenesis (the origin of self-replicators), so it doesn't need to include many elements of evolutionary theory - no reproduction or inheritance, or possibly selection. But the criticism of the model still stands in this context - it doesn't reflect known mechanisms of chemistry, so we can't draw inferences about chemistry from it. DrBot
bornagain77 writes:
‘That isn’t at all analogous to the mechanisms identified by modern evolutionary theory, so it wouldn’t prove much of anything. MathGrrl is right,’ Are you guys finally admitting that computers, using evolutionary algorithms, have failed to violate Dembski and Marks' Law of Conservation of Information (LCI)? But isn't Dawkins' 'Methinks It Is Like A Weasel', or similar goal-directed programs thereof, supposed to be slam-dunk proof for you guys that computers can do what you now say they can't do?
DrBot has done a fine job of explaining this already, but it apparently bears repeating that the experiment proposed by kairosfocus does not reflect known mechanisms of modern evolutionary theory. Evolutionary algorithms such as ev and Tierra (neither of which is explicitly "goal directed") do reflect those mechanisms, in simplified form, so it is not surprising that they work. I don't know how you get your conclusion above from what I wrote. MathGrrl
Joseph writes (41):
did just that- provided a testable hypothesis for ID- well I copied it from a referenced book.
If you're referring to your posts numbered 15 and 16, you have not specified a scientific hypothesis nor have you made any testable predictions that are entailed by such an hypothesis and that would serve to falsify it if they were found to be incorrect. If you mean this:
Therefore, to falsify any given design inference all one has to do is demonstrate that nature, operating freely, can produce the effect in question.
I'm afraid you misunderstand the nature of a testable prediction. The only conclusion that can be logically drawn from testing a particular mechanism is that either it is possible for that mechanism to result in the observed effect or that it is not possible for that mechanism to result in the observed effect. Testing a prediction about a mechanism doesn't tell us anything about other possible mechanisms. This, by the way, is the fatal flaw in the explanatory filter that you mention:
This is proven by the explanatory filter, which mandates that chance and necessity be given first shot at demonstrating their powers. If chance and/or necessity can explain it then the design node is never even considered.
The correct answer isn't automatically "design" it is "we don't know." You are trying to position intelligent design as the default explanation. That's not scientific. If you want to support ID scientifically, you need to clearly state a scientific hypothesis and make testable predictions entailed by that hypothesis, without reference to other hypotheses or theories. To be testable, you're going to need to specify who, what, when, where, and how, with some degree of specificity. Thus far I haven't seen that kind of detail from any ID proponent. MathGrrl
the insistence that if there is a logical possibility, then we must resort to such, is destructive of science in general, as in principle, every apparent regularity we have ever seen could be sheer lucky coincidence.
A red herring - I am not arguing about logical possibilities, I am pointing out that your experiment does not apply to reality so it cannot be used to make inferences about reality. DrBot
The FSCI by zener noise and PRBS register infinite monkeys experiment then tests whether FSCI, in general, can be had by lucky noise.
And by the SAME experiment you can show that many non-FSCI patterns cannot be had by lucky noise. Unfortunately, these patterns are observed to occur without intelligence - your experiment is flawed.
the question is: can we infer that FSCI is accessible by lucky noise, empirically not just as a logical possibility?
Yes, we can infer that, I agree; I always have agreed. Many non-FSCI patterns cannot be had by lucky noise. Your experiment does not apply to the systems being studied. DrBot
You are still avoiding the issue - you can't prove that patterns found in nature are the product of design if your test system doesn't generate any patterns found in nature. You are simply assuming your conclusions and designing an experiment to confirm them - this does not help ID. DrBot
PS: I suggest you may wish to see the discussion of lucky noise here. kairosfocus
Dr Bot: The issue is not whether we have observed the function of rocks looking like the faces of Washington et al. We already have that in hand. We also have in hand the mesh-of-nodes approach that allows us to describe the rock faces, and deduce how much implicit info is in them, by testing say a 3-D computer model for what random added noise will do to the portraits. Next, we ask: is the number of Y/N decisions to get to the portraits > 1,000? Obviously, yes. That is, we see functional specificity and complex information on the same aspect. We are in a narrow target zone.

The FSCI-by-Zener-noise-and-PRBS-register infinite monkeys experiment then tests whether FSCI, in general, can be had by lucky noise. If FSCI is reasonably and repeatably empirically accessible by lucky noise, the design inference, in all its forms, is dead. Just so would be the second law of thermodynamics, which is closely related.

Why is this specific testing of the accessibility of FSCI by lucky noise, in general, so hard to see? The question is: can we infer that FSCI is accessible by lucky noise, empirically, not just as a logical possibility? If it is, the design inference is dead. Similarly, the issue is NOT whether lucky noise in principle can get us to FSCI, nor whether chance plus necessity in principle could deliver anything in particular, but whether this is an empirically reasonable and plausible explanation. The insistence that if there is a logical possibility, then we must resort to such, is destructive of science in general, as in principle every apparent regularity we have ever seen could be sheer lucky coincidence. GEM of TKI kairosfocus
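One way to make the "perturb a 3-D model with noise" idea concrete: treat a design as a bit string, define function as staying within some tolerance of the working configuration, and watch how quickly random perturbation destroys it. The sketch below is my own toy construction; the 1,000-bit string, the 95% tolerance and the flip counts are illustrative choices, not anyone's published metric.

import random

TARGET = [1, 0] * 500                     # a 1,000-bit stand-in for the working configuration

def functional(bits, tolerance=0.95):
    # "function" here means matching the working configuration in at least 95% of positions
    matches = sum(b == t for b, t in zip(bits, TARGET))
    return matches / len(TARGET) >= tolerance

def perturb(bits, flips):
    out = bits[:]
    for i in random.sample(range(len(out)), flips):
        out[i] ^= 1                        # flip a randomly chosen bit
    return out

for flips in (10, 50, 100, 200):
    kept = sum(functional(perturb(TARGET, flips)) for _ in range(1000))
    print(flips, "random bit flips:", kept, "of 1000 perturbed copies still functional")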
Dala, pardon, I see now, G kairosfocus
Dr Bot: The point of FUNCTIONAL specificity is, first, to observe function. If there is no function, the criterion simply does not apply. Design thinkers will happily look at a complex item for which there are no grounds for assigning functional specificity and say: no grounds for inferring specificity on function, so, bye bye. As it turns out, there are a great many cases where we do observe function and specificity that is information-rich. So, when I look at your:
it may be the case that FCSI is contingent on sub units that exhibit complexity or regularity . . . . The functional snowflake is contingent on a general crystalline structure (regularity) but in a specific configuration. The general structure can result from chance and necessity, so the specific configuration is contingent on the general structure – if your experiment designs out the ability to generate any snowflakes you have also designed out any way of it generating a functional snowflake.
. . . this is irrelevant. Any material entity will exhibit behaviour constrained by law [made of atoms, which behave in a lawlike way], and most will also show some stochastic patterns, even something so simple as surface roughness. The issue is: is there an aspect of function that is crucially dependent on specific information, and in turn requires 1,000+ yes/no decisions to specify it? For instance, in the snowflake cam machine thought exercise, in the first part, there was a specific dependency. So we have a right to infer that the cams in question, though of an unusual material, were designed. (The machine as a whole would, by obvious manifest signs of design, be just that.)

You will further observe that I specifically took note of the crystalline structure. [Which, BTW, I note extends to, say, steel -- do you want to imply seriously that a steel gear, cam or car part, because it is made of a crystalline material, cannot be inferred to be designed unless the LOGICAL POSSIBILITY of chance is ruled out ahead of time? I again invite you to look at the Abel plausibility bound and the basic premise of the second law of thermodynamics, statistical form. Otherwise you are inferring that the watch on your arm cannot be inferred to be designed unless the logical possibility that a volcano spurted it out is ruled out absolutely. We are here dealing with reasonable, empirically well founded warrant, not answering to every twist and turn of selective hyperskepticism.]

In a dendritic snowflake, the crystalline structure forces a hexagonal form. But the relevant aspect is that dendrites can grow on the branches in many ways, and we here imagine that someone is able to control that precisely enough to make a cam out of a snowflake. Such a pattern can then in principle be made use of in a cam. And if we see a machine that uses a snowflake as a cam, where the precise shape is specific on function, i.e. function depends strongly on the precision of the shape (just try playing around with the shape of the gearing in your watch to see what I mean), then we have good reason to infer the design of the cam.

Now, the last part of the cite moves to a classic red herring. The test exercise as discussed is designed to test the specific aspect of phenomena where we ALREADY see FSCI, and we must ask where such FSCI, per empirically observable tests, can come from. We already know from the very fact of a config space that any config is possible in principle. But the material issue is whether the typical configs accessible on chance can reasonably get you to an island of function on essentially random-walk trial and error from an arbitrary start point. The answer of the empirics, as well as the analysis of such random walks in such a space, is that there is not a basis for searching enough out of 10^300+ configs to have any confidence that we can catch a functional config. This grounds the inference that since we routinely see FSCI coming from intelligence, and only from such, its presence is a good sign of design as cause.

The fact that you have joined with Dala in resorting to bare logical possibility is telling. GEM of TKI kairosfocus
Another example just occurred ... Apply your experiment to Mt Rushmore: could the action of wind and water (and gravity and temperature fluctuations) produce erosion patterns like the presidential faces? Obviously not! But if you want to design an experiment to demonstrate this then it is no good if your experiment doesn't allow ANY erosion patterns AT ALL.

Hypothesis: Erosion can't produce the faces at Mt Rushmore.
Experiment: A system that does not model erosion.
Observation: Faces do not appear.

Are you getting any closer to understanding this important point? DrBot
kairosfocus: ...Then, think about how your resort to try to support evolutionary materialism... You misunderstand me, kairosfocus. I was replying to markf's claim that evolution by random mutation and natural selection could be falsified by knowing the age of the earth. My point was that there is no way to falsify a theory which claims that something happened by a random event (or a chain of random events). Can such a theory be called scientific? Dala
KF, try looking at this from the top down. If you take a system with FCSI and break it apart (destroy function) you end up with a collection of non-functional bits, some of which exhibit patterns that are empirically observed to result from chance and necessity but which are non-random. In other words, it may be the case that FCSI is contingent on sub units that exhibit complexity or regularity. Your experiment will not generate complexity or regularity, so it will not - BY (accident of) DESIGN - produce FCSI.

The question of whether FCSI can be generated by anything other than intelligence is not actually at issue here - and I note you continue to claim falsely that I'm making a concession when I say I believe that random systems can't generate FCSI: I never claimed that they could, so it is not a concession to say I don't believe they can. The issue is the formulation of your experiment: by creating a system that does not produce patterns observed to be the result of chance + necessity in nature, you cannot then claim that its failure to generate patterns observed in nature (but which have only been observed to result from intelligence) is proof that chance and necessity cannot generate these patterns. All you have done is specify a system that does not produce patterns observed in nature. In order for me to accept the proposed experiment as valid you need to demonstrate that FCSI is not contingent in any way on any form of complexity or regularity observed to result from natural forces.
Now, let us imagine a system with a snowflake cam bar that is functionally specific and complex. That is, if the dendrites are wrong, even by a small amount, the system will not work.
You are inadvertently illustrating my point here - the functional snowflake is contingent on a general crystalline structure (regularity) but in a specific configuration. The general structure can result from chance and necessity, so the specific configuration is contingent on the general structure - if your experiment designs out the ability to generate any snowflakes you have also designed out any way of it generating a functional snowflake. Ergo: if your system cannot produce patterns observed to result from C+N in nature it cannot be used as a test to see if other patterns observed in nature are the result of either C+N or design. Far from being, as you accuse, a strawman red herring, this is a critical flaw in your proposed proof. Note that no goal posts have been moved here; my position is the same (if a little more refined) as it was at the start - your hypothesis is fine and I do not object to it one bit, I take issue with the proposed experiment because it is flawed. DrBot
Dala: Re: With “randomness” there is no such thing as “not enough time” . . . . There is no limit to what a random mutation could create in the next generation. Logical/physical possibility is not a reasonable criterion for an empirical claim. If that were so, the second law of thermodynamics in particular would collapse, as it rests on the statistical balance of accessible clusters of microstates, thence the likelihood that a system will move to a more or less likely cluster across time. (Cf the discussion here, in context.) Indeed, it could be argued that ANY pattern we think was a result of mechanical necessity was simply a matter of chance to date. Your claims boil down to a rejection of scientific reasoning on observed facts. Please see the discussion of Abel's universal plausibility bound, here. Also, see the discussion on the design inference explanatory filter and inference to law, chance or design on aspects of a given phenomenon, here. Then, think about how your resort, in trying to support evolutionary materialism, reveals itself as undermining the very basis of science itself. GEM of TKI kairosfocus
Is the claim that complex life arose by random mutation (and natural selection) falsifiable? (cut)... This is more or less Darwin's initial proposal. At least two possible observations were offered shortly afterwards that would have falsified this proposal. (a) The earth is not old enough (Lord Kelvin). However, it turned out it was old enough.

With "randomness" there is no such thing as "not enough time". In a VEEEEEERY unlikely mutation, a frog could turn into a human in the next generation. Giving yourself more time just makes things "more probable"; you have more chances to do the improbable.

(b) Inheritance is blended not particulate. However, it turned out it was particulate.

Again, this makes no difference whatsoever. There is no limit to what a random mutation could create in the next generation.

(2) That random variation that we observe between parents and offspring plus natural selection was responsible for all of diversity of life we see today. This has been falsified.

That we have observed other ways that diversity arises doesn't mean that randomness didn't create the diversity we observe today. All you have falsified is the theory that only randomness produces change. And no matter how much you try, you will never be able to falsify the theory that randomness creates SOME of the variations either. Again, with randomness, it's all about what we accept as likely or unlikely. You cannot falsify it in any way. Dala
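The "more time makes the improbable probable" point can be put in numbers. A short sketch using standard probability arithmetic, assuming independent trials; the per-trial probabilities and trial counts below are purely illustrative:

from math import expm1, log1p

def p_at_least_one_success(p, trials):
    # 1 - (1 - p)^trials, computed in a way that stays accurate for very small p
    return -expm1(trials * log1p(-p))

print(p_at_least_one_success(1e-6, 10**7))     # modest odds, many trials: about 0.99995
print(p_at_least_one_success(1e-150, 10**45))  # about 1e-105: still negligible after 10^45 trials

So extra trials do raise the odds roughly in proportion to trials times per-trial probability; whether that is ever enough depends entirely on how small the per-trial probability is, which is what the two sides of this exchange are really disputing.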
F/N: I must also pause to correct the attempt to undermine the design inference explanatory filter by setting up a strawman misrepresentation that incorrectly suggests that it is prone to false positives. When in fact, it goes out of its way in willingness to accept false negatives to be as sure as empirical tests can be that when it rules positively, it does so reliably. 1 --> First, Dr Rob presumably knows that ANY data structure can be converted into a suitable cluster of networked string data elements [often by using pointers as control elements to navigate around the network] 2 --> As a direct consequence, an analysis of FSCI on string data structures is without loss of generality, i.e.: 3 --> since we can convert a given 3-D, timelined organised functionally integrated system into a set of structured yes/no decisions, we can apply the 1,000 bit threshold, FSCI test to complex functional systems that implicitly store information in how they are organised and synchronised. (The cam bar as a programming element is a case in point, including our thought exercise snowflake cam bar.) 4 --> With that in mind, let us look at how we would observe an organised system based on selection of key aspects such as of course the cam bar. 5 --> Now, let us imagine a system with a snowflake cam bar that is functionally specific and complex. That is, if the dendrites are wrong, even by a small amount, the system will not work. (Let us imagine, it is something like a complex cloth weaving loom.) 6 --> So, we can map the dendrites as data storage elements and synchronise them on whatever serves as the controlling clock. 7 --> Soon, very soon, we would surpass 1,000 bits of information stored in the implicit set of yes/no decisions to shape the snowflakes so they would function correctly in the integrated system. 8 --> Let us apply the aspects based filter, with the snowflake cam bar as the aspect of interest:
a: Is the precise, functionally specific shape explained by mechanical necessity? (No, the flake forms in a hexagonal shape as constrained by laws linked to the nature of the H2O molecule and its constituent atoms, but the dendrites have freedom to take a particular location and length.) b: Is the shape that functions in a precisely integrated system explained by chance, i.e. freedom to take up any config in the space of configs, and simply getting this one by the luck of the draw? (No: the shape sits in an island of function that is at least 1,000 yes/no basic decisions deep, which cannot reasonably be searched out by random-walk-based searches on the gamut of the observable cosmos.) c: Is it credibly explained by purposeful choice? (Yes, as the precision of the shape allows an integrated system to work.)
9 --> So, if we saw a loom controlled by a snowflake-based cam bar, we would be reasonably confident that the shapes of the snowflakes in that case were not accidental, once we saw that we have isolated islands of function dependent on the particular shape beyond 1,000 yes/no decisions. 10 --> If we saw a similar machine where any shape of snowflake within a wide range would produce a pattern, we would not be justified to think the shape of the snowflakes was a matter of purposeful choice. 11 --> Instead, we would most likely conclude that the machine had been designed to incorporate the necessity-plus-chance elements of the snowflake's shape, presumably to create artistically unique weaves of cloth. (Notice how we have shifted the aspect we are considering here.) GEM of TKI kairosfocus
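The per-aspect decision procedure in steps a-c above can be written down compactly. Here is a minimal Python sketch of that decision logic (my own paraphrase; the function name, inputs and example calls are illustrative, and the 1,000-bit threshold is the one quoted in this thread):

def explanatory_filter(law_explains, complexity_bits, functionally_specific,
                       threshold_bits=1000):
    # Verdict for ONE aspect of an object or event, per the steps a-c above.
    if law_explains:
        return "necessity"          # low contingency: mechanical necessity suffices
    if complexity_bits < threshold_bits or not functionally_specific:
        return "chance"             # high contingency, but not both complex and specific
    return "design"                 # complex AND functionally specific

# The snowflake examples from the comment:
print(explanatory_filter(True,  10,   False))   # hexagonal symmetry        -> necessity
print(explanatory_filter(False, 5000, False))   # arbitrary dendrite shapes -> chance
print(explanatory_filter(False, 5000, True))    # precision cam-bar shapes  -> design

Note how the ordering builds in the preference for false negatives: necessity and chance are tried first, and design is returned only when both complexity and functional specificity are present.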
So, to complain that the test for whether FSCI would result from an infinite monkeys test is not going to replicate a stratigraphic layering pattern is to inject an utter irrelevancy, a red herring. The evidence, which you concede, is that FSCI is not a credible product of undirected chance plus necessity. A stratigraphic column, a very different thing, is a credible product of chance plus necessity, as can be empirically observed and verified. It produces complexity of particle patterns in a cementitious matrix, like the complexity of a lump of granite, and a fairly simple banding pattern of layers as variations in circumstances shift the layering. (E.g. between what look to be mud flow layers of rounded-off rocks by the road cut by Govt HQ here, there is a 2” or so layer of what looks like a fine cement plaster, probably due to a very powerful pyroclastic flow event; the event that wiped out St Pierre, Martinique in May 1902 put just 3/4” of fine ash on the ground. Thereafter, the mud flow aggregations continue in thicker layers of several feet each.) But there is no tight coupling between the complexity and the order, nor is the resulting pattern functionally specific. You have compared bananas with mangoes. Similarly, the same error occurs here:
Your own experiment would not produce a pattern analogous to a snowflake but you claim that its inability to produce a pattern analogous to DNA or a cell is evidence that they are not the product of natural processes. By your own argument, as a result of your own experiment, it would appear that snowflakes are the product of design and cannot be the result of chance + necessity – Why should I accept the result of an experiment that would consistently produce false positives when evaluated with known empirical test cases
1 --> The exercise of trying to generate linear strings showing functional sequence complexity by chance plus necessity without choice, through an infinite monkeys exercise, is of course utterly different from the circumstances that produce a 3-dimensional, hexagon-symmetry branching pattern snowflake. 2 --> This is bananas and mangoes.
By way of utter contrast again, D/RNA and protein molecules are exactly string structures, produced by chaining of contingent monomers into configurations that are functional because of the complex sequence that exploits chemistry and other factors [such as 3-D shape so that there is a key-lock fitting effect for D/RNA], just as text strings in this post are functional because they are composed of special sequences of letters in a linear data array that follow certain rules of symbolism and meaning.
3 --> Just so, with D/RNA, the specific sequence of GCAT/U monomers specifies a relevant protein sequence (and carries out associated regulatory controls), based on symbolic meaning of elements in succession in a string structure. 4 --> There are cellular machines that then use this structure to assemble protein molecules step by step. 5 --> Such strings of amino acids then fold (by themselves or with aid of more machines) into structures in certain patterns relevant to their onward function as cellular machinery, structures that are deeply isolated in the config space of such sequences. 6 --> That onward configuration is indeed based on physical-chemical forces, but those forces are being exploited based on the information coded in strings. So, the strings must be explained. 7 --> And, the infinite monkeys test is a valid test for the possibility of getting to meaningful strings by chance plus necessity. Noting that the sequence on the string is on the whole NOT constrained by chemistry, i.e. the D/RNA sequence is not driven by bonding or related forces. Any one of the GCAT/U can be succeeded by any other of the GCAT/U in the same string. 8 --> In the case of DNA, the two inter-wound strings in the helix are key-lock complementary, but that is from one string to the corresponding member of the other string, not in the sequence of the one string. (The key-lock complementarity is in fact the basis for the information storage, similar to how the ridges and valleys in a Yale-type key and lock store functional information, or how cams on a cam bar store functional information; cam bars were used in many mechanical systems as a sort of, usually analogue, programming.) 9 --> So, the infinite monkeys test is precisely relevant to the material question of generating specifically functional, complex, information-bearing data strings by chance and necessity without choice as a material factor. 10 --> The tests show, as noted, that what is of order 10^50 configs is searchable within relevant resources, but what is of order 10^300 or more is a very different matter. 11 --> This is further supported by the observation that, had the threshold been set at the old level from statistical thermodynamics in the days before modern computers, about 10^50, it would have been failed; so when we test for the origin of functionally specific strings by chance and necessity, we are doing a relevant test. 12 --> Cases that are not about FSCI and would not create functionally specific complex organisation embedding implicit FSCI – a complex cam bar would be a good case in point [and in principle we could make such a cam bar from a controlled set of snowflakes!] – are irrelevant. 13 --> But let us build on the snowflake cam bar example as a further thought exercise. Here, we imagine a bar of cams that in sequence would control complex machinery to carry out a co-ordinated task, based on a follower that then transfers the stored information to the mechanism. Unless the shape of the elements is precisely controlled and synchronised, such a unit will fail in operation. (And of course the 1,000 basic yes/no decisions threshold to get to the functional system will be passed rather quickly in this case.) 14 --> So, the claim you made is based on a red herring led out to a strawman, by way of moving goal posts. ______________ GEM of TKI kairosfocus
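For a feel for the scaling being argued about, here is a toy "infinite monkeys" run in Python (my own illustration; the alphabet, targets and draw cap are arbitrary choices). Expected draws grow as 27^L with the target length L, which is why a few characters are findable but long functional strings are not:

import random, string

ALPHABET = string.ascii_uppercase + " "   # 27 symbols

def draws_to_hit(target, cap=2_000_000):
    # draw random strings of the target's length until we hit it, or give up at the cap
    for n in range(1, cap + 1):
        guess = "".join(random.choice(ALPHABET) for _ in target)
        if guess == target:
            return n
    return None   # not found within the cap

for target in ("IT", "THE", "CELL"):
    print(target, "found after", draws_to_hit(target), "draws; expected about", 27 ** len(target))

A 20-character phrase would already need about 27^20, roughly 4 x 10^28, draws on average, and 1,000 bits corresponds to about 10^301 configurations, which is the contrast being drawn between 10^50-sized and 10^300-sized spaces.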
Dr Bot: I follow up overnight. First, MF's initial challenge has been that the core assertions and claims of design theory are untestable. In response, I set up the construct FSCO/I, especially in the form dFSCI. If it can be shown that, in any reasonable situation, under any reasonable circumstances traceable to chance and necessity without intervention of purposeful choice, on the gamut of our observed cosmos, FSCO/I spontaneously originates, then design theory is falsified.

Why is that? Fundamentally, because design theory is a theory of the origin of specifically functional, complex organisation and associated information. So, if this sort of thing can be shown to routinely, or credibly if rarely, originate by chance contingency acting with the laws of mechanical necessity without design, the central claims of design theory would lose credibility. Complex Specified Information is about the sort of thing we find in FSCI (as is brought out here), and irreducible complexity is a form of FSCO, which is associated with implied FSCI, through the C1 - 5 factors identified and discussed here.

The infinite monkeys type test, in the form of a Zener noise circuit flattened out with a PRBS counter and register circuit, is a convenient way to produce the combination of chance and necessity in action that would potentially search a config space for spontaneous FSCI. In practical tests it is proved that things in config spaces of order 10^50 or so are reachable. But things in spaces of order 10^300, on empirical and analytical grounds, are not; at least in empirical terms. Logical-physical possibility is not to be conflated with empirical plausibility and credible observability. (Hence, the Abel-type universal plausibility bound.) This is an important first empirical test.

It is therefore credible to conclude that where we see FSCO/I -- with the tight coupling of specificity of function and complexity in the sense of islands of function in large config spaces set by contingencies of order 1,000 or more basic yes/no decisions -- its best current explanation, on empirical observations and associated analysis, is choice rather than chance and/or necessity.

Now, too, we need to distinguish the constructs: (i) randomness, (ii) order and (iii) organisation, as Abel and Trevors did in their discussion paper, as previously linked, on three types of sequence complexity. This reflects a key remark by J S Wicken, in 1979, that has come up several times in the ongoing ID Foundations series here at UD: ____________________ >> 'Organized' systems are to be carefully distinguished from 'ordered' systems. Neither kind of system is 'random,' but whereas ordered systems are generated according to simple algorithms [i.e. "simple" force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external 'wiring diagram' with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic 'order.' ["The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion," Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: "originally" is added to highlight that for self-replicating systems, the blueprint can be built-in.
Also, since complex organisation can be analysed as a wiring network, and then reduced to a string of instructions specifying components, interfaces and connecting arcs, functionally specific complex organisation [FSCO] as discussed by Wicken is implicitly associated with functionally specific complex information [FSCI]. )] >> _____________________ Similarly, and as already excerpted above, Orgel in 1973 distinguished:
In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added. Crystals, of course, would by extension include snow crystals, and order enfolds cases such as vortexes, up to and including hurricanes etc. ]
I have extended the point on crystals to include the complex case of the dendritic snowflake, by observing that the part that is simple, symmetric and orderly traces to forces of necessity keyed to the structure of the H2O molecule, and the complex branching structure traces to specific happenstance of atmospheric conditions at the moment of forming by riming etc. Now, you have objected that something like stratigraphic layering is not explained on chance alone, but by chance and necessity in concert. Then, you have extended this to the claimed power of chemistry in the biologically relevant context. (Oddly, when I took up that context starting with Darwin's still little electrified pond full of salts etc, you have now objected that my remarks are irrelevant to your concerns. But, as TMLO Chs 7 – 9 discusses, that is precisely relevant to the issue of whether chemistry and associated thermodynamically driven kinetics, can account for the origin of life with its complex functional organisation. Cf my always linked note App 1 here.) Stratigraphic layering of course, is a case of chance plus necessity in action, giving rise to a complex pattern of particles in layers, typically driven by hydrodynamic sorting and settling mechanisms, or similar mechanism in a volcanic deposition episode, whether by pyroclastic flow or by lahar etc – very relevant to where I sit as I type this. This is not a case of FSCO/I; as Joseph has pointed out: there is not a tight, functional coupling between the specific configuration of elements and complexity. If the pattern of currents and suspended particles had been different for the moment, a different rock pattern would have been deposited and that would be that, any pattern would more or less do as well in very broad limits. Stochastically dominated, chance contingency, not choice, is the best explanation. And, the explanatory filter's verdict would be just that. [ . . . ] kairosfocus
Dr Bot: Please read here and onward on three varieties of sequence complexity. GEM of TKI kairosfocus
Something like a dendritic snowflake has an ordered hex structure, and a variable system of dendrites, but the complex part is chance and the ordered part is too simple and constrained to be informational.
I don't think I'm going to get anywhere with this but here goes. Your own experiment would not produce a pattern analogous to a snowflake but you claim that its inability to produce a pattern analogous to DNA or a cell is evidence that they are not the product of natural processes. By your own argument, as a result of your own experiment, it would appear that snowflakes are the product of design and cannot be the result of chance + necessity - Why should I accept the result of an experiment that would consistently produce false positives when evaluated with known empirical test cases? Your last two posts about OOL and FCSI are irrelevant to my point, which you continue to avoid dealing with. Your experiment would not demonstrate what you claim it can demonstrate - you need to devise a better experiment, that is all. DrBot
#55 Dala Sorry - I did not address the claim that life started by a random event. This is not falsifiable and is not a scientific hypothesis. All scientific OOL hypotheses specify more detail. markf
#21 Dala Is the claim that complex life arose by random mutation (and natural selection) falsifiable? Is the claim that life started by a “random event” falsifiable? Yes. Differentiate two proposals: (1) That random variation that we observe between parents and offspring, plus natural selection, was responsible for the vast majority of the diversity of life we see today. This is more or less Darwin's initial proposal. At least two possible observations were offered shortly afterwards that would have falsified this proposal. (a) The earth is not old enough (Lord Kelvin). However, it turned out it was old enough. (b) Inheritance is blended, not particulate. However, it turned out it was particulate. (2) That random variation that we observe between parents and offspring, plus natural selection, was responsible for all of the diversity of life we see today. This has been falsified. Genetic drift and a small amount of Lamarckian inheritance have been shown to play a role - although it is debated to what extent. markf
PS: As has been pointed out from Orgel onward, ordered, repeating structures do not store significant quantities of functional information. They are simply too constrained. Something like a dendritic snowflake has an ordered hex structure, and a variable system of dendrites, but the complex part is chance and the ordered part is too simple and constrained to be informational. Conceivably, we could control the micro atmospheric conditions and store information in the dendrites, but that would be a case of design. Uncontrolled stochastic contingency is what we call chance. Choice-driven contingency is what we call design. kairosfocus
Dr Bot: I note that there is, above, an analysis of where the contingency comes from in systems that blend chance plus necessity: not the necessity [which is the source of natural regularities, or the predictability and controllability of an engineered system], but the chance component. There is a myth that, through accumulating small chance steps with a filter based on necessity, one can climb the easy slope up the back end of Mt Improbable. (Indeed, GAs do searches on much smaller config spaces, where the configs have different degrees of function and we can compare with an objective function and climb to better performance.) The problem with that myth is that it starts way too late, as I pointed out already: fitness slopes with possibility of improvement are WITHIN islands of function. The root problem is first to get to the shores of an isolated island of function in a truly large config space.

On evidence relevant to the living cell, the simplest independent life forms are going to use 100,000 to 1,000,000 bits or so worth of DNA, and most likely they will be at the upper end. (The organisms at the lower end are parasites of one kind or another.) Until you are at that threshold you do not have a credible metabolising entity with a self-replication facility. The FSCI cutoff of 1,000 bits is two to three orders of magnitude below that in bit depth, and hugely below it in config space, which grows exponentially with bit depth. Going beyond that, to get to the novel, multicellular body plans we are looking at 10's - 100's of Mbits, dozens of times over. I almost don't need to note that the window shown by the Cambrian on the usual timeline is about 10 million years.

Again, the tests are that when novel DNA is put into ova of diverse species that have sufficiently divergent body plans, development begins on the host ovum's plan, then fails when the DNA to write required proteins is missing. In short, the organisation of the host cell as a whole is a part of the relevant bio-information.

So, a test that looks for 1,000-bit chunks of functional information arising on chance, which could onward be strung together, is a reasonable test. The problem is, we know on analysis and experience that a config space of 10^300 or so cells is not sufficiently searchable on the gamut of our observable cosmos to make a difference. And the real spaces credible for first life start well beyond that. If you want to start in something like a space of 10^50 configs, you need to justify that empirically in a context relevant to OOL, then the origin of body plans. Remember, 25 ASCII characters worth of info is what you are talking about there, equivalent to 3 - 4 typical-length English words. How much of an algorithm or program can you specify in that much space? What sort of wiring network could you specify in that much space? Do you see why the FSCI threshold test is a significant one? GEM of TKI kairosfocus
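The arithmetic behind those figures is easy to check with standard log conversions (nothing here is specific to any biological model; the 7-bit-per-ASCII-character figure is the usual convention):

from math import log2, log10

print(log2(1e50))            # a 10^50 config space corresponds to about 166 bits
print(log2(1e50) / 7)        # i.e. roughly 24 seven-bit ASCII characters
print(1000 * log10(2))       # 1,000 bits corresponds to about 10^301 configs
print(100_000 * log10(2))    # 100 kbits of genome-scale info: about 10^30,103 configs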
And notice I have in mind that the zener noise goes to a PRBS counter, one that uses XOR feedback links and feed forward links, etc to get a pseudo-random chain.
Point taken - I had forgotten about this extra step; you have random noise being put through a mechanism. Unfortunately, it still doesn't get past the basic problem.
Likelihood of FSCI as shown practically nil.
Likelihood of ordered repeating structures like crystals and sediment, as shown, is also practically nil.
So, any test by which FSCI could potentially be generated, is a legitimate test of the design inference, through testing the infinite monkeys model.
I disagree for reasons already stated, but I'll try again from a different angle. We can invent many randomly based systems, with some mechanical necessity, that might potentially generate FCSI but with a probability so low as to say that we can rule it out as occuring reasonably by chance. Your system is one example. This in its self is not enough to demonstrate that natural forces cannot generate FCSI, all you demonstrate is that your system can't generate FCSI. In order to use it as evidence of other systems not generating FCSI you need to show why it applies in the general case, rather than the specific one. I'm afraid you haven't done this for the simple reason that your system doesn't generate patterns that we KNOW are the result of chance+necessity in the natural world. The way your system works rules out any high probability of producing non FCSI non-random patterns of the kind found in nature - this implies the possibility that we have ruled out by accident of design the possibility of it also producing FCSI - in other words we can't rely on this system as a reasonable model of nature so we can't draw any inferences about nature from it. any test by which FSCI could potentially be generated is not a legitimate test of the design inference. Only tests which we can show are directly relevant to the systems in question.
Along the way, you actually admit the central point: chance is not a credible seed for FSCI.
I've never actually disputed it; I don't believe that the kind of order found in biology can come about without intelligent design. I'm just arguing that the specific test you proposed is flawed.
And in any case this is, as Joseph has aptly pointed out, a red herring led away to a strawman, creating the false and misleading, ad hominem-laced impression that I do not know what I am talking about on this topic.
I believe that your proposed experimental proof is flawed; you are claiming that criticising your ideas is an attack on you personally. Does your criticism of me also constitute an ad hominem? I don't believe that critiquing each other's ideas and claims constitutes a personal attack. I don't see how we can have any kind of reasoned debate if you believe that my disagreeing with you is uncivilized. I also don't see why a valid criticism is a straw man or a red herring. Sorry, but this sounds like a rhetorical dismissal to me. DrBot
Dr Bot: Pardon, but how does a computer work to physically execute instructions again, but by precisely organised mechanical necessity? And notice I have in mind that the zener noise goes to a PRBS counter, one that uses XOR feedback links and feed-forward links, etc., to get a pseudo-random chain. That way the non-flat distribution of the Zener goes into a flattening mechanical subsystem [a digital circuit that is actually deterministic, but because it is randomly seeded (and maybe randomly clocked too) its output is a flattened random distribution], producing a flat random output, flat enough to be used in commercial systems. Chance plus necessity. Likelihood of FSCI as shown practically nil. And in any case this is, as Joseph has aptly pointed out, a red herring led away to a strawman, creating the false and misleading, ad hominem-laced impression that I do not know what I am talking about on this topic. As we started way back above: once any chance plus necessity system, without input of active information, can generate FSCI, the design inference on FSCI as a sign of design is finished. So, any test by which FSCI could potentially be generated is a legitimate test of the design inference, through testing the infinite monkeys model. Along the way, you actually admit the central point: chance is not a credible seed for FSCI. So, since mechanical necessity is not a source of high contingency, chance plus necessity would have to depend on chance to try to get to islands of function. But those islands of function, by virtue of the scope of the config spaces of 1,000 bits or more, are beyond the credible reach of our observed cosmos. And so, we are left with one credible candidate cause for FSCI, the observed one: design. Thanks. GEM of TKI kairosfocus
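[To make the PRBS idea concrete, here is a minimal sketch - an illustration, not KF's actual circuit - of a deterministic XOR-feedback shift register (a Fibonacci LFSR) seeded from a biased noise source and producing a much flatter pseudo-random bit stream; the tap positions and the seed model are illustrative assumptions:]

import random

def biased_noise_bit(p_one=0.8):
    # Stand-in for the non-flat Zener noise source: 1s far more likely than 0s.
    return 1 if random.random() < p_one else 0

def lfsr_bits(seed, n, taps=(16, 15, 13, 4)):
    # 16-bit Fibonacci LFSR: XOR the tapped bits to form the feedback bit.
    state = (seed & 0xFFFF) or 1            # state must be non-zero
    for _ in range(n):
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & 0xFFFF
        yield state & 1

# Seed the register from the biased source, then draw the "flattened" output.
seed = sum(biased_noise_bit() << i for i in range(16))
stream = list(lfsr_bits(seed, 10000))
print("fraction of 1s in output:", sum(stream) / len(stream))   # close to 0.5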
KF, I'll try this one more time, then I'll give up. Let's go back to the start of this specific point of discussion and try to avoid red herrings and other distractions:
To falsify the design inference, simply produce a case where, in your observation [or that of a competent observer], it is reliably known that chance plus mechanical necessity produces at least 1,000 bits of functionally specific complex information, as could be done by an implementation of the infinite monkeys situation. (Cf this recent UD thread (and onward threads in the ID Foundations series) on that subject.) A simple way would be to set up a million PCs with floppy drives and use zener noise sources to spew noise across them. Then, every 30 s or so, check for a readable file that has 1,000 or more bits of functional information. That is 125 bytes, not a lot. If you do ASCII + a checksum bit per character, that is 125 letters of coherent text that functions linguistically or algorithmically or on some data structure. 125 letters is about 20 words of English worth. This has been put on the table explicitly, many many times.
I'm addressing a specific point in this post of yours:
To falsify the design inference, simply produce a case where, in your observation [or that of a competent observer], it is reliably known that chance plus mechanical necessity produces at least 1,000 bits of functionally specific complex information ... A simple way would be to set up a million PCs with floppy drives and use zener noise sources to spew noise across them. Then, every 30 s or so, check for a readable file that has 1,000 or more bits of functional information.
There is no mechanical necessity in your example - just random noise. As I have stated (it is not a concession, because I never stated otherwise), random noise is not FCSI, nor are many other patterns that are the result of chance plus mechanical necessity. You are suggesting a way to falsify the design inference using a random noise generator; I am pointing out that if you make a design inference on something other than FCSI - for example sedimentary layers or crystal structures - then the inference can't be falsified with your example. Therefore it, as a proposed experiment, is flawed. DrBot
Joseph: Or, with chance and necessity without design doing so. The snowflake is a classic case on this, cf my online discussion here. GEM of TKI kairosfocus
PPS: Please note, Dr Bot: the first post specifically addresses chance and necessity, which includes their joint action; such as in a snowflake, where the specificity is from necessity and the variations are from chance, but they are not coupled to provide informational function -- a point that has sat for years in both the UD weak argument correctives and the always linked note I have through my handle. Necessity does not lead to high contingency, and chance does not lead to high contingency with functional specificity; the two acting together give a case where the complexity is from chance and the specificity is from necessity, but they are not coupled to give informational function. Only design, on our observation and analysis, will do that. I trust this is clear and specific enough. Going back to the FSCI origin by chance test, the aspect of complexity lies in the contingency; only chance or design explains high contingency, so the test for the one is a test for the other, if chance fails to make FSCI. kairosfocus
DrBot:
If the failure of a random number generator to generate FCSI is proof that natural processes can't produce FCSI, then the failure of a random number generator to produce anything other than randomness would be proof that natural processes can't produce crystals, sedimentary layers etc.
Please explain. The failure of random number generators to produce FCSI is evidence that random, undirected processes cannot produce FCSI. There isn't any FCSI in sedimentary layers nor is there any FCSI in crystals. So there isn't any issue with random, undirected processes producing them. Joseph
Dr Bot: Re: "random number generators produce random numbers, they are not good analogies for chemical processes so they do not provide any evidence that chemical processes can't generate FCSI."
1 --> The issue here is chaining polymers, where the specific sequence is what is functional [perhaps after folding etc.], for RNA, DNA and proteins.
2 --> In each of these cases, the chaining spine does not particularly constrain sequence; indeed Dean Kenyon conceded his 1969 Biochemical Predestination thesis in his preface to the first ID theory book, Thaxton et al's The Mystery of Life's Origin, on exactly this point.
3 --> So, the issue is a highly contingent chain, and how that chain gets to be as it is. For highly contingent facets of a phenomenon, the two key causal factors are chance and intelligence.
4 --> Mechanical necessity, under a given initial condition, produces the same result, hence Laplace's Demon. (Chaos gets its unpredictability from tiny variations that block the setting up of exact initial conditions, fed into a nonlinear noise amplifier, so to speak. That is from chance variations.)
5 --> So, the issue IS the source of the contingency in digital strings that are functionally specific and complex. Chemistry just clicks the chain together, and it may have something to do with folding and function once folded.
6 --> So, your concession -- and concession it is -- that chance is not a credible source of FSCI immediately points strongly to the other main empirically warranted source of high contingency, design.
7 --> Mechanical necessities of the chemistry click the chain together and are expressed in folding and function, but they are not shaping the chain. The chain sequence [standard stacking links] shapes the function, not the other way around.
8 --> In short, your concession implies inference to design, as necessity will not do what you want. (And indeed, if the laws of physics and thence chemistry programmed life into the cosmos, that would have very, very big cosmological fine-tuning implications, given what we already know about the fine tuning of physics.)
GEM of TKI
PS: Have you read the first post in the ID foundations series? The second one? kairosfocus
KF
Thanks for the decisive concession:
I said:
I’ll make the point again – all you get out of random number generators is random numbers, you don’t get FSCI, and you also don’t get any other patterns
That isn't a concession; it is what I have been saying all along - random number generators produce random numbers, they are not good analogies for chemical processes so they do not provide any evidence that chemical processes can't generate FCSI. This is ALL I am saying. I am not claiming that chemical processes CAN produce FCSI, just that the example you were proposing - random number generators not producing FCSI as proof that nature doesn't produce FCSI - is a flawed example, because random number generators don't produce other patterns that we KNOW nature DOES produce (crystal structures, sedimentary layering etc.). If the failure of a random number generator to generate FCSI is proof that natural processes can't produce FCSI, then the failure of a random number generator to produce anything other than randomness would be proof that natural processes can't produce crystals, sedimentary layers etc.
Now, the problem you need to address is how the DNA sequences and associated functional polymers that come together in the living cell originated and configured themselves functionally by chance plus necessity, then how novel DNA on the order of 10's of millions of bases originated by similar chance plus necessity to get us to novel body plans.
Why do I need to address this? I'm not making any claims in this domain - I'm just pointing out that your proposition about random number generators is empirically flawed. It is a bad argument that can easily be shown to be irrelevant, and as such it doesn't help the case for ID - THIS is why you should either revise it or withdraw it. I'm trying to give you some constructive feedback to help improve the strength of your arguments. DrBot
Thanks, Denyse. I enjoyed this post. We happen to be living at the end of a highly theory-centered age. Einstein's crack about common sense is legendary and applauded by the Laputans at the academy. We love our theories, and we love 'em simple. Natural Selection produces the species. Sexual repression causes unhappiness. Property is the root of all social evil. E=mc^2. A love of theory and liberalism go hand in hand. Liberals, after all, are people who are chronically unhappy with the way things are. They claim to be able to produce a radical transformation of being by negating existing values, but what they actually produce is nothingness. Darwinism is a "metaphysical research program" just as surely as Relativity can never be used as a practical tool of physical measurement, since it negates the dimensions that make measurement possible. We can only hope -- and pray -- that the 150-year-old siege against common sense is entering its twilight phase. It has already lasted longer than either Rationalism or Transcendentalism. Common sense tells us that nature was designed. Common sense will probably prevail in the end, if for no other reason than that Modernism has outlived its welcome and lost all vitality. The very fact that the liberals are now overwhelmingly entrenched in the academy is actually a hopeful sign. There is nothing human nature hates more than something that is stale and used up. allanius
markf:
However, what is falsifiable (and has been falsified in some cases) is **specific** claims about how biological diversity arose.
What has been falsified? It is a given that the processes espoused by the theory of evolution have never been observed constructing functional multi-part systems. So what do you have? Joseph
F/N: MG, cf. the FAQs here and at the top right of this and every UD page, the UD Weak Argument Correctives. kairosfocus
Dr Bot: Thanks for the decisive concession:
I’ll make the point again – all you get out of random number generators is random numbers, you don’t get FSCI, and you also don’t get any other patterns
As I have cited, that is indeed so. Now, the problem you need to address is how the DNA sequences and associated functional polymers that come together in the living cell originated and configured themselves functionally by chance plus necessity, then how novel DNA on the order of 10's of millions of bases originated by similar chance plus necessity to get us to novel body plans. Remember, as already noted, the relevant forces usually cited are chance variation plus natural selection [= differential reproductive success], leading to descent with modification to the level of novel body plans: CV + NS --> DWM, with NBP. (Cf my discussions here and here, including the issue of getting to a self-replicating cell from chemicals in a pond or the like prebiotic environment.) GEM of TKI kairosfocus
MathGrrl:
This is an excellent example of why it is essential to clearly state one’s hypothesis and make testable predictions entailed by that hypothesis that would serve to refute it if they fail. Doing this for ID, and documenting it in a FAQ, would eliminate the “ID is not science” claim immediately. I would think that a number of people here would be interested in doing that work.
I did just that- provided a testable hypothesis for ID- well I copied it from a referenced book. Now if you don't like it then perhaps you could provide a testable hypothesis for your position so we can compare. Any time you or Dr Bot want to produce such a hypothesis for comparison would be good- the sooner the better. Joseph
Dr Bot: Pardon, but the relevant point on sedimentary layers is that they are complex but not specified. Nature acting through chance plus necessity is fully capable of getting to complex outcomes, such as snowflakes, vortices, sedimentary layers, etc. These invariably present a situation where the complexity and any specificity they have are decoupled. In the relevant cases - linguistic information on digital symbols, or algorithmic information that functions prescriptively, as in DNA - the complexity and specificity are tightly coupled. That is how we get to islands of isolated function in a config space. Which I duly noted. Notice how the flowchart on aspects of phenomena or objects specifically addresses specificity AND complexity. The infinite monkeys type test, by zener-fired random generator or other means, is a test of getting to FSCI by chance plus necessity. As the results already cited show, things of the order of 1 in 10^50 or so are feasible on chance plus necessity, but the relevant threshold needs to be of order 1 in 10^300 or so. GEM of TKI kairosfocus
The thermodynamics and probability backed assessment accepted by design thought is that we will not pass stage one: 143 ASCII characters worth of coherent text in English.
I'll make the point again - all you get out of random number generators is random numbers; you don't get FSCI, and you also don't get any other patterns (non-functional, non-specified information) that match anything that the observed cosmos naturally generates (apart from complete randomness). If your example proves that FSCI can't be the result of natural forces then it also proves that any other non-random pattern seen in nature can't be the result of random forces - and this to me seems obviously wrong! DrBot
PPPS: The Wiki Infinite Monkeys article has some enlightening remarks: _____________________ >> The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation. One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[19] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d... Due to processing power limitations, the program uses a probabilistic model (by using a random number generator or RNG) instead of actually generating random text and comparing it to Shakespeare. When the simulator "detects a match" (that is, the RNG generates a certain value or a value within a certain range), the simulator simulates the match by generating matched text. Questions about the statistics describing how often an ideal monkey should type certain strings can motivate practical tests for random number generators as well; these range from the simple to the "quite sophisticated". Computer science professors George Marsaglia and Arif Zaman report that they used to call such tests "overlapping m-tuple tests" in lecture, since they concern overlapping m-tuples of successive elements in a random sequence. But they found that calling them "monkey tests" helped to motivate the idea with students. They published a report on the class of tests and their results for various RNGs in 1993.[20] . . . . Primate behaviorists Cheney and Seyfarth remark that real monkeys would indeed have to rely on chance to have any hope of producing Romeo and Juliet. Monkeys lack a theory of mind and are unable to differentiate between their own and others' knowledge, emotions, and beliefs. Even if a monkey could learn to write a play and describe the characters' behavior, it could not reveal the characters' minds and so build an ironic tragedy.[21] In 2003, lecturers and students from the University of Plymouth MediaLab Arts course used a £2,000 grant from the Arts Council to study the literary output of real monkeys. They left a computer keyboard in the enclosure of six Celebes Crested Macaques in Paignton Zoo in Devon in England for a month, with a radio link to broadcast the results on a website. One researcher, Mike Phillips, defended the expenditure as being cheaper than reality TV and still "very stimulating and fascinating viewing".[22] Not only did the monkeys produce nothing but five pages[23] consisting largely of the letter S, the lead male began by bashing the keyboard with a stone, and the monkeys continued by urinating and defecating on it. 
Phillips said that the artist-funded project was primarily performance art, and they had learned "an awful lot" from it. He concluded that monkeys "are not random generators. They're more complex than that. … They were quite interested in the screen, and they saw that when they typed a letter, something happened. There was a level of intention there."[22][24] >> _____________________ We can easily see that the result after feasible lengths of time and computing effort, is to make chains that are an order of magnitude below the cutoff level in mind. Let us observe:
128^25 = 4.79*10^52
128^143 = 2.14*10^301
Let us not forget, a typical protein is 300 or so AA long, corresponding to 900 bases, or a config space of:
4^900 = 7.15*10^541
kairosfocus
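[As an aside, the figures just quoted are easy to verify; a small Python sketch of that arithmetic, illustrative only:]

from math import log10

for base, exp in [(128, 25), (128, 143), (4, 900)]:
    log_val = exp * log10(base)                  # log10 of base**exp
    mantissa = 10 ** (log_val - int(log_val))
    print("%d^%d ~ %.2f * 10^%d" % (base, exp, mantissa, int(log_val)))
# prints approximately 4.79*10^52, 2.14*10^301 and 7.1*10^541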
KF, you don't seem to have understood my point.
Sedimentary layers may be complex but they are not specified, and certainly not functionally specified on a digital code.
They are patterns that we observe in nature, and so is FSCI. I am not claiming that sedimentary layers have any function or complexity, just that they are observed and, most importantly, THEY ARE NOT RANDOM. If your random noise generator doesn't produce patterns that we know ARE the result of natural processes, then WHY is it any good as an example of how natural processes can't account for other patterns (FSCI)? If we take your proposition seriously and deal with the implications on their merits, then the inability of a random noise generator to produce ANY structured output (sediments, crystals), regardless of FSCI content, implies that ANY NON-RANDOM PATTERN cannot be the result of natural processes - all because a random noise generator won't, in the lifetime of the cosmos, produce those types of patterns. DrBot
PPS: If you want, we could feed the infinite monkeys result into a test for simple text, and see if we can get a phrase or sentence of relevant length, against say the database of 1 - 2 million books in the Gutenberg free ebook record. Sentences that pass the 1,000 bit [~ 143 ASCII character] test could then go into a pool and see if we can get a paragraph, then a book by shuffling them or their elements. The thermodynamics and probability backed assessment accepted by design thought is that we will not pass stage one: 143 ASCII characters worth of coherent text in English. If we do so in a credible case -- there will have to be an audit, given the cases such as Weasel etc -- then the inference from FSCI to design is dead, and with it both CSI and IC, thus design theory. kairosfocus
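[A rough sketch of what stage one of such an audit might look like, under assumptions of my own: candidates are random strings over a simplified 27-symbol alphabet, and "coherent English" is crudely approximated by every token appearing in a word list. The word-list path is hypothetical; a real test would need a far better notion of linguistic function:]

import random
import string

ALPHABET = string.ascii_lowercase + " "       # simplified 27-symbol alphabet

def random_candidate(length=143):
    # One "monkey" output: 143 characters, roughly the 1,000-bit ASCII threshold.
    return "".join(random.choice(ALPHABET) for _ in range(length))

def looks_like_english(text, words):
    tokens = text.split()
    return bool(tokens) and all(t in words for t in tokens)

# words = set(line.strip().lower() for line in open("wordlist.txt"))  # hypothetical path
words = {"the", "cat", "sat", "on", "mat"}    # toy stand-in for a real dictionary

hits = sum(looks_like_english(random_candidate(), words) for _ in range(100000))
print("candidates passing the crude filter:", hits)   # expected: 0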
PS: The test I proposed is a test as to whether [d]FSCI -- which is what we see in DNA -- is credibly a product of chance plus necessity without intelligent direction. If that can be shown on a computer, the design inference on dFSCI and wider FSCI as a sign of intelligent design is dead. So would be the 2nd law of thermodynamics. A random number circuit driving a computer is a convenient way to implement the test, and is a form of the infinite monkeys type test -- a type of test commonly raised by evolutionary materialism advocates to give the impression that such chance can create modest variations that will be picked up by natural selection and will then give rise to evolutionary development; cf. Dawkins' metaphor of the easy back slope up Mt Improbable, when he should be addressing getting to the shores of Isle Improbable in a sea of non-functional configs. But in all these cases the issue of isolated islands of function is not properly addressed, e.g. notoriously in Dawkins' Weasel, where non-functional codes are tested for proximity to a target and are rewarded for increments in proximity. Avida, ev etc. beg the question of getting to islands of function, and the issue of injecting active information. The incidence of function among configs is far too high, for instance. kairosfocus
Dr Bot: Pardon, but your slip is showing again. Sedimentary layers may be complex but they are not specified, and certainly not functionally specified on a digital code. (If you dispute this, kindly provide the code, and the function the code leads sedimentary layers to produce, algorithmically or linguistically. WHAT YOU HAVE PROVIDED IS A CASE WHERE WE COME ALONG AND LABEL SEDIMENTARY LAYERS, AND TRANSLATE THE AS-IS FACTS ON THE GROUND SYMBOLICALLY. THE RESULTING CODE IS INDEED FUNCTIONAL AND SPECIFIC, BUT IT IS DUE TO OUR HIGHLY INTELLIGENT INTERVENTION.) DNA is dFSCI, especially as regards protein coding. Accurate description, and the associated objective distinction, is the first step in proper scientific reasoning. GEM of TKI kairosfocus
KF, I'm addressing your specific claim that a random noise generator can be used as evidence that natural processes cannot produce FSCI. Your last post didn't address the argument on its merits, so I'll try to re-state the issue to make it clearer.
Your hypothesis appears to be: Natural processes cannot produce FSCI - intelligence is required.
The experiment proposed is: Measure the output of a random noise generator for a (long) period of time; if it doesn't produce FSCI then we have proven that FSCI can't be generated by natural forces.
If you want to test your random noise idea, and the specific claim that because it won't produce FSCI natural processes can't produce FSCI, then you ought to start by testing it against natural processes that produce specific patterns. Let's say we can describe sedimentary layers thus:
aaabbbbaaabbaaabbbaabbbbbaaaaaabbbbb aaabbbaabbbaabababbbaabbbaaabbabbbab
(Under 1,000 bits if we are using ASCII chars.) Here a and b can be any letter in the alphabet, and the number of consecutive a's and b's can vary to a degree. We then re-state your hypothesis: Sedimentary layers are not the result of natural processes - intelligence is required. And the experiment: If a random noise generator doesn't produce a pattern that matches that shown above, then we have demonstrated that sedimentary layers are not the result of natural processes. The probability of a true random noise generator producing this type of output is vanishingly low, so have we proven the hypothesis - sedimentary layers require intelligence - or is our experiment flawed? I'm not actually disagreeing with the idea that FSCI requires intelligence; I'm arguing that your proposed experimental proof is flawed and you shouldn't use it in its current form. DrBot
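[To put a rough number on how unlikely that is, here is a small Python sketch under an assumption of my own: read the 72-character "sedimentary" string (ignoring the space) as alternating runs of a's and b's, each 1 to 6 symbols long, and ask how often a uniform random stream of lowercase letters produces any string of that form:]

import random
import re
import string

# Alternating runs of a's and b's, each 1-6 long (one formalisation, not DrBot's own).
RUNS = re.compile(r"^(a{1,6}b{1,6})+a{0,6}$")

def random_stream(length=72):
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

trials = 100000
hits = sum(bool(RUNS.match(random_stream())) for _ in range(trials))
print("matches in", trials, "trials:", hits)    # expected: 0

# Even the chance of drawing only a's and b's in 72 draws from a 26-letter
# alphabet is (2/26)^72 ~ 10^-80, before any run structure is required.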
F/N: Onlookers, above Dr Bot inadvertently fell afoul of a distinction made as long ago as 1973 by Orgel in the very first usage of "specified complexity" in the context of origin of cell based life: __________________ >> In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added. Crystals, of course, would by extension include snow crystals, and order enfolds cases such as vortexes, up to and including hurricanes etc. Cf. here.] >> __________________ Just like "[l]umps of granite or random mixtures of polymers", stratigraphic layers due to sedimentation or volcanic deposition etc. are complex but not specified. In particular, the layers that form at a given time and place are a state of affairs tracing to chance plus necessity, but they are an as-is proposition, i.e. there is nothing beyond "this is just what happened here", and some other pattern would do just as well. In the living cell we have very specific, tightly coupled functional organisation and associated information, some of it in digital codes that feed into algorithmic processes such as protein translation. The failure to properly mark such observationally based distinctions is a source of much confusion in this matter. I suggest Dr Bot should read and respond to the ID Foundations 4 post, here. GEM of TKI kairosfocus
Dr Bot: You know that the issue is chance plus necessity, and that in the case of DNA, the issue is the generator of information. You know that natural selection, the second half of the usual expression, culls; it does not create information. So, kindly address the source of digitally coded, functionally specific complex information. GEM of TKI kairosfocus
Dr Bot: Natural processes embrace both mechanical necessity and chance, as I have long since discussed in the first post in the ID foundations series; you may find the flow chart helpful. Information is a highly contingent phenomenon, and the two credible sources of high contingency are chance and intelligence. The point of the FSCI principle is that we routinely observe intelligence giving rise to FSCI, so it is an empirically credible sign of design. We do not observe FSCI originating from chance plus necessity without intelligent direction. And we have good analytical reason why that is so. Can you show a counter-example where, on observation, digitally coded, functionally specific complex information of at least 1,000 bits [dFSCI, the relevant subset] originated by chance plus necessity without intelligent design as a material factor? In particular, have you seen at least 170 AA worth of protein-coding DNA [~ 500 base pairs] originate by chance + necessity without intelligent direction? If not, then only by ruling out design as a possibility can you claim that, however remote the probability, the dFSCI in the living cell - original and for the various body plans - came about by chance and necessity. In short, we are back at the question of the a priori imposition of Lewontinian materialism on origins science as a censoring constraint on inference to best explanation:
To Sagan, as to all but a few other scientists, it is self-evident that the practices of science provide the surest method of putting us in contact with physical reality, and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test . . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [[From: “Billions and Billions of Demons,” NYRB, January 9, 1997.]
What is really self-evident is that materialist censorship imposed as an a priori on science cripples it from being able to address the truth on origins in light of the evidence, since everything must fit in the materialistic straitjacket. Science in a materialistic straitjacket is an ideology, politics, not what science should be:
. . . an unfettered (but ethically and intellectually responsible) progressive pursuit of the truth about our world, based on observation, experiment, analysis, theoretical modelling and informed, reasoned discussion.
GEM of TKI kairosfocus
MathGrrl & DrBot state, 'That isn’t at all analogous to the mechanisms identified by modern evolutionary theory, so it wouldn’t prove much of anything. MathGrrl is right,' Are you guys finally admitting that computers, using evolutionary algorithmns, have failed to violate Dembski and Marks (LCI) Law of Conservation of Information,,, but isn't Dawkins 'Methinks It Is Like A Weasel', or similar goal directed programs thereof, suppose to be slam dunk proof for you guys that computers can do what you now say they can't do? As to this,, 'The model doesn’t reflect reality so it’s not a good basis for an experiment.' So please show, ANYWHERE IN REALITY, especially in life, where functional prescriptive information has been generated. Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors Excerpt: Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC of OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC). http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1208958/ The GS (genetic selection) Principle – David L. Abel – 2009 Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.” http://www.scitopics.com/The_GS_Principle_The_Genetic_Selection_Principle.html http://www.us.net/life/index.htm Dr. Don Johnson explains the difference between Shannon Information and Prescriptive Information, as well as explaining 'the cybernetic cut', in this following Podcast: Programming of Life - Dr. 
Donald Johnson interviewed by Casey Luskin - audio podcast http://www.idthefuture.com/2010/11/programming_of_life.html ,,, and since 'reality' has never been observed generating any information above what was already present in life,,,, Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video http://www.metacafe.com/watch/3995248 “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010 Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain.(that is a net 'fitness gain' within a 'stressed' environment i.e. remove the stress from the environment and the parent strain is always more 'fit') http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/ ,,,and since we know for a fact that information is generated by conscious human intelligence,,, Stephen C. Meyer - The Scientific Basis For Intelligent Design - video http://www.metacafe.com/watch/4104651/ ,,, in fact DrBot and MathGrrl, both of you individually, in your short posts, have just greatly exceeded the information capacity of the entire universe from what we can expect over the entire history of the universe!!!,,, ,,, then why in blue blazes is intelligence excluded as a rational explanation? Shoot you guys as 'naturalists/materialists' cannot even prove that the universe itself, at its most foundational level, is 'natural'; Alain Aspect and Anton Zeilinger by Richard Conn Henry - Physics Professor - John Hopkins University Excerpt: Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the "illusion" of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism (solipsism is the philosophical idea that only one's own mind is sure to exist). (Dr. Henry's referenced experiment and paper - “An experimental test of non-local realism” by S. Gröblacher et. al., Nature 446, 871, April 2007 - “To be or not to be local” by Alain Aspect, Nature 446, 866, April 2007 (personally I feel the word "illusion" was a bit too strong from Dr. Henry to describe material reality and would myself have opted for his saying something a little more subtle like; "material reality is a "secondary reality" that is dependent on the primary reality of God's mind" to exist. Then again I'm not a professor of physics at a major university as Dr. Henry is.) 
http://henry.pha.jhu.edu/aspect.html Professor Henry's bluntness on the implications of quantum mechanics continues here: Quantum Enigma:Physics Encounters Consciousness - Richard Conn Henry - Professor of Physics - John Hopkins University Excerpt: It is more than 80 years since the discovery of quantum mechanics gave us the most fundamental insight ever into our nature: the overturning of the Copernican Revolution, and the restoration of us human beings to centrality in the Universe. And yet, have you ever before read a sentence having meaning similar to that of my preceding sentence? Likely you have not, and the reason you have not is, in my opinion, that physicists are in a state of denial… https://uncommondescent.com/intelligent-design/the-quantum-enigma-of-consciousness-and-the-identity-of-the-designer/ As Professor Henry pointed out, it has been known since the discovery of quantum mechanics itself, early last century, that the universe is indeed 'Mental', as is illustrated by these quotes from Max Planck. "As a man who has devoted his whole life to the most clear headed science, to the study of matter, I can tell you as a result of my research about atoms this much: There is no matter as such. All matter originates and exists only by virtue of a force which brings the particle of an atom to vibration and holds this most minute solar system of the atom together. We must assume behind this force the existence of a conscious and intelligent mind. This mind is the matrix of all matter." Max Planck - The Father Of Quantum Mechanics - Das Wesen der Materie [The Nature of Matter], speech at Florence, Italy (1944)(Of Note: Max Planck was a devout Christian, which is not surprising when you realize practically every, if not every, founder of each major branch of modern science also 'just so happened' to have a deep Christian connection.) http://en.wikiquote.org/wiki/Max_Planck bornagain77
The Zener diode noise writing a functional text test would show, in a very convenient way, that undirected forces of chance and necessity, on the gamut of the cosmos, are capable of originating FSCI.
Just to make the point again - If your random number generator can't produce specific patterns that we actually observe being generated by natural forces then why is it a good test of a hypothesis about natural forces generating other specific patterns that we observe? By the same argument you use we could say that sedimentary rocks aren't the result of natural forces because a random number generator doesn't produce that kind of ordered output. DrBot
You need to look at it from a more realistic chemical perspective. We are not talking about individual bits coming together in the right order, but rather groups of bits (molecules) coming together to form a functional structure. If the 1,000-bit description is proper and complete then it describes how each atom is configured and connected. Natural processes produce complex molecules, so rather than having a random number generator spit out random bits it would, at the very least, be more realistic for it to spit out structured groups of bits (corresponding to molecules that occur naturally). But that still wouldn't capture chemistry in action; for example, even spitting out random molecular bit descriptions wouldn't lead to larger structures, such as molecular chains, being generated. Natural chemical interactions are far from random, so testing a hypothesis with a pure random number generator has no relevance to real chemical systems; it is a red herring.
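[A tiny sketch of the contrast being described, with an invented token alphabet standing in for "molecules" (the tokens below are illustrative assumptions, not real chemistry): drawing structured multi-character units rather than independent random characters.]

import random

TOKENS = ["ATP", "H2O", "CH4", "NH3", "GLY"]    # hypothetical ready-made units
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def char_level(n):
    # Raw noise: every character independent and equiprobable.
    return "".join(random.choice(LETTERS) for _ in range(n))

def token_level(n_tokens):
    # Structured noise: chance chooses among pre-built units; a simple
    # assembly rule (concatenation) stands in for the necessity side.
    return "-".join(random.choice(TOKENS) for _ in range(n_tokens))

print(char_level(24))      # unstructured, e.g. "Q7RX..."
print(token_level(8))      # locally structured but still unspecified, e.g. "ATP-GLY-H2O-..."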
The Zener diode noise writing a functional text test would show, in a very convenient way, that undirected forces of chance and necessity, on the gamut of the cosmos, are capable of originating FSCI.
Incorrect: all it would show is that a pure random noise generator generates random noise. DrBot
MG: The Zener diode noise writing a functional text test would show, in a very convenient way, that undirected forces of chance and necessity, on the gamut of the cosmos, are capable of originating FSCI. That would be a direct refutation of the heart of the design inference, on both the origin of complex specified information and the origin of irreducibly complex systems [as the implication is that the quantum of functional complexity needed to meet the criteria C1 - 5 discussed here will require passing the design inference threshold]. Y'see, the natural selection part of the chance variation plus natural selection expression only culls out less functional or non-functional variants. The proposed source of the variation in the end is chance. So, the decisive issue is the implied claim on the power of chance to create novel digital information in genes and associated regulatory elements in the living cell; and, before that, to explain the origin of the living cell. Thence the significance of the 500 - 1,000 bit limit as a threshold for the capacity of chance on the gamut of our observable cosmos. As my current post here discusses, that threshold is passed by the time we get to a single new protein of 200 AAs. And, of course, given the frequency with which GA or evolutionary-algorithm type programs are advanced to show the capacity of evolutionary mechanisms, it is a little rich to object to making sure the random information to be added to such an algorithm is genuinely random. That is what a zener driving a PRBS type shift register would do. (Such zener circuits are now routinely used as genuine random number generators.) The red herring --> strawman subject-changing rebuttal (nb: probably inadvertent) fails. GEM of TKI kairosfocus
MathGrrl is right, but the point applies even more generally. Natural processes (which we actually observe) produce structure, take for example the way sediments deposit in layers to produce sedimentary rock. A random generator like the one KF described would not produce this kind of organised pattern. If the random noise generator can't even produce patterns that we know occur as a result of the forces of nature then it is unreasonable to use it as an experimental apparatus to test the idea that FSCI can arise from natural forces and laws. The model doesn't reflect reality so it's not a good basis for an experiment. DrBot
kairosfocus writes:
A simple way would be to set up a million PCs with floppy drives and use zener noise sources to spew noise across them. Then, every 30 s or so, check for a readable file that has 1,000 or more bits of functional information.
That isn't at all analogous to the mechanisms identified by modern evolutionary theory, so it wouldn't prove much of anything. This is an excellent example of why it is essential to clearly state one's hypothesis and make testable predictions entailed by that hypothesis that would serve to refute it if they fail. Doing this for ID, and documenting it in a FAQ, would eliminate the "ID is not science" claim immediately. I would think that a number of people here would be interested in doing that work. MathGrrl
MF, MG et al (and onlookers): At 7 above, I showed a specific and direct way in which the GENERAL claim of the design theory could in principle be falsified, quite similarly to how the 2nd law of thermodynamics could also be falsified. (Indeed, such a falsification of the design inference would be a big step to falsifying the 2nd law of thermodynamics.) BA 77 has also provided several more specific cases of potential falsification. Inability to be falsified is not a problem for the design inference, save insofar as it is a convenient strawman tactic used to try to discredit design theory. The insistent putting up of such strawman based talking points while ignoring correction, does not say much about the strength of the evolutionary materialist case on its merits. GEM of TKI kairosfocus
...what is falsifiable (and has been falsified in some cases) is **specific** claims about how biological diversity arose. Is the claim that complex life arose by random mutation (and natural selection) falsifiable? Is the claim that life started by a "random event" falsifiable? In theory, "randomness" can produce anything. Therefore, a "randomness-theory" will never be falsifiable. But will it still be scientific? Dala
#11 #12 Meleagar: "the claim that non-intelligent, non-teleological forces/processes are sufficient to explain biological diversity is necessarily unfalsifiable as well" Absolutely. However, what is falsifiable (and has been falsified in some cases) is **specific** claims about how biological diversity arose. markf
Mathgrrl: It would be hard to find a more valid dichotomy than X and not-X, as I explained in #3. Meleagar
F/N: On the explanatory filter, cf here. kairosfocus
MathGrrl, In addition to kf's post, Please tell me exactly where Abel's Null hypothesis fails to provide an exact point of falsification/verification for both ID and neo-Darwinism: The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010 Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.” http://www.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf Can We Falsify Any Of The Following Null Hypothesis (For Information Generation) 1) Mathematical Logic 2) Algorithmic Optimization 3) Cybernetic Programming 4) Computational Halting 5) Integrated Circuits 6) Organization (e.g. homeostatic optimization far from equilibrium) 7) Material Symbol Systems (e.g. genetics) 8.) Any Goal Oriented bona fide system 9) Language 10) Formal function of any kind 11) Utilitarian work http://mdpi.com/1422-0067/10/1/247/ag MathGrrl, I would expect the demonstrated generation of ANY functional prescriptive information whatsoever to be a exceedingly modest threshold for neo-Darwinism to prove to be true, as well as an extremely modest threshold to prove ID false, especially seeing as the integrated prescriptive information found in life far surpasses what our best programmers can do in computers!! Not to mention the fact that most neo-Darwinian college professors mercilessly ridicule anyone who dares question the almighty power of Random Mutations filtered by Natural Selection to generate such unmatched levels of integrated complexity found within all life on earth. Perhaps Mathgrrl you would like to be the first to falsify Abel's Null Hypothesis??? bornagain77
Longer version: So much confusion over such a simple concept- determining design in a natural world. This is all about answering one of science's three basic questions- "How did it come to be this way?". Intelligent Design is based on three premises and the inference that follows (DeWolf et al., Darwinism, Design and Public Education, pg. 92):
1) High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.
2) Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.
3) Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.
4) Therefore, intelligent design constitutes the best explanation for the origin of information and irreducible complexity in biological systems.
The criterion for inferring design in biology is, as Michael J. Behe, Professor of Biochemistry at Lehigh University, puts it in his book Darwin's Black Box:
"Our ability to be confident of the design of the cilium or intracellular transport rests on the same principles to be confident of the design of anything: the ordering of separate components to achieve an identifiable function that depends sharply on the components.”
He goes on to say:
” Might there be some as-yet-undiscovered natural process that would explain biochemical complexity? No one would be foolish enough to categorically deny the possibility. Nonetheless, we can say that if there is such a process, no one has a clue how it would work. Further, it would go against all human experience, like postulating that a natural process might explain computers.”
That said, we have the explanatory filter to help us determine the cause of the effect we are investigating. On to the Explanatory Filter: the (design) explanatory filter is a standard operating procedure used for detecting basic origins of cause. It, or some reasonable facsimile, is used when a dead body turns up or a fire is reported. With the dead body we want to determine whether it was a natural death, an accident, a suicide or a homicide (what caused the death?), and with the fire the investigator wants to know how it started: arson, negligence, accident or natural causes, i.e. lightning, lava, meteorite, etc. Only through investigation can those not present hope to know about it. When investigating/researching/studying an object/event/structure, we need to know one of three things in order to determine how it happened:
1. Did it have to happen?
2. Did it happen by accident?
3. Did an intelligent agent cause it to happen?
A fire is investigated before an arson determination is made. First we must note this clarification by Wm. Dembski:
”When the Explanatory Filter fails to detect design in a thing, can we be sure no intelligent cause underlies it? The answer to this question is No. For determining that something is not designed, the Explanatory Filter is not a reliable criterion. False negatives are a problem for the Explanatory Filter. This problem of false negatives, however, is endemic to detecting intelligent causes. One difficulty is that intelligent causes can mimic law and chance, thereby rendering their actions indistinguishable from these unintelligent causes. It takes an intelligent cause to know an intelligent cause, but if we don't know enough, we'll miss it.”
This is why further investigation is always a good thing. Initial inferences can either be confirmed or falsified by further research. Intelligent causes always entail intent. Natural causes never do. (Page 13 of No Free Lunch shows the EF flowchart. It can also be found on page 37 of The Design Inference, page 182 of Signs of Intelligence: Understanding Intelligent Design, and page 88 of The Design Revolution.)

The flowchart for the EF is set up so that there are 3 decision nodes, each node capable only of a Yes or No decision. Like all filters, it is eliminative. It eliminates via consideration/examination:

START
  |
CONTINGENCY? --No--> Necessity (regularity/law)
  | yes
COMPLEXITY? --No--> Chance
  | yes
SPECIFICATION? --No--> Chance
  | yes
DESIGN

The event/object/phenomenon in question is what we start with. Then we ask, in sequence, those 3 questions from above.

1st: Did this event/phenomenon/object have to happen? IOW, is this the result of the laws of nature, regularity, or some other pre-determining (natural) factors? If it is, then we don't infer design with what we have. If it isn't, then we ask about the likelihood of it coming about by some chance/coincidence. Chance events do happen all the time, and absent some blatant design marker, we must take into account the number of factors required to bring it about. The more factors, the more complex it is. The more parts involved, the more complex it is.

Getting to the final decision node, where we separate that which is merely complex from intentional design (an event/object that has a small probability of occurring by chance and fits a specified pattern), means we have looked into the possibility of X having occurred by other means. Might we have dismissed/eliminated some too soon? In the realm of "anything is possible," possibly. However, not only is it impractical to attempt every possibility, but by doing so we would no longer have a design inference: by eliminating every possible other cause, design would be a given. What we are looking for is a reasonable inference, not proof. IOW, we only have to eliminate every possible scenario if we want absolute proof. We already understand that people who ask that of the EF are not interested in science.

It took our current understanding to reach that final decision node, and it takes our current understanding to make the inference. Future knowledge will either confirm or falsify the inference. The research does not, and was never meant to, stop at the last node. Just knowing something was the result of intentional design tells us nothing more about it. IOW, design detection is the first step in a two-step process: detection and understanding of the design. Just because the answer is 42*, that doesn't tell us what was on the left-hand side of the equals sign.
"Thus, Behe concludes on the basis of our knowledge of present cause-and-effect relationships (in accord with the standard uniformitarian method employed in the historical sciences) that the molecular machines and complex systems we observe in cells can be best explained as the result of an intelligent cause. In brief, molecular motors appear designed because they were designed” Pg. 72 of Darwinism, Design and Public Education
IOW, the design inference is all about our knowledge of cause-and-effect relationships. We do not infer that every death is a homicide, nor every rock an artifact. Parsimony: no need to add entities, and the design inference is all about requirements, as in: is agency involvement required or not? Therefore, to refute any given design inference, all one has to do is demonstrate that nature, operating freely, can produce it. (A rough sketch of the filter's decision logic follows this comment.) (*Hitchhiker's Guide to the Galaxy reference) Joseph
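For readers who think better in pseudocode: below is a minimal, hypothetical sketch of the three-node filter described in the comment above. The control flow mirrors the flowchart; the predicate functions are placeholders supplied here purely for illustration, since all of the actual scientific work (probability bounds, specification tests) would have to live inside them.

```python
# A minimal sketch of the Explanatory Filter's decision logic, as described above.
# The three predicate functions are hypothetical placeholders: deciding whether an
# event is contingent, complex, or specified is where the real work happens.

def explanatory_filter(event, is_contingent, is_complex, is_specified):
    """Return a provisional causal category for `event`.

    Walks the three Yes/No decision nodes in order: contingency,
    complexity, specification. Further research can always revise
    the inference (false negatives are acknowledged above).
    """
    if not is_contingent(event):       # Node 1: did it have to happen?
        return "necessity (law/regularity)"
    if not is_complex(event):          # Node 2: simple enough for chance
        return "chance"
    if not is_specified(event):        # Node 3: complex but not specified
        return "chance"
    return "design"                    # complex AND specified


# Toy usage with trivial placeholder predicates (illustration only):
if __name__ == "__main__":
    verdict = explanatory_filter(
        "some observed structure",
        is_contingent=lambda e: True,
        is_complex=lambda e: True,
        is_specified=lambda e: False,
    )
    print(verdict)  # -> "chance"
```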
How to falsify Intelligent Design: Intelligent Design is the claim that some designing intelligence is required to explain some effects observed in the universe (and the universe itself) -> REQUIRED. Therefore, to falsify any given design inference, all one has to do is demonstrate that nature, operating freely, can produce the effect in question. This is built into the explanatory filter, which mandates that chance and necessity be given first shot at demonstrating their powers. If chance and/or necessity can explain it, then the design node is never even considered. Joseph
MG: Please, look at 7 above. GEM of TKI kairosfocus
Meleagar writes:
If ID is unfalsifiable, then the claim that non-intelligent, non-teleological forces/processes are sufficient to explain biological diversity is necessarily unfalsifiable as well, because both claims would necessarily be determined by the same metric.
That is a false dichotomy. Falsifiability has a very precise meaning and mechanism. If you want to demonstrate that ID is falsifiable, you need to clearly state the ID hypothesis and make one or more testable predictions based on entailments of that hypothesis. Falsifying the modern synthesis, while it might get you a trip to Stockholm, would do nothing to support ID. Can you state the ID hypothesis and one or more testable predictions it entails? MathGrrl
It should be noted that the same holds true for any claim of ID: it would require a falsifying metric - which it has. It is interesting that mainstream evolutionary biologists claim that RM & NS are sufficient, and that this is a valid scientific assertion of fact, and then claim that there is no valid scientific ID metric - even though the former can only be validated by the latter. Meleagar
MarkF, If ID is unfalsifiable, then the claim that non-intelligent, non-teleological forces/processes are sufficient to explain biological diversity is necessarily unfalsifiable as well, because both claims would necessarily be determined by the same metric. Meleagar
OT kairos, this 'short' video may interest you very much: Quantum Information In DNA & Protein Folding - video http://www.metacafe.com/watch/5936605/ bornagain77
markf, It seems kairos has directly challenged you as to your claim that ID is not falsifiable!!! Is your silence, as well as your complete lack of evidence to back up your assertion, a concession that he is completely correct??? bornagain77
PS: MF, thanks for a softball pitch to use in the (DV, "soon") upcoming ID Foundations 5, on FSCI as a reliable and empirically testable sign of design. Aci has also inadvertently given me an excellent specific case on the use of design inferences in medical situations, the Glasgow consciousness ["coma"] test and scale. kairosfocus
MF: Re: "my claim is not that ID is false. Just that it is not falsifiable."

To falsify the design inference, simply produce a case where, in your observation [or that of a competent observer], it is reliably known that chance plus mechanical necessity produces at least 1,000 bits of functionally specific complex information, as could be done by an implementation of the infinite monkeys situation. (Cf this recent UD thread (and onward threads in the ID Foundations series) on that subject.) A simple way would be to set up a million PCs with floppy drives and use zener noise sources to spew noise across them. Then, every 30 s or so, check for a readable file that has 1,000 or more bits of functional information. That is 125 bytes, not a lot. If you use ASCII plus a checksum bit per character, that is 125 letters of coherent text that functions linguistically or algorithmically or on some data structure; 125 letters is about 20 words of English. (A toy version of this check is sketched after this comment.) This has been put on the table explicitly, many, many times. Just, you habitually ignore it.

Or, perhaps you meant that on thermodynamics considerations you do not expect such a test to actually falsify the design inference, i.e. you do not expect to see FSCI arise from chance plus necessity without intelligent input. For the islands of function will be deeply isolated: for 1,000 bits, the config space is 1.07*10^301, and our whole observed cosmos, working at 1 Planck-time per state [about 10^20 times faster than a nuclear particle interaction on the strong force], for the thermodynamic lifespan of the observed cosmos [about 50 million times the usually estimated time since the singularity], for the ~10^80 atoms of the cosmos, would not be able to scan as much as 1 in 10^150th of the config space. That is, analytically the challenge is a supertask beyond the credible resources of our observed cosmos. So, not only do we routinely observe FSCI coming from intelligence and only from intelligence, but we have the sort of analysis just done to show why that is so.

So, maybe you agree with that; that such a task will be predictably futile. The experiment to see chance plus necessity alone giving rise to FSCI is all but sure to fail. In that case, you agree with the inference to design on seeing FSCI; you just cannot bring yourself to acknowledge it. GEM of TKI kairosfocus
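As an illustration only (not kairosfocus's actual proposed rig), here is a hypothetical sketch of the kind of check described above, with the bar lowered drastically: instead of asking whether 125 random bytes form *functional* text, it merely asks whether a random 125-byte blob happens to be composed entirely of printable ASCII. All names and parameters are assumptions made for the sketch.

```python
import os
import string

# Hypothetical toy version of the "noise onto floppies" test described above.
# Draw random 125-byte blobs (~1,000 bits) and apply a test far weaker than
# "functionally specific": is every byte printable ASCII?
TRIALS = 100_000                      # arbitrary illustrative sample size
BLOB_BYTES = 125                      # 125 bytes ~ 1,000 bits, the cited threshold
PRINTABLE = set(string.printable.encode())

hits = 0
for _ in range(TRIALS):
    blob = os.urandom(BLOB_BYTES)     # stand-in for the zener noise source
    if all(b in PRINTABLE for b in blob):
        hits += 1

# Roughly (100/256)^125 ~ 10^-51 per draw, so even this weak test essentially
# never succeeds by chance -- and it is nowhere near a test for meaningful text.
print("blobs that were all printable ASCII:", hits)

# The configuration space for 1,000 bits, as cited above:
print("2**1000 = %.2e possible bit-strings" % 2**1000)   # ~1.07e+301
```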
markf,

Michael Behe on Falsifying Intelligent Design - video
http://www.youtube.com/watch?v=N8jXXJN4o_A

The Law of Physicodynamic Insufficiency (Null Hypothesis for Prescriptive Information Generation) - Dr David L. Abel - November 2010
Excerpt: "If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.",,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: "No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone."
http://www.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html

Michael Behe's Quarterly Review of Biology Paper Critiques Richard Lenski's E. Coli Evolution Experiments - December 2010
Excerpt: After reviewing the results of Lenski's research, Behe concludes that the observed adaptive mutations all entail either loss or modification--but not gain--of Functional Coding ElemenTs (FCTs)
http://www.evolutionnews.org/2010/12/michael_behes_quarterly_review041221.html

Evolution vs. Genetic Entropy - video
http://www.metacafe.com/watch/4028086

The foundational rule of Genetic Entropy for explaining the diversification of all life on earth, a rule which draws its foundation in science from the twin pillars of the Second Law of Thermodynamics and the Law of Conservation of Information (Dembski, Marks, Abel), can be stated something like this: "All beneficial adaptations away from a parent species for a sub-species, which increase fitness to a particular environment, will always come at a loss of the optimal functional information that was originally created in the parent species' genome." bornagain77
#3 and #4 I don't know about other anti-IDists - but my claim is not that ID is false, just that it is not falsifiable. On the other hand, claims about specific designer(s) with known powers and motives are falsifiable and, in all cases that I know of, clearly false. markf
Meleagar, That is a really interesting thought. It goes back to the whole "ID is unfalsifiable, and look, I've falsified it" canard. Collin
On the topic of bias against rational thought ... It occurred to me that there is a formal logical refutation of the mainstream claim that unintelligent, non-teleological processes are a sufficient explanation for biological processes.

First, denote intelligence/teleology as X, and thus non-intelligence, non-teleology as not-X. Any positive determination of not-X necessarily provides a metric for determining X. Thus, if not-X can be identified by any metric, that same metric can identify X. If there is no means to identify X, there certainly cannot be a means to identify not-X. Thus, an assertion that X cannot be identified is equivalent to an assertion that not-X cannot be identified, and an assertion that X has not been identified is equivalent to the assertion that not-X has not been identified. One statement cannot be made without also making the corresponding statement.

If mainstream science asserts (as it does) that it has identified an instance of not-X, this necessarily means that it has identified both X and a metric for determining X. Mainstream science must have a valid methodology/metric for determining ID, or else it has no means by which to assert that any design is not generated by intelligence/teleology. Since it does not have such a metric, the claim that such processes are not-X is necessarily false. Since that claim is necessarily false, evolutionary theory fails. (A rough formalization of the first step follows this comment.) Meleagar
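Purely as an illustration (the notation is supplied here, not Meleagar's), the complementation step of the argument can be put in symbols, under the assumption that the "metric" in question is a total yes/no decision procedure over the systems being classified:

```latex
% Sketch, assuming the metric M is a total yes/no decision procedure on a
% domain S of observed systems, with X the designed/teleological subset.
\[
  M : S \to \{0,1\}, \qquad M(s) = 1 \iff s \in S \setminus X .
\]
\[
  \text{Define } M'(s) := 1 - M(s). \quad \text{Then } M'(s) = 1 \iff s \in X .
\]
% So any metric that positively classifies "not designed" yields, by
% complementation, a metric that positively classifies "designed".
```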
Ms. O'Leary, This should interest you; Ruth and Orit Interview - Christian Students at Georgia Tech - video http://www.youtube.com/watch?v=mm7-6Pw_vk0 bornagain77
That is really interesting. When I got my degree in psychology, I did feel like there was a "club" and that if you think a certain way, then you fit in better. Collin
