
An Eye Into The Materialist Assault On Life’s Origins


Synopsis Of The Second Chapter Of Signature In The Cell by Stephen Meyer

ISBN: 9780061894206; ISBN10: 0061894206; HarperOne

When the 19th-century chemist Friedrich Wöhler synthesized urea in the lab using simple chemistry, he set in motion the ball that would ultimately knock down the then-pervasive ‘vitalistic’ view of biology. Life’s chemistry, rather than being bound by immaterial ‘vital forces’, could indeed be artificially made. While Charles Darwin offered little insight on how life originated, several key scientists would later jump on Wöhler’s ‘Eureka’-style discovery through public proclamations of their own ‘origin of life’ theories. The ensuing materialist view was espoused by the likes of Ernst Haeckel and Rudolf Virchow, who built their own theoretical suppositions on Wöhler’s triumph. Meyer summed up the logic of the day:

“If organic matter could be formed in the laboratory by combining two inorganic chemical compounds then perhaps organic matter could have formed the same way in nature in the distant past” (p.40).

Darwin’s theory generated the much-needed fodder to ‘extend’ evolution backward to the origin of life. It was believed that “chemicals could ‘morph’ into cells, just as one species could ‘morph’ into another” (p.43). Appealing to the apparent simplicity of the cell, late 19th-century biologists assured the scientific establishment that they had a firm grasp of the ‘facts’: cells were, in their eyes, nothing more than balls of protoplasmic soup. Haeckel and the British scientist Thomas Huxley were the ones who set the protoplasmic theory in full swing. While the details expounded by each man differed somewhat, the underlying tone was the same: the essence of life was simple and thereby easily attainable through a basic set of chemical reactions.

Things changed in the 1890s. With the discovery of cellular enzymes, the complexity of the cell’s inner workings became all too apparent, and a new theory that no longer relied on an overly simplistic protoplasm-style foundation, albeit one still bounded by materialism, had to be devised. Several decades later, finding himself in the throes of a Marxist socio-political upheaval in his own country, Russian biologist Aleksandr Oparin became the man for the task.

Oparin developed a neat scheme of inter-related processes involving the extrusion of heavy metals from the earth’s core and the accumulation of atmospheric reactive gases, all of which, he claimed, could eventually lead to the making of life’s building blocks: the amino acids. He extended his scenario further, appealing to Darwinian natural selection as a way through which functional proteins could progressively come into existence. But the ‘tour de force’ in Oparin’s outline came in the shape of coacervates: small, fat-containing spheroids which, Oparin proposed, might model the formation of the first ‘protocell’.

Oparin’s neat scheme would in the 1940s and 1950s provide the impetus for a host of prebiotic synthesis experiments, most famous of which was that of Harold Urey and Stanley Miller, who used a spark-discharge apparatus to make three amino acids: glycine, alpha-alanine and beta-alanine. With little more than a few gases (ammonia, methane and hydrogen), water, a closed container and an electrical spark, Urey and Miller had seemingly provided the missing link for an evolutionary chain of events that now extended as far back as the dawn of life. And yet, as Meyer concludes, the information revolution that followed the elucidation of the structure of DNA would eventually shake the underlying materialistic bedrock.

Meyer’s historical overview of the key events that shaped origin-of-life biology is extremely readable and well illustrated.  Both the style and the content of his discourse keep the reader focused on the ID thread of reasoning that he gradually develops throughout his book.

Comments
Adel, Don't bother. Some people in this forum present meaningful, informed challenges to my point of view. Sometimes I have to look past their tone because their content is constructive. Others offer "What’s the alternative? An unnatural explanation?" ScottAndrews
ScottAndrews, You seem unwilling or unable to answer simple questions about your beliefs. That's fine for now. I'll catch up with you later. Regards, Adel Adel DiBagno
Nakashima-san: There is a distinct threshold of function, allowing identification and measure. Absent a replicator based on a code, storage, a reader and an implementer, there is no active self-replication. No active self-replication, no life. No self-replicating life, no possibility of hill-climbing by blind variation and selective replication on differential success. Thus, until you have a viable life form with a viable body plan (including the von Neumann replicator set) you have no basis for hypothesised or observed evolving. Thus, we see a sharp distinction between that which can replicate and that which cannot. Functionality is an island, and non-function in self-replication is the surrounding sea. GEM of TKI kairosfocus
KF-san, That’s odd: as has been repeatedly pointed out, we observe cell based life as a self-replicating system, which works through the von Neumann type self-replicator architecture: blueprint, reader, implementer allowing self replication. Here I have to respond similarly to my response to Mr BillB's use of the term autocatalytic sets. Such a thing as a von Neumann replicator might be the ultimate goal you are aimed at, but it isn't the way you measure progress. You might as well say that the Empire State Building is your measure of buildings. This is either a quite unwieldy way of measuring small buildings, or it is a boolean variable, true for one building and false for every other. Either way, not very helpful as a measure. Measures such as weight, volume, and power consumption are more useful measures for buildings than "is it the Empire State Building?" However, this kind of boolean measure is useful for your argument. A GA that only gets one bit (literally) of feedback from its fitness function is reduced to random search. Nakashima
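Nakashima's closing claim is easy to check directly. Below is a minimal sketch (not part of the thread; the 32-bit target, the hill-climber standing in for a GA, and all names are illustrative assumptions) contrasting a graded fitness score with one-bit, all-or-nothing feedback:

```python
import random

random.seed(0)  # reproducible run
N = 32
TARGET = [random.randint(0, 1) for _ in range(N)]  # an illustrative "island"

def graded_fitness(genome):
    # Counts matching bits: a slope the search can climb.
    return sum(g == t for g, t in zip(genome, TARGET))

def boolean_fitness(genome):
    # One bit of feedback: exact match or nothing.
    return 1 if genome == TARGET else 0

def hill_climb(fitness, steps=5000):
    genome = [random.randint(0, 1) for _ in range(N)]
    best = fitness(genome)
    for _ in range(steps):
        mutant = genome.copy()
        mutant[random.randrange(N)] ^= 1   # flip one random bit
        score = fitness(mutant)
        if score >= best:                  # boolean feedback accepts everything
            genome, best = mutant, score   # while best == 0: a blind random walk
    return best

print("graded feedback :", hill_climb(graded_fitness))   # climbs to 32 (full match)
print("boolean feedback:", hill_climb(boolean_fitness))  # almost surely still 0
```

With the graded score, every accepted flip moves the genome uphill and the target is typically found within a few hundred flips; with the boolean score there is no gradient to follow, which is the sense in which the search is "reduced to random search."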
3] What an independent origins science education course could look like FYI, BB, such a fresh approach should include an advanced placement -cum- college level survey course that covers: [1] an overview of the strengths and weaknesses of science, including the difference between origins and operational science: projecting to a remote, unobserved past and seeking to explain it is not at all epistemically equivalent to matters of direct observation in the present. [2] a survey of cosmology including stars, galaxies, cosmos, planet system formation and timelines [including terrestrial dating techniques], with relevant fine-tuning to operating point issues. [3] A survey of OOL, including the relevant thermodynamics and information theory considerations in light of the information system at the heart of cell based life. [4] A similar survey of origin of body plan level biodiversity, with a look at typical icons over the past 150 years. [5] A survey on origin of conscious, reasoning, en-conscienced mind, and raising issues tied to that. [6] Addressing origins science in society based on the context of the ideological war that now threatens to fatally break the heart of our civilisation, inviting the kind of strategic defeats that Byzantium suffered in the C7 and France suffered in 1940. --> Such a critical survey course on a controversial aspect of science is very feasible. --> And, as an INDEPENDENT course, such will not be vulnerable to the control of the sort of nasty censorship and hostage-taking tactics that now so often obtain at the hands of the NCSE, ACLU, NAS etc. --> In an Internet dominated era, such can be offered online quite easily, thank you. --> All that is required is to actually construct and present such a course, which can be done based on the above outline. 4] Adel, 396: What’s the alternative? An unnatural explanation? Close but no cigar. From the days of Plato on, the ART-ificial -- thus, intelligent -- has been a classic distinction to the "natural." Design theory gives us a way to draw that distinction, based on reliable signs of intelligence. 5] Nakashima-san, 398: simply imagined, asserted without proof. First, science is not about PROOF, but instead empirically warranted provisional inference to best explanation. For that, the bottom line for this thread still obtains: [1] FSCI is real and quantifiable, [2] it is routinely produced by intelligence in our observation, [3] it is ONLY observed to be produced by intelligence, [4] on search space grounds, it is unlikely that non-intelligent forces such as chance and/or necessity will be able to achieve FSCI. In short, the conclusion is inductively strong. 6] How can you assert the existence of islands of function when you cannot articulate a clear idea of what function means in a pre-biotic environment, and on what kind of entity function is being measured? That's odd: as has been repeatedly pointed out, we observe cell based life as a self-replicating system, which works through the von Neumann type self-replicator architecture: blueprint, reader, implementer allowing self replication. That constitutes an irreducibly complex set, which necessarily is an island of function. No code and/or no storage, and/or no reader and/or no effector, and observed self-replication capacity vanishes. So, we have a threshold of required functionality based on a logically based framework.
In addition, we do observe existing life forms, and see that the DNA storage ranges upwards of 100's of kilobases, i.e. independent life [not parasitic on preexisting life for key nutrients etc] starts out at about 600 - 1,000 kilobases. Parasitics start out at 100,000 or so bases, or about 200 k bits. All of these are well beyond the practical limitations of the search resources of the observed cosmos; for which 1,000 bits is a very reasonable threshold. GEM of TKI kairosfocus
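The arithmetic behind these figures can be verified in a couple of lines; a minimal sketch (2 bits per base, since DNA draws on a 4-letter alphabet; the helper name is mine):

```python
from math import log10

def states_power_of_ten(bits):
    # n bits of storage span 2**n configurations, i.e. 10**(n * log10(2)).
    return bits * log10(2)

print(f"100,000 bases = {100_000 * 2:,} bits")                           # 200,000 bits
print(f"1,000 bits   -> ~10^{states_power_of_ten(1_000):.0f} states")    # ~10^301
print(f"200,000 bits -> ~10^{states_power_of_ten(200_000):.0f} states")  # ~10^60206
```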
Following up: 1] Adel (and Nakashima-san) Thanks, NYRB, okay. Pity they just give a preview for a 1997 article! 2] BB, 395: your mind, once closed, remains forever shut. Your opinions are fixed and you believe that repeating your assertions will somehow make them fact. This is yet another turnabout false accusation; and on the matter of the Weasel case, an outright misrepresentation of the truth. (Onlookers, observe how this is (a) yet another rabbit trail led out to a strawman soaked in ad hominems [to be ignited . . . ], and (b) is false to the actual outcome on the merits, which I have in my always linked here. As the just-linked demonstrates, BB et al cannot be trusted to give a true and fair view of the case on the merits. No prizes for guessing why there is much interest in stating a twisted summary and little interest in actually addressing my point on the merits. See why I have spoken about the attritional implications of repeated rhetorical wave attacks, not only for UD [which is being snowed under], but for our civilisation as a whole which is having the civility at the heart of democracy eaten out by scorched earth rhetorical stratagems, at 338 above?) Now, too, a glance at either the always linked or the weak argument correctives of which I am a co-author, will show that my positions are taken on evidence and on inference to best current explanation in light of such evidence. ["Proof" in any strong sense is not in the power of science.] And, it remains the case that intelligence is the ONLY observed cause of FSCI such as will have to be explained for the case of OOL. Indeed, it is the ROUTINE cause of FSCI. So the inference on best explanation from FSCI as sign to intelligence as cause is inductively strong. So strong that the usual resorts are not to providing solid counter-examples (several suggestions having been shot down in flames in this thread) but to censorship via Lewontinian a priori materialism, often under the disguise of "methodological naturalism." 3] I await your invasion of Poland, although from what I have heard about their education minister Liljana Colic’s attempts to ban evolution from schools it may already have begun. Unworthy, ad hominem laced rhetoric. And, what part of the following, from 338, constitutes "banning evolution from schools"? Namely:
g –> I therefore suggest that it is time to deploy not just a set of weak argument correctives and a brief glossary but at minimum highlighted links to adequate tutorials across the range of ID studies, constituting an ID 101 with actual FAQ’s addressing not just rhetorical dismissals and distortions, but need for basic information. [A good start to that would be a critical review of the Wikipedia page on ID.] h –> This should be augmented by links to major ID papers and works on the net, including where relevant Google Books online. i –> I also advocate for a fresh start on origins science education, that will break the evolutionary materialist monopoly and prepare a new generation for breaking out of the Lewontinian version of Plato’s cave with the shadow shows based on so many misleading icons. [A wiki based set of tutorials covering underlying issues, cosmology, origin of life, origin of biodiversity, origin of mind and origins science in society would I think do a lot of good. Not least by simply breaking the monopoly out there.]
[ . . . ] kairosfocus
KF-san, The topology of islands of function in a sea of nonfiction is not “imaginary”, in short — but it is inconvenient for those who would wish that contra what Shapiro and Orgel have counselled after a lifetime in the field, organised complexity assembles itself conveniently out of small prebiotic molecules. Not imaginary perhaps, but simply imagined, asserted without proof. How can you assert the existence of islands of function when you cannot articulate a clear idea of what function means in a pre-biotic environment, and on what kind of entity function is being measured? Nakashima
Adel: I find your comments pointless. You criticize what I've said by restating and confirming it. What am I to say? Your last question was willfully ignorant. ScottAndrews
Thanks to BillB for coming back and eliciting the following from ScottAndrews:
None of those narratives have reached the level of probability where we might ask, “How do we know for sure?”
Such is the nature of historical science. Especially when we're talking about the very, very remote past. In any case, science never knows anything for sure. I thought that was common knowledge.
I’m not denigrating the research.
I think you are. Read your own words. As someone said, "You aren’t conceding anything by admitting it."
It’s one thing to look for a natural explanation - go for it - but another to assume a priori that it’s waiting to be found.
What's the alternative? An unnatural explanation? Adel DiBagno
KF There seems little point in me continuing this discussion. It is clear from this thread and others, most memorably the WEASEL comedy, that your mind, once closed, remains forever shut. Your opinions are fixed and you believe that repeating your assertions will somehow make them fact. Nakashima's comment at 388 is very pertinent: you lack all the qualities you demand in others and in the opinions of others. I realized I misinterpreted your story about the Germans, for which I apologize; you are not accusing us of being the Nazis, you were equating science with the allies of WW1 and ID with the Germans, and proposing a new strategy based on that history. I await your invasion of Poland, although from what I have heard about their education minister Liljana Colic's attempts to ban evolution from schools it may already have begun. It all makes sense now! BillB
Attn: kairosfocus I've goofed again, see: Nakashima @379
I’m responsible, perhaps, for KF-san citing NYRB. I did not find a reference on the NYT site, but I did find a NYRB source so perhaps it is Dr J Bloom that needs to correct something.
Of course you're right. Shame on Bloom for the misattribution, but thanks to him for making it available gratis. Adel DiBagno
sea of non-function! kairosfocus
Adel 378: Thanks. You are right. (I'll have to fix my offhand quoting.) BTW, the left-off part of the paragraph is where Mr Lewontin reveals his lack of familiarity with the subject matter. For miracles to stand out as signposts pointing beyond the everyday world they REQUIRE a well-ordered predictable backdrop. Similarly, for morality to have any validity, this demands a predictable world in which actions are expected to have consequences. In short, a world created by a God who intervenes miraculously in a moral context will be one in which science is possible; and it will not be a chaos but a cosmos. That is a part of why founding era modern scientists saw themselves as thinking God's thoughts after him. GEM of TKI PS: Briefly, re BB @ 381 on 338: turnabout rhetorical stratagems, and illustrative of the issue raised on rhetoric in 338. (And, note I was not attacking persons but warning of the implications of tactics at work and where UD needs to go to deal with this reality.) PPS: And, BB should kindly show us how we observe von Neumann self-replicators using blueprints and readers and implementers forming themselves out of molecular noise in realistic model prebiotic soups today. The topology of islands of function in a sea of nonfiction is not "imaginary", in short -- but it is inconvenient for those who would wish that, contra what Shapiro and Orgel have counselled after a lifetime in the field, organised complexity assembles itself conveniently out of small prebiotic molecules. kairosfocus
"while the untested hypothesis which leads to it is called “likely.” And, they have volumes to prove it. Upright BiPed
Nakashima: The "aura of fact" appears in the very first sentences of the introduction.
The size of the first informative molecules was strongly constrained by the accuracy in replication. In an environment where proofreading mechanisms were initially absent, replicating biomolecules had to be necessarily short. This represented a strong limitation in the amount of genetic information that could be stored and reliably transmitted to subsequent generations, as well as to the functional capabilities of the evolving molecules.
Do the authors actually know anything about the size of the first informative molecules or their functional capabilities? These are purely hypothetical statements worded as factual ones. Next,
That process likely led to the appearance of molecular quasispecies (Eigen 1971), large and heterogeneous populations of replicating molecules that initiated Darwinian evolution.
Here, the end result, "large and heterogeneous populations of replicating molecules that initiated Darwinian evolution," is presumed to be factual, while the untested hypothesis which leads to it is called "likely." ScottAndrews
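For context, the constraint the quoted introduction is leaning on is usually formalized as Eigen's (1971) error threshold: a master sequence of length L with per-base copying fidelity q and selective superiority sigma persists only if q^L * sigma > 1. A minimal sketch (the fidelity and advantage values are illustrative assumptions, not figures from the paper):

```python
from math import log

def error_threshold(q, sigma):
    # Eigen (1971): survival requires q**L * sigma > 1, so
    # L < ln(sigma) / -ln(q), roughly ln(sigma) / (1 - q) for q near 1.
    return log(sigma) / (1.0 - q)

# Assumed values: ~1% copying error per base (no proofreading) and a
# 20-fold replication advantage for the fittest sequence.
print(f"max sustainable length ~ {error_threshold(q=0.99, sigma=20):.0f} bases")
# -> ~300 bases: why the paper says early replicators "had to be necessarily short"
```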
If anyone has read the Hazen book, then they will know that the OOL effort is essentially nowhere. Hazen is an honest researcher and appears to be a believer that the OOL could be due to a natural process, but essentially he admits that there is nothing on the horizon that gets anywhere. He surveys what has been done, so it is unlikely there is any effort of substance left unexamined in his book. Now, it is a couple of years since his book, but I have not seen anything to make the science leap forward. The construction of a couple of nucleotides in the lab is one thing, but all that amounts to is the construction of a couple of Tinkertoys when what is needed is a 747, or maybe just a private jet, to get to the first cell. jerry
KF-san, In short the paper in question is a mass of galloping hypotheses that soon take on the aura of “fact.” Since you've been quoting the paper freely, why don't you quote the part where it takes on the aura of fact? Or are you making something up? If there is any aura of fact that is unwarranted, it is your repeated and repeated assertions without any attempt at humility, without any awareness or admission that your statements are hypothetical, are speculative. It is indeed telling that you can find such humility and circumspection in the paper under discussion but not in your repeated and repeated copy and post. Nakashima
BillB: There are a number of narratives, all of them hypothetical, and all covering only certain aspects of life's origin. None of those narratives have reached the level of probability where we might ask, "How do we know for sure?" If we were trying to figure out where computers, rather than life, originated, right now we're at the point where we've just discovered how to get silicon from sand. I'm not denigrating the research. But it's important to be honest about what we do and don't know and how very far we are from understanding how life did or could have come about. It's one thing to look for a natural explanation - go for it - but another to assume a priori that it's waiting to be found. ScottAndrews
ScottAndrews: I would have thought God of the gaps applies anywhere there is a gap, of any size. My impression is that OOL research has a number of narratives, some of which are based on direct experimental research rather than speculation. There are also gaps and the inevitable problem of 'how do you know for sure', which also applies to almost everything else that has not been directly observed. BillB
Joseph: In 1902, observation would have suggested that only birds can fly. Fortunately some people thought that the ability to fly might be explainable by mechanism rather than metaphysics. BillB
BillB: You too, with the books and the volumes? Forget them! They were an illustration to help one visualize what gaps look like. Perhaps posts in a fence would have been a better illustration. The context of my point has long passed, making it irrelevant. It was (in response to another comment) explaining that one cannot commit a "god of the gaps" fallacy with regard to OOL. "God of the gaps" applies when we have a nearly completed body of knowledge with a few pieces missing. In this case, every piece is missing, or hypothetical. I'm not making a broad argument against the merits of OOL research. Rather, I'm stating the obvious, that it has not provided any substantiated narrative or details between which there could be gaps. You've said so yourself. I don't think this should be controversial. ScottAndrews
BillB, Observations say that only life begets life. And all current scientific data does not help your position. Also we do not know about the processes that formed our planet. The best we can do is guess given a world-view. BTW our knowledge of evolution is limited. We don't even know what makes an organism what it is. Joseph
ScottAndrews: Our knowledge of OOL is limited to what we can discover about the process that helped form our planet and our understanding of the chemistry of life and the behaviour of complex chemical systems. Whilst we can never 'know' what actually happened we can infer plenty and certainly understand what was required, if those requirements are scarce and if our planet ever met those requirements.
Whatever your argument with ID is, you’ve just confessed to having no specific alternative.
...
No one is talking about two competing shelves full of books.
If you are not claiming that ID scientists can know what happened then we are talking about competing inferences. The OOL shelf has plenty of volumes, whereas the ID shelf seems to have a single book that we are not allowed to look at. BillB
KF, firstly, after going away for a few days, I see in your post at 338 that you have predictably abandoned discourse and resorted to personal attack by, as far as I can tell, complaining that we are all a bunch of Nazis and that we are engaged in uncivil discourse for refusing to accept your blind assertions. Never mind, I forgive you. Moving on, I see you are still making claims about your imagined 'islands of functionality in a sea of non function' but haven't really made any attempts to understand Nakashima's points about how you measure pre-biotic function. Your evidence for these islands seems to be purely that if you apply mass randomisation to a working mechanism it will probably break. This is quite true but it is no reason to assume an island topology for the configuration space. You need to remember the other observed and tested evidence about how gradual modifications to mechanisms can shift functions, as well as break them, so when this fact is factored in you end up with topologies that consist of interconnected islands and continents - certainly the vast majority of configuration space is inanimate (although, as has been pointed out, gas giants can serve functions to living systems) but there is no evidence to suggest that all configurations of matter that work to self-replicate or form part of a self-replicating system are isolated islands - this is just wishful thinking on your part and is not backed up by any evidence.
Thus, the attempt to rhetorically extend the island of function by appealing to a sloping ocean floor fails.
This is a rhetorical dismissal. The point we are making is that you are the source of function in this context; it is a metric you have imposed on configuration space. If you remove it you still have a configuration space, and you can put demarcation lines around interesting groups of configurations like replicators, but you are then able to see that a pre-biotic topology exists. It is not one determined by natural selection, but it is not all noise either; there are rules and processes that may drive some systems towards the interesting demarcation lines - this is what a lot of OOL research is investigating.
some scientists have presumed that nature has an innate tendency to produce life’s building blocks preferentially
They hypothesised this and are testing these hypotheses. The research I mentioned earlier and couldn't find references for has now been found for me; it is by Higgs & Pudritz and is called "A thermodynamic basis for prebiotic amino acid synthesis and the nature of the first genetic code". A PDF is available here, although it appears to be a pre-review copy, with the final version in print in Astrobiology (2009). It is also worth looking up the work that is being done on the maximum entropy production principle. The point here, and in relation to ScottAndrews' points, is that there are many papers in the journals, which are on the shelf, relating to many aspects of pre-biotic life and complex chemistry.
Until an adequate account of origin of information is accepted, the “gap” between the two will remain unbridged.
The problem is that your definition of information requires an intelligent source. Mechanisms for generating the information in biology are observed, but when we build models and use them to verify the process you complain that, because the models were designed (to replicate nature), the information has been put in by the designers. You have rendered the notion of 'information' impossible to test, because anyone who tests it somehow inserts it into the results. Looking at your later posts, KF, I see that you take the Manrubia-Briones paper and add lots of your own text to help highlight how this investigative science is done, how hypotheses are developed from existing research which provide avenues for new research to test them. I'm not quite sure why you think this is wrong? BillB
Adel, I'm afraid you're missing the point by a wide margin. No one is talking about two competing shelves full of books. My extremely simple point is this. Our "knowledge" of life's origin is limited to what materials life is composed of and what the finished product looks like. That's it. Period. It doesn't matter how many books you reference or what experiments were performed. No one knows what happened. No one claims to know what happened. There are no gaps in our understanding of life's origin because there is no understanding. Am I wrong? Is there some step in the chain from inanimate matter to replicating life where we know what happened, something not preceded by "perhaps," "possibly," or "could have?" I thought this was common knowledge. You aren't conceding anything by admitting it. ScottAndrews
DiBagno-san, I'm responsible, perhaps, for KF-san citing NYRB. I did not find a reference on the NYT site, but I did find a NYRB source so perhaps it is Dr J Bloom that needs to correct something. Google is your friend! ;) Nakashima
kairosfocus, Please correct your attribution of Lewontin's article. It was not published in the New York Review of Books, but in the NY Times Book Review: http://www.drjbloom.com/Public%20files/Lewontin_Review.htm Further, I am taking the liberty of quoting the last sentence of the paragraph that you have cited:
To appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen.
Adel DiBagno
ScottAndrews, Upright BiPed, I'm sorry that my comment was offensive. That was not my intention. To try to redeem myself in your eyes, here are some books on the OOL shelf: The Emergence of Life: From Chemical Origins to Synthetic Biology, by Pier Luigi Luisi (2006) Genesis: The Scientific Quest for Life's Origins, by Robert Hazen (2005) Origins: A Skeptic's Guide to the Creation of Life on Earth, by Robert Shapiro (1987) I've read the Hazen and Shapiro books (Shapiro is the person whose recent criticisms of the RNA hypothesis kairosfocus has quoted). I haven't read the Luisi book (it's expensive), but it has been highly recommended. Now, what are the books on the other shelf or shelves? Adel DiBagno
PPS: the above should also show the key equivocation involved in asserting that since we observe the creation of organic molecules in nature, we have a major plank in the bridge from the speculative prelife "organic soup" to life. Organic chemistry is about carbon-chain molecules. Life in cells is about INFORMATIONAL macromolecules working together to implement a von Neumann replicator. Until an adequate account of origin of information is accepted, the "gap" between the two will remain unbridged. And, the ONLY known, empirically observed source of functionally specific, complex information is intelligence. But, through the dominance of evolutionary materialism, that little fact is too often blocked from speaking at the outset of investigation or discussion; rendering utterly implausible speculations the only players allowed on the field. As US National Academy of Sciences member Lewontin confessed:
It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [NY Review of Books, 1997]
That should tell us a lot about what is going on. kairosfocus
In his recent critique of the RNA world in Sci Am, Shapiro aptly observed: ______________ >> RNA's building blocks, nucleotides, are complex substances as organic molecules go. They each contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern. Many alternative ways exist for making those connections, yielding thousands of plausible nucleotides that could readily join in place of the standard ones but that are not represented in RNA. That number is itself dwarfed by the hundreds of thousands to millions of stable organic molecules of similar size that are not nucleotides . . . . The RNA nucleotides are familiar to chemists because of their abundance in life and their resulting commercial availability. In a form of molecular vitalism, some scientists have presumed that nature has an innate tendency to produce life's building blocks preferentially, rather than the hordes of other molecules that can also be derived from the rules of organic chemistry. This idea drew inspiration from . . . Stanley Miller. He applied a spark discharge to a mixture of simple gases that were then thought to represent the atmosphere of the early Earth. ["My" NB: Subsequent research has sharply undercut this idea, a point that is unfortunately not accurately reflected in Sci Am's caption on a picture of the Miller-Urey apparatus, which in part misleadingly reads, over six years after Jonathan Wells' Icons of Evolution was published: The famous Miller-Urey experiment showed how inanimate nature could have produced amino acids in Earth's primordial atmosphere . . .] Two amino acids of the set of 20 used to construct proteins were formed in significant quantities, with others from that set present in small amounts . . . more than 80 different amino acids . . . have been identified as components of the Murchison meteorite, which fell in Australia in 1969 . . . By extrapolation of these results, some writers have presumed that all of life's building blocks could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case. A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . I have observed a similar pattern in the results of many spark discharge experiments . . . . [N]o nucleotides of any kind have been reported as products of spark discharge experiments or in studies of meteorites, nor have the smaller units (nucleosides) that contain a sugar and base but lack the phosphate. To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . .
The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck. >> __________________ That should suffice to underscore just how ill-founded the above speculations and optimistic reports on RNA world experimental support are. The bookshelf is quite literally empty of adequate and relevant EMPIRICAL evidence, though the one above it on speculation on "scenarios" positively groans with papers and books. And so, today's rhetorical wave attack goes down in flames. GEM of TKI PS: of course, as Orgel highlighted in his posthumously published response, the metabolism first OOL model in its own way is just as speculative and lacking in empirical warrant. Each of these founders on the key fact observed in real life: INFORMATION is carefully stored and maintained then used to control the step by step process of making the nanomachines of life and organising them into functional units arranged in useful configurations. Information of an order of complexity that makes its spontaneous origin through lucky noise so incredible that speculations on short informational molecules are used to paper over the gapingly empty bookshelf. That is, there is no empirically robust model or theory of origin of life in absence of intelligent origination of the required bio-information for cell based life as we see it. So blatant is this, that attempts are now being made through speculations and equivocations on terms like replication, mutation and selection, to extend the range of the Darwinist model of evolution through chance variation and natural selection to the pre-life world. But, knowledge is not equal to such speculations; especially where plainly relevant alternatives are being filtered out before the known facts on the origin of information can speak, through Lewontinian censorship via a priori materialism. kairosfocus
Onlookers: The thread has now wended its way back to the original focus, with the above link to the Manrubia-Briones paper by Nakashima-san. Now, in that paper, we may see: -------------- >> The size of the first informative molecules was strongly constrained by the accuracy in replication. [Note the questions of source and means of use of information being begged, at the outset] In an environment where proofreading mechanisms were initially absent [absence of error detection and correction would very likely destroy functionality in rapid order], replicating biomolecules [replication is assumed not demonstrated, i.e. we have the quiet assumption here of either passive reproduction through autocatalysis on templates etc, or the von Neumann architecture for active self-replication] had to be necessarily short. This represented a strong limitation in the amount of genetic information that could be stored and reliably transmitted to subsequent generations [dubiously assumes the existence of information and means of applying it], as well as to the functional capabilities of the evolving molecules. That process likely led to the appearance of molecular quasispecies (Eigen 1971) [i.e. imagines a quasi-life based on Dawkinsian or similar replicators], large and heterogeneous populations of replicating molecules that initiated Darwinian evolution. [thus ducks the issues of what life is and how it works per OBSERVATION (i.e. cells), and introduces "function" through paper chemistry at best] One of the most popular scenarios for molecular evolution [i.e. effectively wholly speculative] prior to the appearance of cellular life is that of the RNA world [Cf below on Shapiro's remarks on that world from his recent article] (Gilbert 1986; Joyce 2002), where small populations of replicating RNA molecules would simultaneously encode information and perform catalytic activity. [smuggles in catalytic activity into the ideas of function and information] Mutation (inherent to the replication process) and recombination should have promoted the appearance of variants. Selection, defined through the characteristics of the environment where evolution proceeded [fallacies of mutation and selection before credible life forms, multiplied by galloping hypotheses where speculation ever so soon becomes assumed "reality"], would have favored the replication of certain molecular types. Different microenvironments (characterized by their physicochemical conditions, including ionic strength, pH, metal concentration, or temperature) would then induce different selection pressures, and eventually a spectrum of independent populations of functional replicators might have been simultaneously available [Dawkins' replicator arrives on the scene]. In a favorable situation, it is possible that each molecular quasispecies selected in that way specialized in performing a single, simple function, as a step prior to the emergence of genetic or metabolic reaction networks. [the hypotheses gallop on] This scenario has received steadily increasing experimental support in the last two decades. [hurling the elephant: claiming a weight of evidence that is not justified in details -- note absence of even a link to a survey article here] Although there is no known natural ribozyme that catalyzes the template-directed polymerization of nucleotides, in vitro evolution experiments have shown that RNA-dependent RNA polymerization can be performed without the help of proteins [cf Shapiro on this!] (Johnston et al. 2001; Joyce 2004; Orgel 2004).
However, the details of how such a process could have taken place in the RNA world are as yet unknown (Joyce 2002; Joyce and Orgel 2006). Advances in experimental research indicate that the appearance of complex functions in RNA molecules could have been linked to the independent selection of molecular motifs or domains rather than to the de novo selection of complex molecules [speculations gallop on] (Knight and Yarus 2003; Joyce 2004; Wang and Unrau 2005) . . . ---------- In short the paper in question is a mass of galloping hypotheses that soon take on the aura of "fact." Shapiro's strictures on the RNA world are apt . . . [ . . . ] kairosfocus
Adel, you disappoint us.
"What books, other than Scripture, does the other shelf contain?"
This comment is so cheap and stupid. (Clive ban me if you wish) I remember GP and you having a long discussion several months ago, and you never cheapened the topic by this level of utter stupidity. It is sad that you chose to do so now. Adel, if you'd like to discuss ID then I am happy to oblige. I know for a fact you are smart enough to know the difference in the quality of observations that can be leveled in both directions. I also know that you can see the triviality of your last post. If you simply want to give up on reason and make this level of comment in its place, then please consider keeping your cheap shots to yourself - where they should be. Upright BiPed
"functional complexity" I wonder whatever that could be. jerry
Adel, If the actual substance of the discussion does not interest you, you could always go do something else. ScottAndrews
What books, other than Scripture, does the other shelf contain? Adel DiBagno
Adel, It was more of an illustration than an analogy I suppose, but it was a solid one. If you have a shelf full of books and one is missing, that's a gap. If a few are missing, you have gaps. By comparison, if there are no books on the shelf, we wouldn't call that a gap. ScottAndrews
Scott Andrews, Sometimes, analogies are helpful. Sometimes they are not. Adel DiBagno
My point was simply that it is not possible to commit a "god of the gaps" fallacy when dealing with OOL. For that to be possible, we must fill the void with knowledge and presumably leave some gaps. (Perhaps you were equating my 'books' analogy to knowledge. I was talking about literal books on a shelf.) We have some experimental knowledge, but it may or may not have any connection to the actual events we seek to explain. ScottAndrews
Not amazing, just the distinction between 'how it was' and 'how it could have been'. Nakashima
I wonder what you thought my point was. It must have been really, really amazing! :) ScottAndrews
You give me too much credit. ScottAndrews
OK, your point was less sophisticated than I thought. Nakashima
And I should add, I'm being rather generous by saying that we have hypotheses. I won't elaborate though, to avoid distracting from the main point. ScottAndrews
Mr. Nakashima: The research explicitly describes a "scenario." It further states:
The results that we present in this section have been obtained from computer simulations in which the evolution and selection of a population of RNA sequences are numerically implemented.
Of course, there's nothing wrong with exploring scenarios through simulations. But my question is, does this paper describe what happened billions of years ago? Does it even claim to? Unless the answer is yes, sorry, no books on the shelf. My claim is that we can't explain or describe how life originated - we have only hypotheses. Presenting a hypothesis does not counter that claim. ScottAndrews
Mr ScottAndrews, Modular evolution and increase of functional complexity in replicating RNA molecules This is the kind of research which I think justifies saying there are a few books on the shelf! ;) Nakashima
Nakashima:
We are past just showing that the components can be synthesized, and there are experiments looking at how they are assembled into larger ensembles.
We know that the components can be synthesized. That's a bit like melting sand and saying we've discovered the natural origins of computers. It's wishful extrapolation. There are experiments to see how those components could be assembled. Obviously it follows that we don't know. Our supposed knowledge of OOL boils down to what the pieces would be and what the finished product looks like. Everything else, where the components came from and how they came together, is hypothetical. That's why I maintain that there are no "gaps" in our knowledge of the OOL. It's just one enormous gaping (redundant) gap. ScottAndrews
Mr ScottAndrews, Our understanding of the chemical makeup of life may not be an empty shelf, but our understanding of how life’s components came to be arranged is. I'm not sure I agree. We obviously don't know the historical reality, but I think you are asking a more sophisticated question. We are past just showing that the components can be synthesized, and there are experiments looking at how they are assembled into larger ensembles. Nakashima
Nakashima @353: Our understanding of the chemical makeup of life may not be an empty shelf, but our understanding of how life's components came to be arranged is. I'm not aware that we know anything at all. We can perform experiments that produce organic molecules, but do we have any idea whether life's actual origins resemble those experiments? In my 'books on the shelf' analogy, I equate the books to knowledge, not to hypotheses. Our understanding of life's origins is a void, not a body of knowledge with a few gaps. ScottAndrews
Mr Hayden, I agree that it is a powerful argument relative to any explanation of OOL. The question is the relative humility and acceptance of that. Scientists who are investigating OOL know how much they don't know. I don't see the same attitude in KF-san. Nakashima
Nakashima at 348, In response to my 336 - you simply should have left it alone. I never used the word "program". So replacing the word I actually did use, we get: "The revised claim would be that any programming could falsify, not that any programming is a falsification." ...which is idle gibberish (and as trivial as your original post). You are defending a post that was so contorted it only needed no one to notice in order to be forgotten. - - - - - - - - KF at 339, thank you. Upright BiPed
Nakashima, ------"But again, simply repeating an assertion doesn’t make it true. How do we measure function in the pre-biotic world? Upon what objects are we measuring function? Until you can answer these questions you can’t say anything about the reality of islands of function. Appealing to life today is not helpful." This is really just as powerful an argument against evolution. Clive Hayden
Mr ScottAndrews, I agree with KF-san that the abiotic creation of organic molecules is well demonstrated. There are gaps in our knowledge of how so many of these molecules connect, stay connected, become energised, etc. But I would not call that an empty shelf. Nakashima
SA: Excellent point. Wells' discussion on Miller-Urey here should be a good point of departure. gotta go GEM of TKI kairosfocus
PS: Constitute or contain or imply. DNA molecules and RAM chips or DVD disks as such are not the information itself but encode and store it. PC LCD screens store and display rapidly update-able information. Voltage variation in a chip is not information itself but a signal that bears it. Information is the abstract entity that is so stored or implied or used. (And that is part of the trouble that materialists have with information: it is abstract but real enough to function as the foundation of our technological world. BTW, energy is also pretty abstract . . . we infer it through operational means [that which can be converted into work, i.e. forces moving points of application across space] but we do not ever see energy itself, just energy carriers and expressers.) kairosfocus
BillB: As much as I hate to post anything in the middle of this excellent thread, in response to
...you acknowledge that you don’t know anything about the evolution of early life or proteins. Whatever your argument with ID is, you’ve just confessed to having no specific alternative.
You write:
Alternative to what? You seem to have just proposed a god of the gaps.
When one suggests that life originated via chemical accidents but cannot offer a single specific as to how that happened, that is not a gap. A book missing from a shelf is a gap. An empty shelf is not a gap. ScottAndrews
Jerry: A computer screen and associated display unit constitute a functionally specific complex information based organisation of components that work together at an operating point, to display information. [Think about how exactly the pixels on your screen are organised and controlled, whether LCD or CRT or Plasma or whatever] I am saying that: [1] the display itself expresses FSCI, [2] a screenful of organised imagery and text on it constitutes FSCI -- that is what I spoke of earlier [and made a calculation on 800 x 600 24-bit pixels for], and [3] the info fed in to form an organised image and/or text will itself as a rule constitute FSCI, once it exceeds 500 - 1,000 bits. GEM of TKI kairosfocus
Mr Biped, but your revised sentence says that the hypotheses can be falsified if any successful algorithmic programming can be found. Very nice, Nakashima. Thank you, but no. The revised claim would be that any programming could falsify, no holds barred on technique, not that any program is itself a falsification. Nakashima
As I close off for the day: Evoloops. Hurling the elephant: no details, just a name as if that proves something. On looking up: >> As genes circulate in the loop counterclockwise, copies of them are made [Just how, kindly sir?] and sent [again, just how, and how will these copies just happen to program functional protein chains etc?] toward the tip of the arm. The arm grows through repetition of straight growth and left turning. [How is such "folding" initiated and controlled? Is not right turning just as probable inherently, in a presumed prebiotic soup, and how are the "right" monomers going to be available in step-by-step sequence to carry out the info storage and onward metabolic function etc?] When the tip reaches its own root after three left turns, they bond together to form a new offspring loop. [an algorithm writes itself out of lucky noise -- or is that by an intelligent programmer's input] Then the connection between parent and offspring disappears [[That is, we have termination here, a nontrivial issue in algorithm design]]. In such a way, the loop reproduces its offspring, which has a structure identical to its parent's in the right area, in 151 updates. >> Such loops of course are implemented through complex and specifically functional algorithms. D/RNA chains in such loops would have to be self-replicating, requiring a handy "soup" that goes well above investigator interference levels of implausibility relative to realistic prebiotic worlds. [Start with homochirality vs racemic energetic equivalence of different handedness issues.] In short, the image makes a nice looking but again predictably misleading icon. Next rhetorical wave please. GEM of TKI kairosfocus
"Similarly, the screen that displays the output contains FSCI" kairosfocus, I am not sure I agree with you or do not know what you mean here. The screen is a manufactured item but I am not sure where the FSCI is in the screen. It certainly is in the computer program in two separate ways. FSCI is in an entity if that entity is information, complex, and specifies a function in something independent. So I am not sure how a screen fits that. The computer program is FSCI in two completely separate ways. It is FSCI just as any written communication that makes sense in terms of a grammar system or vocabulary makes sense. It is also FSCI in the sense that any computer program that is functional is FSCI. FSCI is in an entity but it requires two other independent entities for it to be so. The simplest illustration is DNA which requires the machinery of the transcription and translation process and the object of the process or protein. Without the latter two the DNA might well as be junk as is probably some DNA. It is only when the the second entity (transcription and translation process) turns something into a third entity that has function that the FSCI arises in the DNA. Similarly a random set of letters probably does not contain FSCI because the vocabulary and grammar system will not produce any meaning in our minds other that this is a useless pieces of letters. So any computer program is FSCI since by definition it presupposes this intermediary machinery that reacts to the binary machine code and produces a screen display or a printed document. The screen display or printed document is not FSCI necessarily. There are some intermediary steps in the computer program and each may be producing some entities with FSCI. For example the binary code itself could be a useless series of 0's and 1's or it could be mediated by the hardware to produce something functional. A good computer programmer should be able to delineate the steps. Now the code of a computer program could also be looked at in terms of any communication. The communication contains FSCI, but the mechanism for mediating that communication, vocabulary and grammar, are the means that turn that communication into another entity, the understanding in our minds. Obviously the example of language is a complicated process but the same three independent entities are there. A. the element with FSCI: DNA, lines of computer code, a sentence in a language or the same thing spoken or put into signs. B. The set of rules that transforms or mediates the original entity with FSCI such as the machinery of transcription and translation; the process that turns the computer program into digital information and then the hardware that produces the output and this is very analogous to transcription and translation; and for language the vocabulary and grammar developed over the years and which we teach our children. C. The entity that is produces by these processes, a protein, a printed page or screen view, an understanding in our minds. So FSCI is as simple as ABC but all three are needed but the FSCI only resides in the A but needs B and C to be FSCI. And B or C may or may not contain FSCI. jerry
Nakashima-san: Again, photographs are an example of observed, functionally specific complexity. Photography is, notoriously, a field of technology. That is, on EMPIRICAL EVIDENCE [tut, tut . . . ] -- and a successful counter example of images produced naturally (e.g. of Mt Rushmore) would be important -- photographs are designed. The differential image on a leaf idea is of course an extension of putting a bit of black paper on a leaf then exposing it to sunlight and "developing" the latent image by using iodine or whatever. [The leaf provides a handy grid that is light sensitive; the image comes from a designed process. Remember we are dealing with photos not "film." (Besides, the leaf itself is riddled with FSCI and raises the issues Abel just noted, as I excerpted. Can you adequately explain in technical detail the origin of chloroplasts with their own DNA chains, on chance + necessity from simple initial components that do not embed FSCI?)] GEM of TKI kairosfocus
KF-san, If you are interested in self-replicators, please look into Sayama-sensei's Evoloops. You can get a link from the cellular automata Wiki page. A great example of intelligent design creating a universe perfectly tuned for life! But again, simply repeating an assertion doesn't make it true. How do we measure function in the pre-biotic world? Upon what objects are we measuring function? Until you can answer these questions you can't say anything about the reality of islands of function. Appealing to life today is not helpful. Nakashima
Hot off the Presses: Abel strikes again: http://www.bioscience.org/2009/v14/af/3426/3426.pdf Tellingly relevant excerpt:
____________
All known organisms are prescribed and largely controlled by information (1-22). Most biological prescriptive information presents as linear digital programming (23-26). Living organisms arise only from computational halting. Fittest living organisms cannot be favored until they are first computed. Von Neumann, Turing and Wiener all got their computer design and engineering ideas from the linear digital genetic programming employed by life itself (27-32). All known life is cybernetic (33-35). Regulatory proteins, microRNAs and most epigenetic factors are digitally prescribed (3). MicroRNAs can serve as master regulators of gene expression (36-38). One microRNA can control multiple genes. One gene can be controlled by multiple microRNAs. Nucleotides function as physical symbol vehicles in a material symbol system (MSS) (39-41). Each selection of a nucleotide corresponds to pushing a quaternary (four-way) switch knob in one of four possible directions. Formal logic gates must be set that will only later determine folding and binding function through minimum-free-energy sinks. The most perplexing problem for evolutionary biology is to provide a natural mechanism for setting functional configurable switch-settings at the genetic level. These logic gates must be locked in open or closed positions with strong covalent bonds prior to folding of biopolymers. At the point of polymerization of informational positive single strands, no selectable three-dimensional shape exists for the environment to favor. In addition, the environment does not select for isolated function. The environment only selects for fittest already-living organisms. The challenge of finding a natural mechanism for linear digital programming extends from primordial genetics into the much larger realm of semantics and semiotics in general. Says Barham: "The main challenge for information science is to naturalize the semantic content of information. This can only be achieved in the context of a naturalized teleology (by 'teleology' is meant the coherence and the coordination of the physical forces which constitute the living state)." (42) The alternative term "teleonomy" has been used to attribute to natural process "the appearance of teleology" (43-45). Either way, the bottom line of such phenomena is selection for higher function at the logic gate programming level.
______________
GEM of TKI kairosfocus
Kf-san, No, I haven't seen a photograph produced by unaided chance and necessity, unless of course humans are the result of chance and necessity, nature operating freely. In that case, I suppose that all photographs could fall into that category. (But as an aside, I was looking at a book of alternate photographic techniques recently that did show examples of using large leaves as the 'film', taking advantage of the differential degradation of sugars in shadowed portions vs lighted portions.) Again, you are basing your response on the least important aspect. Rhetorically, you should be trying to address the most important point, not the least. This is not an objection that you raise against your own examples. Allowing this objection simply means that all experiment is invalid and/or FSCI is useless. Nakashima
New business: The updated point 6 on the FSCI simple metric, in light of the waves of objections above:
_______________
6 --> . . . we can construct a rule of thumb functionally specific bit metric for FSCI:

a] Let contingency [C] be defined as 1/0 by comparison with a suitable exemplar, e.g. a tossed die that on similar tosses may come up in any one of six states: 1/2/3/4/5/6. That is, diverse configurations of the component parts or of outcomes under similar initial circumstances must be credibly possible.

b] Let specificity [S] be identified as 1/0 through specific functionality [FS] or by compressibility of description of the specific information [KS] or similar means that identify specific target zones in the wider configuration space. [Often we speak of this as "islands of function" in "a sea of non-function." (That is, if moderate application of random noise altering the bit patterns will beyond a certain point destroy function [notoriously common for engineered systems that require working parts mutually co-adapted at an operating point, and also for software and even text in a recognisable language] or move it out of the target zone, then the topology is similar to that of islands in a sea.)]

c] Let degree of complexity [B] be defined by the quantity of bits to store the relevant information, with 500 - 1,000 bits serving as the threshold for "probably" to "morally certainly" sufficiently complex to meet the FSCI/CSI threshold.

d] Define the vector {C, S, B} based on the above [as we would take distance travelled and time required, D and t: {D, t}], and take the element product C*S*B [as we would take the element ratio D/t to get speed].

e] Now we identify the simple FSCI metric, X: C*S*B = X, the required FSCI/CSI-metric in [functionally] specified bits. Once we are beyond 500 - 1,000 functionally specific bits, we are comfortably beyond a threshold of sufficiently complex and specific functionality that the search resources of the observed universe would by far and away most likely be fruitlessly exhausted on the sea of non-functional states if a random walk based search (or generally equivalent process) were used to try to get to shores of function on islands of such complex, specific function. [WHY: For, at 1,000 bits, the 10^150 states scanned by the observed universe acting as search engine would be comparable to: marking one of the 10^80 atoms of the universe for just 10^-43 seconds out of 10^25 seconds of available time, then taking a spacecraft capable of time travel and at random going anywhere and "any-when" in the observed universe, reaching out, grabbing just one atom -- and voila, that atom is the marked atom at just the instant it is marked. In short, the "search" resources are so vastly inadequate relative to the available configuration space for just 1,000 bits of information storage capacity that debates on "uniform probability distributions" etc. are moot: the whole observed universe acting as a search engine could not put up a credible search of such a configuration space. And, observed life credibly starts with DNA storage in the 100's of kilobits of information storage. (100 k bits of information storage specifies a config space of order ~ 9.99 * 10^30,102, which vastly dwarfs the ~ 1.07 * 10^301 states specified by 1,000 bits.)]

7 --> For instance, for the 800 * 600 pixel PC screen, C = 1, S = 1, B = 11.52 * 10^6, so C*S*B = 11.52 * 10^6 FS bits. This is well beyond the threshold. 
[Notice that if the bits were not contingent or were not specific, then X = 0 automatically. Similarly, if B < 500, the metric would indicate the bits as functionally or compressibly etc. specified, but without enough bits to be comfortably beyond the UPB threshold. Of course, the DNA strands of observed life forms start at about 200,000 FS bits, and that for forms that depend on others for crucial nutrients. 600,000 - 10^6 FS bits is a reported reasonable estimate for a minimally complex independent life form.]
______________
One hopes that, along with my first comment for today, this will provide adequate clarifying and corrective information. The relevance to the fact that complex, specifically functional information is now a recognised, clear, fundamental constituent of life should be plain. And, onward, in light of the von Neumann conditions for self-replication -- an active self-replicator has to embed blueprint, blueprint reader and blueprint executor to self-replicate -- the relevance of the resulting need to get to islands of function in a sea of non-function to the OOL challenge should be even more massively evident. Abel's remarks as excerpted by Upright simply underscore the consequent predicted and observed empirical realities. GEM of TKI kairosfocus
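The rule of thumb above reduces to a few lines of code. A minimal sketch (assuming 24 bits per pixel, which is what yields the 11.52 * 10^6 figure; the C and S judgments are observer-supplied inputs, not computed):

using System;

class FscBitMetric
{
    // X = C*S*B, in functionally specific bits. C and S are 1/0 judgments
    // supplied by the observer; only the arithmetic is automated here.
    static double Metric(int c, int s, double bits)
    {
        return c * s * bits;
    }

    static void Main()
    {
        double screenBits = 800.0 * 600.0 * 24.0; // 11,520,000 bits (assumed 24 bpp)
        double x = Metric(1, 1, screenBits);      // C = 1, S = 1 per the comment

        Console.WriteLine("X = " + x + " FS bits");
        if (x >= 1000)
            Console.WriteLine("Beyond the 1,000-bit threshold used above.");
        else if (x > 0)
            Console.WriteLine("Specified and contingent, but below threshold.");
        else
            Console.WriteLine("X = 0: not contingent or not specific.");
    }
}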
PPPS: Rob, re 324: the C program's text contains FSCI. Similarly, the screen that displays the output contains FSCI. The C program need not be run on any given system, and a given screen need not show only the program; i.e. the cases are not mutually dependent. We have two coincident instances of functionally specific, complex information: [a] a program that carries out a particular function, and [b] a screen that shows content fed to it by an operating system. That you as a highly informed person should seek to conflate the two suggests a manufactured strawman, not a serious question or objection. kairosfocus
PPS: Upright: Excellent rebuttal. kairosfocus
3] Ideological warfare by rhetorical attrition: In the 1st World War on the Western Front, the Allied powers were forever seeking a breakthrough, and doing so by throwing waves of men at well organised German defenses backed by Krupp's version of the Maxim machine gun and quick-firing artillery. Eventually, after suffering millions of casualties, and after the French army mutinied in 1917, over the next year the Germans were simply exhausted. (Of course, the cost to the "victorious" allies was in the long term so ruinous that they were not able to buck up in time to stand up to Hitler's thirst for a rematch.) Something fairly similar to this is going on in this blog:

a --> There are relatively few well informed ID commenters capable of rapidly rebutting endlessly repeated versions of "standard" distractions, strawmannish distortions and denigrations led out to rhetorical dismissals. (I won't do more just now than point out that such tactics are fundamentally corrosive to the civility and good sense that must underlie a sustainable free civilisation.)

b --> But, if one is willing to deploy endless rhetorical waves of such red herrings led out to strawmen soaked in ad hominems and ignited to cloud, poison and polarise the atmosphere [never acknowledge the force of corrective counter-arguments, just launch yet another rhetorical wave . . . ], then eventually one can wear down those who would have to rebut, not through the merits but by sheer weight of numbers and rhetoric.

c --> At that time, the APPEARANCE of victory on the merits can be put up, and in our day perception is often more important in the short term than reality. (Long term, there is a terrible price to be paid; and believe you me, the historical exemplar of the impact of the first wave of Islamist expansionism on disaffected Christian populations in Syria and Egypt resentful of Byzantine domination and oppression is a sobering warning. There was a REASON why these areas fell to Islam so fast and so easily! Sadly, out of the frying pan into the fire . . . )

d --> Beyond a certain point, if the rhetorical waves are allowed to pound away, UD is going to be overwhelmed by endless waves of long since answered fallacies [just cf. the Weak Argument Correctives], reaching a point where any original post will at once be swarmed under by a wave of misleading arguments. And naive onlookers will be primed to simply go to the objections to see the "answer."

e --> Indeed, there are several recent threads at UD that have already been "overwhelmed."

f --> In short, there is need for a very different counter-rhetorical strategy for UD and other similar sites. (The old one of simply banning those who insist on inane or too obviously uncivil remarks led to accusations of "censorship." We cannot revert to that; though a few exemplary cases do merit such banning.)

g --> I therefore suggest that it is time to deploy not just a set of weak argument correctives and a brief glossary but, at minimum, highlighted links to adequate tutorials across the range of ID studies, constituting an ID 101 with actual FAQs addressing not just rhetorical dismissals and distortions, but the need for basic information. [A good start to that would be a critical review of the Wikipedia page on ID.]

h --> This should be augmented by links to major ID papers and works on the net, including where relevant Google Books online. 
i --> I also advocate a fresh start on origins science education, one that will break the evolutionary materialist monopoly and prepare a new generation for breaking out of the Lewontinian version of Plato's cave, with its shadow shows based on so many misleading icons. [A wiki-based set of tutorials covering underlying issues, cosmology, origin of life, origin of biodiversity, origin of mind and origins science in society would, I think, do a lot of good. Not least by simply breaking the monopoly out there.]

j --> I believe this will also help redress the manpower imbalance at UD and elsewhere.
+++++++++++++
GEM of TKI

PS: Again, Nakashima-san: have you ever seen a photograph produced by unaided chance + necessity? Why or why not? kairosfocus
Footnotes: First, thanks Upright for taking time to bring Abel's excellent work to bear. His work, in significant part, is in effect a technical-level version of the ideas descriptively summarised under the term "functionally specific, complex information." Observe, onlookers: again, there is no effective response on substance. (And, FYI Rob, a pen does not explicitly contain FSCI; I addressed first that it is a complex, functionally co-ordinated object with a core that exhibits irreducible complexity -- as is common for engineered systems. Indeed, Darwinian-type "spontaneous, ratcheting, hill-climbing" processes are deeply challenged to get to such IC systems, directly or indirectly. Such a core will have in it a cluster of decisions to form components and integrate them at an operating point. In turn, that can be turned into a chain of decisions which are expressible in binary sequential form, i.e. we can assess that there will be IMPLICIT functional sequence complexity -- or, at a simpler level, FSCI -- associated with such an object. But an item can be irreducibly complex without being beyond the 1,000-bit threshold, and such irreducible complexity is already strong evidence of design.) Next, there is one point of unfinished business I wish to address before doing anything else: 1] On islands of functionality:
[BB, 314:] I would say though, and to mirror what others have said, that the whole notion of seas and islands is poor when factoring in a pre-biotic universe. At best you should consider the config space to include the ocean floor and simply place sea level as a slightly arbitrary demarcation point between complex chemistry and self replicating systems. The ’search’ that occurs in a universe is simply a mass collection of shifting configurations, some of which may be very close to these ’shores of function’.
1 --> The implicitly conceded point in this objection is that once we DO have islands of function in a sea of non-function, we then have a challenge to first get to shores of function before we can properly use hill-climbing ratchets to get to peaks of function.

2 --> That is why there is an attempt to extend the slope below the "non-functionality" sea level. (There was also an attempt to dismiss the fact that the search conducted by the atoms of the cosmos acting across its lifespan specifies an upper bound on search. But 10^80 or so atoms changing state every 10^-43 seconds and doing so for 10^25 seconds is a reasonable upper bound on cosmic search: 10^150 "moves.")

3 --> Now, as I have repeatedly pointed out above [most recently by showing how noise would corrupt a photo of Mt Rushmore, but once we are in a snowstorm, further noise will simply move us around in the sea of non-images], such an islands-of-function configuration space topology is COMMON and quite reasonable to expect with complex functional systems.

4 --> When it comes to observed life, we first see that it is an actively self-replicating entity -- not like a crystal that grows passively by interatomic or intermolecular forces.

5 --> As von Neumann pointed out in the 1940s (this is a proto-ID prediction!), such a system will require not only general operating machines, but a stored blueprint and a self-assembling factory that reads and uses it to copy itself; rendering such an entity that incorporates self-replication even more complex than one that does not. [Indeed, the much derided William Paley reflected on that, speaking about a self-assembling watch.]

6 --> Thus, such an actively self-replicating entity is necessarily based on functionally specific and complex information, once the function is vulnerable to perturbation; which is notoriously true. (Just think about what radiation damage does to cells, especially as the level is gradually turned up: at first repairable [we have to function in an environment in which minor damage is a commonplace], then it triggers cancers, then it simply wipes out the cells -- which is what radiation sickness is about.)

7 --> In short, life -- including hypothesised first life that is based on empirical evidence of how life exists and operates -- manifests an islands-of-function-in-a-sea-of-non-function topology.

8 --> That is, until metabolic, genetic/info storage and self-replicating subsystems are appropriately constructed and integrated, you do not have life. There is a shoreline of function, and beyond it, a sea of non-function.

9 --> Thus, the attempt to rhetorically extend the island of function by appealing to a sloping ocean floor fails. Until you are on the shoreline, you have no basis for empirically credible life, which is why the main schools of thought on OOL, as exemplified by Shapiro and Orgel, have mutually refuted themselves. [NB: slight update to the always linked.]

In short, the bottom line is still as it has been stated above. However, a remark or two on rhetorical tactics: [ . . . ] kairosfocus
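The arithmetic in point 2 is easy to check in log10 terms. A small sketch (note that the three exponents as given multiply out to 10^148, i.e. the ~10^150 bound at the order of magnitude the comment is using):

using System;

class SearchBound
{
    static void Main()
    {
        // Exponents as given above: 10^80 atoms, one state change per
        // 10^-43 s (i.e. 10^43 per second), for 10^25 seconds.
        double logMoves = 80 + 43 + 25; // = 148, ~10^150 to order of magnitude

        double logSpace1k = 1000 * Math.Log10(2);     // ~301.0  (1,000 bits)
        double logSpace100k = 100000 * Math.Log10(2); // ~30,103 (100 k bits)

        Console.WriteLine("log10(cosmic 'moves')      = " + logMoves);
        Console.WriteLine("log10(configs, 1,000 bits) = " + logSpace1k);
        Console.WriteLine("log10(configs, 100 k bits) = " + logSpace100k);

        // Fraction of the 1,000-bit space such a search could sample:
        Console.WriteLine("searchable fraction ~ 10^" + (logMoves - logSpace1k));
    }
}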
Nakashima, At the top of your post at 334, you provided a quote from Abel and bolded the text you wanted attention drawn to.
We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses.
And then you say that you "don't know what in Abel's work justifies this exception". I then removed the exception from the quote: "We repeat that a single incident of nontrivial algorithmic programming success would falsify any of these null hypotheses." It then occurs to me that the original sentence says that the null hypotheses can be falsified if a single instance of algorithmic programming success can be found that isn't the product of selection at the programming level... but your revised sentence says that the hypotheses can be falsified if any successful algorithmic programming can be found. Very nice, Nakashima. - - - - - - - - - - Not having done enough damage to the original sentence and its meaning, you take aim at it again in the middle of your post by suggesting that it's "all the more odd in light of" a second quote coming from Abel's paper.
Functional switch-setting sequences are produced only by uncoerced selection pressure. There is a cybernetic aspect of life processes that is directly analogous to that of computer programming. More attention should be focused on the reality and mechanisms of selection at the decision-node level of biological algorithms. This is the level of covalent bonding in primary structure. Environmental selection occurs at the level of post-computational halting. The fittest already-computed phenotype is selected.
You now have my interest piqued, so I went through the quote. Abel states that functional sequencing doesn't come from coerced pressure (law-like cause-and-effect necessity). Then he highlights the analogy between biological algorithms and computer programming (they both operate from a set of selections that precede function). He then points out that environmental selection only operates on the value of the end product (but does not cause the selections that precede it). So, if I may now put your two thoughts together: It seems odd to you that Abel would want you to know his null hypotheses can be falsified by "a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level," and that this is even more odd because physical laws don't explain functional sequencing, computers and biosystems run off programs, and environmental selection selects for function after it is functioning. Also, you wish that Abel would be more consistent. Finally, at the end of your post, you say "If FSC is the product of selection, you can't rule out selection and then declare victory." Now, I already know that you've read David Abel's work (thank you). And there is no ambiguity in that he concludes FSC can only result from the act of a volitional agent. FSC is the product of an agent selecting for function at the organizational level. Nowhere does Abel say that FSC is the product of anything else. Your thinking on this is so twisted that I can only assume you are simply saying something in order to say anything at all. - - - - - - - - - - - - In your next post at 335, you say "KF-san has given us several points and assertions we are nowhere near finished talking about." Believe me; I am quite certain you have more to say on the matter. I think incessant is an appropriate term. The question is, will the posts be as incomprehensible as the ones you've already made? Upright BiPed
Mr BiPed, Abel's work is indeed fascinating. I wish he would participate here to talk about it with us. But in the meantime, KF-san has given us several points and assertions we are nowhere near finished talking about. What is function in the pre-biotic world? KF-san seems to know there are islands of it. Where does this knowledge come from? Nakashima
We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. I don't know what in Abel's work justifies this exception. All the more odd in light of this: Functional switch-setting sequences are produced only by uncoerced selection pressure. There is a cybernetic aspect of life processes that is directly analogous to that of computer programming. More attention should be focused on the reality and mechanisms of selection at the decision-node level of biological algorithms. This is the level of covalent bonding in primary structure. Environmental selection occurs at the level of post-computational halting. The fittest already-computed phenotype is selected. Granted, it's just a hash of assertion and opinion, but it would be nice if Abel were consistent. If FSC is the product of selection, you can't rule out selection and then declare victory. Nakashima
BillB, Why not address the evidence? You've read Abel's work; why not address it if you want to show ID is vacant? Why not stop with the harping over the edges of an argument with KF and just lay ID bare? That is what you want, isn't it? Don't you want the evidence for design to be shown false? What does Abel's paper have wrong - tell us specifically what that is. Does chance ever not operate at maximum uncertainty? Does any chance event ever lead to another chance event that isn't operating at maximum uncertainty? Are there any chemical affinities along the linear sequencing of DNA (where the information is)? If DNA were the product of ordered states, could it hold the amount of information it contains? Does complex coordinated function between disparate physical objects require selection at the information level, or not? Tell us, Bill. Address the evidence for ID that is already a part of the peer-reviewed scientific record - just as a novel change of pace. Why not? Upright BiPed
from David Abel... (Theor Biol Med Model. 2005; 2: 29. Published 2005 August 11. doi: 10.1186/1742-4682-2-29. PMCID: PMC1208958)

ABSTRACT: Genetic algorithms instruct sophisticated biological organization. Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC). FSC alone provides algorithmic instruction. Random and Ordered Sequence Complexities lie at opposite ends of the same bi-directional sequence complexity vector. Randomness in sequence space is defined by a lack of Kolmogorov algorithmic compressibility. A sequence is compressible because it contains redundant order and patterns. Law-like cause-and-effect determinism produces highly compressible order. Such forced ordering precludes both information retention and freedom of selection so critical to algorithmic programming and control. Functional Sequence Complexity requires this added programming dimension of uncoerced selection at successive decision nodes in the string. Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).

EXCERPT: What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses:

Null hypothesis #1: Stochastic ensembles of physical units cannot program algorithmic/cybernetic function.

Null hypothesis #2: Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function.

Null hypothesis #3: Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function.

Null hypothesis #4: Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time.

We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified. The fundamental contention inherent in our three subsets of sequence complexity proposed in this paper is this: without volitional agency assigning meaning to each configurable-switch-position symbol, algorithmic function and language will not occur. The same would be true in assigning meaning to each combinatorial syntax segment (programming module or word). Source and destination on either end of the channel must agree to these assigned meanings in a shared operational context. 
Chance and necessity cannot establish such a cybernetic coding/decoding scheme [71]. How can one identify Functional Sequence Complexity empirically? FSC can be identified empirically whenever an engineering function results from dynamically inert sequencing of physical symbol vehicles. It could be argued that the engineering function of a folded protein is totally reducible to its physical molecular dynamics. But protein folding cannot be divorced from the causality of critical segments of primary structure sequencing. This sequencing was prescribed by the sequencing of Hamming block codes of nucleotides into triplet codons. This sequencing is largely dynamically inert. Any of the four nucleotides can be covalently bound next in the sequence. A linear digital cybernetic system exists wherein nucleotides function as representative symbols of "meaning." This particular codon "means" that particular amino acid, but not because of dynamical influence. No direct physicochemical forces between nucleotides and amino acids exist. - - - - - - - Upright BiPed
BillB, "'Does anyone know anything about the evolution of early life or proteins?' the answer is, Yes, the people who do OOL research know a lot more than people who do ID research or KF." Would that include David Abel? Upright BiPed
Joseph, You are incorrect - archaeologists don't try to calculate the FSCI of objects to determine if they are designed. How could they - the concept of FSCI barely exists beyond this website. BillB
Nakashima,
Where you have used the words autocatalytic set, I would prefer just ‘collection of molecules’. Autocatalytic set is a description of the goal state, the fitness function.
I confess I'm showing my ignorance of OOL research and chemistry. My area is cybernetics, but I do work with some people doing OOL and other related stuff in ALife, in particular daisyworld models. Have you come across Chemoton models of chemical replicators? Someone I know did his PhD on them. I agree, it would be nice to see some people here actually testing their claims. BillB
ScottAndrews: Alternative to what? You seem to have just proposed a god of the gaps. It's KF who is claiming to know enough about life's origins to know what happened. What I'm objecting to is this nebulous FSCIdea as some kind of proof of design, and to the constant confusions over GA's and models. I don't do OOL research, but I know some people who do, so when you ask "Does anyone know anything about the evolution of early life or proteins?" the answer is: yes, the people who do OOL research know a lot more than people who do ID research, or KF. BillB
BillB @319: I don't object to your use of my words to serve your own purpose. I stand by them.
Does anyone know anything about the evolution of early life or proteins? How can we say what it is or isn’t consistent with?
But by doing so, you acknowledge that you don't know anything about the evolution of early life or proteins. Whatever your argument with ID is, you've just confessed to having no specific alternative. ScottAndrews
P.P.P.S I also think it's cool that millions of bits of FSCI can be generated simply by switching to 8-byte pixel boundaries for image data. But it's a shame that lossless compression of image data can make millions of bits of FSCI disappear. R0b
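A small sketch of that compression point (my own toy example, not R0b's code): a maximally redundant 800x600 "image" occupies 11.52 million raw bits, but orders of magnitude fewer after lossless gzip compression.

using System;
using System.IO;
using System.IO.Compression;

class CompressedBits
{
    static void Main()
    {
        // A highly redundant "image": 800*600 pixels, 3 bytes each, all white.
        byte[] raw = new byte[800 * 600 * 3];
        for (int i = 0; i < raw.Length; i++) raw[i] = 255;

        byte[] packed;
        using (MemoryStream ms = new MemoryStream())
        {
            using (GZipStream gz = new GZipStream(ms, CompressionMode.Compress))
            {
                gz.Write(raw, 0, raw.Length); // lossless: the image survives intact
            }
            packed = ms.ToArray();
        }

        Console.WriteLine("Raw bits:        " + (raw.Length * 8L));    // 11,520,000
        Console.WriteLine("Compressed bits: " + (packed.Length * 8L)); // far fewer
    }
}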
P.P.S. Below is a 2 kilobyte C# program that fills your screen with meaningful text. I don't see how this program can have any more than 16 kbits of FSCI. Isn't it amazing that it can produce 11.52 million bits of FSCI for an 800x600 screen, and much, much more than that for larger screens?

using System.Windows.Forms;
using System.Drawing;

class Program : Form
{
    static void Main(string[] args)
    {
        Program prog = new Program();
        Label label = new Label();
        label.Text = @"
Four score and seven years ago, our fathers brought forth upon this continent
a new nation: conceived in liberty, and dedicated to the proposition that all
men are created equal. Now we are engaged in a great civil war. . .testing
whether that nation, or any nation so conceived and so dedicated. . . can long
endure. We are met on a great battlefield of that war. We have come to
dedicate a portion of that field as a final resting place for those who here
gave their lives that this nation might live. It is altogether fitting and
proper that we should do this. But, in a larger sense, we cannot dedicate. .
.we cannot consecrate. . . we cannot hallow this ground. The brave men, living
and dead, who struggled here have consecrated it, far above our poor power to
add or detract. The world will little note, nor long remember, what we say
here, but it can never forget what they did here. It is for us the living,
rather, to be dedicated here to the unfinished work which they who fought here
have thus far so nobly advanced. It is rather for us to be here dedicated to
the great task remaining before us. . .that from these honored dead we take
increased devotion to that cause for which they gave the last full measure of
devotion. . . that we here highly resolve that these dead shall not have died
in vain. . . that this nation, under God, shall have a new birth of freedom. .
. and that government of the people. . .by the people. . .for the people. . .
shall not perish from this earth.";
        label.Font = new Font(FontFamily.GenericMonospace,
            Screen.PrimaryScreen.Bounds.Width / 50, GraphicsUnit.Pixel);
        prog.Controls.Add(label);
        label.Bounds = Screen.PrimaryScreen.Bounds;
        prog.TopMost = true;
        prog.WindowState = FormWindowState.Maximized;
        prog.ShowDialog();
    }
}

R0b
kairosfocus, your style reminds me of trial lawyers' closing arguments. You know, "Ladies and gentlemen of the jury, we have seen clearly that..., the evidence undeniably proves that..." Of course, that's what the lawyer is paid to say, and it tells us nothing about the credibility of his case, or the jurors' views, or even the lawyer's views. You're still claiming that we regularly observe intelligence creating FSCI, so I'll keep pointing out that we don't. At best, we observe FSCI and somehow infer its origin. You still haven't given us a method for determining which link of the causal chain introduced the FSCI. For GA's, you trace the FSCI past the computer to the programmer. Why not the computer? Or why not trace it further to the designer of the programmer? Your answers to these questions are ad hoc, vague, and question-begging. Basically it comes down to your a priori conviction that computers, being mere mechanical entities, can't create FSCI, while humans can. P.S. If FSCI is so clear and simple, why can't the handful of ID proponents who know about it agree on whether a pen has FSCI? R0b
Mr Charrington, If you wish to discuss something then it is up to you to come to the discussion prepared. However it is obvious that you don't even have a basic understanding of ID and you also don't have any intention of supporting the claims of your position. Joseph
One would measure the information in an object by determining what it took to bring said object into existence. BillB:
Thanks, you have just shown how the concept can’t ever demonstrate design in nature.
Yet archaeologists do it all the time. One would measure the information in an object by determining what it took to bring said object into existence.
Would you not also have to measure the information in the other things that it took to bring an object into existence?
Only if one is anal retentive. All we are trying to do is determine if nature, operating freely, can account for it or if agency involvement was required. Then once that is determined we investigate accordingly. Your other "objections" - about counting bits - demonstrate you are clueless. Joseph
Mr BillB, I've tried to engage KF-san on this question of what is pre-biotic function, and by extension what is the object of which it is a measure. Where you have used the words autocatalytic set, I would prefer just 'collection of molecules'. Autocatalytic set is a description of the goal state, the fitness function. If you wanted to fit this into a GA, you could expand the four letters of the RNA alphabet with a BREAK letter that signified the end of one molecule and the beginning of another. Thus one GA population member could hold multiple kinds of RNA. To evaluate, fold the RNAs into their tertiary structure per the parameters of the experiment. Drop multiple copies of each into the experiment, crank up your molecular dynamics simulator, come back later and see what you've got. The experiment will yield a negative result if every random set of RNA molecules degrades instead of producing more reaction products than you started with. That would be an important, publishable result. Anyone from the ID side who did this experiment would be a hero in my book, no matter what the result, simply for putting their beliefs to the test. Nakashima
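A sketch of the representation Nakashima describes (the details -- the '|' BREAK character, the member length, and all names -- are my assumptions): one population member is a string over {A, C, G, U, BREAK}, and splitting on BREAK yields the molecules it contributes. Folding and the dynamics run are left as stubs.

using System;
using System.Collections.Generic;

class RnaSetEncoding
{
    // The four RNA letters plus a BREAK letter separating molecules.
    static readonly char[] Alphabet = { 'A', 'C', 'G', 'U', '|' };
    static readonly Random Rng = new Random();

    static string RandomMember(int length)
    {
        char[] s = new char[length];
        for (int i = 0; i < length; i++)
            s[i] = Alphabet[Rng.Next(Alphabet.Length)];
        return new string(s);
    }

    static List<string> Molecules(string member)
    {
        List<string> mols = new List<string>();
        foreach (string m in member.Split('|'))
            if (m.Length > 0) mols.Add(m); // drop empty fragments
        return mols;
    }

    static void Main()
    {
        string member = RandomMember(60); // one generation-0 individual
        Console.WriteLine("Member:    " + member);
        Console.WriteLine("Molecules: " + string.Join(", ", Molecules(member).ToArray()));
        // Evaluation (not shown): fold each molecule, run the dynamics
        // simulator, and score whether reaction products increased.
    }
}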
R Daneel Olivaw is not going to be a GA. Similarly, a GA will not write itself out of randomly varied noise on a disk, nor is it credibly improved by allowing the object or source code to be hit by white noise; rapidly such would result in NON-function, which is why islands of function are seen as sitting in a sea of non-function.
No, R Daneel Olivaw isn't going to be a GA - what a bizarre thing to even think that it might! You have yet again launched into this idea that a GA ought to 'write itself out of noise'. No one is claiming that GA's pop into existence out of nothing, or that making random changes to a GA won't stop it from functioning as a GA. What we are talking about is what a GA does, not how they are created. I don't understand why you are finding these two concepts so difficult.
When it comes to life forms: first life credibly requires 600 - 1,000 kilo bits of initial information capacity to function ...
Remember this from ScottAndrews:
Does anyone know anything about the evolution of early life or proteins? How can we say what it is or isn’t consistent with?
I hope you are going to back up your claims about what is required for the origin of life with some evidence. Would you define an autocatalytic set as having any 'function' or does it only have function when we can define it as a living system? BillB
KF-san, Sorry, I am not attached, latched or quasi-latched to the data array being a PHOTOGRAPH. It is only a data array. I happen to know it contains an image of Mt Rushmore, ca 1925. If your design detection procedure can't tell me anything about the content of the PHOTOGRAPH, but rather relies on the artifact of it being a PHOTOGRAPH, delivered to the detection procedure via the intelligently designed INTERNET, then the design detection procedure is useless. I'm quite willing to go back to a discussion of more abstract data arrays if you prefer. Here is generation 0 of my population of IPD competitors. Every bit of the array was assigned by a call to the random number function. How much FSCI does it contain? Here is the data array for generation 1000. The population members now function much better. How much FSCI does it contain? You can assume the use of the GA algorithm I gave previously, if it helps. Please answer about the data array, not the algorithm, the operating system, the microprocessor, etc. None of those things seem to matter when discussing text on a screen, or 143 ASCII characters. Nakashima
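For concreteness, generation 0 of such a population might be produced like this (the encoding -- a 64-entry move table per competitor -- is my assumption, not Nakashima's actual code):

using System;

class Gen0
{
    static void Main()
    {
        // Each IPD competitor: a move table indexed by the last three rounds
        // of joint history (2 players * 3 rounds = 6 bits = 64 entries).
        Random rng = new Random();
        int popSize = 20;
        bool[][] population = new bool[popSize][];

        for (int i = 0; i < popSize; i++)
        {
            population[i] = new bool[64]; // true = cooperate, false = defect
            for (int j = 0; j < 64; j++)
                population[i][j] = rng.Next(2) == 1; // every bit from the RNG
        }

        // 20 * 64 = 1,280 contingent bits, past the 1,000-bit threshold; the
        // open question in the thread is what value S takes for this array.
        Console.WriteLine("Bits in generation 0: " + (popSize * 64));
    }
}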
PPPS: We can see that, for example, a particular text string in front of us is of n characters, and that it constitutes contextually responsive text in English. Where n > 143, we can confidently conclude, based on the text string and its characteristics -- not direct observation of its causal story -- that it is an artifact of design. (For, we have separately seen that and why FSCI is a reliable sign of design. Validation of a sign is not to be confused with its use.) kairosfocus
PPS: I trust it is clear enough that I am showing that a stick that falls in berry juice can be used to write, but a Parker 51 shows a kind of functionally specific complex information that takes us out of the credible reach of nature acting freely by forces of chance + necessity. Similarly, 3 letters by chance that spell out an English word are fairly easy to get to, but 143 ASCII characters forming a contextually responsive utterance in English are another matter entirely. And, when it comes to life forms, origin of life credibly requires 100's of k bits of information, well beyond the credible reach of chance + necessity -- for INITIAL function. kairosfocus
Nakashima-San: You have chosen the PHOTOGRAPH as an example. Can you show me a photograph that has appeared anywhere in our known observation without a design-based process? As for the issue of the impact of modest random perturbation [here through white noise, aka "snow"], this goes to specificity of function within an island of functionality. FSCI is about functionality that is sufficiently specific that it exists in a target zone that can more or less be characterised as islands or archipelagoes. That is why noise dumped into the picture of Mt Rushmore c. 1925 eventually makes it unrecognisable as a specific location, then as a picture of a mountain, then as a picture of anything in particular. And, once we are at the snowstorm effect, further random change has a very different effect: moving around in a vast sea of snowy images. Such an entity is therefore not functionally specific, save in the sort of scenario where we take one particular config and use it as, say, the basis of a one-time message pad cipher system, or maybe as a way to do a lottery outcome. Then, we have made a reference point from the otherwise non-functional config and have defined a new target for a new purpose. GEM of TKI
__________
PS: BB: GA's etc. do output data strings that exhibit FSCI, and such show that design is at work in the underlying causal process, per reliable sign. The ASCII text of the code is similarly an index of design, and we do in fact know that GA's are artifacts of design. What I have objected to is that GA's are not credible as artificial intelligences in any sense worth having: they are incapable of autonomy, real decision and imaginative creativity. R Daneel Olivaw is not going to be a GA. Similarly, a GA will not write itself out of randomly varied noise on a disk, nor is it credibly improved by allowing the object or source code to be hit by white noise; rapidly, such would result in NON-function, which is why islands of function are seen as sitting in a sea of non-function. In short, FSCI is again seen to be the product of intelligence. Technological evolution -- by design -- allows the functional complexity of systems to increase over time: such systems are complex, functionally specific and informational. Similarly, pens and the like -- of sufficient complexity -- show implicit information that can be worked out and used to estimate the required FSCI, but it would be much easier to simply observe the complexity and irreducibility that occur with core parts for a modern pen. (It is conceivable that a stick or feather could get itself stuck in berry juice and form a "natural" pen, but that has nothing to say to the Parker 51 or the like. And, because of the IC of such systems -- and of the many subsystems in cell-based life -- the creation of novel functionality of complex order that exhibits irreducibility is maximally improbable by Darwinian-type processes; in short, novel body plans have to get to shores of function too, before they can be improved incrementally. A quill stuck in berry juice is one thing, a Parker 51 another entirely different one.) When it comes to life forms: first life credibly requires 600 - 1,000 kilobits of initial information capacity to function, and has in it many irreducibly complex, carefully organised subsystems. Novel body plans require 10's - 100's+ of MILLIONS of new bits. kairosfocus
KF: 308 (In order not to flood this page with cut'n'pastes from KF's post I'll just paste excerpts - see 308 for the proper context - I'm not trying to misquote by removing the context)
What then of the metrics in # 177 above [and the always linked] ... The case of a PC screen full of information? And of course, the case that triggers all of this dismissal: observed DNA etc in the cell.
Please read my posts 302-303 regarding how GA's are implemented and what FSCI they produce, to see if this changes the way you calculate FSCI for a GA. If the GA is embodied in a non-computational system, how would the measurements apply? In the case of a computational system, should we apply the measure to the written code or to the binary, and if so, what contribution to FSCI (or lack of FSCI) does the compiler make?
(As to the notions that I do not understand hill-climbing algorithms ...
What notions? Do you mean my reference to you not understanding the difference between simulation and simulator? Because if so, then hill climbing is irrelevant. This is the same misunderstanding that Nakashima is trying to unpick with his comments on Second Life. When you create a model you are creating an algorithm and a set of equations which, when iteratively computed, will yield a series of variables that should approximate observable variables in the system, or the part of the system, you are modelling. You keep bringing up the computer hardware and operating system and talking about perturbations to it, but this is completely irrelevant to the model. You ought to be able to take a model and iterate it using only a pen and paper; the computer is just a tool to speed things up. GA's, when used to model biological processes, are part of a model, and you can do the math by hand or with a computer. Messing around with the computer or the OS breaks the tool you are using; it has nothing to do with the model.
having given specific cases of FSCI, we have shown that in the cases where we do independently know the cause of FSCI, it is intelligence.
All I have seen is you pointing at designed things and saying 'look, FSCI' and then pointing at DNA and saying 'look, FSCI'. The actual process by which you calculate it seems to be contingent on lots of assumptions, as Mark Frank has highlighted. You dismiss GA's as examples of things that generate FSCI because humans design GA's and 'put the FSCI in at the start' somehow, yet you also reject the idea that an intelligent designer could create life by embodying a GA in a universe. From what I can tell, the only way to reliably infer FSCI is if you know it had an intelligent source.
we do know that complex functional, information rich algorithm-implementing organisation must have arisen for life as we observe it to exist.
We don't yet know what was required for life to exist; otherwise we would have a rigorous step-by-step explanation of the origin of life. Simply saying 'it's complicated' is not sufficient. How can we measure the bit count of a process we haven't uncovered yet? --
Strawman, set up to be knocked over.
Perhaps you missed where I was responding to this from jerry:
Now maybe in some other language or by using some encryption technique we can relate the second string of information to something else. If that is true, then that string is also FSCI.
jerry seems to be claiming that the FSCI in something can change if we discover new things about its origin. Are you agreeing, or were you actually accusing jerry of setting up the strawman?
Once an entity has information carrying capacity and exhibits observable functionality ... shows islands of function in a sea of possible configurations ... the undirected search capacity of our cosmos.
You keep bringing up this islands of functionality idea, again here:
...islands of function and thresholds beyond which the search resources of the observed cosmos... ... ...rendering any at-random walk based search utterly unlikely to hit on any reasonable islands of function.
What is this idea based on, apart from wishful thinking, and why does trying to make out that the universe is a search engine help, apart from setting up strawmen? You have assumed that all configuration spaces consist of isolated islands; why aren't some configuration spaces more like continents? If we are going to talk about the universe as a search and configuration spaces as landscapes, then the first thing to realise is that the universe is not in one location in the landscape; it is all over the place, performing billions of 'searches', or more properly configuration shifts, in parallel every second, in a search space consisting of islands, archipelagos and continents. I would say though, and to mirror what others have said, that the whole notion of seas and islands is poor when factoring in a pre-biotic universe. At best you should consider the config space to include the ocean floor, and simply place sea level as a slightly arbitrary demarcation point between complex chemistry and self-replicating systems. The 'search' that occurs in a universe is simply a mass collection of shifting configurations, some of which may be very close to these 'shores of function'.
GA’s in our observation are the product of designers.
Yes, yes, yes - how many times do I have to say this? I am not claiming that GA's pop into existence randomly. The question is about whether a GA can produce this FSCI and whether the universe can have a 'GA' built into it by an intelligent designer. Talking about how sensitive an implementation of a GA is to perturbation misses the point entirely, which is why I question your understanding - the universe will break if you change the laws of physics; a simulation will break if you change the way the underlying computer does maths.
The FSCI of a pen:
I'm glad you mentioned a quill; I was going to ask about charcoal sticks. Nature, operating freely, can generate writing instruments, but some specific processes are required to refine them (and they are only writing instruments if we use them as such). With non-replicating entities like pens you need people, but what if the entity can make variable copies of itself? In the pen example you seem, as in all other FSCI examples, to rely on knowledge of the pen's history and its intended purpose. And yet you also claim:
FSCI is both observable and quantifiable, without reference to causal story.
So, which is it? And do you agree with jerry's or Joseph's claims about FSCI, or are they wrong? BillB
KF-san, Thank you for explaining the 1925 example once more. From your discussion, I'm not sure how modest perturbation disrupting function relates to the previous use of compressibility. Is modest perturbation a test for C? But again, that is a side detail. From your explanation, I can't think of any photo of anything that won't be full of FSCI. I suppose that finding the world always exhibits design is a valid theological position, but I don't see how FSCI can be a useful metric if it is always returning as many bits as there are in the data array. Nakashima
Pardon: CSB metric:
______________
FSCI is also an observable, measurable quantity, contrary to what is imagined, implied or asserted by many objectors. This may be most easily seen by using a quantity we are familiar with: functionally specific bits [FS bits], such as those that define the information on the screen you are most likely using to read this note. . . . . we can construct a rule of thumb functionally specific bit metric for FSCI:

a] Let contingency [C] be defined as 1/0 by comparison with a suitable exemplar, e.g. a tossed die.

b] Let specificity [S] be identified as 1/0 through functionality [FS] or by compressibility of description of the information [KS] or similar means.

c] Let degree of complexity [B] be defined by the quantity of bits to store the relevant information, with 500 - 1,000 bits serving as the threshold for "probably" to "morally certainly" sufficiently complex to meet the FSCI/CSI threshold.

d] Define the vector {C, S, B} based on the above [as we would take distance travelled and time required, D and t], and take the element product C*S*B [as we would take the ratio D/t to get speed].

e] Now we identify: C*S*B = X, the required FSCI/CSI-metric in [functionally] specified bits. . . . .

For instance, for the 800 * 600 pixel PC screen, C = 1, S = 1, B = 11.52 * 10^6, so C*S*B = 11.52 * 10^6 FS bits. This is well beyond the threshold. [Notice that if the bits were not contingent or were not specific, then X = 0 automatically. Similarly, if B < 500, the metric would indicate the bits as functionally or compressibly etc. specified, but without enough bits to be comfortably beyond the UPB threshold. Of course, the DNA strands of observed life forms start at about 200,000 FS bits, and that for forms that depend on others for crucial nutrients. 600,000 - 10^6 FS bits is a reported reasonable estimate for a minimally complex independent life form.]
_____________________
In the case of ASCII text, 143 7-bit characters is equivalent to 1,000 bits, or ~10^301 possible configs. Random changes will rapidly convert contextually responsive English text of 18 - 20 words into meaningless hash. And there is an Internet full of instances of such FSCI by known intelligence, but none of such by chance + mechanical necessity, as the Welcome to Wales example discusses. GEM of TKI kairosfocus
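That last claim about random changes is easy to try. A quick sketch (the sample sentence is my own choice): flip randomly chosen characters to random printable ASCII and watch readability degrade.

using System;
using System.Text;

class Perturb
{
    static void Main()
    {
        string text = "We hold these truths to be self-evident, that all men are created equal.";
        Random rng = new Random();
        StringBuilder sb = new StringBuilder(text);

        for (int hits = 1; hits <= 40; hits++)
        {
            int pos = rng.Next(sb.Length);
            sb[pos] = (char)rng.Next(32, 127); // a random printable character
            if (hits % 10 == 0)
                Console.WriteLine(hits + " hits: " + sb);
        }
        // Readable English typically survives only a handful of hits on a
        // 70-odd character string; by 40 hits it is close to hash.
    }
}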
CH: Please read 177 above. GEM of TKI kairosfocus
Onlookers: MF of course claims never to read what I write. It shows, e.g. in:
when challenged to calculate the FSCI you can only do it by making all sorts of arbitrary assumptions about the specification and the cause.
1 --> The simple metric of FSCI is based on thresholds that give us the ability to make a conclusion based on a topology of islands of function in a sea of non-functional configs, and in the further context of sufficient information storage in the function that unaided random walk based search strategies and the like will be maximally unlikely to succeed in reaching shores of function.
2 --> That is, FSCI has a particular purpose: allowing a decision beyond the practical reach of false positives, that we are looking at a designed entity.
3 --> And, as tested on literally millions of known origin cases, it does render a reliable verdict; indeed, to date no objector has been able to produce a good counter example where chance + necessity have spontaneously produced FSCI without active information from an intelligent source.
4 --> So, we have very good reason to conclude that FSCI is fit for its purpose: it is a reliable sign of intelligence.
5 --> In that context the simple FSB metric at 177 above allows us to set reasonable quantitative thresholds, in the context of using dummy variables to categorise the functional specificity and complexity beyond a threshold before we use the information stored and used to implement function.
6 --> Insofar as that is an example of fitness for purpose, that is not arbitrary, regardless of the conventional nature of such thresholds: mean sea level is a convention, too!
7 --> MF also indulges in a turnabout:
a] We have (per Orgel et al) observed that functionally specific complexity is an interesting feature of certain objects in the world.
b] We further observe that -- on an intuitive basis -- it is a pattern that tends to show up in known designed entities.
c] We set up reasonable thresholds and see that they reliably indicate cases of design where we INDEPENDENTLY KNOW the causal story.
d] We can find no counter instances at these thresholds, and objectors, after years of effort, are equally unable to find counter-examples. (Thus the types of strawmannish objections and red herrings we see above.)
e] We have good reason to see that FSCI, for instance, is a reliable sign of design, and so may infer on best current explanation to design as cause when we see FSCI.
f] To infer the presence of FSCI, we have used the simple approach: (i) specific function vulnerable to modest perturbation (i.e. topology of islands of function), (ii) sufficient information storage capacity used in that function to soak up the search capacity of the observed cosmos (500 - 1,000 bits or more), (iii) a measure of actual capacity used so that we see how far beyond the threshold we are.
g] The FSB metric then reports the result, in functionally specific bits that are known to be beyond the threshold.
h] This is not a matter of imposed assumptions and question begging, but empirically based inference to best current explanation across known causal factors.
i] If MF is able to, he can cite a case of FSCI where we know directly that the cause is not intelligent. [And this includes showing that chance + necessity acting on an arbitrary initial configuration can and does form FSCI.]
j] Similarly, he can show us that a fourth causal factor beyond the observed chance, mechanical necessity and intelligence is at work, and/or that he can credibly reduce, say, intelligence to chance + necessity.
k] But instead of doing the level playing field thing, he has in effect tried to assert away the issue from inference to empirically anchored best explanation, to suggest that questions are being begged.
8 --> He only succeeds in underscoring that there is no good counter to the triple observation that (a) FSCI is a known product of intelligence, (b) is only known to be so produced, and (c) is credibly beyond the reach of known non-intelligent causal factors alone or in concert.
9 --> Thus, FSCI plainly stands as a reliable sign of intelligence, including in cases where we do not directly know the causal story; e.g. origin of DNA-based cellular life.
10 --> And THAT is what best explains the stridency of materialistic objections to it. ___________ GEM of TKI kairosfocus
Nakashima-San: Re:
Next unanswered question - I give your C*S*B procedure an array of data 800*600*24 bits, scanned from a photo of Mt Rushmore, ca 1925. You say the procedure should infer design. This is not a false positive because of the nature and structure of the photograph. What do you mean by nature and structure, and how would that apply to a photo of TV static? (I know I am dating myself because in the new era of digital TV there is no static.)
Actually, I addressed this one already. Once more unto the breach:
1 --> A scanned photograph shown on a PC screen will manifest FSCI in the image, once modest perturbation will destroy the image.
2 --> In the case of Mt Rushmore circa 1925, a fairly modest degree of noise will disrupt the recognisability of the particular mountain in view; then, beyond a further modest threshold, that there is a specific image at all will be lost in the on-screen snowstorm. [And, there is such a thing as a disrupted digital image; I guess we see a lot more of that out here in the Caribbean [esp. when Cricket is "on" . . . ] than you do in Japan, doubtless.]
3 --> And, in speaking of the nature and structure of the photograph, I am alluding to the fact that there is a certain silver depositional pattern based on the optical information impinging on the film where the picture was taken.
4 --> On developing, we have a specific image of Mt Rushmore at a particular moment, and not of something else (as would happen if the camera were to have accidentally opened up and spoiled the exposed film).
5 --> This is now a reference point, and we can see that we now have a definite function: photo of Mt Rushmore, circa 1925.
6 --> Scan and put on screen, and we will have a further digitised version (strictly, the Ag particle pattern in the film is a digital image, just with the pixels at random, not in a grid).
7 --> Inject white noise to varying levels and we will see increasing disruption of function, to the degree where beyond a certain point no discernible image of a particular object is observable. Further noise will just mush the snowstorm around, with no material difference in what is there.
8 --> That is, in the case of such a screen-full of "snow," moderate random changes (as previously discussed in raising e.g. steganography) will not materially affect the overall image of a screen-full of snow.
9 --> That is, there is not an islands of function topology in this case. The general complexity [we doubtless have more than 1,000 bits of info storage capacity] is not functionally specific, so FSCI does not apply: F = 0.
10 --> If we, however, were to make one particular screen-full of snow a standard and do bitwise comparisons to it, we would see a difference that can be used functionally, but that is where we have now made a reference target. (This can be used to make up, say, a one-time message pad.)
11 --> The centrality to FSCI of a topology of islands of function, in a config space dominated by non-functional configs that thus exhausts the unaided search capacity of the observable universe, will be plain. GEM of TKI kairosfocus
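A toy numerical illustration of points 7 - 9 may help; here is a sketch in Python with NumPy. "Function" is proxied by adjacent-pixel correlation, a crude stand-in for recognisable structure; that proxy, the synthetic gradient standing in for a scanned photo, and all the parameters are assumptions of the sketch, not part of the argument above.
______________
import numpy as np

rng = np.random.default_rng(0)

def perturb(img, fraction):
    """Overwrite the given fraction of pixels with random values."""
    noisy = img.copy()
    n = int(fraction * img.size)
    idx = rng.choice(img.size, size=n, replace=False)
    noisy.flat[idx] = rng.integers(0, 256, size=n)
    return noisy

def structure(img):
    """Adjacent-pixel correlation: near 1 for a smooth image, near 0 for static."""
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

photo = np.tile(np.linspace(0, 255, 600), (800, 1)).astype(np.uint8)  # stand-in for a photo
snow = rng.integers(0, 256, size=(800, 600), dtype=np.uint8)          # screen-full of static

for f in (0.0, 0.1, 0.5, 0.9):
    print(f"noise {f:.0%}: photo {structure(perturb(photo, f)):+.2f}, "
          f"snow {structure(perturb(snow, f)):+.2f}")
______________
The photo's structure score falls steadily toward zero as noise is injected, while the static's score starts near zero and stays there: perturbing snow makes no material difference, which is the "no islands of function" point in 9 above.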
Onlookers: The desperation to dismiss the implications of FSCI has now reached a climax in which ideology is triumphing over easily accessible and long since adequately discussed facts and their implications:
1] BB, 284: Despite numerous requests, a blanket refusal to demonstrate how FSCI can be calculated for a specific example -- Excuse me! What then of the metrics in # 177 above [and the always linked], and even in the Weak Argument Correctives at 28? Have we not seen that any string of at least 143 ASCII characters that constitutes contextually responsive English will be an example? Similarly for computer programs? The case of a PC screen full of information? And of course, the case that triggers all of this dismissal: observed DNA etc in the cell. (As to the notions that I do not understand hill-climbing algorithms on differential "fitness" measures or do not understand biology etc., that boils down to the fact that I disagree with the conventional wisdom; so much easier to dismiss than to address the serious epistemological issues as already linked. FYI, while I do confess to being a penitent sinner under reconstruction and reformation, I am neither ignorant nor insane nor imbecilic on the relevant matters, pace Dawkins et al.)
2] . . . resorts to rhetorical claims and gestures towards products of human design with claims of 'Look, FSCI, it's obvious onlookers' -- On the contrary, having given specific cases of FSCI, we have shown that in the cases where we do independently know the cause of FSCI, it is intelligence. This means that FSCI is known to be produced by intelligence, and that there are no cases where it has been observed to be spontaneously produced by nature acting freely and without direction, by chance + necessity. (Given the threshold configuration space of 2^1,000 states and an observed universe that cannot access as many as 2^500 atomic states across its lifespan, this is also just what analysis of the challenge of search will tell us.) In short, here BB is distorting and dismissing the scientific inference from the fruit of widespread and reliable observation to empirically warranted inductive generalisation. But, such dismissal does not shift the balance: on inference to best and empirically well supported explanation, FSCI is a reliable sign of intelligence. The real problem: if FSCI is a reliable sign of intelligence, and DNA exhibits it, it is best explained as designed. Which is unacceptable to those committed to a priori Lewontinian materialism imposed on science.
3] 298: From a scientific point of view we don't yet know what was required for life to arise so we can't measure its information content and determine its FSCI -- On the contrary, we do know that complex functional, information rich, algorithm-implementing organisation must have arisen for life as we observe it to exist. For, such cell based life has these features. Just so, we can easily enough observe the information carrying capacity of DNA, etc, and observe both function and vulnerability to modest perturbation. DNA for minimally complex independent life is about 600 - 1,000 k bits, or a search space well beyond that of 1 k bits. In short, we here see FSCI and know that undirected chance + necessity is credibly unable to successfully sample the search space, on the gamut of probabilistic resources accessible in our observed cosmos. But, we also know that DNA is algorithmically functional based on digital information.
This class of entity is routinely produced by intelligent agents, though we have not as yet mastered the arts to create something like DNA. (And BTW, hill-climbing algorithms do not address the real challenge here: to get TO shores of function in vast config spaces of non-function.)
4] 302: The short version of that would seem to be - we can't tell if something has FSCI unless we know everything there is to know about it, and its history -- Strawman, set up to be knocked over. Once an entity has information carrying capacity and exhibits observable functionality that is sufficiently specific to be vulnerable to modest perturbations [i.e. shows islands of function in a sea of possible configurations], then the object beyond a certain threshold is beyond the undirected search capacity of our cosmos. So -- and as has repeatedly been pointed out but ignored or dismissed -- FSCI is both observable and quantifiable, without reference to causal story. (The issue is that when we DO know the causal story directly, FSCI is invariably the product of intelligence, and that on search space grounds this is just what would be expected; i.e. we have grounded an induction on best explanation from observed FSCI to its most credible cause -- intelligence, not nature acting spontaneously and without direction through forces of chance + necessity.) And, once we see FSCI we are entitled to infer to its general class of cause: design. (Thus, for very relevant instance, cell based DNA-driven life shows the signs that point to its origin in design. By what designer is another question, one to be answered based on other evidence that allows us to identify possible candidates and select the likeliest.)
5] 303: a good GA when implemented on, say a computer, would be as compact as possible whilst a large GA (written by Microsoft probably ;)) might contain much complexity (lots of FSCI) but not perform any better than the small GA containing less FSCI. — By perform better in this context I mean generate more FSCI -- The genetic algorithm implementing program in either case will with overwhelming probability exhibit 143+ ASCII characters, and will be functional and vulnerable to modest perturbation. It also depends on a computer which is itself based on much functionally specific, complex information. So, in either case we have excellent grounds to infer, from seeing a GA program, that it was designed. And, in fact, by discussing "written by Microsoft" BB in effect acknowledges this: GAs in our observation are the product of designers. It may well be that a more efficient algorithm will take up less storage, but that is beside the point: we are looking at islands of function and thresholds beyond which the search resources of the observed cosmos cannot credibly move to shores of function by undirected chance + necessity. If a GA were to have in it less than 143 characters, the inference on search resources vs config space would be locked out, but that is because we are seeking to eliminate the chance of a false positive inference to design, and are perfectly willing to accept that the particular test for design will not be applicable to a string that is below that threshold. (Other tests such as irreducible complexity might well apply. See what happens if you knock out each character in sequence: is there a core part of the program where once anything is knocked out function is lost, and on restoring function comes back?
Multi-part irreducibly complex functionality at an operating point is another sign of intelligent design.)
6] The "probability" bugbear -- The key issue is not PROBABILITY -- and metrics thereof -- but search space vs a known topology of islands of function. With 1,000 bits of information carrying capacity involved in a case of observed function vulnerable to moderate perturbation, we see that the config space has 2^1,000 ~ 10.7 * 10^300 cells. The observed universe, over its lifetime, has some 10^150 or fewer possible states of its ~ 10^80 atoms. In short, viewing the universe as a search engine, it cannot sample as much as 1 in 10^150 of the possible configurations, rendering any at-random walk based search utterly unlikely to hit on any reasonable islands of function. We do routinely see complex functional entities that exhibit this degree of info carrying capacity: they are not produced by chance + necessity alone, but by intelligence (which can get to shores of minimal functionality and then increment function towards peaks by, among other techniques, trial and error then improvement). But since such an otherwise reasonable inference cuts across the dominant evolutionary materialism of our day, it is stoutly resisted, as we see above.
7] The FSCI of a pen: Is a pen functional? Does it show vulnerability to modest perturbation of information stored in it? Is the capacity of the stored information in excess of 500 - 1,000 bits? A pen does show function based on interacting parts, and is indeed vulnerable to modest perturbation [drying ink, broken nibs or ball points, broken springs and whatnot], but it does not exhibit EXPLICIT information storage. Since we see a cluster of mutually interacting parts working together at an operating point vulnerable to disruption on loss of one of a cluster of key co-adjusted parts [ink, ink storage, ink transfer to writing contact point, transfer to paper], functionality AS A MODERN PEN exhibits irreducible, more or less fine-tuned [to operating point] complexity. Thus, irreducible complexity would be the reasonable sign of intelligence to infer based on. But, if one insists on whether such a pen exhibits FSCI, the issue is implicit information storage: is there hidden information in a functioning pen of at least 1,000 bits? ANS: On reverse engineering [observe: no "science stopper" here . . . ], we see a chain of decision nodes that are tied to its functionality, which could be reduced to the classic chain of binary decision nodes. If that chain can reasonably be seen as going beyond 1,000 nodes, then we could apply FSCI as a criterion to the pen. Precision of fit and co-adaptation of key parts for function at operating point [e.g. the ink, the ink storage and ink transfer mechanisms, as well as the transfer to paper mechanism] would allow us to make such a decision. (Similarly, 143 ASCII characters that function as contextually responsive English text embed a chain of at least 1,000 binary node decisions.) VERDICT: The typical modern pen is likely to be well above such a threshold, though something like a goose quill or stick dipped in ink would be below it. (And, to try the Berra's blunder game of imagining a chain of "ancestry" from a stick dipped in berry juice to a Parker 51 fountain pen simply would show that a taxonomic tree illustrates commonality of structure, not ancestry without intelligent direction; as happened with the Corvette from the 1950's - 70's.)
_____________ In short, the balance on the merits is plain: FSCI is indeed a reliable sign of intelligence. And insofar as 143+ characters of ASCII text in contextually responsive English, a computer screen-full of functional information, genetic algorithm programs, pens or living cells exhibit FSCI, we have good grounds for inferring to design as the most credible cause in all of them. GEM of TKI kairosfocus
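The arithmetic in point 6 is easy to check; a quick sketch in Python using exact integers (note the 10^150 figure for the states the observed cosmos can visit is the estimate quoted above, taken as given here rather than computed):
______________
from math import log10

config_space = 2 ** 1000      # configurations for 1,000 bits of capacity
universe_states = 10 ** 150   # quoted upper bound on states the cosmos can visit

print(f"2^1000 ~ 10^{log10(config_space):.0f}")  # ~ 10^301
print(f"fraction searchable < 1 in 10^{log10(config_space // universe_states):.0f}")
______________
This reproduces the 2^1,000 ~ 10.7 * 10^300 figure and the claim that the cosmos cannot sample as much as 1 in 10^150 of the space.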
#299 "No, it is really quite simple and since you seem to have a hard time understanding this very simple concept, maybe you should refrain from commenting on it. Perhaps you should study some computer programming and some basic courses in English grammar." Jerry - as always, very quick to resort to insults. However, you haven't responded to the argument. FSCI is meant to be a well defined concept. Yet when challenged to calculate the FSCI you can only do it by making all sorts of arbitrary assumptions about the specification and the cause. If I change the assumptions I come up with a different value. Are you saying those assumptions are not required? Or do you have a justification for them? Mark Frank
Mr Charrington, -------"Clive, No. Why?" Why not? Clive Hayden
Kairosfocus @ 114
For those troubled by the issue on whether or not I am in agreement with Dembski
Actually, I rather asked if Dr. Dembski agrees with you. BTW, what do you think about Jerry's FSCI definitions? Are they compatible with yours? sparc
Bill, You say things that are demonstrably incoherent, then ignore that anyone noticed. Great. Upright BiPed
A general point - a good GA, when implemented on, say, a computer, would be as compact as possible, whilst a large GA (written by Microsoft probably ;)) might contain much complexity (lots of FSCI) but not perform any better than the small GA containing less FSCI. -- By perform better in this context I mean generate more FSCI -- Shouldn't a 'GA' implemented in, let's say, a universe, by an all-powerful God be as compact as possible and therefore contain almost no FSCI whilst generating almost infinite quantities of the stuff? BillB
jerry:
Now maybe in some other language or by using some encryption technique we can relate the second string of information to something else. If that is true, then that string is also FSCI.
The short version of that would seem to be - we can't tell if something has FSCI unless we know everything there is to know about it, and its history. A GA on its own is useless, just a description. You need some mechanism to operate according to the rules laid out in the algorithm. This could be a computer; a man with pen, paper and an abacus; or a naturally occurring set of environmental conditions. How do you work out which FSCI, in all the entities that support the running of a particular instantiation, contributes to the FSCI generated by the process? And if the process then affects or adds to the FSCI in the entities that sustain it, what will happen? Will the FSCI go into exponential growth? BillB
Joseph, Where does measuring what it took to cause an object to come into existence stop? In the case of a pen do we need to account for the information in all the causes for all the objects required to cause the pen, and all those causes as well? If we do, then ultimately all objects trace back to the origin of the universe, so your method requires an almost infinite regression back to things we cannot observe. Not a good basis for a scientific technique. BillB
UB
In his rush to mock KF, our friend Billie passed over a post directed at him which shows his objection to algorithmic DNA was nothing but a cheap distraction. Apparently, his fragility is overwhelmed by the implications.
I'm sure KF isn't that fragile and I would hesitate to use the word cheap against him so his distractions are probably rooted in confusion about the difference between a simulation and a simulator. BillB
""The point is that FSCI is meant to be a well defined concept. But you can only make it well defined by including a lot of arbitrary assumptions about both the target and the context in which the outcome is generated. No, it is really quite simple and since you seem to have a hard time understanding this very simple concept, maybe you should refrain from commenting on it. Perhaps you should study some computer programming and some basic courses in English grammar. jerry
Joseph,
One would measure the information in an object by determining what it took to bring said object into existence.
Thanks, you have just shown how the concept can't ever demonstrate design in nature. From a scientific point of view we don't yet know what was required for life to arise, so we can't measure its information content and determine its FSCI. From an ID perspective we have no knowledge of the creator, so we are unable to determine what was required for it to bring life into existence. Therefore, according to you, we have no way of measuring the FSCI in life unless we can uncover a material, knowable cause. BillB
Re #292 The point is not how low the number is. We all know that any outcome can be described in such a way that the probability of achieving that outcome is as low as you like. The point is that FSCI is meant to be a well defined concept. But you can only make it well defined by including a lot of arbitrary assumptions about both the target and the context in which the outcome is generated. I could for example assume that the target is any string which will effectively give a message of some kind to a recipient (i.e. the function is "gives a message"). But that in itself depends on the context. In the right context almost any string of characters gives a message. Mark Frank
"Surely some of these assumptions, and hence your calculation, need some justification?" Not really. You can nibble around the edges but it will not change anything of consequence. Your comment used over 800 characters and expanded the mix to include numbers and 4 or 5 other characters. So we have 42^800 possibilities. Now using some nibbling here and there I bet you could get it down to 35^750 or some other even lower number of which your post is just one possibility (if one considers 35^750 a low number but it is much lower than 42^800 by mucho magnitudes.) So your post is both very complex and very rare. Why don't you take a crack at it and see how low a number you can get. jerry
Interesting. In his rush to mock KF, our friend Billie passed over a post directed at him which shows his objection to algorithmic DNA was nothing but a cheap distraction. Apparently, his fragility is overwhelmed by the implications. He was, however, able to suggest to KF that he put his money where his mouth was. Upright BiPed
#289 Oh I see - a blank and 26 characters. This calculation makes the following assumptions:
* There is a mechanism for producing character strings
* All characters are equally likely to be selected
* Uppercase characters, characters from other alphabets, special characters, mathematical characters etc are not available
* Probability of selecting a character is independent of characters already selected
* The attempt to produce the string happened just once
* The specification is exactly this string (or possibly the mixed case version of the string) - not e.g. strings of 22 characters that make sense in English, or strings of 22 characters that make sense in some language, or strings of some length that achieve the same end
Surely some of these assumptions, and hence your calculation, need some justification? Mark Frank
Joseph
One would measure the information in an object by determining what it took to bring said object into existence.
Would you not also have to measure the information in the other things that it took to bring an object into existence? The information it took to make the hammer that was used to flatten the metal that was used to make the tie clip?
That said for a GA just count the bits it contains and that would give you the minimum amount of information (SI) it takes.
So FSCI = File Size? Is that compressed or uncompressed? Is the FSCI value different for compressed or uncompressed bits? What about working memory that the GA would use when it was running? Is that counted? What about information it generates as it executes? Is that included also? Does the FSCI change at all, or is it fixed?
The same goes for a pen.
When you put it like that, it sounds simple. Can you do it for a pen then, or are you limited to claiming that it's possible in theory only? What is the value of the FSCI in a pen, please? Or any other example you would like to give of a real physical object would be great. Other than simply counting the characters in a text message (KF's "millions of examples all over the internet" get-out), I've never seen a value put on an everyday object for its FSCI. Such as, to pick a totally random example, a softball. Or a simple pen. Can it even be done? Mr Charrington
Joseph
It proves you are arguing from ignorance.
Then the thing to do would be to dispel my ignorance by answering my simple questions? Let me put it another way. What is the smallest value for FSCI that an object can have? If I ask for an object with 1 bit of FSCI I'm ignorant. What's the minimum then? 2? 10? 100? 500? If I ask for an object with 499, 500 or 501 bits of FSCI I'm ignored. Jerry
There is no FSCI in a pen
And yet if you found a pen on the heath you would believe it to be a designed object. How can a designed object contain no FSCI? If it does indeed have FSCI then presumably it's over 500 bits of it as it's a designed object. How many bits of FSCI does that pen have, if it has any at all? Mr Charrington
"I am intrigued. How did you get 27^21" The usual way. Two corrections, 27^22 and “ivjioe kjfe faod tm ql" jerry
Mr Charrington, When you say things like: 1 bit of FSCI? It proves you are arguing from ignorance. So why do you choose to do so? I say it is because you cannot support your position and have no desire to learn what it is you are arguing against. Joseph
BillB, One would measure the information in an object by determining what it took to bring said object into existence. That said, for a GA just count the bits it contains, and that would give you the minimum amount of information (SI) it takes. The same goes for a pen. The bottom line is it is a measurement. Information. The information age. Information technology. Information theory. When IDists speak of complex specified information they are using it in the following sense: information - the attribute inherent in and communicated by one of two or more alternative sequences or arrangements of something (as nucleotides in DNA or binary digits in a computer program) that produce specific effects. When Shannon developed his information theory he was not concerned about "specific effects". It is producing those specific effects which makes the information specified! And that is what separates mere complexity from specified complexity. Joseph
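Taken literally, "just count the bits it contains" is a one-line measurement; a sketch in Python (the file name is hypothetical, and whether stored size is the right measure is exactly what Mr Charrington's compressed-versus-uncompressed question above is probing):
______________
import os

# Minimum information measure per the suggestion above: stored size in bits.
# "my_ga.py" is a hypothetical path to some genetic algorithm program.
size_bits = os.path.getsize("my_ga.py") * 8
print(f"{size_bits} bits of storage")
______________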
#285 I am intrigued. How did you get 27^21? Mark Frank
I made an error in the last post and it should be 27^21. If anyone else wants to correct the math, go right ahead but the basic idea is there. jerry
"the FSCI in a GA, in a pen, a rock or anything else for that matter." There is no FSCI in a pen or a rock though I could imagine how some intelligence might make it so. I assume it is a normal pen. In a GA, just use the letters or individual units of code and do an analysis such as the the amount of variation in an English sentence. "Methinks I am contrary" as opposed tp "ivjioe kjfe faod tm q" or 2^21 for each. Neither would be FSCI except there exist an independent mechanism to relate one to something else. Both of these other entities (that which does the relating and that which being related to) are completely independent of the initial entity or data set which is the source of information. Now maybe in some other language or by using some encryption technique we can relate the second string of information to something else. If that is true, then that string is also FSCI. jerry
KF-san, Thanks for the replies. I brought up Second Life because it is a model, just as we have been talking about on this thread. Since you are on record as supporting a position on simulation, I was attempting to use a reference to a well known simulation to clarify your position. As you say here, an earthquake underneath Linden Labs might temporarily shut down the simulation. Just prior to that, would the simulated ground be shaking? This is all relevant to your contention that the results of a GA can be discounted because additional sources of error were not introduced into the operating system and hardware. Next unanswered question - I give your C*S*B procedure an array of data 800*600*24 bits, scanned from a photo of Mt Rushmore, ca 1925. You say the procedure should infer design. This is not a false positive because of the nature and structure of the photograph. What do you mean by nature and structure, and how would that apply to a photo of TV static? (I know I am dating myself because in the new era of digital TV there is no static.) Nakashima
Onlookers: The balance of this debate on its merits is quite evident, in particular the following:
1-> Despite numerous requests, a blanket refusal to demonstrate how FSCI can be calculated for a specific example.
2-> Instead, resorts to rhetorical claims and gestures towards products of human design with claims of 'Look, FSCI, it's obvious onlookers.'
3-> This, coupled with a clearly demonstrated failure to understand computational models of biology and nature,
4-> and inventions of islands of function as an evidence free rhetorical dismissive device, ignoring established and empirically based concepts like neutrality,
5-> all clearly illustrate the vacuity and lack of scientific rigour that underlies FSCI and the selective hyperskepticism employed to avoid a discussion of the issues on their merits.
In short: cut the flowery rhetoric KF, put your money where your mouth is and show us how to calculate the FSCI in a GA, in a pen, a rock or anything else for that matter. BillB
Onlookers: The balance on the actual merits continues to be quite evident. Observe, especially, that objectors to the inference from FSCI as a reliable sign of design are still unable to come up with a credible, empirically observed counter-example. This is directly relevant to the point of the original post, that life reflects a distinct extra that is at least partly captured by functionally specific, complex information, which is inexplicable on undirected chance + necessity. Jerry: You have a point, i.e. the thread is plainly showing selective hyperskepticism at work on the objectors' side, now coming up as endless distractive or dismissive objections. Re CH @ 269: Please first look up 177 above, or my always linked, on metrics for FSCI. (In short, you have set up and hope to knock over a strawman that reflects at best a lack of awareness of what is being discussed. For instance, the simple metric as I gave it will never rule anything to be 1 bit of FSCI. The point of complexity is that beyond a certain threshold, 500 - 1,000 bits, functional target zones in the overall configuration space -- islands of relevant function -- are so isolated that the search resources of the cosmos are inadequate to credibly find such, apart from intelligent direction.) Nakashima-San, 270:
how should a Second Life avatar react to an earthquake around San Francisco (the location of Linden Labs’ servers)?
1 --> How is this even remotely relevant to the issues in this thread? (Scratching head . . . )
2 --> I observe from Wiki: >> Second Life (SL) is a virtual world developed by Linden Lab that launched on June 23, 2003 and is accessible via the Internet. A free client program called the Second Life Viewer[1] enables its users, called Residents, to interact with each other through avatars. Residents can explore, meet other residents, socialize, participate in individual and group activities, and create and trade virtual property and services with one another, or travel throughout the world, which residents refer to as the grid . . . . Built into the software is a three dimensional modeling tool based around simple geometric shapes that allows a resident to build virtual objects. This can be used in combination with the Linden Scripting Language which can be used to add functionality to objects. More complex three dimensional Sculpted prims (colloquially known as sculpties), textures for clothing or other objects, and animations and gestures can be created using external software . . . >>
3 --> All of this is of course pretty explicitly functional, complex and specific software creating a model world that is of course the product of a network of intelligent designers. That is, it exemplifies that FSCI is observed to be the product of design.
4 --> If you mean that a sufficiently strong earthquake could directly or indirectly cause massive perturbation to the servers, well, that would be plainly possible, and would on overwhelming probability result not in improvement but in destruction of performance. G'day. GEM of TKI kairosfocus
I just noticed a post addressed to me several days ago that I missed. DK, I have never used or presented such logic on UD. If you say that I have, then provide a link...or, you could simply promise not to put words in my mouth in the future. :) - - - - - - - By the way, I can understand you wanting to push such logic. It would make the logic presented by the following idea much easier to ignore: Chance and physical necessity cannot be rationally offered as the mechanisms that caused the organization of inanimate matter into living tissue. To do so would be a direct contradiction of what we empirically know to be true of each of these mechanisms. Therefore, only chance and physical necessity may be used to explain the organization of inanimate matter into living tissue. Upright BiPed
BillB
algorithm: A finite set of unambiguous instructions performed in a prescribed sequence to achieve a goal… …think about where you are likely to find naturally occurring things that are by definition products of human endeavor.
Only humans use algorithmic instructions?
- "DNA contains the specific instructions needed for an organism to develop, survive and reproduce. To carry out these functions, DNA sequences must be converted into messages that can be used to produce proteins, which are the complex molecules that do most of the work in our bodies." –genome.gov
- "The sequence of nucleotides in a gene gives it meaning by storing the instructions for building the other molecules necessary for life." –National Academy of Sciences
- "DNA's instructions are used to make proteins in a two-step process. First, enzymes read the information in a DNA molecule and transcribe it into an intermediary molecule called messenger ribonucleic acid, or mRNA. Next, the information contained in the mRNA molecule is translated into the "language" of amino acids, which are the building blocks of proteins. This language tells the cell's protein-making machinery the precise order in which to link the amino acids to produce a specific protein." –US National Library of Medicine
- - - - - - - -
If your comment was an attempt to dismiss the algorithmic nature of DNA, then it was factually silly on its face. And if it was an attempt to dismiss what we see because we are human and we give things human names, then your comment was old and tired. If I remember correctly, it's number six or seven on the All-Time Materialist Retorts list. As was already discussed on this thread, if every human on the planet died tomorrow, the symbol system instantiated inside DNA would keep right on acting as symbols for discrete physical substances and processes. Those symbols and the discrete physical substances and processes they represent would continue to have no physical connection to each other. The entire system has nothing whatsoever to do with humans observing it, nor with giving it the name "algorithm".
Perhaps you meant: “I was looking for naturally occurring complex processes”
Chemical processes seek stasis, BillB. DNA does precisely the opposite. Upright BiPed
Hoki,
The very sort of assumptions you are using when you say what you expect the designer to have (or not) done.
Yes, of course. However, my point is that if the universe is designed for human life, then it's a very roundabout way of making human beings: create a very, very large universe and then put human beings in only a single place. Perhaps somebody can clarify why the entire universe is required to ensure human beings happen in just one tiny part of it. If we are special then it's because the universe is not designed to produce us inevitably. Humans are possible; here we are. But inevitable? No. If we were, we'd know about it already, if only from the other human beings blowing up their own planets with nuclear bombs! We'd see and understand that! Mr Charrington
Clive,
No.
Why? Mr Charrington
Mr Charrington, ------"The more special we are, the less the universe was designed to produce us, no?" No. Clive Hayden
Messrs. Charrington and Hayden: Didn't you learn anything from Cornelius Hunter's posts about religious assumptions? The very sort of assumptions you are using when you say what you expect the designer to have (or not) done. Hoki
Clive
That makes us, and life, all the more special.
The more special we are, the less the universe was designed to produce us, no? Mr Charrington
Clive
The universe is not empty of life, maybe you don’t grasp that you’re alive, nor appreciate what it takes for your existence.
If the universe were the size of a pint glass, would you say that the glass was "full of life"? Or would you on first glance simply see an empty (of life) glass?
If there be any room between the ground and the end of the universe, you have to have space.
True. True. But this much "space" between "the ground" and "the end of the universe"? As you happen to be sitting on "the ground", don't you think your viewpoint may be biased? Seems to me it's much more likely the purpose of the universe is to generate as much empty space as possible. That is what makes up the majority of the universe, after all. Life just seems to be an extra blip along for the ride. You say the universe is designed for life; I say it's designed for life in the same way that a 747 is designed for moving bacteria around the world, or casting plane-shaped shadows. And, Clive, do you think the "designer" could be an alien or not? Could the designer of life be different from the "designer" of the universe? You seem to be avoiding this. How many designers do you think there were, Clive? Mr Charrington
Mr Charrington, -------"Then would it not be more reasonable still for the Designer to simply create life on Earth without the rest of the universe? What does it add? Why is anything outside of planet earth even there in that case? Just to give us something to look at in the night sky?" Are you serious? If there be any room between the ground and the end of the universe, you have to have space. The universe is not empty of life, maybe you don't grasp that you're alive, nor appreciate what it takes for your existence. The universe is full of life, the argument from size, that you're attempting to make, is very feeble. If we say that sizes determine value, then you're less valuable than the closest tree. "In popular thought, however, the origin of the universe has counted (I think) for less than its character - its immense size and its apparent indifference, if not hostility, to human life. And very often this impresses people all the more because it is supposed to be a modern discovery - an excellent example of those things which our ancestors did not know and which, if they had known them, would have prevented the very beginnings of Christianity. Here there is a simple historical falsehood. Ptolemy knew just as well as Eddington that the earth was infinitesimal in comparison with the whole content of space. There is no question here of knowledge having grown until the frame of archaic thought is no longer able to contain it. The real question is why the spatial insignificance of the Earth, after being known for centuries, should suddenly in the last century have become an argument against Christianity [or Design]. I do not know why this has happened; but I am sure it does not mark an increased clarity of thought, for the argument from size is in my opinion, very feeble. When the doctor at a post-mortem diagnoses poison, pointing to the state of the dead man’s organs, his argument is rational because he has a clear idea of that opposite state in which the organs would have been found if no poison were present. In the same way, if we use the vastness of space and the smallness of earth to disprove the existence of God [and Design], we ought to have a clear idea of the sort of universe we should expect if God [or the Designer] did exist. But have we? Whatever space may be in itself – and, of course, some moderns think it finite – we certainly perceive it as three-dimensional, and to three-dimensional space we can conceive no boundaries. By the very forms of our perceptions, therefore, we must feel as if we lived somewhere in infinite space. If we discovered no objects in this infinite space except those which are of use to man (our own sun and moon), then this vast emptiness would certainly be used as a strong argument against the existence of God [and the Designer]. If we discover other bodies, they must be habitable or uninhabitable: and the odd thing is that both these hypotheses are used as grounds for rejecting Christianity [and Design]. If the universe is teeming with life, this, we are told, reduces to absurdity the Christian [or Design] claim – or what is thought to be the Christian [or Design] claim – that man is unique, and the Christian doctrine that to this one planet God came down and was incarnate for us men and for our salvation. If on the other hand, the earth is really unique, then that proves that life is only an accidental by-product in the universe, and so again disproves [the] religion. Really, we are hard to please. 
We treat God as the police treat a man when he is arrested, whatever He does will be used in evidence against Him. I do not think this is due to…wickedness. I suspect that there is something in our very mode of thought which makes it inevitable that we should always be baffled by actual existence, whatever character actual existence may have. ~C.S. Lewis, God in the Dock, "Dogma and the Universe" (1970) Clive Hayden
Clive,
It is perfectly valid and reasonable that the Designer arranged the entire universe for life on Earth only.
Then would it not be more reasonable still for the Designer to simply create life on Earth without the rest of the universe? What does it add? Why is anything outside of planet earth even there in that case? Just to give us something to look at in the night sky?
What’s interesting to me, is that the possibility that aliens exist is trotted out as an argument against design, but so is the argument of the impossibility of aliens existing.
What's interesting to me is that if aliens exist they are material. If they are material they could not have created the universe, as they are in it too. So while Intelligent Design on the one hand pretends to accept Aliens as a viable mechanism for the origin of life in reality ID proponents also believe "the Designer" created the universe. You can't have it both ways.
Which do you think is the better argument? One thing is for sure, you can’t use both.
You tell me Clive. Do you honestly believe "the Designer" could be an alien? If so, do you accept then that there is more than one possible designer? The designer of life on earth and the creator of the universe?
It would be an unwarranted assumption to assume that we should see more life on the design hypothesis.
It could be, if you could show how it's logical that the universe is both created for life and empty of life at the same time. If you could show not only that the entire universe is required for life but also that it has to be empty of other life for life on earth to happen, then maybe it would be an unwarranted assumption. As long as those things are lacking, well, it's not that unwarranted.
Whether there “should” be more life on that assumption, has no purchase.
If the universe is created for life, then explain how it is not full of life. You have not done that; you have just said it makes us more special. That's not an explanation. Mr Charrington
Mr Charrington, ------"Then why do you suppose is is that in the entire observed universe we appear to be the only life form? That makes us, and life, all the more special. ------I mean, if the cosmos was “designed for that kind of life” would we not expect to see, er, more of it? In fact, not more, just “any other them ourselves” would help your case. In fact, we see none. Despite that you claim the cosmos was designed for life. Right… No, we wouldn't necessarily expect that. It is perfectly valid and reasonable that the Designer arranged the entire universe for life on Earth only. What's interesting to me, is that the possibility that aliens exist is trotted out as an argument against design, but so is the argument of the impossibility of aliens existing. Which do you think is the better argument? One thing is for sure, you can't use both. ------Why do you suppose that is KF, if the entire universe is designed for “this kind of life” that we see only a single example of it? Us?" Because we are special to the design. It would be an unwarranted assumption to assume that we should see more life on the design hypothesis. It is enough to determine that the finely tuned constant's likelihood being a result of blind caprice are vanishingly small for our own existence. Whether there "should" be more life on that assumption, has no purchase. Clive Hayden
"insistent sophomoric dismissal of my understanding of scientific theorising and modelling," kairosfocus, they are just playing games. Whatever you say, they will make up some strange comment such as "FSCI has not been demonstrated because the sky was grey yesterday and Uranus has 25 moons" You will reply that Uranus has 27 moons and then they will say see you are not sure because some of what are called moons are just large rocks temporarily caught in its orbit. And then the debate will go on about the length of the time of each moon in Uranus's orbit and because of the uncertainty, FSCI is uncertain at best as a concept. You will then argue that the number of moons on Uranus has nothing to do with FSCI and they will say prove it. And then they will declare victory and mock you for your lack of understanding of such basic concepts of science. And if you asked what has the sky being grey has to do with anything they will say your lack the basic understanding disqualifies you and FSCI as anything important because it is obviously FSCI. As I said answering their inane comments is just feeding their childish behavior. You are answering the spammers. jerry
KF-san, Declaring victory, Day 2. Here's another unanswered question - how should a Second Life avatar react to an earthquake around San Francisco (the location of Linden Labs' servers)? Nakashima
Kairosfocus, Can you give me an example of something with 1 bit of FSCI? 499 bits of FSCI? 500 bits of FSCI? 501 bits of FSCI?
Similarly, the finely tuned balance of multiple factors that “sets” the observed cosmos to an operating point that is favourable to life points to design of the observed cosmos for that kind of life.
Then why do you suppose it is that in the entire observed universe we appear to be the only life form? I mean, if the cosmos was "designed for that kind of life" would we not expect to see, er, more of it? In fact, not more, just "any other than ourselves" would help your case. In fact, we see none. Despite that you claim the cosmos was designed for life. Right... Why do you suppose that is KF, if the entire universe is designed for "this kind of life" that we see only a single example of it? Us? Any thoughts on that? If the universe is designed for this kind of life then the designer could have done a better job. I can think of better designs for a universe if "this kind of life" was the target. Fill it with air for a start. Give everybody wings. Use all that empty space between the stars for something! So, KF, if the cosmos is designed for life, what kind of life is it designed for? It's not humanity, that's for sure. Mr Charrington
Onlookers: The onward remarks above show just why I have found it necessary to draw a bottom-line summary of the actual state of the case on the merits above, both for the challenged concept of FSCI and the underlying question of the origins of life. (Notice, as the rhetorical dust settles, how to date a valid counter-example to the inductive generalisation that FSCI -- among other similar signs -- is a RELIABLE sign of intelligence is yet to be seen.) As for BB's insistent sophomoric dismissal of my understanding of scientific theorising and modelling, I suggest onlookers examine my first level remarks on epistemology here. [In a nutshell, my personal philosophy of science (and thus of modelling) generally follows Charles Sanders Peirce: science uses abduction to infer to best current empirically warranted explanation, which is elaborated deductively and tested empirically, yielding a hopefully reliable -- but provisional -- inductive generalisation; i.e. weak-form knowledge: well warranted, credibly true belief held provisionally. In the case of models, usually, accuracy is sacrificed in favour of convenience [back of the envelope and all that], but with validation to assure empirical reliability.] Science at its best is an unfettered (but intellectually and ethically responsible) pursuit of the truth about our world, based on empirical evidence. In that context, the key problem with the current dominant evolutionary materialistic paradigm in origins studies is that it a priori forecloses other than materialistic explanation, as may be seen from US NAS member Lewontin's now well-known, ever so revealing remarks:
It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [NY Review of Books, 1997. RL is so convinced that materialism is true that he turns around and uses it as a criterion of truth, not realising that this locks out the possibility of correction.]
And, as I have shown here in my always linked, Lewontin's historically and philosophically ill-judged position is now being enforced by power centres of institutional science [acting as today's equivalent of the Magisterium of old . . . ] and is appearing in courtrooms, education policies and parliaments as well. So, returning to the main point: we have good reason to see that it is empirically reliable that FSCI is known to be produced by intelligence in action, and that we ONLY see it as being so produced where we directly know its source. Similarly, on the search resource of the observable cosmos challenge, we cannot see how undirected chance + necessity can credibly produce such FSCI -- including the codes, algorithms, data structures, information and implementing machinery found in the cell, which on just DNA would need ~ 600 - 1,000 k bits (relative to observed minimally functional independent life). We have excellent empirically based warrant to conclude that life, from the outset, reveals that it is designed. Similarly, the finely tuned balance of multiple factors that "sets" the observed cosmos to an operating point that is favourable to life points to design of the observed cosmos for that kind of life. And, given the problem of the materialistic Magisterium, these two empirically grounded inferences are quite enough to trigger a scientific revolution; as is beginning to happen. That this revolution just happens to be more theism-friendly (and more John 1:1 and Rom 1:19 - 20, etc) than institutional science has been in recent decades is just a matter of how the cookie crumbles: atheism is no more privileged a scientific outlook than the theism of the founders of science who sought to think God's thoughts after him. After all, the proper method of core philosophy -- here considered as comparative analysis of worldviews -- is comparative difficulties across all live options. GEM of TKI kairosfocus
UB, I think it would be a good idea to have a page dedicated to Abel's work here at UD. It seems to be the best-kept secret in ID theory. It is fairly dense reading for us laypeople, however. Do you know of a gentler, more accessible introduction to his work? herb
UB: I would start here:
algorithm: A finite set of unambiguous instructions performed in a prescribed sequence to achieve a goal, especially a mathematical rule or procedure used to compute a desired result.
and think about where you are likely to find naturally occurring things that are by definition products of human endeavour. Perhaps you meant: "I was looking for naturally occurring complex processes". I would ask a chemist if I were you; I'm a computer scientist. BillB
Mr BiPed, Sorry, I missed your comment 207! No, I don't know of any similar system. Nakashima
BillB, could you please step in and help Nakashima with comment 207? Thanks. Upright BiPed
Does anyone know anything about the evolution of early life or proteins? How can we say what it is or isn’t consistent with?
I get the feeling KF thinks he does, but it's a good question, which is why I really wish I could track down the paper to check on their methodology. BillB
BillB:
there is research indicating how early life and protein evolution is consistent with thermodynamics.
Does anyone know anything about the evolution of early life or proteins? How can we say what it is or isn't consistent with? ScottAndrews
KF, your comment at 257 makes it quite clear that you are still struggling with the basic concepts and methodologies behind scientific models of physical reality. Not to worry, these can be difficult concepts; however, I don't see much point in my trying to explain again, so I'll give up now, particularly in light of your move towards rhetorical dismissal and insult in 256, your proclaimed victory in 259 and the tiresome appeals to onlookers. By the way, you said this:
Physics does not program ponds to make life molecules, at least if the thermodynamics numbers are to be believed.
Making blind assertions will not win arguments. Unfortunately I haven't yet managed to track down the reference, but there is research indicating how early life and protein evolution is consistent with thermodynamics. From what I gather, some of those materialistic scientists you grumble about are actually doing the research rather than just making claims. Incidentally, and as I've alluded to several times - I'm not a materialist; if you want to refer to me, even indirectly, as one then that's fine, but don't complain if I start calling you a creationist. BillB
KF-san, Declaring victory does not actually answer my questions. With regard to OOL, do you agree that the "function" under discussion is "accumulates reaction products faster than they are broken down" or something similar? Nakashima
PS: I see some further remarks by Nakashima-san. Briefly, again: the photo of a face can be rendered into a conjunction of polygons with colour, shading etc, and encoded accordingly. The specification of Geo W's face is far more precise than that of a face-like object pattern, as can be seen in the Man on the mountain (and I can see 2 - 3 other vaguely face-like features in the latter). "Snow" in a picture only becomes specific if used as a reference, as I have already discussed this morning and before: random disturbances do not materially affect its snow-like distribution. Small changes destroy the resemblance in a portrait. The latter sits on an island; the former is in the sea of non-function, until an intelligence selects the pattern to use as a base for a cipher or the like. G'day, again. Punto final. kairosfocus
Onlookers: It seems to me that there is now more than adequate evidence and interaction to see the true balance on the merits for this thread. Namely: 1 --> information that is functionally specific and existing as islands of function in a vast sea of possible (but overwhelmingly non-functional) configurations is a key component of observed life, and thus credibly needs to be explained as to its source. 2 --> FSCI, in EVERY case of known source, is the product of intelligence. 3 --> Not least, this is because the search resources of the observed universe are simply vastly inadequate to search a significant part of the config space. So, it is beyond the reasonable reach of unaided trial and error. 4 --> In short, FSCI is, on the evidence in front of us, direct and indirect, positive and negative, best explained by intelligence. 5 --> So, on seeing FSCI in DNA etc, this is a strong clue that life is rooted in intelligent design, and indeed, the bio-information is as fundamental a component as the associated chemistry and physics. 6 --> But, since that is unacceptable to evolutionary materialists, it is being stoutly resisted, by all sorts of objections; all of which fail, and most of which are irrelevant. (Some even manage to provide inadvertent support to what they try to overthrow.) ___________ And, that's the bottomline. G'day GEM of TKI kairosfocus
KF-san, In the case of Mt Rushmore in the first and second [digitised] photos, in both cases design would be resident in the nature and structural patterns of a photograph. A photo is a designed object. No false positive on “design” there. I'm not sure what you mean by the nature and structural patterns of a photograph. Do you mean things like "the sky occupies the upper third of the image"? It seems that this argument can be applied to any data set chosen by a human experimenter - the act of choosing makes it FSCI. If I take a photo of the static on my TV and give you the raw bits, will you infer design because of the nature and structure of the photograph? As to specification by compressibility, this is of course an allusion to the fact that a specification is as a rule simply describable and as well to the Kolmogorov applications used by others in this general field. I don't really care what it is an allusion to, the Hope Diamond or last week's weather forecast for Pluto. Did I choose S = 1 correctly based on the low compressibility of the data, correctly following your procedure? Nakashima
PPS: BB, the computational substrate as you call it is materially different from the physical one. You have rhetorically dismissed the specific causal factor that makes the functional organisation of the GA not only possible but routinely actualised. The computer, repeat, simply mechanically executes instructions fed to the processor, whether or no they make sense: GIGO. It is the designer that makes the instructions do a specific job, here: executing a genetic algorithm program. Physics does not program ponds to make life molecules, at least if the thermodynamics numbers are to be believed. [Cf TMLO for its analysis.] kairosfocus
BB: You come across as having skimmed to make further "objections ad infinitum" rather than to understand what the subject is about and address it seriously on its merits. For instance, it should come as no surprise to you that the experimental method is in significant part about testing predictions of alternative hypotheses on actual independently known cases. FSCI, on that standard, is ever more plainly a reliable sign of intelligence. That is not a way to make progress in your own understanding or to have a reasonable discussion. But, you are inadvertently telling the intelligent onlooker just where the true balance is on the merits. GEM of TKI kairosfocus
PS: VJT's images are very useful. George Washington's facial image is plainly far more specifically facial and accurate to an individual than the man of the mountain. Analysis in terms of polygons and colour-shading would be relevant, as well as the now fairly common analysis of faces. (Cf interesting discussion here on the average = beautiful thesis. Turns out the REALLY "gorgeous" face is not average, though the average is relatively attractive.) kairosfocus
It is plain that the GA was written by an intelligent programmer, and that when executed, it mechanically pursues a path based on an algorithm that is just as much a matter of intelligence. Its output will exhibit FSCI, and that will be directly traceable to the action of the author and the designer of the PC that ran it.
Yes, as I have been saying ALL ALONG, GA's are designed. According to you they generate this FSCI, except they don't because they are actually just transferring it from the person who wrote the GA? I'm still unclear on exactly how you measure or quantify this transference. If I run the same experiment several times, will it always produce the same level of FSCI? Can it produce more than the thing that created the GA? Is FSCI additive or cumulative; can I use a GA to generate a billion robots that assemble to make a thing with more FSCI than me? I'm also still unclear on how you would go about calculating the FSCI in a GA. I'm not trying to be obtuse, I just want you to demonstrate that it is possible and that you understand how it can be done. If you can't then it is not a problem, I would just rather not rely on your promise that it is obvious.
Now, rather minor adjustments of facial characteristics are enough to destroy recognisability as a specific individual, i.e. — surprise! — we have here a config space topology exhibiting definite islands of function.
You have one island of function here, that of a human face. Making minor modifications to it will turn it into a different human face. Whether it is recognizable as a particular person depends on who is looking at it. If you are trying to make allusions to nature here then it fails, because nature is not trying to achieve a fixed single target such as a particular person's face. All you have done by setting this as a target is define a fitness heuristic that is 'un-natural' and explicitly creates the islands of function you are so obsessed with. Consider how the heuristic changes if you allow anything recognizable as any kind of face from the animal kingdom.
Similarly, in a GA, its code is not allowed to vary at random as part of the normal execution. Nor that of the underlying operating system. And, neither the OS nor the GA — both of which massively exhibit FSCI
Yes, the GA operates within the constraints imposed by the computational substrate, just as the self-replicating organisms in nature operate within constraints imposed by the laws of physics. In this way the computational substrate and simulation environment, complete with GA, are analogous to the physical world. Trying to argue that they are bad analogies because the rules that constrain them don't randomly change is like saying that the laws of physics should randomly change. Clearly they don't, so neither should those constraints when used in a model. This is just incoherent distraction.
Recall, at just 1,000 bits of functionally specific information-bearing capacity, we are dealing with a situation where the resources of the entire observed cosmos would be unable to scan 1 in 10^150 of the possible configurations.
Yes - more incoherent irrelevance. The universe is entirely parallel; it doesn't do things 'one step at a time' like a computer. Not all configurations need to be 'scanned', as cumulative processes can 'search' much more effectively in the types of search spaces found in nature. If a GA operating in a simulation can 'generate' FSCI, then why can't natural selection operating in the world generate FSCI? After all, I'm not arguing that the universe wasn't intelligently created, so if it was created then, just as with the GA, FSCI ought to be produced. BillB
VJT & Jerry: Thanks. You both have valid points. Others: Very little of what has proceeded further is going anywhere other than what should be trivial or obvious. For instance, the program above is obviously in itself FSCI-bearing, as Jerry highlighted. It is fucntionally specific and complex beyonf the complexity of 143 ASCII characters -- disturb it at random and see what happens to its executability, even at 5% probability fo mutation of the individual character. (And, a basic FSCI metric relevant to what was just inferred was long since provided above and in the linked, as well as in the Weak Argument Correctives, so BB is simply being willfully obtuse.) It is plain that the GA was written by an intelligent programmer, and that when executed, it mechanically pursues a path based on an algorithm that is just as much a matter of intelligence. Its output will exhibit FSCI, and thast will be directly traceable to the action of the author and the designer of the PC that ran it. THIS INSTANTIATES THAT OBSERVED CASES OF FSCI WHERE WE DIRECTLY KNOW THE CAUSE, ARE PRODUCTS OF DESIGN. As to specification by compressibility, this is of course an allusion to the fact that a specification is as a rule simply describable and as well to the Kolmogorov applications used by others in this general field. In the case of Mt Rushmore in the first and second [digitised] photos, in both cases design would be resident in the nature and structural patterns of a photograph. A photo is a designed object. No false positive on "design" there. [And it is relevant to see that this is yet another case of millions of cases of FSCI where the cause is independently known and turns out routinely to be intelligent. In short, distractive and dismissive rhetoric above notwithstanding, the challenge to identify a case of FSCI which is of known origin and has not originated in intelligent action still stands unmet.] At the second level, the issue is the difference between a mountain in its more or less natural state -- which is a complex shape driven by law + chance [which would of course lead to non-inference to design -- complex but not functionally specific in any relevant sense] -- and in the modified state due to design by a sculptor. In the latter case, the mountain was modified to accord with the facial characteristics of four historically important individuals, which indicates a rather tight specification, which can be regarded as functional [though by "function" algorithmic or linguistic function are far more relevant to our concerns on e.g. OOL . . . note the red herring led out to an intended strawman rhetorical tendency here]. Now, rather minor adjustments of facial characteristics are enough to destroy recognisability as a specific individual, i.e. -- surprise! -- we have here a config space topology exhibiting definite islands of function. Bit content of the Mt Rushmore to get the degree of precision of shape to be recognisable as portraits of specific persons will obviously be well beyond 1,000 bits. (As to Old Man of the Mountain, this is more of a function of our internal tendency to see faces -- socially and protectively important -- than of any particular shape; the shape being easily recognised as a pattern stemming from chance + necessity. The shape is complex but not particularly specific.) 
If one were to rework Mt Rushmore to resemble a natural mountain, and one were to look at the mountain, it would seem to be complex but not particularly specific -- moderate random variations in its shapes etc would not destroy the class of pattern etc [regarding pattern as a type of function]. Thus, it would then not exhibit functionally specific constraints on its complexity, i.e. it would NOT be FSCI. Indeed, after the collapse of the "man" feature, the mountain in New Hampshire is still a mountain with the same general class of features; just one interesting feature used as found-object art has now eroded away. However, say one were to digitise a picture of a natural mountain and use some of the binary code string to form a cipher. For instance, in the red spectrum part of the code for the photo, use the least significant bit to store ASCII text, then take the difference between the natural mountain's picture and the modified picture to extract the text, i.e. steganography. In this case, the natural mountain as digitised in one particular image would be a specification, and functionality would be derived from subtle differences of a related image from it. However, if the original were to be allowed to vary at random, even moderately, the ability to extract information would soon be destroyed. Same, if the cipher-bearing transformed copy were to vary at random. That is, an island-of-functionality topology has now emerged. (Think of how in the US Colonial era, leaves were incorporated into runs of bank notes to foil counterfeiting: in effect a particular found-object pattern was now used as a specification, and divergence therefrom was proof of counterfeiting. The NH state quarter has some of that same functionality -- if a claimed quarter turned up with the Man significantly modified, it would have to be counterfeit.) Rob is of course right that function is in a context, but that was not at issue; I trust the above on steganography brings out a little more of what I meant. In particular, random aspects, as used to function, are always used in a controlled fashion. E.g., lottery winning tickets and numbers must fit certain specifications. Similarly, in a GA, its code is not allowed to vary at random as part of the normal execution. Nor that of the underlying operating system. And, neither the OS nor the GA -- both of which massively exhibit FSCI -- arrived by random processes that rewarded observed algorithmic function. That, at a basic level, for obvious search space exhaustion reasons. Which illustrates the issue FSCI raises -- not the improvement of function through whatever hill-climbing algorithm one may wish to use [even one that uses controlled randomness], but the arrival on the shores of an island of function in a vast sea of non-function. Recall, at just 1,000 bits of functionally specific information-bearing capacity, we are dealing with a situation where the resources of the entire observed cosmos would be unable to scan 1 in 10^150 of the possible configurations. So, until you get TO shores of function, you have a problem of specifying an unintelligent mechanism that can cut down on the implications of mere trial and error in a vast sea of non-function. GEM of TKI kairosfocus
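For onlookers who want to check the arithmetic behind that 1,000-bit threshold, a minimal sketch in Python (the 10^150 figure is the cap on the observed cosmos's search resources claimed in the comment above, taken as given rather than computed here):

import math

# Sketch of the arithmetic behind the 1,000-bit threshold.
# Assumes ~10^150 as the cap on total search resources of the
# observed cosmos, as asserted in the comment above.
config_space = 2 ** 1000        # distinct 1,000-bit configurations
search_resources = 10 ** 150    # generous cap on possible trials

print(len(str(config_space)) - 1)        # ~301, i.e. 2^1000 is ~10^301
print(search_resources / config_space)   # ~9.3e-152, under 1 in 10^150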
Mr Jerry, I did not know that God was writing comments on this site. Did He also write the code that Nakashima gave us? While I think you are expressing it somewhat jokingly, this question of where to assign credit (or blame) for FSCI is key. Do we give credit to the first cause or to the last cause? If the first cause, then I understand naming God as the author of the FSCI. If the last cause, then the GA itself is the author of the FSCI -- not of itself, but of the Pop data array inside it. (Note that the bits in Pop come from the random function or are copied from other places in Pop. The ultimate source of every bit in Pop is random.) Nakashima
Mr Vjtorley, Hi. The idea you describe would be appropriate for estimating the FSCI of a sculpture - the 3D object itself. What KF-san has been discussing is simply a 2D image, whether of Mt Rushmore, the Mona Lisa, a block of text or a screenful of static. Your idea is similar to how objects are often represented in computer graphics: the surface is approximated by a set of triangles. If you arrived at this yourself, I congratulate you. Nakashima
Nakashima, The code you presented, when implemented on an appropriate computer, is FSCI. It is information that is complex and specifies another entity which has a function. Sparc, "Indeed, every one of your comments contains FSCI. Just like KF's posts, the FSCI content of your comments is a constant that equals God." I did not know that God was writing comments on this site. Did He also write the code that Nakashima gave us? jerry
Mr Nakashima You wrote:
Let me ask you about a thought experiment. Let’s take two photos, taken from the same vantage point, the same season, the same time of day, of Mt Rushmore, the first in 1925 (before the sculpture) and the second in 1941 (after the sculpture was completed).
FYI: Here is a picture of the Old Man of the Mountain in New Hampshire, before it was destroyed. http://www.funnycoke.com/om4.jpg Here is a photo of George Washington's face under construction on Mt. Rushmore, in 1932. http://www.imageenvision.com/md/stock_photography/men_constructing_mt_rushmore.jpg To calculate the bit content, you could treat each face as a set of equal-sized planes (like the faces of a pyramid, a cube, or an octahedron) and then calculate how many of these planes you'd need to generate a pretty good likeness of each face, with the same amount of detail. Each plane has its own mathematical equation. You could assign one bit of information to each plane. You then asked about morphing from one to the other.
In which of these images is it correct to infer design, and in which of them is it incorrect?
The number of bits in the Old Man of the Mountain is clearly within the scope of nature to generate. Let's say (very generously) that a structure of equivalent specified complexity to the Old Man of the Mountain is generated by natural processes, somewhere in the world, once every year. We can use that fact to calculate the probability of obtaining N bits. George Washington's face has considerably more than N bits. You should be able to calculate the number of bits that natural processes might generate once every 4,536,000,000 years (the age of the Earth). That's your cutoff point. vjtorley
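To put a number on vjtorley's cutoff: probabilistic resources scale as the logarithm of the number of trials, so granting one such structure per year over the age of the Earth only raises the bar by about 32 bits over the Old Man's N-bit baseline. A quick sketch in Python (the one-per-year rate is vjtorley's generous assumption, not a measurement):

import math

trials = 4_536_000_000          # one structure per year over Earth's age
extra_bits = math.log2(trials)  # extra bits those additional trials buy
print(round(extra_bits, 1))     # ~32.1 bits above the N-bit baseline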
Nakashima, Ah, excellent; if I'm not mistaken, it's the microbial GA! I was considering posting the same thing myself. I was wondering how the representation of the algorithm will affect the FSCI. Perhaps your pseudocode could be compared to a plain text description and a compiled executable. Should they all contain the same amount? BillB
Mr Jerry, If you want to discuss the FSCI of a GA, I thought it would help if we had one to reference, so I wrote this pseudocode during lunch. It is a steady-state GA with uniform crossover. It is about the smallest GA I could write, but it should still work. Nakashima
PopSize = 1000
IndSize = 1000
MaxTime = 1000
mutationRate = 0.05

allocate Pop[PopSize, IndSize], Fitness[PopSize]
for i = 1, PopSize
  for j = 1, IndSize
    Pop[i, j] = rnd(0, 1)
  next j
  Fitness[i] = evaluate(Pop[i, *])
next i

allocate NewInd[1, IndSize]
for t = 1, MaxTime * PopSize
  for j = 1, IndSize
    p1 = rnd(1, PopSize)
    p2 = rnd(1, PopSize)
    if Fitness[p1] > Fitness[p2] then newBit = Pop[p1, j] else newBit = Pop[p2, j]
    if rnd(0,1) < mutationRate then newBit = not(newBit)
    NewInd[1, j] = newBit
  next j
  p3 = rnd(1, PopSize)
  Pop[p3, *] = NewInd[1, *]
  Fitness[p3] = evaluate(Pop[p3, *])
next t

evaluate( Ind ) { }

Nakashima
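For anyone who wants to actually run Nakashima's pseudocode, here is one possible Python rendering (a sketch: the bit-counting fitness function is a placeholder of mine, since evaluate() was left empty, and the sizes are scaled down so it finishes quickly):

import random

POP_SIZE = 100          # scaled down from 1000 so the sketch runs quickly
IND_SIZE = 100
MAX_TIME = 10
MUTATION_RATE = 0.05

def evaluate(ind):
    # Placeholder fitness: count of 1-bits (the pseudocode left this empty).
    return sum(ind)

# Initialise a random population and score it.
pop = [[random.randint(0, 1) for _ in range(IND_SIZE)] for _ in range(POP_SIZE)]
fitness = [evaluate(ind) for ind in pop]

for t in range(MAX_TIME * POP_SIZE):
    new_ind = []
    for j in range(IND_SIZE):
        # Per-bit tournament: the fitter of two random parents donates the bit.
        p1, p2 = random.randrange(POP_SIZE), random.randrange(POP_SIZE)
        bit = pop[p1][j] if fitness[p1] > fitness[p2] else pop[p2][j]
        if random.random() < MUTATION_RATE:
            bit = 1 - bit                    # point mutation
        new_ind.append(bit)
    # Steady state: the offspring overwrites a random population slot.
    p3 = random.randrange(POP_SIZE)
    pop[p3] = new_ind
    fitness[p3] = evaluate(new_ind)

print(max(fitness), "of", IND_SIZE, "bits set in the best individual")

One observation that bears on the mutation-rate discussion elsewhere in this thread: at the 5% per-bit rate in the pseudocode, an average of five mutations hit every 100-bit offspring, so selection can only climb so far before mutation pressure balances it; dropping MUTATION_RATE to around 0.001 makes the hill-climbing much more visible.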
kairosfocus:
the randomness has no meaning or function in itself.
Can you name anything that has function "in itself"? Random sequences are useful in many situations. That you deem them non-functional seems rather arbitrary.
Complex, but not specific in itself.
In your always linked, you say that specificity can be identified through functionality. Random sequences serve several important functions.
And, to move from RA decay to “encoded” string*, we have brought in: algorithms, codes, processing and storage, etc; all of which are the work of intelligent designers.
Ah, so information in AGTCCTGACTTCAGGGCT comes from the intelligent designer who encoded the genetic sequence, not from the actual DNA. And here I was thinking that functional DNA was an example of FSCI.
The onward process of intelligently using the sequence, however generated, is what would give it specificity and function. AFTER that process has been completed, the originally random sequences have now become specific, complex and functional.
So random phenomena do have FSCI, but only after they're used for something. So why do you credit the FSCI to the consumer rather than the producer? Is there any reason other than the fact that the latter would invalidate your claims?
Let's see, how many bits does it take to code and store the analysis of tidal motion under gravitational forces? How many to store the reports on ecological function of tides? [In short, the INFORMATIONAL aspects come from outside the tidal system and processes.]
That makes no sense. It sounds like you're saying that there is no information in tidal behavior until we record and report that previously non-existent information. Again, why is this not also true for biological information?
They are not INFORMATIONALLY functional in themselves.
Again, can you name anything that is informationally functional, whatever that means, in itself?
Again, the information and its specified complexity and functionality are not the product of the tides as such but of the process that an intelligent agent applies.
By that logic, you're creating the information in this comment as you view and mentally process the pixels. Any critiques of this comment should therefore be directed to yourself. R0b
Jerry:
By the way, FSCI is obvious, and in order to see where it is in a GA you would have to have the algorithm itself to examine. It exists very clearly in language, computer programming and in DNA. All are clear examples. Bringing up other examples is sometimes problematic. Something may or may not have FSCI, but there are probably examples where it is not obvious. But it does not exist in nature, and that is the point. If one wants to pursue it to other things, go right ahead, but it is not relevant to the basic argument. All the other attempts are just extending it to intelligent processes, not natural ones.
Indeed, every one of your comments contains FSCI. Just like KF's posts, the FSCI content of your comments is a constant that equals God. sparc
Jerry,
By the way, FSCI is obvious, and in order to see where it is in a GA you would have to have the algorithm itself to examine. It exists very clearly in language, computer programming and in DNA. All are clear examples. Bringing up other examples is sometimes problematic. Something may or may not have FSCI, but there are probably examples where it is not obvious.
This is the point, isn't it? The reason why I'm arguing here is that FSCI isn't obvious, and with these ambiguities I can't just take your word for it when you claim:
But it does not exist in nature ...
It would help if you could give me a demonstration of this:
in order to see where it is in a GA you would have to have the algorithm itself to examine.
Pick whichever one you want; there are plenty of examples on the web. BillB
Billb, I have been here 4 years and this is my analysis of what passes for discourse here. I have only found one (maybe a second) anti-ID person here in that time who did not fit the description I have made. At first there are attempts to be reasonable, but after time they all drift into the same behavior pattern. By the way, FSCI is obvious, and in order to see where it is in a GA you would have to have the algorithm itself to examine. It exists very clearly in language, computer programming and in DNA. All are clear examples. Bringing up other examples is sometimes problematic. Something may or may not have FSCI, but there are probably examples where it is not obvious. But it does not exist in nature, and that is the point. If one wants to pursue it to other things, go right ahead, but it is not relevant to the basic argument. All the other attempts are just extending it to intelligent processes, not natural ones. jerry
KF-san, PPS: Nakashima-San: there are many real life situations that give different equally valid metrics for phenomena. At a very simple level, think about how we measure angles. In these cases, we know how to convert between metrics. We can convert between degrees and radians, as you say. So it would very much help your case to show that these three different metrics are convertible, or explain why they are not. Nakashima
KF-san, PS: Contrast a screen-full of white noise “snow”: complex yes, specific — no. Vulnerable to random perturbation — no. So, the specific variable would set it to zero on the simple FS Bits metric. (Cf here granite vs DNA on Orgel’s remarks.) Well, this is exactly why I asked for your help. Based on your definition, viz. b] Let specificity [S] be identified as 1/0 through functionality [FS] or by compressibility of description of the information [KS] or similar means. I set S = 1 due to the low compression ratio. I said so previously and asked for your confirmation that this was appropriate. Let me ask you about a thought experiment. Let's take two photos, taken from the same vantage point, the same season, the same time of day, of Mt Rushmore, the first in 1925 (before the sculpture) and the second in 1941 (after the sculpture was completed). 1 - do you agree that 'design detection' in the first photo is a false positive? 2 - do you agree that 'design failure' in the second photo is a false negative? Now using Photoshop or some similar tool we create a series of morphs between the two photos, let us say 98 intermediates at 1% intervals. In which of these images is it correct to infer design, and in which of them is it incorrect? Nakashima
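For what it is worth, the KS (compressibility) assignment Nakashima is asking about can be made reproducible with an ordinary compressor; a sketch in Python (using zlib as a crude stand-in for Kolmogorov complexity is my assumption, as is any threshold for calling a ratio "low"; neither is specified anywhere in the thread):

import random
import zlib

def compression_ratio(data: bytes) -> float:
    # Smaller ratio = more compressible = shorter description.
    return len(zlib.compress(data, 9)) / len(data)

snow = bytes(random.getrandbits(8) for _ in range(100_000))   # TV "snow"
text = (b"METHINKS IT IS LIKE A WEASEL " * 4000)[:100_000]    # repetitive text

print(compression_ratio(snow))   # ~1.0: essentially incompressible
print(compression_ratio(text))   # well under 0.01: highly compressible

On this crude measure the static scores as incompressible, which is exactly the ambiguity Nakashima is pressing: whether S should then be set to 1 or 0 depends on which reading of the definition one takes.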
Jerry, well done! There's nothing like a resort to personal attack to help solidify your position in a debate ;) BillB
In every case of FSCI — which is observable, and measurable — where we know the source independently, we see that its source is in intelligence.
I'm still waiting for you to show me how to measure the FSCI of a GA. Without knowing how to do something like that I'm not sure how to apply it to anything else, particularly as your definition of FSCI seems to extend to any observation of nature. ----
Do you not see that first the Mona Lisa or Rushmore have in them particular functions and specification?
If I were to take my stonemason's tools and carefully re-design and re-work Mt Rushmore so that it looks like a natural mountain, would it still contain FSCI? In fact, if I designed a whole planet with oceans and mountains specifically so that life would evolve, would that planet contain FSCI, and if so how much, and how would you tell the difference between my planet and one that formed naturally, without having prior knowledge that a designer was at work?
... And such digital maps or pictures will in every observed case of known origin be DESIGNED. What is so hard to see or accept about that?
Speaking for myself: nothing, you are quite right; every digital map or picture that is known to have been designed has been designed. Of course I am using your definition of design, which would include pictures generated algorithmically or with GA's. We know that it is possible to create mechanisms that can create pictures, which I presume will contain FSCI. BTW, a screen of what looks like white noise could be a representation of a large encryption key, which would be very vulnerable to perturbation. Without knowing the cause of the signal, how do you tell if it has FSCI? I don't actually have a problem with the idea of an intelligent cause to the universe, but I still don't buy your claim that the FSCI you argue is in living systems can't be gathered from the environment and must be placed there by an agency. BillB
"Do you not see that first the Mona Lisa or Rushmore have in them particular functions and specification?" kairosfocus, they all see. What they do is sit around and think how they can make up something that will confuse the issue. They are not driven by desire to understand, only confuse people as best they can. That is why the interesting thing is why grown men do this. Many of these are not children or young adults but supposedly mature adults and they actively engage in this behavior. Have some pity on them. There must be something strange or wrong with them to engage in such behavior. I have relatives who are like them, in their 30's and 40's and some older who do not lead serious lives and find amusement in making other people unhappy. I have other relatives who are serious and leading very productive lives with families who would not waste a second on such juvenile behavior. When the pro ID people here try to be an adult with them, you send them off to find another bit of non relevant trivia that they will hope confuse the issue. They are a rather pathetic lot but they are here so most of the pro ID people feel the need to deal with them. But it just feeds them. It is like the bulk spam servers looking for those who will answer the inane emails sent out. By answering them you are only encouraging them to send out more spam and try to screw up your computer. Think of the anti ID people here as the spammers. If we charged them .01 cents a word to post a comment here, we would see the last of them jerry
kairosfocus,
mechanism that implements actions step by step per the design of an external intelligent agent.
I believe what you just described fits the description for an electronic computing device, like what we normally use for information processing. It is all about information processing, right? I am afraid I don't quite get your point. Cabal
PPS: Nakashima-San: there are many real life situations that give different equally valid metrics for phenomena. At a very simple level, think about how we measure angles. Similarly, think about different digital encodings for images or other analogue phenomena, which is actually a metric process. The issue is fitness for the particular purpose in view and being clear on what convention you are using in the particular context. kairosfocus
PS: Contrast a screen-full of white noise "snow": complex yes, specific -- no. Vulnerable to random perturbation -- no. So, the specific variable would set it to zero on the simple FS Bits metric. (Cf here granite vs DNA on Orgel's remarks.) kairosfocus
Nakashima-san: Do you not see that first the Mona Lisa or Rushmore have in them particular functions and specification? Do you not see that a digitised map, drawing or picture of these will then have in IT FSCI? And that such would be most evidently designed? As to the simple metric: > 1 k bits -- yes; functional as a map or picture according to some coding scheme and associated algorithms -- yes; vulnerable to sufficient random perturbation -- plainly, yes. So, FSCI; and the bit measure of functionally specific bits will give a bit value if beyond 1,000 bits, specific and functional. And such digital maps or pictures will in every observed case of known origin be DESIGNED. What is so hard to see or accept about that? GEM of TKI kairosfocus
Onlookers: The past few days have sufficed to demonstrate the actual balance on the merits. In every case of FSCI -- which is observable, and measurable -- where we know the source independently, we see that its source is in intelligence. We also know, on search space grounds, that such cases are not likely to emerge on chance + necessity only. Thus, FSCI is a reliable sign of intelligence, and in the case of OOL -- the actual main focus for this thread -- it means that OOL is best explained by reference to intelligent action. GEM of TKI PS: BB, all we need is to know the underlying causal force that best explains FSCI. That turns out to be intelligence. We can then explore circumstances to see whodunit all we wish. That tweredun comes first. PPS: BB, in fact the issue is that the observed universe necessitates a cause, per its credible origin and contingent character, typically estimated at 13.7 BYA. Given its evident complex fine-tuning for cell-based life, that is, on best explanation, intelligent, and powerful, and of course not made up from the matter we observe. [Matter does not create itself out of nothing . . . ] Going further, by isolating that intelligences are observed and have characteristic fingerprints, we are in a position to let empirical evidence speak -- without imposing a priori materialism -- on the subject of whether or not immaterial intelligences exist or have left traces in our cosmos. The decisive issue on that hinges not on OOL but on cosmological fine-tuning and inference to its best explanation; FSCI in life simply speaks to intelligence as its cause, but life on earth could in principle be a design by a creator within the cosmos (as TMLO, the foundational design theory technical work, discussed from 1984). The Lewontinian imposition of materialism on science censors the evidence. kairosfocus
KF-san, Nakashima-San: You already have adequate examples on FSCI and related concepts, and on the quantifications thereof. Well, though I am a poor student, I guess you agree with how I've calculated the FSCI above, with regard to a screenshot of the Mona Lisa and static. It seems that your metric always returns either 0 or the number of raw bits. If you can't detect the difference between a picture of Mt Rushmore taken in 1925 and one taken in 1941, you have accomplished nothing, and labored mightily to do so. Nakashima
4] we can easily exceed the 1000-bit threshold if we record the information long enough, so we have FSCI
Congratulations, you are so close to finally understanding. GA's and biological organisms acquire this from the environment through a cumulative process.
In short, as the highlighted exhibits, the FSCI is created by a process of measuring, encoding, recording and compiling in a relevant data structure.
Now you are in trouble again because you seem to be requiring an intelligent observer for FSCI to exist. If naturally occurring processes don't contain FSCI until they are recorded and encoded, then how can we tell if biological organisms contain FSCI when, by definition, studying them turns everything we look at into FSCI?
... an intelligent agent’s action is creating the FSCI, per observation, analysis, encoding, data structure and storage operations.
You need to present a method of differentiating between FSCI that is created when we study something and any FSCI that already exists; otherwise you are just placing FSCI in the eye of the beholder. BillB
Now, implement the relevant controller for us
It's Derek Smith's controller; why should I do his work for him? Also, I haven't ever claimed that GA's are intelligent; the debate is about whether they can generate FSCI. You are claiming that only an intelligence can generate FSCI, so I suppose from your standpoint if a GA can generate it then it must be intelligent, because you have defined FSCI as requiring an intelligence to generate it. I would offer up the decades of research into GA's, but I know that you will reject them out of hand by claiming that the FSCI is smuggled in when the GA is created.
…how much FSCI does a GA contain? Can you supply some numbers please? And then we can start to answer this: …can it generate more FSCI than it contains?
How are the calculations coming along?
And, BTW, I have said nothing about whether or no a designer “must be” immaterial.
So you accept that life on earth can have a material cause then? Presumably this means that FSCI can be the product of purely material forces (i.e. the laws of nature); you just believe that it requires a certain class of natural system (an 'intelligence') to generate this FSCI. Somewhere in your chain of reasoning there has to be an immaterial cause of FSCI or intelligence; otherwise you are accepting that FSCI and intelligence can arise from natural (material) processes. BillB
R0b: Interestingly, some designers of electronic products will fill the spare memory of a micro-controller with junk code and data, just lots of random bytes, in order to confuse any attempts at reverse engineering. This presumably has FSCI because it serves a purpose. Unfortunately I think we are flogging a dead horse here, as KF seems to enjoy his portable goalposts a little too much ;) So, I'm left wondering how much FSCI a machine that turns random noise into its numerical representation can contain.
the algorithm and its instantiation are where the intelligent inputs come in.
Do things like this qualify as 'algorithms'?
Ocean tides serve important functions
BillB
Footnote on Rob's examples: Explaining how the strawman comes up: 1] Rob, 222: random processes in nature, such as radioactive decay, can be encoded as bit sequences of arbitrary length, so exceeding the 1000-bit threshold is no problem. Of course, the process is known to be random, so it is irrelevant to the question of intelligent design -- a 1,000+ bit string of known random numbers would not even be in question. Complex, but not specific in itself. (Cf Orgel on granite, or look at a complex organic tar in the OOL context.) And, to move from RA decay to "encoded" string*, we have brought in: algorithms, codes, processing and storage, etc; all of which are the work of intelligent designers. __________ *I use the string as the primitive of data structures because more complex structures can be expressed as suitably related groups of strings. 2] Any given random sequence is useful for a variety of purposes, including testing and encryption, so it's functional, and therefore FSCI. The onward process of intelligently using the sequence, however generated, is what would give it specificity and function. AFTER that process has been completed, the originally random sequences have now become specific, complex and functional. [To see that, think about what happens if we then onward allow significant random perturbation: the code assignments suddenly break down. A one-time message pad -- by design -- has no further use once used up.] 3] Ocean tides serve important functions, including nutrient mixing and the creation of intertidal ecologies. How many bits does it take to store tidal behavior? Let's see: how many bits does it take to code and store the analysis of tidal motion under gravitational forces? How many to store the reports on ecological function of tides? [In short, the INFORMATIONAL aspects come from outside the tidal system and processes.] The tides, of course, are a product of known mechanical forces, and exhibit as well some random elements. They are not INFORMATIONALLY functional in themselves. (Even the sand bugs that live in the intertidal zones on beaches are sensing and responding to the tidal circumstances, i.e. contain their own processing and programs of response. That's why we used to catch them for bait by slapping down a dead fish on a string as the waves backwashed: the bugs would pop up to investigate what they felt and smelled or tasted. We in turn would spot them and scoop them up. Smart bugs knew how to slip away when we tried that.) 4] we can easily exceed the 1000-bit threshold if we record the information long enough, so we have FSCI. In short, as the highlighted exhibits, the FSCI is created by a process of measuring, encoding, recording and compiling in a relevant data structure. Again, the information and its specified complexity and functionality are not the product of the tides as such but of the process that an intelligent agent applies. 5] How many bits does it take to store the trajectory of the moon? Again, an intelligent agent's action is creating the FSCI, per observation, analysis, encoding, data structure and storage operations. 6] the problems stem from your definitions, and that the objections also apply to your own examples. Quite the opposite. The problems come from Rob's consistent failure to see the -- quite obvious actually -- role of the intelligent agents in getting to functional, specific, coded, complex information.
That sort of blindness to what is otherwise obvious tells us a lot about how the evolutionary materialist view and imposition impoverishes early C21 science, imposing blinders that block quite intelligent and articulate contributors from seeing what SHOULD be obvious. So, we should take warning, and understand that something is rotten in the state of early C21 science. ___________ GEM of TKI kairosfocus
BB: Now, implement the relevant controller for us, so we can see how a GA is a credible example of a real intelligence, rather than simply a mechanism that implements actions step by step per the design of an external intelligent agent. And, BTW, I have said nothing about whether or no a designer "must be" immaterial. (I happen to believe that designers will be informationally based, and am open to the possibility of both materially based and immaterially based designers, just as information is not locked down to any one material expression. [It is materialists who are a priori committed to the idea that all must reduce to matter-energy and space-time, acting through chance + necessity. As I have linked, that runs such into serious difficulties, often expressed today as "the hard problem of consciousness." It is hard because it is trying to resolve a self referential absurdity, implicitly.] ) GEM of TKI kairosfocus
Nakashima-San: You already have adequate examples on FSCI and related concepts, and on the quantifications thereof. GEM of TKI kairosfocus
Rob: When you use RA noise or Zener noise or sky noise etc to make codes, the algorithm is where the coding comes from. The random element is a controlled input that gives an assignment for, say, a one-time message pad. The randomness has no meaning or function in itself. And of course the algorithm and its instantiation are where the intelligent inputs come in. In short, the "example" is a strawman. GEM of TKI kairosfocus
kairosfocus:
YOU ARE HEREBY CHALLENGED TO PROVIDE A CASE IN POINT
Okay, I'll take the bait. I'm a sucker for all-caps. 1) The behavior of random processes in nature, such as radioactive decay, can be encoded as bit sequences of arbitrary length, so exceeding the 1000-bit threshold is no problem. Any given random sequence is useful for a variety of purposes, including testing and encryption, so it's functional, and therefore FSCI. 2) Ocean tides serve important functions, including nutrient mixing and the creation of intertidal ecologies. How many bits does it take to store tidal behavior? Even if we discretize time and depth very coarsely, we can easily exceed the 1000-bit threshold if we record the information long enough, so we have FSCI. 3) Or we could note that the principal cause of tides is the moon, which remains in close proximity to the earth as the earth rotates and moves through space. How many bits does it take to store the trajectory of the moon? Again, even discretizing time and location very coarsely, we can easily exceed the 1000-bit limit. The same goes for the earth's trajectory, which has the function of producing seasons. It's not hard to see what objections can be raised to the above cases. The question is whether you'll notice that the problems stem from your definitions, and that the objections also apply to your own examples. R0b
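R0b's point (1) is easy to make concrete; a sketch in Python (simulated exponential inter-arrival times stand in for a real decay detector, which is my assumption, and comparing successive intervals is a standard trick for extracting unbiased bits):

import random

def decay_bits(n, rate=1.0):
    # Turn simulated radioactive-decay intervals into n unbiased bits by
    # comparing successive inter-arrival times: P(t1 < t2) = 1/2 exactly
    # for independent exponentials.
    bits = []
    while len(bits) < n:
        t1 = random.expovariate(rate)
        t2 = random.expovariate(rate)
        if t1 != t2:
            bits.append(1 if t1 < t2 else 0)
    return bits

key = decay_bits(1024)    # comfortably past the 1,000-bit threshold
print(sum(key), "ones out of", len(key))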
Mr Kairosfocus, No-one, to my view, is talking about AI. This is rhetorical misdirection on your part. How do you calculate FSCI? How do you prove its source? Those are the issues under discussion. You have made claims that you are being asked to support. Having three metrics is almost worse than having none. Which of the three is correct? Nakashima
Mr kairosfocus, Nakashima-san: dismissal does not undercut the material force of the point. Namely, unconstrained random changes of significant size imposed on functioning systems are more likely to perturb them away from functionality than to improve such functionality. (This is one of the reasons why we see a topology of islands of function in a sea of non-function.) In this case the point had no material force. No scientist, working in any field, has the lack of separation of simulation, model, and experiment built into their methodology. If there is an earthquake in SF (God forbid), will my Second Life avatar feel anything shaking? Perturbation moves you away from functionality? Now you are making the assumption that you have accused others of making - that the population members are close to high-functioning areas already. Here's a question - if I take a step in a random direction, have I moved closer to the top of Mt Fuji or away from it? This is a reason our intuition about function and non-function can deceive us. In abiogenesis, there is no fitness function; there are only reaction rates and products. The 'fitness' of a molecule depends entirely on its environment, which is why a GA like the IPD scenario I proposed is closer to that reality than silly Weasel-style functions. Nakashima
how much FSCI does a GA contain?
I take it that you are declining to answer this?
using block caps to draw attention to what is there as opposed to what you expect to see is NOT “shouting.”
Perhaps not but using block-caps to issue challenges on discussion forums was something I understood to be analogous to shouting:
YOU ARE HEREBY CHALLENGED TO PROVIDE A CASE IN POINT
I have consulted Derek Smith’s model. There is no mention of genetic algorithms but he does present a model system for creating intelligent robotics that involves adding higher level planning and prediction. This is what he says:
a higher order controller (far left) replaces the external manual source of command information. This means that there is no longer any high-side system boundary, making the new layout self-controlling. That is to say, it is now capable of willed behaviour, or "praxis"
No mention of this 'will' coming from an immaterial source; it is all a product of the mechanisms that make up the machine. Earlier you said this:
You will see that the upper level controller is imaginative, creative and volitional, rather than merely mechanical — acting by step by step instructions and procedures triggered by necessity and/or blind chance.
The upper-level controller is part of the mechanism; it is mechanical in that sense and does not draw on immaterial things to operate. The detail of the mechanism isn't explained, but I would guess that a neural network might be a good candidate, and they work quite well with GA's. BillB
Onlookers: First, observe that to date, the objectors to the observation that FSCI is a reliable sign of intelligence are unable to provide a single clear empirical counter-example, in the face of literally millions of examples all over the Internet. That should tell us the real balance on the merits. Second, had one or two of the objectors above troubled to consult the Derek Smith Model as already linked, they would have seen why GA's are not credible as artificial intelligences. Namely, the locus of the creative, volitional and imaginative supervision of the algorithmic loop is EXTERNAL to the system. (That is, GA's contain active information that is exogenous to the GA. A real AI will instead be credibly able to creatively project and decide its own path, then successfully implement it, fixing problems along the way.) BOTTOMLINE: This thread has now plainly passed the point of diminishing returns, and it is plain enough where the true balance on the merits lies. FSCI is a reliable sign of intelligence, and we have excellent reason to conclude that this holds in the context of the information systems in life, from first life on. G'day GEM of TKI PS: FYI BB, using block caps to draw attention to what is there, as opposed to what you expect to see, is NOT "shouting." PPS: Nakashima-san: dismissal does not undercut the material force of the point. Namely, unconstrained random changes of significant size imposed on functioning systems are more likely to perturb them away from functionality than to improve such functionality. (This is one of the reasons why we see a topology of islands of function in a sea of non-function.) PPPS: Sparc, you are simply whistling in the dark as you walk by the graveyard. You have seen at least three metrics relevant to FSCI [at least one of which has a published table of 35 values in the peer-reviewed literature, apart from "good enough to make the relevant point" examples above], and you have seen a context that shows why the term is useful and holds warrant dating back to Orgel in 1973. Again, FSCI is that subset of specified, complex information that has the specification through observed function. As a start, any string of contextually responsive ASCII text in English of at least 143 characters would exhaust the probabilistic resources of the observed cosmos to try to explain it on undirected chance + necessity. But, intelligence routinely produces such. kairosfocus
A more philosophical question: Do lengthy comments that are completely ignored outside of the small UD cosmos contain any FSCI? A quick Google search suggests that FSCI is not effective as an argument. PS. Therefore I introduce EFSCI (effective functionally specific complex information). PPS. Although it seems impossible to calculate FSCI (to my best knowledge there's no positive example of FSCI), it is quite possible to make a rough estimate of the EFSCI of KF's comments. PPPS: Since only Jerry and a few UD commenters I don't remember adopted FSCI, the EFSCI content of KF's comments is close to zero. PPPPS. It may increase if FSCI were supported by Dr. Dembski. sparc
KF-san, It would be easy enough to spew random bits across the PC, and see what happens. Given the well-known vulnerabilities provided by malware, the point I made stands. Mr Dodgen knew better than to hold to this position after its absurdity was pointed out; why do you persist? And spewing bits achieves what, exactly? If I ran a GA on such a bit-spewing PC, would you accept that it generated FSCI not sourced in the programmer? PS I think you mean exploited, not provided, minor point. Nakashima
Correcting a strawman: GA's exhibit FSCI, and are known to be designed by intelligent agents.
Please do me the courtesy of actually reading my comments. GA's are designed; that is what I said. The strawman was your attempt to confuse a model with the system used to implement the model. I'll repeat myself just for clarity:
The GA does not arise by chance variation of random bit strings, and the operating system on which it sits, likewise. Nor do we permit random variation of the whole program and the operating system when we use a GA.
Biologists do not claim that DNA copying errors change the laws of physics. This is the same error that Gill keeps making about simulations.
As the onlookers will note, I was addressing this confusion over how computers are used as tools for modelling, not the issue over who designs GA's. Your obfuscatory attempt to distract has failed. It seems from your 3 that you are claiming that if ANY attempt to model natural processes generates FSCI then the designer of the model must be inputting active information, and that this therefore invalidates anything the model produces. I notice you have avoided answering my question about whether a deity included the active information required for our evolution with the creation of the universe. Also this: ...how much FSCI does a GA contain? Can you supply some numbers please? And then we can start to answer this: ...can it generate more FSCI than it contains?
We have no empirically warranted grounds for inferring that the FSCI in GA’s or other cases may arise without active information coming from intelligent agents. YOU ARE HEREBY CHALLENGED TO PROVIDE A CASE IN POINT
No need to shout. You are persisting with strawmen. As I have already repeated, I have never claimed that Genetic Algorithms are not the product of human design. You challenged me to prove that they do not rely on humans to design them - why should I? It's not something I claim! I see nothing more self-referential or incoherent about the idea of a naturally occurring intelligence than in the idea of an un-caused cause such as a deity. BillB
Also: Constrained randomness is a known feature of designed, complex systems. (Cf how dice are often tossed to play a board game, e.g. Monopoly.) Such systems, however, neither originate in randomness and blind mechanical forces nor allow unconstrained randomness -- such randomness would soon accumulate to the point where functionality would be compromised. Indeed, the living cell has subsystems that maintain the integrity of DNA information. And that word integrity tells us what easily enough happens when randomness gets out of hand -- as does the danger of radiation damage. And, that is very well known. GEM of TKI kairosfocus
Nakashima-San: It would be easy enough to spew random bits across the PC, and see what happens. Given the well-known vulnerabilities provided by malware, the point I made stands. GEM of TKI kairosfocus
PS: BB, again, GA's exhibit FSCI and are KNOWN (per direct observation) to be designed; aptly illustrating one example of the empirical base for empirically anchored inductive inference to best explanation from observed FSCI to intelligent design. We have no empirically warranted grounds for inferring that the FSCI in GA's or other cases may arise without active information coming from intelligent agents. YOU ARE HEREBY CHALLENGED TO PROVIDE A CASE IN POINT, i.e. here is a point of so far successfully met empirical test -- this is not at all a strawman. (NB: I have found across years, from experience, that I must often add emphases because it seems that otherwise the key words will be overlooked by objectors to ID. Cf above on the statement by Orgel 1973.) kairosfocus
Mr BillB, We should be careful to distinguish intentional and unintentional sources of variation. Why was the neutrino experiment that Denyse visited buried in a mine? To isolate it from unwanted sources of radiation. In the same way, I would want to run a simulation on a machine that is as bug-free as possible. Scientists did not prize the Pentium floating point error when it was discovered. But what if we did 'permit' random variation in the OS, even in the hardware? Let's say I run a GA on a machine made out of Jello, sitting next to Chernobyl. Would Mr KF now agree that it produced FSCI whose source was not the programmer? The whole 'not random enough' objection doesn't lead anywhere fruitful, but it does expose a woeful understanding of what experiment, model building and simulation are. I will simply note that the EIL team has never supported these claims. There is nothing in the MESA users guide about running it on Windows ME, a flawed Pentium chip, and a lump of uranium, during a tornado, an earthquake, and a stock market crash, to obtain the correct results. Nakashima
BB: 1 --> Correcting a strawman: GA's exhibit FSCI, and are known to be designed by intelligent agents. 2 --> Where a REAL human-created artificially intelligent entity might lie: Cf Eng Derek Smith's two-tier control MIMO cybernetic model here. (You will see that the upper level controller is imaginative, creative and volitional, rather than merely mechanical -- acting by step by step instructions and procedures triggered by necessity and/or blind chance. Figure out how to do that, build a demo model, and then let's go build R. Daneel Olivaw!) 3 --> OOL and investigator interference: Ever since Thaxton et al laid out a ranked scale of investigator interference in TMLO [have you read this?], we have had objective criteria for identifying what is a legitimate and what is an illegitimate degree of investigator interference. Where the OOL situation as modelled owes its performance to injected active information, it is an invalid model of the proposed chance + necessity only pre-life world. Shapiro and Orgel's recent exchange [discussed at the end of Section A of my always linked] points out just how both sides of the genes-first and metabolism-first approaches fail at this bar. 4 --> Intelligence vs mechanism and chance: the point is, BB, that we see that chance, mechanical necessity tracing to initial conditions and acting forces, and intelligence are three distinct OBSERVED causal factors in the world in which we live. When we can wholly explain a phenomenon without residue as the product of chance + necessity, we do not need to invoke intelligence to explain it. That's not a matter of your subjective opinion; that is a matter of objective, massively evident fact. 5 --> Nor can you successfully reduce intelligence to chance + necessity, on pain of self-referential absurdity a la Crick's neurological reductionism or the like. You may choose to be absurd, but we then have a right to infer from your absurdity to the falsehood of your position. 6 --> And, when a position entails self-referential incoherence, it is a generally accepted conclusion that it must be false. That just happens to be the fate of evolutionary materialism and its cognates of reductionistic attempted explanation of conscious, reasoning, choosing, morally bound mind. (As the just linked demonstrates in summary.) _____________ G'day, GEM of TKI kairosfocus
Mr Jerry, Or do you dispute that nature cannot produce the Works of Shakespeare? Well, it has done so once already; I suppose that constitutes a sort of existence proof! ;) Yes, the monkeys would take a long time to bring forth Shakespeare again. But they would take an equal amount of time to produce any text of the same length. There is nothing in the glorious poetry of Shakespeare that makes it hard to reproduce, only the length of the text. Anthony Trollope would be even harder to recreate. What about the collected speeches of Leonid Brezhnev? What does that prove? We know that random walks over a large space take a long time to cross any small target. But that is not how GAs work. Nakashima
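To make the arithmetic behind this concrete, here is a minimal sketch (Python; the 27-symbol alphabet and the sample strings are illustrative assumptions, not anything from the thread) showing that the improbability of hitting a text by uniform random typing depends only on its length, not on its literary quality:

```python
import math

def random_typing_bits(text, alphabet_size=27):
    """-log2 of the probability of typing `text` in one uniform random attempt
    (26 letters plus space assumed)."""
    return len(text) * math.log2(alphabet_size)

shakespeare = "shall i compare thee to a summers day"
gibberish   = "xqzv jklmw ptyrb aaneo fghcd uuiss tt"  # same length, no 'poetry'

# Both require the same ~175.9 bits -- length is all that matters.
print(random_typing_bits(shakespeare))
print(random_typing_bits(gibberish))
```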
The GA does not arise by chance variation of random bit strings, and the operating system on which it sits, likewise. Nor do we permit random variation of the whole program and the operating system when we use a GA.
Strawman. Biologists do not claim that DNA copying errors change the laws of physics. This is the same error that Gill keeps making about simulations. Simply putting DESIGN in capitals doesn't add anything to the argument. No one is claiming that GA's are not designed or DESIGNED. The question, as far as mutation in nature goes, is whether it occurs and to what degree. This is something we can empirically measure and then, if we are using the GA to model biology, we can apply this rate. If that model includes 'tightly controlled' mutation within the limits seen in nature then all we are doing is producing an accurate model of reality. This whole notion that mutation in a simulation is somehow invalid if it is not applied beyond the simulation is just plain bizarre. If you ditched the computer and just did the math by hand would you argue that, for it to be accurate, 2+2 should not always equal 4? BillB
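As a concrete illustration of that modelling point, here is a minimal genetic algorithm sketch (Python; the OneMax fitness function and every parameter value are illustrative assumptions). The mutation rate is an explicit, tightly controlled parameter applied only to the genomes, and in a biological model it could be set from an empirically measured rate:

```python
import random

GENOME_LEN = 64
POP_SIZE = 50
MUTATION_RATE = 0.01  # per-bit rate; in a model of biology, set from measured data

def fitness(genome):
    # Toy 'OneMax' objective: count of 1-bits (a stand-in for any scoring function).
    return sum(genome)

def mutate(genome):
    # Variation is applied only to the genome, never to the program running it.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

print(fitness(max(pop, key=fitness)))  # typically at or near GENOME_LEN
```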
Nakashima, Can you please help me out? I was looking for naturally-occurring complex algorithms that include phenomena analogous to a stop codon in DNA. Do you know of any that I can study? Upright BiPed
Mr Jerry, There has never been any known FCSI produced by nature including life. The origin of life is under debate but by all current understanding FSCI is beyond the power of nature to produce. The only logical conclusion then is to conclude that life was probably not produced by nature because nature most likely cannot produce FCSI. But now you write: If you read my paragraph closely, you will see no absolutes but probabilistic or likely statements. Is the word 'never' absolute or probabilistic? The circularity arises from using that idea as an assumption, and then restating it as your conclusion only two sentences later. Nakashima
Further note (as I forgot to mention earlier): ridicule -- fallacy of "truth lost in the laugh" -- notwithstanding, the random variation used with GA's is tightly controlled, and that by DESIGN. The GA does not arise by chance variation of random bit strings, and the operating system on which it sits, likewise. Nor do we permit random variation of the whole program and the operating system when we use a GA. (The likelihood of crashing the system would then overwhelm whatever hoped-for random improvements one fishes for by throwing out a ring of variations on an already basically functional configuration and running a competitive test on some metric of function or other.) kairosfocus
PS: It is worth noting that -- as is linked from Weak Argument Corrective 27 -- Abel et al and latterly Durston et al have, in the peer reviewed literature, found a term very closely related to FSCI to be useful in their work in biophysics. Indeed Durston et al have produced a metric and published thirty-five values of FUNCTIONAL SEQUENCE COMPLEXITY in FITS, functional bits. If Sparc et al are unaware of that, it is by failing to access and squarely address easily available and repeatedly pointed out information. kairosfocus
"You may be right about material intelligence lacking an immovable reference for ‘truth’ but that does not constitute proof that material things cannot produce behaviour that we would regard as intelligent, or that material processes cannot produce these entities." I am not sure I understand what you are saying but you seem to not understand the gist of the argument. There are no absolutes on the ID side. It is all probabilistic. So nature could produce FSCI but there are two things to consider. First, there is no evidence that it ever did. And second, there seem to be physical impediments for it to do so in terms of probabilistic resources for combinations of basic elements. Now none of this means that it will not be shown in the future by some unknown process that it is feasible. But until that time the statement that is is highly unlikely is a reasonable statement. On the other side of the argument, there is the absolute pronouncement that intelligence may not be allowed in any scientific consideration. Or that the likelihood that an intelligent cause is greater than zero is forbidden. ID gets accused of being absolutist when in fact it is the absolutist who are making this absolutely false statement about the people who are being reasonable in this debate. jerry
I have a question. If someone uses a term to describe a phenomenon that is commonly studied by the scientific community, but the term itself is not used by the scientific community, is that term not valid? One can claim that most are not using the term, but does that make the term inappropriate? Especially when that term is an attempt to bridge the terminology used in another discipline with a process used in the scientific community in areas of mutual interest. And furthermore, this terminology is currently being used in similar form in some related areas of science. jerry
So, GA’s exemplify the known source of FSCI: intelligence.
Yes, GA's are algorithms designed by people. Would you regard any experiment to test an idea in Evolutionary theory or OOL as invalid because it is the product of intelligence? The pertinent questions here are, first, how much FCSI does a GA contain, and second, can it generate more FCSI than it contains?
genetic algorithms are simply not credible candidates for such created intelligences.
What, even GA's created by God? Does this fact you claim mean that we have established something about the capabilities of the designer? I'm afraid I don't buy your 'intelligence cannot be the result of mechanism' argument. You haven't provided a reason why intelligence can't operate from a substrate that is based solely on repeatable and observable features of the universe, you just claim that if it is then nothing makes sense, so therefore it can't be. You may be right about material intelligence lacking an immovable reference for 'truth' but that does not constitute proof that material things cannot produce behaviour that we would regard as intelligent, or that material processes cannot produce these entities. BillB
"That reasoning is perfectly circular." Absolutely not, because there were not absolutes. You do not seem to understand the difference between the "either" or "or" concept. Or the use of probabilistic statements. It is either natural or the product of intelligence. If you have a third option, let us know what it is. If you read my paragraph closely, you will see no absolutes but probabilistic or likely statements. If nature cannot produce something, for example the Works of Shakespeare, then the likely answer is that it was produced by an intelligence. Capice? Or do you dispute that nature cannot produce the Works of Shakespeare? I will even give you all the monkeys you desire which is cheating on the nature can do it proposition. If you do dispute it then we can add to our list of how to characterize you. jerry
Further footnote: Re Sparc in the Pinker thread, to Wm A D:
Kairosfocus introduced the term FCSI (aka FSCI) on this forum. I may have missed it, but do you or the other EIL members use this term? If so could you please share your thoughts on it?
Of course this is an inverted appeal to authority. As Sparc et al have already repeatedly been informed (starting with Weak Argument Correctives 27, 28 and 29), the term functionally specific, complex information [FSCI] -- and other like descriptions -- is a DESCRIPTIVE reference to the phenomenon identified by Orgel in 1973, when he described how cell based life forms show specified complexity in a bio-functional context. So long as Orgel is right when he said the following, FSCI is a legitimate term:
In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.]
Let us zoom in just a bit:
living organisms
--> thus, the context is that of biological function in the context of the cell and its macromolecules
--> moreover, the underlying issue is to account for the origin of such bio-function based on informational macromolecules and associated physically instantiated algorithms

are distinguished by
--> that is, observationally differentiated from two other classes
--> in context, by complexity and specification

their specified complexity . . .
--> in a biofunctional, algorithmic, digital information context
--> we mark out living systems by their SPECIFIED COMPLEXITY
--> that specification is here a FUNCTIONAL one

The crystals fail to qualify as living because they lack complexity; the [random] mixtures of polymers fail to qualify because they lack specificity.
--> here we see the distinctions for living systems vs crystals and random mixtures of polymers
--> this also brings in the implication that living systems reflect a complex organisation of the component machinery that implements the activities of life
--> should the components be damaged, or disarranged sufficiently, functionality vanishes
--> and so also we see the issue that there is an irreducible complexity of organisation and mutual adjustment to operating points in such living cells
In short, the matter has long since been clear, from ORGEL, decades before Dembski's contribution of providing a mathematical framework for modelling and analysing what "specified complexity" means. And, as has again just been explained, it is perfectly capable of being quantified, using the metric by Dembski, that by Abel and Durston et al, or even a simple heuristic on functionally specific bits. Whether or not the good folks at EIL find it useful for their purposes, FSCI is clearly a useful "glorified common sense" term for ours here at UD. So useful in fact that objectors to ID are desperate to dismiss or suppress it. GEM of TKI kairosfocus
PS: bFast, a subtlety: In using 1,000 bits, the square of a binary form of the estimate of the number of states our observed cosmos could have across its thermodynamically plausible active life, I am not so much appealing to probability as to search-resource exhaustion: the whole universe we observe, acting as a search engine, cannot reasonably access as much as 1 in 10^150 of the config space specified by just 1,000 bits of info storage capacity. So, even very large and numerous islands of function in such a space will be well beyond the search capacity of our observed cosmos, acting in an undirected fashion. And, the relevant spaces start way beyond that: the simplest observed life uses about 600 - 1,000 k bits for its DNA, having a config space of order 10^180,000+ cells at the lower end. The notion that some prebiotic soup out there could spontaneously create the relevant organisation and information to get to an island of initial function then becomes utterly absurd. And, that brings us right back to the core point of this thread: with DNA and its significance, and also the fine-tuning of the cosmos to facilitate such life, we are now dealing with not just matter and energy but INFORMATION as fundamental constituents of the observed universe. So, just as there were hot debates on matter, waves, particles and energy, we are seeing a hot debate on information in our day. When the dust settles in another couple of decades, it will then be "obvious" that information is a fundamental element of the universe, and that functionally specific complex information comes from mind. But before we get there, the committed materialists will be dragged along, kicking and screaming all the way, as their cherished worldview collapses in the face of overwhelming evidence. (That is just what happened to the Marxists across the 1980's; and BTW, there are still many -- e.g. a certain Mr Chavez, and maybe some disciples of Saul Alinsky [who was a committed Marxist, contrary to how Wiki glosses over his real views in both the bio and the review of Rules for Radicals . . . ] closer to the halls of influence and power in your homeland, too -- who plot Marxist revivals!) kairosfocus
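The search-resource arithmetic in that postscript can be checked directly (a sketch using the three estimates kairosfocus cites above; the variable names are mine):

```python
import math

atoms = 10**80              # estimated atoms in the observed cosmos
seconds = 10**25            # thermodynamically plausible active lifespan, in seconds
states_per_second = 10**43  # one state per Planck time (~10^-43 s)

total_states = atoms * seconds * states_per_second  # 10^148, of the order of 10^150
config_space_1000_bits = 2**1000                    # ~1.07 * 10^301 configurations

print(math.log10(total_states))             # 148.0
print(math.log10(config_space_1000_bits))   # ~301.03
# The whole cosmos, used as a search engine, samples well under 1 in 10^150
# of a 1,000-bit configuration space:
print(total_states / config_space_1000_bits < 10**-150)  # True
```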
4] The real issue: Intelligent Design, as Dembski has described it, is "the science that studies [reliable] signs of intelligence." If there are reliable signs of intelligence, then from observed sign we may freely and on good warrant infer to the signified. FSCI is claimed to be one such sign, and on its strength, we may then infer from the observed DNA etc based information systems in cell based life to the design of such life. But, in our day a la Lewontin, Dawkins et al, there is often a strong institutional commitment to the proposition that such design is not to be considered, as it may lend support to theistic etc worldviews, which are often viewed as marks of ignorance, stupidity, insanity or wickedness. So much so that institutions such as the US National Academy of Sciences have sought to redefine science -- in the teeth of its history and significant philosophical considerations and plain ordinary facts -- as in effect applied materialism. In short, worldview level question begging and associated censorship is at work on the evolutionary materialist side; as has repeatedly been documented and exemplified (sometimes to the point of absurdity) in this blog. 5] Bottomline: By sharpest contrast to that, FSCI is: 1 --> observable and sufficiently definable to be distinguished by operational criteria 2 --> quantifiable by various metrics, in the simplest case, functionally specific bits. 3 --> In every directly known case, FSCI traces to intelligent action. 4 --> Since FSCI is associated with a topology of isolated islands of observable function in a large sea of non-functional configurations, we may distinguish functional and non-functional macrostates and [at least in principle or on a model basis] assign relative statistical weights. 5 --> On so doing, we see that undirected chance + necessity on the gamut of our observable cosmos cannot credibly arrive at shores of function in the sea of possible but non-functional configurations. (The statistical weight of the non-functional macrostate overwhelms the functional ones.) 6 --> Consequently, that possible hill climbing mechanisms may exist that can help a population of replicating entities with low functionality hill-climb to peaks of locally maximal function becomes irrelevant. (For, you have to first get to shores of function before you can climb the hill to optimality.) 7 --> Thus, there are both positive and negative reasons to infer from FSCI as a reliable sign of intelligence to the best explanation thereof: intelligence in action. ______________ GEM of TKI kairosfocus
Footnotes: I will note on several points that seem worth underscoring, mostly for the sake of onlookers: 1] Binary variables It is a commonplace of statistics to have variables that identify observable or infer-able contingent circumstances and so take a binary or similar set of values. In this case, functionality is a macro-observable, and complexity beyond 500 - 1,000 bits information storage capacity is calculable per relevant observables. Indeed, the simple illustrative instances of (a) an ASCII text string of 143+ characters in English that responds to context (and/or a similar string in a program), and (b) the PC screen that shows the windows in which such text strings reside have been on the table from the outset. The resistance to such examples is aptly illustrative of the underlying lack of weight on the merits for the case put by objectors to the FSCI concept. And, by inversion, it shows us just how significant the fact of FSCI is. 2] Genetic algorithms The first thing we need to know about such is that they are computer programs, invariably written by known intelligent agents and aptly exemplifying FSCI in their very list of statements. So, GA's exemplify the known source of FSCI: intelligence. Moreover, we know that algorithms and programs, data structures etc and the machines to implement them illustrate something very significant: the instructions and their structured, organised sequences are mechanically implemented, not based on common sense and active decision [and recall here that our active conscious creative rationality and decision-making ability is a first fact of our experience . . . the denial of which lands us in inescapable self-referential absurdities]. So, we know that the active information in the GA that gets it to move towards peaks of performance comes from the designer, not the machine or the code. In short, while created intelligences are a reasonable concept [we ourselves are a case in point, as the FSCI in us testifies to our origin in another intelligence], genetic algorithms are simply not credible candidates for such created intelligences. 3] Definitionitis I have long since pointed out that EXPERIENCES AND CONCEPTS ARE PRIOR TO PRECISING DEFINITIONS, even in science. For instance, the experience, observation and concept "life" resists such stated definition to this day, but is a foundational scientific concept for a major discipline, biology. (Nor indeed can everything in a discipline be defined, on pain of circularity or infinite regress. Commonsense concepts -- aka primitives -- must ground any discipline, call them what you will.) What grounds our real-world work in science instead is that we may observe examples and abstract key concepts, which may be used in operational contexts to describe, explain, analyse, model, predict and perhaps influence. In that context, the concept of functionally specified, complex information [FSCI] -- a subset of complex, specified information grounded in OBSERVED function depending on complex, contingent (thus, information-bearing) organisation of elements -- has ever since Orgel's statement in 1973 been more than adequately grounded in experience and coherent conceptualisation, with no less than two major mathematical metric models, and a simple rule of thumb heuristic. The insistent, sometimes recycled [in the teeth of having already been adequately answered], objections and hair-splitting above are actually inadvertent testimony to the force and validity of this point. [ . . . ] kairosfocus
ROb(186) "if the concept is to be employed in revolutionizing science, it needs to be rigorized. If the concept is to be accepted by science it must be devoid of unacceptable baggage. A definition is called for, a definition of the kind of information which is in DNA, and not found in nature outside of that unique realm of "life". Yet if the definition, no matter how "rigorous", is approximately: csi = created by intelligence, it will be rejected flat out. I propose a simple term and rigorous definition which matches DNA, computer software, and nothing found within nature other than within "life". FSCI - Function specifying complex information. The information must have complexity (probability less than 1 in 10^150 works for me), it must be information (as defined by Shannon, why not) and it must specify something that is functional, that does something. I would suggest that the computer code which implements a word processor is function specifying. I would suggest that DNA, which describes (or provides a significant portion of the description) of a functioning organism, qualify as FSCI. There, we have a rigorously defined term that should not be repulsive to the scientific community on its face. bFast
Mr Jerry, There has never been any known FCSI produced by nature including life. The origin of life is under debate but by all current understanding FSCI is beyond the power of nature to produce. The only logical conclusion then is to conclude that life was probably not produced by nature because nature most likely cannot produce FCSI. That reasoning is perfectly circular. Nakashima
Joseph-san, As you can see on the thread above, KF-san was showing how to perform a calculation, compare the value to a standard, and from that comparison infer design. It sounds as if you think that inference proceeds with no evidence whatsoever. Nakashima
jerry:
Around here we are talking about the subset that does specify something else.
Who is we? From what I can tell, you and bFast are the only ones here who think that CSI and FSCI refer to information that specifies something else, as opposed to information that is specified. Correct me if I'm mistaken in that observation.
DNA meets the definition of FCSI.
Really? Is FCSI's complexity relative to a chance hypothesis like CSI's is? If so, what is the chance hypothesis and how did you estimate the probabilities?
There has never been any known FCSI produced by nature including life.
That's quite a sweeping claim. And it seems a little premature, seeing that no work on FCSI has ever been published. (Unless you think that FCSI is synonymous with Abel's and Durston's functional sequence complexity. If so, why introduce another term?) R0b
"The information is specified by a specifying agent. It does not necessarily specify something else." But it could and some very interesting subsets of CSI specify other entities. Around here we are talking about the subset that does specify something else. In all cases except DNA we can identify a specifier or a likely specifier so it is not an issue that it is specified according to your understanding of CSI and has its origin in intelligence. So let's say we abandon the concept of CSI for the moment. Then FCSI exists on its own merits and is easy to understand and we will assume it is not related to CSI. DNA meets the definition of FCSI. There has never been any known FCSI produced by nature including life. The origin of life is under debate but by all current understanding FSCI is beyond the power of nature to produce. The only logical conclusion then is to conclude that life was probably not produced by nature because nature most likely cannot produce FCSI. Thus by default because life is based heavily on FCSI it was probably originally specified by some intelligence. So now we are back to DNA being CSI according to your definition and understanding. jerry
Nakashima-san:
You can’t come to a design inference without a rigorous, repeatable process.
That is false. First there is a design inference, and only then can one hope to determine a specific process. Ya see, reality dictates that, in the absence of direct observation or designer input, the only possible way to make any scientific determination about the designer(s) and/or the specific process(es) used is by studying the design in question. And BTW it is very repeatable that designing agencies can design and create irreducibly complex systems, information storage systems, and information communications systems. Joseph
Mr Joseph, You can't come to a design inference without a rigorous, repeatable process. Mr Kairosfocus has said he can make a scientific inference. Nakashima
R0b, CSI is more rigorous than anything the non-telic position has to offer. So please stop your whining. Also this isn't about revolutionizing science -- science got to this point on the shoulders of IDists -- it is about again letting scientists come to a design inference if that is what the data points to. Joseph
I am an engineer (software) not a scientist nor mathematician. As such, I think in much more concrete terms than the others do.
I think there is a good-sized population of us engineers on this board, and precious few scientists and mathematicians. A pity, IMO.
ROb, if you are suggesting that Dembski says that CSI only exists when a product is the result of the CSI, rather than the information being gathered from the product, then I think you present a valid splitting of hairs.
Actually, the specification, according to Dembski's usage of the term, need not precede the product. One of Dembski's objectives in his work is to flesh out the idea of "post-specification".
That said, it is my understanding that Dembski was not the originator of the term CSI.
As far as I know, he is the originator of the term, unless you count Crime Scene Investigation. He has a handful of technical definitions of the term, but he usually uses it quite loosely. Other people have differing understandings of the concept, which is certainly fine. But if the concept is to be employed in revolutionizing science, it needs to be rigorized. R0b
Hey, a good opportunity to interject. I am an engineer (software) not a scientist nor mathematician. As such, I think in much more concrete terms than the others do. I see CSI as a blueprint that fully describes a product. Now, if one finds a product and generates a blueprint to describe the product, this is somehow fundamentally different than if one finds the blueprint which was used to manufacture the product. ROb, if you are suggesting that Dembski says that CSI only exists when a product is the result of the CSI, rather than the information being gathered from the product, then I think you present a valid splitting of hairs. That said, it is my understanding that Dembski was not the originator of the term CSI. Further, I have had discussions with others who pull quotes out of Dembski's work that suggest that he has created a definition which obligates his conclusion. I think that CSI is a concept that must belong to the world, not to Dembski alone. bFast
Mr Nakashima, You are correct. You asked if the Hazen link made reference to "Islands of Functionality", which you took as a sign it was different from KF's usage. You wrote:
because there is no discussion of “islands of function” in Hazen’s functional information.
I wanted to point out that they did discuss the Islands of Function concept using their functional info in the same way KF did. You are completely correct that they're talking about a specific landscape. In the same way, KF is talking about specific biological landscapes. Atom Atom
jerry:
bFast said it was his understanding that CSI just was information that specified something else.
If bfast said this, then he and you are not talking about Dembski's CSI. Dembski defines CSI as complex specified information, not complex specifying information. The information is specified by a specifying agent. It does not necessarily specify something else. R0b
Nakashima:
So what is the source of this increase? Mr KF has asserted something - “[put there by its designer]“, but I have trouble following his logic.
Indeed, both Nakashima and I have been trying, from different angles, to get kairosfocus to provide details on FSCI accounting practices, which appear rather ad hoc. kairosfocus has spent an awful lot of functionally specified pixels responding to arguments I haven't made, while leaving this issue floating in the ether. When FSCI comes from a computer, even if there are random elements involved, he invariably credits the FSCI to the designer/programmer of the computer. When FSCI comes from a human, he invariably does not credit the FSCI to the designer/environment/inherited traits/randomness of the human. The basis for this seems to be the view that computers are mechanical, preset, programmed, without thought or understanding, capable of only artificial languages, non-learning, etc., while humans are volitional, creative, spontaneous, original, decision-making, common sense, rational but not predictable, etc. If those are the key concepts in crediting FSCI to the proper source, then the concepts need to be operationalized and incorporated into the definition of FSCI. As it is, we have no way of determining the actual source of the FSCI. R0b
jerry
And a typical complaint is that our definition is not used in real science thus it is bogus.
But are leading ID researchers like Dr. Dembski using it? I've asked this before but unfortunately, Dr. Dembski is seemingly not following KF's FCSI comments. sparc
Mr Kairosfocus, Thank you for humoring me and being patient with me. I appreciate your linking to specific materials out of the large store of your always linked. I appreciate your working a calculation of an example. I am a bit (a weak pun, very sorry) unsure why C and S are binary values. Are these also "eye of the beholder" variables? Perhaps I don't understand what you mean by 1/0. Let me try to work through an example - a screen displaying an image of the Mona Lisa. C = 1, because the image is contingent. The Mona Lisa is a single point in the screen's config space. S = 1, because the image is specific. I'll use the fact that it has a non-zero size when compressed to motivate this choice. B = 11.52*10^6 bits. C*S*B = 11.52*10^6 > 1,000, therefore design! Let's try again to make sure I understand. Image of static: C = 1, same as before. S = 1, same as before. B = 11.52*10^6 bits. C*S*B = 11.52*10^6 > 1,000, therefore design! It seems to me from your presentation that there is an inference to design for all B. Nakashima
Mr BillB, Yes, it seems that Mr KF is willing to allow that a GA's population members contain FCSI. I think he might agree that the best of generation 1 contains less FSCI than the best of generation N. Indeed the whole population has probably increased in FCSI. So what is the source of this increase? Mr KF has asserted something - "[put there by its designer]", but I have trouble following his logic. He claims to be able to make a scientific inference that it is the designer, not the RNG, the clock, the growing history, that is the source of this incremental FCSI. This is more than Dembski and Marks claim to be able to do. Nakashima
KF: Just to pick you up on one point:
A genetic algorithm is expressing active information in it [put there by its designer], towards a target zone.
If you mean a target area in its configuration space then this is incorrect: the algorithm will try to maximise an agent's score as given by a fitness function, but how the agent achieves this score is not normally defined by the fitness function - the phenotype is not a target. If you take a look at some of Karl Sims' early work in evolving virtual creatures you see lots of different evolved solutions that arise from the same fitness function. There can be many possible configurations that qualify as 'fit'. Also, not all GA's use static fitness functions; incremental approaches will use fitness functions that gradually change, and embodied fitness functions are sometimes used that are a product of the agent's environment rather than an imposition by an external auditor. If FCSI enters the design via the fitness function then is it not conceivable that all the FCSI we observe in nature is a product of natural selection, which itself is a fitness function designed by a deity? BillB
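The observation that a fitness function need not pick out a unique phenotype can be shown with a toy example (Python; the bit-sum 'function' is an illustrative assumption, not Sims' actual system): every genome whose bits sum to the target value ties for maximal fitness, so there are many distinct 'fit' configurations rather than one target:

```python
from itertools import product

GENOME_LEN = 8
TARGET_SUM = 4

def fitness(genome):
    # Score closeness of the bit-sum to TARGET_SUM; many genomes tie at the top.
    return -abs(sum(genome) - TARGET_SUM)

genomes = list(product([0, 1], repeat=GENOME_LEN))
best = max(fitness(g) for g in genomes)
winners = [g for g in genomes if fitness(g) == best]
print(len(winners))  # 70 = C(8,4) distinct maximally fit genomes, no single target
```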
PPS: Oops, point 7 had a less-than in it and got clipped, sorry: 7 --> For instance, for the 800 * 600 pixel PC screen, C = 1, S = 1, B = 11.52 * 10^6, so C*S*B = 11.52 * 10^6 FS bits. This is well beyond the threshold. [Notice that if the bits were not contingent or were not specific, then X = 0 automatically. Similarly, if B {is less than} 500, the metric would indicate the bits as functionally or compressibly etc specified, but without enough bits to be comfortably beyond the UPB threshold. Of course, the DNA strands of observed life forms start at about 200,000 FS bits, and that for forms that depend on others for crucial nutrients. 600,000 - 10^6 FS bits is a reported reasonable estimate for a minimally complex independent life form.] kairosfocus
PS: A genetic algorithm is expressing active information in it [put there by its designer], towards a target zone. It is not an original source of FSCI, though the fact that such a program will normally itself have FSCI in it is testimony to the reality that designers create FSCI. PPS: Brillouin et al do create an information metric, as shown in the linked: negentropy. Jaynes et al show a link from physical to informational entropy, and Durston et al give FSC metrics that are more sophisticated versions of what the simple FSCI metric here presents. Excerpting the just linked; two clicks away from all my recent posts at UD: >> FSCI is also an observable, measurable quantity; contrary to what is imagined, implied or asserted by many objectors. This may be most easily seen by using a quantity we are familiar with: functionally specific bits [FS bits], such as those that define the information on the screen you are most likely using to read this note: 1 --> These bits are functional, i.e. presenting a screenful of (more or less) readable and coherent text. 2 --> They are specific, i.e. the screen conforms to a page of coherent text in English in a web browser window; defining a relatively small target/island of function by comparison with the number of arbitrarily possible bit configurations of the screen. 3 --> They are contingent, i.e. your screen can show diverse patterns, some of which are functional, some of which -- e.g. a screen broken up into "snow" -- would not (usually) be. 4 --> They are quantitative: a screen of such text at 800 * 600 pixels resolution, each of bit depth 24 [8 each for R, G, B] has in its image 480,000 pixels, with 11,520,000 hard-working, functionally specific bits. 5 --> This is of course well beyond a "glorified common-sense" 500 - 1,000 bit rule of thumb complexity threshold at which contextually and functionally specific information is sufficiently complex that the explanatory filter would confidently rule such a screenful of text "designed," given that -- since there are at most that many quantum states of the atoms in it -- no search on the gamut of our observed cosmos can exceed 10^150 steps . . . . 6 --> So we can construct a rule of thumb functionally specific bit metric for FSCI: a] Let contingency [C] be defined as 1/0 by comparison with a suitable exemplar, e.g. a tossed die. b] Let specificity [S] be identified as 1/0 through functionality [FS] or by compressibility of description of the information [KS] or similar means. c] Let degree of complexity [B] be defined by the quantity of bits to store the relevant information, with 500 - 1,000 bits serving as the threshold for "probably" to "morally certainly" sufficiently complex to meet the FSCI/CSI threshold. d] Define the vector {C, S, B} based on the above [as we would take distance travelled and time required, D and t], and take the element product C*S*B [as we would take the ratio D/t to get speed]. e] Now we identify: C*S*B = X, the required FSCI/CSI-metric in [functionally] specified bits. 7 --> For instance, for the 800 * 600 pixel PC screen, C = 1, S = 1, B = 11.52 * 10^6, so C*S*B = 11.52 * 10^6 FS bits. This is well beyond the threshold. [Notice that if the bits were not contingent or were not specific, then X = 0 automatically. Similarly, if B {is less than} 500, the metric would indicate the bits as functionally or compressibly etc specified, but without enough bits to be comfortably beyond the UPB threshold.] A more sophisticated metric has of course been given by Dembski, in a 2005 paper . . . .
9 --> When 1 >/= χ, the probability of the observed event in the target zone or a similar event is at least 1/2, so the available search resources of the observed cosmos across its estimated lifespan are in principle adequate for an observed event [E] in the target zone to credibly occur by chance. But if χ significantly exceeds 1, that becomes increasingly implausible. The only credibly known and reliably observed cause for events of this last class is intelligently directed contingency, i.e. design. 10 --> Thus, we have a rule of thumb informational metric and a more sophisticated informational measure for CSI/FSCI, both providing reasonable grounds for confidently inferring to design. (Durston, Chiu, Abel and Trevors provide a third metric, functional bits or fits, a functional bit extension of Shannon's H-metric of average information per symbol, here.)>> kairosfocus
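For onlookers, the rule-of-thumb metric of steps a] through e] is simple enough to write out directly (a sketch in Python; the function name is mine, while the screen example and thresholds are the ones given above):

```python
def fsci_metric(contingent, specific, bits):
    # Rule-of-thumb metric X = C*S*B: C and S are binary (1/0) judgments,
    # B is information storage capacity in bits.
    C = 1 if contingent else 0
    S = 1 if specific else 0
    return C * S * bits

# The 800 * 600 pixel, 24-bit-depth PC screen from point 7:
B = 800 * 600 * 24              # 11,520,000 bits
X = fsci_metric(True, True, B)
print(X)                        # 11,520,000 FS bits
print(X > 1000)                 # True: past the 500 - 1,000 bit threshold
```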
Nakashima-San: first, the point of evolutionary materialistic OOL scenarios is that somehow molecular noise transformed itself into DNA-driven algorithmic digital processing, and that further noise created major body plans. We are not talking about one or two mutations at points in already functioning DNA, but 600 k bits or so of information for first life and 10's - 100's+ of megabits for novel body plans. And, if you will follow up on the links and the identified cases, I believe the key points that need to be clarified will become much clearer to you. (It now seems to me from your remarks on the log2 pi relationship that you have not first seen what I am saying -- which I link from EVERY post ever made by me at UD -- before criticising. That's not cricket.) GEM of TKI kairosfocus
Mr Kairosfocus, From your always linked: As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life's Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then -- again following Brillouin -- identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. Are these your FSCI? Can you state them explicitly? From a subsequent post: Once 500 - 1,000 bits info storage capacity is passed, that becomes a most unlikely explanation relative to the known source of such FSCI: design. How pleasant to return to where we began. So a GA that evolves 1000-bit-long IPD competitors is generating FSCI? Nakashima
Mr kairosfocus, I am not claiming that all cases of functionality follow the pattern of islands of function in a sea of non-function, only that this is relevant to certain key cases studied using the concept FSCI. I agree these are the cases of ultimate interest, but along the way you have made a number of claims, which I am hoping you can clarify. In particular, cases where modest perturbation at random will as a rule derange function. Most digital symbolic codes and code based systems are like that: change the sufficiently long string at random, and it will usually either become a non code word or inappropriate to its context. Actually, I think that the longer the string, the more likely you are to preserve function after a single letter change, but that depends very much on the problem. In terms of chemistry, the cube-square law and the categorization of amino acids as hydrophobic or -philic mean that long strings are resistant to losses in function due to small random perturbations. This is where the rubber meets the road, and no amount of generalization on either side of the argument will resolve the issue; only experiment will. (Genetic algorithms are very particular about what they allow to change at random — the “genes”, not the rest of the program or the operating system that supports it!) Sir, are you entering this as a serious comment? This is the error that led to Gil Dodgen receiving so much ridicule. Nakashima
PPPS: A bridge or poker etc hand can be assigned a target zone such that it is complex and specified to some threshold or other (though such hands in general will not pass observable-universe-level CSI thresholds . . . cf the calculation on Dembski's metric in Weak Argument Corrective no 27). But, it has no directly observable function in a system, unlike machine code in a PC or DNA code in a cell. Right from Orgel, the latter has been a specific relevant context for FSCI as an issue. OBSERVE real world function, then look at the degree of complexity and what happens on perturbing it. Is it reasonable that so much complex functionality could arise by chance + necessity only? Once 500 - 1,000 bits info storage capacity is passed, that becomes a most unlikely explanation relative to the known source of such FSCI: design. kairosfocus
PPS: And, I have used FSCI in particular in contexts of complex digital information (and things reducible to that). DNA is a digitally coded information-bearing molecule that uses a four state basic code element, so it is entirely appropriate to speak of it in that light and draw comparisons to other cases of codes and algorithmic instructions. kairosfocus
PS: Nakashima-San, please, please! (Cf Section A of my always linked where I discuss information [not to mention Appendix 3 where I speak to CSI, FSCI and related concepts in the context of their roots and relevance]; you have spoken ignorantly and dismissively.) Furthermore, we have been discussing a very clear set of cases of observed function, which are quite publicly available: posts in this blog, the wider Internet, long enough [143 ASCII character] text strings that make sense in English, program code, PC screens, assembly of a house from its parts, assembly of a flyable jet plane from its parts, DNA-RNA and the cell's executing mechanisms. This is not at all "private" or "imprecise." kairosfocus
Mr Jerry, Dembski was trying to be too general and develop a system that would determine for all entities whether they were designed or not while in terms of evolution the interest was much more narrow. There was no need for this more generalized concept that seemed to befuddle everyone. That is an interesting perspective. It goes to the heart of whether FSCI is a general, abstract concept that can be calculated for bridge hands, sequences of coin flips, etc. or whether it is specific to biological contexts. This is why I spoke earlier of FSCI of the output of a computer program vs that of a beaker of chemicals. But it seems that Mr Kairosfocus has used FSCI in very non-biological contexts, so again, I await clarification from him. Nakashima
Nakashima-San: I am not claiming that all cases of functionality follow the pattern of islands of function in a sea of non-function, only that this is relevant to certain key cases studied using the concept FSCI. In particular, cases where modest perturbation at random will as a rule derange function. Most digital symbolic codes and code based systems are like that: change the sufficiently long string at random, and it will usually either become a non code word or inappropriate to its context. (Genetic algorithms are very particular about what they allow to change at random -- the "genes" not the rest of the program or the operating system that supports it!) Observe in this regard that there is a fair degree of error detection and correction routinely carried out on DNA in the living cell. GEM of TKI kairosfocus
Mr jerry, To measure the complexity of the DNA string just as one measures the complexity of a word, sentence, paragraph, line of code, module or program one calculates the likelihood of the sequence of symbols, or in the genome, the DNA sequence to assess its likelihood. Yes, -log2(p(x)). Wouldn't it be nice if Mr Kairosfocus was in agreement that this simple definition was appropriate, since so many other people use it! That is all I'm asking for - agreement on a precise definition. Or not. Mr KF could also simply declare that FSCI has no precise definition, that 'functional' is an adjective like 'pretty' - its meaning completely private. I'm just not willing to assume I know what Mr KF means. Look at how much wrangling there was on the Weasel threads over terminology, with invented terms like quasi-latching springing up like mushrooms after the rain. I agree with Mr Atom, it is better to let the man speak for himself. Nakashima
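For what it is worth, the simple definition cited there fits in a few lines (a sketch; the uniform, independent-symbol model -- p = 1/4 per nucleotide for DNA -- is an assumption of the example):

```python
import math

def sequence_information_bits(seq, alphabet_size=4):
    # -log2(p(x)) under a uniform, independent symbol model; for four
    # equiprobable bases this is 2 bits per nucleotide. Computed as
    # len * log2(alphabet) to avoid underflow on long sequences.
    return len(seq) * math.log2(alphabet_size)

print(sequence_information_bits("ATGGCCATTGTAATGGGCCGC"))  # 42.0 bits
```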
Mr Atom, That material from Hazen 2007 is saying that islands of function exist in a particular fitness landscape, not that they are intrinsic or necessary to the definition of functional information. As an example, if all the functional points on a landscape were arranged like Mt Fuji, gradually sloping up to a single peak, they would have the same functional information measure as a landscape where each functional point was a pole sticking up out of the sea, separated from any other pole by miles of flat surface. (That is how they were drawn by someone, Douglas Axe?) Nakashima
Mr Krondan, You are free to edit Wikipedia to bring more attention to that account, if you like. Nakashima
Note to ID opponents: Keep throwing rocks at KF. By all means, feel free to split some more hairs. Upright BiPed
nakashima wrote:
I find it strange that some sites never credit Huxley for admitting and retracting his error.
Even stranger is the way people gloss over a contemporary account of the Bathybius scam, such as Joseph Cooke's, in favour of that be-all, end-all fount of truth, Wikipedia. Vladimir Krondan
Some further footnotes: I see a considerable exchange over FSCI (the functionally defined subset of CSI) happened yesterday. Jerry has just captured the essential point, given what we know about how cell based, DNA and RNA-using life operates:
“Nothing in Biology Makes Sense Except in the Light of functional complex specified information.”
In that context, it would be useful to remind ourselves of Orgel's original remark from 1973, as already noted at 82:
In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.]
In short, Orgel makes a two-part distinction, points out examples and counter examples and draws a conclusion. In so doing, he sets the term "specified complexity" in a bio-functional (and of course -- given the significance of DNA and the genetic code -- informational) context. Thus, to deduce from it and explore the significance of BOTH (i) complex specified information in general, and (ii) functionally specified, complex information are plainly legitimate. Dembski has provided a general metric, Abel et al have provided a narrower metric on the average information per symbol [aka informational entropy] in light of observed sequences, and Hazen is using a very similar concept with a threshold of function. The point there is that function is a macro-observable, which is compatible only with a cluster of components that are so fitted together as to be at an operating point -- borrowing a term from amplifier design. This means in turn that we have a recognisable macrostate, which can be distinguished from a non-functional macrostate; thence we can in principle do comparative microstate counts, thence get to entropy metrics a la Boltzmann's S = k ln W, so through Brillouin's negentropy formulation, to information. Or, following Jaynes et al, we may bridge to information on the informational interpretation of entropy as I excerpt and discuss in App 1 of my always linked. Onward, as Bradley et al following Yockey et al show [observe my point 9 in that appendix], this ties into the CSI concept in a functional context. This state space, macro-micro distinction and compatibility leading to comparative counts and info and entropy metrics is the underlying thinking that is always at work in my discussion of FSCI. It is also directly relevant to the issue of the origin of life: bio-function is in effect a recognisable "macro" phenomenon, which is compatible with certain configurations of biologically relevant macromolecules, which are observed to be created based on digital code and energy-using informational processing and associated organisation into functional structures at operating points. Structures that are fairly easily fatally perturbed, e.g. by radiation (which mostly creates free radicals from water molecules, triggering at random thermodynamically favourable reactions, in turn breaking up that fine-tuned complex, co-ordinated functionality). This demonstrates the reality of islands of function in a sea of non-function. That islands of function often exist in a sea of non-function is an extension of remarks by Denton on what would happen if letters were concatenated at random in his 1985 work, and here at UD it is GPuccio [GP -- are you out there watching?] who seems to have first hit on the happy phrasing and imagery. Now, as a result of that, we can ask about thresholds of complexity that make it utterly unlikely that such function originated by chance. The answer is that when specificity and complexity pass a threshold, it is maximally unlikely that available resources can move from plausible initial conditions [prebiotic "soups" especially] by undirected chance plus blind mechanical forces. But by contrast, it is well known that FSCI is produced by intelligent agents, linguistic and algorithmic text strings being an obvious case in point. So, the threshold is one of inference to best explanation. For that, we know that the observed cosmos has about 10^80 atoms, and that 10^25 s and one state per 10^-43 s [the Planck time] are a reasonable cluster.
So, for a unique functional state 500 bits of information [~ 3 * 10^150 configs] is a good marker. Squaring that, we have a situation where the 10^150 states reachable by the observed cosmos across its thermodynamically plausible working life would only be able to search less than 1 in 10^150 of the possible configurations. So, it is reasonable that 1,000 bits of information storage capacity [= Shannon information] in an observed functional system that is vulnerable to modest perturbation will not be discoverable by undirected processes, as sampling 1 in 10^150 of the config space is equivalent to marking one atom for one instant in the observed cosmos, and then using a space probe capable of time travel, traversing the entire observed cosmos at random, to pick up just one atom, just once -- and voila, it is the marked atom at the marked instant. But such systems are routinely produced using imagination, foresight and design by intelligent actors. And, this is a matter of easily observed empirical fact, starting with posts in this thread and the functional information displayed on your PC screen when you access it. In short, it is maximally unlikely that on the gamut of the observed cosmos, the observed digital-information-based cellular life originated by chance + necessity alone. [Recall, independent life forms start out at about 300 - 500 k base pairs.] Similarly, when we observe that major body plan innovations credibly call for leaps of 10 - 100+ million bases in DNA, and associated cellular machinery to give it effect, we can infer that major body plans are not credibly explained on chance + necessity alone. These observations are very compatible with design. The real problem is that one possible candidate for such a designer -- especially given the evident fine-tuning of the cosmos that facilitates such life -- sounds a lot like the God of theism. (And worse, in Rom 1:19 - 22 or so, a major text in the specifically Judaeo-Christian manifestation of theism, it is baldly stated that attributes of the Godhead are discernible from the features of creation, rendering men "without excuse" for rejecting God. [This is actually a point of empirical testability for that tradition, as had it been obviously failed we would hear no end of gloating on the point; but when that test seems instead to have been passed, it becomes a lot more cutting in its force.]) And that seems to explain a lot of the intensity we have seen: much is at stake, much more than merely science based on a very reasonable inference to best explanation per FSCI and its known sources, that in any other context would hardly be worth noticing. GEM of TKI kairosfocus
Let me provide a brief history of FSCI from my point of view. The term CSI is a term developed by Bill Dembski. Anyone with a different understanding of its origin, please chime in. Dr. Dembski tries to formulate through mathematics a rigorous definition of design using this concept of CSI. As a newcomer here in 2005 I watched as people used the term CSI but witnessed what I describe as a floundering to describe it precisely in a non-mathematical fashion. I still don't understand the concept because I believe it only makes sense in terms of the mathematics. Early in 2007, there was another long discussion of just what CSI was, with no apparent agreement by anyone on just what it meant. Then two things happened. bFast said it was his understanding that CSI just was information that specified something else. That made sense to me. Well, that explained DNA, sentences and computer programs but did not explain bridge hands, certain coin flips, or Mt. Rushmore. Well, in a way it describes Mt. Rushmore but not perfect bridge hands, patterns in supposedly shuffled decks of cards, choices by political party members, or coin flips with specific patterns etc. At the same time kairosfocus came out of lurking and started contributing here and described functional complex specified information. It was then obvious. Dembski was trying to be too general and develop a system that would determine for all entities whether they were designed or not, while in terms of evolution the interest was much more narrow. There was no need for this more generalized concept that seemed to befuddle everyone. The information in DNA met this very narrow case of specified information, so why bother with CSI since it was problematic. Hence, the focus on FSCI or FCSI. It is simple to understand and some calculations can be done on the sequences without too much trouble. No one realized at the time that OOL researchers such as Hazen were focusing on this same concept in trying to understand how life arose. They were relating sequence complexity to functional information. But we have been inundated with mock complaints by the anti-ID people ever since, trying to steer the discussion to the more general CSI concept which is less well defined. And a typical complaint is that our definition is not used in real science, thus it is bogus. It is an interesting game the anti-ID people play and I often wonder what drives them to do such things. They must obviously know how simple and appropriate the concept is, yet they go on and on about its lack of scientific background or its imprecision when the concept lies at the foundation of biology. If Theodosius Dobzhansky were to make an honest statement about biology it would be "Nothing in Biology Makes Sense Except in the Light of functional complex specified information." That would be a more accurate statement than the one he made. jerry
Thank you Nakashima. Then I will venture one more contribution. If you find the 2007 PNAS Hazen paper: Functional information and the emergence of biocomplexity Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak ...they do in fact make reference to "Islands of Functionality":
Islands of Function. What is the source of the reproducible discontinuities in Figs. 1 and 2? We suggest that the population of random Avida sequences contains multiple distinct classes of solutions, perhaps with conserved sequences of machine instructions similar to those of words in letter sequences or active RNA motifs (52).
Page 5. So I think KF is in fact using Hazen's simple formulation. Atom Atom
Nakashima, Sorry, but FSCI is a very simple concept. In the genome, think transcription and translation. A string of DNA is FSCI if it enables the formation of some unrelated but useful entity through some set of intermediary processes. How difficult is that to understand? Some DNA is junk and leads to nothing. It is just a repeat of probably useless information, as opposed to useful information or FSCI. DNA which is FSCI leads to a protein or an RNA polymer which has function within the cell. Someone who dabbles in GA's cannot be so dense that this is not obvious. This is now literally Biology 101. Because we recognize the similarity of this process to language and computer programming, and give it a name, does not mean that it is not meaningful if others do not use the same name. Others have made the same assessment: it is information, it is complex, it specifies the function of some other entity. If it makes you happy, call it the Nakashima transform system. As indicated, other processes that follow this pattern are language and computer programming. Each requires an intermediary system to specify another entity that has function. To measure the complexity of a DNA string, just as one measures the complexity of a word, sentence, paragraph, line of code, module or program, one calculates the likelihood of the sequence of symbols -- in the genome, the DNA sequence. You obviously know this, but you demand exact precision, feigning that this is not a scientific concept and could go the way of phlogiston, when in fact it is used everywhere, every day, in biology. There is no worry it will disappear because you and others do not like our definition. As I said, use your own definition. I find this amusing and wonder why you and others persist in this charade. What could be the root of this faux concern for the inappropriateness of this ID concept? We could start a whole thread on this topic. jerry
Mr Atom, A contribution is never an interruption! Nakashima
Mr. Nakashima, Yes, there is no mention of that. I had the impression that KF's usage was consistent with (and equivalent to) Hazen's, since I've seen him in the past relate FSCI to the work of Durston, Abel and Trevors, which uses either Hazen's formulation or one very similar. I'll let him answer if he had a different usage in mind. (If he did, I apologize for interrupting the flow of conversation with an inaccurate reference!) Atom
Mr Atom, Thanks for the reference! But I am not sure if Mr Kairosfocus will agree that the two definitions are equivalent, because there is no discussion of "islands of function" in Hazen's functional information. Good luck with your work at EIL! Nakashima
Dear Nakashima, Sorry to jump into your thread, but Functional Information (which is equivalent to FSCI) is defined by Hazen: here. Also, earlier this week we had a breakthrough in which we would be able to directly relate functional information (FSCI) to Active Information (which is the metric developed by the EIL, and seems most amenable to use with GAs and searches in general). I'm guessing it will be the topic of an upcoming paper. Atom
Mr jerry, Very happy to respond to you, sir. First, as Mr Kairosfocus has reminded us, specified complexity has its roots in the OOL literature. I for one have no objection to deriving CSI from specified complexity. FSCI seems to be KF's private term, often used here and picked up by a few others. That is fine, as far as it goes, but to be taken seriously outside this blog it needs a clear and concise definition. Otherwise it will fall into the category of terms such as phlogiston, protoplasm and "I know it when I see it" vagueness. It seems that FSCI is measurable in bits, according to KF's frequent usage, so it would seem amenable to precision. I believe this is exactly the kind of precision that ID studies needs to earn its place in the scientific community. My encouragement is genuine.

Just as FSCI is an abstract concept, and can be applied to many non-biological models, so genetic algorithms are abstractions that can be applied in a variety of contexts. They do not model or rely on a close analogy to the cell. The broader term Evolutionary Computation is more apt, since it focuses on the evolutionary operators - a population, history, variation and selection. So the first point is that if FSCI is truly an important concept, it should be applicable to an abstract GA just as much as to a beaker of chemicals. I'm not implying that abiogenesis research is being done by anyone today via GA, though there are some relevant efforts.

More to the point are KF's frequent claims that abiogenesis is akin to solving a 1000-bit problem. Well, some 1000-bit problems are not very hard, and others are. (BTW, GAs can be used to solve problems up to a gigabit in size - larger than the human genome. Larger than the potato genome!!) How do you tell the difference? It seems to me that KF has identified FSCI-equivalent problem hardness with "islands of function". Again, the term needs a precise definition to be useful. Some problems have obvious islands of function and some don't. I'd be perfectly happy if KF said he is working towards some FSCI metric such as (# of bits in the solution) * (hardness), with a strict definition of hardness.

To bring this back to abiogenesis, it remains to be proven that actual chemical evolution happens under appropriate circumstances, and whether it is a 'hard' problem. Exactly what 'islands of function' means in prebiotic conditions is unclear. Having confirmed Oparin's guesses, chemists must try to find support for hypercycles and NK landscapes. The success or failure of that kind of investigation will confirm whether prebiotic chemistry has the 'islands of function' which KF asserts it does, by analogy with today's biochemistry.

In summary, FSCI and GAs are both abstractions from real biology, and claims are made for them on abstract problems that people hope are relevant to chemical systems. But if you want to advance our understanding of the world, you have to be a bit more precise than we have seen heretofore. My questions have been meant to help that happen. Nakashima
"Can you give me an example of something with 499 (not designed)" No, I doubt anything exists of this complexity that was also functionally specified could ever arise naturally. You would be hard pressed to find something of a few bits that wasn't designed. The number was picked because it is so large that no possible combination of atomic states since the Big Bang could lead to it. "500 (designed" probably any of your posts here "million bits of FSCI?" a typical short thread here "an almost impossible hurdle for naturalistic processes to deal with I note you say “almost”. You don’t rule it out 100% then? Why not?" Because it is theoretically possible for all the atoms to end up in one corner of the room at the same time. There is a greater than zero probability so it is not impossible. If we were in an universe that was eternal, then all is possible. We are dealing with rhetoric here and while I never say absolutely, the actual probability is quite low, requiring thousands of zeros past the decimal place before you get to a non zero integer. jerry
Jerry, Can you give me an example of something with 499 (not designed), 500 (designed) and a million bits of FSCI?
an almost impossible hurdle for naturalistic processes to deal with
I note you say "almost". You don't rule it out 100% then? Why not? Mr Charrington
Nakashima, Most of us here believe that FSCI is an impediment that the anti-ID people cannot get over/around/through. It is easy to understand, and the proposition that it does not occur in nature, excluding life and intelligent activity, seems an almost impossible hurdle for naturalistic processes to deal with. Because of this difficulty we get lots of double talk here from the anti-ID contingent and precious little that deals with this very simple idea. Stereotypical troll behavior.

It does not seem that GAs are relevant to abiogenesis, because genetic algorithms (about which I know next to nothing) deal with stuff that already exists in a cell, and I assume a GA is looking for some sort of improvement. A GA could get you non-functional FSCI - in essence not FSCI, since it does not have function - and thus the organism would die. Or it could get you modified FSCI, which is not very interesting in the evolution debate, happens all the time, and is a big ho hum. Sometimes the modified FSCI has a different function; this is interesting, and only a small ho hum. What it cannot seem to get is a completely unrelated FSCI because, as kairosfocus says, such things are rare and there is nothing functional in between, so no organism could continue to exist when the GA took it off the reservation and into no man's land where nothing can survive. Or else the thing being modified is not functional (duplicated genomic material) and the GA modifies this non-functional element till it eventually stumbles onto a far-away island of functionality. This latter part is what I understood the Gouldian worshipers to believe, and it has upset some Darwinian worshipers here.

That is my layman's understanding of kairosfocus's thoughts and GAs, and it seems to make sense to me. There really isn't any such thing as a GA or a search in nature, only a continual production of modifications which natural selection sweeps aside, except maybe a rare exception as hoped for by the anti-ID enthusiasts. Of the latter, the examples are few and very far between and nearly always trivial. Occasionally - and I mean occasionally - we get a big drum roll, breathless expectations, and then presentation of what in the long run is minutiae. So tell me where I am wrong so I do not write an invalid synopsis next time. jerry
I understand that a value of 500 bits of FSCI indicates certain design. Is there an example of something with 499 bits of FSCI? I'm interested to see the value of FSCI computed in actual examples, as it's spoken about so much. Can anyone give me an example of something with:
1 bit of FSCI
350 bits of FSCI
499 bits of FSCI
501 bits of FSCI
1,000,000+ bits of FSCI
and explain how you came to that figure? That would be great. Mr Charrington
"Chemistry is the medium; information is the message." It is a true shame then that this amazing "vital force" can be tinkered with by replication enzymes making mistakes. "A little bit of knowledge can be a dangerous thing, if unjustified extrapolations are made from it." How unintentionally prophetic. derwood
From the OP: "...Harold Urey and Stanley Miller who used a spark discharge apparatus to make the three amino acids- glycine, alpha-alanine and beta-alanine." I do hope that Meyer gave a more accurate accounting than that. In an interview, when questioned about the yield of his experiments, Miller replied: "Just turning on the spark in a basic pre-biotic experiment will yield 11 out of 20 amino acids. If you count asparagine and glutamine you get thirteen basic amino acids. We don't know how many amino acids there were to start with. Asparagine and glutamine, for example, do not look prebiotic because they hydrolyze. The purines and pyrimidines can also be made, as can all of the sugars, although they are unstable." 11 is more than 3. (from http://www.accessexcellence.org/WN/NM/miller.php) derwood
Mr jerry, Sadly, no. It has been mostly about nothing. Mr Kairosfocus advanced a very interesting idea about FSCI but seems unwilling to follow it up and discuss its implications. A discussion of GAs where fitness is decided by competition, rather than nearness to a prespecified target, and with selection based clearly on relative fitness, would be very valuable to many readers of UD (including myself, of course). It would move us past the insufferable WEASEL rag doll immensely. Nakashima
Question of the day: has Nakashima actually said anything in all his recent posts, or are they much ado about nothing? Kairosfocus, keep up the good work. You seem to have upset Nakashima with your relevant logic and facts. Maybe Nakashima and most of the other anti-ID people should retire to an appropriate cul de sac to discuss their ideas. jerry
Sensei
Is anyone going to argue that Brownian motion is intelligently designed, or that each perturbation is the finger of the deity?
I wouldn't be at all surprised! According to some here we must boil our computers if we want to accurately model Brownian motion. BillB
Mr kairosfocus, The insistence on starting from the shores of islands of function simply underscores that there is no cogent answer on getting to such a shoreline without intelligent direction. This completely misses the point of how fitness landscapes based on competition, and selection based on relative fitness, obviate arguments based on "islands of function". It doesn't matter if one molecule's reactivity is low. If it is twice another molecule's, then the first molecule will capture twice the resources over time for its reaction products. Nakashima
Mr Kairosfocus, (Indeed, we may simply observe that organisms die on modest perturbation of functional organisation.) And this is relevant to abiogenesis how? You don't seem to be grasping the essential point that there is no absolute sense of function in discussing chemical evolution. There are only relative rates of reaction. Nakashima
Mr Kairosfocus, Also, we do not observe such random number generators routinely issuing King Henry V's speech or the like. We do see intelligent agents routinely issuing linguistic and algorithmic organised sequences that exhibit FSCI. Again, apropos of nothing. A GA is not just an RNG. But perhaps you would like to return to a discussion of Polonius' speech and its generation? That went so well for you. Nakashima
Mr Kairosfocus, c] does the fitness landscape have to have islands of function before the functional context generates FCSI? Again, ever since Orgel in 1973, it has been well understood that complex functional organisation is distinct from mechanically generated order and from randomness. (Cf here the Abel et al cluster: orderly, random and functional sequence complexity.) In that context, the concepts of complex specified information and, as a relevant subset, functionally specified complex information are relevant. Relevant, agreed, but how defined? You are not making progress towards a detailed definition. You have objected to examples which you believe do not exhibit "islands of function". By what metric can anyone make that distinction? Further to this, since complex function resting on complex co-adapted and configured elements is inherently highly vulnerable to perturbation, such functionality naturally shows itself as sitting on islands in a sea of non-function. That is, the description of islands in a sea of non-function is not arbitrary or suspect, but empirically well-warranted. (We do not write posts here by spewing out letters at random . . . ) Warranted, but how measured? Alliteration is not explanation. Nakashima
Mr Kairosfocus, This is the same basic reasoning that underlies the statistical form of the 2nd law of thermodynamics. Which is apropos of exactly nothing. What is your procedure for differentiating active information sourced in the RNG from active information sourced in the code provided by me? Nakashima
Mr Kairosfocus, a] N, 123: In repeating that the FSCI must have come from the programmer . . . As the above shows, I am not making an a priori commitment (which is what he highlighted indicates) but an inference to best, empirically anchored explanation. That is, I have made a scientific rather than a philosophical inference -- it is evolutionary materialism that has introduced a priori censoring commitments on this subject, cutting off the evidence from speaking. You will need to provide more detail in step 2. Your 'inference' is one that Dembski and Marks cannot achieve; you are on the brink of making ID scientific in a way even Lewontin would approve of. Nakashima
Mr Kairosfocus, A few footnotes: It is fairly clear from the telling rhetorically strategic silence on the point above that advocates of abiogenesis and/or body plan level macro-evolution have no clear empirical evidence of the following originating by undirected chance and mechanical necessity tracing to blind natural forces: There is no point "above". My question to you, which this word smog is trying to avoid, is about your definition of FSCI. Nakashima
Yes, always Lewontin because he is the poster child of all the trolls on this site. jerry
Mr BillB, Your point could be equally well addressed to Mr Joseph. Random number generators are abstractions, part of a model of a reality that may in fact depend on Brownian motion or some other mixing process. Is anyone going to argue that Brownian motion is intelligently designed, or that each perturbation is the finger of the deity? Nakashima
OK, shorter KF: "NO, I am not going to answer your questions. PPS - Lewontin!" Nakashima
BillB, Joseph seems to be saying that his process directly outputs "random numbers" whereas the radioactive decay process only outputs "yes, decay has happened" or "no decay" - i.e. 1 or 0 with a variable, unpredictable time between each 1 or 0 (of course, the average decay time is predictable, leading to the "half life" concept). Just not "random numbers" as such, e.g. 344543, 7938284596, 238957438564, etc. Is that about the size of it, Joseph? Out of interest, Joseph, what is the range of random numbers that your diode can generate? Mr Charrington
KF, Are proteins numbers? Have numbers been observed to exist outside of human culture? Can stochastic processes affect DNA replication in a way that can be approximated numerically with the aid of random number generators? BillB
Joseph, Do you believe that randomness does not occur in nature save for the inventions of intelligent agents? BillB
Nakashima-san, I used to work in the encryption industry. Our products used random number generators. The old stuff used a noisy diode. That noise was then input to a flip-flop or counter and the output was the random number generated from that noise. It took design engineers to create it. Joseph
dbthomas, Just how is radioactive decay a random number generator? Joseph
Or here: Bathybius. Huxley realized that he had been too eager and made a mistake. He published part of the letter in Nature and recanted his previous views. Later, during the 1879 meeting of the British Association for the Advancement of Science, he stated that he was ultimately responsible for spreading the theory and convincing others. Most biologists accepted this acknowledgement of error. I find it strange that some sites never credit Huxley for admitting and retracting his error. Perhaps it doesn't fit their dramatic preconceptions. Nakashima
Any of you guys remember "Bathybius"? This was a most important materialist assault at the time. Because of Bathybius, we were all supposed to stop believing in God and so on. Evolutionists said it was a vast sheet of living proto-blob under the oceans, from which all life sprung. It was discovered by Huxley, but then the whole thing turned out to be a scam. Read about it here: Bathybius Vladimir Krondan
Sorry for the late return to the thread. dbthomas, in my question regarding the stop codon you redirected me back to your previous post at 68. My response at 73 was so brief because I literally had 3 minutes before my plane took off. I am now happy to return your post at 68 for a closer look. You say:
TAG doesn’t mean ’stop’ at all. We say ’stop codon’ because it describes its function. It simply doesn’t match any tRNAs, but does react with proteins called release factors, and so translation stops. The ribosome doesn’t need to ‘know’ its ‘meaning’.
This sells the process a little short, don't you think? Firstly, we have to look at the phenomenon of "stop" in context, which you seem to have completely ignored. The missing context centers around a chain of nucleotides in DNA that symbolically represents the proteins and processes that are required for living tissue to successfully operate. A function must exist which brings about an orderly end to the process of protein synthesis when the process has completed the sequencing of amino acids in a protein. That function within the process is brought about by a chemical signal along the chain of nucleotides which has the specific intent to end the process. The key word here is process. No one is suggesting that a bucket of thymine, adenine, and guanine means "stop". However, within the context of reality, it would be hard to argue that a stop codon is merely a human description, and not an actual signal within the process indicating that the amino acid sequencing is complete (so "stop" the sequencing).

You say the T-A-G triplet (once transcribed) "simply does not match any tRNAs" and then go on to say release factor proteins come into play. How fortuitous that those release factors (and the tRNAs themselves, etc.) just happened to be synthesized and waiting inside the cell. Once again, you have discarded the context. This phenomenon is taking place within a cell (actually within a certain part of a cell). That cell has constituent parts which exist there for the specific and organized purpose of cellular function. The specialized release factor proteins are part of the system. Neither they, nor any other constituent parts of the cell, would exist there at all if "stop" did not mean "stop". In other words, they all required "stop" to mean "stop" so that they can be part of the process where "stop" means "stop". To assert that the resulting mechanical effect of the stop codon is the cause of the stop codon is to say that the effect of the cause is the cause itself, and perhaps even the cause of the cause. In a system that is well known to be physico-dynamically inert (particularly in regards to the actual sequence of nucleotides, such as T-A-G) that assertion has a certain ring of intent to it.

I can suppose that if I asked why the coding of the 3 billion base pairs of the human genome exists in the order it does, you could simply answer "so that humans are made". And if I argued that it could not rationally come to be organized by a mechanism that operates at maximum uncertainty (like chance), then you could simply posit the long period of time that life has existed on Earth. You could then make a meaningless appeal to selection as the organizing force. Both of these explanations would, of course, ignore that the sequencing of DNA has no physical cause to exist at all, and that organized complex life began on Earth almost immediately after the planet cooled. Perhaps if my hastily posted question (as I was in the airport) had been more specific, your response would have been less trivial and more useful. ID proponents are looking to materialists to provide material explanations that are based on what is known about material causes (and that do not contradict what is already known about material causes). Perhaps you could have given us an empirical example of other naturally occurring complex algorithms where such analogous phenomena as a "stop" codon exist. Do you have any such examples? Upright BiPed
Onlookers: A few footnotes: It is fairly clear from the telling rhetorically strategic silence on the point above that advocates of abiogenesis and/or body plan level macro-evolution have no clear empirical evidence of the following originating by undirected chance and mechanical necessity tracing to blind natural forces:
1 --> Computer languages, codes, algorithms and organisation of data structures. 2 --> Functionally specific, complex information (and broader specified complexity) 3 --> Irreducible complexity (especially that based on finely-tuned mutual adjustments to meet at an operating point).
Each of these is well known and routinely observed to be the product of intelligent design. As well, the challenge of finding target zones of function in relevant configuration spaces with vast seas of non-function rapidly exhausts the search resources of our observed cosmos. So, such phenomena, credibly, are reliable signs of intelligence. Why that inference is being so stoutly resisted is because of its possible worldview-level implications, not anything to do with its empirical weight. (In other words, a la Lewontin et al, we see that an imposed a priori commitment to materialism is blocking and censoring out the inference to what would otherwise be the obvious best explanation.) Now, a few points above require a note or two:

a] N, 123: In repeating that the FSCI must have come from the programmer . . . As the above shows, I am not making an a priori commitment (which is what he highlighted indicates) but an inference to best, empirically anchored explanation. That is, I have made a scientific rather than a philosophical inference -- it is evolutionary materialism that has introduced a priori censoring commitments on this subject, cutting off the evidence from speaking.

b] If you have a solid way of differentiating between the active information input by the programmer and the active information input by the random number generator, you have solved a very interesting problem for ID. A random number generator is of course strictly capable of making an avalanche of rocks down a hillside fall into any particular shape, including the shape: WELCOME TO WALES. However, as I have pointed out long since, the at-chance configurations that do not fulfill any linguistically meaningful pattern are so much more abundant in the config space than those that do that we do not expect to see such. This is the same basic reasoning that underlies the statistical form of the 2nd law of thermodynamics. By contrast, intelligent designers routinely arrange rocks to form such complex, linguistically functional configurations, and do many other similar things. The inference to best explanation is therefore obvious, and is a longstanding design theory technique.

c] does the fitness landscape have to have islands of function before the functional context generates FCSI? Again, ever since Orgel in 1973, it has been well understood that complex functional organisation is distinct from mechanically generated order and from randomness. (Cf here the Abel et al cluster: orderly, random and functional sequence complexity.) In that context, the concepts of complex specified information and, as a relevant subset, functionally specified complex information are relevant. Further to this, since complex function resting on complex co-adapted and configured elements is inherently highly vulnerable to perturbation, such functionality naturally shows itself as sitting on islands in a sea of non-function. That is, the description of islands in a sea of non-function is not arbitrary or suspect, but empirically well-warranted. (We do not write posts here by spewing out letters at random . . . )

d] DBT, 125: two words: radioactive isotopes. A sample of radioactive material does not generate and issue random NUMBERS; it simply has atoms that decay stochastically. Since we have observed and analysed that stochastic pattern (and others like it, e.g. Zener or sky noise), we then use our intelligence to create machines that generate random numbers using the outputs of that stochastic behaviour.
(And we can also make pseudo-random number generators that can more or less convincingly mimic that behaviour.) Joseph is clearly right:
Can you show us a random number generator arising via nature, operating freely? That would help your case…
Also, we do not observe such random number generators routinely issuing King Henry V's speech or the like. We do see intelligent agents routinely issuing linguistic and algorithmic organised sequences that exhibit FSCI.

e] N, 126: There is no absolute fitness landscape that all population members experience equally in GA systems that focus on competition rather than targetted search. Again, the islands-of-functionality-in-a-sea-of-non-function pattern is a natural one for organised complexity. And absence of function is fairly obvious empirically. (Indeed, we may simply observe that organisms die on modest perturbation of functional organisation.)

f] An absolutely low function can still be a strong relative function. The material issue is not competition among functional states of whatever high or low level, but getting to initial function without intelligent direction. The insistence on starting from the shores of islands of function simply underscores that there is no cogent answer on getting to such a shoreline without intelligent direction. GEM of TKI kairosfocus
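As a gloss on point d], what such a machine does is post-process stochastic event timings into bits. A toy sketch, with simulated exponential gaps standing in for a real detector's measured decay intervals, and a von Neumann-style comparison to debias the output:

import random

def decay_bits(n, mean_gap=1.0):
    # Compare successive exponentially distributed inter-event gaps (a stand-in
    # for measured radioactive decay intervals); emit one unbiased bit per pair.
    bits = []
    while len(bits) < n:
        t1 = random.expovariate(1.0 / mean_gap)
        t2 = random.expovariate(1.0 / mean_gap)
        if t1 != t2:                # discard (vanishingly rare) exact ties
            bits.append(1 if t1 > t2 else 0)
    return bits

print(decay_bits(16))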
Mr Joseph, random.org Personally, I am not convinced true randomness is necessary. As in many things evolutionary, I'm pretty sure it is a relative measure that matters, not an absolute measure. A pseudo-RNG with a period longer than the age of the universe (for example) would serve just as well. Better in some sense, because experiments are repeatable, using the same seed. This focus on the relative applies to fitness, of course. There is no absolute fitness landscape that all population members experience equally in GA systems that focus on competition rather than targeted search. This is another argument against "islands of function". An absolutely low function can still be a strong relative function. Nakashima
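Nakashima's repeatability point in one concrete illustration (CPython's default generator happens to be a Mersenne Twister with period 2^19937 - 1, far more states than any physical experiment could ever consume):

import random

rng_a = random.Random(42)  # seeding fixes the entire stream,
rng_b = random.Random(42)  # so an experiment can be replayed exactly
assert [rng_a.random() for _ in range(5)] == [rng_b.random() for _ in range(5)]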
Well, I can't exactly show them to you, Joseph, but since I assume you accept the existence of atoms, two words: radioactive isotopes. dbthomas
Nakashima-san, Can you show us a random number generator arising via nature, operating freely? That would help your case... Joseph
Mr Kairosfocus, Thank you for the Wiki quote. I think I have edited that page in the past, so it is good to know that someone finds it useful. In repeating that the FSCI must have come from the programmer you are overstepping the conclusion of Dembski and Marks. The LCI paper simply concluded that the active information came from one of the inputs, without giving a method of determining which. If you have a solid way of differentiating between the active information input by the programmer and the active information input by the random number generator, you have solved a very interesting problem for ID. And that is a problem that is relevant here. You chose to highlight the word 'stochastic'; you could also highlight the word 'random', and then you would see that the 'mechanical' pejorative is not apt. So we come round again to this islands of function idea. Let me ask you plainly again - does the fitness landscape have to have islands of function before the functional context generates FCSI? Is there a measure of landscape ruggedness for which you can say "Above this value for this metric, FSCI exists; below this number it is merely CSI"? Nakashima
Nakashima-San: First, I excerpt Wiki on GA's:
Genetic algorithms are implemented in a computer simulation in which a population of abstract representations (called chromosomes or the genotype of the genome) of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness), and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population. If the algorithm has terminated due to a maximum number of generations, a satisfactory solution may or may not have been reached.
The highlights should show the core problem with using GAs and their claimed inspiration in "evolution" to then seek to justify evolutionary materialism: CIRCULARITY, on multiple levels. That is why I have highlighted the issue of first getting TO the beaches of functionality before one may climb to peaks of function by whatever hill-climbing method one may wish, including e.g. modest random variation and steepest ascent, etc. In short, before you can speak of differential reproductive success, you first have to get to a viable and reproducing organism, for first life and then for major novel body plans. That is why the tree of life icon is missing its tap root, and that is why there is no good mechanism for major branching. (What explains minor variations does not account for the information-threshold issue and the organised, fine-tuned, irreducible-complexity issue.) In that context, sure, GAs can move you around - by design, BTW - within an island of function, but the issue is not there; it starts with: how do you get to the shores of function in a very large non-functional space, without recourse to injection of active information? And that, BTW, is where the NFL issue comes up: you don't get the required information to create that initial functional organised complexity, based on multiple complicated interacting parts, for free - unless you are willing to resort to incredible luck indistinguishable from magic, or materialistic miracles. In that context, FSCI would not be so much "created" by a genetic algorithm as created by its intelligent designer. And by Intelligence, I mean this, courtesy Wiki as cited in the glossary above:
“capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.”
PCs and the genetic algorithms we load into their active memories do not reason, plan or solve problems; they simply execute mechanical instructions mechanically, without thought or understanding. Computers mechanically executing instructions based on their architecture are not using language in the sense that we do, as we see from the distinction that computer "languages" are artificial languages. And computer "learning" is a loose analogy. All of that applies to GAs, whether such are used to study protein folding or antenna design. (Recall also that proteins are useful because a certain cluster of related, information-rich, step-by-step assembled polymers will fold to mutually key-lock fitting shapes, and in so doing will fulfill key steps in the workings of life. To get to that cluster of nano-machines and their functional organisation puts us well beyond the threshold that the FSCI concept highlights.) GEM of TKI kairosfocus
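For reference against the Wiki description quoted above, the canonical loop is small enough to show whole. A minimal sketch with a toy "count the 1-bits" (OneMax) fitness; it illustrates only the quoted mechanics, not any biological claim:

import random

def run_ga(fitness, n_bits=32, pop_size=50, generations=100, mut_rate=0.01):
    # Canonical loop: evaluate fitness, select stochastically in proportion
    # to it, recombine pairs at a random crossover point, mutate, iterate.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(ind) + 1 for ind in pop]  # +1 keeps every weight positive
        parents = random.choices(pop, weights=weights, k=pop_size)
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, n_bits)
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                nxt.append([bit ^ (random.random() < mut_rate) for bit in child])
        pop = nxt
    return max(pop, key=fitness)

print(sum(run_ga(sum)))  # typically close to the 32-bit OneMax optimum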
BA: Thanks. GEM of TKI PS: I ask you to contact me (through the always linked). kairosfocus
Concerning biological vitalism and how it has been straw-manified by atheistic materialists, it helps to read the works of Lionel Beale and Hans Driesch. They can be found at the Internet Archive. Vladimir Krondan
bornagain77,
Mandy Moore - You’re My Only Hope - A Walk To Remember http://www.youtube.com/watch?v=q6zzKZTZ6Ro
Thanks for posting the inspiring video! She could be a great ambassador for ID. I wonder if she is a believer?? herb
Mr Kairosfocus, So, random walk based processes of generating contingent outcomes -- and remember, mechanical necessity does not generate high contingency but plays out along trajectories shaped by initial and intervening circumstances -- become irrelevant, once we are looking at the sort of recognised functionality that is vulnerable to modest perturbation. Indeed, that is why it is vitally important to understand the difference between a random walk, which has little hope of exploring the space within the life of the universe, and methods based on populations and history. For genetic algorithms, this difference is captured in John Holland's Schema Theorem. Briefly, the Schema Theorem says that exponential growth can conquer a large space. Compound interest wins again! :) There are caveats, of course; otherwise the NFL theorem would be violated. If the space is structured in such a way that it is arbitrary (history has no predictive value) or deceptive (prediction from history leads to a place where you are worse off than before), then GAs will do no better than, or worse than, random search, and NFL is preserved. Is the space of proteins (for example) amenable to GA search, arbitrary, or deceptive? One clue is the success of other predictive processes in that space. Any success in predicting protein function from sequence would indicate that this space is, in fact, amenable to GA search. I agree that this discussion of protein space (or a similar RNA space) may be of ultimate interest. But in the meantime, if I could just clarify: per your definition of FCSI, is FCSI generated by the processes of a GA running on a suitably large problem? I believe this is the position of Dembski and Marks in their LCI work, though they are still struggling with the question of tracing the sources of the FSCI. Nakashima
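For onlookers, Holland's Schema Theorem (the source of the "compound interest" remark) is usually stated along these lines: the expected number of instances m of a schema H grows geometrically while its observed fitness f(H) exceeds the population average f_avg, discounted by the chance that crossover or mutation disrupts it:

E[m(H, t+1)] >= m(H, t) * (f(H) / f_avg) * [1 - p_c * d(H)/(l - 1) - o(H) * p_m]

Here p_c and p_m are the crossover and mutation rates, d(H) is the schema's defining length, o(H) its order, and l the string length.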
kairosfocus, Thank You for the time, patience and effort, you put into explaining the intricacies of ID. I know many times those you are trying to instruct are belligerently unreasonable to the point of making it seem talking to a brick wall would be more profitable, but there are those of us who do listen to you. So keep up the good work. Here is a song for you; Mandy Moore - You're My Only Hope - A Walk To Remember http://www.youtube.com/watch?v=q6zzKZTZ6Ro bornagain77
PS: For those troubled by the issue of whether or not I am in agreement with Dembski, note that islands of function (and archipelagos) are target zones where, once one reaches the beachline, hill-climbing algorithms such as modest chance variation + differential performance leading to culling on "best performers" will be applicable. My point in giving a simple rule of thumb with a 1,000-bit info storage capacity threshold on observed functional information is that, by specifying a criterion of such vastness that the cosmos will not be able to search more than an incredibly tiny speck of it, no reasonable islands of function will be credibly accessible through whatever is comparable to an unaided random walk in the ocean. No beacons, no wafting winds, no wafting currents, no wandering birds that allow one to know one is in the neighbourhood and which direction to go when they go home to roost in the evenings, etc. In short, no active warmer/colder information that rewards mere proximity in the absence of function. Once we do that, we will very soon see that the real problem with OOL, and later on with body-plan-level biodiversity, is that there is no reasonable way to get to the shores of initial function on undirected chance + necessity. That is the conundrum that has needed to be answered by evolutionary materialism advocates for years at UD [and elsewhere], and which still stands unanswered. kairosfocus
Rob: Pardon, but you are recirculating already-answered objections, with a few twists and turns. The unanswered reductio challenge still remains. You are a known intelligence, and the very posts you just put up are instances of original - i.e. creative and more or less contextually responsive - text in English of more than 143 characters. In short, all the documentation you really need is sitting there in front of you, and is the product of your own recent creative action. Similarly, you would have done much the same had you put up source code.

And that answers the basic issue directly: computers are programmed mechanisms, designed and developed by intelligent agents. They can be programmed to carry out targeted searches of large configuration spaces, but once the spaces get big enough, they do not do so successfully by random walks in a sea of non-function dotted by isolated islands of function. Instead, they step by step carry out preset routines, and in so doing address targets based on preset algorithms. (The relevance of this to the theme of this thread has just been discussed in my response to Nakashima-San.)

By contrast, a human in a general language or programming situation exhibits volitional spontaneity and creativity: s/he is not spitting out shuffled, contextually pre-programmed responses a la Turing's test, or random shufflings, but is creating genuine, non-pre-existing novelty. [That's why so-called expert systems don't work so well outside of narrow contexts where more or less exhaustive rules and cases can be constructed (typically resulting in rather predictable outcomes) and/or deep searches across contingencies can be undertaken. Humans - a case of observed intelligences - don't need more than a fuzzy idea and some practice with examples to begin to create successful, novel, information-rich entities. (Think here of Chomsky et al on the way even infants generate novel sentences.) And, given the critical significance of surprise in many real-world situations, that difference is vital. OPTIMAL answers are usually brittle; indeed, since we have bounded rationality, the GIGO principle applies - a programmed optimum for a model may well leave out key unanticipated information from the environment. A classic case is an expert aircraft landing system that does not factor in a case where an earthquake has cracked a runway. A common-sense-using student pilot will spot that something is wrong, but a machine will go right ahead and crash the plane. Or, check out the performance of OCR systems and spell checks or grammar checks. If you trust an OCR to get it fully right, you deserve the result you will get; observe, we then use a human proofreader to correct the output. Why is that? And why is it that humans can usually read ordinary handwriting (the mess created by doctors is an exception here . . . ), but computers run into serious difficulties trying to do that?]

As to the reiterated assertion, insinuation or implication that FSCI is not well defined and so can be dismissed, I again point to the simple rule-of-thumb description/model from weak argument corrective 28:
For practical purposes, once an aspect of a system, process or object of interest has at least 500 - 1,000 bits or the equivalent of information-storing capacity, and uses that capacity to specify a function that can be disrupted by moderate perturbations, then it manifests FSCI, thus CSI. This also leads to a simple metric for FSCI, the functionally specified bit, as with those that are used to display this text on your PC screen. (For instance, where such a screen has 800 x 600 pixels of 24 bits, that requires 11.52 million functionally specified bits. This is well above the 500 - 1,000 bit threshold.)

The example of a PC screen such as the one you are reading this on is still quite relevant, and again raises the need to address what is literally right there in front of you. Rob, how did the PC in front of you generate the windows, graphics and text on it - by [A] chance plus necessity undirected by decision-making creative intelligences [as we observe and experience them - debates and dismissals over "libertarian free will" notwithstanding (onlookers, BTW, cf what happens when we revert to materialist reductionism, here)], or [B] by a process of mechanically - i.e. without common sense intervening - executing a known intelligently designed program acting on inputs, in the end through organised circuits and voltages? So much so that it is often said: GIGO - garbage in, garbage out.

(I also note that, for instance, "life" and many other key entities in science have no generally agreed precising definition, but are still very useful and important SCIENTIFIC entities. That is, I give you a counter-example to the notion that entities that have no precising definitions are not proper conceptual entities and can be dismissed. [In fact, we form concepts by abstracting intuitively from examples and then seek to construct descriptions and definitions in words, testing against examples to see if they are reliable enough to use. This then becomes the foundation of quantitative MODELS - observe, we are not here addressing realities - which can be used where relevant and reliable. I here assert that the just-above model is adequately reliable to be used as a criterion of functionally specific complex information and its empirically credible source, without needing to go beyond the fact that we OBSERVE certain entities - including ourselves - that are creatively intelligent and in that intelligence often significantly differ in actions from programmed behaviour and/or from randomness or mechanical necessity of nature. Specifically: [a] necessity gives rise to low contingency; [b] high contingency has the known sources (i) stochastic, undirected contingency (= chance) and (ii) intelligence, which often shows its presence by rational, decisional behaviour that is creative and in some cases wise, as opposed to merely optimising on a narrow model. FSCI as just simply modelled is a known, routinely observed artifact of such intelligent action, per millions of cases as can be seen on the Internet.])

It seems to me that a cycle of endless debates over words and terms and objections can only really be resolved by reiterating the still unanswered challenge:
Rob, you need to show us a case where undirected chance + necessity (i.e. nature acting freely) has credibly created, say, a 143-character string of text in English (up to a typo or two) that responds to a real-world situation. No libraries of chainable prepackaged responses or preset text strings or generating rules, or targets and rules of improvement by proximity without reference to functionality in a context of isolated islands of function, or the like, etc. This or the like has long since been on the table [for months to years here at UD], and the resort to every artifice of debate but the simple production of an empirical counter-example demonstrates clearly that you have no such good counter-example. And that in turn goes to the heart of the issue in this thread: materialistic models of the origin of life are based on maximally improbable scenarios, and are often insisted on in the teeth of the known, routinely observed source of the functional, specified complex information observed ever since Orgel et al to be a key and discriminating characteristic of life. GEM of TKI
kairosfocus
Nakashima-San: Again, once we deal with ~ 500 - 1,000+ bits of information storage capacity to carry out a function, the point is that we cannot exhaust the configuration space or even search out a significant fraction thereof. (The entire universe we observe would not be capable of searching out 1 in 10^150 of the space. The implied odds of getting to any one block of 10^150 configs are like marking just one atom for just one instant in the entire history of the observed universe, then getting into a time- and space-travelling spaceship, going anywhere in the history and locations of the observed cosmos at random, and on the very first try picking up the marked atom at just the right instant of time. That's why this is a practically unwinnable lottery.)

So, random-walk-based processes of generating contingent outcomes - and remember, mechanical necessity does not generate high contingency but plays out along trajectories shaped by initial and intervening circumstances - become irrelevant once we are looking at the sort of recognised functionality that is vulnerable to modest perturbation. For instance, take a prebiotic soup model, with empirically plausible monomers in it. To move from such soups to metabolic and/or genetic functionality by chance + necessity alone requires spontaneous generation of relevant co-adapted macromolecules, and that these be configured together in the "correct" relationships in spaces of order 10^-6 m scale. As I discuss in App 1, point 6 of the always linked, the configuration space is daunting, and the result is that the odds of getting to such life on the gamut of our observed cosmos are not materially different from zero.

But we know by routine observation - e.g. posts in this thread (pace Rob's reiterated objections . . . ) - that intelligences routinely produce FSCI-bearing functional systems. So, on inference to the best empirically anchored current explanation, the origin of life (the nominal focus of this thread) is best explained by intelligent design. GEM of TKI kairosfocus
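The scale claim above is easy to verify as arithmetic; the 10^150 figure is the one used throughout this thread for the total event-resources of the observed cosmos:

import math

space = 2 ** 1000              # configurations in a 1,000-bit space
resources = 10 ** 150          # the thread's figure for total events in the cosmos
print(math.log10(space))       # ~301.03: the space holds ~10^301 configurations
print(space > resources ** 2)  # True: even squaring the resources falls short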
kairosfocus@99:
Real cases of emergence such as how Na and Cl form common salt, have dynamical processes that we can trace.
Do all real cases of emergence have dynamical processes that we can trace?
In short the much touted objection on “uniformity” is a strawman argument.
You're conflating two unrelated points regarding uniformity. My point had nothing to do with the fact that Dembski's null hypotheses are virtually always uniform distributions. The strawman accusation is tiring, especially when it stems from your own misunderstanding.
A 800 x 600 pixel screen with 24 bits per pixel is not going to be compressed below 1,000 bits of information capacity
So when you say "11.52 million functionally specified bits," do you really mean 11.52 million functionally specified bits?
Strawman of immateriality, again.
You're an intelligent person, so you certainly know what "strawman" means, and yet you repeatedly level the charge against me without telling me how I've misrepresented your position. I even explicitly asked you to clearly state any position that I have falsely attributed to you so I can retract it. The olive branch was ignored, and I continue to get accusations of strawman. I've said before that you're a good man, kairosfocus, and I believe that, but your brand of "charity in communication" seems strange to me. A few more points:
- You might want to follow Dembski's example in including all outcomes that meet the given specification (or function, in your case) in your calculation. Dembski does it that way for a good reason.
- This would require that you explicitly state a function, rather than just saying that something is functional. The information on my computer screen has many functions, and there is a different quantity of CSI associated with each function, according to Dembski's definitions.
- You might also consider incorporating specificational resources as Dembski does, also for a good reason.
- You didn't answer my question as to whether a blank screen is functional. Ditto on the screenful of noise. Do you not see the relevance of these questions?
- How do we use the FSC of proteins to calculate the amount of FSCI that goes into a design process? What would be a ballpark figure for the amount of FSCI that went into creating, say, this sentence?
- Please point me to the documentation that shows that humans generated the FSCI in my PC and in the internet, as opposed to humans being conduits for that information. Thank you in advance. R0b
kairosfocus@99:
Second, in our experience, we are conscious, enconscienced, minded creatures, who find ourselves making choices and originating things with a breadth of range that transcends the credible reach of programming.
The reach of programming is quite vast. Unless we can solve the halting problem, we're within its reach. As to the credible reach of programming, that depends on who or what the programmer is.
Decision-making creatures, notoriously, are rational but are not predictable, nor reducible to outcomes scatterinfg along a mechnical probabilistic statistical distribution.
Rationality entails some degree of predictability. A perfectly rational person is guaranteed to make one of a set of optimal decisions. It's only within that set that their choice is unpredictable. And certainly human behavior is predictable to some degree, even some irrational behaviors. Any set of outcomes constitutes a statistical distribution, and distributions are often used to predict human behavior. To say that human choices are not "reducible to outcomes scattering along a mechanical probabilistic statistical distribution" begs the question of whether human choices are mechanical, whatever that means.
So, the real answer is that we should be at least open to the possibility that the apparent creative enconscienced reasoning and deciding consciousness that we experience and which is a premise of all intellectual activities such as science, is real.
Now we're to the heart of the debate. It sounds like you're positing something like libertarian free will, and it seems that some such idea is necessary in order to conclude that humans create FSCI as opposed to merely storing and expressing it. So instead of it being obvious and universally observed that intelligence creates FSCI, it turns out to be a conclusion based on an LFW-like metaphysic. That has been my point from the beginning. So I repeat that I have never directly observed a human generating FSCI, a statement that you earlier called "self-referential absurdity." Now you say we should be at least open to the metaphysic from which we can conclude that humans generate FSCI. It seems that your stance has softened considerably. R0b
kairosfocus@98:
So it is not an “admission” to note that we are capable of finding targets in large config spaces.
Computers and nature can find targets in large config spaces too. But I didn't ask whether we're capable of finding targets in large config spaces (and I didn't say anything resembling "admission"). The question is whether we can find sparse FSCI targets without any FSCI to guide us.
In short, you are here plainly tilting at a strawman of your own manufacture.
I asked a yes/no question on whether you make a specific claim. How that constitutes a strawman is beyond me.
Again, the point of functionally specified complex information — ever since Orgel introduced the concept — is that first, function must be OBSERVED, which acts as the specification for the complex information.
The hash challenge has nothing to do with identifying FSCI. It's a challenge to see if you can generate FSCI without using existing FSCI. (Or more accurately, FSI, since the ratio of target size to search space size comes to fewer than 153 bits.)
Back to 101: when we see a text string of 143 or more ASCII characters that functions as contextually responsive or appropriate text in English, we may draw out certain conclusions:
I have never disputed that responsive text is FSCI. I accept that arguendo, so there's no need to defend that claim.
“we know that intelligence routinely generates such FSCI.”
Your 7-step argument does not address my reasons for disputing this claim, so I'll ignore it.
This — pace your clever word choice — is not a mere ASSUMPTION, it is a fact — whether or not it is a welcome one for those who argue that chance + necessity are sufficient to explain [away?] cases of “apparent” design.
Whether or not chance+necessity are sufficient to explain cases of apparent design is irrelevant to the question of whether FSCI is well-defined. I've accepted, arguendo, that it is, so you don't need to defend it.
We further know that a server is a programmed entity, which is not carrying out decisions of its own volition, but simply mechanically executes a program, in the end through switching logic states and associated voltages in circuits.
You didn't answer my question explicitly, but the implication is that necessity (plus, I assume, chance) cannot generate FSCI, so we can rule out computer programs as originators of FSCI. Have I interpreted you correctly? R0b
Mr Kairosfocus, FSCI in the relevant context becomes a consideration when we are addressing config spaces that are large and comprise isolated islands of function. I'm not sure what you are saying. Does this context invalidate whether something is FSCI or not? I thought 1000 bits was pretty large, but we can make the example gigabits in size if you prefer. 2^10^9 is a pretty big config space. How big do the 'cliffs' have to be around these islands before something becomes FSCI? Nakashima
90DegreeAngel: The issue of the impact of randomness on a designed object is of course a complex one. If it has enough redundancy in it, it may be robust enough to still function even with moderate damage to the information. (Think here of error-correcting codes.) Insofar as there is damage to functional bits - and, beyond a threshold, an error-correcting code or the like will be overwhelmed - the number of actually functional bits will be falling as damage occurs, and the ability to recover function in the teeth of further damage will also be falling. With the degree of intricacy of function we are talking about, the threshold at which functionality fails will occur long before the number of undamaged bits falls below the 500 - 1,000 bit rule-of-thumb threshold. (Notice, experiments point to auto-disintegration of bio-function for independent life forms once the number of base pairs falls below about 300,000.) This of course bears more than a passing resemblance to the concerns under Sanford's genetic entropy. GEM of TKI kairosfocus
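A toy version of the error-correcting-code point: with triple redundancy, the decoded message survives one flipped bit per triple, and fails abruptly past that threshold. A minimal sketch:

def encode(bits):
    # Triple-repetition code: send each bit three times.
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    # Majority vote per triple recovers the message despite one flip per triple.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

msg = [1, 0, 1, 1]
sent = encode(msg)
sent[0] ^= 1                # damage one bit
assert decode(sent) == msg  # function preserved under moderate perturbation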
Nakashima-san: Following up briefly: the program is designed to undertake a targeted search within the reasonable scope of resources of the cosmos, and on a fitness landscape that is not based on islands of function in vast config spaces that are non-functional and have no beacons to broadcast the "right" direction to move in. FSCI in the relevant context becomes a consideration when we are addressing config spaces that are large and comprise isolated islands of function. It is enough for me that the GA is itself FSCI (programs being informational entities), and that the machine on which it runs exhibits FSCI. They therefore exemplify the pattern that FSCI originates in intelligent design, per our observational experience. GEM of TKI kairosfocus
I have a question for KF . . . As a student of such simulations and their weaknesses, I have to agree with much of what you said. However, I have one objection. This objection stems from the work of GilDodgen and the type of simulations he does. Gil has, correctly, pointed out that all such simulations cannot be worthwhile because they do not take into account the true reality of nature and its randomness . . . So when you take your machine that is running a simulation and expose it to radiation or chemicals that might impact the simulation and the machine itself running the simulation . . . does this increase or decrease the FSCI? 90DegreeAngel
Mr Kairosfocus, the program and the machine on which it runs are storing and expressing the FSCI, which shows (per the lines of evidence on inference to best explanation as repeatedly outlined) that they are artifacts of intelligent design. As far as you have gone, I agree with you. But you have stopped short of claiming that the population members (either the first or last generation) are my creation, which is a good thing. Had you done so, you would have fallen into the same confusion that Mr Gil Dodgen has demonstrated. The program can explicitly construct the initial population from random bits taken from well known URLs. Further, (and I hope not to confuse the issue) the small GA program could allocate a million population members in each generation, each a million bits long. I didn't construct this terabit of FSCI. Nakashima
Nakashima-San:
So if I write a GA system . . .
the program and the machine on which it runs are storing and expressing the FSCI, which shows (per the lines of evidence on inference to best explanation as repeatedly outlined) that they are artifacts of intelligent design. In outright identifying the -- obviously, highly intelligent -- author of the relevant GA, you have thereby acknowledged that. GEM of TKI PS: I am very much open to the possibility of a created self conscious, reasoning and freely deciding -- thus morally bound as well -- intelligence, even to one that we can create [hopefully someday soon]; but GA's are not credible candidates for that. kairosfocus
Mr KairosFocus, For practical purposes, once an aspect of a system, process or object of interest has at least 500 – 1,000 bits or the equivalent of information storing capacity, and uses that capacity to specify a function that can be disrupted by moderate perturbations, then it manifests FSCI, thus CSI. So if I write a GA system where the population members are competitors in an iterated prisoners dilemma with competitions running up to 1000 iterations, then you are satisfied that FSCI is being created by the GA? Each member is 1000 bits long; each bit stands for the action to take (cooperate=1, defect=0) in the current iteration. Fitness is the score of the individual at the end of an iterated competition with another member of the population. Nakashima
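Nakashima's proposal is concrete enough to sketch. Here is a minimal Python rendering of the population structure he describes; the payoff values and population size are assumptions added for illustration, with fitness scored as in a standard iterated prisoner's dilemma:

```python
import random

ITERATIONS = 1000
# Standard PD payoffs, assumed for the example: (my_move, their_move) -> my score
PAYOFF = {(1, 1): 3, (1, 0): 0, (0, 1): 5, (0, 0): 1}  # 1=cooperate, 0=defect

def random_member():
    """A member is a fixed 1000-bit play script: bit i = move on iteration i."""
    return [random.randint(0, 1) for _ in range(ITERATIONS)]

def fitness(member, opponent):
    """Score of `member` over one full iterated competition."""
    return sum(PAYOFF[(m, o)] for m, o in zip(member, opponent))

population = [random_member() for _ in range(100)]  # assumed population size
a, b = random.sample(population, 2)
print(f"member A scored {fitness(a, b)} against member B")
```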
PPS: Sparc, it should be easily evident that the case where specification is by observed function is a subset of the general set of complex specified information. Similarly, many cases of irreducible complexity -- and BTW fine-tuned adjustment to operating point of a composite system -- are also addressing subsets of CSI. (It is possible for something to be IC without being otherwise complex enough to pass the thresholds for CSI or FSC or FSCI.) kairosfocus
PS: the above set of corrective responses is not directly relevant to the original focus of the thread, but it illustrates how the materialist frame of thought leads its adherents to reason as they do on the origin of life, which is a case of the origin of bio-functional information well beyond the reasonable threshold of complexity. kairosfocus
5] If we all know that humans generate FSCI, how did we go about determining that humans generate it as opposed to merely reshuffling it? Sure enough, the worldview level issue now emerges: are we just computers, mechanically playing out our own programming? [Genetic and/or environmental . . . cf here for the implications of answering that "yes."] The first level of answer is that we ourselves exhibit FSCI so are evidently designed. Second, in our experience, we are conscious, enconscienced, minded creatures, who find ourselves making choices and originating things with a breadth of range that transcends the credible reach of programming. Decision-making creatures, notoriously, are rational but are not predictable, nor reducible to outcomes scattering along a mechanical probabilistic distribution. So, if we are conscious of acting in ways of reason, value and decision, what worldview best accounts for that? (And, neurological reductionism boils down to self-referential incoherence, and emergentism runs into the barrier that in effect it is argument by magical poofery: once matter and information reach a certain level of complexity, materialists ASSUME or ASSERT that conscious mindedness and morality etc "emerge." Real cases of emergence, such as how Na and Cl form common salt, have dynamical processes that we can trace.) So, the real answer is that we should be at least open to the possibility that the apparent creative, enconscienced, reasoning and deciding consciousness that we experience, and which is a premise of all intellectual activities such as science, is real. (And we should face the apparent self-referential incoherences of assuming reductionistic materialism, as well as demanding adequate dynamics from emergentists, noting as well that reductionism and emergentism are in reality flip sides of the same materialistic coin.)

6] Regarding the Orgel quote, it's interesting that ID proponents repeatedly use a quote that demonstrates Dembski's equivocation. "Complexity" in Dembski's "specified complexity" has nothing to do with uniformity or the lack thereof. On the contrary, Orgel showed and summarised that there is a characteristic of cell based life that is sharply distinct from the order of crystals or the haphazardness of a mass of organic tar. A distinction that cries out for adequate definition and explanation. Dembski provided one model, using the default case for probabilistic distributions per the Laplacian principle of indifference: save where we can identify a reason to bias an outcome across a contingent set, it is reasonable to assume that each possible configuration is as likely as any other. Such can be modified, of course, if such reason is presented. And, any mathematically sophisticated person should know that much. And, in fact, if we see for instance Bradley's presentation on Cytochrome C here, we will see that he and many others do reckon with non-uniform distribution cases, as has long since been published and presented; indeed, it has long been accessible two clicks away in my appendix 1, point 9, my always linked. (This raises an issue of responsible as opposed to misleading comment by informed commenters.) BOTTOMLINE: Such cases reduce the average information carrying capacity per physical bit-storing unit, but on the scope of the relevant cluster of information bearing entities in life [remember we start observed independent life forms at 300,000 4-state elements], it makes no material difference. 
In short, the much touted objection on "uniformity" is a strawman argument.

7] 93, A screen of text is highly compressible. How did you know to base the calculation on uncompressed rather than compressed pixel data? An 800 x 600 pixel screen with 24 bits per pixel is not going to be compressed below 1,000 bits of information capacity. Redundancy does not undo the fundamental issue. Strawman of immateriality, again.

8] How did you know to use the storage size of the pixels rather than of the text itself? Both the pixels of the screen and the text on it exhibit FSCI (and it does not matter the precise underlying operating system or text encoding scheme or pixel encoding scheme -- though these days these are pretty much standardised), i.e. this is a red herring.

9] What function did you choose as the basis of your calculation? Rob, you have a PC screen in front of you as you read this. ASK: What function is it carrying out right now? (In short, onlookers, this has now plainly deteriorated into selective hyperskepticism to the point of self referential absurdity. Sad to see. But, it tells us just how compelling the case that FSCI is a good sign of intelligence is on the merits.)

10] Why did you not include in your calculation all screens that serve the same [or any] function? Because we have a particular screen in front of us, and that screen uses digital strings of information to perform an observed function. In so doing, it uses in excess of 1,000 bits of information carrying capacity, and it is vulnerable to modest perturbation of said strings. In our observation -- despite tortured objections to deny the obvious and evident -- such entities are known to be designed. And, on the search space constraints, only design is a credible candidate for such an entity. So, it becomes an apt illustrative case of how FSCI is a reliable sign of intelligence. So apt, that you -- sadly -- have reduced yourself to self-referential absurdity in trying to deny its force.

11] How would you determine that a screen serves no function? Doesn't even a blank screen or a screen full of noise serve the function of indicating that your monitor or video adapter is broken? We need not try to identify non-function: the question of FSCI arises when we identify a particular case of function that uses over 500 - 1,000 bits, and is vulnerable to modest perturbations. As has been plain from the outset.

12] I've never directly observed a human generating FSCI. Including yourself when you created the posts 92 - 93 etc? In short, you are now in self-referential absurdity.

13] 94, I have no idea how to calculate the FSCI that goes into the design process. 97, I never said, insinuated, or thought that you claimed documented evidence. The comment that you're complaining about challenges any supposed evidence, including observational, of humans creating FSCI. That none is documented is a minor point. We can calculate, using several approaches -- of which the above is a simplest illustration -- the FSCI of entities that function based on information. Indeed, there is now at least one table of 35 values of such for proteins in the peer reviewed literature, by Durston et al, as is linked from weak argument corrective no 27 above on this page. In short, neither leg of your claim can stand up. (And, in the earlier comment you did make claims about "documented" evidence, which I corrected by pointing out that the given evidence that is relevant is the PC in front of you and the Internet full of cases of FSCI.) 
14] I have no intention of arguing my interpretation of "we" in FAQ#28, which may be wrong but is in good faith. Onlookers, the reference of "we" in weak argument corrective no 28 should be obvious, as should the fact that a reasonable working definition (complete with illustrative case) was presented ahead of the long string of objections answered above:
FSCI is actually a functionally specified subset of CSI, i.e. the relevant specification is connected to the presence of a contingent function due to interacting parts that work together in a specified context per requirements of a system, interface, object or process. For practical purposes, once an aspect of a system, process or object of interest has at least 500 – 1,000 bits or the equivalent of information storing capacity, and uses that capacity to specify a function that can be disrupted by moderate perturbations, then it manifests FSCI, thus CSI. This also leads to a simple metric for FSCI, the functionally specified bit; as with those that are used to display this text on your PC screen. (For instance, where such a screen has 800 x 600 pixels of 24 bits, that requires 11.52 million functionally specified bits. This is well above the 500 – 1,000 bit threshold.) On massive evidence, such cases are reliably the product of intelligent design, once we independently know the causal story. So, we are entitled to (provisionally of course; as per usual with scientific work) induce that FSCI is a reliable, empirically observable sign of design.
The meaning of "we" is plain and in a plain context. __________ Finally, observe onlookers: TO DATE, OBJECTORS TO THE FUNCTIONALLY SPECIFIED SUBSET OF COMPLEX, SPECIFIED INFORMATION ARE PATENTLY UNABLE TO GIVE US A CASE WHERE FSCI HAS BEEN CREDIBLY CREATED BY CHANCE + UNDIRECTED MECHANICAL NECESSITY. But, as we have highlighted, from the PCs in front of each of us, to the Internet we are all using, to the comments we are putting up in this thread, we exemplify just how FSCI in our observation is routinely the product of intelligent design. The very nature of the objections, as well as the resulting absurdities, tells us what the true balance of the case on the merits ever so plainly is: we are entitled to (provisionally of course; as per usual with scientific work) induce that FSCI is a reliable, empirically observable sign of design. GEM of TKI kairosfocus
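The arithmetic behind the screen example quoted above is elementary; a short Python check (the threshold constant simply restates the 500 - 1,000 bit rule of thumb):

```python
WIDTH, HEIGHT, BITS_PER_PIXEL = 800, 600, 24
THRESHOLD_BITS = 1000  # upper end of the 500 - 1,000 bit rule of thumb

total_bits = WIDTH * HEIGHT * BITS_PER_PIXEL
print(f"{total_bits:,} functionally specified bits")  # 11,520,000 = 11.52 million
print(total_bits > THRESHOLD_BITS)                    # True: far above threshold
```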
Rob: Pardon an observation: I find your onward responses astonishing, as though we are inhabiting two different worlds. Knowing that you are a longstanding commenter at UD, I confess I am tempted to immediately infer to selective hyperskepticism or even willful obtuseness trying to shift a burden of proof beyond the limits of reasonable exchange. But, the principle of charity in communication forces me to instead first infer to fundamental gaps in communication; probably driven by the impact of worldview level commitments on your part that make ID concepts particularly hard to grasp, and even make it hard to see what should be obvious facts. (After all, Galileo's objectors sometimes refused to look through his telescope, or in some cases seem to have objected to optical aberrations and used those to dismiss the reliability of the instrument, regardless of the demonstrated performance on ships coming into harbour.) I will comment on some key points, in the hope of clarifying some gaps in understanding on basic ID related empirical observations, descriptive concepts and inferences drawn therefrom:

1] 92: Do you or do you not claim that we observe humans finding sparse functional targets in vast seas of non-functionality? As we both know, humans are intelligent and undertake targeted foresighted searches. So it is not an "admission" to note that we are capable of finding targets in large config spaces. IT IS AN OBSERVED AND IDENTIFYING CHARACTERISTIC OF US ACTING AS INTELLIGENT AGENTS. (Which is why, for instance, the cited case of a screenful of functional information comprising 11 million bits or so illustrates just how deeply isolated in configuration spaces artifacts can be. 11 mn bits specifies 2^11 mn ~ 8.96 * 10^3,311,329 possible configs.) The problem -- as I EXPLICITLY pointed out above -- is that chance and necessity under an evolutionary materialist paradigm cannot credibly undertake such foresighted searches, and indeed, that is precisely a point where Mr Dawkins, in discussing Weasel 1986, acknowledged that it was "misleading":
Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. [TBW, Ch 3, as cited by Wikipedia]
In short, you are here plainly tilting at a strawman of your own manufacture.

2] The point of the hash challenge is to hopefully present a case in which there is no previous FSCI to guide us to a solution. Again, the point of functionally specified complex information -- ever since Orgel introduced the concept -- is that first, function must be OBSERVED, which acts as the specification for the complex information. Absent identified function, the default assumption is that the string of bits is probably meaningless, i.e. unspecified and exhibiting random sequence complexity. (Show us that your hash function produces a performance that is vulnerable to modest disturbance, then show us whether or not we are looking at more or less than 143 ASCII characters in the hash string. If the hash were specifically functional and longer than 143 ASCII characters, we could reliably infer from that specific string that it is designed. The use of so stringent a criterion for complexity is to make sure that chance on the gamut of our observed cosmos could not reasonably produce the effect.) Boiling down: sadly, strawman again.

3] I'm not disputing that humans write stuff on the Internet. I'm disputing that "we know that intelligence routinely generates such FSCI". Back to 101: when we see a text string of 143 or more ASCII characters that functions as contextually responsive or appropriate text in English, we may draw out certain conclusions:

a --> The string is highly specified, as it functions in ways that sharply constrain the alphanumeric values of digits in the string S1, S2, . . . S143, . . . Sn. (That is, it conforms with the rules of English text and is meaningful in the context, which locks it out of the vast majority of the configuration space of a string of length n of 128-state [= 7 bit] characters.)

b --> Such strings are plainly informational in the Shannon sense of information carrying capacity, and onward in the sense of function in a context. (E.g. this post responds to your comments at 92 etc and provides corrective information, in English, up to the odd typo or two, etc. . . . i.e. we have a beach of function leading up to the peaks of optimal function in an island of functionality. Beyond a certain point, too many typos and other errors would destroy the function, i.e. the string is vulnerable to modest perturbation.)

c --> Such a string is -- as a matter of easily observed fact -- routinely observed as produced by intelligent agents, e.g. comments in this thread; and indeed, there is an Internet full of cases in point. So, insofar as FSCI relates to such cases and other similar cases of identified function that is vulnerable to moderate perturbation and encompasses at least 500 - 1,000 bits of associated carrying capacity, it is well warranted to conclude that "we know that intelligence routinely generates such FSCI." (We could revert to Durston's FSC metric or Dembski's CSI metric, but such sophisticated mathematics are not needed on so obvious a matter.)

d --> Moreover, at the 1,000 bit threshold, we are discussing 2^1,000 possible configurations or about 10^301. This is ten times the SQUARE of the number of quantum states of the 10^80 atoms of the observed universe across a thermodynamically reasonable lifespan. That is, considering the observed cosmos as a whole as a search engine, it would not be credibly able to scan as much as 1 in 10^150 of the available configs, making this a "lottery" that is unwinnable on the cosmic scale. 
[Not even by buying up all tickets that our cosmic-scale resources allow us to.]

e --> But, intelligent agents routinely use their imaginations and cognitive abilities to generate such text strings.

f --> Extending to similar strings used in programs for digital information processing systems, the same basic considerations apply. (Even, to long enough hash strings.)

g --> And, it turns out that protein codes in DNA etc [in aggregate for any observed organism] are cases in point of such algorithmic, code-bearing functionality that is vulnerable to modest perturbation, so are also credibly designed.

4] Assuming that FSCI is well-defined, how do we know that the FSCI in the output of a process was generated by that process? Can we conclude that a web server generates FSCI because it outputs meaningful text? How about WEASEL, or ELIZA, or A.L.I.C.E.? FSCI is sufficiently well defined to be recognisable; indeed, we are dealing with a case of a descriptive term as used by Orgel, which has been somewhat elaborated and given criteria that allow us to analyse why functionally specific complex information bears a certain significance. This -- pace your clever word choice -- is not a mere ASSUMPTION, it is a fact -- whether or not it is a welcome one for those who argue that chance + necessity are sufficient to explain [away?] cases of "apparent" design. In the case of software, the software is patently itself a case of FSCI, and is designed. We know that directly as well. We further know that a server is a programmed entity, which is not carrying out decisions of its own volition, but simply mechanically executes a program by, in the end, switching logic states and associated voltages in circuits. [This gets us into the issue that there may be onward worldview level IMPLICATIONS of observing the reality of FSCI as a credible sign of design. This has no relevance to whether or not that is good empirical evidence of design, save to those who, having already made up their minds that mind beyond matter is impossible, will use that Lewontinian a priori materialistic criterion to reject evidence that does not fit it. We can only hope that such will be willing to face the implications of ending up in absurdities on their worldview's premises.] [ . . . ] kairosfocus
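The config-space numbers cited in point d can be checked directly; a small Python computation (the 10^150 cosmic-search bound is kairosfocus's stated figure, taken as given here):

```python
from math import log10

bits = 1000
space_log10 = bits * log10(2)            # log10 of 2^1000
print(f"2^{bits} ~ 10^{space_log10:.0f}")              # about 10^301

searchable_log10 = 150                   # cited bound on states the cosmos can scan
print(f"fraction searchable ~ 10^{searchable_log10 - space_log10:.0f}")  # ~10^-151
```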
kairosfocus, with regards to the charge of context shifting that you've made over several comments in two threads: I never said, insinuated, or thought that you claimed documented evidence. The comment that you're complaining about challenges any supposed evidence, including observational, of humans creating FSCI. That none is documented is a minor point. If you agree with it, can we move on to the bulk of the argument in the subsequent paragraphs? I have no intention of arguing my interpretation of "we" in FAQ#28, which may be wrong but is in good faith. I'd like to resolve your grievances and move on to the challenges to your claims. If I have misrepresented your position in any way, please clearly state the false position that I attributed to you and I will retract it. R0b
Maybe I am completely wrong but I get the impression that Dr. Dembski never talked about FSCI`but rather sticks to the term CSI that he coined. Or did I miss something? sparc
That's OK, R0b: no one knows how you create it either, so that's probably technically correct. dbthomas
Correction in the last sentence above: I have no idea how to calculate the FSCI that goes into the design process. R0b
kairosfocus:
For instance, where such a screen has 800 x 600 pixels of 24 bits, that requires 11.52 million functionally specified bits. This is well above the 500 – 1,000 bit threshold.
Since you've repeated this, I'll bring up some obvious questions regarding your calculation.
1) A screen of text is highly compressible. How did you know to base the calculation on uncompressed rather than compressed pixel data?
2) How did you know to use the storage size of the pixels rather than of the text itself? If you were to choose the latter, how would you know what encoding to assume?
3) What function did you choose as the basis of your calculation?
4) Why did you not include in your calculation all screens that serve the same function?
5) Why did you not include in your calculation all screens that serve any function?
6) How would you determine that a screen serves no function? Doesn't even a blank screen or a screen full of noise serve the function of indicating that your monitor or video adapter is broken?
For, it would be plain from the context, that I am speaking of directly observable cases, not today’s peer reviewed literature and the power games connected thereto.
I've never directly observed a human generating FSCI. That kind of event seems, at best, something we would infer by calculating the amount of FSCI before and after, and comparing the two. Do you think most people have done that? I know I haven't, especially since I have no idea how to create the FSCI that goes into the design process. R0b
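R0b's first question, on compressibility, is easy to make concrete. A minimal Python sketch (the repeated sentence and the blank frame buffer are stand-ins assumed for the example, not data from the thread):

```python
import zlib

text = ("METHINKS IT IS LIKE A WEASEL " * 200).encode()  # a "screenful" of text
raw_pixels = bytes(800 * 600 * 3)   # stand-in: one blank 24-bit frame buffer

print(len(raw_pixels), "bytes of raw pixel data")
print(len(zlib.compress(raw_pixels)), "bytes after compressing the blank frame")
print(len(text), "bytes of text,", len(zlib.compress(text)), "bytes compressed")
```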
kairosfocus:
In short, designed systems are developed based on intelligently directed, targetted search that can recognise fractional success and identify bugs then remove them. This is a commonplace of engineering work.
So now I'm confused. Do you or do you not claim that we observe humans finding sparse functional targets in vast seas of non-functionality?
Design theory does not posit to have created a one size fits all super decoding algorithm.
The point of the hash challenge is to hopefully present a case in which there is no previous FSCI to guide us to a solution. I chose a cryptographic hash so there wouldn't be an algorithm to reverse it. If there were such an algorithm, then would applying it to the hash constitute the creation of FSCI from scratch (assuming that the output is meaningful text)?
kairosfocus: [By contrast we know that intelligence routinely generates such FSCI.] [Rob:] Who is “we”? It certainly doesn’t include me. In the UD FAQ, you refer to “massive evidence” of this, but I see none. It certainly isn’t documented anywhere . . .
Rob, one type of function happens to be contextually responsive English text using ASCII code. The whole Internet stands in demonstration that it is documented that text strings meeting that criterion of function of more than 143 characters on our observation are invariably products of intelligent design.
I'm not disputing that humans write stuff on the internet. I'm disputing that "we know that intelligence routinely generates such FSCI". I stated my reasons already, but I'll repeat them. Assuming that FSCI is well-defined, how do we know that the FSCI in the output of a process was generated by that process? Can we conclude that a web server generates FSCI because it outputs meaningful text? How about WEASEL, or ELIZA, or A.L.I.C.E.? If we all know that humans generate FSCI, how did we go about determining that humans generate it as opposed to merely reshuffling it? (As an aside regarding the Orgel quote, it's interesting that ID proponents repeatedly use a quote that demonstrates Dembski's equivocation. "Complexity" in Dembski's "specified complexity" has nothing to do with uniformity or the lack thereof.) R0b
Upright @ 86:
Still waiting…on the question of how does T-A-G physically mean “stop”.
I already mentioned the basic mechanism: tRNAs don't bind to the stop codons, but release factors do. Thus, translation stops. Maybe you just forgot about that what with all the hundreds of words of intervening KF material. Anyway, if you want more detail (and not superficial analogies to silicon microprocessors), it's not particularly hard to find. Google's pretty useful for that sort of thing, I hear. dbthomas
"That’s what materialists are in denial over, especially as we have a dominant industry full of empirical evidence to see that such entities are only observed to be created by intelligent designers." A new Darwinist mantra "It would appear that the appearance of design is more apparent than ever, but a real designer wouldn't design like that." CannuckianYankee
Nice post at 87 KF. Frost122585
"That grinding noise you hear is the sound of materialistic cognitive dissonance in the morning." We do get a lot of that around here as they go over everything with a fine tooth comb to see what they can snipe at each day. What goes through such minds. Whatever it is, it is not pretty. jerry
UB: Here's a little 101 level thought or two on processors, courtesy NWE's rewrite and completion on the Wiki article:
A central processing unit (CPU), or sometimes simply processor, is the component in a digital computer that interprets computer program instructions and processes data. CPUs provide the fundamental digital computer trait of programmability, and are among the essential components in computers of any era, along with primary storage and input/output capabilities . . . . The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program . . . There are four steps that nearly all von Neumann CPUs [the common stored instruction, sequential execution type] use in their operation: fetch, decode, execute, and writeback.
This implies that we need an algorithm, a data structure for storing instructions (and operational data), and machines to fetch, decode and execute. And, such must be in an integrated irreducible complex -- for a given architecture -- that will start, cycle and terminate appropriately (with the additional requirement that o/ps must be appropriately dispatched to where they are needed). Now, let's see:
data & instruction storage -- DNA code -- genetic code (for protein assembly)
fetch -- messenger RNA (and the associated algorithm for that!)
decode and execute -- ribosomes and transfer RNA
dispatch -- endoplasmic dispatch systems, with "headers" that instruct for given AA strings, etc etc.
In that context TAG usually means: terminate chaining, initiate folding and dispatch. So, it's just about as physical as a microprocessor that does similar things with silicon circuits and electrical signals. And the physical is just as incidental to the underlying issue: we are dealing with information processing, not physics and chemistry primarily, save as the "physical layer." (Resemblance to layer-cake information and communication models is NOT coincidental.) That's what materialists are in denial over, especially as we have a dominant industry full of empirical evidence to see that such entities are only observed to be created by intelligent designers. That grinding noise you hear is the sound of materialistic cognitive dissonance in the morning. GEM of TKI kairosfocus
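The fetch-decode-execute cycle appealed to above can be shown in a few lines. A toy stored-program machine in Python (the opcode set is invented for illustration); note that the loop halts on opcode 0 much as translation halts on a stop codon, without the machine "understanding" anything:

```python
# Toy von Neumann machine: fetch, decode, execute, writeback.
# The opcode set is an assumption for this sketch:
#   0 = HALT, 1 = LOAD immediate, 2 = ADD immediate, 3 = STORE to address
memory = [1, 5, 2, 7, 3, 20, 0] + [0] * 20   # program, then data space
pc, acc, running = 0, 0, True

while running:
    opcode = memory[pc]                      # fetch
    if opcode == 0:                          # decode + execute: HALT
        running = False                      # cf. a stop codon ending a chain
    elif opcode == 1:                        # LOAD: acc <- operand
        acc = memory[pc + 1]; pc += 2
    elif opcode == 2:                        # ADD: acc <- acc + operand
        acc += memory[pc + 1]; pc += 2
    elif opcode == 3:                        # STORE: writeback acc to memory
        memory[memory[pc + 1]] = acc; pc += 2

print(memory[20])  # 12: the program ran to completion, no "understanding" required
```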
Still waiting...on the question of how does T-A-G physically mean "stop". Upright BiPed
PPPS: I should clarify -- Rob cited me from 104 in the other thread, and referred to the weak argument corrective no 28 by implication. kairosfocus
PPS: The weak argument corrective no 28 responds to the challenge: What about FSCI [Functionally Specific, Complex Information] ? Isn’t it just a “pet idea” of some dubious commenters at UD? In so doing, it provides a sufficiently adequate and exemplified description of what functionally specific complex information is, that on fair comment, it is quite evident that Rob's remarks to Jerry at 80 above are also ill-founded, dismissive, and improperly unresponsive to easily accessible information. kairosfocus
PS: It will help to document what is going on rhetorically, to cite the weak argument corrective no 28, on FSCI:
. . . FSCI is actually a functionally specified subset of CSI, i.e. the relevant specification is connected to the presence of a contingent function due to interacting parts that work together in a specified context per requirements of a system, interface, object or process. For practical purposes, once an aspect of a system, process or object of interest has at least 500 – 1,000 bits or the equivalent of information storing capacity, and uses that capacity to specify a function that can be disrupted by moderate perturbations, then it manifests FSCI, thus CSI. This also leads to a simple metric for FSCI, the functionally specified bit; as with those that are used to display this text on your PC screen. (For instance, where such a screen has 800 x 600 pixels of 24 bits, that requires 11.52 million functionally specified bits. This is well above the 500 – 1,000 bit threshold.) On massive evidence, such cases are reliably the product of intelligent design, once we independently know the causal story. So, we are entitled to (provisionally of course; as per usual with scientific work) induce that FSCI is a reliable, empirically observable sign of design.
Observe from the highlights what was cited by Rob, and what was omitted that would have given a highly material context. For, it would be plain from the context that I am speaking of directly observable cases, not today's peer reviewed literature and the power games connected thereto. kairosfocus
Footnote: (Pardon a response on a specific challenge, this is not intended to allow the thread to be side tracked through distractive objections.) Re Rob @ 79:
In cases where the vast spaces between targets are demonstrably functionless (and that means no information that aids in finding the target, since such guidance would constitute function), we humans consistently fail.
Rob, you here misrepresent the issue; perhaps because there is a fundamental divergence of perspective. In fact functionality in an "island" may vary from minimal to maximal, including a "mountain range" so to speak. Using our intellect and imagination, we routinely create systems that exhibit "likely to succeed patterns", up to a neighbourhood of complete success. We THEN do trial and error tests to get more and more function until we get satisfactory results, through debugging and development. To see what I am talking about, consider a microcontroller development or more generic program writing exercise. We do not start with random code and configurations, then arbitrarily change to see what happens. We DESIGN, we initially implement, we test from modules out to systems, and when we are in development, with a multi-fault environment, we see a cluster of difficulties that challenge us due to that multiplicity. In short, designed systems are developed based on intelligently directed, targeted search that can recognise fractional success and identify bugs then remove them. This is a commonplace of engineering work. But, in the evolutionary materialist model of the biological and pre-biological worlds, we do not have the luxury of playing with partial [module level] functionality and logical structure based testing to then adjust towards a predefined target. Functionality has to emerge on its own, without "clues" that move us from non-function to initial function. No multi-part, organised function all at once, no "fitness" to reward. And, to suggest that there is a smooth string of easily accessed neighbouring functions that takes us indirectly from basic chemicals in credible prebiotic soups to integrated life forms, or that between major body plans there is a similar path, is an assertion without empirical warrant on observations in the real world. In that context your hash function challenge is irrelevant. (Design theory does not posit to have created a one size fits all super decoding algorithm. As my note long since says in the first section: FIRST identify a function, which specifies the complex data, then check the required storage capacity. In effect, if the function is such that we are looking at 1 k bits plus, on inference to best explanation in light of experience, it is designed.) Now, too, in your onward linked, you make an unsupportable assertion:
kairosfocus: [By contrast we know that intelligence routinely generates such FSCI.] [Rob:] Who is “we”? It certainly doesn’t include me. In the UD FAQ, you refer to “massive evidence” of this, but I see none. It certainly isn’t documented anywhere . . .
Rob, one type of function happens to be contextually responsive English text using ASCII code. The whole Internet stands in demonstration that it is documented that text strings meeting that criterion of function of more than 143 characters are, on our observation, invariably products of intelligent design. Similarly, source code for a real world program will illustrate the same pattern, i.e. we also have a major industry's worth of documentation. So, to disestablish the claim, all you need to do is to provide a credible case of 143 or more ASCII characters comprising functional information along lines such as above, and which credibly came about by lucky noise and/or forces of mechanical necessity. That such counter-examples are persistently missing underscores the force of the original point. (And BTW, that is the point I made at my single comment in the relevant thread at 104.) Most likely, by "documentation" you mean stuff published in the current peer reviewed literature and passing the vetting and censorship of the Lewontinian materialist establishment. That is immediately the fallacy Galileo exposed 400 years ago: appeal to Magisterium, rather than to objective real world evidence, on a matter that can be settled directly by observation. But, even so, the recent work of Abel, Trevors et al, as well as a fairly lengthy list of other publications, documents the underlying point. Especially, once we recognise that "FSCI" is a DESCRIPTIVE TERM, not a "suspect" novel terminology. Namely, we do have such things as functionally specific, complex information bearing or expressing entities, which are fairly easy to observe: your posts and mine are cases in point. And, in the qualitative sense, these have long been identified as relevant to OOL. For instance, here is Orgel in 1973 on the distinguishing mark of cell based life:
In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.]
What design theorists such as Dembski have done is, in one direction, to generalise: (i) specified complexity, not just bio-functional complexity; and in another direction, per Abel, Trevors et al, (ii) to have defined dimensions in which such can be evaluated: orderly, random and [algorithmically] functional sequence complexity. And so, this is also actually somewhat relevant to the point of this thread. GEM of TKI kairosfocus
Lamark writes: "To me the laws of the universe aren't information. Information would have to include having the fingerprints of intelligence, evidenced by breaking the physical universes laws, as regards natural processes acting on their own." When I read your remarks, it reminded me that C.S. Lewis pointed out that a person interfering with the trajectory of a cue ball would not break the laws of physics. And studying the laws would not help us understand why the cue ball moved apart from our predictions. The point is that the physical laws were not broken, and that nature readily accommodated the supernatural (as opposed to unnatural) event. One cannot even call the interference logical or lawful. It is just will. As for information, it may be a simple way of showing oneself and giving the strong impression that there is more going on than just cause and effect explicable by natural causation. In a way, that is information too. It would be like getting someone's attention by throwing pebbles from the sidelines. Pebbles don't fly on their own. But admittedly, it is not the type of information being discussed with regard to DNA. Seems to me that God covers all the bases. He provides digital information in the DNA as a clue, and also makes nature absurd enough as to not be intuitable by the assumption that it can ultimately explain itself. As far as evidence (or information) of a designer, it's not that it is unreasonable to conclude that some entity threw the pebbles to alert you of their presence, it's that we may not want to believe there is some unseen personality watching our every move. It is preferable for many to make absolute, the relative fact that nature can explain 'some' phenomenon. Makes one wonder... is God hiding or us? Does God put on fig leaves or us? Hope that is somewhat related to the topic... Lock
jerry, appealing to the everyday usage of the term "information" seems a poor way to support your claim that "information is well defined". If the everyday usage is well-defined, what is the definition? Is it well enough defined to base a scientific argument for design on the existence of information? Regardless, ID theorists like Dembski and Durston do not use the term in the everyday sense, at least not consistently. For instance, Dembski uses it often in the Shannon sense: "Thus we define the measure of information in an event of probability p as -log2p". According to Dembski, the randomness of a coffee spill endows it with information. (Dembski's typical example is an ink spill.) Other ID proponents say the opposite. Shannon information is certainly more well-defined than the everyday usage of the term, but Dembski leaves crucial assumptions unresolved, without which even Shannon information is ambiguous. For instance, who or what is the receiver whose uncertainty is reduced by the message? Shannon information is explicitly subjective -- that is, the probability is Bayesian. ID arguments sometimes treat probability as subjective (as in the spilled ink), but then inconsistently posit conclusions that are supposedly objective (as in design inferences). This is just a long restatement of the points I made above, which, in all of your scoffing, you haven't addressed. R0b
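For reference, the Shannon-style measure R0b quotes from Dembski is a one-liner; a minimal sketch in Python:

```python
from math import log2

def information(p):
    """Dembski's cited measure: -log2(p) bits for an event of probability p."""
    return -log2(p)

print(information(0.5))          # 1.0 bit: a fair coin flip
print(information(1 / 2**1000))  # 1000.0 bits: one config out of 2^1000
```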
kairosfocus:
On search space reasons, that leads to such isolation of islands of function that random walks and associated blind mechanical forces are simply not credible explanations of origin of function. But, routinely, we know that intelligent agents produce such strings, e.g. every contextually meaningful English text string on the Internet of more than 143 characters of ASCII code is beyond the threshold.
I'll challenge this claim every time I see you make it, which is often. In cases where the vast spaces between targets are demonstrably functionless (and that means no information that aids in finding the target, since such guidance would constitute function), we humans consistently fail. For anyone who disagrees, the challenge is still open to find a 32 character sentence (capital letters and spaces only) whose MD5 hash is cb6ba5a8daf75b7d50fef95cecae78d7. R0b
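Anyone wanting to attempt R0b's challenge can test candidates mechanically; a minimal Python checker (the sample sentence is an arbitrary 32-character guess, almost certainly wrong):

```python
import hashlib

TARGET = "cb6ba5a8daf75b7d50fef95cecae78d7"  # R0b's published hash

def matches(sentence):
    """Check a 32-character candidate (capitals and spaces) against the hash."""
    return hashlib.md5(sentence.encode()).hexdigest() == TARGET

print(matches("METHINKS IT IS LIKE A WEASEL AAA"))  # almost surely False
```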
"BTW, where does Dr Meyer say that is the appropriate source? In Signaure of the Cell?" Yes. jerry
Mr Jerry, BTW, where does Dr Meyer say that is the appropriate source? In Signature in the Cell? Nakashima
Mr Jerry, Thank you for that source on Huxley. It appears to be the text of a talk he gave in Scotland, perhaps later published. In any case, it makes two points of note. One is that Huxley thought protoplasm to be complex, and expressed a sense of wonder at the complexity he saw under a microscope. The second is that he saw the ability of plants to metabolize raw materials as a proof that life is built from these same materials. However, this essay does not contain the idea, referred to in the OP, that Huxley thought the protoplasmic cell to be so simple that its origin was a simple matter. Nakashima
Actually, the Huxley source was published in 1869 in something called the Fortnightly Review. From Wikipedia Fortnightly Review was one of the most important and influential magazines in nineteenth-century England. It was founded in 1865 by Anthony Trollope, Frederic Harrison, Edward Spencer Beesly, and six others with an investment of £9,000; the first edition appeared on 15 May 1865.[1] George Henry Lewes, the partner of George Eliot, was its first editor, followed by John Morley. jerry
Nakashima, Meyer's Ph.D. at Cambridge was on the history of the Origin of Life debate so he had access to nearly everything at a primary source level. The relevant source for Huxley, according to Meyer, is On the Physical Basis of Life (1868) and can be found at http://aleph0.clarku.edu/huxley/CE1/PhysB.html jerry
Dbt, yes, you are exactly right: It has nothing to do with us. But we know about how rust happens. The question then becomes: how does it mean "stop"? Upright BiPed
"You seem to think the Watson/Crick formulation of ‘biological information’ is relying on the everyday sense of the word." Yes, I do and all your machinations are just reinforcing what I have been saying. You seem to think that because a word has more than one meaning that you have a gotcha. But all you are doing is making my point as I said when after reading the long article that "I rest my case." Because the word "information" has many connotations you think that somehow it makes one wrong when one uses it in one of its every day uses. We use the term information here all the time and it is consistent with what Watson and Crick meant, it is consistent with how a major part of the Stanford Encyclopedia discusses it and it is consistent with an every day use of the word which is why Watson and Crick used it immediately in 1953 even before they had any comprehension of the mechanism for which it might be valid. Your behavior in this is just a typical anti ID way of trying to impugn someone with anything one can possibly conjure up. Instead of having a positive conversation, the anti ID MO is to see how one can prove someone else wrong or try to make them look foolish. It is not even subtle any more and it gets tiresome dealing with such childish manners. It also reinforces that the pro ID people have legitimate things or else the argument would be about that instead of inconsequential minutiae. Keep up the irrelevant sniping. It makes our case easier for those who are trying to learn. jerry
PPPS: those who want a good overview on the protein synthesis algorithm could look here. Wiki -- on the theory of a picture being worth 1,000 words -- has two good diagrams here and here. Notice the advance "tape", read-express [append to aa chain] cycle the latter illustrates. Cf the diagram with animation here on magnetic tape action. [Magnetic tape and heads do not "understand" meanings either, indeed they just carry out electromagnetic interactions. In an irreducibly complex organised framework that physically implements information transfer.] kairosfocus
PPS: DBT re:
TAG doesn’t mean ’stop’ at all. We say ’stop codon’ because it describes its function. It simply doesn’t match any tRNAs, but does react with proteins called release factors, and so translation stops. The ribosome doesn’t need to ‘know’ its ‘meaning’
And so, TAG under normal circumstances is a halting instruction, like a NO-OP loop can function as a part of a halting phase in a machine language program. The transistors and associated components in an MPU do not "understand" machine code either; they just implement it according to a built in algorithm [these days, through a lower level program, microcode]. That does not prevent machine code from being code in an algorithmically oriented language. Last, a NO-OP can also be used for other purposes. In short, you are straining at gnats while swallowing camels here. kairosfocus
Footnote: "Jus a pass thru" . . . and took a look at DBT's linked from Stanford Enc of Phil on bio-information. Here's the intro section: ______________ >> Biological Information First published Thu Oct 4, 2007 During the last sixty years, the concept of information has acquired a strikingly prominent role in many parts of biology. This enthusiasm extends far beyond domains where the concept might seem to have an obvious application, such as the biological study of perception, cognition, and language, and now reaches into the most basic parts of biological theory. Descriptions of how genes play their causal role in metabolic processes and development are routinely given in terms of “transcription,” “translation,” and “editing.” The most general term used for the processes by which genes exert their effects is “gene expression.” Many biologists think of the developmental processes by which organisms progress from egg to adult in terms of the execution of a “developmental program.” Other biologists have argued for a pivotal role for information in evolution rather than development: John Maynard Smith and Eors Szathmary (for example) suggest that major transitions in evolution depend on expansions in the amount and accuracy with which information is transmitted across the generations. And some have argued that we can only understand the evolutionary role of genes by recognizing an informational “domain” that exists alongside the domain of matter and energy. Both philosophers and biologists have contributed to an ongoing foundational discussion of the status of this mode of description in biology. It is generally agreed that the sense of information isolated by Claude Shannon and used in mathematical information theory is legitimate, useful, and relevant in many parts of biology. In this sense, anything is a source of information if it has a range of possible states, and one variable carries information about another to the extent that their states are physically correlated. But it is also agreed that many uses of informational language in biology seem to make use of a richer and more problematic concept than Shannon's. [I add: That is, "information" here is about FUNCTION and complexity!] Some have drawn on the teleosemantic tradition in philosophy of mind to make sense of this richer concept. A minority tradition has argued that the enthusiasm for information in biology has been a serious theoretical wrong turn, and that it fosters naive genetic determinism, other distortions of our understanding of the roles of interacting causes, or an implicitly dualist ontology. [I comment: What about an implicit materialist ontology imposed by today's Magisteria as the "definition" of science?] Others take this critique seriously but try to distinguish legitimate appeals to information from misleading or erroneous ones. [I remark: And, how does one do that without begging questions, given that say DNA does use a discrete state string based code, which functions algorithmically in expressing proteins etc . . . ?] In recent years, the peculiarities of biological appeals to information have also been used by critics of evolutionary theory within the “intelligent design” movement. >> ______________ The last paragraph reveals a lot -- especially about the motives of the critics, and about the lack of substance of the criticism. (I find it also telling how ID criticisms of a priori materialist theories on evolution have been presented!) I also find the use of scare-quotes very revealing. 
After all it is not the mere chemistry of forming a D/RNA chain of monomers that has given rise to the contextually meaningful genetic code, including the significance of TAG. And TAG takes its significance in the context of an algorithm that uses a coded string in DNA and mRNA to sequence amino acids in a protein chain, TAG telling it when to quit adding to the chain. Finally, Shannon Information is really a metric of capacity to carry information, not of contextual functionality. But, when that capacity is put to work as just described, and we see that over 1,000 bits of capacity are used up in a code based algorithmic context, we can easily see that the number of possible configurations involved exceeds ten times the SQUARE of the number of states the 10^80 atoms of our observed cosmos could cycle through across its thermodynamically reasonable lifespan. On search space reasons, that leads to such isolation of islands of function that random walks and associated blind mechanical forces are simply not credible explanations of origin of function. But, routinely, we know that intelligent agents produce such strings, e.g. every contextually meaningful English text string on the Internet of more than 143 characters of ASCII code is beyond the threshold. So, since there is no good evidence that life function can be packed into less than 1,000 bits [indeed the minimum observed for independently viable life forms is about 600 k bits] then we have only one good candidate for the origin of bio-functional information: intelligence. As the just excerpted shows, the objection to this inference is not scientific but philosophical: a priori commitment to materialism, now often enforced by institutional power games that have provided tendentious redefinitions of science, a la Lewontin and his fellow members of the US National Academy of Sciences. These fail to tell us that such redefinitions lack historical and philosophical warrant. So, as we look on, it is plain that much of the above is a matter of distractions and deflections. Which is sad, but revealing on the real balance on the merits. GEM of TKI

PS: Mr Huxley of course was Darwin's Bulldog, and a champion of what he called "agnosticism," which boils down to a sophisticated, "softened" version of atheism. So, when Nakashima-San says above that "he was confident that science was the right method to cover that distance [from first life to its origins] . . . " we may properly challenge that "science" does not imply or entail "materialism," but can properly infer to the artificial (i.e. intelligent) as an empirically credible explanation where warranted. At least, if it is to be an unfettered (but intellectually and ethically responsible) pursuit of the truth about our world, in light of evidence and reasoned dialogue on the significance of such evidence. kairosfocus
Upright @ 63
Would T-A-G still mean “stop” if we all died tomorrow?
TAG doesn't mean 'stop' at all. We say 'stop codon' because it describes its function. It simply doesn't match any tRNAs, but does react with proteins called release factors, and so translation stops. The ribosome doesn't need to 'know' its 'meaning'. TAG isn't even the only stop codon. There're also TGA and TAA. And, in some cases, these stop codons don't even stop translation at all, but instead cause non-standard amino acids to be added to the polypeptide chain. So, if we all died tomorrow, the canonical stop codons would still cause translation to stop (most of the time), and atoms with 26-proton nuclei would still form rust when combined with oxygen in the presence of sufficient moisture and participate in countless other reactions, just as they always have. The labels are for our convenience. The molecules and atoms don't care. They just react.
dbthomas
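The termination behaviour dbthomas describes is easy to caricature in code. A minimal Python sketch of a reading frame halting at a stop codon (the codon-to-amino-acid lookup is elided; the stop set is the canonical DNA-alphabet one used in this thread):

```python
STOP_CODONS = {"TAG", "TGA", "TAA"}  # canonical stop codons (DNA alphabet)

def translate(seq):
    """Walk one reading frame codon by codon. At a stop codon no tRNA
    binds and a release factor would; here we simply halt the loop."""
    chain = []
    for i in range(0, len(seq) - 2, 3):
        codon = seq[i:i + 3]
        if codon in STOP_CODONS:
            break                    # translation stops; no 'meaning' required
        chain.append(codon)          # stand-in for appending an amino acid
    return chain

print(translate("ATGGCTTAGGGC"))     # ['ATG', 'GCT'] -- halts at TAG
```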
While I was unable to find a quote from Huxley to support the position of Dr Meyer (or is it Deyes'?) from the OP, I have found this from Haeckel. It derives from his History of Creation, 1876, translated into English here. Through the discovery of these organisms [Monera], which are of the utmost importance, the supposition of a spontaneous generation loses most of its difficulties. For as all trace of organization -- all distinction of heterogeneous parts -- is still wanting in them, and as all the vital phenomena are performed by one and the same homogeneous and formless matter, we can easily imagine their origin by spontaneous generation. The above refers to what Haeckel called plasmogeny, the creation of cells from a fluid mixture of organic material. This he considered no great leap, compared to autogeny, the creation of the organic materials necessary for plasmogeny from purely inorganic materials. (see p. 415-416) Having not read Signature in the Cell, I don't know if Dr Meyer actually references these or similar materials. Perhaps someone who is already reading the book (Mr Joseph?) can confirm for us. Nakashima
Mr BiPed, Naka, your comment seems to be lacking the all-so-evasive evidence for the spontaneous generation of Life. Indeed. We have made progress on understanding the problem since Huxley's day, but less progress on solving the problem. Still, we have made some progress, and by very crude methods. That is heartening. We also have other lines of reasoning which Huxley did not imagine. Cellular automata such as Evoloops show that life can arise spontaneously in a universe with different laws. Now the appropriate ID response to Evoloops would be to point out that the rules of that universe were intelligently designed. This is certainly true, and it would be an interesting piece of research to sample CA rulespace to see how common are universes that support life. This would be in line with what Wolfram called "A New Kind of Science". Nakashima
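Nakashima's suggested research, sampling CA rulespace for rules that sustain activity, can be gestured at even with 1-D elementary automata, a far simpler universe than the 2-D Evoloops one; a rough Python sketch, with an admittedly crude "boring" test:

```python
import random

def step(cells, rule):
    """One step of an elementary (1-D, radius-1) cellular automaton,
    using the standard Wolfram rule numbering on a ring of cells."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1 for i in range(n)]

def is_boring(rule, width=64, steps=64):
    """Crude screen: does a random start die out or repeat under this rule?"""
    cells = [random.randint(0, 1) for _ in range(width)]
    seen = set()
    for _ in range(steps):
        cells = step(cells, rule)
        key = tuple(cells)
        if key in seen or not any(cells):
            return True              # cycles or goes extinct: "lifeless"
        seen.add(key)
    return False

interesting = [r for r in random.sample(range(256), 32) if not is_boring(r)]
print(f"{len(interesting)} of 32 sampled rules show sustained activity")
```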
Rude-san, With respect to equating science and materialism, I was not trying to state my opinion, I was trying to convey the position of Huxley in his essay. As I said, the essay is quite short, and it is interesting to see the view of a Victorian scientist on the definition of science, and how it differs from the issues and vocabulary with which we have become familiar. Nakashima
Barb-san, @42 you give some interesting quotes, but none of them support your original contention that OOL scientists looking into Oparin-style coacervates suffered criticism based on Pasteur's biogenesis theory. Nakashima
So, T-A-G is an acronym for thymine-adenine-guanine that means "stop". That is very interesting, don't you think? But, I am worried about the permanence of your answer, that being: T-A-G is a name we give to these physical chemicals which means "stop". For instance, we as humans call the chemical iron by the name "Fe" on the periodic table. And if we all died tomorrow - and no humans existed at all - then "Fe" wouldn't have the slightest bit of meaning to it. After all, it's only a name we gave it. Would T-A-G still mean "stop" if we all died tomorrow? Upright BiPed
Have a Tagamet…take a Calgon bath. I was asking a simple question composed of four words. I truly am sorry that you’ve been so moved.
Umm, moved? If by that you mean 'somewhat perplexed' then... OK. Also: Tagamet? Heartburn medication doesn't really make sense in that context. Valium or Xanax would have been better. Tagamet does have T, A, and G in it though, so there's that, I guess.
[L]ittle did I know that you would not know the context of the question I posed.
Which is why I asked what the hell you were talking about. BTW, I'm changing my wild guess: I'm going with 'Thymine Adenine Guanine', being as it is a stop codon and also because it seems like a pretty on-topic option. If that's so, then the answer you're looking to get is: "Stop." I still can't rule out the religion-related options though, as they are equally if not more on-topic for UD. dbthomas
Goodness gracious dbt, Have a Tagamet…take a Calgon bath. I was asking a simple question composed of four words. I truly am sorry that you’ve been so moved. I asked the question because I was all caught up in the validity of arguments here at UD. With your pronounced list of 7, 8, 9 answers - little did I know that you would not know the context of the question I posed. I simply asked: “What does T-A-G mean?” Upright BiPed
Seversky, FYI, etching on the surface will not cause the disc to be lighter. The laser will slightly melt the plastic, meaning the density of the plastic around the etched area will change. There is no loss of atoms on the disc, just uneven density.
My understanding is that, on the master CD at least, information is etched into its surface by a laser beam as a series of microscopic “pits”. If that is true then the disc with information should actually be fractionally lighter than a blank by the amount of material burnt away.
In the example below, there would be an imperceptible weight change depending on the weight of the ink dyestuff and water. If the ink is lighter than the cellulose of the paper, it is even more difficult to detect. But this is still meaningless in terms of identifying information. Weighing paper could only establish that it was altered, not that it contains information. A coffee spill could also change the weight of the paper but would not contain any information, beyond the fact that coffee was spilled on the paper.
Of course, a simple thought experiment shows it could go either way. If I were to write this post on a sheet of paper with a pen, the additional information would cause the paper to become marginally heavier by the amount of ink on its surface. On the other hand, if I were to use a knife to cut out from the paper the letters making up this post, the addition of the same information would have made the paper lighter by the amount of the paper removed.
Oramus
Lock: you're overfocusing on the 'material'. That's the classic way of defining materialism: it's all about matter, which people usually think of in terms of stuff they can pick up. No one really uses it that way anymore, though. How could they, given E=mc²? A more current and accurate term would be 'physicalism'. Thus, you can encode information on actual matter like a CD or DVD or Blu-Ray, but it can also be transmitted over a wifi link. In either case, you have to have some sort of physical medium, and the 'arrangement' of that medium corresponds to the original data, whatever that was. The machinery which encodes and decodes the data doesn't have to understand a thing about the content. It only has to encode and decode in a well-defined and consistent fashion. dbthomas
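To see how little 'understanding' such machinery needs, here is a minimal Python sketch (the function names are invented for this illustration): it maps any text to an arrangement of bits and back again, treating the content as opaque throughout.

```python
# Illustrative only: a content-agnostic encoder/decoder. The function
# names are invented for this sketch; nothing here inspects the content.
def encode(message: str) -> str:
    """Map text to a bit string -- the 'arrangement' of the medium."""
    return "".join(format(byte, "08b") for byte in message.encode("utf-8"))

def decode(bits: str) -> str:
    """Invert the mapping; no knowledge of the content is required."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

medium = encode("Chemistry is the medium; information is the message.")
assert decode(medium) == "Chemistry is the medium; information is the message."
print(medium[:24] + "...")  # the first few bits of the 'arrangement'
```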
Seversky, you may be right about the disks. I don't know for sure. It was my understanding that CDs have the layer displaced but not removed in any way. But suppose there is a difference in mass, was it put there by a natural process or an intelligence? As another example, what is the difference in the mass of my monitor as a result of these letters you are reading, as opposed to mere background? The argument I prefer is a newspaper (leaving aside the obvious intelligent origin). It isn't the ink, or paper, that contains the information. It is the arrangement of them. Same with the CD, whether the mass is the same or not. In principle, the material (as you said) can only be the medium. But in saying so, you have made the distinction. As for information requiring a material medium... how could I falsify your claim? Science has inferred the existence of forces that cannot be directly observed (dark matter, etc.). Why is it assumed they are material? What is matter? You do not even know what matter is, Seversky, and neither do I. Maybe matter requires an immaterial medium and you have it backwards. There is little more dangerous than certainty amongst men. On the DVD Meyer did not go into the different definitions of information. It is my assumption that the DVD was not produced in order to sell materialistic counterarguments. In the same way, I do not expect (nor do you) that documentaries on the Discovery Channel will give arguments for design in order to challenge the prevailing view of the producers. I discovered the definitions and arguments you allude to very quickly when deciding to share ID with others here in virtual land. I have done my share of head knocking over it all. Just not here. But I do like your spunk! Lock
Oh, looks like I was wrong. This is why you think you've made a point:
Why don’t you try something simple like the common use of the term in every day English.
You seem to think the Watson/Crick formulation of 'biological information' is relying on the everyday sense of the word. I don't see how. For one, a decent dictionary will show you that there are many non-technical senses of the word. For two, that's a fairly specific description of DNA coding:
So the argument in Godfrey-Smith (2000a) and Griffiths (2001) is that there is one kind of informational or semantic property that genes and only genes have: coding for the amino acid sequences of protein molecules. But this relation “reaches” only as far as the amino acid sequence. It does not vindicate the idea that genes code for whole-organism phenotypes, let alone provide a basis for the wholesale use of informational or semantic language in biology. Genes can have a reliable causal role in the production of a whole-organism phenotype, of course. But if this causal relation is to be described in informational terms, then it is a matter of ordinary Shannon information, which applies to environmental factors as well.
That's not what people usually mean by "information" in the everyday sense. It's much more restricted. dbthomas
"I find this very interesting. As it turns out, there is indeed an “immaterial vital force” that is unique in living systems, and found nowhere else in chemistry. It’s called information. Chemistry is the medium; information is the message." I don't see how this is concluded from the article necessarily, but I'd like to understand this viewpoint. This idea of information I thought I had a handle on because genes have information stored on them. Is this a broader and different definition of information than as it relates to genes? To me the laws of the universe aren't information. Information would have to include having the fingerprints of intelligence, evidenced by breaking the physical universes laws, as regards natural processes acting on their own. lamarck
Jerry @ 48:
Thank you, I rest my case.
How so? You implied @ 37 that biology used the term uniformly:
It's the same terminology that Watson and Crick used in 1953 and which is used in every biology department all over the planet.
That's one definition, as the page I pointed you to shows, but it's not the only one. So as I see it, your case isn't resting. It died. Oh, I know why you think you've made a point: because biologists as a whole haven't "nailed down their terminology". What matters, though, is whether particular biologists use their preferred definitions consistently, and whether or not the definitions themselves are clear and unambiguous. dbthomas
Upright, what the hell are you talking about? I can think of a number of possibilities:
a. You're it
b. Thymine Adenine Guanine
c. The Almighty God
d. Technical Advisory Group
e. Talented And Gifted
f. T=20, A=1, & G=7, and 20-1-7=12, therefore T-A-G means 'L'
h. A small label I often find attached to my clothing
j. Graffiti
k. A brand of body spray
l. Triglyceride
m. A small tumor
and lastly:
n. Transcendental Argument for God
As an utterly wild guess, I'll say that final one is what you've been fishing for. dbthomas
"Excellent! Is Gil referring to probabilistic, algorithmic, or some other definition of information in #1? How far back do we have to regress conditions when we measure CSI? Bayesian or frequentist?" Why don't you try something simple like the common use of the term in every day English. I haven't got a clue what you are talking about but whatever it is, it is not necessary to understand the simple information concept used in biology. You would have no trouble explaining it to a 10 year old. Now, the article on biology and information from the Stanford Encyclopedia of Philosophy discusses this very simple form of information as well as some more esoteric forms. But all you need is the extremely simple form. jerry
"Here, take a look at this, and you’ll see it’s hardly so cut and dried as you think:" Thank you, I rest my case. jerry
SOooOOooOoooo....what does T-A-G mean? Upright BiPed
A blank disk weighs the same as a full disk, says Stephen Meyer, so information is immaterial.
But what about punch cards? Do they contain information of negative weight? Or is the same amount of CSI stored with positive weight in the punched-out pieces? Just wondering. sparc
Lock @ 3
I was already sitting down, tired from cheering and jumping up and down during the video; but, when I got to the question and answer segment and heard Dr. Meyer explain his illustration about the lack of difference in mass between an empty CD and one containing information, I was hooked. That is a powerful illustration.
My understanding is that, on the master CD at least, information is etched into its surface by a laser beam as a series of microscopic "pits". If that is true then the disc with information should actually be fractionally lighter than a blank by the amount of material burnt away. Of course, a simple thought experiment shows it could go either way. If I were to write this post on a sheet of paper with a pen, the additional information would cause the paper to become marginally heavier by the amount of ink on its surface. On the other hand, if I were to use a knife to cut out from the paper the letters making up this post, the addition of the same information would have made the paper lighter by the amount of the paper removed. Does this tell us anything other than information requires a physical medium in which to be stored? Out of interest, did Dr Meyer discuss the nature of information or explain the various ways in which it is defined? Seversky
Just a note on this analogy regarding empty vs. full CDs. Technically, there's no information on either. The information lives only in the minds of the programmer(s) and the end-users. When you physically take apart a CD which, when viewed on your computer, has pictures of the moon stored on it, you don't see the pictures. You don't really see anything at all. Instead you see *representative* bits, either ones or zeros. "Representative", because in actuality you would see electrons frozen in place on one side or another (assuming you looked REALLY closely). For a "one" is information that's not really there. And a "zero" is a different kind of information that isn't really there either. In the end, and again I'm being quite literal here, there's no *information* on the CD itself. Only a series of electrons that, when decoded by a *mind* (and only a mind), can be *assembled* into information. One might argue that it's a computer that decodes the bits, but this is only true on the surface. Because when reduced down, there was a programmer's mind responsible for the computer itself. So even here, it's the mind of the programmer that contains the *information*. Assume for a moment that there were never programmers and never computers but there was, by some freak cosmic accident, this same CD with electrons on one side or another. In this case there would be no moons. There would be nothing on it. The electrons would be *literally* randomly dispersed on the CD. There would be *no* information on it. When considering the cell, I always wonder *how* in the world those little buggers process, decode, and then produce error-free output of the information hidden in DNA. They're like tiny computers reading, decoding, and producing output of DNA as if it were source code. So in the end, information comes from *minds* and only minds (as far as I know). DNA is information. It seems to me the most likely source of said information is a Mind. (Second post ever....see you again next year :-) shackleman
jerry:
Information is well defined.
Excellent! Is Gil referring to probabilistic, algorithmic, or some other definition of information in #1? How far back do we have to regress conditions when we measure CSI? Bayesian or frequentist?
Immaterial is immaterial to biology.
You responded to my comment about the terms information and immaterial. What terminology were you referring to when you said that it's used in every biology department? R0b
Barb @ 42:
Many still hold that the Earth’s early atmosphere was reducing (containing little oxygen) because laboratory experiments showed that chemical evolution would be inhibited by oxygen.
Yeah, that's the ONLY reason anyone thinks Earth was oxygen-poor initially. Regardless of the exact details of composition, it is in fact quite well-known that oxygen was not common on the early Earth because there is evidence that such was the case. Ever heard of the Oxygen Catastrophe? You may want to look here as well: History of Earth and also here: Paleoclimatology After that, give Google Scholar a spin. dbthomas
Nakashima-san: "Since this idea logically breaks down at any theory of biogenesis, whether natural or divine, I don't know that scientists looking into OOL experiments, such as testing if Oparin's coacervates could form in primitive conditions, really suffered any criticism from this direction. If you can quote an example, it would be helpful." Let's look at some criticisms of OOL projects, including the Miller-Urey experiment. "Never will the doctrine of spontaneous generation recover from the mortal blow struck by this simple experiment," stated Pasteur himself in 1864. He demonstrated unequivocally that even minute bacteria didn't form in sterilized water protected from contamination. No experiment, to my knowledge anyway, has ever produced life from nonliving matter. Dr. Stanley Miller himself was quoted in the magazine Scientific American as stating: "The problem of the origin of life has turned out to be much more difficult than I, and most other people, envisioned." Miller was straightforward in a paper published two years after his experiment: "These ideas are of course speculation, for we do not know that the Earth had a reducing atmosphere when it was formed." (Journal of the American Chemical Society, May 12, 1955). Some 25 years post-experiment, Technology Review (April 1981) noted that: "Little evidence has emerged to support the notion of a hydrogen-rich, highly reducing atmosphere, but some evidence speaks against it." Scientific American in 1991 noted that: "Over the past decade or so, doubts have grown about Urey and Miller's assumptions regarding the atmosphere." Many still hold that the Earth's early atmosphere was reducing (containing little oxygen) because laboratory experiments showed that chemical evolution would be inhibited by oxygen. So, despite the evidence to the contrary, scientists originally thought the early atmosphere was reducing because spontaneous generation of life could otherwise not have taken place. That is circular reasoning. My incredulity stems from watching intelligent scientists conclude, without evidence, that life just happened in an uncontrolled environment, by chance, when they cannot even create life under controlled conditions in a technologically advanced laboratory. It's simple common sense. Also, consider the underlying import of such faulty reasoning: "Scientifically it is correct to state that life cannot have begun by itself. But spontaneously arising life is the only possibility that we will consider. So it is necessary to bend the arguments to support the hypothesis that life arose spontaneously." Look, I don't claim to be a philosopher, but I know a logical fallacy when I see one. Barb
dbthomas, that's an interesting link. I've often wondered why Dr. Dembski and others in ID have ignored important writers such as Susan Oyama, whose The Ontogeny of Information: Developmental Systems and Evolution is an important work in developmental systems theory. David Kellogg
What does T-A-G mean? Upright BiPed
Jerry @ 37:
It’s the same terminology that Watson and Crick used in 1953 and which is used in every biology departments all over the planet.
Oh, and you've checked them all, have you? Here, take a look at this, and you'll see it's hardly so cut and dried as you think: Stanford Encyclopedia of Philosophy: Biological Information dbthomas
Information is well defined. Immaterial is immaterial to biology. jerry
jerry:
" they've nailed down their terminology" It's the same terminology that Watson and Crick used in 1953 and which is used in every biology department all over the planet.
It's not the terms, it's the usage. Do you think that Dembski's, Dodgen's, etc. usage of the terms information and immaterial is well-defined and unequivocal, and that biology departments use the terms in the same way? R0b
" they’ve nailed down their terminology" It's the same terminology that Watson and Crick used in 1953 and which is used in every biology departments all over the planet. jerry
Cabal, I think a good cure for insomnia is to realize that some terms are used carelessly and equivocally here, information and immaterial being two of them. Dembski and Durston base their definitions on the classical concept of information, which is nothing more than a log-transformed view of probability. That would be well and good if they tightened up their idea of probability (given what, and according to whom?) and then stuck to the definitions, but somehow the terms get mysticized. A blank disk weighs the same as a full disk, says Stephen Meyer, so information is immaterial. But why should we expect probabilistic outcomes to have weight? And who says that a blank disk isn't a probabilistic outcome? And equating information with the élan vital seems a semantic jump that would clear Fonzie's shark by a mile. And what substantial (so to speak) meaning does immateriality have if it's proven true by the existence of any mathematical concept or abstraction? I think we can ask the ID community to wake us when they've nailed down their terminology, and sleep in for a long, long time. R0b
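R0b's 'log-transformed view of probability' can be stated concretely. The sketch below (Python, with an invented function name, purely for illustration) computes classical self-information, -log2(p) bits for an outcome of probability p; note that on this definition any outcome with a probability gets a value, blank disk included, which is R0b's point.

```python
# Purely illustrative; the function name is invented. Classical (Shannon)
# self-information assigns an outcome of probability p a value of
# -log2(p) bits: information as log-transformed probability.
from math import log2

def surprisal_bits(p: float) -> float:
    """Self-information, in bits, of an outcome with probability p."""
    return -log2(p)

print(surprisal_bits(0.5))      # 1.0 bit: a fair coin flip
print(surprisal_bits(1 / 256))  # 8.0 bits: one byte's worth
print(surprisal_bits(1.0))      # -0.0, i.e. zero bits: certainty carries none
```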
Middle Island-san, That's all well and good, yet when you say, "But he was confident that science was the right method to cover that distance," you are subtly equating science with materialism. On this site, one should think, that is part of what's being debated, not presupposed. A picky point, no doubt, but do you think I'd get by with anything like that on a Darwinist site? Rude
Naka, your comment seems to be lacking the all-so-evasive evidence for the spontaneous generation of Life. Upright BiPed
Rude-san, I think Huxley was arguing that ordinary organic chemistry, advancing step by step without recourse to the divine (or to design), would eventually provide an experimental proof that life could arise from inorganic sources, sufficient to convince us that life did, in fact, arise from inorganic sources earlier in Earth's history. The essay I linked to earlier is quite short, and, lacking strong evidence to discuss, he goes on at length about the methods of science. Nakashima
Information may not be the same thing as the old vitalism or élan vital (or morphic resonance in the writings of Rupert Sheldrake), but I suspect that the latter is as necessary as the former. The information in the design of a machine does not give it the will to live, nor has anyone ever come up with a theory as to how it could. ID argues that information/design arises only from intelligence. Another subject is whether life and mind are more than mere information instantiated in some mechanism. Rude
Mr BiPed, You may share Barb-san's incredulity, I will share Huxley's confidence. Science seems to be progressing in this area along the path he outlined 150 years ago. Nakashima
I hardly slept a wink last night; these words stuck in my mind and wouldn't let go:
I find this very interesting. As it turns out, there is indeed an “immaterial vital force” that is unique in living systems, and found nowhere else in chemistry. It’s called information. Chemistry is the medium; information is the message.
I tried and tried several approaches, but couldn't find my way. There must be an explanation for this, or are we really dealing with supernature, which, as far as I can understand, is beyond the reach of our observation? Is this the next step of ID theory? I'll try to formulate the question uppermost in my mind: Is this "immaterial vital force" that is unique in living systems something like 'glued' to, or somehow mysteriously integrated with, the (biological) information, making information in living systems qualitatively different from information elsewhere, say in a computer? Let's try a thought experiment: reverse engineering of a biological system by first creating a string of ACGT code in a computer. Next, our hypothetical machine converts the computer code into a real, chemical string of DNA. Then, inserting this artificially created information into whatever biological environment is suitable, would that information now not contain and express the unique immaterial vital force; would it not function like 'regular' DNA? There are a lot of other questions about this subject buzzing in my mind right now, but I guess that's enough for now, and I am looking forward to learning more about this fascinating subject. Are we about to see a real breakthrough in ID research? Cabal
Naka, "Barb-san’s comment was one paragraph implying evidence of difficulty that scientists had, and one paragraph of incredulity. I tried to address a mild factual error and question the existence of the evidence." It seems that incredulity is a shared trait. As far as the question of evidence for the spontaneous generation of life, I think the weight of the evidence is overwhemingly in Barb's favor. You may have empirical information to the contrary that you wish to share, and I as just one casual observer, look forward to your presentation. Upright BiPed
"But he was confident that science was the right method to cover that distance." Nakashima, are you here slipping in a materialist definition of "science"? Do you mean that Huxley was confident that it was chance and necessity sans design all the way down? Is confidence in science and confidence in materialism the same thing? Rude
Mr Biped, "I am almost certain this is the idea in Barb's comment." Barb-san's comment was one paragraph implying evidence of difficulty that scientists had, and one paragraph of incredulity. I tried to address a mild factual error and question the existence of the evidence. That life didn't pop out of a bucket of goo, that is certainly the idea of at least part of Barb's comment. Nakashima
Mr Deyes, "Darwin's theory generated the much-needed fodder to 'extend' evolution backward' to the origin of life." This sentence is an example of the difficulty I have in distinguishing your ideas from Dr Meyer's ideas. It sounds as if you are saying that the concept of evolution was accepted in some other, more contemporary context, and then Darwin came along and pushed the idea of evolution back in time to the very origin of life. Who holds that position? You? Dr Meyer? Darwin? I have read Huxley's lecture "The Origination of Living Beings" and I find no notion similar to what is implied here. Huxley was clear that science had made little or no progress on the experimental front, which he understood pretty clearly. (It almost seems as if the Miller-Urey experiment could have been conducted 90 years earlier.) He also doubted that 'historical' evidence from fossils had come close to the origin of life, based on an argument from apparent complexity. That is the opposite of what is said here of Huxley. Huxley was not confident of the simplicity of the cell and therefore of the origin of life. He was quite aware of the problems and the distance still to go. But he was confident that science was the right method to cover that distance. Nakashima
Nakashima-san:
Pasteur’s theory that life only comes from other life was a mid-19th century idea, that was advanced against the idea of spontaneous generation.
And the science of the 21st century has confirmed that only life begets life. Joseph
Upright BiPed, you have provided the second step of the ID two-step proof via OOL. Step 1. If life cannot be artificially manufactured, life is too complex to have evolved ==> therefore ID. Step 2. If life can be artificially manufactured, life is created by intelligence ==> therefore ID. David Kellogg
Naka, Very soon man will likely be able to manufacture Life in a lab. When he does, then Life will have followed Life - as it were. Pasteur will not be embarrassed by this, given that Life didn't just pop up in a bucket of goo. I am almost certain this is the idea in Barb's comment. Surely, you see the distinction. :) Upright BiPed
Ms Barb, Pasteur's theory that life only comes from other life was a mid-19th century idea, that was advanced against the idea of spontaneous generation. Since this idea logically breaks down at any theory of biogenesis, whether natural or divine, I don't know that scientists looking into OOL experiments, such as testing if Oparin's coacervates could form in primitive conditions, really suffered any criticism from this direction. If you can quote an example, it would be helpful. Nakashima
'kind of' yes! Lock
To Lock/Biped. You have goaded me into one more response. Please explain: "In my estimation yours is a kind of trinitarian materialist point of view." Trinitarian materialist? Graham
Lock, You may have to just ignore Graham. There are those who come here and actually have something meaningful to say. They are vital and interesting, particularly when the conversation focuses on the physical evidence for agency involvement in the Universe and the Life within it. Then there are those, like Graham, who, in their indefensible certainty, cannot question themselves for any reason whatsoever. Your willingness to do so, and to voice it openly, is anathema to him - a delightful example of what is to be mocked for his personal entertainment. Your strength he sees as a human weakness, one stemming from the stupid ideas of old. His weakness he perceives as a glorious strength - a triumphant victory in the name of reason. And so it goes... Upright BiPed
Thanks for playing Graham... Next! Lock
To Lock. I did not find it until I considered Christianity ... I thought it was a parody up until that last line. The Jesuits would love you. Graham
What does it mean? :( I generally wouldn't expect a materialist (if you are one) to ask questions about meaning. It is far more relative to the existential elements of life than the cerebral. That's ok... our emotional life must cohere with our intellect as well. Do you expect coherence and meaning? Apparently you do... And that shows that you are demanding the very principle discussed in order to question it. Can you see that? It's fine if yours is a genuine question. But it is quite apparent that you asked the question rhetorically, not in order to actually test the validity of what I said, but to cast doubt upon it by simply raising the question. Very bad approach in serious objective debate, and very common in today's courtroom dramas and political satire TV. I really do want to get this right... So you want me to take all of the diverse concepts referenced only implicitly in my abstract response to Dov, and show how they can be unified into a meaningful whole? If so, you should now understand why it is relevant... because we seek it in everything. The coherence between two or more clues (or witnesses) is necessary to truly know 'anything'. And I am not talking about absolute knowledge. I mean even minimal knowledge. The questions that mattered to me most with regard to nature, reality, and the universe, were how to unify our observations with our philosophical options regarding origins. I didn't understand my confusion then as I do now, so be mindful that I am looking back with the benefit of hindsight. Then, I didn't even know what it was I wanted, but now I believe I wanted my philosophy to match my science. It was not only an intellectual puzzle and desire, but one that would bring tremendous existential peace as well. That did not happen as an atheist, or a pantheist. I did not find it in LSD. Believe me, I tried. I did not find it until I considered Christianity with all my heart, all my mind, and all my strength. What was your question again? Lock
To Lock at #8: "... the unity and diversity of reality as a whole." Jumping Jupiter. What on earth does all that stuff mean? Does it mean anything? Graham
Words mean things. It could easily have been a genuine problem if not understood. I may be an idiot, but I am not stupid... :D Lock
Lock,
Correction to post #8 just in case there are any theological hair splitters like me out there…
Thanks for clearing that up before the thread got derailed into a theological quagmire. :D herb
Correction to post #8 just in case there are any theological hair splitters like me out there... I did not mean to express that Jesus is the universe incarnate or that he claimed to be. What I meant is that He is reality (God) incarnate and claimed to be so. It is the materialist along the lines of Dov's comments who would be claiming to be the universe incarnate. That is the equivalent of what Jesus claimed, but from a materialist angle. Anyway, I wanted to make that clear for any who noticed the problem. Lock
It amazes me that when scientists tried to test Oparin's theory, they were going against a scientifically established fact (that life only comes from life, which was established in the Middle Ages). They theorized that if conditions differed in the past, life could slowly have come from nonlife. Intelligence and advanced education were required (of the scientists) to study and even begin to explain what occurs at the molecular level in our cells. Is it reasonable to believe that complicated steps occurred in a "prebiotic soup" first, undirected, spontaneously, and by chance? Barb
Lock@11. "Why is my cross so heavy when all of its weight was borne for me? Obviously there is much I do not yet understand or just simple faith that has yet to grow." Matthew 11:28-30 (King James Version) 28 Come unto me, all ye that labour and are heavy laden, and I will give you rest. 29 Take my yoke upon you, and learn of me; for I am meek and lowly in heart: and ye shall find rest unto your souls. 30 For my yoke is easy, and my burden is light. Jude 1:20-21 20 But you, dear friends, build yourselves up in your most holy faith and pray in the Holy Spirit. 21 Keep yourselves in God's love as you wait for the mercy of our Lord Jesus Christ to bring you to eternal life. Romans 10:17 17 So then faith cometh by hearing, and hearing by the word of God. Just thought I'd share this with you. God bless. IRQ Conflict
Gil writes: "Here's what I want to know: Why did so many people whom I respected, and who told me that they loved me, indoctrinate me with an obvious lie?" First of all, don't get the wrong idea. I haven't been to church in a month. I haven't read my Bible for two months. Been in a real desert lately, dreaming of how good it was in Egypt, and watching the Egyptians play with heathen abandon. I am dead serious about this. I am a nobody... That being said, yours is the kind of question that I have only found an answer to in scripture. Here's what immediately came to mind when I read it, because the intellect fails me when faced with understanding such questions. John 14:27 Peace I leave with you; my peace I give you. I do not give to you as the world gives. Do not let your hearts be troubled and do not be afraid. So I guess they do care for us, Gil. Like the best lies, they are half-truths. There is a touch of real love in them. It is a distortion of true caring. It is populism, fear of man, and wanting to have a 'good name' in the community. It is politics, peer pressure, and the inability to see how subtle and intense the pressure actually is. It is taking the easy road... and why not?... everyone else is doing it! But the clock does run out, collectively and individually. Proverbs 18:24 A man of many companions may come to ruin, but there is a friend who sticks closer than a brother. I wish I could take the easy road. I hate to confess that I miss it. I still do, and then my conscience convicts me instantly. 'Oh wretched man that I am' etc... an alien in what was once 'my kingdom'. Why is my cross so heavy when all of its weight was borne for me? Obviously there is much I do not yet understand, or just simple faith that has yet to grow. Sorry for the nakedness, Gil; all of this you already know... Lock
Be nice, Herb. I once stood in his shoes. Though not complete, there is an answer to your question that is at least logically consistent. All Dov has to do is put materialism into correct perspective (not as the absolute, ahead of the philosophy that anchors it) to understand for himself. For the most part, he is making valid logical extensions. The imagination involved will be culled with time. My gut tells me he will not, however, be shamed into submission. We can't win playing the game so common amongst the ID haters. This person seems a layman in the fray. Your question is legitimate; I just thought it obviously rhetorical, so I sensed a 'touch' of cynicism. I believe idiots deserve some grace, because I most assuredly am one. Lock
Lock: "I had believed a lie for so long, as taught in basic high school science classes and the popular media." I was once in exactly your situation. I took it on unquestioning faith that what I was taught in school and told by academic intellectual types was true, and that there was no point in even considering challenges to Darwinian orthodoxy, because the only people who do so are mindless, uneducated, low-IQ religious fanatics. A friend, whom I respected because of his transparent wisdom and exemplary ethical lifestyle (despite the fact that he was a Christian and I thought belief in God was a destructive delusion), suggested that I read Michael Denton's Evolution: A Theory in Crisis. This suggestion came after a brief conversation in which I tried to convince him that, once upon a time, a self-replicating molecule came about, and then random changes and reproductive selection explained everything after that. He said, "I won't try to convince you, but I recommend that you read Denton's book." I read Denton's book, just as Michael Behe did. I slapped myself on the forehead and exclaimed: "Crap! I was conned!" Here's what I want to know: Why did so many people whom I respected, and who told me that they loved me, indoctrinate me with an obvious lie? GilDodgen
Hey Dov, not being flippant here, I really have to point this out. When you say there are two physics (astronomically or otherwise) and in the next or previous breath refer to the 'oneness' of the universe (fractally or otherwise), it sounds to me like you are reiterating a very old principle and trying to frame it with new language. Many out here understand the principle well (and respect its mystery too). You are acknowledging both the unity and diversity of reality as a whole. We just call it the trinity. It is not a new concept. In my estimation yours is a kind of trinitarian materialist point of view. Another piece of evidence for those of us who understand that materialism is simply another religion. Even your post (in its entirety) has a very abstract religious flavor to it along the lines of, 'I am the alpha and the omega. The beginning and the end.' You are on to more than you may realize, and I understand very well. The only difference I would encourage you to study is in the language. With one language, reality and its self-originating qualities are expounded upon as a living being. Therefore life and being originate in Him. He is not simply the origin of life and truth, but, as Jesus said, "I am the way, the truth, and the life". In your language, reality is not personal in the sense of being, but just is. The way you express it contains a presupposition that reality is not a 'who' but a 'what'. Interestingly, it takes a 'who' (in this case you) to say it. Many others have preceded you in what I consider to be this error. I have observed for some time (and I think this is right) that science has demanded that the language be framed in the kind of presupposition that you yourself are using. I wonder if you are conscious of it? The philosophy in question (materialism) works well for science when understanding cause and effect relations in many areas. But to use it absolutely and apply it to ultimate reality in terms of origins is not science at all, but strictly philosophy, or metaphysics if you prefer. Your comments are not really observations at all. They are speculations. They are possibilities that have the quality of a declaration - the kind of declaration that logically can only truly be made [with conviction] by a man who thinks himself God. That being said, your comments show not only tremendous intelligence, but a simultaneous lack of perception regarding the necessity of the simple framework in which the mind must, at once, be inarguably anchored if those ideas are to be meaningfully and consistently stated. Congratulations on discovering the trinity. Now stand up and declare yourself to be the universe (and its reality and origin) incarnate like Jesus did, and you'll have some real attention. Since I can never stop writing, I might as well add that at some point materialism will reach that point. A man (or more likely a plurality of 'mankind') will declare himself to be God, not just implicitly as is so common now, but outright and boldly. Then... many on the sidelines will have to make a choice. Is Jesus God, or is this 'new man' who unknowingly says the same thing as Jesus? Or... maybe it's just coincidence? ;) Lock
Dov Henis,
Genes are THE Earth’s organisms and ALL other organisms are their temporary take-offs. *** The early genes came into being by solar energy and lived a very long period solely on solar energy. Metabolic energy, the indirect exploitation of solar energy, evolved at a much later phase in the evolution of Earth’s biosphere.
This is a very interesting thesis. Are you saying that genes originally existed on their own, and not as a part of some organism's DNA? How is this possible?? herb
Lock, I know exactly how you feel. We've all been lied to, and now they are threatened by it. Perfect. Upright BiPed
The Dark Matter sends me messages every night . . . 90DegreeAngel
On The Origin Of Origins

Dark Matter-Energy And "Higgs"? Energy-Mass Superposition
The Fractal Oneness Of The Universe
All Earth Life Creates and Maintains Genes

A. On Energy, Mass, Gravity, Galaxies Clusters AND Life, A Commonsensible Recapitulation
http://www.the-scientist.com/community/posts/list/184.page#2125

The universe is the archetype of quantum within classical physics, which is the fractal oneness of the universe. Astronomically there are two physics: a classical physics behaviour of and between galactic clusters, and a quantum physics behaviour WITHIN the galactic clusters. The onset of the big bang's inflation, the cataclysmic resolution of the Original Superposition, started gravity, with formation - BY DISPERSION - of galactic clusters that behave as classical Newtonian bodies and continuously reconvert their original pre-inflation masses back to energy, thus fueling the galactic clusters' expansion, and with endless quantum-within-classical intertwined evolutions WITHIN the clusters in an attempt to delay or resist this reconversion.

B. Updated Life's Manifest May 2009
http://www.physforum.com/index.php?showtopic=14988&st=480&#entry412704
http://www.the-scientist.com/community/posts/list/140/122.page#2321

All Earth life creates and maintains genes. Genes, genomes, cellular organisms: all create and maintain genes. For Nature, Earth's biosphere is one of the many ways of temporarily constraining an amount of ENERGY within a galaxy within a galactic cluster, thus avoiding, as long as possible, spending this particularly constrained amount as part of the fuel that maintains the clusters' expansion. Genes are THE Earth's organisms and ALL other organisms are their temporary take-offs. For Nature genes are genes are genes. None are more or less important than the others. Genes and their take-offs, all Earth organisms, are temporary energy packages, and the more of them there are, the more enhanced is the biosphere: Earth's life, Earth's temporary storage of constrained energy. This is the origin, the archetype, of selected modes of survival. The early genes came into being by solar energy and lived a very long period solely on solar energy. Metabolic energy, the indirect exploitation of solar energy, evolved at a much later phase in the evolution of Earth's biosphere. However, essentially it is indeed so: all Earth life, all organisms, create and maintain the genes.

Dov Henis (Comments from 22nd century) Dov Henis
So many of you have worked tirelessly in the face of unjustified persecution. The resistance is way beyond legitimate debate. We laymen and women (who apparently believe more in ID each day because of your work) thank you. Thank you, thank you, thank you, and God bless you... I remember hearing Paul Nelson interviewed by Hank Hanegraaff regarding the Unlocking the Mystery of Life DVD. I was stunned by what I was hearing. It was very easy to follow. I remember thinking, 'why was none of this perspective on the Discovery Channel and National Geographic'??? I ordered it. I was already sitting down, tired from cheering and jumping up and down during the video; but, when I got to the question and answer segment and heard Dr. Meyer explain his illustration about the lack of difference in mass between an empty CD and one containing information, I was hooked. That is a powerful illustration. At that point, I stood back up with hands on my head. Like a modern day 'doubting Thomas', my emotions had me proverbially on my knees declaring, 'My Lord and my God'! I had believed a lie for so long, as taught in basic high school science classes and the popular media. Dean Kenyon's comparison between DNA's bits per cubic millimeter and our microchips was also very illustrative. The whole thing was and is spot on. I asked my dentist once about the whole problem of DNA evolution, since DNA is needed before evolution can occur. And if you all don't mind me saying so myself, I did a masterful job of framing and asking the question. My dentist understood the matter perfectly and the look on his face was priceless. He gets right to business now and has not really spoken to me since. If only the establishment had such wisdom. Keep your wits, and let their persecution make them ashamed to attack you. You folks are reaching many. Lock
I find this very interesting. As it turns out, there is indeed an “immaterial vital force” that is unique in living systems, and found nowhere else in chemistry. It’s called information. Chemistry is the medium; information is the message. An excellent insight! tribune7
I find this very interesting. As it turns out, there is indeed an "immaterial vital force" that is unique in living systems, and found nowhere else in chemistry. It's called information. Chemistry is the medium; information is the message. In the beginning, the physical universe was created in a flash of light at a certain instant of time, or so says a certain ancient author. This was ridiculed as preposterous by the scientific consensus until the microwave background radiation signature was discovered. The flash of light was ultra-high-energy gamma radiation, which has been stretched into microwaves over a period of 14 billion years. So, as it turns out, observations about such things as vital forces being unique and essential in living systems, and the universe being created in a flash of light, should not be discarded out of hand. A little bit of knowledge can be a dangerous thing, if unjustified extrapolations are made from it. GilDodgen
