Uncommon Descent | Serving The Intelligent Design Community

An Eye Into The Materialist Assault On Life’s Origins


Synopsis Of The Second Chapter Of Signature In The Cell by Stephen Meyer

ISBN: 9780061894206; ISBN10: 0061894206; HarperOne

When the 19th century chemist Friedrich Wöhler synthesized urea in the lab using simple chemistry, he set in motion the ball that would ultimately knock down the then-pervasive ‘vitalistic’ view of biology.  Life’s chemistry, rather than being bound by immaterial ‘vital forces’, could indeed be made artificially.  While Charles Darwin offered little insight on how life originated, several key scientists would later seize on Wöhler’s ‘Eureka’-style discovery through public proclamations of their own ‘origin of life’ theories.  The ensuing materialist view was espoused by the likes of Ernst Haeckel and Rudolf Virchow, who built their own theoretical suppositions on Wöhler’s triumph.  Meyer sums up the logic of the day:

“If organic matter could be formed in the laboratory by combining two inorganic chemical compounds then perhaps organic matter could have formed the same way in nature in the distant past” (p.40)

Darwin’s theory generated the much-needed fodder to ‘extend’ evolution backward to the origin of life.  It was believed that “chemicals could ‘morph’ into cells, just as one species could ‘morph’ into another” (p.43).  Appealing to the apparent simplicity of the cell, late 19th century biologists assured the scientific establishment that they had a firm grasp of the ‘facts’: cells were, in their eyes, nothing more than balls of protoplasmic soup.  Haeckel and the British scientist Thomas Huxley were the ones who set the protoplasmic theory in full swing.  While the details expounded by each man differed somewhat, the underlying tone was the same: the essence of life was simple and thereby easily attainable through a basic set of chemical reactions.

Things changed in the 1890s.  With the discovery of cellular enzymes, the complexity of the cell’s inner workings became all too apparent, and a new theory had to be devised, one that no longer relied on an overly simplistic protoplasm-style foundation, albeit one still bounded by materialism.  Several decades later, finding himself in the throes of a Marxist socio-political upheaval within his own country, Russian biologist Aleksandr Oparin became the man for the task.

Oparin developed a neat scheme of inter-related processes involving the extrusion of heavy metals from the earth’s core and the accumulation of reactive atmospheric gases, all of which, he claimed, could eventually lead to the making of life’s building blocks: the amino acids.  He extended his scenario further, appealing to Darwinian natural selection as a way through which functional proteins could progressively come into existence.  But the ‘tour de force’ in Oparin’s outline came in the shape of coacervates: small, fat-containing spheroids which, Oparin proposed, might model the formation of the first ‘protocell’.

Oparin’s neat scheme would in the 1940s and 1950s provide the impetus for a host of prebiotic synthesis experiments, the most famous of which was that of Harold Urey and Stanley Miller, who used a spark-discharge apparatus to make three amino acids: glycine, alpha-alanine and beta-alanine.  With little more than a few gases (ammonia, methane and hydrogen), water, a closed container and an electrical spark, Urey and Miller had seemingly provided the missing link for an evolutionary chain of events that now extended as far back as the dawn of life.  And yet, as Meyer concludes, the information revolution that followed the elucidation of the structure of DNA would eventually shake the underlying materialistic bedrock.

Meyer’s historical overview of the key events that shaped origin-of-life biology is extremely readable and well illustrated.  Both the style and the content of his discourse keep the reader focused on the ID thread of reasoning that he gradually develops throughout his book.

Comments
Nakashima-San: You already have adequate examples on FSCI and related concepts, and on the quantifications thereof. GEM of TKI
kairosfocus
July 23, 2009 at 02:29 AM PDT
Rob: When you use RA or Zener noise or sky noise etc. to make codes, the algorithm is where the coding comes from. The random element is a controlled input that gives an assignment for, say, a one-time message pad. The randomness has no meaning or function in itself. And of course the algorithm and its instantiation are where the intelligent inputs come in. In short, the "example" is a strawman. GEM of TKI
kairosfocus
July 23, 2009 at 02:28 AM PDT
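To make the point concrete, here is a minimal one-time pad sketch in Python (message and sizes are illustrative): the pad bytes are a controlled random input with no meaning of their own, and the designed XOR algorithm is what assigns them a cryptographic role.

import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    # The pad supplies raw, meaningless randomness; the designed algorithm
    # (XOR plus one-time pad management) supplies the coding function.
    assert len(pad) >= len(data)
    return bytes(d ^ k for d, k in zip(data, pad))

pad = os.urandom(32)            # random element: a controlled input
msg = b"MEET AT DAWN"
ct = otp_xor(msg, pad)          # encrypt
assert otp_xor(ct, pad) == msg  # XOR with the same pad decrypts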
kairosfocus:
YOU ARE HEREBY CHALLENGED TO PROVIDE A CASE IN POINT
Okay, I'll take the bait. I'm a sucker for all-caps.
1) The behavior of random processes in nature, such as radioactive decay, can be encoded as bit sequences of arbitrary length, so exceeding the 1000-bit threshold is no problem. Any given random sequence is useful for a variety of purposes, including testing and encryption, so it's functional, and therefore FSCI.
2) Ocean tides serve important functions, including nutrient mixing and the creation of intertidal ecologies. How many bits does it take to store tidal behavior? Even if we discretize time and depth very coarsely, we can easily exceed the 1000-bit threshold if we record the information long enough, so we have FSCI.
3) Or we could note that the principal cause of tides is the moon, which remains in close proximity to the earth as the earth rotates and moves through space. How many bits does it take to store the trajectory of the moon? Again, even discretizing time and location very coarsely, we can easily exceed the 1000-bit limit. The same goes for the earth's trajectory, which has the function of producing seasons.
It's not hard to see what objections can be raised to the above cases. The question is whether you'll notice that the problems stem from your definitions, and that the objections also apply to your own examples.
R0b
July 22, 2009 at 09:10 PM PDT
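Point 1 can be sketched in Python, with the OS entropy pool standing in (an assumption) for a physical random source such as decay counts:

import os

# 256 random bytes = 2048 bits, comfortably past the 1000-bit threshold.
bits = ''.join(f'{b:08b}' for b in os.urandom(256))
print(len(bits), bits[:64])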
Mr Kairosfocus,
No-one, to my view, is talking about AI. This is rhetorical misdirection on your part. How do you calculate FSCI? How do you prove its source? Those are the issues under discussion. You have made claims that you are being asked to support. Having three metrics is almost worse than having none. Which of the three is correct?
Nakashima
July 22, 2009 at 05:01 AM PDT
Mr kairosfocus,
Nakashima-san: dismissal does not undercut the material force of the point. Namely, unconstrained random changes of significant size imposed on functioning systems are more likely to perturb them away from functionality than to improve such functionality. (This is one of the reasons why we see a topology of islands of function in a sea of non-function.)
In this case the point had no material force. No scientist, working in any field, has the lack of separation of simulation, model, and experiment built into their methodology. If there is an earthquake in SF (God forbid), will my Second Life avatar feel anything shaking? Perturbation moves you away from functionality? Now you are making the assumption that you have accused others of making: that the population members are close to high-functioning areas already. Here's a question: if I take a step in a random direction, have I moved closer to the top of Mt Fuji or away from it? This is a reason our intuition about function and non-function can deceive us. In abiogenesis, there is no fitness function; there are only reaction rates and products. The 'fitness' of a molecule depends entirely on its environment, which is why a GA like the IPD scenario I proposed is closer to that reality than silly Weasel-style functions.
Nakashima
July 22, 2009 at 04:48 AM PDT
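The Mt Fuji question can be put to a toy numerical test in Python (a radially symmetric peak at the origin and a fixed step length; all parameters are illustrative):

import math
import random

def fraction_uphill(dim, dist=1.0, step=0.1, trials=20000):
    # Fraction of random fixed-length steps that move a point closer
    # to the peak of a radially symmetric fitness function.
    start = [dist] + [0.0] * (dim - 1)
    wins = 0
    for _ in range(trials):
        d = [random.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(x * x for x in d))
        new = [s + step * x / norm for s, x in zip(start, d)]
        if math.sqrt(sum(x * x for x in new)) < dist:
            wins += 1
    return wins / trials

for dim in (1, 2, 10, 100):
    print(dim, fraction_uphill(dim))

In one dimension roughly half of all random steps go uphill; as the dimensionality grows, the fraction falls, which is the intuition the two sides are arguing over.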
how much FCSI does a GA contain?
I take it that you are declining to answer this?
using block caps to draw attention to what is there as opposed to what you expect to see is NOT “shouting.”
Perhaps not, but using block-caps to issue challenges on discussion forums was something I understood to be analogous to shouting:
YOU ARE HEREBY CHALLENGED TO PROVIDE A CASE IN POINT
I have consulted Derek Smith’s model. There is no mention of genetic algorithms but he does present a model system for creating intelligent robotics that involves adding higher level planning and prediction. This is what he says:
a higher order controller (far left) replaces the external manual source of command information. This means that there is no longer any high-side system boundary, making the new layout self-controlling. That is to say, it is now capable of willed behaviour, or "praxis"
No mention of this 'will' coming from an immaterial source; it is all a product of the mechanisms that make up the machine. Earlier you said this:
You will see that the upper level controller is imaginative, creative and volitional, rather than merely mechanical — acting by step by step instructions and procedures triggered by necessity and/or blind chance.
The upper level controller is part of the mechanism; it is mechanical in that sense and does not draw on immaterial things to operate. The detail of the mechanism isn't explained, but I would guess that a neural network might be a good candidate, and they work quite well with GA's.
BillB
July 22, 2009 at 02:12 AM PDT
Onlookers:
First, observe that to date, the objectors to the observation that FSCI is a reliable sign of intelligence are unable to provide a single clear empirical counter-example, in the face of literally millions of examples all over the Internet. That should tell us the real balance on the merits.
Second, had one or two of the objectors above troubled to consult the Derek Smith Model as already linked, they would have seen why GA's are not credible as artificial intelligences. Namely, the locus of the creative, volitional and imaginative supervision of the algorithmic loop is EXTERNAL to the system. (That is, GA's contain active information that is exogenous to the GA. A real AI will instead be credibly able to creatively project and decide its own path, then successfully implement it, fixing problems along the way.)
BOTTOMLINE: This thread has now plainly passed the point of diminishing returns and it is plain enough where the true balance on the merits lies. FSCI is a reliable sign of intelligence, and we have excellent reason to conclude that this holds in the context of the information systems in life, from first life on.
G'day
GEM of TKI
PS: FYI BB, using block caps to draw attention to what is there as opposed to what you expect to see is NOT "shouting."
PPS: Nakashima-san: dismissal does not undercut the material force of the point. Namely, unconstrained random changes of significant size imposed on functioning systems are more likely to perturb them away from functionality than to improve such functionality. (This is one of the reasons why we see a topology of islands of function in a sea of non-function.)
PPPS: Sparc, you are simply whistling in the dark as you walk by the graveyard. You have seen at least three metrics relevant to FSCI [at least one of which has published a table of 35 values in the peer-reviewed literature, apart from "good enough to make the relevant point" examples above], and you have seen a context that shows why the term is useful and holds warrant dating back to Orgel in 1973. Again, FSCI is that subset of specified, complex information that has the specification through observed function. As a start, any string of contextually responsive ASCII text in English of at least 143 characters would exhaust the probabilistic resources of the observed cosmos to try to explain it on undirected chance + necessity. But, intelligence routinely produces such.
kairosfocus
July 22, 2009 at 01:29 AM PDT
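The arithmetic behind the 143-character figure, checked in Python (assuming 7-bit ASCII):

import math

capacity = 143 * 7                  # 1001 bits of storage capacity
print(capacity)                     # 1001: just past the 1000-bit threshold
print(capacity * math.log10(2))     # ~301.3: ~10^301 possible strings
print(500 * math.log10(2))          # ~150.5: the ~10^150 resource bound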
A more philosophical question: Do lengthy comments that are completely ignored outside of the small UD cosmos contain any FCSI? A quick Google search proves that FCSI is not effective as an argument.
PS. Therefore I introduce EFCSI (effective functional complex specific information).
PPS. Although it seems impossible to calculate FCSI (to my best knowledge there's no positive example of FCSI), it is well possible to make a rough estimate of the EFCSI of KF's comments.
PPPS: Since only Jerry and a few UD commenters I don't remember adopted FCSI, the EFCSI content of KF's comments is close to zero.
PPPPS. It may increase if FCSI were supported by Dr. Dembski.
sparc
July 21, 2009 at 01:06 PM PDT
KF-san,
It would be easy enough to spew random bits across the PC, and see what happens. Given the well-known vulnerabilities provided by malware, the point I made stands.
Mr Dodgen knew better than to hold to this position after its absurdity was pointed out; why do you persist? And spewing bits achieves what, exactly? If I ran a GA on such a bit-spewing PC, would you accept that it generated FSCI not sourced in the programmer?
PS: I think you mean exploited, not provided; minor point.
Nakashima
July 21, 2009 at 10:37 AM PDT
Correcting a strawman: GA's exhibit FSCI, and are known to be designed by intelligent agents.
Please do me the courtesy of actually reading my comments. GA's are designed, that is what I said. The strawman was your attempt to confuse a model with the system used to implement the model. I'll repeat myself just for clarity:
The GA does not arise by chance variation of random bit strings, and the operating system on which it sits, likewise. Nor do we permit random variation of the whole program and the operating system when we use a GA.
Biologists do not claim that DNA copying errors change the laws of physics. This is the same error that Gill keeps making about simulations.
As the onlookers will note, I was addressing this confusion over how computers are used as tools for modelling, not the issue over who designs GA's. Your obfuscatory attempt to distract has failed. It seems from your point 3 that you are claiming that if ANY attempt to model natural processes generates FCSI then the designer of the model must be inputting active information, and that this therefore invalidates anything the model produces. I notice you have avoided answering my question about whether a deity included the active information required for our evolution with the creation of the universe. Also this: ...how much FCSI does a GA contain? Can you supply some numbers please, and then we can start to answer this: ...can it generate more FCSI than it contains?
We have no empirically warranted grounds for inferring that the FSCI in GA’s or other cases may arise without active information coming from intelligent agents. YOU ARE HEREBY CHALLENGED TO PROVIDE A CASE IN POINT
No need to shout. You are persisting with strawmen. As I have already repeated, I have never claimed that genetic algorithms are not the product of human design. You challenged me to prove that they do not rely on humans to design them. Why should I? It's not something I claim! I see nothing more self-referential or incoherent about the idea of a naturally occurring intelligence than in the idea of an un-caused cause such as a deity.
BillB
July 21, 2009 at 06:53 AM PDT
Also: Constrained randomness is a known feature of designed, complex systems. (Cf how dice are often tossed to play a board game, e.g. Monopoly; see the sketch after this comment.) Such systems however neither originate in randomness and blind mechanical forces, nor do they allow unconstrained randomness -- such randomness would soon cumulate to the point where the functionality would be compromised. Indeed, the living cell has subsystems that maintain the integrity of DNA information. And that word integrity tells us what easily enough happens when randomness gets out of hand -- as does the danger of radiation damage. And, that is very well known. GEM of TKI
kairosfocus
July 21, 2009 at 06:22 AM PDT
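A toy sketch of the dice example in Python: the rolls supply constrained randomness, while the board size and the move rule stay fixed by design.

import random

position = 0
for turn in range(5):
    roll = random.randint(1, 6) + random.randint(1, 6)  # constrained randomness
    position = (position + roll) % 40                   # designed, fixed rule
    print(f"turn {turn + 1}: rolled {roll}, now on square {position}")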
Nakashima-San: It would be easy enough to spew random bits across the PC, and see what happens. Given the well-known vulnerabilities provided by malware, the point I made stands. GEM of TKI
kairosfocus
July 21, 2009 at 06:18 AM PDT
PS: BB, again, GA's exhibit FSCI and are KNOWN (per direct observation) to be designed; aptly illustrating one example of the empirical base for empirically anchored inductive inference to best explanation from observed FSCI to intelligent design. We have no empirically warranted grounds for inferring that the FSCI in GA's or other cases may arise without active information coming from intelligent agents. YOU ARE HEREBY CHALLENGED TO PROVIDE A CASE IN POINT, i.e. here is a point of so far successfully met empirical test -- this is not at all a strawman. (NB: I have found across years, from experience, that I must often add emphases because it seems that otherwise the key words will be overlooked by objectors to ID. Cf above on the statement by Orgel 1973.)
kairosfocus
July 21, 2009 at 06:14 AM PDT
Mr BillB,
We should be careful to distinguish intentional and unintentional sources of variation. Why was the neutrino experiment that Denyse visited buried in a mine? To isolate it from unwanted sources of radiation. In the same way, I would want to run a simulation on a machine that is as bug-free as possible. Scientists did not prize the Pentium floating-point error when it was discovered. But what if we did 'permit' random variation in the OS, even in the hardware? Let's say I run a GA on a machine made out of Jello, sitting next to Chernobyl. Would Mr KF now agree that it produced FCSI whose source was not the programmer? The whole 'not random enough' objection doesn't lead anywhere fruitful, but it does expose a woeful understanding of what experiment, model building and simulation are. I will simply note that the EIL team has never supported these claims. There is nothing in the MESA users guide about running it on Windows ME, a flawed Pentium chip, and a lump of uranium, during a tornado, an earthquake, and a stock market crash, to obtain the correct results.
Nakashima
July 21, 2009 at 06:08 AM PDT
BB:
1 --> Correcting a strawman: GA's exhibit FSCI, and are known to be designed by intelligent agents.
2 --> Where a REAL human-created artificially intelligent entity might lie: Cf Eng Derek Smith's two-tier control MIMO cybernetic model here. (You will see that the upper level controller is imaginative, creative and volitional, rather than merely mechanical -- acting by step by step instructions and procedures triggered by necessity and/or blind chance. Figure out how to do that, build a demo model, and then let's go build R. Daneel Olivaw!)
3 --> OOL and investigator interference: Ever since Thaxton et al laid out a ranked scale of investigator interference in TMLO [have you read this?], we have had objective criteria for identifying what is a legitimate and what is an illegitimate degree of investigator interference. Where the OOL situation as modelled owes its performance to injected active information, it is an invalid model of the proposed chance + necessity only pre-life world. Shapiro and Orgel's recent exchange [discussed at the end of Section A of my always linked] points out just how both sides of the genes-first and metabolism-first approaches fail at this bar.
4 --> Intelligence vs mechanism and chance: the point is, BB, that we see that chance, mechanical necessity tracing to initial conditions and acting forces, and intelligence are three distinct OBSERVED causal factors in the world in which we live. When we can wholly explain a phenomenon without residue as the product of chance + necessity, we do not need to invoke intelligence to explain it. That's not a matter of your subjective opinion; that is a matter of objective, massively evident fact.
5 --> Nor can you successfully reduce intelligence to chance + necessity, on pain of self-referential absurdity a la Crick's neurological reductionism or the like. You may choose to be absurd, but we then have a right to infer from your absurdity to the falsehood of your position.
6 --> And, when a position entails self-referential incoherence, it is a generally accepted conclusion that it must be false. That just happens to be the fate of evolutionary materialism and its cognates of reductionistic attempted explanation of conscious, reasoning, choosing, morally bound mind. (As the just linked demonstrates in summary.)
_____________
G'day, GEM of TKI
kairosfocus
July 21, 2009 at 06:03 AM PDT
Mr Jerry,
Or do you dispute that nature cannot produce the Works of Shakespeare?
Well, it has done so once already; I suppose that constitutes a sort of existence proof! ;) Yes, the monkeys would take a long time to bring forth Shakespeare again. But they would take an equal amount of time to produce any text of the same length. There is nothing in the glorious poetry of Shakespeare that makes him hard to reproduce, only the length of the text. Anthony Trollope would be even harder to recreate. What about the collected speeches of Leonid Brezhnev? What does that prove? We know that random walks over a large space take a long time to cross any small target. But that is not how GAs work.
Nakashima
July 21, 2009 at 05:42 AM PDT
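The point can be checked on the back of an envelope in Python: for uniform random typing over a 27-letter alphabet, the expected number of draws needed to hit one specific N-character text depends only on N, not on its author.

import math

def log10_expected_draws(n_chars, alphabet_size=27):
    # P(one specific string) = alphabet_size ** -n_chars, so the expected
    # number of independent draws is alphabet_size ** n_chars.
    return n_chars * math.log10(alphabet_size)

for n in (10, 100, 1000):
    print(n, round(log10_expected_draws(n)))  # answer in powers of ten

Shakespeare, Trollope and Brezhnev of equal length all score the same.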
The GA does not arise by chance variation of random bit strings, and the operating system on which it sits, likewise. Nor do we permit random variation of the whole program and the operating system when we use a GA.
Strawman. Biologists do not claim that DNA copying errors change the laws of physics. This is the same error that Gill keeps making about simulations. Simply putting DESIGN in capitals doesn't add anything to the argument. No one is claiming that GA's are not designed or DESIGNED. The question, as far as mutation in nature goes, is whether it occurs and to what degree. This is something we can empirically measure and then, if we are using the GA to model biology, we can apply this rate (see the sketch after this comment). If that model includes 'tightly controlled' mutation within the limits seen in nature then all we are doing is producing an accurate model of reality. This whole notion that mutation in a simulation is somehow invalid if it is not applied beyond the simulation is just plain bizarre. If you ditched the computer and just did the math by hand, would you argue that, for it to be accurate, 2+2 should not always equal 4?
BillB
July 21, 2009 at 05:34 AM PDT
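A minimal GA sketch in Python makes the shared ground concrete: the mutation rate (which could be set from an empirically measured value) applies by design to the genome string alone, never to the program or the machine running it. The fitness function and all parameters here are toys.

import random

MUTATION_RATE = 0.01   # a rate measured in nature would be plugged in here

def mutate(genome):
    # Random variation confined, by design, to the genome representation.
    return ''.join('10'[bit == '1'] if random.random() < MUTATION_RATE else bit
                   for bit in genome)

def fitness(genome):
    return genome.count('1')   # toy objective: number of 1-bits

random.seed(1)
pop = [''.join(random.choice('01') for _ in range(64)) for _ in range(50)]
for _ in range(100):
    parents = sorted(pop, key=fitness, reverse=True)[:10]
    pop = [mutate(random.choice(parents)) for _ in range(50)]
print(max(fitness(g) for g in pop))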
Nakashima, Can you please help me out? I was looking for naturally-occurring complex algorithms that include phenomena analogous to a stop codon in DNA. Do you know of any that I can study?
Upright BiPed
July 21, 2009 at 05:34 AM PDT
Mr Jerry,
There has never been any known FCSI produced by nature including life. The origin of life is under debate but by all current understanding FSCI is beyond the power of nature to produce. The only logical conclusion then is to conclude that life was probably not produced by nature because nature most likely cannot produce FCSI.
But now you write:
If you read my paragraph closely, you will see no absolutes but probabilistic or likely statements.
Is the word 'never' absolute or probabilistic? The circularity arises from using that idea as an assumption, and then restating it as your conclusion only two sentences later.
Nakashima
July 21, 2009 at 05:22 AM PDT
Further note (as I forgot to mention earlier): ridicule -- the fallacy of "truth lost in the laugh" -- notwithstanding, the random variation used with GA's is tightly controlled, and that by DESIGN. The GA does not arise by chance variation of random bit strings, and likewise for the operating system on which it sits. Nor do we permit random variation of the whole program and the operating system when we use a GA. (The likelihood of crashing the system would then overwhelm whatever hoped-for at-random improvements one fishes for by throwing out a ring of variations on an already basically functional configuration and doing a competitive test on some metric of function or other.)
kairosfocus
July 21, 2009 at 05:16 AM PDT
PS: It is worth noting that -- as is linked from Weak Argument Corrective 27 -- Abel et al and latterly Durston et al have, in the peer-reviewed literature, found a term very closely related to FSCI to be useful in their work in biophysics. Indeed Durston et al have produced a metric and published thirty-five values of FUNCTIONAL SEQUENCE COMPLEXITY in FITS, functional bits. If Sparc et al are unaware of that, it is by failing to access and squarely address easily available and repeatedly pointed out information.
kairosfocus
July 21, 2009 at 04:52 AM PDT
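For the curious, here is a rough Python sketch of the kind of per-site calculation behind a fits-style metric: the drop from maximal amino-acid entropy (log2 20) to the entropy observed at each aligned site, summed over sites. This is an illustrative simplification, not the published Durston et al procedure, and the three short sequences are invented.

import math
from collections import Counter

def fits_estimate(alignment):
    total = 0.0
    for site in zip(*alignment):
        counts = Counter(site)
        n = len(site)
        h_site = -sum((c / n) * math.log2(c / n) for c in counts.values())
        total += math.log2(20) - h_site   # functional bits at this site
    return total

print(round(fits_estimate(["ACDE", "ACDF", "ACDE"]), 2))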
"You may be right about material intelligence lacking an immovable reference for ‘truth’ but that does not constitute proof that material things cannot produce behaviour that we would regard as intelligent, or that material processes cannot produce these entities." I am not sure I understand what you are saying but you seem to not understand the gist of the argument. There are no absolutes on the ID side. It is all probabilistic. So nature could produce FSCI but there are two things to consider. First, there is no evidence that it ever did. And second, there seem to be physical impediments for it to do so in terms of probabilistic resources for combinations of basic elements. Now none of this means that it will not be shown in the future by some unknown process that it is feasible. But until that time the statement that is is highly unlikely is a reasonable statement. On the other side of the argument, there is the absolute pronouncement that intelligence may not be allowed in any scientific consideration. Or that the likelihood that an intelligent cause is greater than zero is forbidden. ID gets accused of being absolutist when in fact it is the absolutist who are making this absolutely false statement about the people who are being reasonable in this debate.jerry
July 21, 2009 at 04:47 AM PDT
I have a question. If someone uses a term to describe a phenomenon that is commonly recognized by the scientific community, but the term itself is not used by the scientific community, is that term not valid? One can claim that most are not using the term, but does that make the term inappropriate? Especially when that term is an attempt to bridge the terminology used in another discipline with a process used in the scientific community in areas of mutual interest. And furthermore, this terminology is currently being used in similar form in some related areas of science.
jerry
July 21, 2009 at 04:35 AM PDT
So, GA’s exemplify the known source of FSCI: intelligence.
Yes, GA's are algorithms designed by people. Would you regard any experiment to test an idea in Evolutionary theory or OOL as invalid because it is the product of intelligence? The pertinent question here is firstly, how much FCSI does a GA contain, and secondly can it generate more FCSI than it contains?
genetic algorithms are simply not credible candidates for such created intelligences.
What, even GA's created by God? Does this fact you claim mean that we have established something about the capabilities of the designer? I'm afraid I don't buy your 'intelligence cannot be the result of mechanism' argument. You haven't provided a reason why intelligence can't operate from a substrate that is based solely on repeatable and observable features of the universe; you just claim that if it is then nothing makes sense, so therefore it can't be. You may be right about material intelligence lacking an immovable reference for 'truth', but that does not constitute proof that material things cannot produce behaviour that we would regard as intelligent, or that material processes cannot produce these entities.
BillB
July 21, 2009 at 04:18 AM PDT
"That reasoning is perfectly circular." Absolutely not, because there were not absolutes. You do not seem to understand the difference between the "either" or "or" concept. Or the use of probabilistic statements. It is either natural or the product of intelligence. If you have a third option, let us know what it is. If you read my paragraph closely, you will see no absolutes but probabilistic or likely statements. If nature cannot produce something, for example the Works of Shakespeare, then the likely answer is that it was produced by an intelligence. Capice? Or do you dispute that nature cannot produce the Works of Shakespeare? I will even give you all the monkeys you desire which is cheating on the nature can do it proposition. If you do dispute it then we can add to our list of how to characterize you.jerry
July 21, 2009 at 04:13 AM PDT
Further footnote: Re Sparc in the Pinker thread, to Wm A D:
Kairosfocus introduced the term FCSI (aka FSCI) on this forum. I may have missed it, but do you or the other EIL members use this term? If so, could you please share your thoughts on it?
Of course this is an inverted appeal to authority. As Sparc et al have already repeatedly been informed (starting with Weak Argument Correctives 27, 28 and 29), the term functionally specific, complex information [FSCI] -- and other like descriptions -- is a DESCRIPTIVE reference to the phenomenon identified by Orgel in 1973, when he described how cell based life forms show specified complexity in a bio-functional context. So long as Orgel is right when he said the following, FSCI is a legitimate term:
In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.]
Let us zoom in just a bit:
living organisms
--> thus, the context is that of biological function in the context of the cell and its macromolecules
--> moreover, the underlying issue is to account for the origin of such bio-function based on informational macromolecules and associated physically instantiated algorithms
are distinguished by
--> that is, observationally differentiated from two other classes: in context, complexity and specification
their specified complexity. . . .
--> in a biofunctional, algorithmic, digital information context
--> we mark out living systems by their SPECIFIED COMPLEXITY
--> that specification is here a FUNCTIONAL one
The crystals fail to qualify as living because they lack complexity; the [random] mixtures of polymers fail to qualify because they lack specificity.
--> here we see the distinctions for living systems vs crystals and random mixtures of polymers
--> this also brings in the implication that living systems reflect a complex organisation of the component machinery that implements the activities of life
--> should the components be damaged, or disarranged sufficiently, functionality vanishes
--> and so also we see the issue that there is an irreducible complexity of organisation and mutual adjustment to operating points in such living cells
In short, the matter has long since been clear, from ORGEL, decades before Dembski's contribution of providing a mathematical framework for modelling and analysing what "specified complexity" means. And, as has again just been explained, it is perfectly capable of being quantified, using the metrics by Dembski, that by Abel and Durston et al, or even a simple heuristic on functionally specific bits. Whether or not the good folks at EIL find it useful for their purposes, FSCI is clearly a useful "glorified common sense" term for ours here at UD. So useful, in fact, that objectors to ID are desperate to dismiss or suppress it. GEM of TKI
kairosfocus
July 21, 2009 at 04:13 AM PDT
PS: bFast, a subtlety: In using 1,000 bits, the square of a binary form of the estimate of the number of states our observed cosmos could have across its thermodynamically plausible active life, I am not so much appealing to probability as to search-resource exhaustion: the whole universe we observe, acting as a search engine, cannot reasonably access as much as 1 in 10^150 of the config space specified by just 1,000 bits of info storage capacity. So, even very large and numerous islands of function in such a space will be well beyond the search capacity of our observed cosmos, acting in an undirected fashion. And, the relevant spaces start way beyond that: the simplest observed life uses up about 600 - 1,000 k bits for its DNA, having a config space of order 10^180,000+ cells at the lower end. The notion that some prebiotic soup out there could spontaneously create the relevant organisation and information to get to an island of initial function then becomes utterly absurd. And, that brings us right back to the core point of this thread: with DNA and its significance, and also the fine-tuning of the cosmos to facilitate such life, we are now dealing with not just matter and energy but INFORMATION as fundamental constituents of the observed universe. So, just as there were hot debates on matter, waves, particles and energy, we are seeing a hot debate on information in our day. When the dust settles in another couple of decades, it will then be "obvious" that information is a fundamental element of the universe, and that functionally specific complex information comes from mind. But before we get there, the committed materialists will be dragged along, kicking and screaming all the way, as their cherished worldview collapses in the face of overwhelming evidence. (That is just what happened to the Marxists across the 1980's; and BTW, there are still many -- e.g. a certain Mr Chavez, and maybe some disciples of Saul Alinsky [who was a committed Marxist, contrary to how Wiki glosses over his real views in both the bio and the review of Rules for Radicals . . . ] closer to the halls of influence and power in your homeland, too -- who plot Marxist revivals!)
kairosfocus
July 21, 2009 at 02:21 AM PDT
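The numbers in question can be checked in Python via base-10 logarithms:

import math

log10_2 = math.log10(2)
print(round(500 * log10_2, 1))      # ~150.5 -> the ~10^150 bound
print(round(1000 * log10_2, 1))     # ~301.0 -> its square: the 1000-bit threshold
print(round(600_000 * log10_2))     # ~180618 -> config space of ~600 k bits of DNA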
4] The real issue: Intelligent Design, as Dembski has described it, is "the science that studies [reliable] signs of intelligence." If there are reliable signs of intelligence, then from observed sign we may freely and on good warrant infer to the signified. FSCI is claimed to be one such sign, and on its strength we may then infer from the observed DNA etc. based information systems in cell-based life to the design of such life. But, in our day, a la Lewontin, Dawkins et al, there is often a strong institutional commitment to the proposition that such design is not to be considered, as it may lend support to theistic etc. worldviews, which are often viewed as marks of ignorance, stupidity, insanity or wickedness. So much so that institutions such as the US National Academy of Sciences have sought to redefine science -- in the teeth of its history and significant philosophical considerations and plain ordinary facts -- as in effect applied materialism. In short, worldview-level question-begging and associated censorship is at work on the evolutionary materialist side; as has repeatedly been documented and exemplified (sometimes to the point of absurdity) in this blog.
5] Bottomline: By sharpest contrast to that, FSCI is:
1 --> observable and sufficiently definable to be distinguished by operational criteria
2 --> quantifiable by various metrics, in the simplest case functionally specific bits
3 --> in every directly known case, traceable to intelligent action
4 --> since FSCI is associated with a topology of isolated islands of observable function in a large sea of non-functional configurations, we may distinguish functional and non-functional macrostates and [at least in principle or on a model basis] assign relative statistical weights
5 --> on so doing, we see that undirected chance + necessity on the gamut of our observable cosmos cannot credibly arrive at shores of function in the sea of possible but non-functional configurations (the statistical weight of the non-functional macrostate overwhelms the functional ones)
6 --> consequently, that possible hill-climbing mechanisms may exist that can help a population of replicating entities with low functionality hill-climb to peaks of locally maximal function becomes irrelevant (for, you have to first get to shores of function before you can climb the hill to optimality)
7 --> thus, there are both positive and negative reasons to infer from FSCI as a reliable sign of intelligence to the best explanation thereof: intelligence in action
______________
GEM of TKI
kairosfocus
July 21, 2009 at 02:00 AM PDT
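The simplest of those metrics, the functionally-specific-bits rule of thumb, reduces to a sketch like the following (a toy encoding of the stated criterion, not a published procedure):

def fsci_heuristic(observed_functional, storage_bits, threshold=1000):
    # Rule of thumb as described above: observed function plus storage
    # capacity past the threshold -> infer intelligence as best explanation.
    return observed_functional and storage_bits > threshold

print(fsci_heuristic(True, 143 * 7))   # True: 1001 bits of functional text
print(fsci_heuristic(True, 500))       # False: below the threshold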
Footnotes: I will note on several points that seem worth underscoring, mostly for the sake of onlookers:
1] Binary variables: It is a commonplace of statistics to have variables that identify observable or infer-able contingent circumstances and so take a binary or similar set of values. In this case, functionality is a macro-observable, and complexity beyond 500 - 1,000 bits of information storage capacity is calculable per relevant observables. Indeed, the simple illustrative instances of (a) an ASCII text string of 143+ characters in English that responds to context (and/or a similar string in a program), and (b) the PC screen that shows the windows in which such text strings reside, have been on the table from the outset. The resistance to such examples is aptly illustrative of the underlying lack of weight on the merits for the case put by objectors to the FSCI concept. And, by inversion, it shows us just how significant the fact of FSCI is.
2] Genetic algorithms: The first thing we need to know about such is that they are computer programs, invariably written by known intelligent agents and aptly exemplifying FSCI in their very lists of statements. So, GA's exemplify the known source of FSCI: intelligence. Moreover, we know that algorithms and programs, data structures etc. and the machines to implement them illustrate something very significant: the instructions and their structured, organised sequences are mechanically implemented, not based on common sense and active decision [and recall here that our active conscious creative rationality and decision-making ability is a first fact of our experience . . . the denial of which lands us in inescapable self-referential absurdities]. So, we know that the active information in the GA that gets it to move towards peaks of performance comes from the designer, not the machine or the code. In short, while created intelligences are a reasonable concept [we ourselves are a case in point, as the FSCI in us testifies to our origin in another intelligence], genetic algorithms are simply not credible candidates for such created intelligences.
3] Definitionitis: I have long since pointed out that EXPERIENCES AND CONCEPTS ARE PRIOR TO PRECISING DEFINITIONS, even in science. For instance, the experience, observation and concept "life" resists such stated definition to this day, but is a foundational scientific concept for a major discipline, biology. (Nor indeed can everything in a discipline be defined, on pain of circularity or infinite regress. Commonsense concepts -- aka primitives -- must ground any discipline, call them what you will.) What grounds our real-world work in science instead is that we may observe examples and abstract key concepts, which may be used in operational contexts to describe, explain, analyse, model, predict and perhaps influence. In that context, the concept of functionally specified, complex information [FSCI] -- a subset of complex, specified information grounded in OBSERVED function depending on complex, contingent (thus, information-bearing) organisation of elements -- has ever since Orgel's statement in 1973 been more than adequately grounded in experience and coherent conceptualisation, with no less than two major mathematical metric models, and a simple rule-of-thumb heuristic. The insistent, sometimes recycled [in the teeth of having already been adequately answered], objections and hair-splitting above are actually inadvertent testimony to the force and validity of this point. [ . . . ]
kairosfocus
July 21, 2009 at 01:59 AM PDT
ROb(186) "if the concept is to be employed in revolutionizing science, it needs to be rigorized. If the concept is to be accepted by science it must be devoid of unacceptable baggage. A definition is called for, a definition of the kind of information which is in DNA, and not found in nature outside of that unique realm of "life". Yet if the definition, no matter how "rigorous", is approximately: csi = created by intelligence, it will be rejected flat out. I propose a simple term and rigorous definition which matches DNA, computer software, and nothing found within nature other than within "life". FSCI - Function specifying complex information. The information must have complexity (probability less than 1 in 10^150 works for me), it must be information (as defined by Shannon, why not) and it must specify something that is functional, that does something. I would suggest that the computer code which implements a word processor is function specifying. I would suggest that DNA, which describes (or provides a significant portion of the description) of a functioning organism, qualify as FSCI. There, we have a rigorously defined term that should not be repulsive to the scientific community on its face.bFast
July 20, 2009 at 06:25 PM PDT
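bFast's three-part test can be sketched in Python under a uniform-symbol model (a simplifying assumption; real sequence statistics are not uniform):

import math

def meets_bfast_fsci(n_symbols, alphabet_size, specifies_function):
    shannon_bits = n_symbols * math.log2(alphabet_size)   # information
    log10_prob = -n_symbols * math.log10(alphabet_size)   # complexity
    return specifies_function and log10_prob < -150 and shannon_bits > 0

# e.g. a 200-residue protein sequence treated as function-specifying:
print(meets_bfast_fsci(200, 20, True))   # True: ~10^-260 < 10^-150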
