Uncommon Descent Serving The Intelligent Design Community

An Eye Into The Materialist Assault On Life’s Origins


Synopsis Of The Second Chapter Of Signature In The Cell by Stephen Meyer

ISBN: 9780061894206; ISBN10: 0061894206; HarperOne

When the 19th-century chemist Friedrich Wöhler synthesized urea in the lab using simple chemistry, he set in motion the ball that would ultimately knock down the then-pervasive ‘vitalistic’ view of biology.  Life’s chemistry, rather than being bound by immaterial ‘vital forces’, could indeed be artificially made.  While Charles Darwin offered little insight on how life originated, several key scientists would later jump on Wöhler’s ‘Eureka’-style discovery through public proclamations of their own origin-of-life theories.  The ensuing materialist view was espoused by the likes of Ernst Haeckel and Rudolf Virchow, who built their own theoretical suppositions on Wöhler’s triumph.  Meyer summed up the logic of the day:

“If organic matter could be formed in the laboratory by combining two inorganic chemical compounds then perhaps organic matter could have formed the same way in nature in the distant past” (p.40)

Darwin’s theory generated the much-needed fodder to ‘extend evolution backward’ to the origin of life.  It was believed that “chemicals could ‘morph’ into cells, just as one species could ‘morph’ into another” (p.43).  Appealing to the apparent simplicity of the cell, late 19th-century biologists assured the scientific establishment that they had a firm grasp of the ‘facts’: cells were, in their eyes, nothing more than balls of protoplasmic soup.  Haeckel and the British scientist Thomas Huxley were the ones who set the protoplasmic theory in full swing.  While the details expounded by each man differed somewhat, the underlying tone was the same: the essence of life was simple and thereby easily attainable through a basic set of chemical reactions.

Things changed in the 1890s.  With the discovery of cellular enzymes, the complexity of the cell’s inner workings became all too apparent, and a new theory had to be devised, one that no longer relied on an overly simplistic protoplasm-style foundation, albeit one still bounded by materialism.  Several decades later, finding himself in the throes of a Marxist socio-political upheaval in his own country, the Russian biologist Aleksandr Oparin became the man for the task.

Oparin developed a neat scheme of inter-related processes involving the extrusion of heavy metals from the earth’s core and the accumulation of reactive atmospheric gases, all of which, he claimed, could eventually lead to the making of life’s building blocks: the amino acids.  He extended his scenario further, appealing to Darwinian natural selection as a way through which functional proteins could progressively come into existence.  But the ‘tour de force’ of Oparin’s outline came in the shape of coacervates: small, fat-containing spheroids which, Oparin proposed, might model the formation of the first ‘protocell’.

Oparin’s neat scheme would, in the 1940s and 1950s, provide the impetus for a host of prebiotic-synthesis experiments, the most famous of which was that of Harold Urey and Stanley Miller, who used a spark-discharge apparatus to make three amino acids: glycine, alpha-alanine and beta-alanine.  With little more than a few gases (ammonia, methane and hydrogen), water, a closed container and an electrical spark, Urey and Miller had seemingly provided the missing link for an evolutionary chain of events that now extended as far back as the dawn of life.  And yet, as Meyer concludes, the information revolution that followed the elucidation of the structure of DNA would eventually shake the underlying materialistic bedrock.

Meyer’s historical overview of the key events that shaped origin-of-life biology is extremely readable and well illustrated.  Both the style and the content of his discourse keep the reader focused on the ID thread of reasoning that he gradually develops throughout his book.

Comments
Kf, Are proteins numbers? Have numbers been observed to exist outside of human culture? Can stochastic processes affect DNA replication in a way that can be approximated numerically with the aid of random number generators?BillB
July 19, 2009, 09:32 AM PDT
Joseph, Do you believe that randomness does not occur in nature save for the inventions of intelligent agents?BillB
July 19, 2009, 09:25 AM PDT
Nakashima-san, I used to work in the encryption industry. Our products used random number generators. The old stuff used a noisy diode. That noise was then input to a flip-flop or counter and the output was the random number generated from that noise. It took design engineers to create it.Joseph
July 19, 2009, 08:19 AM PDT
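
Joseph's noisy-diode description above follows a standard hardware-RNG pattern: sample an analog noise source, threshold the samples into raw bits, then whiten the biased stream. Below is a minimal sketch of that idea in Python, with a hypothetical, simulated read_noise_sample() standing in for the diode and ADC; the von Neumann debiasing step is one common whitening choice, not necessarily the one his products used.

```python
import random

def read_noise_sample():
    # Stand-in for an ADC reading of diode noise; here just simulated.
    return random.gauss(0.5, 0.2)

def raw_bits(n, threshold=0.5):
    # Threshold each noise sample into a (possibly biased) raw bit.
    return [1 if read_noise_sample() > threshold else 0 for _ in range(n)]

def von_neumann_debias(bits):
    # Classic whitening: take non-overlapping pairs, keep 01 -> 0 and 10 -> 1,
    # discard 00 and 11. Output is unbiased if the samples are independent.
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

if __name__ == "__main__":
    print(von_neumann_debias(raw_bits(1000))[:16])
```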
dbthomas, Just how is radioactive decay a random number generator?Joseph
July 19, 2009, 08:15 AM PDT
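
On the question just above (taken up further down the page): a decay source does not emit numbers directly; what is random is the timing between decay events, which is exponentially distributed. One common extraction scheme compares successive inter-arrival times to produce a bit. The sketch below uses simulated intervals rather than a real detector, and all names are illustrative only.

```python
import random

def decay_intervals(n, rate=1.0):
    # Simulated inter-arrival times between decay events; a real device would
    # timestamp pulses from a Geiger counter instead.
    return [random.expovariate(rate) for _ in range(n)]

def bits_from_intervals(intervals):
    # Compare non-overlapping pairs of intervals: first shorter -> 0,
    # first longer -> 1, exact ties (vanishingly rare) are discarded.
    out = []
    for t1, t2 in zip(intervals[0::2], intervals[1::2]):
        if t1 < t2:
            out.append(0)
        elif t1 > t2:
            out.append(1)
    return out

if __name__ == "__main__":
    print(bits_from_intervals(decay_intervals(2000))[:16])
```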
Or here: Bathybius. Huxley realized that he had been too eager and made a mistake. He published part of the letter in Nature and recanted his previous views. Later, during the 1879 meeting of the British Association for the Advancement of Science, he stated that he was ultimately responsible for spreading the theory and convincing others. Most biologists accepted this acknowledgement of error. I find it strange that some sites never credit Huxley for admitting and retracting his error. Perhaps it doesn't fit their dramatic preconceptions.Nakashima
July 19, 2009, 07:28 AM PDT
Any of you guys remember "Bathybius"? This was a most important materialist assault at the time. Because of Bathybius, we were all supposed to stop believing in God and so on. Evolutionists said it was a vast sheet of living proto-blob under the oceans, from which all life sprung. It was discovered by Huxley, but then the whole thing turned out to be a scam. Read about it here: BathybiusVladimir Krondan
July 19, 2009, 06:36 AM PDT
Sorry for the late return to the thread. dbthomas, in my question regarding the stop codon you redirected me back to your previous post at 68. My response at 73 was so brief because I literally had 3 minutes before my plane took off. I am now happy to return your post at 68 for a closer look. You say:
TAG doesn’t mean ’stop’ at all. We say ’stop codon’ because it describes its function. It simply doesn’t match any tRNAs, but does react with proteins called release factors, and so translation stops. The ribosome doesn’t need to ‘know’ its ‘meaning’.
This sells the process a little short don’t you think? Firstly, we have to look at the phenomena of “stop” in context, which you seem to have completely ignored. The missing context centers around a chain of nucleotides in DNA that symbolically represents the proteins and processes that are required for living tissue to successfully operate. A function must exist which brings about an orderly end to the process of protein synthesis when the process has completed the sequencing of amino acids in a protein. That function within the process is brought about by a chemical signal along the chain of nucleotides which has the specific intent to end the process. The key word here is process. No one is suggesting that a bucket of thymine, adenine, and guanine means “stop”. However, within the context of reality, it would be hard to argue that a stop codon is merely a human description, and not an actual signal within the process indicating that the end of the amino acid sequencing is complete (so “stop” the sequencing). You say the T-A-G triplet (once transcribed) “simply does not match any tRNAs” and then go on to say release factor proteins come into play. How fortuitous is it that those release factors (and the tRNAs themselves, etc) just happened to be synthesized and waiting inside the cell. Once again, you have discarded the context. This phenomenon is taking place within a cell (actually within a certain part of a cell). That cell has constituent parts which exist there for the specific and organized purpose of cellular function. The specialized release factor proteins are part of the system. They, nor any other constituent parts of the cell, would exist there at all if “stop” did not mean “stop”. In other words, they all required “stop” to mean “stop” so that they can be part of the process where “stop” means “stop”. To assert that the resulting mechanical effect of the stop codon is the cause of the stop codon is to say that the effect of the cause is the cause itself, and perhaps even the cause of the cause. In a system that is well known to be physico-dynamically inert (particularly in regards to the actual sequence of amino acids, such as T-A-G) that assertion has of certain ring of intent to it. I can suppose that if I asked why the coding of the 3 billion base pairs of the human genome exists in the order they do; you could simply answer “so that humans are made”. And if I argued that it could not rationally come to be organized by a mechanism that operates at maximum uncertainty (like chance) then you could simply posit the long period of time that Life has existed on Earth. You could then make a meaningless appeal to selection as the organizing force. Both of these explanations would, of course, ignore that the sequencing of DNA has no physical cause to exist at all, and that organized complex life began on Earth almost immediately after the planet cooled. Perhaps if my hastily posted question (as I was in the airport) could have been more specific, then perhaps your response would have been less trivial and more useful. ID proponents are looking to materialists to provide material explanations that are based on what is known about material causes (and to not contradict what is already known about material causes). Perhaps you could have given us an empirical example of other naturally occurring complex algorithms where such analogous phenomena as a “stop” codon exist. Do you have any such examples?Upright BiPed
July 19, 2009, 04:51 AM PDT
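
The mechanism being argued over in the comment above can be stated compactly in code: codons are read in triplets until UAA, UAG or UGA is reached, which matches no tRNA and is instead acted on by release factors, ending the chain. The sketch below uses a deliberately partial codon table and is offered only to make that control flow concrete; it takes no side on what the "stop" signal means.

```python
# Partial standard codon table (RNA codons); '*' marks the stop signals
# UAA, UAG, UGA, which are recognized by release factors rather than tRNAs.
CODON_TABLE = {
    "AUG": "M", "UUU": "F", "GGC": "G", "GCU": "A",
    "UAA": "*", "UAG": "*", "UGA": "*",
}

def translate(mrna):
    """Read codons from the first AUG until a stop codon (or end of message)."""
    start = mrna.find("AUG")
    if start == -1:
        return ""
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        aa = CODON_TABLE.get(codon, "X")   # 'X' = codon not in this partial table
        if aa == "*":                      # termination: no matching tRNA; release factors act
            break
        peptide.append(aa)
    return "".join(peptide)

print(translate("GGAUGUUUGGCGCUUAGUUU"))   # -> "MFGA"; translation stops at the UAG codon
```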
Onlookers: A few footnotes: It is fairly clear from the telling rhetorically strategic silence on the point above that advocates of abiogenesis and/or body plan level macro-evolution have no clear empirical evidence of the following originating by undirected chance and mechanical necessity tracing to blind natural forces:
1 --> Computer languages, codes, algorithms and organisation of data structures. 2 --> Functionally specific, complex information (and broader specified complexity) 3 --> Irreducible complexity (especially that based on finely-tuned mutual adjustments to meet at an operating point).
Each of these is well known and routinely observed to be the product of intelligent design. As well, the challenge to find target zones of function in the relevant configuration spaces with vast seas of non-function, rapidly exhausts the search resources of our observed cosmos. So, such phenomena, credibly, are reliable signs of intelligence. Why that inference is being so stoutly resisted is because of its possible worldview level implications, not anything to do with its empirical weight. (In other words, a la Lewontin et al, we see that an imposed a priori commitment to materialism is blocking and censoring out the inference to what would otherwise be the obvious best explanation.) Now, a few points above require a note or two: a] N, 123: In repeating that the FSCI must have come from the programmer . . . As the above shows, I am not making an a priori commitment (which is what he highlighted indicates) but an inference to best, empirically anchored explanation. That is, I have made a scientific rather than a philosophical inference -- it is evolutionary materialism that has introduced a priori censoring commitments on this subject, cutting off the evidence from speaking. b] If you have a solid way of differentiating between the active information input by the programmer and the active information input by the random number generator, you have solved a very interesting problem for ID. A random number generator of course is strictly capable of making an avalanche of rocks down a hillside fall into any particular shape, including the shape: WELCOME TO WALES. However, as I have pointed out long since, the number of possible at-chance configurations that do not fulfill any linguistically meaningful configuration are so much more abundant in the config space than those that do, that we do not expect to see such. This is the same basic reasoning that underlies the statistical form of the 2nd law of thermodynamics. By contrast, intelligent designers routinely arrange rocks to form such complex, linguistically functional configurations, and do many other similar things. the inference to best explanation is therefore obvious,a nd is a longstanding design theory technique. c] does the fitness landscape have to have islands of function before the functional context generates FCSI? Again, ever since Orgel in 1973, it has been well understood that complex functional organisation is distinct from mechanically generated order, and randomness. (Cf here the Abel et al cluster: orderly, random and functional sequence complexity.) In that context, the concepts of complex specified information and as a relevant subset functionally specified complex information, are relevant. Further to this, since complex function resign on complex co-adapted and configured elements is inherently highly vulnerable to perturbation, such functionality naturally shows itself as sitting on islands in a sea of non-function. That is, the description of islands in a sea of non-function is not arbitrary or suspect, but empirically well-warranted. (We do not write posts here by spewing out letters at random . . . ) d] DBT, 125: two words: radioactive isotopes. A sample of radioactive material does not generate and issue random NUMBERS, it simply has atoms that decay stochastically. Since we have observed and analysed that stochastic pattern (and others like it, e.g. Zener or sky noise), we then use our intelligence to create machines that generate random numbers using the outputs of that stochastic behaviour. 
(And we can also make pseudo-random number generators that can more or less convincingly mimic that behaviour.) Joseph is clearly right:
Can you show us a random number generator arising via nature, operating freely? That would help your case…
Also, we do not routinely observe such random number generators routinely issuing King Henry V's speech or the like. We do see intelligent agents routinely issuing linguistic and algorithmic organised sequences that exhibit FSCI. e] N, 126: There is no absolute fitness landscape that all population members experience equally in GA systems that focus on competition rather than targetted search. Again, the islands of functionality in a sea of non-function pattern is a natural one for organised complexity. And, absence of function is fairly obvious empirically. (Indeed, we may simply observe that organisms die on modest perturbation of functional organisation.) f] An absolutely low function can still be a strong relative function. The material issue is not competition among functional states of whatever high of low level, but to get to initial function without intelligent direction. The insistence on starting from the shores of islands of function simply underscores that there is no cogent answer on getting to such a shoreline without intelligent direction. GEM of TKIkairosfocus
July 19, 2009, 01:36 AM PDT
Mr Joseph, random.org. Personally, I am not convinced true randomness is necessary. As in many things evolutionary, I'm pretty sure it is a relative measure that matters, not an absolute measure. A pseudo-RNG with a period longer than the age of the universe (for example) would serve just as well. Better in some sense, because experiments are repeatable, using the same seed. This focus on relative measures applies to fitness, of course. There is no absolute fitness landscape that all population members experience equally in GA systems that focus on competition rather than targetted search. This is another argument against "islands of function". An absolutely low function can still be a strong relative function.Nakashima
July 18, 2009, 09:17 AM PDT
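
Nakashima's repeatability point above is easy to demonstrate: with a fixed seed, a pseudo-RNG replays an identical stream. A small sketch using Python's built-in Mersenne Twister (period 2^19937 − 1); any long-period PRNG would illustrate the same behaviour.

```python
import random

def run_experiment(seed, n=5):
    rng = random.Random(seed)        # independent generator, not the global one
    return [rng.random() for _ in range(n)]

# Same seed -> identical stream, so a simulation run can be reproduced exactly.
assert run_experiment(42) == run_experiment(42)
# Different seeds -> (almost certainly) different streams.
assert run_experiment(42) != run_experiment(43)
print(run_experiment(42))
```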
Well, I can't exactly show them to you, Joseph, but since I assume you accept the existence of atoms, two words: radioactive isotopes.dbthomas
July 18, 2009, 06:57 AM PDT
Nakashima-san, Can you show us a random number generator arising via nature, operating freely? That would help your case...Joseph
July 18, 2009, 06:25 AM PDT
Mr Kairosfocus, Thank you for the Wiki quote. I think I have edited that page in the past, so it is good to know that someone finds it useful. In repeating that the FSCI must have come from the programmer you are overstepping the conclusion of Dembski and Marks. The LCI paper simply concluded that the active information came from one of the inputs, without giving a method of determining which. If you have a solid way of differentiating between the active information input by the programmer and the active information input by the random number generator, you have solved a very interesting problem for ID. And that is a problem that is relevant here. You chose to highlight the word 'stochastic'; you could also highlight the word 'random', and then you would see that the 'mechanical' pejorative is not apt. So we come round again to this islands of function idea. Let me ask you plainly again - does the fitness landscape have to have islands of function before the functional context generates FCSI? Is there a measure of landscape ruggedness for which you can say "Above this value for this metric, FSCI exists; below this number it is merely CSI"?Nakashima
July 18, 2009, 05:28 AM PDT
Nakashima-San: First, I excerpt Wiki on GA's:
Genetic algorithms are implemented in a computer simulation in which a population of abstract representations (called chromosomes or the genotype of the genome) of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness), and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population. If the algorithm has terminated due to a maximum number of generations, a satisfactory solution may or may not have been reached.
The highlights should show the core problem with using GA's and their claimed inspiration in "evolution" to then seek to justify evolutionary materialism: CIRCULARITY, on multiple levels. That is why I have highlighted the issue of first getting TO the beaches of functionality before one may climb to peaks of function by whatever hill-climbing method one may wish, including e.g. modest random variation and steepest ascent, etc. In short, before you can speak of differential reproductive success, you first have to get to a viable and reproducing organism, for first life and then for major novel body plans. That is why the tree of life icon is missing its tap root, and that is why there is no good mechanism for major branching. (What explains minor variations does not account for the information threshold issue and the organised fine-tuned irreducible complexity issue.) In that context, sure GA's can move you around -- by design BTW -- within an island of function, but the issue is not there; it starts with: how do you get tot he shores of function in a very large non-functional space, without recourse to injection of active information? And that BTW is where the NFL issue comes up: you don't get the required information to create that initial functional organised complexity based on multiple complicated interacting parts for free; unless you are willing to resort to incredible luck indistinguishable form magic or materialistic miracles. In that context, FSCI would not be so much "created" by a genetic algorithm, as created by its intelligent designer. And, by Intelligence, I mean this, courtesy Wiki as cited in the glossary above:
“capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.”
PCs and the genetic algorithms we load into their active memories do not reason, plan or solve problems; they simply execute mechanical instructions mechanically, without thought or understanding. Computers mechanically executing instructions based on their architecture are not using language in the sense that we do, as we see form the distinction that computer "languages" are artificial languages. And, computer "learning" is a loose analogy. And all of that applies to GA's, whether such are used to study protein folding or antenna design. (Recall also that proteins are useful because a certain cluster of related information-rich, step by step assembled polymers will fold to mutual key-lock fitting shapes, and in so doing will fulfill key steps in the workings of life. To get to that cluster of nano-machines and their functional organisation puts us well beyond the threshold that the FSCI concept highlights.) GEM of TKIkairosfocus
July 18, 2009, 02:21 AM PDT
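
For reference, the Wikipedia description of a genetic algorithm quoted in the comment above corresponds to only a few lines of code. Below is a toy sketch (tournament selection, one-point crossover, bit-flip mutation, and a trivial "count the ones" fitness), offered purely to make the quoted loop concrete rather than to argue either side of the circularity point; all parameter values are arbitrary.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 40, 30, 60, 0.02

def fitness(genome):
    # Toy objective: number of 1-bits (the classic "OneMax" problem).
    return sum(genome)

def tournament(pop):
    # Stochastic, fitness-based selection of one parent.
    return max(random.sample(pop, 3), key=fitness)

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print(fitness(best), "".join(map(str, best)))
```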
BA: Thanks. GEM of TKI PS: I ask you to contact me (through the always linked).kairosfocus
July 18, 2009, 01:53 AM PDT
Concerning biological vitalism and how it has been straw-manified by atheistic materialists, it helps to read the works of Lionel Beale and Hans Driesch. They can be found at the Internet Archive.Vladimir Krondan
July 18, 2009, 12:43 AM PDT
bornagain77,
Mandy Moore - You’re My Only Hope - A Walk To Remember http://www.youtube.com/watch?v=q6zzKZTZ6Ro
Thanks for posting the inspiring video! She could be a great ambassador for ID. I wonder if she is a believer??herb
July 17, 2009, 07:32 AM PDT
Mr Kairosfocus, So, random walk based processes of generating contingent outcomes — and remember, mechanical necessity does not generate high contingency but plays out along trajectories shaped by initial and intervening circumstances — become irrelevant, once we are looking at the sort of recognised functionality that is vulnerable to modest perturbation. Indeed, that is why it is vitally important to understand the difference between a random walk, which has little hope of exploring the space within the life of the universe, and methods based on populations and history. For genetic algorithms, this difference is captured in John Holland's Schema Theorem. Briefly, the Schema Theorem says that exponential growth can conquer a large space. Compound interest wins again! :) There are caveats, of course, otherwise the NFL Theorem would be violated. If the space is structured in such a way that it is arbitrary (history has no predictive value) or deceptive (prediction from history leads to a place where you are worse off than before), then GAs will do no better than, or worse than, random search, and NFL is preserved. Is the space of proteins (for example) amenable to GA search, arbitrary, or deceptive? One clue is the success of other predictive processes in that space. Any success in predicting protein function from sequence would be indicative that this space is, in fact, amenable to GA search. I agree that this discussion of protein space (or a similar RNA space) may be of ultimate interest. But in the meantime, if I could just clarify that per your definition of FCSI, FCSI is generated by the processes of a GA running on a suitably large problem? I beleive this is the position of Dembski and Marks in their LCI work, though they are still struggling with the question of tracing the sources of the FSCI.Nakashima
July 17, 2009, 05:14 AM PDT
kairosfocus, Thank You for the time, patience and effort, you put into explaining the intricacies of ID. I know many times those you are trying to instruct are belligerently unreasonable to the point of making it seem talking to a brick wall would be more profitable, but there are those of us who do listen to you. So keep up the good work. Here is a song for you; Mandy Moore - You're My Only Hope - A Walk To Remember http://www.youtube.com/watch?v=q6zzKZTZ6Robornagain77
July 17, 2009, 04:26 AM PDT
Odd . . .kairosfocus
July 17, 2009, 04:10 AM PDT
Pardon a test: Testing blockquote.
single block
Next, double:
first level block
second level block
Back to level 1
Ordinary (To see how my formatting went wrong.)kairosfocus
July 17, 2009, 04:09 AM PDT
PS: For those troubled by the issue on whether or not I am in agreement with Dembski, note that islands of function (and archipelagos) are target zones where once one reaches the beachline, hill climbing algorithms such as modest chance variation + differential performance leading to culling on "best performers" will be applicable. My point in giving a simple rule of thumb with a 1,000 bit info storage capacity threshold on observed functional information, is that by specifying a criterion of such vastness that the cosmos will not be able to search more than an incredibly tiny speck of it, no reasonable islands of function will be credibly accessible through whatever is comparable to an unaided random walk in the ocean. No beacons, no wafting winds, no wafting currents, no wandering birds that allow one to know one is in the neighbourhood and which direction to go when they go home to roost on evenings, etc. In short, no active warmer/colder information that rewards non-function on proximity. Once we do that, we will very soon see that the reasl problem with OOL and later on body plan level biodiversity is that there is no reasonable way to get tot he shores of initial function on undirected chance + necessity. That is the conundrum that has needed to be answered by evolutionary materialism advocates for years at UD [and elsewhere], and which still stands unanswered.kairosfocus
July 17, 2009, 01:34 AM PDT
Rob: Pardon, but, you are recirculating already answered objections, with a few twists and turns. The unanswered reductio challenge still remains. You are a known intelligence, and the very posts you just put up are instances of original, i.e. creative and more or less contextually responsive text in English of more than 143 characters. In short, all the documentation you really need is sitting there in front of you, and is the product of your own recent creative action. Similarly, you would have done much the same had you put up source code. And that answers to the basic issue directly: computers are programmed mechanisms, designed and developed by intelligent agents. They can be programmed to carry out targetted searches of large configuration spaces, but once the spaces get big enough, they do not do so successfully by random walks in a sea of non-function dotted by isolated islands of function. Instead, they step by step carry out preset routines, and in so doing address targets based on preset algorithms. (The relevance of this to the theme of this thread has just been discussed in my response to Nakashima-San.) By contrast a human in a general language or programming situation exhibits volitional spontaneity and creativity: s/he is not spitting out shuffled pre-programmed contextually pre-programmed responses a la Turing's test, or by random shuffling, but is creating genuine non-pre-existing novelty. [That's why so-called expert systems don't work so well outside of narrow contexts where more or less exhaustive rules and cases can be constructed (typically resulting in rather predictable outcomes) and/or deep searches across contingencies can be undertaken. Humans -- a case of observed intelligences -- don't need more than a fuzzy idea and some practice with examples to begin to create successful novel information-rich entities. (Think here on Chomsky et al on the way even infants generate novel sentences.) And, given the critical significance of surprise in many real world situations, that difference is vital. OPTIMAL answers are usually brittle; indeed since we have bounded rationality, the GIGO principle applies -- a programmed optimum for a model may well leave out key unanticipated information from the environment. A classic case is say an expert aircraft landing system that does not factor in a case where an earthquake has cracked a runway. A common-sense using student pilot will spot that something is wrong, but a machine will go right ahead and will crash the plane. Or, check out the performance of OCR systems and Spell checks or Grammar checks. If you trust an OCR to get it fully right, you deserve the result you will get; observe, we then use a human proof reader to correct the output. Why is that? And, why it is that humans can usually read ordinary handwriting (the mess created by doctors is an exception here . . . ), but computers run into serious difficulties trying to do that?] As to the reiterated assertion, insinuation or implication that FSCI is not well defined so can be dismissed, I again point to the simple rule of thumb description/model from weak argument corrective 28:
For practical purposes, once an aspect of a system, process or object of interest has at least 500 – 1,000 bits or the equivalent of information storing capacity, and uses that capacity to specify a function that can be disrupted by moderate perturbations, then it manifests FSCI, thus CSI. This also leads to a simple metric for FSCI, the functionally specified bit; as with those that are used to display this text on your PC screen. (For instance, where such a screen has 800 x 600 pixels of 24 bits, that requires 11.52 million functionally specified bits. This is well above the 500 – 1,000 bit threshold.) The example: a PC screen such as the one you are reading this on is also still quite relevant and again raises the need to address what is literally right there in front of you. Rob, how did the PC in front of you generate the windows, graphics and text on it -- by [A] chance plus necessity undirected by decision-making creative intelligences [as we observe and experience them -- debates and dismissals over "libertarian free will" notwithstanding (onlookers BTW, cf what happens when we revert to materialist reductionism, here)], or [B] by a process of mechanically -- i.e without common sense intervening -- executing a known intelligently designed program acting on inputs, in the end through using organised circuits and voltages? So much so that it is often said: GIGO -- garbage in, garbage out. (I also note that for instance, "life" and many other key entities in science have no generally agreed precising definition, but are still very useful and important SCIENTIFIC entities. That is, I give you a counter example to the notion that entities that have no precising definitions are not proper conceptual entities and can be dismissed. [In fact, we form concepts by abstracting intuitively from examples and then seek to construct descriptions and definitions in words, testing against examples to see if they are reliable enough to use. This then becomes the foundation of quantitative MODELS -- observe we are not here addressing realities -- which can be used where relevant and reliable. I here assert that the just above model is adequately reliable to be used as a criterion of functionally specific complex information and its empirically credible source. Without needing to go beyond the fact that we OBSERVE certain entities -- including ourselves -- that are creatively intelligent and in that intelligence often significantly differ in actions from programmed behaviour and/or from randomness or mechanical necessity of nature. Specifically, [a] necessity gives rise to low contingency. [b] High contingency has the known sources: (i) stochastic, undirected contingency (= chance) and (ii) intelligence, which often shows its presence by rational, decisional behaviour that is creative and in some cases wise as opposed to merely otpimising on a narrow model. FSCI as just simply modelled is a known, routinely observed artifact of such intelligent action, per millions of cases as can be seen on the Internet.]) It seems to me that a cycle of endless debates over words and terms and objections can only really be resolved by reiterating the still unanswered challenge:
Rob, you need to show us a case where undirected chance + necessity (i.e nature acting freely) has credibly created say a 143 character string of text in English (up to a typo or two) that responds to a real world situation. No libraries of chainable prepackaged responses or preset text strings or generating rules, or targets and rules of improvement by proximity without reference to functionality in a context of isolated islands of function, or the like, etc. This or the like has long since been on the table [for months to years here at UD], and the resort to every artifice of debate but the simple production of an empirical counter-example demonstrates clearly that you have no such good counter example. And that in turn goes to the heart of the issue in this thread: materialistic models of origin of life are based on maximally improbable scenarios, and are often insisted on in the teeth of the known routinely observed source of the functional, specified complex information observed ever since Orgel et al to be a key and discriminating characteristic of life. GEM of TKI
kairosfocus
July 17, 2009, 01:20 AM PDT
Nakashima-San: Again, once we deal with ~ 500 - 1,000+ bits of information storage capacity to carry out a function, the point is that we cannot exhaust the configuration space or even search out a significant fraction thereof. (The entire universe we observe would not be capable of searching out 1 in 10^150 of the space. The implied odds of getting to any one block of 10^150 configs are like marking just one atom for just one instant in the entire history of the observed universe, then getting into a time and space travelling spaceship and going anywhere in the history and locations of the observed cosmos at random, and on the very first try, we pick up the marked atom at just the right instant of time. That's why this is a practically unwinnable lottery.) So, random walk based processes of generating contingent outcomes -- and remember, mechanical necessity does not generate high contingency but plays out along trajectories shaped by initial and intervening circumstances -- become irrelevant, once we are looking at the sort of recognised functionality that is vulnerable to modest perturbation. For instance take a prebiotic soup model, with empirically plausible monomers in it. To move from such soups to metabolic and/or genetic functionality on chance + necessity only requires spontaneous generation of relevant co-adapted macromolecules, and that these be configured together in the "correct" relationships in spaces of order 10^-6 m scale. As I discuss in my App 1, point 6 in the always linked, the configuration space is daunting, and the result is that the odds of getting to such life on the gamut of our observed cosmos are not materially different from zero. But, we know by routine observation -- e.g. posts in this thread (pace Rob's reiterated objections . . . ) -- that intelligences routinely produce FSCI-bearing functional systems. So, on inference to best empirically anchored, current explanation origin of life (the nominal focus of this thread) is best explained by intelligent design. GEM of TKIkairosfocus
July 17, 2009, 01:10 AM PDT
kairosfocus@99:
Real cases of emergence such as how Na and Cl form common salt, have dynamical processes that we can trace.
Do all real cases of emergence have dynamical processes that we can trace?
In short the much touted objection on “uniformity” is a strawman argument.
You're conflating two unrelated points regarding uniformity. My point had nothing to do with the fact that Dembski's null hypotheses are virtually always uniform distributions. The strawman accusation is tiring, especially when it stems from your own misunderstanding.
A 800 x 600 pixel screen with 24 bits per pixel is not going to be compressed below 1,000 bits of information capacity
So when you say "11.52 million functionally specified bits," do you really mean 11.52 million functionally specified bits?
Strawman of immateriality, again.
You're an intelligent person, so you certainly knows what "strawman" means, and yet you repeatedly level the charge against me without telling me how I've misrepresented your position. I even explicitly asked for you to clearly state any position that I have falsely attributed to you so I can retract it. The olive branch was ignored, and I continue to get accusations of strawman. I've said before that you're a good man, kairosfocus, and I believe that, but your brand of "charity in communication" seems strange to me. A few more points: - You might want to follow Dembski's example in including all outcomes that meet the given specification (or function in your case) in your calculation. Dembski does it that way for a good reason. - This would require that you explicitly state a function, rather than just saying that something is functional. The information on my computer screen has many functions, and there is a different quantity of CSI associated with each function, according to Dembski's definitions. - You might also consider incorporating specificational resources as Dembski does, also for a good reason. - You didn't answer my question as to whether a blank screen is functional. Ditto on the screenful of noise. Do you not see the relevance of these questions? - How do we use the FSC of proteins to calculate the amount of FSCI that goes into a design process? What would be a ballpark figure for the amount of FSCI that went into creating, say, this sentence? - Please point me to the documentation that shows that humans generated the FSCI in my PC and in the internet, as opposed to humans being conduits for that information. Thank you in advance.R0b
July 16, 2009, 01:01 PM PDT
kairosfocus@99:
Second, in our experience, we are conscious, enconscienced, minded creatures, who find ourselves making choices and originating things with a breadth of range that transcends the credible reach of programming.
The reach of programming is quite vast. Unless we can solve the halting problem, we're within its reach. As to the credible reach of programming, that depends on who or what the programmer is.
Decision-making creatures, notoriously, are rational but are not predictable, nor reducible to outcomes scatterinfg along a mechnical probabilistic statistical distribution.
Rationality entails some degree of predictability. A perfectly rational person is guaranteed to make one of a set of optimal decisions. It's only within that set that their choice is unpredictable. And certainly human behavior is predictable to some degree, even some irrational behaviors. Any set of outcomes constitutes a statistical distribution, and distributions are often used to predict human behavior. To say that human choices are not "reducible to outcomes scattering along a mechanical probabilistic statistical distribution" begs the question of whether human choices are mechanical, whatever that means.
So, the real answer is that we should be at least open to the possibility t hat the apparent creative enconscienced reasoning and deciding consciousness that we experience and which is a premise of all intellectual activities such as science, is real.
Now we're to the heart of the debate. It sounds like you're positing something like libertarian free will, and it seems that some such idea is a necessary in order to conclude that humans create FSCI as opposed to merely storing and expressing it. So instead of it being obvious and universally observed that intelligence creates FSCI, it turns out to be a conclusion based on an LFW-like metaphysic. That has been my point from the beginning. So I repeat that I have never directly observed a human generating FSCI, a statement that you earlier called "self-referential absurdity." Now you say that we should be at least open to the metaphysic from which we can conclude that humans generate FSCI. It seems that your stance has softened considerably.R0b
July 16, 2009, 11:52 AM PDT
kairosfocus@98:
So it is not an “admission” to note that we are capable of finding targets in large config spaces.
Computers and nature can find targets in large config spaces too. But I didn't ask whether we're capable of finding targets in large config spaces (and I didn't say anything resembling "admission"). The question is whether we can find sparse FSCI targets without any FSCI to guide us.
In short, you are here plainly tilting at a strawman of your own manufacture.
I asked a yes/no question on whether you make a specific claim. How that constitutes a strawman is beyond me.
Again, the point of functionally specified complex information — ever since Orgel introduced the concept — is that first, function must be OBSERVED, which acts as the specification for the complex information.
The hash challenge has nothing to do with identifying FSCI. It's a challenge to see if you can generate FSCI without using existing FSCI. (Or more accurately, FSI, since the ratio of target size to search space size is less than 153 bits.)
Back to 101: when we see a text string of 143 or more ASCII characters that functions as contextually responsive or appropriate text in English, we may draw out certain conclusions:
I have never disputed that responsive text is FSCI. I accept that arguendo, so there's no need to defend that claim.
“we know that intelligence routinely generates such FSCI.”
Your 7-step argument does not address my reasons for disputing this claim, so I'll ignore it.
This — pace your clever word choice — is not a mere ASSUMPTION, it is a fact — whether or not it is a welcome one for those who argue that chance + necessity are sufficient to explain [away?] cases of “apparent” design.
Whether or not chance+necessity are sufficient to explain cases of apparent design is irrelevant to the question of whether FSCI is well-defined. I've accepted, arguendo, that it is, so you don't need to defend it.
We further know that a server is a programmed entity, which is not carrying out decisions of its own volition, but simply mechanically executes a program, through in the end switching logic states and assocated voltages in circuits.
You didn't answer my question explicitly, but the implication is that necessity (plus, I assume, chance) cannot generate FSCI, so we can rule out computer programs as originators of FSCI. Have I interpreted you correctly?R0b
July 16, 2009, 10:44 AM PDT
Mr Kairosfocus, FSCI in the relevant context becomes a consideration when we are addressing config spaces that are large and comprise isolated islands of function. I'm not sure what you are saying. Does this context invalidate whether something is FSCI or not? I thought 1000 bits was pretty large, but we can make the example gigabits in size if you prefer. 2^10^9 is a pretty big config space. How big do the 'cliffs' have to be around these islands before something becomes FSCI?Nakashima
July 16, 2009, 09:09 AM PDT
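
The magnitudes traded back and forth in this exchange (the 500 to 1,000 bit rule-of-thumb threshold, the 800 x 600 x 24-bit screen, a 2^(10^9) configuration space, the 10^150 bound) are straightforward to compute. A few lines of integer arithmetic, shown only to make those comparisons concrete:

```python
import math

# Figures quoted in the thread, as exact integers or digit counts.
screen_bits = 800 * 600 * 24              # the screen example
print(screen_bits)                        # -> 11520000, i.e. 11.52 million bits

threshold_space = 2 ** 1000               # configurations at the 1,000-bit threshold
print(len(str(threshold_space)))          # -> 302 decimal digits, roughly 1.07e301

upb = 10 ** 150                           # the 10^150 figure used in the thread
print(threshold_space > upb ** 2)         # -> True: 2^1000 exceeds (10^150)^2

# Digits in 2^(10^9), computed from logarithms without building the number itself.
print(int(1_000_000_000 * math.log10(2)) + 1)   # -> 301029996 decimal digits
```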
90DegreeAngel: The issue of impact of randomness on a designed object is of course a complex one. If it has enough redundancy in it, it may be robust enough to still function even with moderate damage to the information. (Think here of error correcting codes.) Insofar as there is damage to functional bits -- and, beyond a threshold, an error correcting code or the like will be overwhelmed -- the number of actually functional bits will be falling as damage occurs, and ability to recover function in the teeth of further damage will also be falling. With the degree of intricacy of function we are talking about, the threshold by which functionality fails will occur long before the number of bits that are undamaged falls below the 500 - 1,000 bit rule of thumb threshold. (Notice, experiments point to auto-disintegration of bio-function for independent life forms once the number of base pairs falls below about 300,000.) This of course bears more than a passing resemblance to the concerns under Sanford's genetic entropy. GEM of TKIkairosfocus
July 16, 2009, 08:39 AM PDT
Nakashima-san: Following up briefly: the program is designed to undertake a targetted search within the reasonable scope of resources of the cosmos, and on a fitness landscape that is not based on islands of function in vast config spaces that are non-functional and have no beacons to broadcast the "right" direction to move in. FSCI in the relevant context becomes a consideration when we are addressing config spaces that are large and comprise isolated islands of function. It is enough for me that the GA is itself FSCI (programs being informational entities), and that the machine on which it runs exhibits FSCI. They therefore exemplify the pattern that FSCI originates in intelligent design per our observational experience. GEM of TKIkairosfocus
July 16, 2009, 08:31 AM PDT
I have a question for KF . . . As a student of such simulations and their weaknesses, I have to agree with much of what you said. However, I have one objection. This objection stems from the work of GilDodgen and the type of simulations he does. Gil has, correctly, pointed out that all said simulations cannot be worthwhile because they do not take into account the true reality of nature and the randomness of it . . . So when you take your machine that is running a simulation and expose it to radiation or chemicals that might impact the simulation and the machine itself running the simulation . . . does this increase or decrease the FSCI?90DegreeAngel
July 16, 2009, 07:51 AM PDT
