Uncommon Descent | Serving The Intelligent Design Community

On Active Information, search, Islands of Function and FSCO/I

Categories: ID Foundations, rhetoric, specified complexity

A current rhetorical tack of objections to the design inference has two facets:

(a) suggesting or implying that, by moving research focus to Active Information, the needle-in-haystack, search-challenge-linked concept of Specified Complexity has been "dispensed with" [and with it, related concepts such as FSCO/I]; and

(b) setting out to dismiss Active Information, now considered in isolation.

Both of these rhetorical gambits are in error.

However, the fact that a rhetorical assertion or strategy is erroneous does not mean that it is unpersuasive, especially for those already inclined that way in the first place.

So, a corrective is necessary.

First, let us observe how Marks and Dembski began their 2010 paper, in its abstract:

Needle-in-the-haystack problems look for small targets in large spaces. In such cases, blind search stands no hope of success. Conservation of information dictates any search technique will work, on average, as well as blind search. Success requires an assisted search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level “search for a search” is any easier than the original search. We prove two results: (1) The Horizontal No Free Lunch Theorem, which shows that average relative performance of searches never exceeds unassisted or blind searches, and (2) The Vertical No Free Lunch Theorem, which shows that the difficulty of searching for a successful search increases exponentially with respect to the minimum allowable active information being sought.

That is, the context of active information and the associated search for a good search is exactly that of finding isolated targets Ti in large configuration spaces W, which then pose a needle-in-haystack search challenge. Or, as I have represented this so often here at UD:

[Figure: csi_defn]

Updating to reflect the bridge to the origin of life challenge:

[Figure: islands_of_func_chall]

In this model, we see how researchers in evolutionary computing typically confine their work to tractable cases: a dust of random-walk searches, drifting down a presumably gentle slope on what looks like a fairly flat surface, is indeed likely to converge on multiple zones of sharply rising function, which then allows identification of likely local peaks of function. The researcher then faces a second-tier search across the peaks to find a global maximum, as sketched below.
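To make the two-tier picture concrete, here is a minimal toy sketch (the landscape f and all its numbers are assumptions for illustration, not drawn from the cited papers): tier one sends out a dust of random-walk climbers that drift uphill; tier two compares the local peaks they find and keeps the best.

```python
import random
from math import sin

def f(x: float) -> float:
    # Assumed toy landscape: smooth, gently varying, multi-peaked.
    return sin(3 * x) + 0.3 * sin(17 * x)

def climb(x: float, steps: int = 2000, eps: float = 0.01) -> float:
    # Tier 1: a random walk that keeps any step that does not go downhill.
    for _ in range(steps):
        x2 = x + random.uniform(-eps, eps)
        if f(x2) >= f(x):
            x = x2
    return x

# A "dust" of 50 walkers scattered over the interval [0, 3].
peaks = [climb(random.uniform(0.0, 3.0)) for _ in range(50)]
best = max(peaks, key=f)  # Tier 2: search across the found peaks.
print(f"best x ~ {best:.3f}, f(best) ~ {f(best):.3f}")
```

Note that this only works because the assumed landscape is smooth and the walkers start on rising ground.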

This of course contrasts with the FSCO/I [= functionally specific, complex organisation and/or associated information] case where

a: due to a need for multiple well-matched parts that

b: must be correctly arranged and coupled together

c: per a functionally specific wiring diagram

d: to attain the particular interactions that achieve function, and so

e: will be tied to an information-rich wiring diagram that

f: may be described and quantified informationally by using

g: a structured list of y/n q’s forming a descriptive bit string

. . . we naturally see instead isolated zones of function Ti amidst a much larger sea of non-functional clustered or scattered arrangements of parts.
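To illustrate the descriptive bit-string idea in a hedged, toy form (the part counts and option counts below are assumed for illustration, not measured from any real system): each part's placement and orientation can be pinned down by a structured chain of yes/no questions, and the chain length in bits both measures the description and sizes the configuration space W.

```python
import math

# Assumed toy numbers: 20 parts, each placeable in any of 20 slots,
# with 8 possible orientations per part.
N_PARTS, SLOTS, ORIENTATIONS = 20, 20, 8

# Yes/no questions needed to pin down one part's slot and orientation.
bits_per_part = math.log2(SLOTS) + math.log2(ORIENTATIONS)
total_bits = N_PARTS * bits_per_part  # length of the descriptive bit string

# The clumped-arrangement space W grows as (options per part)^(number of parts).
log2_W = N_PARTS * math.log2(SLOTS * ORIENTATIONS)
print(f"{bits_per_part:.2f} bits/part, {total_bits:.0f} bits total, |W| = 2^{log2_W:.0f}")
```

On these toy numbers the wiring diagram already takes about 150 bits to state, and the space of possible arrangements runs to 2^150 or so; the functional zones Ti are then tiny subsets of that W.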

This may be illustrated by an Abu 6500 C3 fishing reel exploded view assembly diagram:

[Figure: abu_6500c3mag]

. . . which may be compared to the organisation of a petroleum refinery:

[Figure: Petroleum refinery block diagram illustrating FSCO/I in a process-flow system]

. . . and to that of the cellular protein synthesis system:

[Figure: Protein Synthesis (HT: Wiki Media)]

. . . and onward the cellular metabolic process network (with the above being the small corner top left):

[Figure: cell_metabolism]

(NB: I insist on presenting this cluster of illustrations to demonstrate, to all but the willfully obtuse, that FSCO/I is real, unavoidably familiar and pivotally relevant to origin of cell-based life discussions, with implications onward for body plans that must unfold from an embryo or the like; that is, for OOL [origin of life] and OOBP [origin of body plans].)

Now, in their 2013 paper on generalising their analysis, Marks, Dembski and Ewert begin:

All but the most trivial searches are needle-in-the-haystack problems. Yet many searches successfully locate needles in haystacks. How is this possible? A successful search locates a target in a manageable number of steps. According to conservation of information, nontrivial searches can be successful only by drawing on existing external information, outputting no more information than was inputted [1]. In previous work, we made assumptions that limited the generality of conservation of information, such as assuming that the baseline against which search performance is evaluated must be a uniform probability distribution or that any query of the search space yields full knowledge of whether the candidate queried is inside or outside the target. In this paper, we remove such constraints and show that conservation of information holds quite generally. We continue to assume that targets are fixed. Search for fuzzy and moveable targets will be the topic of future research by the Evolutionary Informatics Lab.

In generalizing conservation of information, we first generalize what we mean by targeted search. The first three sections of this paper therefore develop a general approach to targeted search. The upshot of this approach is that any search may be represented as a probability distribution on the space being searched. Readers who are prepared to accept that searches may be represented in this way can skip to section 4 and regard the first three sections as stage-setting. Nonetheless, we suggest that readers study these first three sections, if only to appreciate the full generality of the approach to search we are proposing and also to understand why attempts to circumvent conservation of information via certain types of searches fail. Indeed, as we shall see, such attempts to bypass conservation of information look to searches that fall under the general approach outlined here; moreover, conservation of information, as formalized here, applies to all these cases . . .

So, again, the direct relevance of FSCO/I and the linked needle-in-haystack search challenge continues.

Going further, we may now focus:

[Figure: is_o_func2_activ_info]

In short, active information is a bridge that allows us to pass to relevant zones of FSCO/I, Ti, and to cross plateaus and intervening valleys in an island of function that does not exhibit a neatly behaved objective function. And it is reasonable to measure its impact in informational terms, based on search improvement. (Where, it may only need to give a hint: try here and scratch around a bit, warmer/colder/hot-hot-hot. Active information itself does not have to give the sort of detailed wiring-diagram description associated with FSCO/I.)
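The Marks-Dembski papers quantify this by comparing the blind-search baseline probability p of hitting the target with the assisted-search probability q: endogenous information is -log2(p), and the active information supplied by the assist is I+ = log2(q/p). A minimal sketch of that bookkeeping (the probabilities below are made-up illustration values, not data from the papers):

```python
import math

def active_information(p_blind: float, q_assisted: float) -> float:
    # I+ = log2(q/p): bits of search advantage over the blind baseline.
    return math.log2(q_assisted / p_blind)

# Illustration: a target a blind search hits with probability 2^-100,
# which a hint-giving (warmer/colder) search hits with probability 2^-10.
print(active_information(2**-100, 2**-10))  # 90.0 bits of active information
```

On this accounting, the warmer/colder hints are worth 90 bits: the assisted search behaves as if 90 bits of the problem had already been solved for it.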

It must be deeply understood that the dominant aspect of the situation is resource sparseness confronting a blind needle-in-haystack search. A reasonably random blind search will not credibly outperform the yardstick flat random search, which is overwhelmingly likely to fail. Too much stack, too few search resources, too little time. And a drastically improved search, a golden search if you will, itself has to be found before it becomes relevant.

That means, searching for a good search.

Where, a search on a configuration space W samples a subset of W. That is, a search can be identified with a member of the power set of W, which has cardinality 2^|W|. Thus it is plausible that such a higher-level search will be much harder than a direct, fairly random search. (And yes, one may elaborate an analysis to address that point, but it is going to come back to much the same conclusion.)
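A hedged, back-of-envelope illustration of that blow-up, using the 1,000-coin space discussed below: if W has 2^1,000 configurations, the space of possible searches, identified with subsets of W, has 2^(2^1,000) members.

```python
# For n coins, |W| = 2^n, and the power set of W has 2^|W| members,
# so log2 of the search-for-a-search space is itself astronomically large.
n = 1000
log2_W = n               # log2 |W| = 1,000
log2_searches = 2 ** n   # log2 |P(W)| = 2^1000 ~ 1.07e301
print(f"log2 |W| = {log2_W}")
print(f"log2 |P(W)| ~ {float(log2_searches):.2e}")
```

That is, merely writing down the exponent of the search-for-a-search space already requires a number of order 10^301, which is the intuition behind the Vertical No Free Lunch result's exponentially increasing difficulty.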

Further, consider the case where the pictured zones are like sandy barrier islands, shape-shifting and able to move. That is, they are dynamic.

This will not affect the dominant challenge, which is to get to an initial Ti for OOL, then onward to further islands Tj etc. for OOBP. Handling such moveable targets is doubtless a work in progress over at the Evolutionary Informatics Lab, but the challenge in the main is already patent.

To give an outline idea, let me clip a summary of the needle-to-stack challenge:

Our observed cosmos has in it some 10^80 atoms, and a good atomic-level clock-tick is a fast chemical reaction time of perhaps 10^-14 s. Its age, 13.7 bn years, is ~10^17 s. The number of atom-scale events in that span in the observed cosmos is thus of order 10^111.

The number of configs for 1,000 coins (or, bits) is 2^1,000 ~ 1.07*10^301.

That is, if we were to give each atom of the observed cosmos a tray of 1,000 coins, and toss, observe and process the trays 10^14 times per second, the resources of the observed cosmos would sample only about 1 in 10^190 of the set of possibilities.

It is reasonable to deem such a blind search, whether contiguous or a dust, as far too sparse to have any reasonable likelihood of finding any reasonably isolated “needles” in the haystack of possibilities. A rough calc suggests that the ratio is comparable to a single straw drawn from a cubical haystack ~ 2 * 10^45 LY across. (Our observed cosmos may be ~ 10^11 LY across, i.e. the imaginary haystack would swallow up our observed cosmos.)
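A sketch of the arithmetic in this clipped summary (the quantities are the ones given above, handled in log10 form to keep the numbers printable):

```python
import math

log10_atoms = 80                     # ~10^80 atoms in the observed cosmos
log10_rate = 14                      # ~10^14 fast-reaction ticks per second
log10_age = 17                       # 13.7 bn years ~ 10^17 seconds
log10_events = log10_atoms + log10_rate + log10_age  # -> 111

log10_configs = 1000 * math.log10(2)                 # 2^1000 ~ 10^301.03
log10_fraction = log10_events - log10_configs        # -> ~ -190

print(f"events ~ 10^{log10_events}, configs ~ 10^{log10_configs:.0f}")
print(f"fraction sampled ~ 10^{log10_fraction:.0f}")  # ~1 in 10^190
```

The sampled fraction, about 1 in 10^190, is the basis of the one-straw-from-a-vast-haystack comparison just made.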

Of course, as posts in this thread amply demonstrate, the "miracle" of intelligently directed configuration allows us to routinely produce cases of functionally specific complex organisation and/or associated information well beyond such a threshold. For ASCII text at 7 bits per character, 1,000 bits is about 143 characters, roughly the length of a Twitter post.

As genomes for OOL alone start out at 100 - 1,000 k bases, and those for OOBP credibly run to 10 - 100+ mn bases, this is a toy illustration of the true magnitude of the problem.
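For scale, a short hedged aside (counting raw capacity at 2 bits per four-state base, before any coding or functional constraints):

```python
# Raw information-carrying capacity of the genome ranges cited above,
# at 2 bits per four-state DNA base (capacity, not measured content).
for label, bases in [("OOL low", 100_000), ("OOL high", 1_000_000),
                     ("OOBP low", 10_000_000), ("OOBP high", 100_000_000)]:
    print(f"{label}: {bases:,} bases ~ {2 * bases:,} bits")
```

Even the lowest figure, 200,000 bits, dwarfs the 1,000-bit threshold used in the toy coin illustration.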

The context and challenge addressed by the active information concept is the blind needle-in-haystack search challenge, and so also FSCO/I. The only actually observed adequate cause of FSCO/I is intelligently directed configuration, aka design. And, per further experience, design works by injecting active information coming from a self-moved agent cause capable of rational contemplation and creative synthesis.

So, FSCO/I remains best explained by design. In fact, per a trillion-member base of observations, it is a reliable sign of design. That has very direct implications for our thought on OOL and OOBP.

Or, it should. END

Comments
TA, did you actually take time to read what is in front of you, which shows that the concept and almost the exact phrasing in the acronym FSCO/I has long been in the relevant literature from leading lights, starting with Orgel and Wicken in the 1970s? Apart from that, such a descriptive phrase describes things that are literally right in front of us as we participate in this discussion -- text strings, PCs that process such, ever so many artifacts, etc. Or, are you just trying to pile on a dismissive talking point that evades addressing a patent issue? KF

kairosfocus, May 4, 2015 at 05:13 AM PST
Kf, thanks for the info.

velikovskys, May 4, 2015 at 05:03 AM PST
mike1962: Give me an example with regard to software implemented replicator objects interacting with their environment.

A simple example is Weasel, but for a scientific example, see Krupp & Taylor, Social evolution in the shadow of asymmetrical relatedness, Proceedings of the Royal Society B: Biological Sciences 2015.

mike1962: In order for a replicator to have a relationship with it's environment it has to have certain properties that will necessarily be determinative of any future outcome.

It's not determinative other than in the trivial sense.

mike1962: No, it also requires that the replicators have particular properties that allow it to successfully replicate in the environment.

That's what is meant by a fitness landscape. If the fitness is always one or always zero, then the landscape is flat, and evolution would be no different than a random walk.

Mung: Let's recall that the initial population is randomly generated.

That's what is commonly done. Consider them conjectured solutions.

Mung: How is this relationship defined in an EA?

It's entailed in the concept of a fitness landscape. Given a sequence, we determine its fitness. And if the fitness landscape is non-chaotic, then replicators will tend to find higher levels of fitness. If we can't determine fitness or if fitness doesn't vary, then evolution would be no different than a random walk.

NetResearchGuy: I think Mung's point is that known functional evolutionary algorithms start with a fixed set of alleles that are designed. For example Weasel starts with letters, the antenna evolving algorithm starts with a working antenna made out of metal and a list of allowable mutations, the nozzle evolving algorithm starts with a working nozzle, etc.

It's entailed in the notion of a fitness landscape, otherwise there would be no relationship between the replicators and the fitness landscape. The fitness landscape would be random or flat with respect to sequence. Evolution only works when the fitness landscape is positively ordered.

NetResearchGuy: In other words, the initial genotype is designed to start on an island of function, and the allowable variations to the genotype are designed to remain on that island of function.

In many evolutionary algorithms, most variations are not viable.

NetResearchGuy: Evolution doesn't always work in every case -- it's quite easy to construct examples where evolution can't work, at least with finite resources.

Absolutely. There are chaotic landscapes, as well as perverse landscapes, that are not amenable to evolutionary processes. Evolution won't work on the vast majority of conceivable fitness landscapes, even with resources short of exhaustive sampling. However, biological evolution works in a highly ordered, albeit complex, fitness landscape.

NetResearchGuy: I've never seen EL or Z even tangentially address these issues.

Many times. The reason evolution works in rational fitness landscapes is because it doesn't have to sample the vast majority of space, just like you don't have to cover every bit of ground to reach the top of a hill.

kairosfocus: As a first answer, we can challenge you, Z, to provide an onward incrementalist functional at each step chance variation and selection based progress to something like . . .

Provide us the fitness landscape, a way to determine whether something is a valid phrase in English, then we will provide the algorithm. A phrase book should contain these among other snips: "quick dog", "lazy fox", "jumps over", "the quick", etc.

Zachriel, May 4, 2015 at 04:35 AM PST
Kairosfocus: have you submitted a paper covering a theoretical or experimental justification of your FSCO/I proposition to any peer-reviewed journal? If you have, can you provide a citation? Thanks in advance.

timothya, May 4, 2015 at 04:28 AM PST
F/N: FSCO/I is BTW a genuine, legitimately accounted for case of the emergent behaviour of systems comprising interacting parts. But, of course, while it readily gets you to mechanical, GIGO-limited computation, it will not allow you to indulge the fantasy of poof, we get North to rational self-aware contemplation by insistently heading West to blindly mechanical computation. KF

kairosfocus, May 4, 2015 at 03:12 AM PST
sparc, 144:
How often have we seen this very thread before? I am not interested in fishing but even I realize that I’ve seen the Abu 6500 C3 reel before (according to Google it appears 42 times on this site). Just opening another thread will not bring the stillborn FSCO/I to life. Didn’t you read what WE had to say about it? And what about Dembski, Meyer, Behe, Marks et al.? Do you think they even consider FSCO/I? FSCO/I just dead and never lived.
How eager -- suspiciously eager -- you are to not look at what is in front of you: concrete cases in point illustrating the reality and characteristics of functionally specific complex organisation and associated information, FSCO/I; and of course, to write a stillbirth certificate. Utterly revealing of the underlying problem. For, this is a blatant case of knocking over a strawman and pretending the strawman was all that ever was there. Backed up, by a hoped for negative appeal to authority in the teeth of the actual demonstration of the reality of FSCO/I before your eyes that you want to avert your eyes and mind from, complaining that you have seen these cases before. Exactly.

(That is, with all due respect, you inadvertently showed how your root problem is the fallacy of the closed, indoctrinated mind. A particularly virulent form of selective hyperskepticism that has led you to disbelieve the testimony of your eyes. As the just linked explains. [I add: those who would caricature that other phrase I have often used as a descriptive phrase for what Simon Greenleaf long ago termed the error of the skeptic, should note that it describes a real problem encountered live also: inconsistent, double standards of required warrant that project a demand on what one is inclined to reject that one would not accept for a comparable case one is inclined to accept. And, where typically, if the hyperskeptical standard were to be generally applied, whole fields of learning or common sense useful or even vital knowledge would vanish.])

Since you are trying to suggest that functionally specific complex organisation and associated information, FSCO/I, is a non-issue . . . predictable, given a recent exchange in another thread alluded to above, I will AGAIN cite the remark made by Dembski in No Free Lunch, where he highlighted that the functional subset of complex specified information is the relevant one for the biological world. That is, the concept just described and abbreviated, FYI, does appear at a crucial point in the writings of Dr Dembski:
p. 148: "The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. {Dembski cites:} Wouters, p. 148: "globally in terms of the viability of whole organisms," Behe, p. 148: "minimal function of biochemical systems," Dawkins, pp. 148 - 9: "Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction." On p. 149, he roughly cites Orgel's famous remark from 1973, which exactly cited reads: In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . And, p. 149, he highlights Paul Davies in The Fifth Miracle: "Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity."] . . ."

p. 144: [[Specified complexity can be more formally defined:] ". . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[effectively the target hot zone in the field of possibilities] subsumes E [[effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . ."
Nor is this a novelty introduced by Dembski. When he would have been in High School, in 1973, FYFI, this is what leading OOL researcher, Leslie Orgel, had to say:
. . . In brief, living organisms [--> bio-functional context] are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . . [HT, Mung, fr. p. 190 & 196:] These vague idea can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [--> this is of course equivalent to the string of yes/no questions required to specify the relevant "wiring diagram" for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002 . . . ] One can see intuitively that many instructions are needed to specify a complex structure. [--> so if the q's to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [--> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196.]
And FYSYFI a few years later, J S Wicken in 1979, probably while Dembski was in College as an undergrad:
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]
The descriptive phrase, functionally specific complex organisation and associated information, FSCO/I for short, FYI, is directly based on Wicken's remarks and is informed by a much wider circle of considerations. You go on to call up Meyer, obviously having failed to note what he was already noted to have stated in answer to Falk's hostile review of the 2009 book, Signature in the Cell:
For nearly sixty years origin-of-life researchers have attempted to use pre-biotic simulation experiments to find a plausible pathway by which life might have arisen from simpler non-living chemicals, thereby providing support for chemical evolutionary theory. While these experiments have occasionally yielded interesting insights about the conditions under which certain reactions will or won’t produce the various small molecule constituents of larger bio-macromolecules, they have shed no light on how the information in these larger macromolecules (particularly in DNA and RNA) could have arisen. Nor should this be surprising in light of what we have long known about the chemical structure of DNA and RNA. As I show in Signature in the Cell, the chemical structures of DNA and RNA allow them to store information precisely because chemical affinities between their smaller molecular subunits do not determine the specific arrangements of the bases in the DNA and RNA molecules. Instead, the same type of chemical bond (an N-glycosidic bond) forms between the backbone and each one of the four bases, allowing any one of the bases to attach at any site along the backbone, in turn allowing an innumerable variety of different sequences. This chemical indeterminacy is precisely what permits DNA and RNA to function as information carriers. It also dooms attempts to account for the origin of the information—the precise sequencing of the bases—in these molecules as the result of deterministic chemical interactions . . . . [[W]e now have a wealth of experience showing that what I call specified or functional information (especially if encoded in digital form) does not arise from purely physical or chemical antecedents [[--> i.e. by blind, undirected forces of chance and necessity]. Indeed, the ribozyme engineering and pre-biotic simulation experiments that Professor Falk commends to my attention actually lend additional inductive support to this generalization. On the other hand, we do know of a cause—a type of cause—that has demonstrated the power to produce functionally-specified information. That cause is intelligence or conscious rational deliberation. As the pioneering information theorist Henry Quastler once observed, “the creation of information is habitually associated with conscious activity.” And, of course, he was right. Whenever we find information—whether embedded in a radio signal, carved in a stone monument, written in a book or etched on a magnetic disc—and we trace it back to its source, invariably we come to mind, not merely a material process. Thus, the discovery of functionally specified, digitally encoded information along the spine of DNA, provides compelling positive evidence of the activity of a prior designing intelligence. This conclusion is not based upon what we don’t know. It is based upon what we do know from our uniform experience about the cause and effect structure of the world—specifically, what we know about what does, and does not, have the power to produce large amounts of specified information . . .
When you can cogently address such, you will rise above strawman tactics. But in fact, your strawman errors go beyond that. For, the focal issue addressed in the OP is not "old hat." It is specifically the role of active bridging information in the origin of FSCO/I, required to bridge seas of non-function to arrive at OOL and OOBP in the relevant configuration spaces.

Worse, this is a case of refusing to notice what is in front of you. The 6500 fishing reel is emblematic of literally trillions of cases of FSCO/I all around you, including in the PC you used to compose your dismissive comment and to read this one in reply. FSCO/I is real. The same phenomenon is exhibited in the digitally coded text s-t-r-i-n-g in your dismissive statement, which directly manifests the kind of specific functional organisation found in DNA, mRNA and onward in proteins and enzymes assembled through the FSCO/I rich process and systems in the ribosome. All, backed up by the further FSCO/I in the metabolic networks of the living cell. All of which are illustrated in the OP for this thread, and all of which you are ever so eager to sweep away by resorting to strawman tactics.

Worse, the FSCO/I involved in the embedded integrated von Neumann kinematic self replicating facility of the living cell also cries out for adequate causal explanation. And, in case you want to revert to the longstanding strawman tactic of dismissing Paley's watch argument as a flawed analogy, let me cite what Paley directly went on to say regarding self-replication in Ch 2 of his Nat Theol, his actual main watch argument . . . an argument I find suspiciously missing in far too many dismissals of what he had to say back in 1804. Let me clip 19 above (yes, you seem to be commenting dismissively without having first interacted seriously with the thread of discussion). Paley:
Suppose, in the next place [--> immediately following a short C1], that the person who found the watch should after some time discover that, in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself — the thing is conceivable; that it contained within it a mechanism, a system of parts — a mold, for instance, or a complex adjustment of lathes, baffles, and other tools — evidently and separately calculated for this purpose [==> update, vNSR, with tape [a bar of cams is a program, as was used in so many C18 automata], and constructor keyed in as an ADDITIONAL facility integrated with the main machine — of course, IIRC a full size clanking unit considered by NASA was many, many tons in scale] . . . . The first effect would be to increase his admiration of the contrivance, and his conviction of the consummate skill of the contriver. Whether he regarded the object of the contrivance, the distinct apparatus, the intricate, yet in many parts intelligible mechanism by which it was carried on, he would perceive in this new observation nothing but an additional reason for doing what he had already done — for referring the construction of the watch to design and to supreme art [--> notice, the impact of seeing ADDITIONAL FSCO/I] . . . . He would reflect, that though the watch before him were, in some sense, the maker of the watch, which, was fabricated in the course of its movements, yet it was in a very different sense from that in which a carpenter, for instance, is the maker of a chair — the author of its contrivance, the cause of the relation of its parts to their use.
In short, Paley anticipated Darwin et al by 50 years, and von Neumann by 150 years. The issue he put on the table cries out for adequate, empirically grounded causal explanation. And, going back to the focal issue in the OP, the only adequate, empirically grounded cause of the required FSCO/I -- emphatically not a dead or ignorable matter -- and for the required, active bridging information to navigate the seas of non-function to hit on deeply isolated islands of function, is intelligently directed configuration. AKA, design. KF

kairosfocus, May 4, 2015 at 02:27 AM PST
Z, 74 (attn EL & Mung): I quote:
Genotypes evolve by random mutation of letters and random recombination of snippets. If and when they form a word, they are added to the population. Like this: o, ox, box, fox, for, fore, fort. And so on.
This inadvertently illustrates the huge list of begged questions behind the darwinist tree of life, incrementalist icon, and its failure to cogently address the message that the FSCO/I required for OOL and OOBP must be adequately accounted for. Which of course, is the precise context of active, bridging information that crosses the gaps to and between islands of function, as is discussed and illustrated in the OP. The same OP, that has been assiduously snipped, wrenched into strawman caricatures such as the just cited, and sniped at without truly cogently facing the pivotal questions.

What has been done here is to take a simplistic -- thus strawmannish -- short skip-distance case, 7 bits apart, and then grossly extrapolate to cases of much higher degrees of complexity and skip distance. Such gross oversimplification yields a fallacy. Strawman caricature, per Nizkor:
The Straw Man fallacy is committed when a person simply ignores a person's actual position and substitutes a distorted, exaggerated or misrepresented version of that position. This sort of "reasoning" has the following pattern: Person A has position X. Person B presents position Y (which is a distorted version of X). Person B attacks position Y. Therefore X is false/incorrect/flawed. This sort of "reasoning" is fallacious because attacking a distorted version of a position simply does not constitute an attack on the position itself . . .
As a first answer, we can challenge you, Z, to provide an onward incrementalist functional at each step chance variation and selection based progress to something like:
The quick brown fox jumps over the lazy dog,
. . . with the underscored requisite that at each step the result is a functional sentence, created in reasonable time by known chance variation on the ASCII code. Say, known to be chance variation, as tracing to whitened Zener noise or sky noise. (Such are used in modern random number generators.) Of course, we already know the answer to a realistic monkeys-at-keyboards exercise. As Wikipedia notes on random sentence generation by such means:
The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare. In this context, "almost surely" is a mathematical term with a precise meaning, and the "monkey" is not an actual monkey, but a metaphor for an abstract device that produces an endless random sequence of letters and symbols. One of the earliest instances of the use of the "monkey metaphor" is that of French mathematician Émile Borel in 1913,[1] but the earliest instance may be even earlier. The relevance of the theorem is questionable—the probability of a universe full of monkeys typing a complete work such as Shakespeare's Hamlet is so tiny that the chance of it occurring during a period of time hundreds of thousands of orders of magnitude longer than the age of the universe is extremely low (but technically not zero) . . . . The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation. One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, "VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[24] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
That infinite monkeys theorem case [equivalent to the issue of needle in haystack search that you are hoping to dismiss without actually cogently addressing . . . ] has been pointed out here at UD many, many times, for instance at #4 in the now longstanding ID Foundations series, where the link between CSI and functional organisation was discussed. (FSCO/I was discussed at #5 with roots tracing to Orgel et al, and Borel's underlying analysis appears at #11, together with a heuristic for the Chi_500 expression seen in the OP.) As Wiki was clipped in #11:
In one of the forms in which probabilists now know this theorem, with its “dactylographic” [i.e., typewriting] monkeys (French: singes dactylographes; the French word singe covers both the monkeys and the apes), appeared in Émile Borel‘s 1913 article “Mécanique Statistique et Irréversibilité” (Statistical mechanics and irreversibility),[3] and in his book “Le Hasard” in 1914. His “monkeys” are not actual monkeys; rather, they are a metaphor for an imaginary way to produce a large, random sequence of letters. Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly. The physicist Arthur Eddington drew on Borel’s image further in The Nature of the Physical World (1928), writing:
If I let my fingers wander idly over the keys of a typewriter it might happen that my screed made an intelligible sentence. If an army of monkeys were strumming on typewriters they might write all the books in the British Museum. The chance of their doing so is decidedly more favourable than the chance of the molecules returning to one half of the vessel.[4]
These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.
In short, this is a discussion of fluctuations from the thermodynamic equilibrium that is to be expected with some fluctuation, but with such a dominance of the bulk cluster that deeply isolated special zones are empirically unobservable even on an experiment on the scale of the observed cosmos of 10^80 or thereabouts atoms and 10^17 s duration. To see the point, consider each of those atoms to be given a tray of 1,000 coins. Or equivalently, a paramagnetic substance with 1,000 atoms in a weak B-field with possible orientations, N up or N down. Flip and test every 10^-14 s.

As the OP discusses -- but which you, Z, plainly insist on ignoring -- the experiment maxes out at 10^111 samples, about 1 in 10^190 of the full config space W of 1.07*10^301. Where, in that wider space, we may find every possible ASCII code string of 143 characters. But, English sense-making strings or computer programs etc. will be so deeply isolated and will be so small a fraction of the possibilities that the bulk, predominant group of near 50-50 outcomes in no particular order, within several standard deviations of the central peak, will utterly dominate. Such is well known and/or readily accessible. Indeed, its logic is foundational to the molecular statistics analysis that undergirds the second law of thermodynamics, 2LOT. For over 100 years now.

So it is no wonder that we see something like picking up a scale of complexity that comes from a space of about 10^50, a factor of about 10^100 short of the 500 bit/72 ASCII character Sol system limit, and a factor of about 10^250 short of the observed cosmos limit. All this has been pointed out over and over, just studiously ignored and strawmannised, as above.

Likewise, just to get to a typical, 300 AA protein, we are dealing with DNA code of 900 bases, or 1,800 bits of basic info carrying capacity. For a simplistic first cell based life of 100 such proteins, that is 180,000 bits, or 90,000 bases. This shows the lower end of the range, 100,000 - 1 mn bases for the genome of a reasonable first cell based life. Remember, such would have to be a gated, encapsulated metabolic automaton with integrated, code using von Neumann self replicator.

I safely conclude, the FSCO/I required for such first life is not a plausible product of any cluster of actually observed spontaneously occurring processes to be found in a Darwin warm, salty lightning struck pond or any other seriously proposed OOL scenario. The only empirically, observationally warranted source of the active bridging information to get us to the first island of function, the tap root of Darwin's tree of life, is intelligently directed configuration. AKA, Design. Design, therefore, sits at the table from the root up, as of right, not grudging sufferance. (Which sufferance, thanks to ideological commitment to a priori evolutionary materialism, is scarce.)

Going on, a simple calculation for informational requisites for origin of body plans, OOBP, will readily come up with the range, 10 - 100+ mn bases. As I noted some time back in discussing OOBP in the IOSE:
. . . the sort of novel body plans observed in the Cambrian fossil life revolution reasonably required 10 – 100+ millions of functional four-state DNA bases. This is more than 100,000 times the 500 – 1,000 bit threshold at which the undirected search resources of the observed cosmos would be inadequate to carry out a credible search of the relevant configuration spaces. Some would doubt such a range, so let us do a fresh calculation: 50 new tissue types to make up the organs for a new body plan would easily take up probably 10 - 100 proteins [[including enzymes etc] per type, i.e. we are looking at 500 - 5,000 proteins as a reasonable/ conservative estimate — VERY conservative at the low end. 500 * 300 = 1.5*10^5 codons, or 4.5*10^5 bases, plus regulatory, let's say about 10% more, 1/2 mn bases. At the upper end, we would arrive at 4.5*10^6 bases. But this estimate is too low:

Arabidopsis thaliana [[a flowering plant]: 115,409,949 DNA bases
Anopheles gambiae [[a mosquito]: 278,244,063 bases
Sea urchin: 8.14 x 10^8 bases
Amphibians: 10^9 – 10^11 bases
Tetraodon nigroviridis (a pufferfish): 3.42 x 10^8 bases

In short, 10 – 100 million bases for a novel body plan is reasonable, even generous. And in any case the config space of 500 k bases is: 9.9*10^301,029 possibilities.
To cross the intervening sea of non-function to arrive at such deeply isolated islands of function points strongly to intelligently directed, active configuration as the source of relevant information and functional organisation. Especially, as design already sits at the table of candidate explanations of FSCO/I as of right.

But, it will be predictably asserted -- just see above -- that all that is needed is incremental chance variation and natural selection leading to descent with modification across a branching tree structure. This is tantamount to the assertion or implication that there is a vast continent of living forms, from first common ancestral microbes, to Mozart, molluscs and mango trees, etc. How is such observationally grounded? In the end, by gliding over the systematic pattern of missing intermediate forms across the fossil record, highlighting the ever-changing set of icons held to show what is otherwise concealed by the imperfections of the record and the like. After 250,000+ fossil species from all eras and across the world, with millions of samples in museums and billions seen in the ground, I don't buy that argument. The Cambrian fossil life revolution is emblematic of the actual dominant pattern of gaps at exactly the points where transitional forms should utterly dominate the record. And that has been so since Darwin's day.

More to the point, such incrementalism runs cross-grain to the known, natural logic of how FSCO/I works. Take the fishing reel in the OP as an example. We see that many correct parts must be properly oriented, aligned, arranged and coupled per a wiring diagram for interactive function to emerge. That extends to the petroleum refinery and ever so many other familiar cases. It also applies to the many cases in the living cell, as protein synthesis and wider metabolism show. This also extends to higher level organisation of complex, multicellular life forms.

In short, the FSCO/I required for OOBP implies a drastic limitation of acceptable arrangements of correct parts to achieve function, out of the space of possible clumped or scattered arrangements of the same parts. That obtains whether the scale is nm in the cell or mm to cm in a fishing reel. Put the parts of a 6500 C3 reel in a bait bucket and shake all you want. Unlike the simplistic case EL suggested of sorting stones, it is highly reliably predictable that a functional reel will not result. Because of the search space challenge highlighted in the OP but never cogently addressed above.

The only empirically warranted adequate cause of the FSCO/I required to explain OOL and OOBP across the tree of life from the root up, is intelligently directed configuration. AKA, design. KF

kairosfocus, May 4, 2015 at 01:41 AM PST
Upright BiPed:
Actually, they’ve both addressed them clearly. Dr Liddle has accepted that she cannot re-create the process from virtual “critters” and Brownian motion;
Elizabeth now admits that the initial population of virtual critters is in fact designed. Progress?

Mung, May 3, 2015 at 09:37 PM PST
Mung: That a ’0' in the genome represents a ‘T’ in the phenome is a design decision. That a ’1' in the genome represents an ‘H’ in the phenome is a design decision.
Net Research: I’ve never seen EL or Z even tangentially address these issues. These are unimportant and nonexistent in their minds.
Actually, they've both addressed them clearly. Dr Liddle has accepted that she cannot re-create the process from virtual "critters" and Brownian motion; and Zachriel hides behind an alternate process that does not accomplish what must be accomplished, and demands that everyone ignore that fact.

Upright BiPed, May 3, 2015 at 09:17 PM PST
How often have we seen this very thread before? I am not interested in fishing but even I realize that I've seen the Abu 6500 C3 reel before (according to Google it appears 42 times on this site). Just opening another thread will not bring the stillborn FSCO/I to life. Didn't you read what WE had to say about it? And what about Dembski, Meyer, Behe, Marks et al.? Do you think they even consider FSCO/I? FSCO/I just dead and never lived.

sparc, May 3, 2015 at 09:16 PM PST
NetResearchGuy:
EL/Z: I think Mung’s point is that known functional evolutionary algorithms start with a fixed set of alleles that are designed.
E/Z would probably disagree. They would argue that the content of any given allele is generated randomly and thus is not designed. But that would miss the point.

That a '0' in the genome represents a 'T' in the phenome is a design decision. That a '1' in the genome represents an 'H' in the phenome is a design decision. That 'T' and 'H' have relevance to a potential solution is a design decision. We can of course create in software a Genotype that has no correlation with the fitness function. The Darwinists need to answer the question, why is it that your genotypes are correlated to your fitness functions.

Mung, May 3, 2015 at 08:04 PM PST
EL/Z: I think Mung's point is that known functional evolutionary algorithms start with a fixed set of alleles that are designed. For example Weasel starts with letters, the antenna evolving algorithm starts with a working antenna made out of metal and a list of allowable mutations, the nozzle evolving algorithm starts with a working nozzle, etc. In other words, the initial genotype is designed to start on an island of function, and the allowable variations to the genotype are designed to remain on that island of function.

For the nozzle evolving example, let's say you started with a spherical piece of material, topologically lacking a hole for water to flow through. How would an evolutionary algorithm modify that into a starting point of a functional nozzle? The starting point is irreducibly complex: either it has a hole of the right shape to interface with a hose it needs to connect to, or it doesn't. How would an evolutionary algorithm create that irreducibly complex structure? Given infinite time it could stumble onto that initial island of function, but not finite time. Let's say you wanted the nozzle algorithm to generate a sprayer, with multiple holes instead of one. It couldn't do that, because the initial allowed range of variation in the genotype of the nozzle algorithm doesn't allow it to generate topological shapes with multiple holes.

To consider another dimension of the problem, let's say you gave your nozzle evolving program just the laws of physics (i.e. the physics of individual water molecules), and ran it using that. It would be too slow! Even the world's most powerful supercomputer would take too long to evaluate the fitness function at a molecular level for a single nozzle shape in a practical amount of time.

These types of issues are the point of ID. Evolution doesn't always work in every case -- it's quite easy to construct examples where evolution can't work, at least with finite resources. I.e. irreducible complexity, uncrossable maladaptive holes in the fitness landscape, insufficient time resources, etc. I've never seen EL or Z even tangentially address these issues. These are unimportant and nonexistent in their minds. As long as there is a non zero probability evolution could work, it doesn't matter how many zeroes there are in the probability exponent.

NetResearchGuy, May 3, 2015 at 06:50 PM PST
Zachriel: The notion of a fitness landscape entails that there is a defined relationship between the replicators and the landscape...

Isn't that what I have been saying all along? Isn't that what Elizabeth denies? How is this relationship defined in an EA? I'm guessing it is designed.

Mung, May 3, 2015 at 06:20 PM PST
Mung: You mean potential solution or candidate solution.
Zachriel: Typically, they're approximate solutions.

We're being pedantic, remember? Approximate: 1: located close together; 2: nearly correct or exact. So no, you lose again.

Let's recall that the initial population is randomly generated. That's because we don't want them located close together. And that is because we have no idea whether they will be a nearly correct or exact solution. All part of the design.

Mung, May 3, 2015 at 06:09 PM PST
Mung and mike1962- over on TSZ Elizabeth posted the following (parapsychology thread- Randi challenge):
Well, I don’t see any reason why psi effects can’t be investigated by normal scientific methodology.
Well, we don't see any reason why macroevolution can't be investigated by normal scientific methodology. :cool: Intelligent Design can be investigated by normal scientific methodology.

Joe, May 3, 2015 at 06:04 PM PST
Zachriel: The notion of a fitness landscape entails that there is a defined relationship between the replicators and the landscape, the ‘chemistry’ of the artificial world.
Then you agree with me, and not Elizabeth. In order for a replicator to have a relationship with it's environment it has to have certain properties that will necessarily be determinative of any future outcome. This is trivially obvious.
In fact, evolution only requires a fitness landscape that is positively ordered and not chaotic.
No, it also requires that the replicators have particular properties that allow it to successfully replicate in the environment.

mike1962, May 3, 2015 at 05:36 PM PST
Zachriel: That’s not generally a function of the initial population, but of the fitness landscape.
Give me an example with regard to software implemented replicator objects interacting with their environment.

mike1962, May 3, 2015 at 05:33 PM PST
mike1962: I claim it's both the properties of the replicators and the environment.

The notion of a fitness landscape entails that there is a defined relationship between the replicators and the landscape, the 'chemistry' of the artificial world.

Zachriel, May 3, 2015 at 05:30 PM PST
Mung: Zachriel in a more lucid moment: "However, specific search algorithms may do better on specific fitness landscapes." Who ever thought otherwise?

From the original post: "Conservation of information dictates any search technique will work, on average, as well as blind search. Success requires an assisted search." In fact, evolution only requires a fitness landscape that is positively ordered and not chaotic.

Zachriel, May 3, 2015 at 05:26 PM PST
Elizabeth Liddle: OK, accepted.
Here comes the "ubiquitous 'but'"...
But all those things have natural counterparts that do not require an intentional designer.
Whoa Nellie. Says who? Are you telling me you've got your head around the nature of the replicators of earth so well that you know that they didn't require a designer? By all means, do tell.
Sure, you can set up a system in which the results are highly constrained. But many systems exist in which the results are highly constrained, but we do not say: aha! It must have been designed.
Such as?
“Guided” as in “constrained by high granite cliffs” is not the same meaing of “guided” as “led by someone who knows the way and will take you to where she wants you to go”.
Granite cliffs are not replicators. I thought we were talking about how replicators, and how only the environment matters with regards to their evolution. You claim only the environment is determinative of the outcome. I claim it's both the properties of the replicators and the environment. This should be trivially obvious.
If all people mean by “guided evolution” is “evolution constained by the laws of physics and chemistry” then, sure, all evolution is “guided”.
No, that's not all we mean. We mean the entire bio process which includes the properties of the replicators. The properties of the replicators are determinative in any outcome. Not just the environment.
But that tells us nothing about whether a designer is involved, and I can certainly tell you that in computer evolution, once the thing is set up, you sit back and wait for the result. No Designer Intervention required.
Who said anything about intervention after the properties of the replicators are determined? That's another topic.
So if all IDers are saying is that a Designer must have been required to set up the evolutionary system that produced us, then, fine.
Fine? Well, that's a hell of a concession. But by "system" we mean the replicators themselves. Nor just the environment. At least, I do.
But in that case, stop beating up on poor old Darwin!
The problem with Darwin (and his faithful followers) is that you assume that the replicators have no engineered constraints that led to certain outcomes as they related to the environment. That's not demonstrable. Imaginations of the faithful notwithstanding.

mike1962, May 3, 2015 at 05:26 PM PST
Mung: You mean potential solution or candidate solution.

Typically, they're approximate solutions.

Mung: For example, what is the longest word that you allow for, and why?

No specified limit.

Mung: Do you allow your potential words to consist of non-word characters

If a mutation results in a sequence not found in the dictionary, it is still-born, that is, doesn't enter the population.

Mung: and if not why not?

Because that was the fitness landscape specified by the IDer.

Mung: And that would be by design.

The use of the genome to represent terms of an equation is the defined relationship between the genome and the fitness landscape, i.e. the 'chemistry'.

Mung: And now you're just repeating what I have been saying all along.

And what Darwin pointed out in 1859. Natural selection works on heritable traits (genotypes) that provide a difference in reproductive fitness (landscape).

mike1962: The nature of the initial population (the systems, processes and control information they contain) determines to some extent what kinds of variations are even possible for any putative selection to act on

That's not generally a function of the initial population, but of the fitness landscape.

Zachriel, May 3, 2015 at 05:23 PM PST
Elizabeth Liddle @ 129:
That’s all it is. And it works, as, logically, it must do.
Indeed. It's a tautology. Welcome to the dark side. :D

Mung, May 3, 2015 at 05:22 PM PST
Hi Elizabeth, ok, trying to get back on topic. :) Zachriel:
However, biological evolution is a specific ‘search algorithm’, not the universal set of search algorithms; and the natural environment is a specific ‘fitness landscape’, not the universal set of fitness landscapes.
Biological evolution is not a specific search algorithm and the natural environment is not a specific fitness landscape. Granted, that's Zachriel spouting their usual nonsense. Zachriel in a more lucid moment:
However, specific search algorithms may do better on specific fitness landscapes.
Who ever thought otherwise? NetResearchGuy:
Evolutionary algorithms in general have additional challenges. There is a lot of fine tuning required to make one work.
Mung:
Indeed. They must be carefully designed.
That sets the general context for the discussion which followed.

Mung, May 3, 2015 at 05:15 PM PST
Elizabeth Liddle:
So if all IDers are saying is that a Designer must have been required to set up the evolutionary system that produced us, then, fine. But in that case, stop beating up on poor old Darwin!
But Darwin was misguided and his conclusions were false. His intent was to absolve the designer of responsibility. If the designer designed the Darwinian process then Darwin failed. There is no design without a designer. Losers get beat, by definition.

Mung, May 3, 2015 at 04:56 PM PST
No doubt, Mung, no doubt. Metaphors can be very misleading. Best thing is simply to describe it directly.

My best description of evolution theory is: in a population of self-replicators that reproduce with heritable reproductive success, those features that best promote reproductive success in the current environment will tend to become more prevalent. That's all it is. And it works, as, logically, it must do.

Elizabeth Liddle, May 3, 2015 at 04:53 PM PST
Elizabeth Liddle:
Yes, precisely. And they [candidate solutions] are generated by replication with random variation
No one ever said otherwise. Unless I've misunderstood you [a not insignificant possibility], the replication mechanism is designed, and you admit this. What about the "random variation" mechanism? Also designed? In your NS-can-generate-CSI program did you design the mutation (random variation) mechanism? [Say yes. Save us all the trouble. Thank you.]

Mung, May 3, 2015 at 04:51 PM PST
Blind is precisely what it is – it cannot “see” beyond the current generation.
I can't see beyond my immediate surroundings, and I can't see through walls, but I'm not blind. Some people can't see what is right in front of their face, but they are blind. Perhaps it's time the evolutionists came up with a better metaphor.

Mung, May 3, 2015 at 04:39 PM PST
Selection vs elimination: From "What Evolution Is", Ernst Mayr, page 117:
What Darwin called natural selection is actually a process of elimination.
Page 118:
Do selection and elimination differ in their evolutionary consequences? This question never seems to have been raised in the evolutionary literature. A process of selection would have a concrete objective, the determination of the "best" or "fittest" phenotype. Only a relatively few individuals in a given generation would qualify and survive the selection procedure. That small sample would be able to preserve only a small amount of the whole variance of the parent population. Such survival selection would be highly restrained.
By contrast, mere elimination of the less fit might permit the survival of a rather large number of individuals because they have no obvious deficiencies in fitness. Such a large sample would provide, for instance, the needed material for the exercise of sexual selection. This also explains why survival is so uneven from season to season. The percentage of the less fit would depend on the severity of each year’s environmental conditions. (bold added)
Joe, May 3, 2015 at 04:37 PM PST
You see, mike1962 gets it.

In Elizabeth's [in]famous NS-can-generate-CSI program she chose a genome size of 500 "bases" and populated each "base" with a zero or a one for each member of the initial population, with the size of the initial population set at 100 individuals.

Why 100 individuals? Why 500 "bases"? Why zero and one? Why not different numbers? Why not letters? Why not url's to interesting travel sites on the internet? Why did she select the particular design she used if in fact any population of self-replicators would have worked? The answer is glaringly obvious.

And it seems Elizabeth has selective amnesia. Her program didn't work, at first. She had to go back and make some design tweaks. Right Elizabeth? It's ok. You're human. Fallible. The end product of an infallible process.

Mung, May 3, 2015 at 04:34 PM PST
Elizabeth:
But that tells us nothing about whether a designer is involved, and I can certainly tell you that in computer evolution, once the thing is set up, you sit back and wait for the result. No Designer Intervention required.
Umm, a designer is involved if one set the whole thing up and provided life with the programming required to help us adapt. We don't know how intense that initial set up had to be. It could very well be that a special creation-type start is required. Darwinian-type evolution would work well with that, and it would explain the extinction rate.

Joe, May 3, 2015 at 04:21 PM PST