Uncommon Descent Serving The Intelligent Design Community

The TSZ and Jerad Thread, III — 900+ and almost 800 comments in, needing a new thread . . .


Okay, the thread of discussion needs to pick up from here on.

To motivate discussion, let me clip here comment no 795 in the continuation thread, which I have marked up:

_________

>> 795Jerad October 23, 2012 at 1:18 am

KF (783):

At this point, with all due respect, you look like someone making stuff up to fit your predetermined conclusion.

I know you think so.

[a –> Jerad, I will pause to mark up. I would further with all due respect suggest that I have some warrant for my remark, especially given how glaringly you mishandled the design inference framework in your remark I responded to earlier.]

{Let me add a diagram of the per aspect explanatory filter, using the more elaborated form this time}

The ID Inference Explanatory Filter. Note in particular the sequence of decision nodes

 

You have certainly seen the per aspect design filter and know that the first default explanation is that something is caused by law of necessity, for good reason; that is the bulk of the cosmos. You know similarly that highly contingent outcomes have two empirically warranted causal sources: chance and choice.

You know full well that the reason chance is the default is to give the plain benefit of the doubt to chance, even at the expense of false negatives.

I suppose. Again, I don’t think of it like that. I take each case and consider its context before deciding what the most likely explanation is.

[b –> You have already had adequate summary on how scientific investigations evaluate items we cannot directly observe based on traces and causal patterns and signs we can directly establish as reliable, and comparison. This is the exact procedure used in design inference, a pattern that famously traces to Newton’s uniformity principle of reasoning in science.]

I think SETI signals are a good example of really having no idea what’s being looked at.

[c –> There are no, zip, zilch, nada, SETI signals of consequence. And certainly no coded messages. But it is beyond dispute that if such a signal were received, it would be taken very seriously indeed. In the case of dFSCI, we are examining patterns relevant to coded signals. And, we have a highly relevant case in point in the living cell, which points to the origin of life. Which of course is an area that has been highlighted as pivotal on the whole issue of origins, but which is one where you have determined not to tread any more than you have to.]

I suppose, in that case, they do go through something like your steps . . . first thing: seeing if the new signal is similar to known and explained stuff.

[d –> If you take off materialist blinkers for the moment and look at what the design filter does, you will see that it is saying, what is it that we are doing in an empirically based, scientific explanation, and how does this relate to the empirical fact that design exists and affects the world leaving evident traces? We see that the first thing that is looked for is natural regularities, tracing to laws of mechanical necessity. Second — and my home discipline pioneered in this in C19 — we look at stochastically distributed patterns of behaviour that credibly trace to chance processes. Then it asks, what happens if we look for distinguishing characteristics of the other cause of high contingency, design? And in so doing, we see that there are indeed empirically reliable signs of design, which have considerable relevance to how we look at among other things, origins. But more broadly, it grounds the intuition that there are markers of design as opposed to chance.]

And you know the stringency of the criterion of specificity (especially functional) JOINED TO complexity beyond 500 or 1,000 bits worth, as a pivot to show cases where the only reasonable, empirically warranted explanation is design.

I still think you’re calling design too early.

[e –> Give a false positive, or show warrant for the dismissal. Remember, just on the solar system scope, we are talking about a result that identifies that by using the entire resources of the solar system for its typically estimated lifespan to date, we could only sample something like 1 straw from a cubical haystack 1,000 light years across. If you think that the sampling theory result that a small but significant random sample will typically capture the bulk of a distribution is unsound, kindly show us why, and how that affects sampling theory in light of the issue of fluctuations. Failing that, I have every epistemic right to suggest that what we are seeing instead is your a priori commitment to not infer design peeking through.]

And, to be honest, the only things I’ve seen the design community call design on is DNA and, in a very different way, the cosmos.

[f –> Not so. What happens is that design is most contentious on these, but in fact the design inference is used all the time in all sorts of fields, often on an intuitive or semi intuitive basis. As just one example, consider how fires are explained as arson vs accident. Similarly, how a particular effect in our bodies is explained as a signature of drug intervention vs chance behaviour or natural mechanism. And of course there is the whole world of hypothesis testing by examining whether we are in the bulk or the far skirt and whether it is reasonable to expect such on the particularities of the situation.]

The real problem, with all respect, as already highlighted, is obviously that this filter will point out cell based life as designed. Which you do not wish to accept, even though you do not have an empirically well warranted alternative causal explanation.

I don’t think you’ve made the case yet.

[f –> On the evidence it is plain that there is a controlling a priori commitment at work, so the case will never be perceived as made, as there will always be a selectively hyperskeptical objection that demands an increment of warrant that is, by calculation or by unreflective assertion, unreasonable to demand by comparison with essentially similar situations. Notice how ever so many swallow a timeline model of the past without batting an eye, but strain at a design inference that is much more empirically reliable on the causal patterns and signs that we have. That’s a case of straining at a gnat while swallowing a camel.]

I don’t think the design inference has been rigorously established as an objective measure.

[g –> Dismissive assertion, in a context where “rigorous” is often a signature of selective hyperskepticism at work, cf. the above. The inference on algorithmic digital code that has been the subject of Nobel Prize awards should be plain enough.]

I think you’ve decided that only intelligence can create stuff like DNA.

[h –> Rubbish, and I do not appreciate your putting words in my mouth or thoughts in my head that do not belong there, to justify a turnabout assertion. You know, or full well should know, that — as is true for any significant science — a single well documented case of FSCO/I reliably coming about by blind chance and/or mechanical necessity would suffice to break the empirical reliability of the inference that the only observed — billions of cases — cause of FSCO/I is design. That you are objecting by projecting question-begging (that is exactly what your assertion means) instead of putting forth clear counter-examples is strong evidence in itself that the observation is quite correct. That observation is backed by the needle in the haystack analysis that shows why, beyond a certain level of complexity joined to the sort of specificity that makes relevant cases come from narrow zones T in large config spaces W, it is utterly unlikely to observe cases E from T based on blind chance and mechanical necessity.]

I haven’t seen any objective way to determine that except to say: it’s over so many bits long so it’s designed.

[i –> Strawman caricature. You know better, a lot better. You full well know that we are looking at complexity AND specificity that confines us to narrow zones T in wide spaces of possibilities W, such that the atomic resources of our solar system or the observed cosmos will be swamped by the amount of haystack to be searched. You have been given the reasoning on sampling theory as to why blind samples comparable to 1 straw from a hay bale 1,000 light years across (as thick as our galaxy) will reliably only pick up the bulk, even if the haystack were superposed on our galaxy near earth. Indeed, just above you had opportunity to see a concrete example of a text string in English and how easily it passes the specificity-complexity criterion.]

And I just don’t think that’s good enough.

[j –> Knocking over a strawman. Kindly, deal with the real issue that has been put to you over and over, in more than adequate details.]

But that inference is based on what we do know, the reliable cause of FSCO/I and the related needle in the haystack analysis. (As was just shown for a concrete case.)

But you don’t know that there was an intelligence around when one needed to be around which means you’re assuming a cause.

[k –> Really! You have repeatedly been advised that we are addressing inference on empirically reliable sign per patterns we investigate in the present. Surely, that we see that reliably, where there is a sign, we have confirmed the presence of the associated cause, is an empirical base of fact that shows something that is at least a good candidate for being a uniform pattern. We back it up with an analysis that shows on well accepted and uncontroversial statistical principles, why this is so. Then we look at cases where we see traces from the past that are comparable to the signs we just confirmed to be reliable indices. Such signs, to any reasonable person not ideologically committed to a contrary position, will count as evidence of similar causes acting in the past. But more tellingly, we can point to other cases such as the reconstructed timeline of the earth’s past where on much weaker correlations between effects and putative causes, those who object to the design inference make highly confident conclusions about the past and in so doing, even go so far as to present them as though they were indisputable facts. The inconsistency is glaringly obvious, save to the true believers in the evo mat scheme.]

And you’re not addressing all the evidence which points to universal common descent with modification.

[l –> I have started from the evidence at the root of the tree of life and find that there is no credible reason to infer that chemistry and physics in some still warm pond or the like will assemble, at once or incrementally, a gated, encapsulated, metabolising entity using a von Neumann, code based self replicator, based on highly endothermic and information rich macromolecules. So, I see there is no root to the alleged tree of life, on Darwinist premises. I look at the dFSCI in the living cell, a trace from the past, note that it is a case of FSCO/I, and on the pattern of causal investigations and inductions already outlined I see I have excellent reason to conclude that the living cell is a work of skilled ART, not blind chance and mechanical necessity. Thereafter, any evidence of common descent or the like is to be viewed in that pivotal light. And I find that common design rather than descent is superior: the systematic pattern of — too often papered over — islands of molecular function (try protein fold domains), the suddenness and stasis in the 1/4 million plus fossil species, the scope of fresh FSCO/I involved in novel body plans, plus mosaic animals etc. that point to libraries of reusable parts, and more, give me high confidence that I am seeing a pattern of common design rather than common descent. This is reinforced when I see that ideological a prioris are heavily involved in forcing the Darwinist blind watchmaker thesis model of the past.]

We’re going around in circles here.

[m –> On the contrary, what is coming out loud and clear is the ideological a priori that drives circularity in the evolutionary materialist reconstruction of the deep past of origins. KF]>>

___________
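The sampling claim in the marked-up comment above (that a small blind sample reliably picks up the bulk of a distribution, not its far skirts) can be illustrated with a minimal simulation. This is an editorial sketch, not anyone's published code; it assumes a standard normal population and takes "the bulk" to mean within two standard deviations:

```python
import random

random.seed(1)  # make the run reproducible

# Draw a modest blind sample from a standard normal population.
sample = [random.gauss(0, 1) for _ in range(1000)]

# Count how many draws land in the "bulk" (within 2 standard deviations).
in_bulk = sum(1 for x in sample if abs(x) < 2)

print(in_bulk / len(sample))  # typically around 0.95
```

The far skirts beyond two sigma hold only about 5% of such a distribution, which is why a small blind sample will almost always report the bulk.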

GP at 796, and following, is also a good pick-up point:

__________

>>796

  1. Joe:

    If a string for which we have correctly assessed dFSCI is proved to have historically emerged without any design intervention, that would be a false positive. dFSCI would have been correctly assessed, but it would not correspond empirically to a design origin.

    It is important to note that no such example is empirically known. That’s why we say that dFSCI has 100% specificity as an indicator of design.

    If a few examples of that kind were found, the specificity of the tool would be lower. We could still keep some use for it, but I admit that its relevance for a design inference in such a fundamental issue as the interpretation of biological information would be heavily compromised.

  2. If you received an electromagnetic burst from space that occurred at precisely equal intervals and kept to sidereal time would that be a candidate for SCI?

  3. Are homing beacons SCI?

  4. Jerad:

    As you should know, the first default is to look for mechanical necessity. The neutron star model of pulsars suffices to explain what we see.

    Homing beacons come in networks — here I have in mind DECCA, LORAN and the like, up to today’s GPS — and are highly complex nodes. They are parts of communication networks with highly complex and functionally specific communication systems, where encoders, modulators, transmitters, receivers, demodulators and decoders have to be precisely and exactly matched.

    Just take an antenna tower if you don’t want to look at anything more complex.

    KF>>

__________

I am fairly sure that this discussion, now in excess of 1,500 comments, lets us all see what is really going on in the debate over the design inference. END

Comments
I've read this twice and can't make any sense of it. It seems the rule is kairosfocus can argue ad hominem regarding Dawkins but can then claim foul when I point it out. Alice in Wonderland.

Alan Fox
December 14, 2012 at 12:30 AM PDT
Another paper where research belies the idea that protein function is rare in unknown protein sequences. H/T Allan Miller.

Alan Fox
December 14, 2012 at 12:25 AM PDT
Now this is hilarious. petrushka wants to know what intelligent selection can do. Where to begin? Toronto wants to know how intelligent designers can predict the future. I kid you not! This is the new argument against ID.

Mung
December 14, 2012 at 12:14 AM PDT
Yes. I tried to point this out over at TSZ. First fold (maybe), then function (maybe).

Mung
December 13, 2012 at 10:42 PM PDT
PS: Proteins, of course, are string based structures and it seems, strongly, that fold domains are deeply isolated. They are assembled in living systems based on DNA and mRNA, using ribosomes and several dozen helper molecules. We are already seeing a multipart system, with many parts that have to be arranged and interfaced just so, simply to get to the AA strings that make proteins. Then, we have the problem that not just any old AA string will fold correctly. And, last we checked, there are thousands of fold domains that seem to have no semblance of incremental bridges between them, and we have not even got to bio-function and key-lock fitting yet. We are just at the issue of getting the strings to fold and work as a 3-d object. Not to mention, it turns out there is the prion problem, where it has been discovered that there are ways to fold that are energetically advantageous relative to the ways that do biological jobs, and that can trigger a cascade of mis-folded proteins, hence mad cow disease and apparently Alzheimer's too. Things are getting more and more complex all the time! And, we have not got to a living cell yet, or to the requisites of von Neumann self-replicating, metabolising automata [what a living cell does, using proteins as the workhorse molecules], much less a complex organism's body plan. See the island problems piling up?

kairosfocus
December 13, 2012 at 10:38 PM PDT
Mung (Attn, KS): Islands of function is a no-brainer for anyone who has had to design, build or fix something made up from multiple, well-matched parts that have to be properly interfaced to work properly. The multiple parts and the sea of possible configs mean that we have an exponentially growing space of ways that parts can be scattered, gathered, set up together.

That brings us to a threshold of complexity issue: once we have 500 bits worth of complexity to describe the ways something's components can be scattered, gathered, and arranged, then we know that the atomic resources of the solar system -- our practical universe for atomic interactions -- cannot take so much as a 1-straw sized sample of a cubical haystack 1,000 LY across, about as thick as our galaxy. If the config of parts is taken at random, blindly, then we have no reason to prefer isolated and definable specific clusters. In short, there are many more ways to get things than there are to get them to work. The isolation of islands of function follows. For specific configs that function in observable ways will come from a much larger field of possibilities, and will tend to come in clusters [i.e. there is such a thing as tolerances, as anyone who has had to design a real system will know]. Now, different clusters of configs of parts may work, but once there is a situation where islands are localised, island hopping without intelligent navigation becomes a major problem. For, in general, there is no good reason to imagine that there is a smooth, incremental bridge between islands, much less that there is a way to get to a highly complex object from a simple or arbitrary initial config.

As an example, ASCII alphanumerical text strings are such cases, where the 128 possible characters can be clustered in any number of ways. Just 72 or so such characters will take up 500 bits, and the space of possibilities will include every possible string of such characters, i.e. 128 x 128 x 128 . . . x 128, 72 times over, or 128^72. The vast majority of these will be garbage, and relatively few will be functional as English text. (Think about the TYPICAL output of a random string generator, and think about why it is that random text generators have so far only been able to get to about 24 letters in a meaningful English string.) Now, here is a simple case: See Spot run. Try to find an incremental, random walk driven path from this to, say, the text of this post, where every step of the way there has to be a meaningful, readable English text. That just is not going to happen, and that is for a simple case with a particularly simple structure, a string. (Of course, WLOG, other more complex structures can be described in structured strings, but already we are seeing more and more constraints coming to bear, and an elaborate system that allows us to encode. This, for instance, is how AutoCAD etc. work.)

What the genetic algorithm game and related exercises do is start within islands of function and look at incremental changes within such. Of course, we can move around within an island. That's not the problem -- microevo or adaptation or even drift. The real problem is, de novo, to get to such islands from an arbitrary beginning. And in that context, the living cell, just looking at its genome, is going to start at 100 - 1,000 k bits worth of info, where for every incremental bit of complexity the space of possibilities DOUBLES. The problem is not probability calcs; it is searching for a needle in a haystack, on steroids, starting from an arbitrary initial config. Where of course the Darwin warm little pond with salts etc. or the like is a useful start point. Which is also why OOL is a crucial test. KF

kairosfocus
December 13, 2012 at 10:28 PM PDT
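As an editorial aside, the arithmetic behind the 72-character figure above can be checked in a few lines of Python. This is a sketch; it assumes 7 bits of information per ASCII character, since 2^7 = 128 possible symbols:

```python
import math

bits_per_char = 7   # 128 possible ASCII symbols = 2**7
chars = 72

# 72 characters carry 504 bits, just past the 500-bit threshold.
total_bits = bits_per_char * chars
print(total_bits)   # 504

# The configuration space 128**72 equals 2**504; its order of magnitude:
print(round(math.log10(128 ** chars)))  # ~152, i.e. roughly 10**152 possible strings
```

The count matches the comment's claim: describing the arrangement of 72 such characters crosses the 500-bit mark, and each added bit doubles the space.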
Mung on December 14, 2012 at 3:12 am said: So let's get one thing straight. I didn't ask keiths or anyone else to start this thread. I may or may not choose to participate in it. If I do decide to participate, there will be one rule and one rule only which will guarantee my continued participation. No censorship! If you don't want your members presented in all their glory for the world to see, block them from this thread now! Banning them from the site might be better.

First, it's highly questionable that this thread was started out of any honorable "tradition" regarding how to treat people from UD. But leave that for another post.

Second, I don't need a forum to present my views on ID or my case for ID. I can post anytime I want at Uncommon Descent, an ID-friendly site. No doubt there are numerous other places I could go, friendly and hostile alike, to post as well.

Third, I am pointing out the misrepresentations of ID in another thread. You want me to repeat myself here? We could devote an entire thread to that one topic alone, and I could restrict myself just to content right here at TSZ. Frankly, that's what I ought to do to expose this charade for what it is.

Fourth, I am presenting arguments for ID in another thread (or I was trying to), so why the need for this one? Is the other thread too focused on a single topic? Probably.

Fifth, and this is what I think is the real reason for this thread: this thread was begun so that people here could have more opportunities to ridicule me and my beliefs. It appears some members were sorely disappointed when I refused to allow them to draw me off-topic in other threads. They just weren't finding enough to attack in the other threads. The sharks smell blood, but no meat in the water. Too bad.

Take it away, Mung. I haven't even begun my opening argument. Are you sure?

Mung
December 13, 2012 at 7:15 PM PDT
Grr... Why Mung is an ID supporter

Mung
December 13, 2012 at 6:34 PM PDT
Adventures in TSZ

So over at TSZ keiths started up a thread to discuss 'islands of function.' It starts off full of the standard lies and misrepresentations that we've come to expect from keiths, and a complete lack of any supporting material. petrushka wanted to discuss a paper (linked above in my post to gpuccio) on some tests performed with a specific protein involving changes to amino acid residues and testing for effect. So I started posting some material from the paper, only to have keiths inform me that the paper has nothing to do with 'fitness landscapes,' seemingly forgetting he was the one that raised the whole question of islands of function. He appears to think that when IDers say 'islands of function' they can't possibly mean things like protein domains. That they can only be referring to 'fitness landscapes.' Then he accused me of posting material from the paper that sounded 'islandy' in order to make my case. (So far I had only presented material from the first paragraph, lol.) Yes, this is TRUE, people. That sounds "islandy." Not allowed!

There was so much Guano thrown in my direction that Neil Rickert messed up the thread trying to shovel it all, lol. Thanks, Neil, for trying. I refused to be drawn off topic into discussions about THE DESIGNER or other ID arguments. What's the point when they can't even get this one right? So keiths starts a whole thread just for me! Whee! http://theskepticalzone.com/wp/?p=1480 I'm thinking about a response. It will probably get deleted, lol. But really, shouldn't they be careful that they might get what they asked for? I sure didn't ask keiths to create the thread.

Mung
December 13, 2012 at 6:32 PM PDT
AF: I too have my own life, but on fair comment, when you put on the table polarising false accusations, you have imposed on yourself a burden of responsibility that would not otherwise obtain. Kindly, address the issues on the table (remembering your false accusation of "dishonesty" that needs to be resolved in light of the onward presented issues on the quality of Mr Dawkins' argument, to return to a reasonable context of discussion), as I have answered specifically to OOL and OOBPs issues in the book you advanced, by Dawkins. (I note that he understands quite well the pivotal significance of OOL; that is why he tried to make the best face he could of the RNA world hyp. Not a very good one in the end.) KF

PS: If you want my views on timelines, examine the long since publicly posted page here, which starts with a case study on origins science done right, astrophysics and cosmology. My views are irrelevant; the issue -- as always -- is what is warranted, to what degree and on what grounds, with what limitations. But that is not necessary for our purposes; the conventional timelines will do nicely, whatever their limitations. The design issues that are pivotal are independent of timelines, whether for H-ball stellar models or suggested ages for the solar system based on meteorite fragments, or proposed geological timelines tracing in the end to the deposition rate models and various adjustments.

kairosfocus
December 13, 2012 at 4:09 PM PDT
And keiths continues to lie:

1a. Unguided evolution is far better than ID at explaining the evidence of the objective nested hierarchy.

Linnean taxonomy, ie the objective nested hierarchy, is based on a common design and has absolutely nothing to do with unguided evolution.

2a. The Designer is an unknown being with unknown abilities, unknown limitations, and unknown goals. ID therefore predicts nothing, and can be fitted to any set of facts about life by simply saying “that’s how the Designer did it.”

So forensics and archaeology predict nothing? Their designers are unknown with unknown abilities, unknown limitations and unknown goals. Yet they can and do determine design from nature, operating freely. That said, unguided evolution has known abilities and they just are NOT up to the task at hand.

3a. To bring ID into alignment with the biological evidence, you have to make a bunch of assumptions about how the Designer operates.

Nope. We just need to do what all other design-centric venues do: eliminate necessity and chance AND observe some specification. So what we have is keiths, pathological liar and loser, spewing his misrepresentations as if they mean something.

Joe
December 13, 2012 at 4:02 PM PDT
Alan Fox:

Myself, I think the evidence all points to an age of the Earth of around four and a half billion years.

What evidence? A handful of scientists saying so?

Joe
December 13, 2012 at 3:53 PM PDT
Alan Fox:

I come here in the vain hope of seeing a clear exposition of an ID hypothesis that doesn’t take the form of a default argument (Evolution is like a language, so ID wins, for example).

1- It isn't clear that you understand the word "default".
2- You really need to focus on YOUR position, because if you could produce positive evidence for it then you wouldn't need to worry about ID. We cannot say a designer is required once you have demonstrated necessity and chance are all that is needed.
3- ID only wins once necessity and chance have been eliminated AND some specification (eg function and/or meaning) is observed.

Joe
December 13, 2012 at 3:52 PM PDT
PS @ kairosfocus, Just out of curiosity, Mr. M., could I ask you how old you think the Earth is? Myself, I think the evidence all points to an age of the Earth of around four and a half billion years.

Alan Fox
December 13, 2012 at 3:14 PM PDT
KF upthread:

For a couple of days now, I have been waiting for AF to address the inconvenient fact that not only does Mr [Dr!] Dawkins over-claim the powers of scientific theorizing on deep past of origins...

AF has been missing in action in this thread for a few days now...

Sorry to disappoint you, Mr M., but I have other things to do besides composing comments for Uncommon Descent. Also, I am not defending or promoting OoL hypotheses. I support Robert Shapiro's view on the matter, especially with regard to space exploration (see his "Planetary Dreams"). I come here in the vain hope of seeing a clear exposition of an ID hypothesis that doesn't take the form of a default argument (Evolution is like a language, so ID wins, for example).

Alan Fox
December 13, 2012 at 3:03 PM PDT
gpuccio:

I will not answer your “argument” about the Rain Fairy. I find it simply stupid, with all respect.

Good for you. This is a common practice keiths uses (when he's not just flat out lying). He makes up some hypothetical scenario out of pure imagination that has no demonstrable relevance to the subject at hand and claims it trumps all evidence, facts, logic and reasoning. Need to prove evolution can happen? Imagine a "fitness landscape" of trillions and trillions of dimensions and say, there, see? No problem for evolution, because if one path is closed off, billions and billions of wormholes between dimensions make possible what was not otherwise possible. What a crock. And he's still misrepresenting your actual argument from dFSCI every chance he can. No integrity whatsoever.

Mung
December 13, 2012 at 2:34 PM PDT
GP: Is this the "Rain Fairy" argument:
Keiths: First of all, you haven’t given any independent justification for your assumption. A designer (and especially a Designer) doesn’t have to work through common descent, and he doesn’t have to reuse what already exists. Your only reason for assuming that he does these things is that you are trying to force-fit your theory to the existing evidence. It’s the same error made by an advocate for the Rain Fairy hypothesis who assumes that the Rain Fairy always acts in ways that match the weather we are actually observing.
Of course the first problem is that we are dealing with a major sophomoric overestimation problem in dealing with the new atheists (as well as an associated sociocultural agenda), cf. here. In particular, they are always setting up and knocking over strawmen. Now, cf. OP above, where there is an illustration of the design detection filter. You will notice that things explicable under mechanical necessity and/or chance process -- and meteorological events such as precipitation fit here as any weather man can tell -- will be defaulted to necessity and chance. Of course an agent may imitate such, the EF will not detect that. It was not designed for that and we happily accept such false negatives as a small price to pay for what the EF does target. That is credibly high reliability when it does rule design. (And, on the empirically observable sign FSCO/I in its various forms including dFSCI, it is abundantly confirmed to be reliable at that empirically with billions of cases in point. That is, there are no credible false positives for FSCO/I beyond 500 - 1,000 bits, which gives the needle in the haystack threshold for the solar system tot he observed cosmos scale, on the number of atoms, how fast things happen at that scale and reasonable timelines for the age of these.) So, while KS et al do not wish to accept it, we have a highly reliable sign that is best explained on the known and observed causal factor, design. Instead of addressing this, they have erected a strawman and have knocked it over. In this case with a bit of ridicule tossed into the mix. As is so sadly typical. It so happens that life forms are chock full of FSCO/I and in particular, dFSCI, as in digital code and the associated organised machinery that processes digital code. It is obviously not unreasonable to see that such code is a sign that points to design. Next, we see a pattern of a basic DNA code with some variants, across the world of life. 
This too is quite familiar, if you have ever had to deal with computer languages applied to diverse circumstances. So, code reuse and language reuse with variations is a known phenomenon associated with agent action. (As a matter of fact, just look at the pattern with C, C++ and Java, and the variants on Java. In its day BASIC was notorious for its variant forms, too. And there are any number of "basic-like" languages that were set up.) Now, the next thing is that a strawman demand is set up that a designer of life, to be acceptable to KS et al, must start from scratch every time, instead of using and even modifying a code base. Sorry, reuse and modularity are actually markers of good design praxis, and making mods to suit circumstances is reasonable too. So the whole objection is patently artificial and selectively hyperskeptical, flying in the teeth of common good sense and easily observed design praxis. KF

kairosfocus
December 13, 2012, 12:35 PM PDT
Keiths: I will not answer your "argument" about the Rain Fairy. I find it simply stupid, with all respect.

gpuccio
December 13, 2012, 09:04 AM PDT
Keiths: Those statements contradict each other. Which do you affirm, and which do you retract?

My statement was: "The correct concept is as follows: It is completely wrong to model NS using IS, because they have different form and power." That is absolutely true for all the GAs you guys propose, which are based on IS and in no way try to realistically model NS. That is absolutely true for Joe Felsenstein's "argument" about "NS".

My point is very simple: IS is IS. It has different forms and powers, and they do not correspond to what NS can really do. In IS, there is always an element of intelligent choice. Only an implementation of NS in some true context, either computational or biological, can really tell us if NS can generate dFSCI.

On the other hand, as you guys insist on possible "models" of NS, I have said that the only model which could tell us some very trivial things about NS would be a model where IS correctly tries to mimic the form and power of NS as observed in some true natural context. That would be a model, but it would not answer the question of what NS can really do. It would only model what happens after NS has occurred, with the form and power we assume for it. IOWs, it would only model how some selection, once attained, can give some results.

As I have said many times, that would be acceptable, if the form and power attributed to NS are realistic (truly observed in some natural context). But it would be completely trivial and uninteresting. It could model how some simple microevolutionary event can propagate in a population (something we already know), but it could never be used to model more complex events, because we have no natural example where NS generates more complex events. That's why I have said that the only model for NS in the generation of complex events, at present, would be to attribute no role to it, a statement that evoked the fiery remonstrations of your lot, but which remains perfectly true. That is all. It is simple and true.
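For concreteness, the "intelligent selection" (IS) form described above can be sketched in a few lines of Python. This is a hypothetical illustration, not anyone's actual model: the target string, alphabet, population sizes and mutation scheme are all arbitrary choices made by the programmer, which is exactly the point at issue. Whether any such construction tells us anything about NS is the question in dispute; the sketch only makes the role of the explicit, designer-supplied fitness function visible.

```python
import random

# Minimal sketch of IS: the programmer supplies both the target and the
# fitness measure, and survivors are chosen by explicit comparison against
# that measure. All parameters here are arbitrary illustrative choices.
TARGET = "METHINKS"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s: str) -> int:
    # Designer-supplied measure: count of characters matching the chosen target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rng: random.Random) -> str:
    # Replace one randomly chosen character with a random letter.
    i = rng.randrange(len(s))
    return s[:i] + rng.choice(ALPHABET) + s[i + 1:]

rng = random.Random(1)
pop = ["".join(rng.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for _ in range(200):
    survivors = sorted(pop, key=fitness, reverse=True)[:10]  # explicit choice
    pop = survivors + [mutate(rng.choice(survivors), rng) for _ in range(40)]
best = max(pop, key=fitness)
print(best, fitness(best))
```

Note that the selection step compares candidates against a goal the programmer chose in advance; nothing in the loop itself models autonomous replication.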
You want to consider it contradictory, be my guest. You keep your strange ideas about contradictions and circularities. I will keep mine.

gpuccio
December 13, 2012, 09:02 AM PDT
As 1012 above shows, when Mr Dawkins does so, lo and behold, his case is nowhere near as substantial as he has suggested. That is, (i) by addressing OOL, Dawkins implies its relevance to the overall issue of the evo mat account of origins of life and its forms, but (ii) the root of the Darwinist tree of life is conspicuously absent and empirically unsupported. So, we are entitled to conclude already that (iii) Mr Dawkins’ claims are seriously exaggerated.
You mean we have actual reasons to be skeptical?

Mung
December 13, 2012, 07:32 AM PDT
In response to keiths: Why am I an ID supporter? Easy-

1- There isn't any supporting evidence for materialism or evolutionism
2- There isn't even a way to test materialism or evolutionism
3- There is plenty of evidence for design, starting with the fact there isn't any supporting evidence for materialism and evolutionism- ya see, all design inferences mandate the elimination of materialistic processes before reaching a design inference- see Newton's four rules of scientific investigation
4- Other than #3, the same techniques that allow us to infer design wrt archaeology, SETI and forensics are used to determine design in biology and the universe.
5- And we see design in living organisms and their subsystems- ie we see that which fits the criteria of design, ie no materialistic explanation along with "the ordering of separate components to achieve an identifiable function that depends sharply on the components" (Behe 1996)

So yeah, until materialists can come up with a way to test their position along with positive evidence, the REAL question is why would anyone support materialism and evolutionism?

Joe
December 13, 2012, 05:45 AM PDT
JM's remarks are also relevant:
Before examining the underlying fallacy of Dawkins’ argument, let us take a moment to consider the theological undertones in the above text. Theological arguments — by their very nature — cannot be defended as a scientific statement, and thus ought to be given no place in scientific discussions regarding evolution. The subtitle of Dawkins’ book is The Evidence for Evolution. There should be no need, therefore, to prop up Darwinism by appealing to theologically-related considerations. The age of the earth and the proper interpretation of Genesis is the subject of heated debate among Christians. While I do believe that this is a very interesting and important issue (I personally strongly favour the view that the earth is very ancient), it should not be featuring in scientific discourses concerning the scientific evidence relating to evolution. Moreover, to categorically place all Darwin-skeptics in the same category is misleading. Leaving that point aside, let us turn to Richard Dawkins’ understanding of the Cambrian explosion. First, even if we were to grant him his premise — namely, the contention that organisms prior to the Cambrian were of a non-fossilisable composition (which is plausible) — this is not the point in question. Indeed, it is to be expected that non-skeletonized predecessors ought to leave few if any fossils. If it were the case, therefore, that one evolving line appeared suddenly in the fossil record, once it reached the stage of being fossilizable, then Dawkins might have a point here. But the real challenge of the Cambrian explosion is the wide variety of fossilizable forms which appeared at more or less the same instant in geological time. Every single phyla represented by modern day organisms — certainly all those with fossilizable parts — were included, yet for none is there any clearly identifiable ancestor. It is explaining the simultaneous and abrupt appearance of those which is one of the leading challenges in evolutionary biology. 
Dawkins’ argument here is by no means original. Interestingly, over the last century and a half since the publication of Darwin’s Origin of Species, paleontologists have discovered many Precambrian fossils, many of them microscopic or soft-bodied. As Darwinian paleobiologist William Schopf wrote in his The early evolution of life: solution to Darwin’s dilemma, “The long-held notion that Precambrian organisms must have been too small or too delicate to have been preserved in geological materials…[is] now recognised as incorrect.” If anything, the abrupt appearance of the major animal phyla, conventionally dated to about 540 million years ago, is better documented now than in Darwin’s time. Indeed, as more fossils are discovered it becomes clear that the Cambrian explosion was even more abrupt and extensive than previously envisioned. At any rate, as discussed in some detail here, the Ediacaran fauna are not generally thought to be ancestral to the modern phyla which appear explosively in the Cambrian radiation. The presence of these organisms, therefore, should offer no comfort to Darwinists. As Peter Ward has observed in On Methuselah’s Trail: Living Fossils and the Great Extinctions, “[L]ater study cast doubt on the affinity between these ancient remains preserved in sandstones and living creatures of today; the great German paleontologist A. Seilacher, of Tübingen University, has even gone so far as to suggest that the Ediacaran fauna has no relationship whatsoever with any currently living creatures. In this view, the Ediacaran fauna was completely annihilated before the start of the Cambrian fauna.” (p. 36) Moreover, many phyla (such as the brachiopods and arthropods) couldn’t have evolved their soft parts first and then added the hard parts (such as the exoskeleton or shell) later — their survival depends in large measure upon the ability to protect or shield their soft parts. Soft and hard parts had to arise together.
Finally, the critic of Darwinism need not point to the fossil record as the most compelling decisive blow to Darwinian orthodoxy. Dawkins is free to invoke ad-hoc hypothesis in an attempt to explain away the gaps and challenges presented by the fossil record at the most crucial points. Nonetheless, the fact remains that the fossil record simply cannot be used to document anything relating to the common descent of all life forms — which is one of the two central claims of neo-Darwinism. To state otherwise is to engage in circular reasoning.
Now, the significance of this, is that the fossils constitute traces that come from the past of life on earth, and if positive direct evidence of incrementally diversifying, gradually branching life forms were to be found at body plan level, this is where that would be found. But, overwhelmingly, it is not. That is, the actual fossil record SUPPORTS the view that life forms come in islands. Which is of course the same message we get from 2,000 or so basic protein fold domains, the implications of multiple part functionality that depends on proper matching and specific organisation and the consequent needle in the haystack search challenge faced by a blind chance and mechanical necessity search approach. Namely, once we are beyond 500 bits of FSCO/I, the atomic and temporal resources of our solar system could search the equivalent of sampling a one straw sized sample of a cubical haystack 1,000 LY across (about as thick as our galaxy). With all but certainty, were such a haystack superposed on our galactic neighbourhood, such a sample would be overwhelmingly likely to pick up a straw and nothing else. So, we are in a position to see a consistently repeated pattern of bluffing and strawman tactic based ad hominem attack rhetorical tactics on Mr Dawkins' part:
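The "one straw from a haystack" arithmetic invoked above can be restated as a rough back-of-envelope calculation. The resource figures below (atom count, event rate, timeline) are the order-of-magnitude assumptions used in this discussion, not measured values, and the sketch only shows how the quoted fraction is obtained:

```python
import math

# Back-of-envelope version of the 500-bit "needle in a haystack" estimate.
# All three resource figures are order-of-magnitude assumptions from the
# discussion above, not measurements.
ATOMS_SOLAR_SYSTEM = 10**57   # assumed atom count of the solar system
EVENTS_PER_ATOM_S  = 10**14   # assumed fastest chemical-event rate per atom
SECONDS_AVAILABLE  = 10**17   # roughly the conventional cosmic timeline

config_space = 2**500  # distinct states of a 500-bit string, ~3.3 x 10^150
max_samples = ATOMS_SOLAR_SYSTEM * EVENTS_PER_ATOM_S * SECONDS_AVAILABLE

# Fraction of the configuration space such a search could ever sample:
log10_fraction = math.log10(max_samples) - 500 * math.log10(2)
print(f"samples/space ~ 10^{log10_fraction:.0f}")  # ~ 10^-63
```

On these assumptions the maximum number of samples is about 10^88, against a configuration space of roughly 10^150.5, which is where the vanishingly small sampled fraction comes from.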
a: For the overall case, he exaggerated the degree of warrant available to any empirically based investigation of the remote and unobservable past, which we must reconstruct on traces and on observed processes that reliably and uniquely give rise to similar traces. b: He compounded this by setting up a Creationist strawman and invidiously associating such with holocaust denial. c: On the root of the tree of life, he resorted to the RNA world hypothesis, but failed to note the crushing difficulties that this model faces. And d: on the Cambrian life revolution, he distracted attention from the suddenness and lack of ancestral forms across the range of body plans, presented a special case as though it were typical and would account for the overall case, and failed to clearly note in the immediate context of his remarks that it is no longer credible that soft-bodied forms prior to that time would not be fossilised. Thus, he failed to note that the actual record supports the islands of function view.
Unfortunately, such rhetorical patterns have been typical of Mr Dawkins, all the way back to the notorious Weasel crude genetic algorithm of 1986 - 7. So, we note instead that the observed evidence suggests something else: namely, that islands of function are real and are linked to the search space challenge posed by multiple well matched and properly organised parts required for specific function. Indeed, the routinely -- and the only -- observed cause of FSCO/I is design. We are quite properly entitled to hold that, per reliable and consistent empirical observation supported by the needle in haystack challenge, such FSCO/I is best explained on design, even where we did not directly observe the process of causation. Which is of course the same basic approach that is used in general areas of scientific investigation or scientifically guided investigation where we need to reconstruct an unobserved event that has left us with traces or clues, e.g. was this house fire an accident, or arson? This brings us full circle to the 6,000 word essay challenge on the table since Sept 23, 2012: provide and submit to me for publication at UD a clean, empirically grounded case that substantiates the evolutionary materialist claim, including especially OOL and OOBPs. Let's just say to AF and the watching penumbra of objector sites that the continuing failure to address such a clear and direct way to blow up design theory -- by hitting it in the vitals and blowing up the magazines -- speaks volumes. KF

kairosfocus
December 12, 2012, 10:29 PM PDT
Onlookers: For a couple of days now, I have been waiting for AF to address the inconvenient fact that not only does Mr Dawkins over-claim the powers of scientific theorising on deep past of origins (suggesting that the suggested reconstructions are practically certain facts) and invidiously associate those who challenge such with holocaust deniers, but he does address OOL, through RNA world. As 1012 above shows, when Mr Dawkins does so, lo and behold, his case is nowhere near as substantial as he has suggested. That is, (i) by addressing OOL, Dawkins implies its relevance to the overall issue of the evo mat account of origins of life and its forms, but (ii) the root of the Darwinist tree of life is conspicuously absent and empirically unsupported. So, we are entitled to conclude already that (iii) Mr Dawkins' claims are seriously exaggerated. Now, AF has been missing in action in this thread for a few days now, so it looks like courtesy JM again, we will have to move on to the next issue, OOBPs, using the Cambrian revolution as the key case in point. The essence of this case, of course is that -- ever since Darwin and even now after 150 years of scouring fossil beds and 1/4+ million fossil species with millions of specimens in museums and billions in the ground, we have the very clear pattern of sudden appearances, stasis and disappearance that Gould and others have discussed. In the case of the Cambrian era, what has been going on is that of the 3.5 - 3.8 BY (or possibly 4.2 BY) of traces of life forms on the conventional timeline for the fossil record, some 530 MYA, in a window of 5 - 10 MY, the top level body plans appear, without evident ancestors, and then these continue down to today. 
That is, we do not see an observed last universal common ancestral form that then gradually branches out to yield the various forms, but instead we see top-down variation with no empirical trace in the fossils of the sort of incremental variation and increasing branching and diversification that the tree of life model would lead us to expect. Going back to the PBSW paper presented by Meyer in 2004 (which, contrary to what some may wish to suggest evidently did pass proper peer review by "renowned scientists"), we may summarise the issue from a design perspective:
The Cambrian explosion represents a remarkable jump in the specified complexity or "complex specified information" (CSI) of the biological world. For over three billions years, the biological realm included little more than bacteria and algae (Brocks et al. 1999). Then, beginning about 570-565 million years ago (mya), the first complex multicellular organisms appeared in the rock strata, including sponges, cnidarians, and the peculiar Ediacaran biota (Grotzinger et al. 1995). Forty million years later, the Cambrian explosion occurred (Bowring et al. 1993) . . . One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . . 
In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes--the very stuff of macroevolution--apparently do not vary [--> I add, save the cases of LOSS of features, which is obviously irrelevant to origin of same]. In other words, mutations of the kind that macroevolution doesn't need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don't occur.6
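For a concrete sense of scale, the genome sizes quoted in the excerpt can be converted into a raw information-capacity ceiling at 2 bits per base pair (four possible bases). This is a hedged back-of-envelope conversion only: it ignores redundancy and non-coding regions and is not itself a measurement of specified information.

```python
# Capacity upper bound for the genome sizes quoted in the excerpt above,
# at 2 bits per base pair (4 possible bases). This is a raw storage
# ceiling, not a measure of specified information.
BITS_PER_BP = 2

minimal_cell_bp = (318_000, 562_000)  # Koonin 2000 range, per the excerpt
drosophila_bp = 180_000_000           # fruit-fly genome, per the excerpt

lo, hi = (bp * BITS_PER_BP for bp in minimal_cell_bp)
print(f"minimal cell: {lo:,} - {hi:,} bits")                 # 636,000 - 1,124,000
print(f"Drosophila:   {drosophila_bp * BITS_PER_BP:,} bits")  # 360,000,000
```

The jump from under a megabit or so for a minimal cell to hundreds of megabits for an arthropod genome is the "orders of magnitude" gap the excerpt refers to.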
How does Mr Dawkins address the Cambrian revolution, then? Does he pull out little known fossils that show the slow, incremental branching from ancestral forms, and directly support his case? Surprise -- NOT! -- no. He spends his time setting up and knocking over a Young Earth Creationist strawman:
This great phylum of worms includes the parasitic flukes and tapeworms, which are of great medical importance. My favourites, however, are the free-living tubellarian worms, of which there are more than four thousand species; that’s about as numerous as all the mammal species put together…They are common, both in water and on land, and presumably have been common for a very long time. You’d expect, therefore, to see a rich fossil history. Unfortunately, there is almost nothing. Apart from a handful of ambiguous trace fossils, not a single fossil flatworm has ever been found. The Platyhelminthes, to a worm, are ‘already in an advanced state of evolution, the very first time they appear. It is as though they were just planted there, without any evolutionary history.’ But in this case, ‘the very first time they appear’ is not the Cambrian but today. Do you see what this means, or at least ought to mean for creationists? Creationists believe that flatworms were created in the same week as all other creatures. They have therefore had exactly the same time in which to fossilise as all other animals. During all the centuries when all those bony or shelly animals were depositing happily alongside them, but without leaving any significant trace of their presence in the rocks. What, then, is so special about gaps in the record of these animals that do fossilise, given that the past history of the flatworms amounts to one big gap: even though the flatworms, by the creationists’ own account, have been living for the same length of time? If the gap before the Cambrian Explosion is used as evidence that most animals suddenly sprang into existence in the Cambrian, exactly the same ‘logic’ should be used to prove that the flatworms sprang into existence yesterday. Yet this contradicts the creationist’s belief that flatworms were created during the same creative week as everything else. You cannot have it both ways. 
This argument, at a stroke, completely destroys the creationist case that the Precambrian gap in the fossil record weakens the evidence for evolution.
In short, Dawkins evades the missing required positive evidence that should be substantiating a claimed fact, dismisses or ignores the issue that "every tub must stand on its own bottom," and goes on the rhetorical attack. And even with YEC's, he faces a basic problem: YEC is not committed to any need to find fossils of internal flatworm parasites that presumably would be exceedingly rare in contexts that would fossilise, much moreso than other creatures. And, while it is convenient to Dawkins' rhetorical purpose to let this case stand in for all the others, in fact we are dealing with dozens and dozens of phyla and sub-phyla that are first observed in the Cambrian layers, and in subsequent layers right down to today. We also deal with the fact that in lower and presumptively earlier layers, we have clear records of fossils of soft-bodied creatures, traces of creatures and even of micro-organisms. We even have the ediacaran fossils that are generally not held to be "ancestral" to the phyla we do see -- i.e. we have further major body plans that do appear (evidently with their own problem of lack of ancestral trunk and branches) and persist for many layers then disappear. Some of these seem to have been soft-bodied, and certainly we do see that the layers in question can preserve fossils. So, we see the exact fossil form pattern identified by Gould, of sudden appearance, stasis, disappearance/ continuity to/ reappearance in the present, of major body plans. The handwaving and strawman pounding have not made the real issue go away. That is, by implication, Dawkins is acknowledging that we are seeing islands of functional forms in the fossils. That brings up the pattern highlighted by Loennig of the Max Planck Institute in 2004, in a peer reviewed article, on "Dynamic genomes, morphological stasis, and the origin of irreducible complexity." 
For, speaking of the horseshoe crab as an organism that seems to have been morphologically static across 250 million years of fossil record and on into the contemporary world, he notes:
examples like the horseshoe crab are by no means rare exceptions from the rule of gradually evolving life forms . . . In fact, we are literally surrounded by 'living fossils' in the present world of organisms when applying the term more inclusively as "an existing species whose similarity to ancient ancestral species indicates that very few morphological changes have occurred over a long period of geological time" [85] . . . . Now, since all these "old features", morphologically as well as molecularly, are still with us, the basic genetical questions should be addressed in the face of all the dynamic features of ever reshuffling and rearranging, shifting genomes, (a) why are these characters stable at all and (b) how is it possible to derive stable features from any given plant or animal species by mutations in their genomes? . . . . A first hint for answering the questions . . . is perhaps also provided by Charles Darwin himself when he suggested the following sufficiency test for his theory [16]: "If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down." . . . Biochemist Michael J. Behe [5] has refined Darwin's statement by introducing and defining his concept of "irreducibly complex systems", specifying: "By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning" . . . [for example] (1) the cilium, (2) the bacterial flagellum with filament, hook and motor embedded in the membranes and cell wall and (3) the biochemistry of blood clotting in humans . . . . 
One point is clear: granted that there are indeed many systems and/or correlated subsystems in biology, which have to be classified as irreducibly complex and that such systems are essentially involved in the formation of morphological characters of organisms, this would explain both, the regular abrupt appearance of new forms in the fossil record as well as their constancy over enormous periods of time. For, if "several well-matched, interacting parts that contribute to the basic function" are necessary for biochemical and/or anatomical systems to exist as functioning systems at all (because "the removal of any one of the parts causes the system to effectively cease functioning") such systems have to (1) originate in a non-gradual manner and (2) must remain constant as long as they are reproduced and exist. And this could mean no less than the enormous time periods mentioned for all the living fossils hinted at above. Moreover, an additional phenomenon would also be explained: (3) the equally abrupt disappearance of so many life forms in earth history . . . The reason why irreducibly complex systems would also behave in accord with point (3) is also nearly self-evident: if environmental conditions deteriorate so much for certain life forms (defined and specified by systems and/or subsystems of irreducible complexity), so that their very existence be in question, they could only adapt by integrating further correspondingly specified and useful parts into their overall organization, which prima facie could be an improbable process -- or perish . . . . According to Behe and several other authors [5-7, 21-23, 53-60, 68, 86] the only adequate hypothesis so far known for the origin of irreducibly complex systems is intelligent design (ID) . . . in connection with Dembski's criterion of specified complexity . . . . 
"For something to exhibit specified complexity therefore means that it matches a conditionally independent pattern (i.e., specification) of low specificational complexity, but where the event corresponding to that pattern has a probability less than the universal probability bound and therefore high probabilistic complexity" [23]. For instance, regarding the origin of the bacterial flagellum, Dembski calculated a probability of 10^-234[22].
[ . . . ]

kairosfocus
December 12, 2012, 10:28 PM PDT
Mung: Thank you for the paper. I will certainly give it a try.

gpuccio
December 12, 2012, 01:26 PM PDT
KF: Thank you. I too think that the descriptions are very vague. That's why I asked for specific details from someone who knows the system better. I must say that I am not very impressed either.

gpuccio
December 12, 2012, 01:26 PM PDT
gpuccio, welcome back. Be sure to check out the following paper: McLaughlin_Ranganathan. Is there enough information to calculate dFSCI?

Mung
December 12, 2012, 08:01 AM PDT
GP: Here's a starter on Tierra. I gather Avida is a derivative (not a good recommendation). The key seems to be that the so-called fitness function is alleged to be mere survival, and to be implicit, i.e. there is allegedly no externally given hill to climb. From the Tierra "what is" page:
The Tierra C source code creates a virtual computer and its Darwinian operating system, whose architecture has been designed in such a way that the executable machine codes are evolvable. This means that the machine code can be mutated (by flipping bits at random) or recombined (by swapping segments of code between algorithms), and the resulting code remains functional enough of the time for natural (or presumably artificial) selection to be able to improve the code over time . . . . Along with the C source code which generates the virtual computer, we provide several programs written in the assembler code of the virtual computer. Some of these were written by a human and do nothing more than make copies of themselves in the RAM of the virtual computer. The others evolved from the first, and are included to illustrate the power of natural selection. The operating system of the virtual computer provides memory management and timesharing services. It also provides control for a variety of factors that affect the course of evolution: three kinds of mutation rates, disturbances, the allocation of CPU time to each creature, the size of the soup, etc. In addition, the operating system provides a very elaborate observational system that keeps a record of births and deaths, sequences the code of every creature, and maintains a genebank of successful genomes. The operating system also provides facilities for automating the ecological analysis, that is, for recording the kinds of interactions taking place between creatures. This system results in the production of synthetic organisms based on a computer metaphor of organic life in which CPU time is the ``energy'' resource and memory is the ``material'' resource. Memory is organized into informational patterns that exploit CPU time for self-replication. Mutation generates new forms, and evolution proceeds by natural selection as different genotypes compete for CPU time and memory space. 
Diverse ecological communities have emerged. These digital communities have been used to experimentally examine ecological and evolutionary processes: e.g., competitive exclusion and coexistence, host/parasite density dependent population regulation, the effect of parasites in enhancing community diversity, evolutionary arms race, punctuated equilibrium, and the role of chance and historical factors in evolution. This evolution in a bottle may prove to be a valuable tool for the study of evolution and ecology.
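The two mutation mechanisms the excerpt names -- flipping bits at random, and recombining by swapping segments of code between genomes -- can be illustrated with a toy sketch. This is a hypothetical Python illustration, not Tierra's actual C implementation; the genome contents are arbitrary placeholder bytes.

```python
import random

# Toy versions of the two mutation operators the Tierra excerpt names:
# flipping bits at random, and recombining by swapping code segments.
# Hypothetical sketch only; Tierra itself operates on virtual machine code.

def flip_bit(genome: bytearray, rng: random.Random) -> None:
    """Flip one randomly chosen bit in one randomly chosen byte."""
    genome[rng.randrange(len(genome))] ^= 1 << rng.randrange(8)

def swap_segment(a: bytearray, b: bytearray, rng: random.Random) -> None:
    """Exchange a random equal-length segment between two genomes."""
    n = min(len(a), len(b))
    start = rng.randrange(n)
    length = rng.randrange(1, n - start + 1)
    a[start:start + length], b[start:start + length] = \
        b[start:start + length], a[start:start + length]

rng = random.Random(0)
g1, g2 = bytearray(b"SELFCOPY"), bytearray(b"PARASITE")
flip_bit(g1, rng)
swap_segment(g1, g2, rng)
print(g1, g2)  # lengths are preserved; contents are mutated and recombined
```

Whether code subjected to such operators "remains functional enough of the time" for selection to act, as the excerpt claims, is exactly the question the surrounding discussion raises; the sketch only shows the operators themselves.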
Frankly, this is suspiciously vague, and the description of improvement points to a fitness metric and to intelligent selection as driving replication leading to survival of the preferred. That OS seems to be pivotal to performance and begs the question of the search space challenge of the real world. There is a hint of short codes of about 80 bits. The so-called digital organisms look a lot like controlled computer viruses in a sandbox. And the reference to genomes points to something that looks a lot like what happens with GA's. This is of course a first look, but the bottom line is that this is a wholly artificial scheme, designed and tuned by a known author, using an underlying designed system to produce desired displays of patterns imagined to have happened in the deep past. Analogies and artificial worlds far removed from Darwin's warm little pond of electrified salts, much less addressing the realistic needle-in-haystack challenge to produce novel body plans with genomes of order 10 - 100+ mn bits apiece. But, when such is beheld through the a priori materialist eye of faith, wondrous confirmations of what one wanted to see appear, to great rejoicing. Pardon my being a tad less than impressed. KF

kairosfocus
December 12, 2012, 06:02 AM PDT
Joe Felsenstein (and others): Sorry, I have been very busy. You say: "gpuccio has seemingly also ruled out GA-type models (although keiths has pointed out contradictory statements gpuccio has made on this point, and there is as yet no clarification of the matter by gpuccio)." Well, I believe I have answered keiths's comment in my post #941. Regarding Tierra, I asked: "Regarding Tierra, I have asked many times that someone on your side explain clearly how it works. I don't know the code and the system. For example, it would be crucial to understand if the so-called replicators in the system are true replicators, and if their replication 'advantages' derive from true natural replication functions, and not from measured features. And nobody has ever explained what complexity the system would generate. If you want to use Tierra to make your point, please make your point in detail." I really do not have the time to study Tierra in detail. If someone among you is acquainted with the system, could that someone please try to answer my simple points? The fact is, Tierra would be interesting for our discussion only if it is a true "implementation" of NS. Therefore, the key point is: "Are the so-called replicators in the system true autonomous replicators?" IOWs, are they similar to a computer virus, that copies itself in a system that has not been programmed to recognize it? This is important, because the whole concept of NS is that autonomous self-replicators can improve their replicating fitness by RV. So, it is crucial to have some answer also to the second question: "Do their replication 'advantages' derive from true natural replication functions, and not from measured features?" IOWs, is Tierra implementing NS or IS? Finally, a true autonomous replication advantage is fine as a function for me. But we need to know what new functional code is responsible for that advantage, so that we can measure the complexity linked to that new code.

For example, let's imagine that a computer virus replicates better because it develops a new system to copy its code to new locations in the computer. We could easily find which new code in the virus accomplishes that new task, and measure the linked complexity. So, can anyone there answer these points, please?

gpuccio
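The bookkeeping step asked for here — isolate the new code responsible for the new task and count its bits — can be sketched crudely with a sequence diff. This is a hedged illustration only, not gpuccio's actual dFSCI procedure: it treats every inserted or replaced byte as functional (which is at best an upper bound), and the byte strings below are made-up stand-ins for virus code.

```python
# Crude upper bound on "bits of new code": diff old vs. new and count
# bytes that appear only in the new version.
import difflib

def new_code_bits(old: bytes, new: bytes) -> int:
    """Bits of code present in `new` but absent from `old` (upper bound)."""
    sm = difflib.SequenceMatcher(a=old, b=new, autojunk=False)
    added = sum(
        j2 - j1
        for tag, i1, i2, j1, j2 in sm.get_opcodes()
        if tag in ("insert", "replace")
    )
    return added * 8

# hypothetical example: a "virus" gains a 3-byte copying routine
print(new_code_bits(b"abcdef", b"abcXYZdef"))  # → 24 bits added
```

A real measurement would additionally have to check that the counted bytes are actually necessary for the new function, which a raw diff cannot establish.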
December 12, 2012, 02:28 AM PDT
F/N 2: AF is addressed on OOL from here on in the UB sets it out thread. The relevance of OOL for onward OOBPs is implicit in the implications of the OOL case putting design firmly at the table, multiplied by the drastic escalation in the quantity of dFSCI to be accounted for AND the drastic reduction in the scope of resources to our planet; where, for every additional bit in a string, the space of possible configs DOUBLES. OOL requires, per the smallest genomes, 100 - 1,000 kbits, and OOBPs require 10 - 100+ mn bits apiece. Just 1,000 bits is unsearchable by the atomic resources of the observed cosmos to date, and 500 bits, by those of our solar system (let's be generous). KF

kairosfocus
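The doubling claim and the 500/1,000-bit thresholds are straightforward arithmetic and can be checked directly. A minimal sketch, assuming the commonly cited ballpark figures rather than anything stated in this thread: roughly 10^57 atoms in the solar system, 10^80 in the observed cosmos, about 10^17 seconds of cosmic history, and a very generous 10^14 operations per atom per second.

```python
# What fraction of a 2**bits configuration space could brute force cover?
# All resource figures are assumed ballpark values, not from the thread.
def search_fraction(bits, atoms, ops_per_sec=10**14, seconds=10**17):
    """Fraction of a 2**bits space coverable at one config per operation."""
    total_ops = atoms * ops_per_sec * seconds
    return total_ops / 2**bits

# each extra bit doubles the space
assert 2**501 == 2 * 2**500

print(search_fraction(500, 10**57))    # solar-system-scale resources, ~3e-63
print(search_fraction(1000, 10**80))   # cosmos-scale resources, ~9e-191
```

Even with these deliberately generous assumptions, the searchable fraction at 500 bits is on the order of 10^-63, which is the point the thresholds are meant to capture.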
December 11, 2012, 10:30 PM PDT
Joe: where, we know agency can act because in the here and now we have shown similar results by known agent action, and have seen that nature acting blindly and freely by chance and/or mechanical necessity is not observed to do the same. The underlying issue being that when multiple well-matched parts require a narrow range of specific configurations to achieve a function, the sub-space that will function will be deeply isolated in the space of possible configs. In particular, the claimed magic bullet, exaptation, will run into the challenge that parts have to interface correctly and match, or the complex function will not happen. Let us not forget, for instance, how with the F-86 the simple fact that an assembly worker installed a given bolt the "usual" way instead of as specified caused a string of fatal crashes. That gives an index of how exacting "well-matched" can be. KF

kairosfocus
December 11, 2012, 10:03 PM PDT
