Uncommon Descent Serving The Intelligent Design Community

The TSZ and Jerad Thread, III — 900+ and almost 800 comments in, needing a new thread . . .

Categories: Culture, Design inference, Education, Evolution, ID Foundations, science education, specified complexity, worldview

Okay, the thread of discussion needs to pick up from here on.

To motivate discussion, let me clip here comment no. 795 in the continuation thread, which I have marked up:

_________

>> 795 Jerad, October 23, 2012 at 1:18 am:

KF (783):

At this point, with all due respect, you look like someone making stuff up to fit your predetermined conclusion.

I know you think so.

[a –> Jerad, I will pause to mark up. I would further suggest, with all due respect, that I have some warrant for my remark, especially given how glaringly you mishandled the design inference framework in the remark I responded to earlier.]

{Let me add a diagram of the per aspect explanatory filter, using the more elaborated form this time}

The ID Inference Explanatory Filter. Note in particular the sequence of decision nodes
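{For readers who prefer pseudocode to a flowchart, here is a minimal Python sketch of the filter's decision sequence. The function, its inputs and the 500-bit default are illustrative stand-ins for the judgments discussed below, not a published formalism:}

```python
def explanatory_filter(highly_contingent: bool,
                       complexity_bits: float,
                       specified: bool,
                       threshold_bits: float = 500) -> str:
    """Walk the three decision nodes of the per-aspect filter.

    The 500-bit default is the solar-system threshold discussed in
    the thread; 1,000 bits is the cosmos-scale figure.
    """
    if not highly_contingent:       # Node 1: regularity -> law
        return "law (mechanical necessity)"
    if complexity_bits < threshold_bits or not specified:
        return "chance"             # Node 2: default to chance
    return "design"                 # Node 3: complex AND specified

# A 7-bit ASCII character is contingent and specified, but far below
# the threshold, so the filter deliberately returns "chance".
print(explanatory_filter(True, 7, True))      # chance
print(explanatory_filter(True, 1000, True))   # design
```

{Note the deliberate asymmetry: chance is the default for anything below the threshold, which is what produces the false negatives conceded throughout this discussion.}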


You have for sure seen the per aspect design filter and know that the first default explanation is that something is caused by a law of necessity, for good reason; that covers the bulk of the cosmos. You know similarly that highly contingent outcomes have two empirically warranted causal sources: chance and choice.

You know full well that the reason chance is the default is to give the plain benefit of the doubt to chance, even at the expense of false negatives.

I suppose. Again, I don’t think of it like that. I take each case and consider its context before deciding what I think the most likely explanation is.

[b –> You have already had an adequate summary of how scientific investigations evaluate items we cannot directly observe: based on traces, and on causal patterns and signs we can directly establish as reliable, by comparison. This is the exact procedure used in design inference, a pattern that famously traces to Newton’s uniformity principle of reasoning in science.]

I think SETI signals are a good example of really having no idea what’s being looked at.

[c –> There are no, zip, zilch, nada, SETI signals of consequence. And certainly no coded messages. But it is beyond dispute that if such a signal were received, it would be taken very seriously indeed. In the case of dFSCI, we are examining patterns relevant to coded signals. And, we have a highly relevant case in point in the living cell, which points to the origin of life. Which of course is an area that has been highlighted as pivotal on the whole issue of origins, but which is one where you have determined not to tread any more than you have to.]

I suppose, in that case, they do go through something like your steps . . . first thing: seeing if the new signal is similar to known and explained stuff.

[d –> If you take off materialist blinkers for the moment and look at what the design filter does, you will see that it is asking: what is it that we are doing in an empirically based, scientific explanation, and how does this relate to the empirical fact that design exists and affects the world, leaving evident traces? We see that the first thing looked for is natural regularities, tracing to laws of mechanical necessity. Second — and my home discipline was a pioneer in this in the C19 — we look at stochastically distributed patterns of behaviour that credibly trace to chance processes. Then it asks: what happens if we look for distinguishing characteristics of the other cause of high contingency, design? And in so doing, we see that there are indeed empirically reliable signs of design, which have considerable relevance to how we look at, among other things, origins. But more broadly, it grounds the intuition that there are markers of design as opposed to chance.]

And you know the stringency of the criterion of specificity (especially functional) JOINED TO complexity beyond 500 or 1,000 bits worth, as a pivot to show cases where the only reasonable, empirically warranted explanation is design.

I still think you’re calling design too early.

[e –> Give a false positive, or show warrant for the dismissal. Remember, just on the solar system scope, we are talking about a result showing that by using the entire resources of the solar system for its typically estimated lifespan to date, we could only sample something like 1 straw from a cubical haystack 1,000 light years across. If you think the sampling theory result, that a small but significant random sample will typically capture the bulk of a distribution, is unsound, kindly show us why, and how that affects sampling theory in light of the issue of fluctuations. Failing that, I have every epistemic right to suggest that what we are seeing instead is your a priori commitment to not infer design peeking through.]
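{To make the one-straw figure concrete, here is the arithmetic, assuming the round numbers used elsewhere in this series: about 10^57 atoms in the solar system, about 10^17 seconds of estimated lifespan to date, and a generous 10^14 inspections per atom per second. Treat all three as assumptions of the sketch:}

```python
from math import log10

atoms = 1e57     # rough atom count for the solar system (assumed)
seconds = 1e17   # rough lifespan to date, in seconds (assumed)
rate = 1e14      # inspections per atom per second (assumed, generous)

samples = atoms * seconds * rate   # ~1e88 possible observations
config_space = 2.0 ** 500          # ~3.27e150 configs at 500 bits

print(f"samples      ~ 10^{log10(samples):.0f}")        # 10^88
print(f"config space ~ 10^{log10(config_space):.0f}")   # 10^151
print(samples / config_space)                           # ~3e-63
# The whole solar system's search resources sample ~3e-63 of the
# space: the 'one straw' against the light-years-thick haystack.
```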

And, to be honest, the only things I’ve seen the design community call design on is DNA and, in a very different way, the cosmos.

[f –> Not so. What happens is that design is most contentious on these, but in fact the design inference is used all the time in all sorts of fields, often on an intuitive or semi-intuitive basis. As just one example, consider how fires are explained as arson vs. accident. Similarly, how a particular effect in our bodies is explained as a signature of drug intervention vs. chance behaviour or natural mechanism. And of course there is the whole world of hypothesis testing, by examining whether we are in the bulk or the far skirt of a distribution, and whether it is reasonable to expect such on the particularities of the situation.]

The real problem, with all respect, as already highlighted, is obviously that this filter will point out cell-based life as designed. Which, even though you do not have an empirically well warranted alternative causal explanation, you do not wish to accept.

I don’t think you’ve made the case yet.

[g –> On the evidence it is plain that there is a controlling a priori commitment at work, so the case will never be perceived as made, as there will always be a selectively hyperskeptical objection that demands an increment of warrant that is, by calculation or by unreflective assertion, unreasonable to demand, by comparison with essentially similar situations. Notice how ever so many swallow a timeline model of the past without batting an eye, but strain at a design inference that is much more empirically reliable on the causal patterns and signs that we have. That’s a case of straining at a gnat while swallowing a camel.]

I don’t think the design inference has been rigorously established as an objective measure.

[h –> Dismissive assertion, in a context where “rigorous” is often a signature of selective hyperskepticism at work, cf. the above. The inference on algorithmic digital code that has been the subject of Nobel Prize awards should be plain enough.]

I think you’ve decided that only intelligence can create stuff like DNA.

[i –> Rubbish, and I do not appreciate your putting words in my mouth or thoughts in my head that do not belong there, to justify a turnabout assertion. You know, or full well should know, that — as is true for any significant science — a single well documented case of FSCO/I reliably coming about by blind chance and/or mechanical necessity would suffice to break the empirical reliability of the inference that the only observed — billions of cases — cause of FSCO/I is design. That you object by projecting question-begging (that is exactly what your assertion means) instead of putting forth clear counter-examples is strong evidence in itself that the observation is quite correct. That observation is backed by the needle in the haystack analysis, which shows why, beyond a certain level of complexity joined to the sort of specificity that makes relevant cases come from narrow zones T in large config spaces W, it is utterly unlikely to observe cases E from T based on blind chance and mechanical necessity.]

I haven’t seen any objective way to determine that except to say: it’s over so many bits long so it’s designed.

[j –> Strawman caricature. You know better, a lot better. You full well know that we are looking at complexity AND specificity that confines us to narrow zones T in wide spaces of possibilities W, such that the atomic resources of our solar system or the observed cosmos will be swamped by the amount of haystack to be searched. Where you have been given the reasoning from sampling theory as to why blind samples comparable to 1 straw from a hay bale 1,000 light years across (as thick as our galaxy) will reliably pick up only the bulk, even if the haystack were superposed on our galaxy near earth. Indeed, just above you had opportunity to see a concrete example of a text string in English and how easily it passes the specificity-complexity criterion.]

And I just don’t think that’s good enough.

[k –> Knocking over a strawman. Kindly deal with the real issue that has been put to you over and over, in more than adequate detail.]

But that inference is based on what we do know, the reliable cause of FSCO/I and the related needle in the haystack analysis. (As was just shown for a concrete case.)

But you don’t know that there was an intelligence around when one needed to be around, which means you’re assuming a cause.

[l –> Really! You have repeatedly been advised that we are addressing inference on empirically reliable signs, per patterns we investigate in the present. Surely, that we see that reliably, where there is a sign, we have confirmed the presence of the associated cause, is an empirical base of fact that shows something that is at least a good candidate for being a uniform pattern. We back it up with an analysis that shows, on well accepted and uncontroversial statistical principles, why this is so. Then we look at cases where we see traces from the past that are comparable to the signs we just confirmed to be reliable indices. Such signs, to any reasonable person not ideologically committed to a contrary position, will count as evidence of similar causes acting in the past. But more tellingly, we can point to other cases, such as the reconstructed timeline of the earth’s past, where on much weaker correlations between effects and putative causes, those who object to the design inference make highly confident conclusions about the past and, in so doing, even go so far as to present them as though they were indisputable facts. The inconsistency is glaringly obvious, save to the true believers in the evo mat scheme.]

And you’re not addressing all the evidence which points to universal common descent with modification.

[m –> I have started from the evidence at the root of the tree of life, and find that there is no credible reason to infer that chemistry and physics in some still warm pond or the like will assemble, at once or incrementally, a gated, encapsulated, metabolising entity using a von Neumann, code-based self-replicator, based on highly endothermic and information-rich macromolecules. So, I see there is no root to the alleged tree of life, on Darwinist premises. I look at the dFSCI in the living cell, a trace from the past, note that it is a case of FSCO/I, and on the pattern of causal investigations and inductions already outlined, I see I have excellent reason to conclude that the living cell is a work of skilled ART, not blind chance and mechanical necessity. Thereafter, any evidence of common descent or the like is to be viewed in that pivotal light. And I find that common design rather than common descent is the superior explanation: the systematic pattern of — too often papered over — islands of molecular function (try protein fold domains), the suddenness and stasis seen across the 1/4 million plus fossil species, the scope of fresh FSCO/I involved in novel body plans, and the mosaic animals etc. that point to libraries of reusable parts, and more, give me high confidence that I am seeing a pattern of common design rather than common descent. This is reinforced when I see that ideological a prioris are heavily involved in forcing the Darwinist blind watchmaker thesis model of the past.]

We’re going around in circles here.

[n –> On the contrary, what is coming out loud and clear is the ideological a priori that drives circularity in the evolutionary materialist reconstruction of the deep past of origins. KF]>>

___________

GP at 796, and following, is also a good pick-up point:

__________

>>796

  1. Joe:

If a string for which we have correctly assessed dFSCI is proved to have historically emerged without any design intervention, that would be a false positive. dFSCI has been correctly assessed, but it does not correspond empirically to a design origin.

It is important to remember that no such example is empirically known. That’s why we say that dFSCI has 100% specificity as an indicator of design.

If a few examples of that kind were found, the specificity of the tool would be lower. We could still keep some use for it, but I admit that its relevance for a design inference in such a fundamental issue as the interpretation of biological information would be heavily compromised.

  2. If you received an electromagnetic burst from space that occurred at precisely equal intervals and kept to sidereal time, would that be a candidate for SCI?

  3. Are homing beacons SCI?

  4. Jerad:

    As you should know, the first default is to look for mechanical necessity. The neutron star model of pulsars suffices to explain what we see.

    Homing beacons come in networks (here I look at DECCA, LORAN and the like, up to today’s GPS) and are highly complex nodes. They are parts of communication networks with highly complex and functionally specific communication systems, where encoders, modulators, transmitters, receivers, demodulators and decoders have to be precisely and exactly matched.

    Just take an antenna tower if you don’t want to look at anything more complex.

    KF>>

__________

I am fairly sure that this discussion, now in excess of 1,500 comments, lets us all see what is really going on in the debate over the design inference. END

Comments
Toronto: “To clarify, if I ‘show’ you an initial string with the information to self-replicate, whose length is below the UPB threshold, are we in agreement that the string can also exist randomly without my actually creating it?”

No. If you show me that string, and if I have decided that the UPB is an appropriate threshold for the system and time span you are suggesting, I will simply not affirm dFSCI for it, and I will not make a design inference. That does not mean that I am saying that it can exist randomly. If you want to consider my not affirming dFSCI “a negative”, that is perfectly correct. Let’s say that, at the end of my dFSCI assessment, I am inferring that the string is not designed. As the dFSCI procedure has a lot of false negatives, in no way does that mean that I, or anyone else, can realistically hypothesize that the string is not designed. The string can well be designed or non-designed, and my assessment of dFSCI can tell us nothing credible on that point. That is the simple consequence of the low sensitivity of the dFSCI procedure, that is, of its many false negatives.

gpuccio
November 26, 2012 at 06:58 AM PDT
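(The specificity/sensitivity vocabulary in the comment above is the standard binary-classifier usage. A minimal Python sketch, with made-up counts chosen only to match the description, shows why abundant false negatives leave specificity untouched:)

```python
def specificity(tn: int, fp: int) -> float:
    """True-negative rate: how often non-designed strings are
    correctly not flagged."""
    return tn / (tn + fp)

def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: how often designed strings are flagged."""
    return tp / (tp + fn)

# Hypothetical counts matching the description: the procedure never
# flags a non-designed string (fp = 0), but its conservative
# threshold misses most designed ones (fn large).
tp, fn = 10, 990     # designed strings: few affirmed, many missed
tn, fp = 1000, 0     # non-designed strings: none ever affirmed

print(specificity(tn, fp))   # 1.0  -> a positive call is reliable
print(sensitivity(tp, fn))   # 0.01 -> a negative call tells us nothing
```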
Toronto: “The threshold we choose in all our debates between every contributor on both sides, is the number of bits that we accept as being the boundary between what we accept as being possible without the mechanism of intelligent design, i.e. random processes, as opposed to those configurations that require the efforts of an intelligent designer.”

No. Again, you are equivocating. And believe me, it is not a problem of language, but of bad reasoning on your part. The threshold has the only purpose of making an inference to design 100% specific. It is a conventional value. It does not make everything under that threshold “possible without design”. That is not its purpose. And it is not even true that results above the threshold “require the efforts of an intelligent designer”. The correct concept is that over that threshold we think that our design inference will most probably have 100% specificity, while we do not feel safe enough about that under the threshold. Concepts must be understood as they are, not changed for one’s purposes.

“By definition, any bit configuration under the threshold is considered to be a possibility, in a random distribution.”

This is folly. Any configuration whose probability is not zero is a possibility in a random distribution. Only configurations whose probability is zero are impossible. You conflate, like many of your lot, logical possibility with empirical inference.

“As an example, it is possible to find 7 bit patterns in a random distribution that will equal ‘A’.”

It is possible also to find a Shakespeare sonnet. The point is not what is possible.

“If I design a 7 bit pattern to equal ‘A’, it in no way means that all other patterns that exist with that same value are thus no longer random.”

Obviously not. Yours was designed; the random ones are random strings. The string remains the same in all cases. The origin changes.

“If random patterns can be found with the same configuration as a designed pattern, those random pre-existing patterns are still considered to be due to random processes, even though the pattern I configured with intentional design exists in the universe alongside them.”

And so?

“The whole point of designing a bit configuration that can also be found occurring randomly, is to show the functionality of a string that can also exist due to non-design processes.”

Again, simple nonsense. If you design a functional string of 499 bits, it will be under the UPB. That does not mean that “it can also exist due to non design processes”. I would safely bet that such a string cannot come to exist by non design processes, even in the whole universe. But the UPB is a conventional threshold, which Dembski considers appropriate for the whole inference, and he uses that threshold because above it he feels the inference is absolutely reliable. That does not mean you are assured of finding a functional 499 bit string in a random system. Get it by non designed mechanisms, and then we will discuss. There is a simple fact that you seem not to understand: obtaining a 501 bit functional string by non design mechanisms means one and only one thing: obtaining a 501 bit functional string by non design mechanisms. Obtaining a 501 bit functional string from a designed 499 bit string is another matter entirely.

“By designing the string, I am demonstrating it, not forcing its existence.”

Demonstrating? What do you mean?

“Without my design efforts, the string could still exist in nature since it is below our design threshold.”

You have made a law of nature out of an inference procedure! This is what darwinism does to human minds :)

“So in conclusion, I will design a string, that is also within the capabilities of non-design processes to produce.”

Then let non design processes produce it! And then go on from it.

“Since it is below the threshold, it already exists randomly and I don’t have to design it for any other purpose than to show what its capabilities are.”

I confirm my diagnosis. You are completely lost.

gpuccio
November 26, 2012 at 06:50 AM PDT
keiths: “And I have given you evidence that selectable intermediates exist. You didn’t deny that Lenski had found one — your only claim was that it was a case of ‘microevolution’, not ‘macroevolution’.”

What were those alleged selectable intermediates Lenski found? There weren’t any until after the potentiating mutations AND the tandem duplication occurred. Also, no one knows if what happened, happened by chance. IOW there is no reason to invoke the blind watchmaker.

Joe
November 26, 2012 at 05:03 AM PDT
keiths: “Also, my objective nested hierarchy argument shows that the evidence supports the existence of selectable intermediates far, far better than it supports the existence of your designer.”

1. keiths does not understand nested hierarchies; that is evidenced by his referencing Doug Theobald, another evo who does not understand the concept.
2. Nested hierarchies do not support the existence of selectable intermediates; another fact that proves keiths is clueless wrt nested hierarchies.
3. Linnean taxonomy, the nested hierarchy, was based on a common design. Another fact keiths ignores because he loves his ignorance.

Joe
November 26, 2012 at 05:00 AM PDT
keiths: “Even you accept the reality of natural selection.”

Right, the reality of natural selection is that A) it exists and B) it doesn’t do anything.

“Natural selection operates via differential fitness,”

Natural selection is differential reproduction due to heritable random variation.

“so of course there are fitness parameters in the equations. How could there not be?”

But no one can tell what will be selected for at any point in time; it is all relative to the environment. Not only that, whatever is good enough survives to reproduce.

Joe
November 26, 2012 at 04:57 AM PDT
Mark: I believe you may be right that there is a problem of language, and probably it is not only the fault of my being Italian, but of a substantial ambiguity in the use of the words. Let’s try to clarify. I have checked Wikipedia at “inference”:

“The process by which a conclusion is inferred from multiple observations is called inductive reasoning. The conclusion may be correct or incorrect, or correct to within a certain degree of accuracy, or correct in certain situations. Conclusions inferred from multiple observations may be tested by additional observations. This definition is disputable (due to its lack of clarity. Ref: Oxford English Dictionary: ‘induction … 3. Logic the inference of a general law from particular instances.’) The definition given thus applies only when the ‘conclusion’ is general. 1. A conclusion reached on the basis of evidence and reasoning. 2. The process of reaching such a conclusion: ‘order, health, and by inference cleanliness’.”

That is vague enough to generate confusion. So, let’s clarify what I mean, and always have meant, by inference. It is the first meaning: “The process by which a conclusion is inferred from multiple observations is called inductive reasoning. The conclusion may be correct or incorrect, or correct to within a certain degree of accuracy, or correct in certain situations. Conclusions inferred from multiple observations may be tested by additional observations.” That is my meaning. I agree that it is probably more correct to say that the inference (in that sense) is made by someone, and not by the explanation itself. That, however, does not change anything. To be even more clear, still from Wikipedia:

“Inductive reasoning is probabilistic; it only states that, given the premises, the conclusion is probable. A statistical syllogism is an example of inductive reasoning: 90% of humans are right-handed. Joe is a human. Therefore, the probability that Joe is right-handed is 90% (therefore, if we are required to guess, we will choose ‘right-handed’ in the absence of any other evidence). As a stronger example: 100% of life forms that we know of depend on liquid water to exist. Therefore, if we discover a new life form it will probably depend on liquid water to exist. This argument could have been made every time a new life form was found, and would have been correct every time. While it is possible that in the future a life form that does not require water will be discovered, in the absence of other factors (e.g. if it were from another planet) the conclusion is probably correct, as it has been in the past. Unlike deductive arguments, inductive reasoning allows for the possibility that the conclusion is false, even if all of the premises are true. Instead of being valid or invalid, inductive arguments are either strong or weak, which describes how probable it is that the conclusion is true. A classical example of an incorrect inductive argument was presented by John Vickers: All of the swans we have seen are white. Therefore, all swans are white. Note that this definition of inductive reasoning excludes mathematical induction, which is a form of deductive reasoning.”

This is what I mean by “inference”. I definitely do not mean “logical deduction” or “logical conclusion”.

“The key difference as far as we are concerned is whether a design explanation logically implies a design origin or empirically implies it.”

And this is my key answer: a design explanation implies a design origin empirically, not logically. I would rather say that it probabilistically points to it (“implies” is usually meant in a logical sense).

“You may wrongly infer that a protein is designed.”

That is not the point. But I don’t mean that I make a mistake in assessing dFSCI. What I mean is that dFSCI can be wrong about the design origin, in principle. Empirically, it is another matter entirely.

“Yes this is what I mean. And, assuming that you use includes in a reasonably normal sense (but this is getting into a real quagmire) it follows that a design explanation logically implies a design origin.”

I don’t follow you. We have said that my explanation is probabilistic. We have said that it does not imply a design origin logically, but only empirically. My design explanation includes the hypothesis of a design origin. What are you saying here? That if I make a hypothesis, that logically implies that the hypothesis is true? I really don’t understand.

“This is the same confusion. I am not saying the pattern in the cloud logically implies that the pattern was designed or the origin. What I am saying is that if the pattern was designed then it follows logically (not empirically) that the pattern had a design origin.”

Again, I don’t follow you. “To be designed” and “to have a design origin” mean exactly the same thing. Obviously, if a pattern is designed it follows logically that it had a design origin. It is the same thing. It is the logical principle of identity.

“Yes – but the key thing is the nature of that impossibility. I just want you to agree that if the origin of the configuration of a digital string was a design process then it follows logically (not empirically) that the configuration of the digital string was designed.”

I obviously agree with this.

“I think you agree with this, but these things need absolutely nailing down given the nature of the debate. If you disagree then it should be possible to describe a case where a digital string’s configuration had an origin in a design process and yet was not designed.”

I don’t disagree. I absolutely agree.

“The configuration of a digital string is designed if and only if the origin was a design process.”

Is that all? We have a saying in Sicily that, translated into English, would be more or less: “you took a lot of trouble on yourself by just not asking”! I absolutely agree with that statement. For me, the configuration of an object is defined as designed if, and only if, that configuration was outputted into the object by a conscious intelligent being from his personal conscious representations. That is in my original definition. IOWs, designed things are by definition the result of a design process.

gpuccio
November 26, 2012 at 04:08 AM PDT
Alan Fox: “Could you link to just one example?”

I will give you a very simple example of how we can personally verify that protein superfamilies are not bridged by any homology. I will use the Protein BLAST tool at NCBI. I have chosen, just as an example, the Cytochrome C superfamily in SCOP, a member of the alpha proteins class, and of the fold “Cytochrome C”. To keep it simple, I have chosen a member of the family “monodomain cytochrome c”, so that we are sure we are dealing with a single-domain sequence. I have taken, just by chance, a specific protein of the family: Cyanobacterium (Synechococcus elongatus) [TaxId: 32046] in SCOP.

In UniProt, it is listed as an 86 AA protein, a really simple example. The function, from UniProt, is: “Functions as an electron carrier between membrane-bound cytochrome b6-f and photosystem I in oxygenic photosynthesis.” The identifier, which can be used in the BLAST tool, is P0A3Y0.

I blasted that sequence. I asked for 1000 hits, to go beyond the obvious homologies. The result: only 712 hits have an E-value lower than 0.05, with a minimal homology of about 30%. Practically all of these proteins are Cytochrome C variants. That is true also of the remaining proteins listed in my 1000-hit list. At the end of the list, the E-value is as high as 0.20. Most hits are specific to proteins of the “monodomain cytochrome c” family, indeed to the subcategory Cytochrome c6 (synonym: cytochrome c553) domain.

I hope this example gives you some idea of how specific sequences are in the proteome, and how insulated they are. It is just a random example. We could choose thousands like that.

gpuccio
November 26, 2012 at 03:20 AM PDT
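(The search described above can be reproduced programmatically. The sketch below uses Biopython's web BLAST interface; it assumes network access, takes minutes to run, and, since the nr database has grown since 2012, will return different hit counts from those quoted above:)

```python
from Bio.Blast import NCBIWWW, NCBIXML

# blastp the cytochrome c6 accession P0A3Y0 against nr, asking for
# up to 1000 hits as in the comment above.
handle = NCBIWWW.qblast("blastp", "nr", "P0A3Y0", hitlist_size=1000)
record = NCBIXML.read(handle)

# Count alignments whose best HSP has an E-value below 0.05.
significant = [a for a in record.alignments
               if min(h.expect for h in a.hsps) < 0.05]
print(f"{len(significant)} of {len(record.alignments)} hits at E < 0.05")
```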
keiths: “You are saying that if we currently lack knowledge of selectable intermediates, then the only ‘realistic’ way to model NS is to assume that there are none and that NS therefore plays no role at all.”

Or you could be a “good” skeptic and believe that the selectable intermediates once existed even though you have no strong evidence for your belief. Are you a “good” skeptic? You can just ‘poof’ them into existence as and when needed by your theory.

Mung
November 25, 2012 at 06:27 PM PDT
Joe Felsenstein on November 24, 2012 at 4:21 pm said: “It is less clear from Mung’s comments that Mung knows what is and is not a GA, and it is unclear what gpuccio would do about modeling evolution, as gpuccio has presented no such models.”

Ok, Joe, I’ll bite. What is a GA? I’ll say what a GA is not. A GA is not a model of evolution.

Mung
November 25, 2012 at 06:16 PM PDT
Joe Felsenstein on November 25, 2012 at 7:23 am said: “Mung also seems not to realize that by increasing the frequencies of rare alleles one can give rise to new combinations of those alleles that did not exist before.”

So? Natural selection didn’t put the allele into the genome and it didn’t put the combination into the genome either. A new combination could arise even while an allele is still ‘rare.’ Or is there something to prevent that? Natural selection could just as easily prevent new combinations from arising that did not exist before. Right? Say you have in a population two alleles, A and a. A is increasing in frequency in the population, is being substituted for a, and thus the frequency of a is decreasing. Now imagine a mutation bringing about some new novel allele B. You may have increased the probability that B will be combined with A, but you’ve likewise decreased the probability that B will be combined with a. And for all we know, given that the mutation is random with regard to fitness, a + B might be beneficial whereas A + B might not be. So what’s your point? “Natural selection” cuts both ways. Don’t your equations tell you that?

Mung
November 25, 2012 at 05:46 PM PDT
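(The arithmetic in the comment above is easy to check. Under random association of alleles, i.e. linkage equilibrium, which is the simplifying assumption of this sketch, the expected frequency of a two-allele combination is just the product of the allele frequencies:)

```python
# Expected frequency of a two-allele combination under random
# association (linkage equilibrium).
def combo_freq(p_first: float, p_second: float) -> float:
    return p_first * p_second

p_B = 0.01                   # new, rare allele B (assumed frequency)
for p_A in (0.2, 0.5, 0.8):  # A being substituted for a
    p_a = 1 - p_A
    print(f"p(A)={p_A}: freq(A+B)={combo_freq(p_A, p_B):.4f}, "
          f"freq(a+B)={combo_freq(p_a, p_B):.4f}")
# As p(A) rises, the A+B combination becomes more common at exactly
# the rate the a+B combination becomes rarer: the 'cuts both ways'
# point made in the comment above.
```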
petrushka: “I continue to be amused by the fact that IDists want the respectability of mathematics, but deny the one mathematical route to modelling the process they wish to discredit.”

“It just happened, that’s all” is hard to model, mathematically or otherwise.

Mung
November 25, 2012 at 05:18 PM PDT
gpuccio: “I hope that will lower my rate of typos”

Maybe, but it may also take some of the fun out of reading your posts!

Mung
November 25, 2012 at 03:45 PM PDT
“I just don’t see that you can say anything about a DNA [s]equence without looking for homologies with known sequences.”

“…which is very easy, I have done it a lot of times.”

Could you link to just one example?

Alan Fox
November 25, 2012 at 03:11 PM PDT
keiths: “The added context just reinforces my point. You are saying that if we currently lack knowledge of selectable intermediates, then the only ‘realistic’ way to model NS is to assume that there are none and that NS therefore plays no role at all.”

Yes.

“That’s as ridiculous as saying ‘I haven’t measured the air resistance of my new car design yet. Therefore the only realistic model of my car’s performance must assume that air resistance plays no role at all.’”

No. We know that there is air resistance, although we may not have measured it in that specific case. But we have no evidence that your selectable intermediates exist. They are only in your imagination, as far as we know. I am not saying that we know they don’t exist. I am definitely saying that we have no reason at all to believe that they exist.

“The phrase ‘the only model he can really build at present’ does not mean the same thing as ‘the only realistic model of NS.’ And as I pointed out, Joe did the right thing in his equations. Instead of foolishly assigning NS no role, as you recommend, he included fitness parameters in his equations. His equations therefore apply both in cases where selectable intermediates exist and in cases where they don’t. You just have to plug in appropriate values for the fitness parameters.”

No. He just invented parameters for which there is no empirical support. He is modeling hypothetical selection, certainly not NS. Look at his “example” of 1s and 0s: can you explain what relationship that has with NS? None at all. And yet he says that he is modeling NS, and demonstrating that NS can generate CSI. That is ridiculous, superficial, and completely false.

“Joe’s definition of fitness covers both natural and artificial selection. Your airy dismissals might be more persuasive if you actually understood what you are airily dismissing.”

I understand it perfectly. And airily! I have already answered your “arguments” based on hierarchies. I won’t do it again. And they are irrelevant to the existence of selectable intermediates. There is only one way to prove the existence of selectable intermediates: find them. Nested hierarchies will not help.

By the way, thank you for the suggestion about setting the spell checker to English. It seems to work. I hope that will lower my rate of typos :)

gpuccio
November 25, 2012 at 03:05 PM PDT
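(For reference, this is where a fitness parameter enters a selection model. The sketch below is not Felsenstein's actual model; it is just the textbook one-locus haploid recursion. Note that "no selectable intermediates" is simply the s = 0 special case, which is the substance of the disagreement above:)

```python
def next_freq(p: float, s: float) -> float:
    """One generation of haploid selection: an allele with fitness
    1 + s against a baseline of 1. With s = 0 the frequency does not
    change, i.e. selection plays no role."""
    return p * (1 + s) / (1 + s * p)

p0 = 0.01
for s in (0.0, 0.1):
    p = p0
    for _ in range(100):
        p = next_freq(p, s)
    print(f"s={s}: frequency after 100 generations = {p:.3f}")
# s=0.0 leaves the allele at 0.010; s=0.1 carries it to ~0.993.
```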
Allan Miller: Design is the basis of that paper. What is designed could never have been obtained by RV and NS. And the function they find is not clear, and minimal. That paper is not a good argument for your cause.

gpuccio
November 25, 2012 at 02:53 PM PDT
Toronto: “I hope you understand what this means for your argument and also Upright BiPed’s. Your ‘dFSCI’ and UBP’s semiotic codes are no longer arbitrary and cannot be considered improbable purely from a statistical perspective, if actual chemistry is involved.”

No, I don’t understand. I have not been able to understand you for some time now. What do you mean by “your dFSCI is no longer arbitrary”? (I will leave UB free to answer for himself.) Because chemistry is involved? You must have lost any residual clarity of reasoning! How do we compute dFSI? Try to answer; I have said it at least 50 times, I believe. We measure the search space… that is easy… and then? Guess what: we measure the target (functional) space. Excuse me, can you see what that means? We have to evaluate what sequences express the defined function. What do you think is the reason why a protein sequence expresses a function? Guess! Biochemistry. What has that to do with “arbitrary”? The laws of biochemistry obviously dictate which sequences are functional and which are not. That’s exactly what Axe is investigating. Whoever said anything different? The statistical aspect is to compute how likely it is to get a functional sequence by RV. You may not have realized it, but the biochemical processes that generate random variation at the nucleotide level are not aware, in any way, of the biochemical laws that determine the function of a protein. Simple, but you seem to miss it. So, RV in the genome is certainly “arbitrary” with respect to the biochemical laws that determine a protein’s functionality. It’s as simple as that. And you must be desperate, to try this kind of argument.

“Any bit configuration under a certain probability threshold has a possibility of existing without the aid of a conscious designer, according to ID proponents themselves.”

No. Any bit configuration has some probability of being generated in a random system. That is its only probability. ID proponents have no responsibility for that. ID proponents choose appropriate thresholds to be able to infer design safely, without any reasonable risk of false positives. That is not the same thing as what you are saying.

“If I can design a 7 bit string of bits that represent the ASCII letter ‘A’, that configuration is very likely to occur randomly.”

To be precise, in a system where all configurations have the same probability, that configuration has a probability of 1 : 2^7, that is 0.0078125, of being found in a single attempt. That is all.

“Show me where the ‘trick’ is there.”

Let’s go on, and you will see…

“Any string under a threshold X, has a possibility of occurring 1 in X configurations, regardless of whether that string is identical to some designed string.”

Obviously. Not completely precise, but I suppose the idea is correct.

“That is the whole point of Dembski’s and other IDists’ improbability arguments, that something above a certain threshold needs design, not a configuration below that threshold.”

Completely wrong. The idea is simply that, above some appropriate threshold, an event is simply too improbable, and we refuse a random process as a credible explanation. That does not mean that the event “needs design”. Your terminology is naive, imprecise, and confounding. And it does not mean that some event below the threshold can easily come about randomly. The threshold is always very high, because we need absolute specificity. But many events under the threshold are still extremely unlikely. Yours is simply a reification of a simple inference procedure based on probability.

“If I design a self-replicator below the UPB, that string has a possibility of existing due to random processes.”

No. It will probably never happen through a random process. If you really believe it can, just get it from a random process.

“I intend to generate a significant number of bits above my threshold.”

If you want to falsify dFSCI, you have to generate a string with dFSCI (at an appropriate threshold) entirely by non-designed mechanisms. As I have said, you cannot generate a designed string of, say, 499 bits, then add 2 bits by simple random variation, and say that you have overcome the probability barriers of the UPB. You have overcome a probability barrier of two bits, and designed 499 bits of functionality. Even a child would understand that. If you really believe in such a silly argument, you are completely lost.

gpuccio
November 25, 2012 at 02:51 PM PDT
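(The computation described above fits in a few lines. A minimal sketch; the function name is mine, and the numbers are the ones from the exchange:)

```python
from math import log2

def dfsi_bits(target_space: float, search_space: float) -> float:
    """Functional information as described above: -log2 of the ratio
    of functional (target) to total (search) configurations."""
    return -log2(target_space / search_space)

# The 7-bit ASCII 'A' example: one functional configuration in a
# space of 2^7, i.e. P = 1/128 = 0.0078125.
print(dfsi_bits(1, 2 ** 7))          # 7.0 bits

# The threshold check of the inference procedure; 500 bits is the
# conventional UPB-scale figure discussed in the exchange.
print(dfsi_bits(1, 2 ** 7) >= 500)   # False: no design inference
```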
Mark: “If something is logically possible (even if empirically false) it should be possible to imagine it. So please could you describe what it would be like to have: a design explanation for the configuration of a digital string which does not imply a design origin (You quote the example of the cloud that was designed by an airplane a few minutes ago. Surely the airplane creating the cloud is the origin? In any case that is not the configuration of a digital string). a design origin for the configuration of a digital string where the correct explanation of the configuration of the digital string is not designed”

I am not sure I understand.

a) No design explanation implies logically that the origin of the object is a design process. A design explanation infers a design origin. An inference is not a logical implication. It is an attempt at knowing something that we don’t really know for certain. So, what do you mean? I make a design inference for a protein, by dFSCI. Does the fact that I made that inference logically imply that the protein had a design origin? No. My inference can well be wrong, like any scientific inference. If you just mean that a design explanation includes the hypothesis of a design origin as part of the explanation, that is true. And absolutely trivial. So, what do you really mean? My example of the airplane, even if analogical and not digital, shows the difference very clearly. When I give the explanation (an airplane designed the cloud image) I don’t know the origin of the cloud image, because I was not present; I have just arrived. But I make an inference, following some personal reasoning, and I give an explanation based on the hypothesis of a designing airplane. I can ask the people who were already there, and check whether my inference corresponds to the facts they observed, or not.

b) I am totally in the dark about your second request. You ask for: “a design origin for the configuration of a digital string where the correct explanation of the configuration of the digital string is not designed”. What I wrote is: “If the origin of the configuration of a digital string was a design process (as explained before, an origin is not an object, and cannot be ‘designed’) then a design explanation of the configuration is empirically correct (that is, the inference made in the explanation, that the origin of the information was a design process, corresponds to facts)”. Therefore, if the configuration of an object has a design origin (as a fact), then certainly the only correct explanation for the emergence of that configuration is a design explanation. A non-design explanation, in this case, would simply be wrong. So, if I understand well what you are asking for, it is simply impossible.

gpuccio
November 25, 2012 at 02:24 PM PDT
Alan Fox: “Your claim seems to be you can tell something about the origins of, say, a sequence of DNA, merely by looking at it, is that right?”

This is not my claim. My claim is that it is possible to make a design inference about a protein, or other kinds of strings, when we can recognize and define a function for it, measure the target space / search space ratio, carefully consider the string itself and the system and time span of its emergence, and affirm dFSCI for that string according to the right procedure. That is not “merely by looking at it”. So no, that’s not right.

“Now, it is open to us these days, now that we can sequence whole genomes, to compare and find homologies. So a whole new field of study has opened up.”

Obviously. That’s exactly the field of study that supports ID, and falsifies the RV + NS theory. My argument is, indeed, about the emergence of new protein domains that share no homologies with other previously existing sequences. How do we know that? By looking for homologies, and not finding any. As I said, more than 6000 groups can be derived from the SCOP database that share less than 10% homology.

“We can also synthesize a protein and test for biological function such as enzymatic activity. (According to Hazen, this is what FSC represents.)”

Yes, obviously. And we can study the protein functional space in many other ways, both top down and bottom up. That’s what Axe is doing.

“Durston makes no claim to be able to conclude anything about the origins of a protein sequence.”

I never said he did. At least, not in that paper. I believe it would never have been published if he had. I have always said that I make a design inference for the 29 protein families for which Durston computed a dFSI (or FSC, as you like) higher than 150 bits, which is my proposed threshold for the emergence of biological information on our planet, in our planet’s time span.

“I can make up (I mean just write down an arbitrary string) a sequence of nucleotides. Can you tell me, just by looking at it, anything about its origins or its functionality?”

Certainly not “just by looking at it”. But there are the existing proteins in the proteome, for many of which we know the biochemical function perfectly well. And there are many indirect ways to approximate the functional space for them, and therefore to compute their dFSI. Durston’s method is the best at present. Axe is approaching the problem differently.

“I may have stumbled on a novel functional protein. Unless we synthesize and test the protein, how could we possibly know?”

We can synthesize and test the protein. That’s what Szostak did, with the protein he had designed. And, as said, there are many other ways to explore the functional space of proteins. That field is in rapid expansion, as it is strictly connected to protein engineering. We will soon know much more about that. And we already know much.

“What does your version of dFSI/dFSCI/FSC do in reality?”

It gives a firm foundation to a design inference for many known proteins, like the 29 families in Durston’s paper.

“I just don’t see that you can say anything about a DNA sequence without looking for homologies with known sequences…”

…which is very easy; I have done it a lot of times.

“…or testing as-yet unknown sequences for any (which will not be a quick or easy task) function.”

That is the worst way to approach the problem, but it can be done. A better way is to approach the question top down, like Axe, by checking how robust to change an existing function is, or bottom up, for example by showing how “easy” it is to find a naturally selectable sequence in a random library.

gpuccio
November 25, 2012 at 02:05 PM PDT
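(For concreteness, the core of a Durston-style FSC figure is a column-entropy calculation over an alignment of sequences sharing a function. This toy sketch omits the null-state and sample-size corrections of the published method, and the alignment is invented for illustration:)

```python
from math import log2
from collections import Counter

def fits(alignment: list[str]) -> float:
    """Toy functional-sequence-complexity estimate: sum over
    alignment columns of log2(20) minus the observed column entropy,
    so conserved (function-constrained) columns contribute most."""
    total = 0.0
    for i in range(len(alignment[0])):
        column = [seq[i] for seq in alignment]
        counts = Counter(column)
        h = -sum((c / len(column)) * log2(c / len(column))
                 for c in counts.values())
        total += log2(20) - h
    return total

# Toy alignment: column 1 fully conserved (maximal contribution),
# column 2 variable (smaller contribution).
toy = ["MK", "MR", "MK", "MQ"]
print(f"{fits(toy):.2f} functional bits")   # ~7.14
```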
What has the TSZ thread “What Has Gpuccio’s challenge shown?” demonstrated? That evos will say anything to try to distract from the fact that their position does not have any positive evidence. Congrats to Mark Frank; nice job, ace…

Joe
November 25, 2012 at 10:45 AM PDT
keiths: “3. The assumption that out of trillions of possibilities, the designer just happens to behave in one of the few ways that produce an objective nested hierarchy and thus make it appear that unguided evolution is operating.”

Except that with unguided evolution we wouldn’t expect to see an objective nested hierarchy. You are just a gullible moron. So there is no way we can assume that keiths posts in good faith, because it is obvious that keiths is totally clueless.

Joe
November 25, 2012 at 10:42 AM PDT
Alan Fox runs back to the safety of TSZ: “This is what puzzles me. How do you know a string is not a random string that just happens to coincide with a designed string? A random DNA sequence can still be translated into a protein. How do we know without actually doing the synthesis and checking the resultant protein for properties and functions?”

Where did you get that random DNA sequence from, Alan? Can you demonstrate that blind and undirected processes produced it? If you cannot, then the only reason to even bring it up is that you do not understand the debate.

“Can gpuccio distinguish between a DNA sequence that codes for a functional protein from a sequence just pulled from the air?”

No need to, as DNA sequences do NOT appear out of the air.

“Of course that arbitrary sequence could by chance code for a protein with some activity! I’ve asked him and am interested to see the answer.”

You want an answer to your strawman? Now, Alan, you are and have been ducking many of our questions and our refutations of your bald assertions. Perhaps you should get to answering those.

Joe
November 25, 2012 at 09:11 AM PDT
gpuccio: “You can obviously invent all the parameters. You can imagine that 20 intermediates exist, that each of them is more functional than the previous one, and invent a parameter for each selective reproduction rate. You can invent that the transition from one intermediate to the other is not more complex than a few bits. You can build any model you like, but what will it be?”

Design by poofing.

Mung
November 25, 2012 at 08:56 AM PDT
Mr Bell must be as ‘thick as two short planks’. And your citing his ‘pronunciamento’ does Mr Miller scant credit, to put it mildly.

Axel
November 25, 2012 at 08:23 AM PDT
“Natural selection is a simple theory because it can be understood by everybody; to misunderstand it requires special training.”

Indeed, it does. Unfortunately, the only people who need that ‘special training’ are the very people who conceived it, and are blissfully unaware that it is farcically mythological in nature, despite the ever-growing evidence against it. Only one major, unanswerable question is required for it to be disqualified from the most cursory consideration; for example, the Cambrian Explosion. Until that is persuasively answered, the whole matter should be ‘off the table’.

Axel
November 25, 2012 at 08:18 AM PDT
And Allan Miller chimes in: “Time once again for my favourite quote, from Graham Bell’s Masterpiece of Nature: ‘Natural selection is a simple theory because it can be understood by everybody; to misunderstand it requires special training.’”

Jerk. We understand natural selection, Allan. We understand that nothing gets selected and it doesn’t do anything.

“Like NS, AS presumably ‘does nothing’.”

What a dolt! The two are NOT the same, Allan. AS has an agency doing the selecting, whereas NS is just a result.

Joe
November 25, 2012 at 07:50 AM PDT
And Joe Felsenstein continues to prove that he does NOT understand science: “All this avoids the real question: what proof do the UD commenters have that natural selection cannot result in Complex Specified Information being in the genome?”

LoL! Earth to Joe: YOU need positive evidence that natural selection is up to the task. And you do NOT have any. I take it that it bothers you that your position doesn’t have anything.

Joe
November 25, 2012 at 07:21 AM PDT
Alan Fox spews: “It is a theory that makes predictions. Predictions such as an objective nested hierarchy of relatedness in living and extinct organisms.”

That is incorrect, as natural selection does NOT make any predictions, and Alan doesn’t even know what an “objective nested hierarchy” is. The reality is that any gradual process would produce a smooth blending of characteristics, and that would not lead to an objective nested hierarchy. Proof 1 is in family trees: we canNOT create the same objective nested hierarchies out of family trees as we do out of the alleged tree of life. Proof 2: there isn’t an objective nested hierarchy amongst prokaryotes. And those proofs prove that Alan Fox is ignorant of nested hierarchies. BTW Alan, experiments have demonstrated that the theory of evolution is total BS. For one, Lenski’s shows how limited evolutionary processes are.

“I can make up (I mean just write down an arbitrary string) a sequence of nucleotides.”

What a jerk. Alan, we need to see that sequence. Just writing it down is a moron’s challenge, and here you are.

Joe
November 25, 2012 at 06:39 AM PDT
gpuccio: Sorry, I got distracted. I wanted to just point out that I am having a hard time seeing “the emperor’s clothes”. Your claim seems to be you can tell something about the origins of, say, a sequence of DNA, merely by looking at it, is that right? Now, it is open to us these days, now that we can sequence whole genomes, to compare and find homologies. So a whole new field of study has opened up. We can also synthesize a protein and test for biological function such as enzymatic activity. (According to Hazen, this is what FSC represents.) Durston makes no claim to be able to conclude anything about the origins of a protein sequence. He only claims:

“This method successfully distinguishes between FSC and OSC, RSC, thus, distinguishing between order, randomness, and biological function.”

I can make up (I mean just write down an arbitrary string) a sequence of nucleotides. Can you tell me, just by looking at it, anything about its origins or its functionality? I may have stumbled on a novel functional protein. Unless we synthesize and test the protein, how could we possibly know? What does your version of dFSI/dFSCI/FSC do in reality? I just don’t see that you can say anything about a DNA sequence without looking for homologies with known sequences, or testing as-yet unknown sequences for any function (which will not be a quick or easy task).

Alan Fox
November 25, 2012 at 06:19 AM PDT
Eric Anderson writes: “NS is not a force; it’s a process. Even that is generous. In reality, natural selection is simply a label attached to the results of phenomena that, in most cases, we are unable to clearly identify and which we do not fully understand.”

The concept is not complicated, in my view. Whether you think it is a real process depends on how convincing you find its explanatory power and supporting evidence.

“On rare occasions we are able to look at a population and find an obvious biological characteristic that will result in that particular phenotype becoming more prevalent in the population.”

I think that depends on how hard we look to see how the environmental niche and its occupants suit each other.

“In nearly all cases, however, we look at things after the fact, note that some particular phenotype tended to prosper more than its less fortunate brethren, and proclaim that this was a result of ‘natural selection’ in action.”

It is a theory that makes predictions. Predictions such as an objective nested hierarchy of relatedness in living and extinct organisms: it’s what we find. Predictions such as morphologies running through genomes in a similar pattern of relatedness: it’s what we find. Predictions that mutations occur and can be selected for: it’s what Lenski found.

“Natural selection is just a convenient label — a convenient placeholder for our current state of ignorance. It doesn’t do anything.”

If you’re unconvinced, you’re unconvinced. Fair enough. You must have a better explanation, I guess.

“(Incidentally, I should add the following: Although the majority of people (even evolutionary critics) don’t like to harp on this point, it is nevertheless still quite true that in most cases references to natural selection operate as useless circular tautologies. This is very common in papers and news stories in which this or that biological feature is ‘explained’ as being the result of natural selection.)”

“Explanation” here means “is consistent with” the theory of evolution. If experimental observations that contradicted an aspect of the theory were made, then (assuming those observations were confirmed, and you can be sure they would be closely scrutinized) the theory would have to be modified or abandoned. To date, the theory of evolution is the only explanation that is consistent with observations. Were there another theory that had a better fit, was consistent with the evidence, and made better predictions, it would be embraced by the scientific community. Perhaps an alternative theory will come along soon. Maybe even the Intelligent Design community will come up with one! Never say never!

Alan Fox
November 25, 2012 at 05:48 AM PDT
Mark: “Consider these statements: If an explanation of the configuration of a digital string is a design explanation then that explanation entails the origin was designed. If the origin of the configuration of a digital string was designed then a true explanation of the configuration is a design explanation. Are they true?”

The first is certainly not true. As I have said, a design explanation does not imply a design origin. It only makes an inference about it. The second one should be reformulated as follows (I will explain the reasons in parentheses): “If the origin of the configuration of a digital string was a design process (as explained before, an origin is not an object, and cannot be ‘designed’) then a design explanation of the configuration is empirically correct (that is, the inference made in the explanation, that the origin of the information was a design process, corresponds to facts)”. It is possible that the explanation includes other aspects, that may be correct or not, but if we infer a design origin for the information, and if facts confirm the design origin, then we know that at least the design inference was correct.

gpuccio
November 25, 2012 at 05:36 AM PDT