Uncommon Descent Serving The Intelligent Design Community

The TSZ and Jerad Thread, III — 900+ and almost 800 comments in, needing a new thread . . .

Categories
Culture
Design inference
Education
Evolution
ID Foundations
science education
specified complexity
worldview

Okay, the thread of discussion needs to pick up from here on.

To motivate discussion, let me clip here comment no 795 in the continuation thread, which I have marked up:

_________

>> 795Jerad October 23, 2012 at 1:18 am

KF (783):

At this point, with all due respect, you look like someone making stuff up to fit your predetermined conclusion.

I know you think so.

[a –> Jerad, I will pause to mark up. I would further with all due respect suggest that I have some warrant for my remark, especially given how glaringly you mishandled the design inference framework in your remark I responded to earlier.]

{Let me add a diagram of the per aspect explanatory filter, using the more elaborated form this time}

The ID Inference Explanatory Filter. Note in particular the sequence of decision nodes

 

You have for sure seen the per aspect design filter and know that the first default explanation is that something is caused by law of necessity, for good reason; that is the bulk of the cosmos. You know similarly that highly contingent outcomes have two empirically warranted causal sources: chance and choice.

You know full well that the reason chance is the default is to give the plain benefit of the doubt to chance, even at the expense of false negatives.

I suppose. Again, I don’t think of it like that. I take each case and consider its context before I decide what the most likely explanation is.

[b –> You have already had adequate summary on how scientific investigations evaluate items we cannot directly observe based on traces and causal patterns and signs we can directly establish as reliable, and comparison. This is the exact procedure used in design inference, a pattern that famously traces to Newton’s uniformity principle of reasoning in science.]

I think SETI signals are a good example of really having no idea what’s being looked at.

[c –> There are no, zip, zilch, nada, SETI signals of consequence. And certainly no coded messages. But it is beyond dispute that if such a signal were received, it would be taken very seriously indeed. In the case of dFSCI, we are examining patterns relevant to coded signals. And, we have a highly relevant case in point in the living cell, which points to the origin of life. Which of course is an area that has been highlighted as pivotal on the whole issue of origins, but which is one where you have determined not to tread any more than you have to.]

I suppose, in that case, they do go through something like your steps . . . first thing: seeing if the new signal is similar to known and explained stuff.

[d –> If you take off materialist blinkers for the moment and look at what the design filter does, you will see that it is saying, what is it that we are doing in an empirically based, scientific explanation, and how does this relate to the empirical fact that design exists and affects the world leaving evident traces? We see that the first thing that is looked for is natural regularities, tracing to laws of mechanical necessity. Second — and my home discipline pioneered in this in C19 — we look at stochastically distributed patterns of behaviour that credibly trace to chance processes. Then it asks, what happens if we look for distinguishing characteristics of the other cause of high contingency, design? And in so doing, we see that there are indeed empirically reliable signs of design, which have considerable relevance to how we look at among other things, origins. But more broadly, it grounds the intuition that there are markers of design as opposed to chance.]
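As a programmer's paraphrase, the sequence of decision nodes described in this markup (and in the filter diagram above) can be written out as a short Python sketch. This is purely illustrative: the predicate names (`law_like`, `complexity_bits`, `specified`) and the dict-based interface are placeholders of my own, not any published implementation of the filter.

```python
def explanatory_filter(aspect, threshold_bits=500):
    """Illustrative sketch of the per-aspect explanatory filter.

    Node 1: default to law/necessity for natural regularities.
    Node 2: default highly contingent outcomes to chance unless
            complexity exceeds the bits threshold.
    Node 3: infer design only when complexity AND specificity
            are jointly present.
    """
    if aspect["law_like"]:
        return "necessity"
    if aspect["complexity_bits"] <= threshold_bits:
        return "chance"            # benefit of the doubt goes to chance
    if aspect["specified"]:
        return "design"            # complex AND specified
    return "chance"                # complex but unspecified


# A highly contingent, specified, 1,000-bit aspect reaches the design node:
verdict = explanatory_filter(
    {"law_like": False, "complexity_bits": 1000, "specified": True}
)
print(verdict)
```

Note how the two defaults (necessity first, then chance) build in the tolerance for false negatives described above.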

And you know the stringency of the criterion of specificity (especially functional) JOINED TO complexity beyond 500 or 1,000 bits worth, as a pivot to show cases where the only reasonable, empirically warranted explanation is design.

I still think you’re calling design too early.

[e –> Give a false positive, or show warrant for the dismissal. Remember, just on the solar system scope, we are talking about a result that identifies that by using the entire resources of the solar system for its typically estimated lifespan to date, we could only sample something like 1 straw to a cubical haystack 1,000 light years across. If you think that the sampling theory result that a small but significant random sample will typically capture the bulk of a distribution is unsound, kindly show us why, and how that affects sampling theory in light of the issue of fluctuations. Failing that, I have every epistemic right to suggest that what we are seeing instead is your a priori commitment to not infer design peeking through.]
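The arithmetic behind this "one straw" comparison is easy to reproduce. The three inputs below are the round order-of-magnitude figures used throughout these threads, taken here as assumptions for illustration, not as measured values.

```python
# Round figures used in these threads (assumptions, not measurements):
ATOMS_IN_SOLAR_SYSTEM = 10**57    # order-of-magnitude atom count
AGE_IN_SECONDS = 10**17           # roughly 13.7 billion years
FASTEST_RATE_PER_SECOND = 10**14  # fast chemical-reaction timescale

# Upper bound on the number of samples the solar system could take:
max_samples = ATOMS_IN_SOLAR_SYSTEM * AGE_IN_SECONDS * FASTEST_RATE_PER_SECOND

# Size of the configuration space at the 500-bit threshold:
config_space = 2**500  # about 3.27e150

fraction_sampled = max_samples / config_space
print(f"maximum samples:  1e{len(str(max_samples)) - 1}")
print(f"fraction sampled: {fraction_sampled:.1e}")
```

On these inputs the maximum number of samples is about 10^88, and the fraction of the 500-bit space sampled comes out on the order of 10^-63, which is the quantitative content of the straw-to-haystack comparison.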

And, to be honest, the only things I’ve seen the design community call design on is DNA and, in a very different way, the cosmos.

[f –> Not so. What happens is that design is most contentious on these, but in fact the design inference is used all the time in all sorts of fields, often on an intuitive or semi intuitive basis. As just one example, consider how fires are explained as arson vs accident. Similarly, how a particular effect in our bodies is explained as a signature of drug intervention vs chance behaviour or natural mechanism. And of course there is the whole world of hypothesis testing by examining whether we are in the bulk or the far skirt and whether it is reasonable to expect such on the particularities of the situation.]

The real problem, with all respect, as already highlighted, is obviously that this filter will point out cell based life as designed. Which, even though you do not have an empirically well warranted causal explanation otherwise, you do not wish to accept.

I don’t think you’ve made the case yet.

[f –> On the evidence it is plain that there is a controlling a priori commitment at work, so the case will never be perceived as made, as there will always be a selectively hyperskeptical objection that demands an increment of warrant that is calculated or by unreflective assertion, unreasonable to demand, by comparison with essentially similar situations. Notice, how ever so many swallow a timeline model of the past without batting an eye, but strain at a design inference that is much more empirically reliable on the causal patterns and signs that we have. That’s a case of straining at a gnat while swallowing a camel.]

I don’t think the design inference has been rigorously established as an objective measure.

[g –> Dismissive assertion, in a context where “rigorous” is often a signature of selective hyperskepticism at work; cf. the above. The inference on algorithmic digital code that has been the subject of Nobel Prize awards should be plain enough.]

I think you’ve decided that only intelligence can create stuff like DNA.

[h –> Rubbish, and I do not appreciate your putting words in my mouth or thoughts in my head that do not belong there, to justify a turnabout assertion. You know, or full well should know, that — as is true for any significant science — a single well documented case of FSCO/I reliably coming about by blind chance and/or mechanical necessity would suffice to break the empirical reliability of the inference that the only observed — billions of cases — cause of FSCO/I is design. That you are objecting by projecting question-begging (that is exactly what your assertion means) instead of putting forth clear counter-examples is strong evidence in itself that the observation is quite correct. That observation is backed by the needle in the haystack analysis that shows why, beyond a certain level of complexity joined to the sort of specificity that makes relevant cases come from narrow zones T in large config spaces W, it is utterly unlikely to observe cases E from T based on blind chance and mechanical necessity.]

I haven’t seen any objective way to determine that except to say: it’s over so many bits long so it’s designed.

[i –> Strawman caricature. You know better, a lot better. You full well know that we are looking at complexity AND specificity that confines us to narrow zones T in wide spaces of possibilities W, such that the atomic resources of our solar system or the observed cosmos will be swamped by the amount of haystack to be searched. You have been given the reasoning from sampling theory as to why blind samples comparable to 1 straw from a hay bale 1,000 light years across (as thick as our galaxy) will reliably pick up only the bulk, even if the haystack were superposed on our galaxy near earth. Indeed, just above you had opportunity to see a concrete example of a text string in English and how easily it passes the specificity-complexity criterion.]

And I just don’t think that’s good enough.

[j –> Knocking over a strawman. Kindly, deal with the real issue that has been put to you over and over, in more than adequate details.]

But that inference is based on what we do know, the reliable cause of FSCO/I and the related needle in the haystack analysis. (As was just shown for a concrete case.)

But you don’t know that there was an intelligence around when one needed to be around, which means you’re assuming a cause.

[k –> Really! You have repeatedly been advised that we are addressing inference on empirically reliable sign per patterns we investigate in the present. Surely, that we see that reliably, where there is a sign, we have confirmed the presence of the associated cause, is an empirical base of fact that shows something that is at least a good candidate for being a uniform pattern. We back it up with an analysis that shows on well accepted and uncontroversial statistical principles, why this is so. Then we look at cases where we see traces from the past that are comparable to the signs we just confirmed to be reliable indices. Such signs, to any reasonable person not ideologically committed to a contrary position, will count as evidence of similar causes acting in the past. But more tellingly, we can point to other cases such as the reconstructed timeline of the earth’s past where on much weaker correlations between effects and putative causes, those who object to the design inference make highly confident conclusions about the past and in so doing, even go so far as to present them as though they were indisputable facts. The inconsistency is glaringly obvious, save to the true believers in the evo mat scheme.]

And you’re not addressing all the evidence which points to universal common descent with modification.

[l –> I have started from the evidence at the root of the tree of life and find that there is no credible reason to infer that chemistry and physics in some still warm pond or the like will assemble, at once or incrementally, a gated, encapsulated, metabolising entity using a von Neumann, code based self replicator, based on highly endothermic and information rich macromolecules. So, I see there is no root to the alleged tree of life, on Darwinist premises. I look at the dFSCI in the living cell, a trace from the past, note that it is a case of FSCO/I, and on the pattern of causal investigations and inductions already outlined I see I have excellent reason to conclude that the living cell is a work of skilled ART, not blind chance and mechanical necessity. Thereafter, any evidence of common descent or the like is to be viewed in that pivotal light. And I find that common design rather than descent is superior: the systematic pattern of — too often papered over — islands of molecular function (try protein fold domains), the suddenness, stasis and scope of fresh FSCO/I involved in novel body plans and reflected in the 1/4 million plus fossil species, plus mosaic animals etc. that point to libraries of reusable parts, and more, give me high confidence that I am seeing a pattern of common design rather than common descent. This is reinforced when I see that ideological a prioris are heavily involved in forcing the Darwinist blind watchmaker thesis model of the past.]

We’re going around in circles here.

[m –> On the contrary, what is coming out loud and clear is the ideological a priori that drives circularity in the evolutionary materialist reconstruction of the deep past of origins. KF]>>

___________

GP at 796, and following, is also a good pick-up point:

__________

>>796

  1. Joe:

    If a string for which we have correctly assessed dFSCI is proved to have historically emerged without any design intervention, that would be a false positive. dFSCI has been correctly assessed, but it does not correspond empirically to a design origin.

    It is important to remember that no such example is empirically known. That’s why we say that dFSCI has 100% specificity as an indicator of design.

    If a few examples of that kind were found, the specificity of the tool would be lower. We could still keep some use for it, but I admit that its relevance for a design inference in such a fundamental issue like the interpretation of biological information would be heavily compromised.

  2. If you received an electromagnetic burst from space that occurred at precisely equal intervals and kept to sidereal time, would that be a candidate for SCI?

  3. Are homing beacons SCI?

  4. Jerad:

    As you should know, the first default is to look for mechanical necessity. The neutron star model of pulsars suffices to explain what we see.

    Homing beacons come in networks — here I look at DECCA, LORAN and the like, up to today’s GPS — and are highly complex nodes. They are parts of communication networks with highly complex and functionally specific communication systems, where encoders, modulators, transmitters, receivers, demodulators and decoders have to be precisely and exactly matched.

    Just take an antenna tower if you don’t want to look at anything more complex.

    KF>>

__________

I am fairly sure that this discussion, now in excess of 1,500 comments, lets us all see what is really going on in the debate over the design inference. END

Comments
Toronto:
Have you changed your mind?
Do you have a mind that can be changed?

Mung
November 21, 2012 at 06:28 PM PDT
Allan Miller:
I would encourage Mung to write a GA. It’s an interesting programming task, and may be instructive in terms of gaining an understanding of what faithful representations of evolutionary mechanisms do to populations of replicating strings. The relevance of it to biological considerations may (or may not) become apparent as part of the process.
I did write one. People at TSZ were not impressed. :) But I'm still willing. Would you or anyone else there like to present a programming challenge involving a GA? I can publish my code online and we can discuss it openly. I am completely open to it as a learning experience.
If you don’t or won’t understand the evolutionary process, you are in a position neither to critique it as an explanatory mechanism, nor to declare it incapable of functioning as a ‘necessity mechanism’ alternative to active string design.
The debate is over whether GA's accurately represent the evolutionary process. Take Elizabeth's GA. Her strings had a pre-defined function, and she selected them based on how well they performed the function. So hers was an optimization problem that she was trying to solve. Is that your understanding of evolution? It's simply an optimization strategy? That cannot be. Optimization is goal seeking.
Start with strings of zero length, and keep tabs on descent from a collection of such null-strings numbered 1 to n. This is NOT an analogue of a DNA string of zero length, nor the OoL, it is an analogue of a replicator that can do no more than merely replicate, no better or worse than any other. Copy and kill them at random … and one of them will become the ancestor of all, guaranteed.
What does this have to do with natural selection? How does this explain functionality and the appearance of design?
Then introduce methods that add and change bits, recombination, and internal duplication of string segments and a fitness function operating on these now non-null bits of ‘extra’ string.
I can simply add those methods, I don't need to evolve them? How would that intervention be discernible from goddidit?
You are randomly patterning your replicators, introducing non-critical function which nonetheless affects fitness differentially. Your strings adapt to the prevailing conditions, as if designed to fit.
How does one introduce non-critical function without design? Without some sort of design decision, why should that non-critical function affect differential reproduction?

Mung
November 21, 2012 at 06:25 PM PDT
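Allan Miller's copy-and-kill scenario, the null-string replicators quoted above, is straightforward to simulate. The sketch below is a minimal version of my own (a Moran-style neutral drift process; the population size and seed are illustrative choices, not Miller's). It shows the "one of them will become the ancestor of all, guaranteed" outcome, and, as the exchange notes, it involves no selection or function at all.

```python
import random

def drift_to_fixation(n=50, seed=1):
    """n identical replicators, no selection: each step, one individual
    chosen at random is copied over another chosen at random
    (copy one, kill one).  Returns the founder lineage that eventually
    fixes, and how many copy/kill events it took."""
    rng = random.Random(seed)
    population = list(range(n))   # label each founder 0..n-1
    steps = 0
    while len(set(population)) > 1:       # more than one lineage left
        parent = rng.randrange(n)
        victim = rng.randrange(n)
        population[victim] = population[parent]
        steps += 1
    return population[0], steps

founder, steps = drift_to_fixation()
print(f"lineage {founder} fixed after {steps} copy/kill events")
```

With a fixed seed the run is reproducible; fixation of a single lineage is guaranteed in the limit regardless of the seed, which is exactly the point of the scenario.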
Joe Felsenstein:
At forums like UD you see repeated assertions that if we find CSI (or something), that there is no way that this could have been put into the genome by natural selection.
Meanwhile, over at forums like TSZ, we have posts by actual educators [paid by the state?] in which they assert that things are put into the genome by natural selection. Is that what you teach your students, Joe? Do you teach that natural selection is the "creative force" that makes a Creator unnecessary? And the strong evidence that you offer consists of? You're no skeptic. Is anyone posting at TSZ a real skeptic? I hear you banned the only true skeptic to show up there.

Mung
November 21, 2012 at 05:56 PM PDT
Allan Miller chokes:
Since GP rejects all models of evolution due to their inevitably being non-’natural’,
That is incorrect. We reject the alleged models because you cannot model what you do not understand, and you do not understand biology well enough to model biological evolution. Not only that, Intelligent Design is OK with evolution: Intelligent Design says that organisms were designed to evolve and evolved by design. That said, when AVIDA is given realistic parameters the model of evolution bites it, meaning it does not do what you require and actually does the opposite.

Joe
November 21, 2012 at 02:48 PM PDT
And omtwo admits it needs to get a life:
What amazes me is the continued absence of Dr Dembski from UD and the continued absence of any question as to why from any UDite.
To be amazed by something as mundane and uninteresting as that seals the deal on omtwo being a tosser and a wanker. Not necessarily in that order.

Joe
November 21, 2012 at 02:42 PM PDT
Joe Felsenstein:
At forums like UD you see repeated assertions that if we find CSI (or something), that there is no way that this could have been put into the genome by natural selection.
That's correct. And there are at least two reasons why it is the case.

1. Natural selection doesn't "put things into the genome." At best it can only preserve or eliminate what is already there. For something to be "selectable" it must already exist.
2. If natural selection is a deterministic mechanism, as you claim, it cannot create information.

Mung
November 21, 2012 at 02:15 PM PDT
And check out the egotard- he really thinks he invented "BWAAAAAHAHAHAHAHAHAHAHHAHAH". This is the sort of egotard it takes to be an evo. Richie "cupcake" Hughes, the Al Gore of "BWAAAHAHAHAHAHAAHAHAHAHA".

Joe
November 21, 2012 at 01:48 PM PDT
Joe Felsenstein, still clueless:
At forums like UD you see repeated assertions that if we find CSI (or something), that there is no way that this could have been put into the genome by natural selection.
Except it isn't an assertion. That is based on all of our knowledge of cause and effect relationships. Why do you guys try to blame us for your position's lack of supporting evidence?
The UD commenters have not realized that since Dembski’s Law of Conservation of Complex Specified Information collapsed,...
Collapsed under your tremendous girth, no doubt. Is that what you mean, Joe? Or are you still fantasizing?

Joe
November 21, 2012 at 01:43 PM PDT
Joe Felsenstein:
I am waiting for some theoretical framework (equations, simulations) showing that dFCSI cannot arise by natural selection.
Keep waiting. Nothing arises via natural selection. Natural selection is a filter. It keeps or rejects certain organisms that have already arisen.

Mung
November 21, 2012 at 10:46 AM PDT
Alan Fox:
To date, despite the manful attempts by hard-working commenters like Sal Cordova, Joe and mung, I never got a satisfactory reply. Do you think you can successfully use the EF?
You forgot to mention gpuccio. What do you think his dFSCI is an implementation of? So it's been used, successfully, and you know it.

Mung
November 21, 2012 at 10:40 AM PDT
kairosfocus- Seeing that von Braun was a Creationist, they probably reject rocket science. :roll:

Joe
November 21, 2012 at 08:52 AM PDT
PS: It is not exactly rocket science to see that digital strings are real, and that once we deal with above 500 bits or equivalent, we have more possibilities than the atoms of our solar system can reasonably explore to find deeply isolated and narrow zones by blind processes. In context, the high contingency of digital systems -- obvious for codes for different proteins -- eliminates blind necessity. Blind chance, or chance and necessity, runs into the needle in haystack challenge; here the haystack as discussed is 1,000 LY ON THE SIDE AND YOU GET ONE CHANCE TO PICK A ONE STRAW SIZED SAMPLE.

On billions of observed cases -- start with the Internet -- dFSCI is observed reliably, routinely and without exception, to come from intelligence. In the case of GA's and the like, they all start WITHIN islands of function -- cf the fitness function and its useful well behaved slope -- and proceed based on intelligently designed search and selection procedures that use carefully metered out chance variations to search within the island. Not a counter-example, but another example on the point. In this light we have a confident inductive inference that dFSCI is a reliable observable sign of design.

The problem with cell based life is not that this is contradicted by observed cases, but that such an inference runs counter to deeply entrenched institutionally dominant ideology. If you doubt me on this, kindly provide an empirically warranted, detailed, and widely accepted summary of OOL by chance and/or necessity only in a warm little pond or the like. Similarly, for OO body plans. Indeed, there is a still open 6,000 word essay challenge unmet since Sept 23rd.

kairosfocus
November 21, 2012 at 08:28 AM PDT
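For concreteness, the capacity arithmetic behind the 500-bit figure in the PS above is simple: a string's maximum information capacity is its length times log2 of its alphabet size. These are plain upper bounds on capacity (the alphabet sizes are standard; the example lengths are my own picks), not measurements of functional specificity.

```python
import math

def capacity_bits(length, alphabet_size):
    """Maximum information capacity of a digital string, in bits."""
    return length * math.log2(alphabet_size)

# How short a string already reaches the 500-bit threshold:
dna_250  = capacity_bits(250, 4)    # 250 DNA bases, 4-letter alphabet
prot_120 = capacity_bits(120, 20)   # 120 amino acids, 20-letter alphabet
ascii_72 = capacity_bits(72, 128)   # 72 characters of 7-bit ASCII

print(f"250 DNA bases : {dna_250:.0f} bits")
print(f"120 aa protein: {prot_120:.1f} bits")
print(f"72 ASCII chars: {ascii_72:.0f} bits")
```

By this measure even quite short strings cross the threshold on raw capacity alone, which is why the debate then turns on specificity rather than mere bit count.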
AF: Why not simply read here on in context? Your question has been directly and even trivially answered any time someone points to the dFSCI involved in D/RNA and proteins, once reasonable complexity is involved, as obtains with ALL observed life forms, from about 100 k bases on up. But then, I see where some on your side are now struggling to accept that the D/RNA code is just that, a digital symbolic, info-carrying code, familiar to anyone who has had to deal with a register level view of a significant digital system. This speaks volumes, none of it to your side's good. KF

kairosfocus
November 21, 2012 at 08:15 AM PDT
Clueless Joe Felsenstein:
I am waiting for some theoretical framework (equations, simulations) showing that dFCSI cannot arise by natural selection.
That is based on all observations and experiences. Meaning YOU need to step up and demonstrate that natural selection can produce dFSCI. The GAs you are so proud of start with the dFSCI that needs explaining.
Lacking that, I don’t find dFCSI of any use.
It is obvious that natural selection isn't of any use except to desperate evos who will use it to fool the fools.

Joe
November 21, 2012 at 07:55 AM PDT
Alan, That all depends on how you define "successful". I say it has been done for biological organisms, and to date neither you nor anyone else on this planet can produce any evidence to the contrary, ie that blind and undirected processes can produce a living organism from inanimate matter. As I said, the EF is a process that ensures that the investigator follows Newton's four rules of scientific investigation. That is all it is. And as a matter of fact everyone who conducts an investigation needs to use it to conduct a proper, scientific investigation. So I don't see what your issue with the EF is. Anyone with any interest in science should be able to see that it is as I say. So what is your problem with the EF? How do you think investigators determine the cause of the effect they are investigating?

Joe
November 21, 2012 at 07:49 AM PDT
@ kairosfocus You link to a version of Dembski's explanatory filter. A while ago (over seven years), I asked a question at ARN forum:
Can anyone point me to an example of the successful application of the EF to a biological system?
To date, despite the manful attempts by hard-working commenters like Sal Cordova, Joe and mung, I never got a satisfactory reply. Do you think you can successfully use the EF?

Alan Fox
November 21, 2012 at 06:39 AM PDT
Joe: Inter alia they seem to have problems correctly interpreting a simple flowchart. That speaks volumes. KF

kairosfocus
November 21, 2012 at 06:30 AM PDT
Mark Frank:
There are two stages to the ID argument: 1) Evolution can’t explain life. 2) Therefore life was designed.
Nope, nice strawman though. Geez, it's as if these alleged "skeptics" are just a bunch of willfully ignorant children.

Joe
November 21, 2012 at 04:55 AM PDT
LoL! Joe Felsenstein:
gpuccio does not have any mathematical theorem showing in a model of evolution that dFCSI cannot arise by natural selection
Have archaeologists put forth a mathematical theorem that says natural processes cannot produce Stonehenge? No. Joe Felsenstein seems to be totally ignorant of science, as he does not understand that his position requires POSITIVE evidence. Earth to Joe Felsenstein- YOU need positive evidence that natural selection, for example, is up to the task.

Joe
November 21, 2012 at 04:03 AM PDT
Alan Fox:
Well, if that’s so, why on Earth use another term?
They are not exactly the same.
Do you think Durston or you have the means to predict the functionality of proteins not yet existing in vivo or in vitro?
Why is that required? AGAIN, functionality is an OBSERVATION. We make an OBSERVATION that something is doing something, ie functionality, and then via science we try to explain how it came to do that. And Alan, if you had any evidence, any evidence at all, that unguided evolution could produce functioning proteins, you would post it. That you haven't posted it proves that you have nothing.

Joe
November 21, 2012 at 04:00 AM PDT
Gpuccio writes:
The problem, in a nutshell, is your quote mining.
In practice, the sequence space for possible novel functional states may not be known.
It's a simple statement of fact. What you quote that follows does not alter the meaning. Do you think Durston or you have the means to predict the functionality of proteins not yet existing in vivo or in vitro? A quote mine is selective quoting to alter the meaning that was intended. Not seeing how I have done that.

Alan Fox
November 21, 2012 at 01:22 AM PDT
dFSI and FSC are the same thing. Durston’s procedure is an empirical way to approximate the value of dFSI.
Well, if that's so, why on Earth use another term? Just to be clear, Durston's FSC and your dFSCI (and now you are using dFSI) are identical concepts? Now Durston builds on Hazen. The Hazen paper is very clear and easy to follow. FSC is a measured property arrived at by comparing the degree of function that, say, a particular protein sequence has at a specific task such as catalyzing a particular reaction. Functionality can be expressed as a rate of enzymic activity that can be compared. There is no question of drawing any inference about the origin of the sequence or its rarity in this process. Durston is less clear.

Alan Fox
November 21, 2012 at 01:17 AM PDT
Alan Fox: But there is no mention of dFSCI in the paper. Is it claimed that dFSCI and FSC (functional sequence complexity) are the same concept?

dFSI and FSC are the same thing. Durston's procedure is an empirical way to approximate the value of dFSI. That is already obvious in the opening phrase: "Abel and Trevors have delineated three aspects of sequence complexity, Random Sequence Complexity (RSC), Ordered Sequence Complexity (OSC) and Functional Sequence Complexity (FSC) observed in biosequences such as proteins. In this paper, we provide a method to measure functional sequence complexity." Emphasis mine. And: "As Abel and Trevors have pointed out, neither RSC nor OSC, or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life [5]. FSC includes the dimension of functionality".

You quote, from the paper: In practice, the sequence space for possible novel functional states may not be known. But why do you stop there? The paper goes on, as follows: "In practice, the sequence space for possible novel functional states may not be known. However, by considering particular proteins, estimated mutation rates, population size, and time, an estimated value for the probability can be chosen and substituted into the relevant components of Eqn. (9) to limit search areas around known biosequences that are observed, such as protein structural domains, to see what other possible states within that range might have some selective advantage."

The problem, in a nutshell, is your quote mining. Durston gives a method and applies it. And that method approximately measures the dFSI linked to a function.

gpuccio
November 21, 2012 at 01:01 AM PDT
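The per-column entropy measurement gpuccio describes from the Durston paper can be sketched in a few lines. This toy version is my own simplification, with a made-up four-sequence alignment: it measures functional sequence complexity as the summed reduction from ground-state uncertainty (log2(20) per site for proteins) to the observed per-column uncertainty. Durston et al.'s actual pipeline works on large curated alignments of real protein families, not toy data like this.

```python
import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy (bits) of one alignment column."""
    total = len(column)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(column).values())

def fsc_fits(alignment, alphabet_size=20):
    """Toy Durston-style functional sequence complexity (in fits):
    sum over columns of (ground-state entropy - observed entropy)."""
    ground = math.log2(alphabet_size)          # log2(20) for proteins
    ncols = len(alignment[0])
    cols = [[seq[i] for seq in alignment] for i in range(ncols)]
    return sum(ground - column_entropy(col) for col in cols)

# Made-up alignment: site 0 fully conserved, site 1 takes two values.
toy = ["AC", "AD", "AC", "AD"]
print(f"FSC of toy alignment: {fsc_fits(toy):.2f} fits")
```

A fully conserved column contributes the full log2(20) ≈ 4.32 fits, while a column with residual variability contributes less; that is the sense in which the measure approximates the functional information tied to the family.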
Joe Felsenstein: gpuccio does not have any mathematical theorem showing in a model of evolution that dFCSI cannot arise by natural selection

Why should I? I suppose it's you that should have some mathematical theorem showing, in a model of evolution, that dFCSI can arise by natural selection. Please, note the emphasis on "natural selection", before you run to pseudo-arguments about GAs that do not model NS at all!

gpuccio
November 21, 2012 at 12:50 AM PDT
Zachriel: dFSCI is not a theory. It is a procedure in a theory. ID is the theory.

dFSCI is a tool for design detection. The theory is that it is possible to detect design by that tool.

gpuccio
November 21, 2012 at 12:47 AM PDT
Alan Fox: In all the discussion, unless I overlooked it, you have not shown how to estimate your dFSCI “parameter” or what purpose the exercise would serve. You have definitely overlooked it. I have assessed dFSCI for the simple software string offered by Mung. And I have affirmed dFSCI for the 29 protein families analyzed in the Durston paper, whose FSC exceed 150 bits. The purpose of the exercise in the case of Mung's string, or of any other that can be proposed, is to show that dFSCI has 100% specificity in detecting design in the cases where a design origin can be independently confirmed. The purpose of affirming dFSCI for the 29 protein families is to show that it can be used to infer design in cases where the origin id not known. Does that asnwer your questions? Apart from that, well done! Thank you! :)gpuccio
November 21, 2012, 12:44 AM PDT
Joe, Mung, KF: Thank you guys for the good work.

Joe to Toronto: 1- The design inference mandates discrediting darwinism. So we have to- it's part of the rules of science

Absolutely. We could ignore neo-darwinism if it were just a wrong and irrational theory proposed by some isolated fool and correctly disavowed by all. But unfortunately, that's not the case. Neo-darwinism is a wrong and irrational theory that has gained the almost universal acclaim of the scientific world. That, I suppose, requires some correction...

Joe to Joe Felsenstein: No, in that case gpuccio, and others, would say that dFSCI is not a true indicator of design.

Absolutely. Our confused interlocutors have been sticking to the "necessity clause" for all possible invented reasons: to suggest a non-existing circularity, to suggest that dFSCI is subjective, to suggest that dFSCI is useless, to suggest that dFSCI could contradict itself. Emphasis on "suggest": no real argument or example was ever provided for those fantasies. They simply don't want to understand the meaning of the "necessity clause". Which is important, but only for very simple considerations that have nothing to do with myths like NS as a possible origin of dFSCI.

The "necessity clause" is simply a duty of correct reasoning. As the purpose of the whole procedure is to infer design, we have the duty to ask ourselves: is there a simple explanation of how that arrangement could arise in this system, without a design intervention? The answer is very simple, if we consider the natural laws acting in the system and the kind of information linked to the function. Essentially, laws work through regularity. So, we look for obvious regularities that could be explained by the laws acting in the system.

Let's take the case of protein genes, which are after all the object of the whole discussion. We know, from biochemistry, that the protein sequence is what makes the protein functional. We know also that there is no simple way to find what protein sequence will be functional: it can probably be computed from the laws of chemistry, but the computation is so complex that we cannot do it ourselves, with all our computational resources and all our understanding of how chemistry works. That's obviously enough to understand that those sequences cannot arise in a biochemical system by necessity.

But there could be exceptions. I have proposed many times the example of a sequence of one amino acid. While a protein made of 100 alanines, just to make an example, is certainly non-functional in a general sense, we could probably find some use for it, with some imagination. As the dFSCI procedure leaves the functional definition completely free, we could in some contexts be tempted to assign dFSCI to such a protein. But the necessity clause is a safeguard against that error, which could generate a false design inference. The simple repetition of one amino acid is definitely a regularity that can arise in a biochemical system. So, we have to exclude that kind of sequence.

The same could be said of a long protein which is only the repetition, for example, of a fixed 4-AA module. Again, that obvious regularity makes that result possible in a biochemical system. Modular repetitions are observed in DNA, and there is no special reason not to believe that they can arise in the processes of DNA duplication. So, even if a completely modular protein were functional (which is not very likely), it would not exhibit dFSCI, if the module itself is very simple.

But the simple fact is that complex functional proteins are not that way. They are strictly dependent on a definite sequence that has no obvious regularity, any more than a computer software code or a sonnet has obvious regularities. The objection that some small regularities can always exist, and all the stupid discussion about possible compressions, is completely irrelevant. It is like saying that, because Shakespeare's sonnet can be zipped, the regularities that allow the zipping can explain its meaning. In the same way, even if the myoglobin sequence can certainly be somewhat compressed by some algorithm (which is true also of any random sequence, because truly random sequences exhibit some regularities), that certainly does not mean that those small compressible regularities explain the myoglobin function.

The function of a protein is explained only by the special setting of those "configurable switches" (as Abel says) that are free from the restraints of necessity, and that only conscious intelligence can set in a very specific arrangement, to express the function. No law can do that, because that arrangement is only linked to function by intelligent understanding, and never by repetitive law.

That is the only role of the "necessity clause". It has nothing to do with NS, which is a necessity mechanism, but simply does not explain the sequences we observe. It has nothing to do with future scientific revolutions: we will deal with them if and when they come. It has nothing to do with excluding "possible" explanations that nobody knows of: excluding the undefined possible is simply impossible. It has nothing to do with strange wars between cognitive groups, where some know an explanation and keep it hidden from others, probably for copyright reasons: the "knowledge" my definition refers to is simply our present scientific understanding of nature, shared by all who can access the internet: how we understand physics, how we understand biochemistry, how we understand probability, and so on. It has nothing to do with possible errors in applying dFSCI that can be made by some: any procedure can be applied incorrectly. It has nothing to do with possible errors that would be made by people of centuries ago, if they could understand the procedure while ignoring modern science: a special prize goes to Mark for that, because his imagination is as big as his faulty logic.

Our interlocutors have tried everything available to them: to make dFSCI circular, to make it useless, to make it irrelevant, to make it wrong, to make it politically incorrect, and so on. Unfortunately (for them) what is available to them can never undermine the simple fact that they have lost their challenge. I have applied dFSCI to all the examples that have been proposed, I have found true positives, and never a false positive. QED.
gpuccio
November 21, 2012, 12:38 AM PDT
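The "obvious regularity" test gpuccio describes above (excluding poly-amino-acid repeats and simple modular repetitions) can be crudely illustrated with off-the-shelf compression. This is only an illustrative sketch, not gpuccio's actual procedure; the `compression_ratio` helper, the 0.3 cutoff implied by the tests, and the toy sequences are my assumptions.

```python
import random
import zlib

def compression_ratio(seq: str) -> float:
    """Compressed size / raw size. Very low values flag the kind of
    obvious regularity (repeats, simple modules) discussed above."""
    raw = seq.encode("ascii")
    return len(zlib.compress(raw, 9)) / len(raw)

# Sequences the "necessity clause" would exclude compress sharply:
poly_a = "A" * 100    # 100-alanine repeat
modular = "ACDE" * 25  # fixed 4-AA module, repeated

# A sequence with no periodic structure compresses far less
# (generated pseudo-randomly here purely for comparison):
rng = random.Random(0)
varied = "".join(rng.choice("ACDEFGHIKLMNPQRSTVWY") for _ in range(100))
```

Note that, as the comment itself concedes, even random sequences compress slightly; the point of the illustration is only the large gap between periodic and non-periodic strings, not a rigorous randomness test.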
Alan Fox:
There’s the problem in a nutshell.
And yet science continues to advance in the face of incomplete knowledge, and it's difficult to think of something being informative when all is known in advance. And science is tentative. Weak rejoinder, really weak.

If we become aware that the sequence space for possible novel functional states is much larger than previously thought, we take that into account. That's science. If the probabilities then become such that they fall below the bound (e.g., 500 bits), then the dFSCI concept becomes useless. That's science. And this highlights that it is testable and falsifiable.

IOW, you folks over at TSZ really have nothing to complain about except for the fact that, given our current state of knowledge, gpuccio appears to be right and you appear to be wrong. Why not just admit it and hope for something to come along and help your side in the future? I'll tell you why. Ideology. It has nothing to do with science. Nothing to do with empiricism. Nothing to do with skepticism. It's Ideology.
Mung
November 20, 2012, 02:51 PM PDT
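The 500-bit bound Mung invokes is simple arithmetic over sequence space. A hypothetical helper (my naming) makes the numbers concrete under the maximal, pre-function assumption that all 20^n sequences of length n are equiprobable:

```python
import math

def sequence_space_bits(length: int, alphabet: int = 20) -> float:
    """Bits needed to specify one sequence out of alphabet**length
    equiprobable possibilities. This is the maximal estimate; any
    target space of more than one functional sequence reduces it."""
    return length * math.log2(alphabet)
```

On this crude accounting, 500 / log2(20) ≈ 115.7, so a protein of 116 or more residues already has a maximal search space above the 500-bit bound, and the 150-bit figure cited earlier in the thread corresponds to roughly 35 residues. Actual dFSCI/FSC estimates are far lower than the maximum, since many sequences share a function.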
mung:
But he [gpuccio] did point you to the Durston paper, repeatedly, as an example of the concept in action.
But there is no mention of dFSCI in the paper. Is it claimed that dFSCI and FSC (functional sequence complexity) are the same concept? From the paper:
Consider the number of all possible sequences is denoted by W...
and
In practice, the sequence space for possible novel functional states may not be known.
There's the problem in a nutshell.
Alan Fox
November 20, 2012, 01:07 PM PDT
Mung: remember it has to be functional step by step, all the way from a few bits to at least 500 bits, starting from a random walk from an arbitrary sequence. KF
kairosfocus
November 20, 2012, 11:07 AM PDT
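KF's "functional step by step" condition can be made concrete with the kind of selection-filtered walk debated throughout this thread. The sketch below is hypothetical and illustrative only: it is a Weasel-style hill-climb toward a pre-specified target, which, as gpuccio notes above about GAs, does not model natural selection (the target is known in advance and every retained step is scored against it).

```python
import random

def selection_walk(target: str, steps: int = 10_000, seed: int = 1) -> str:
    """Toy selection-filtered walk: start from a random binary string,
    mutate one random position per step, and keep the change only if
    it does not reduce the match count against `target`. Every retained
    intermediate is thus at least as 'fit' as its predecessor."""
    rng = random.Random(seed)
    alphabet = "01"
    current = "".join(rng.choice(alphabet) for _ in target)
    score = sum(a == b for a, b in zip(current, target))
    for _ in range(steps):
        i = rng.randrange(len(target))
        candidate = current[:i] + rng.choice(alphabet) + current[i + 1:]
        new_score = sum(a == b for a, b in zip(candidate, target))
        if new_score >= score:  # reject any step that loses function
            current, score = candidate, new_score
    return current
```

Because fitness here increases smoothly with every matched character, the walk climbs easily; the disputed question in the thread is precisely whether biological function supplies such a smooth, always-functional gradient, which this toy does not address.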
