Uncommon Descent Serving The Intelligent Design Community

A Darwinist responds to KF’s challenge


It has been more than a year since kairosfocus posted his now-famous challenge on Uncommon Descent, inviting Darwinists to submit an essay defending their views. A Darwinist named Petrushka has recently responded, over at The Skeptical Zone. (Petrushka describes himself as a Darwinist in a fairly broad sense of the term: he accepts common descent as a result of gradual, unguided change, which includes not only changes occurring as a result of natural selection but also neutral change.)

The terms of the original challenge issued by kairosfocus were as follows:

Compose your summary case for darwinism (or any preferred variant that has at least some significant support in the professional literature, such as punctuated equilibria etc) in a fashion that is accessible to the non-technical reader — based on empirical evidence that warrants the inference to body plan level macroevolution — in up to say 6,000 words [a chapter in a serious paper is often about that long]. Outgoing links are welcome so long as they do not become the main point. That is, there must be a coherent essay, with

(i) an intro,
(ii) a thesis,
(iii) a structure of exposition,
(iv) presentation of empirical warrant that meets the inference to best current empirically grounded explanation [–> IBCE] test for scientific reconstructions of the remote past,
(v) a discussion and from that
(vi) a warranted conclusion.

Your primary objective should be to show in this way, per IBCE, why there is no need to infer to design from the root of the Darwinian tree of life — cf. Smithsonian discussion here – on up (BTW, it will help to find a way to resolve the various divergent trees), on grounds that the Darwinist explanation, as extended to include OOL, is adequate to explain origin and diversification of the tree of life. A second objective of like level is to show how your thesis is further supported by such evidence as suffices to warrant the onward claim that it is credibly the true or approximately true explanation of origin and body-plan level diversification of life; on blind watchmaker style chance variation plus differential reproductive success, starting with some plausible pre-life circumstance.

It would be helpful if in that essay you would outline why alternatives such as design, are inferior on the evidence we face.

Here is Petrushka’s reply:

Evolution is the better model because it can be right or wrong, and its rightness or wrongness can be tested by observation and experiment.

For evolution to be true, molecular evolution must be possible. The islands of function must not be separated by gaps greater than what we observe in the various kinds of mutation. This is a testable proposition.

For evolution to be true, the fossil record must reflect sequential change. This is a testable proposition.

For evolution to be true, the earth must be old enough to have allowed time for these sequential changes. This is a testable proposition.

Evolution has entailments. It is the only model that has entailments. It is either right or wrong, and that is a necessary attribute of any theory or hypothesis.

Evolution is a better model for a second reason. It seeks regularities.

Regularity is the set of physical causes that includes uniform processes, chaos, complexity, stochastic events, and contingency. Regularity can include physical laws, mathematical expressions that predict relationships among phenomena. Regularity can include unpredictable phenomena, such as earthquakes, volcanoes, turbulence, and the single toss of dice.

Regularity can include unknown causes, as it did when the effects of radiation were first observed. It includes currently mysterious phenomena such as dark matter and energy. The principle has been applied to the study of psychic phenomena.

Regularity can include design, so long as one can talk about the methods and capabilities of the designer. One can study spider webs and bird nests and crime scenes and ancient pottery, because one can observe the agents producing the designed objects.

The common threads in all of science are the search for regularities and the insistence that models must have entailments, testable implications. Evolution is the only theory meeting these criteria.

One could assert that evolution is true, but it is more important to say it is a testable model. That is the minimum requirement to be science.

PS:

My references are the peer-reviewed literature. We can take them one by one, if kairosfocus deems it necessary to claim the publishing journals have overlooked errors of fact or interpretation.

PPS:

To make Dembski’s explanatory filter relevant, one must demonstrate that natural history is insufficient. So I will entertain ID arguments that can cite the actual history of the origin of life and point out the place where intervention was required or where some deviation from regular process occurred.

Same for complex structures such as flagella. Cite the actual history and point out where a saltation event occurred.

Or cite any specific reproductive event in the history of life and point out the discontinuity between generations.

PPPS:

If CSI or any of its variants are to be cited, please discuss whether different living things have different quantities of CSI. For example, does a human have more CSI than a mouse? Than an insect? Than an onion? Please show your calculation.

Alternatively, discuss whether a variant within a species can be shown to have more or less CSI than another variant. Perhaps a calculation of the CSI in Lenski’s bacteria before and after adaptation.

These are just proposed examples. Any specific calculation would be acceptable, provided it can provide a direct demonstration of different quantities of CSI in different organisms.

In his original challenge, kairosfocus promised:

I will give you two full days of comments before I post any full post level response; though others at UD will be free to respond in their own right.

So let's hear it from readers: what do you think?

Comments
In response to Petrushka's comments:
Evolution is the better model because it can be right or wrong, and its rightness or wrongness can be tested by observation and experiment.
ID can also be right or wrong, so no advantage there. Observation and experiment have never shown any of the larger claims of evolution to be correct. No new body plans, no information-rich systems, no complex functional machines have ever been observed, in direct experimentation, to come about through the alleged evolutionary mechanisms. All of the interesting questions about evolution lie at the end of a long trail of inferences, suppositions, and speculations. Petrushka's first paragraph comes close to being true at first blush only through the rhetorical trick of defining "evolution" so broadly that it encompasses virtually everything. No one doubts the minor observational evidence (finch beaks, bacterial resistance, peppered moths and so on). No one has ever observed or demonstrated the required major evidence.
For evolution to be true, molecular evolution must be possible. The islands of function must not be separated by gaps greater than what we observe in the various kinds of mutation. This is a testable proposition.
Agreed. This has not been demonstrated, is highly unlikely, and is subject to considerable doubt.
For evolution to be true, the fossil record must reflect sequential change. This is a testable proposition.
Perhaps. But then folks like Gould helpfully proposed things like punctuated equilibrium, which essentially hypothesized that we don't see much sequential change in the fossil record because, wouldn't you know it, evolution always seems to take place just out of reach of our observational ability: an ironic example of proposing a theory on the basis of missing evidence. The fossil record is, as Gould, Eldredge and many others have admitted, jumpy and incongruous, characterized by stasis, jumps and gaps. The testable proposition, at least in Darwin's "slight, successive modifications" version, has been shown false.
For evolution to be true, the earth must be old enough to have allowed time for these sequential changes. This is a testable proposition.
Agreed. And the Earth is not nearly old enough. The universe is not nearly old enough. The entire age of the universe is but a rounding error against any realistic calculation of what would be required for the alleged changes to take place.
Evolution is a better model for a second reason. It seeks regularities. Regularity is the set of physical causes that includes uniform processes, chaos, complexity, stochastic events, and contingency. Regularity can include physical laws, mathematical expressions that predict relationships among phenomena. Regularity can include unpredictable phenomena, such as earthquakes, volcanoes, turbulence, and the single toss of dice.
There are a couple of quite obvious problems with holding up "regularity" as some sort of measure of what constitutes a "better" model. Regularity is certainly important to recognize where it exists. It is also quite important to recognize its limitations. Regularity might help us understand the slow deposition of sand in a delta or the slow carving of a riverbed. But there are plenty of irregular physical phenomena that are just as valid in explaining certain features of the physical world: floods, meteorite impacts, supernovae. More importantly, we know for a fact of one cause that does not simply follow physical regularity, namely intelligent designing agents. So asserting that the model that insists on "regularity" is the better model commits (i) the practical mistake of ignoring a large swath of causal events that are known to exist, and (ii) the logical mistake of assuming as a premise the very conclusion one is trying to reach.

Finally, just a quick comment on the parting shots:
Alternatively, discuss whether a variant within a species can be shown to have more or less CSI than another variant. Perhaps a calculation of the CSI in Lenski’s bacteria before and after adaptation.
Why would anyone think that CSI can be calculated as though it were subject to a simple mathematical formula? CSI includes not only complexity but specificity. The latter is not amenable to simple mathematical calculation; rather, it deals with function, context, operational aspects, purpose, meaning. Yes, we can calculate the unfortunately misnamed "Shannon information," and yes, that relates to complexity. But the specificity is also required. Anyone who does not understand this point cannot understand CSI, cannot understand design, and cannot mount an effective attack against ID, because they will not know what they are talking about. Moreover, in the context of the current discussion, I trust the reader will recognize the rich irony of the evolutionary proponent acknowledging, on the one hand, that molecular evolution is required, that enough time must be available, that a sequential stepladder approach is necessary, yet never once offering a detailed analysis of what would be required to get from, say, organism A to B; while on the other hand demanding that the skeptic provide a precise calculation of the difference between organism A and B. The complete lack of calculation-driven and analysis-driven detail on the evolutionary proponent's part is all the more striking given the near-universal acknowledgement, even by staunch evolutionists, that organisms appear designed. Truly the onus is on the evolutionist to provide some reasonable evidence against this nearly self-evident observation, rather than vague references to "change over time" and the like, coupled with demands that skeptics prove a negative.
Eric Anderson
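As an aside, the "Shannon information" calculation conceded above to be doable can be sketched in a few lines of Python. This is only an illustration (the function name and example strings are mine): it measures raw carrying capacity, i.e. the complexity side, and says nothing about specificity.

```python
import math
from collections import Counter

def shannon_bits(message):
    """Empirical Shannon information of a message, in bits:
    sum over observed symbols of -count * log2(frequency).
    Captures only complexity (carrying capacity), not
    functional specificity."""
    n = len(message)
    counts = Counter(message)
    return sum(-c * math.log2(c / n) for c in counts.values())

print(shannon_bits("AAAA"))  # 0.0: a repetitive string carries no surprise
print(shannon_bits("ABAB"))  # 4.0: two equiprobable symbols, 1 bit each
```

A repetitive string scores zero even though it is perfectly "specified," which is exactly why complexity alone is not CSI.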
March 13, 2014 at 5:46 PM PDT
Petrushka should certainly be allowed to defend the response here on UD. The challenge was issued here, and the response from Petrushka was posted here. The ability to defend the response should follow.
Eric Anderson
March 13, 2014 at 5:02 PM PDT
147,490
kairosfocus
March 13, 2014 at 5:00 PM PDT
Sal @3:
ID is composed of two theories, Design theories and Intelligence theories.
Sal, I sure wish you would stop beating this drum. The idea that ID "merges" design theories and theories of intelligence is not helpful, particularly when the quote from Dembski that you provided a while back to support your assertion says precisely the opposite, namely, that they can, and should, be kept carefully separate.
Eric Anderson
March 13, 2014 at 5:00 PM PDT
F/N: As of a moment ago, neither my inbox nor spam box has anything from Petrushka, unless P is using a Nigerian-scam-type greeting. KF
kairosfocus
March 13, 2014 at 4:59 PM PDT
FYI- petrushka is whining because he cannot defend his equivocating, lie-filled and meaningless drivel here. petrushka, you are welcome on my blog. If you can get your crap past me I will plead your case to the UD moderators.
Joe
March 13, 2014 at 4:36 PM PDT
145,865
kairosfocus
March 13, 2014 at 12:54 PM PDT
143,158 for VJT on Tour . . .
kairosfocus
March 13, 2014 at 7:00 AM PDT
It needs to be pointed out that Behe, one of the top three design school scientists, holds to universal common descent.
kairosfocus
March 13, 2014 at 6:42 AM PDT
ID is anti a priori materialist, blind watchmaker, molecules-to-man evolutionary narratives presented to the public as if they were demonstrated, unassailable fact. For cause.
kairosfocus
March 13, 2014 at 6:41 AM PDT
Living organisms are islands of function, and blind watchmaker processes cannot reach them. That said, petrushka is nothing but a grand equivocator who cannot grasp the fact that ID is not anti-evolution.
Joe
March 13, 2014 at 6:31 AM PDT
P, you asked for islands of function. You have them: a key defining characteristic of the individual species or the like. KF
kairosfocus
March 13, 2014 at 6:11 AM PDT
PS: . . . or should that be, needle. Also, the incidence of isolated protein forms in the space even between close species, is relevant. Kozulic on singletons:
Proteins and Genes, Singletons and Species
Branko Kozulić, Gentius Ltd, Petra Kasandrića 6, 23000 Zadar, Croatia
Abstract: Recent experimental data from proteomics and genomics are interpreted here in ways that challenge the predominant viewpoint in biology according to which the four evolutionary processes, including mutation, recombination, natural selection and genetic drift, are sufficient to explain the origination of species. The predominant viewpoint appears incompatible with the finding that the sequenced genome of each species contains hundreds, or even thousands, of unique genes - the genes that are not shared with any other species. These unique genes and proteins, singletons, define the very character of every species. Moreover, the distribution of protein families from the sequenced genomes indicates that the complexity of genomes grows in a manner different from that of self-organizing networks: the dominance of singletons leads to the conclusion that in living organisms a most unlikely phenomenon can be the most common one. In order to provide proper rationale for these conclusions related to the singletons, the paper first treats the frequency of functional proteins among random sequences, followed by a discussion on the protein structure space, and it ends by questioning the idea that protein domains represent conserved units of evolution.
A bit more:
One strategy for defusing the problem associated with the finding of functional proteins by random search through the enormous protein sequence space has been to arbitrarily reduce the size of that space. Because the space size is related to protein length (L) as 20^L, where 20 denotes the number of different amino acids of which proteins are made, the number of unique protein sequences will rapidly decrease if one assumes that the number of different amino acids can be less than 20. The same is true if one takes small L values. Dryden et al. used this strategy to illustrate the feasibility of searching through the whole protein sequence space on Earth, estimating that the maximal number of different proteins that could have been formed on planet Earth in geological time was 4 x 10^43 [9]. In laboratory, researchers have designed functional proteins with fewer than 20 amino acids [10, 11], but in nature all living organisms studied thus far, from bacteria to man, use all 20 amino acids to build their proteins. Therefore, the conclusions based on the calculations that rely on fewer than 20 amino acids are irrelevant in biology. Concerning protein length, the reported median lengths of bacterial and eukaryotic proteins are 267 and 361 amino acids, respectively [12]. Furthermore, about 30% of proteins in eukaryotes have more than 500 amino acids, while about 7% of them have more than 1,000 amino acids [13]. The largest known protein, titin, is built of more than 30,000 amino acids [14]. Only such experimentally found values for L are meaningful for calculating the real size of the protein sequence space, which thus corresponds to a median figure of 10^347 (20^267) for bacterial, and 10^470 (20^361) for eukaryotic proteins . . . .
one should bear in mind that in a 300 amino acid protein there are 5,700 (19 x 300) ways for exchanging one amino acid for another, and that each one of these 5,700 possibilities points to a unique direction in the fitness landscape [41]. A single amino acid substitution can trigger a switch from one protein fold to another, but prior to that one, multiple substitutions in the original sequence might be necessary . . . . as a matter of principle, how can one possibly talk about a separate or additional fitness effect due to a 3D structural change if the protein sequence determines its structure, and the structure determines function and the function determines fitness? My literature search for publications describing evolutionary modeling based on fitness effects of protein structures gave no results. And according to a paper published in 2008: “the precise determinants of the evolutionary fitness of protein structures remain unknown” [47] – 18 years since Lau and Dill proposed the „structure hypothesis“[15]. On the other hand, in a number of papers it was shown that all relationships in the protein structure space can be described in purely mathematical terms [18, 25-28], and a most recent study concludes that „these results do not depend on evolution, rather just on the physics of protein structures” [29]. If all relationships in the protein structure space can be described fully without the need to invoke evolutionary explanations, then such explanations should not be invoked at all (Ockham’s razor).
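As a quick sanity check of the exponent arithmetic in the excerpt above (20^L sequence-space sizes for the reported median lengths), a short Python sketch; the helper name is mine:

```python
import math

def log10_sequence_space(length, alphabet=20):
    """Base-10 exponent of alphabet**length, i.e. the order of
    magnitude of the sequence space for a chain of that length."""
    return length * math.log10(alphabet)

# Median lengths quoted from Kozulic: 267 (bacterial), 361 (eukaryotic).
print(round(log10_sequence_space(267)))  # 347, i.e. 20^267 is about 10^347
print(round(log10_sequence_space(361)))  # 470, i.e. 20^361 is about 10^470
```

Working in log space avoids forming the astronomically large integers directly.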
That's the real scope of challenge. And it bites:
When proteins of similar sequences are grouped into families, their distribution follows a power-law [65-72], prompting some authors to suggest that the protein sequence space can be viewed as a network similar to the World Wide Web, electrical power grid or collaboration network of movie actors, due to the similarity of respective distribution graphs. There are thus small numbers of families with thousands of member proteins having similar sequences, while, at the other extreme, there are thousands of families with just a few members. The most numerous are "families" with only one member; these lone proteins are usually called singletons. This regularity was evident already from the analysis of 20 genomes in 2001 [66], and 83 genomes in 2003 [69]. As more sequences were added to the databases more novel families were discovered, so that according to one estimate about 180,000 families were needed for complete coverage of the sequences in the Pfam database from 2008 [71]. Another study, published in the same year, identified 190,000 protein families with more than 5 members - and additionally about 600,000 singletons - in a set of 1.9 million distinct protein sequences [73] . . . . The frequency of functional proteins among random sequences is at most one in 10^20 (see above). The proteins of unrelated sequences are as different as the proteins of random sequences [22, 81, 82] - and singletons per definition are exactly such unrelated proteins. Thus, to enter the distribution graph as a newcomer (Fig. 2d), each new protein (singleton) must overcome the entry barrier of one against at least 10^20. After the entry, singleton's chance of becoming prominent, that is to grow into one of the largest protein families, is about one in 10^5 (Fig. 2d). Thus, it is much more difficult for a protein to become biologically functional than to become, in many variations, widespread: the entry barrier is at least fifteen orders of magnitude higher than the prominence barrier.
This huge difference between the entry and prominence barriers is what makes the protein family distribution graph unique. In spite of this high entry barrier, in the sequenced genomes the protein newcomers (singletons) always represent the largest, most common, group: if it were otherwise, the distribution graph would break down. The mathematical models that incorporate data from all sequenced genomes in effect “spy” on nature [21]. With the help of one such model we have just uncovered something remarkable: in living organisms the most unlikely phenomenon can be the most common one. This feature clearly distinguishes the complexity of living organisms from the complexity of self-organizing networks . . . . Koonin and coworkers have developed several versions of their gene birth-death-and-innovation model (BDIM). The power-law distribution, however, could be reproduced only asymptotically, the family evolution time required billions of years when empirical gene duplication rates were brought in, the genes within a family needed to interact, and prodigious gene innovation rate was necessary for maintaining a high influx of singletons [83-87]. Horizontal gene transfer (HGT), rapid sequence divergence and ab initio gene creation were mentioned as the possible sources of singletons. In another attempt, Hughes and Liberles proposed that just gene duplication and different pseudogenisation rates between gene families were sufficient for emergence of the power-law distribution [88]. The authors ruled out horizontal gene transfer and ab initio gene creation as the processes that could form new genes, because these processes were rare in eukaryotes but the power-law distribution was observed also with eukaryotic families. 
The evident problem with this study, however, is in that pseudogenisation per definition leads to a loss of function: the resulting power-law distribution of non-functional protein families is entirely different from the power-law distribution of functional protein families [read that blind search in AA space] . . . . For the origin of unique genes one has to turn to divergence of the existing sequences beyond recognition, or to ab initio creation, where the ab initio creation can happen either from non-coding DNA sequences present already in the genome or by introduction of novel DNA sequences into the genome. Regardless of which one of these three scenarios, or their combination, we consider, necessarily we come into the wasteland of random sequences or we must start from that wasteland: facing the probability barrier of one against at least 10^20 cannot be avoided. The formation of each singleton requires surmounting this probability barrier. Without the incorporation of this probability, or perhaps another one that might be better supported by future experimental data, all models aiming to explain the observed protein family distribution will remain unrealistic.
This leads to a pivotal challenge:
Siew and Fischer succinctly described the issues at stake: “If proteins in different organisms have descended from common ancestral proteins by duplication and adaptive variation, why is that so many today show no similarity to each other?” And further: “Do these rapidly evolving ORFans correspond to nonessential proteins or to species determinants?” [103] . . . In 2008, Yeats et al. [73] found around 600,000 singletons in 527 species - 50 eukaryotes, 437 eubacteria and 39 archaea - corresponding to 1,139 singletons per species. No information about the number of singletons is available in the most recent summary of the data from over 1100 sequenced genomes encompassing nearly 10 million sequences [64]. In spite of the missing recent data on singletons, the results of the above calculations are sufficient for an unambiguous conclusion: each species possesses hundreds, or even thousands, of unique genes - the genes that are not shared with any other species. . . . . The presence of a large number of unique genes in each species represents a new biological reality. Moreover, the singletons as a group appear to be the most distinctive constituent of all individuals of one species, because that group of singletons is lacking in all individuals of all other species. The conclusion that the singletons are the determinants of biological phenomenon of species then follows logically. In System of Logic, John Stuart Mill outlined his Second Canon or Method of Difference [133]: “If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance in common save one, that one occurring only in the former; the circumstance in which alone the two instances differ, is the effect, or the cause, or an indispensible part of the cause, of the phenomenon.”
Hamming-distance isolated islands of function with a vengeance indeed.
kairosfocus
March 13, 2014 at 6:08 AM PDT
kairosfocus, Axe is working with real proteins with over 80 AAs. That doesn't count with evos. ;)
Joe
March 13, 2014 at 5:30 AM PDT
F/N: Axe's empirical work indicates protein rarity in AA space (for which he suffered a little dose of being expelled . . . ) is of order 1 in 10^60 to 10^70+. That puts us in the ballpark of the one-straw-to-a-1,000-light-year cubical haystack outlined above. KF
kairosfocus
March 13, 2014 at 5:18 AM PDT
Richie didn't read the PDF by Kalinsky. Richie didn't read the Durston paper. Richie didn't read the Hazen paper. And Richie didn't read the Szostak paper I also referenced. As I said, the moron just wants to make this personal. And he personally fails every time we ask for evidence for blind watchmaker evolution. And not only that: the only reason to attack ID and CSI is because blind watchmaker evolution is a total failure. IOW, Richie et al are admitting theirs is a failed position. Case closed.
Joe
March 13, 2014 at 4:21 AM PDT
As predicted, Richie Hughes choked on the references. And to prove Alan Fox is totally clueless, he followed Zachriel's ignorant lead by thinking Keefe and Szostak refute Durston on the rarity of proteins. If Keefe and Szostak had used median-sized proteins they wouldn't have had any success. They used 80-AA proteins, i.e. very, very small proteins. And they are not indicative of the proteins in living organisms.
Joe
March 13, 2014 at 4:15 AM PDT
PPS: The log reduced Chi metric is actually equivalent to the per aspect explanatory filter that RTH and co also despise. But that reaction is unable to overturn the basic fact that, properly used, it works reliably. Default: mechanical necessity explains phenomena. High contingency of outcomes on similar starting conditions overturns that. Two empirically grounded alternatives: chance leading to statistical scatter, or intelligence acting by design. Default 2: chance. Overturned by FSCO/I as a tested sign of design. Testability: show FSCO/I coming about by blind chance and/or mechanical necessity. Tests: billions of successful cases, many failed attempts to show otherwise. Needle-in-haystack analysis (similar to that behind the 2nd law of thermodynamics, pivoting on relative statistical weights of macroscopically distinct clusters of microstates) backs up the empirical findings. That is, the result is as we should expect, with high reliability.
kairosfocus
March 13, 2014 at 1:39 AM PDT
Joe: Info is quite a serious matter. I think RTH and co need to first clarify what info is and why it is measured as it is for t/comms purposes. Namely on info-carrying capacity typically in bits. Then, they need to ponder the difference between that and things like how we report computer file sizes in bits, but these bits at work are as a rule functionally specific. (E.g., I have done the exercise of looking at doc format Word files from the bit and ASCII code end. Looks like there is a lot of repetitive useless stuff. Clip just one of those at whim. Close off then try to re-open the file. Crash, corrupt file. Functional specificity. Such can also try the exercise of sending noisy info to an analogue monitor and watching the picture dissolve into snow. Text files can be corrupted to varying degree and it is easy to see how recoverability/function deteriorates into gibberish as things get worse. This is just one way to see how FSCO/I is real, and it gives context to Orgel and Wicken back to the '70's, who highlighted how functionally specific complex organisation and associated information were pivotal to understanding life based on cells by contrast with crystallographic order or the sort of random patterns of micro crystals in a bit of granite. But, as my mom so often said, a man convinced against his will is of the same opinion still. [Way back, they used to teach gems of wisdom in school for kids to memorise as sayings; this is an especially apt one in a situation where just a glance at the thread will show abundant evidence of rage- and hostility- driven blindness. For instance, it should be obvious that if you don't actually submit work you have no just cause for complaint if it is not received.]) It is in that context that they can begin to understand null state vs ground state vs functional state strings and how different degrees of constraint shift the avg info per symbol (Shannon's H) for AA sequences. 
Flat random across 20 possibilities gives 4.32 bits per symbol carrying capacity, but actual protein statistics . . . similar to flat random ASCII vs patterns of frequencies of English text . . . will shift proportions, so a stochastic pattern would give a null. Then, looking at families of proteins in living forms gives an empirical measure of functional info capacity, site by site. That is some aligned sites can vary considerably, others much less so. One aspect is responsiveness to water and effects on folding, a first step to function. The math follows, and gives a useful empirically grounded functionally specific info metric rooted in actual sequencing. Take this and blend in the relevant config spaces noting how proteins of relevant character typically are 250 - 300 to 1,000+ AAs long. The results quickly put one beyond the sort of threshold already outlined above on the toy example of every atom in our solar system searching through a 500 fair coins config space at the rate of a fresh observation every 10^-14 s. For 10^17 s. This leads to the 1 straw to a cubical haystack 1,000 light years on the side sample to pop ratio, as in searching for a needle in a haystack with strictly limited resources. So, one only has a right to expect to see the bulk, non-functional gibberish. And BTW, if such objectors had paid attention over the years, they would have noticed also, that by analogy of AutoCAD etc, we can see that organisation expressible in a nodes and arcs and components pattern (common in engineering, think the exploded view so commonly used in assembly) is reducible to a structured set of strings. Where the structure itself is effectively a code, an expression of language. Discussion on strings is WLOG. So, enough Math for purpose has long since been there; for instance cf. the summary here on. Yes, once one has understood basics of info th it is not rocket science, but no one said it was. 
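The per-symbol figures discussed above can be reproduced directly. A minimal sketch follows; the skewed frequency profile is invented purely to illustrate how non-uniform statistics pull H below the flat-random maximum.

```python
import math

def shannon_H(probs):
    """Shannon's H, average information per symbol in bits:
    H = -sum(p * log2(p)) over a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Flat random over 20 amino acids: log2(20) ~ 4.32 bits/symbol capacity.
flat = [1 / 20] * 20
print(round(shannon_H(flat), 2))  # 4.32

# Any non-uniform profile (as with real AA statistics, or English letter
# frequencies vs flat-random ASCII) lowers H below that maximum.
skewed = [0.2] * 2 + [0.07] * 8 + [0.004] * 10  # sums to 1.0 (made up)
print(shannon_H(skewed) < shannon_H(flat))  # True
```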
Indeed, Dembski's 2005 metric can be reduced logarithmically and seen to be an info-beyond-a-threshold metric: I - 500. (A point that seemed to have escaped May/Mathgrrl* some years back.) Put in some reasonable limits and the threshold is 500 bits. Blend in a dummy variable that is set to 1 when there is objective evidence of functional specificity. Voila: Chi_500 = Ip*S - 500, bits beyond the solar system threshold. (For the observed cosmos as a whole, go for 1,000 bits.) Yes, this requires some scientific empirical investigation to see why 500 is a good threshold for the sol system, to identify info content metrics and to evaluate whether S can be set to 1 or holds its 0 default. That is, this is a science starter, not a science stopper, and it is not a creedal declaration but an invitation to testability. Effectively, the import of this is that, reliably, FSCO/I beyond 500 bits will set Chi_500 > 0, and that indicates design as the most credible causal explanation. So, try to test this, and see if it is so. Billions of positive cases, many dozens of attempts to get counter-examples over the course of years [stuff like canals on Mars in drawings of astronomers from 100 years ago, or an imaginary clock world that "evolves" more and more sophisticated clocks, etc etc etc] uniformly failing, often by letting in intelligently driven active information or target-guiding oracles through the back door. Inductively massively tested and reliable. Why then the controversy, rage and increasingly hostile personalities in response? Because of the implications: life forms, starting from protein complexes and associated DNA, are on this criterion to be seen as designed. With ever so many examples in the living cell that pass the threshold. So, we are cutting across ideological commitments, and doing so in a context that is scientifically rooted in a major field of science over the past 70 years: Information Theory.
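For concreteness, here is a minimal sketch of the log-reduced form as stated above, Chi_500 = Ip*S - 500. The 300-AA example and its bit count are illustrative round numbers, not measured values:

```python
def chi_500(info_bits: float, functionally_specific: bool) -> float:
    """Bits beyond the 500-bit solar-system threshold.

    S is the dummy variable described above: 1 only when there is
    objective evidence of functional specificity, else 0 (default)."""
    s = 1 if functionally_specific else 0
    return info_bits * s - 500

# Illustrative: a 300-AA sequence at the full 4.32 bits/site capacity
# carries ~1296 bits; judged functionally specific, it passes the
# threshold, while the 0-default leaves the metric negative.
print(chi_500(1296, True))   # -> 796
print(chi_500(1296, False))  # -> -500
```

Note the design choice: the S dummy variable makes the metric default to "not design" unless functional specificity is positively evidenced.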
The rage and fury of the blast will pass, the hecklers will find themselves discredited, the hate sites will increasingly isolate themselves into irrelevance as fever swamps to be quarantined. The slanders will increasingly be seen as obviously false accusations. The irrational behaviour driven by ideologically driven rage, rooted in seeing evolutionary materialist scientism under threat, will expose itself for what it is. The silly notion that one has to adopt evolutionary materialism in order to promote scientific, technological and economic progress will fall of its own weight once stated in bald terms and compared with history and current situations. And a token of the problem is plain from the above: what could have led P to imagine that such a list of talking points would constitute an adequate answer to the challenge to actually reasonably substantiate the evolutionary materialist, blind watchmaker claim? And, if the case were such a slam dunk as to be as sure as gravity or the roundness of the earth, exercises similar to what I called for should in actuality be a dime a dozen, and should have an irrefutable, solid character. Not hurling elephants, ignoring major aspects of the issue [OOL, Tree of Life branching and the need to address Gould's stasis and suddenness issues, as well as the problem of homology vs the divergent gross anatomy and molecular trees etc], making vague phil of sci assertions on testability [both sides of a scientific issue will generally be testable], and so forth. What seems to be plain is that this is a case where imposition of Lewontin-Sagan a priori materialism makes a case seem much stronger than it is, but at the price of begging big questions. Hence Phil Johnson's point:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
P, and those who cheered P on, will need to do some sober reflection. But I am not sure they are even paying duly careful attention. KF

*PS: Not the Calculus professor.

kairosfocus
March 13, 2014 at 01:24 AM PDT
Barb @ 59: I see your point, but I think here the talk is about a model being testable and falsifiable, i.e. capable of being proven either right or wrong (true or false). Some models might not allow that kind of test, where the result could point either way. For example, at least at this point, the multiverse theories don't appear to be testable, much less falsifiable. The ID model proposes that CSI is only the product of intelligence. If someone can test such a proposition and find a case where CSI is produced by unintelligent means, then the ID proposition could be considered false.

Dionisio
March 12, 2014 at 09:36 PM PDT
The reply to KF lost me at the very beginning: "Evolution is the better model because it can be right or wrong." Wait, what? If it's wrong, why would you -- or any intelligent person -- believe it? What would be the point of believing in something that is false? Linus Pauling stated that science is a search for the truth. Truth is that which conforms to reality. If you state that your theory is either right or wrong, then it's not the truth.

Barb
March 12, 2014 at 06:47 PM PDT
Poor little Richie is kicking and screaming for math that he can't understand. Start here and read the PDF. Then see Kirk K. Durston, David K. Y. Chiu, David L. Abel, Jack T. Trevors, “Measuring the functional sequence complexity of proteins,” Theoretical Biology and Medical Modelling, Vol. 4:47 (2007):
[N]either RSC [Random Sequence Complexity] nor OSC [Ordered Sequence Complexity], or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life. FSC [Functional Sequence Complexity] includes the dimension of functionality. Szostak argued that neither Shannon’s original measure of uncertainty nor the measure of algorithmic complexity are sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that “different molecular structures may be functionally equivalent.” For this reason, Szostak suggested that a new measure of information—functional information—is required.
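The per-site idea behind functional sequence complexity can be illustrated with a toy sketch: for each aligned site in a protein family, the functional bits are the drop from the flat null-state H (log2 20, about 4.32 bits) to the H actually observed in that column. The two-column alignment below is invented purely for illustration, not taken from any real protein family:

```python
import math
from collections import Counter

H_NULL = math.log2(20)  # null state: flat random over 20 amino acids

def site_entropy(column):
    """Shannon H (bits) of one aligned site across a protein family."""
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in Counter(column).values())

def functional_bits(columns):
    """Sum over sites of (H_null - H_observed): a fully conserved site
    contributes ~4.32 bits, a freely varying one contributes ~0."""
    return sum(H_NULL - site_entropy(col) for col in columns)

# Invented toy alignment of 8 sequences at 2 sites:
# site 1 is invariant (G), site 2 tolerates A/S/T.
columns = [list("GGGGGGGG"), list("ASASATTS")]
print(round(functional_bits(columns), 2))  # the invariant site dominates
```

Durston et al. work with real alignments of thousands of sequences; this sketch only shows the shape of the calculation.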
Here is a formal way of measuring functional information: Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak, "Functional information and the emergence of biocomplexity," Proceedings of the National Academy of Sciences, USA, Vol. 104:8574–8581 (May 15, 2007). See also: Jack W. Szostak, “Molecular messages,” Nature, Vol. 423:689 (June 12, 2003). What Richie and friends don't seem to realize is that biological information came from Crick with his Central Dogma. It relates to sequence specificity. And if unguided evolution could account for it, Richie and pals wouldn't need to worry about CSI nor any math. The only reason to even try to argue against CSI is because he knows materialism cannot explain it. He doesn't understand that attacking ID will never be positive evidence for materialism and evolutionism. But anyway, it's a sure bet that Richie won't be happy with my references, because his agenda is to make this personal -- it definitely ain't about evidence, because he has no idea how to assess any evidence.

Joe
March 12, 2014 at 06:42 PM PDT
Hi Vincent, thanks for your response. As I said in my response to Richie, those questions prove that ID is not a dead-end. Dembski said so also: "Intelligent Design is the study of patterns in nature that are best explained as the result of intelligence." -- William A. Dembski. Why do we want to not only detect design but also study it? To answer those questions. For example, with ID as the accepted paradigm, we would have institutions and universities working on these questions, and universities would be pumping out students to help find the answers. For me the more interesting question is: where is the software? The DNA is akin to the 1s & 0s, ie the electric pulses that represent the software, but it ain't the software. There is immaterial information in all living organisms that makes it all go. Yes, having a plan is nice, but we cannot get ahead of ourselves. Nor can we forget the evidence for design from cosmology, eg "The Privileged Planet". BTW the age of the earth depends on how it was formed. The 4.5 billion year mark requires the assumption that the proto-earth was entirely molten, ie 20,000 Kelvin -- no crystals from the accretion material are allowed to have survived.

Joe
March 12, 2014 at 04:50 PM PDT
F/N 3: A sad point:
TSZ: petrushka on March 11, 2014 at 7:05 pm said: This is not complicated. KF promised to start a thread in the name of anyone who responds to his challenge. This is the second time I’ve offered an essay dealing with his questions and criteria. Let him keep his word.
P, of course, notified me of neither such attempt. Which is what I indicated 1 1/2 years ago -- kindly notify me, as Joe did recently, and I guest-posted his post. Those who are desirous and serious know well enough how to find my contact through the always-linked note from the handle that appears with every comment I have ever made at UD. As is obvious from the above, I found out about the recent post at TSZ -- which I do not regularly visit -- through VJT's cross-post here. I have noticed no notification, and so far all I have is a say-so on an earlier claimed answer. P, in simple terms, you know or should know how to find my email. I have not found in my email box an attempt or notification of same. In this second case VJT has posted, which renders my own posting a moot issue. Unfortunately, the posted answer does not adequately address the matter, as shown. This is similar to my experience six months ago when I put together a composite from a remark of Dr Liddle (to the effect, nothing on OOL) and one by Jerad [IIRC] on macro evo, which was disappointing. And earlier I had taken up the Wiki articles and the Talk Origins 29 evidences, by way of saying an answer needs to be better than that. In short, it sure looks like I have (again) been misrepresented in a way that would cast unjustified doubt on me. I think you need to set the record straight, P. (And I would post your try no. 1 if you notify me of it.) KF

kairosfocus
March 12, 2014 at 01:45 PM PDT
Sal -
...the hypothesis has its challenges in terms of believability because of the absence of seeing the Designer.
I have to say, I don't quite agree there, Sal. I think this is the fault of the observer, rather than the theory. They are those chained in the cave. We need not know anything about the ancient Egyptians to rightfully conclude the Sphinx and pyramids were designed. We don't know exactly the designing mind behind the Antikythera mechanism; however, we know it to be an object of design. Yes, I agree there is a widespread issue of individuals refusing to look at the theory because the designer cannot be presented, but is that attributable to the theory itself, or to the a priori worldview of the observers? These same people quite often accept ideas and theories that cannot be presented, so long as it isn't the idea of God.

TSErik
March 12, 2014 at 01:38 PM PDT
F/N 2: The strawman arguments continue. Objectors to design inferences on the world of life full well should know that ever since Thaxton et al in the early 1980s, it has been recognised that an inference to design as process in the world of life does not entail an inference to any particular designer, whether within or beyond the observed cosmos. Indeed, I have myself repeatedly pointed to the work of Venter et al and raised the point that a molecular nanotech lab some generations beyond our state of the art could be a sufficient cause for what we see in the world of life on our planet. Which so far is the only place we actually observe cell-based life. Those who persistently distort and caricature the design inference are therefore willfully continuing a misrepresentation. Going beyond, there is a whole other field of design inferences, pioneered by the likes of Sir Fred Hoyle, that redneck Bible-thumping fundy ignoramus [Nobel-equivalent prize-holding astrophysicist and lifelong agnostic], who pointed to the fine tuning evidence in its early features, a pattern of evidence that has since grown by leaps and bounds. That evidence, as I summarised in brief earlier in this thread, points to a cosmos designed to host C-chemistry, aqueous medium, cell-based life from its basic physics on up. Couple that to the logical implications of a credibly contingent cosmos, and we find ourselves looking -- even through a multiverse speculation -- at a necessary being cause with a designing mind as the explanation to beat. In that context, it would be no surprise to find life that fits the implications of the cosmos' design. And it would be independent of whether or not there is universal common descent. In that context, the evident design of life is not even a critical issue for design thinkers. It is just that that is where the evidence points. KF

kairosfocus
March 12, 2014 at 01:28 PM PDT
F/N: I see P, end of the posted answer:
I count 323 words. I would be happy to post it in response to Kariosfocus’ challenge, but unfortunately I am not allowed to post.
P full well knows that I gave my word that I would host an answer, under my account, as was indicated from the outset. So, a serious answer would have been posted. As it is, VJT noted the comment, clipped and posted it. I happened to notice his title line, and took time to respond. The thread above suffices to show that the attempt is weak, as should be obvious save to those looking with the Darwinist eye of faith. As to RTH's attempt to change the subject to discussing the designer, the gap in that should be fairly obvious from what we are trying to do in origins science. Namely, to seek to understand the past which we did not see and for which there is no generally accepted record. To do that, one has to play the detective examining circumstantial evidence tracing from that past. And to explain cause adequately, the vera causa test is needed: we must show factors adequate to the effects. In this case, it is obvious that designed objects OFTEN -- as opposed to always -- exhibit features that mark them as distinct from mechanical necessity and chance. One of these is FSCO/I. And with objects showing such, we are entitled to infer design as best explanation. Process. Just as we can infer to design in a suspicious fire without knowing whodunit. But of course, if one is explicitly or implicitly committed ideologically to no designer being possible, one will easily reject a design inference. Not because of the inductive case but because of an a priori. And it seems ever more clear that such controlling ideological a prioris are at work. Time for fresh thought. KF

kairosfocus
March 12, 2014 at 01:10 PM PDT
VJT: Thanks. I took a look at the title and OP, and setting ad hominems aside, at best it pivots on the sort of ideological misunderstandings like those Marxists used to have. Back to basics in a nutshell. On the inductive side there are billions of cases of FSCO/I around us beyond 500 bits, and as libraries full of books, Internets full of web pages, and industries full of PCs, phones, cars etc jointly testify, in each case FSCO/I is a reliable index of design; as RTH knows or full well could easily confirm, to the point where this is glorified common sense. The needle-in-haystack analysis shows why: with 10^57 solar system atoms as observers, each observing a 500-coin tray flip every 10^-14 s, for 10^17 s, we see that the sample size to config space size for 500 bits is as one straw to a cubical haystack 1,000 light years on a side, comparable to our galaxy at its central bulge. Superpose such a haystack on our neighbourhood, blindfold yourself, reach in and pick a one-straw-sized sample at random. With all but absolute certainty: straw. That is, as RTH full well knows from the proverb but is irritated by, a blind and small sample is overwhelmingly likely to be haystack, not needle. In this case, it means FSCO/I is an inductively and analytically strong sign of design. Where, obviously, he and his ilk -- after years of failed attempted counter-examples -- are unable to show blind chance and mechanical necessity producing FSCO/I. However, locked up in an ideology that demands otherwise, he is desperate to dismiss what the induction tells us. Sadly, he goes beyond such and ends up enabling those who have tried to expose the names of uninvolved family members including minor children, and to publish the street address of same. And he refuses to face the difference between dealing with heckling, personal attack and slander -- which he and ilk would so desperately love to tag as "censorship" -- and undue suppression of publication. A sad picture. Surely, he can do better, and the founder-owner of TSZ can do better.
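The straw-to-haystack comparison above can be sanity-checked numerically; the straw volume (about 1 cm^3) is an assumed round figure for illustration:

```python
# Rough check that a one-straw sample of a cubical haystack 1,000 light
# years on a side is of the same order as the 500-bit sample fraction.
LY_M = 9.461e15                      # metres per light year
haystack_vol = (1000 * LY_M) ** 3    # ~8.5e56 m^3
straw_vol = 1e-6                     # assumed ~1 cm^3, in m^3
print(straw_vol / haystack_vol)      # ~1e-63

sample = 1e57 * 1e14 * 1e17          # observations in the toy model
space = 2.0 ** 500                   # 500-coin config space
print(sample / space)                # ~3e-63: same order of magnitude
```

Both ratios land around 10^-63, which is the sense in which the analogy is quantitative rather than merely rhetorical.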
KF

kairosfocus
March 12, 2014 at 12:53 PM PDT
Hi Joe, Re your comments above on questions 1 to 4 by Richard Hughes, I would like to make a distinction between an attribution of design and a design hypothesis. The latter is required to make testable predictions which scientists can investigate; whereas with the former, all we need to show (with a high degree of plausibility) is that the outcome in question exhibits a high degree of specified complexity. Hence we can know that Stonehenge was designed without knowing the who/what/when/where/why. Scientists, however, don't like twiddling their thumbs: they need work to do. Faced with a choice, most of them would rather check out an implausible but fruitful hypothesis than endorse a more plausible hypothesis that gave them no leads to follow up. That's why naturalistic theories of origins get the (undeserved) attention they do: there are lots of competing origin hypotheses, ranging over multiple pathways. Scientists can also give free rein to their imaginations, and it is not hard to think of more and more outlandish proposals. The only constraints that these hypotheses have to satisfy are that they have to stick to the sequence of events we have observed, as well as the available time (four billion years). It is very easy to poke fun at these speculations, but the only way to effectively counter them is with a rival hypothesis of our own. There are a few other reasons why a detailed design hypothesis is required, too. First, it's not just Stonehenge we are talking about here. It's a whole planetful of organisms of various kinds, which are often competing against each other. When we claim that all of these were designed, the question naturally arises: what for? Second, we know the timescale involved: billions of years. Designers typically don't take that long to do a job, so that's a prima facie argument against design that we have to address up front. 
Yes, I know it's horribly unfair, as the Darwinists have yet to show that their own hypothetical account of origins is capable of doing the job within the time available, but let's face it: it's an obvious criticism of the design hypothesis, and we're not going to make any headway in gaining adherents until we address the "time question" up front. Third, there's the objection from apparent mal-design: laryngeal nerves, prostate glands and suchlike. Once again, the difficulty posed by examples such as these is far overshadowed by the problem of explaining the origin of the simplest living cell as a result of unguided processes, but once again, it's a very obvious criticism of the design hypothesis, and we can't run away from it. Fourth, there's the moral objection: the long process of life's unfolding over the last four billion years has taken the lives of many creatures, and one wonders at the motives of any designer who would employ such a costly process to achieve their goals. Again, this is an emotional argument, but humans (like it or not) are emotional creatures. So there we have it. As I see it, we're never going to make much headway until we address these questions. Human scientists feel a pressing need to ask them, and as I see it, we need at least the outline of an answer to these questions before we can get a hearing for Intelligent Design in scientific circles. Hence my call for multiple Design hypotheses. My tentative hypothesis is almost certainly wrong in a big way, but at least it's an attempted answer, which tries to address the popular objections I alluded to above.

vjtorley
March 12, 2014 at 12:48 PM PDT
Hi kairosfocus, First of all, I'd just like to apologize for putting up this post without consulting you first. That was rather thoughtless of me. Petrushka's reply was made in a comment inside this post by Richard Hughes over at TSZ: http://theskepticalzone.com/wp/?p=4228 . Second, I would pretty much agree with your criticisms in comment #8 above. There has to be a demonstration of the possibility of body plan evolution (and for that matter, OOL) before any evolutionary model can be taken seriously. All too often, evolutionists have lowered the bar by claiming that all they have to do is poke a hole in any argument which purports to demonstrate the impossibility of evolution. But that's not the same as demonstrating that the proposal you're putting forward is a workable one. Third, I think your remarks on the time available are crucial to the discussion. Evolution has to not only work, but also satisfy the time constraints posed by the four-billion-year history of life - and all the indications so far suggest that it would take many orders of magnitude more time to get from organic chemicals to the first cell, and from the cell to a complex animal, than the time available in the fossil record. Fourth, I would endorse your remark on FSCO/I bits. All that needs to be shown is that we are talking about more than 500 bits, which places the (specified) outcome far beyond the reach of chance and/or necessity. A precise calculation of the FSCO/I in a living thing is not required. Darwinists, to make their theory credible, have to show that the FSCO/I in a living thing is less than that.

vjtorley
March 12, 2014 at 12:15 PM PDT