Uncommon Descent: Serving The Intelligent Design Community

A Twist on the Infinite Regress Argument

In a previous post there was a vigorous debate about idnet.com.au's suggestion that Craig Venter might soon manufacture a living organism from scratch. This comment caught my eye: "OMG!! When Craig Venter produces a living organism, will this event trigger the infamous 'infinite regress'? WHO DESIGNED CRAIG VENTER???"

Indeed. Assume for the sake of argument that Craig Venter actually succeeds in creating life in the lab (we'll call his creations Venter's critters, or "VCs," for short). Then assume the dreaded super virus comes along and wipes out all life on earth except for the VCs, who are immune. Assume further that a million years passes and there are no traces that any living thing other than VCs ever existed on earth. Then two aliens come along. Alien 1 of 2 observes the VCs running around, all of which are descendants of the original lab-created VCs. The VCs exhibit irreducible complexity and complex specified information, and 1 of 2 concludes that it is very unlikely that they were caused by chance and necessity; making an inference to the best explanation, he concludes they were designed. Alien 2 of 2, a dyed-in-the-wool materialist, says, "1 of 2, you're an IDiot. You are just pushing the question back. If these beasties were designed, tell me, who designed the designer?"

This thought experiment brings into relief the fatuousness of the “who designed the designer” argument.  2 of 2’s question is unanswerable.  1 of 2 would have no way to know who Craig Venter was, where he came from, what his purpose was, what process he used to design his critters, etc.  Nevertheless, his design inference would be correct. 

Comments
and anyone who doesn't think the rover is a product of nature is just not creative enough to figure out how it happened; could be ignorant, stupid or insane (or wicked, but I'd rather not consider that). (on the other hand, when face to face w/ Behe, he opted for lazy)
es58
December 5, 2007 at 9:05 PM PDT
atom 83; that's where I was going, thanks
es58
December 5, 2007 at 8:53 PM PDT
BarryA, in 82, said "You may refuse to deal with the hypothetical on its own terms if you like, but that is your problem, not a problem with the hypothetical." Yes, that is a valid point. I had assumed that the example was meant to represent events that may occur in the real world. If that is what you meant, then my point is that the example would need to be modified to accommodate those things that would remain in the real world. However, if you merely meant that in a hypothetical world, this hypothetical example could occur, then I concede.

BarryA: "One does not need to know why I said that for it to be true for purposes of the hypothetical." Again, if you are providing the example to represent a hypothetical that represents what could happen, then the irreducibility should have some standing other than an assertion. I mean, since you are talking about what Venter could do, then for this part of your example to work, Venter must really be able to create irreducible complexity. I'm not seeing that it is guaranteed that he can, and at least your example didn't demonstrate that he could.

BarryA: "Even a very poorly designed object can be irreducibly complex if by removing just one of its parts it would lose 100% function." Agreed, but in the example, no stipulation was made that by removing something from the VC it would lose 100% function. All that was provided is that Venter created them. Nonetheless, it seems that we really are in agreement on the usage. I'm suggesting that it is quite probable that Venter is not a really good bio-engineer, and he may over-design some attributes. If so, then by removing some of the over-engineered parts, the functionality may actually work better. Venter is human after all, and we can't expect that his design will necessarily be so finely tuned that he wouldn't include cruft, such that removal of the cruft would benefit the VC.

I am pretty sure that the goal of this example was not to create a straw man argument. But, until certain elements of the example are resolved, it doesn't seem to be able to exemplify anything that could be used to explain the real world.
Q
December 5, 2007 at 5:08 PM PDT
@Q 79: Right, but his point was that Alien 2 does exactly what I'm discussing, namely, lets his assumptions/philosophical biases stop him from making the correct inference. He would take research dollars from and deny tenure* to any alien foolish enough to entertain a design hypothesis, since "Who Designed the Designer?" is an insurmountable barrier that could never be overcome in his mind. Thus, those who have an irrelevant "WDTD?" objection would not be able or think to look for a "who" or "why" in designed objects. They wouldn't even be able to conclude design, though that would be the objective truth.

*Ok, they're aliens, but you get the point.
Atom
December 5, 2007 at 3:12 PM PDT
Q, when I say "Assume further that a million years passes and there are no traces that any living thing other than VCs ever existed on earth," I intend for the hypothetical to mean literally what it says. You can say, "Well it just doesn't make sense for me to assume that" if you want to, but you won't be responding to my point. You would simply be refusing to assume what I asked you to assume. You may refuse to deal with the hypothetical on its own terms if you like, but that is your problem, not a problem with the hypothetical.

As far as irreducible complexity is concerned, you are confused about both the terms of the hypothetical and the definition of irreducible complexity. In my hypothetical I said "The VCs exhibit irreducible complexity and complex specified information." One does not need to know why I said that for it to be true for purposes of the hypothetical. I said it because, for purposes of the hypothetical, it is simply true; in other words, it is a base assumption of the hypothetical. In the world of the hypothetical it is a brute fact that needs no justification.

Leaving the hypothetical aside, you are using a definition of irreducible complexity that is not accepted by the ID community. You seem to believe an object is irreducibly complex only if it has "hit the optimal design." That is not how I (or anyone else in the ID movement of whom I am aware) use the term. Even a very poorly designed object can be irreducibly complex if by removing just one of its parts it would lose 100% function.
BarryA
December 5, 2007 at 3:02 PM PDT
Oops! Meant the last line above to be "and that the VC's are not automatically IC."
Q
December 5, 2007 at 2:49 PM PDT
BarryA, in 78, wrote "If this does not exclude the matters you bring up, I don't know what would." Yes, I understand that you meant that earth would look like only the VC's ever existed here. But, your example would require the elimination of all evidence of other life ever existing. Real-world artifacts such as mining shafts, dredging, construction, LEMs on the moon with plaques depicting earth - would also need to be removed within that million years. That level of change was so large that it didn't fit the environment in which you created the model - on earth. Quite simply, I suggest, that interpretation moved your example into the surreal. So, I read your statement as meaning that the evidence of what the living things were was destroyed. That, it seems, could be the result of a super virus or of the VC's themselves, and keeps the example as a practical what-if.

Additionally, you asserted that the "VCs exhibit irreducible complexity". Was that because they were life forms, or because you assumed that Venter could hit the optimal design? Since, in your model, the VC's were man-made, it is quite reasonable to assume a less-than-optimal design, including some over-engineered parts. As such, it would be reasonable to expect that some simplification could result in improved functionality, and that the VC's are automatically IC.
Q
December 5, 2007 at 2:44 PM PDT
Atom, in 77, stated "I don't think it would make a difference. Since they could not entertain the idea of design (due to the 'Who Designed the Designer?' problem they'd see), they would never look." Well, possibly. But, BarryA did specifically state that Alien 1 "concludes they were designed."
Q
December 5, 2007 at 2:06 PM PDT
Q writes: "BarryA did not exclude the opportunity to investigate. He also did not exclude the opportunity of the aliens to find Venter's machinery or documentation surrounding Venter's process. In his specific example, it is quite reasonable to expect that Venter, and his work, could be uncovered." In the post I wrote: "Assume further that a million years passes and there are no traces that any living thing other than VCs ever existed on earth." If this does not exclude the matters you bring up, I don't know what would.
BarryA
December 5, 2007 at 2:01 PM PDT
I see what you're saying...just close off any loopholes that may exist. I still think it works, however, even with loopholes. If the aliens did have the opportunity to search for the "who" of the Designer or the "why", I don't think it would make a difference. Since they could not entertain the idea of design (due to the "Who Designed the Designer?" problem they'd see), they would never look. They would, out of necessity, continue down the fruitless road of looking for non-existent ateleological mechanisms instead of following the evidence where it leads. Who is to say that we'll never uncover the ID machinery or documentation surrounding life on this planet (if the hypothesis is given a chance for serious investigation)? It is quite reasonable to expect that the Designer, and his work, could be uncovered.
Atom
December 5, 2007 at 1:52 PM PDT
Atom, in reply 75, said "The aliens could not conclude (even tentatively) that anything was actually designed, as that would be the 'easy answer' according to materialists/Darwinists." I am sure that BarryA's example was meant to show that. However, I am suggesting, the specific example he chose leaves open a likely opportunity to ultimately see that the life form was designed, and actually who was the designer. As you indicated, the aliens might not immediately have proof as to whether the life form was designed or not. But, BarryA did not exclude the opportunity to investigate. He also did not exclude the opportunity of the aliens to find Venter's machinery or documentation surrounding Venter's process. In his specific example, it is quite reasonable to expect that Venter, and his work, could be uncovered.
Q
December 5, 2007 at 1:37 PM PDT
"Hopefully any examples that use Venter's life forms, or the rover, could be refined so that they can adequately lead only to what they want to illustrate."
I think the original analogy did just fine. The aliens could not conclude (even tentatively) that anything was actually designed, as that would be the "easy answer" according to materialists/Darwinists. It would be "UnknownDesignerDunnit", which robs them of the hard work of finding a completely unintelligent, ateleological explanation for the lifeforms. (Even though none exists.) And as was pointed out, if they conclude there was a Designer, who designed the Designer? I think the point is well made; it is actually pretty striking. Hopefully this will finally put to rest both the "Unknown Designer" and "Who Designed the Designer?" sophomorisms.
Atom
December 5, 2007 at 12:09 PM PDT
es58, in post 73, asked "These are 2 aliens landing only on Mars, they have never seen any designer, so how can they 'understand' who did the designing?" Consistent with the example of the rover, a very good expectation is that the aliens would look around, find earth, and continue their investigation. Even though for a moment all that they would know is "that neither they themselves, nor any beings of which they have been formerly aware, designed it", the best explanation is that these interstellar investigators would keep looking and find the next planet over. They could then extend their investigation, and could quite reasonably be expected to find evidence of the rover's designer (or of Venter's life forms). The two examples were not sufficiently limited to exclude this obvious event.

My purpose in pointing this out is that these examples which show what humans did only show that design can be specified. They are, as yet, inadequate to show that these designs are irreducible, especially given human tendencies to over-engineer things (i.e., specify too much design). The examples also don't show the foolishness of searching for the identity of a designer, because the examples leave behind evidence of the humans. Thus, these examples end up not being able to illustrate that a designer of life on earth is beyond investigation. In fact, these examples tend to imply just the opposite - which is contrary to the goals of these examples.

es58 also asked "Don't they first have to infer a designer?" Sort of. They first need to infer that design occurred, which they would in these examples because the examples are constructed so the specificity test would pass. Then, almost by definition, they would know that a designer was involved, whether they know anything about the designer or not. But, as is still consistent with these examples, they could keep looking, and there would be a very good likelihood that they would actually find out who is the designer of the rover (or of Venter's life forms).

Hopefully any examples that use Venter's life forms, or the rover, could be refined so that they can adequately lead only to what they want to illustrate.
Q
December 5, 2007 at 11:36 AM PDT
Q, you wrote: "But, as with BarryA's example, these two examples diverge from the arguments about the validity of evolution. Because, with both the rover and Venter's life form, it is fully possible to understand who did the designing, how they implemented the design, etc., and all of this can be adequately explained in materialistic processes."

These are 2 aliens landing only on Mars, they have never seen any designer, so how can they 'understand' who did the designing? Don't they first have to infer a designer? All they know for sure is that neither they themselves, nor any beings of which they have been formerly aware, designed it.
es58
December 5, 2007 at 6:13 AM PDT
Reply to kairosfocus in post 71. Yes, I agree that the examples provided relate to both IC and SC. But, in a manner somewhat different than you are suggesting. I think we both agree that both the rover and Venter's life form demonstrate the conditions of specificity. This, it seems we agree, is a good clue that they are both the result of agency. But, it seems clear, that these examples specifically stop there. They do not include enough requisite details to make inferences at all related to any agency that would have been involved with the source and permutations of our life.

Both of these examples unambiguously miss the essential element of irreducibility. Since they were both explicitly human-made, then they are most likely fraught with sub-optimal design. Maybe leaving out some carbon from the metals used in the rover would make it work better. Maybe the axles were over-engineered, and get stuck in the grit too easily. Additional claims need to be provided about the rover and Venter's life forms to adequately be able to claim that they are IC.

I mention this because in BarryA's original post, he asserted that "The VCs exhibit irreducible complexity and complex specified information", and es58 was talking about "if 1 or a few of these parts were removed or rearranged, the function would significantly degrade or be lost entirely". These are explicitly claims about IC. But, given that these two specific systems were created by people, we don't really have enough information to trust that they meet anything more than the specificity claim. They did not exclude shoddy engineering - a very probable event in the course of most complex human projects!

Of course, this is not an argument against whether our life was created and managed by intelligent agency. It is only meant to show that these two examples, and probably other examples based on human-made processes, fail as full analogies. Because, human-made systems are quite likely to be sub-optimal, and reducing them might reasonably be expected to improve them.
Q
December 4, 2007 at 9:39 AM PDT
Q

First, functional degradation on random changes to parts is both an IC and a SC issue. Cf my discussion here, based on the late, great Sir Fred Hoyle's famous 747-in-a-junkyard-built-by-a-tornado example.

Second, the issue (as discussed in the just linked -- and BTW, it is also always linked) is that chance + necessity alone is, on maximal improbability, effectively incapable of successfully searching the configuration space on the gamut of the observed universe. Once we have the equivalent of 500 - 1,000 bits of information storage capacity, that begins to hold. On the rover, just the software alone for the embedded microcontrollers is vastly beyond that.

Third, thinking as Kzinti just happening to come across a strange object on Mars: the well-known explanatory filter used routinely by keepers of large and small things [= "physicists," hopefully delicious-for-dinner earthlings] reasons from empirical evidence in the context of the known causal factors and their properties: [1] chance, [2] mechanical necessity showing itself in natural regularities, [3] agency. When we deal with the sort of high contingencies relevant to the case, only 1 or 3 are candidates, and 1 is the default or null hypothesis. On the same reasoning that undergirds the highly successful field of statistical thermodynamics, the config space is now so large that 1 is eliminated in the same way we eliminate null hyps in general statistical hypothesis testing. This procedure is known to reliably work on cases where we do directly know the causal story through observation. There is no good reason to infer that it should suddenly fail on the grounds that we have not happened to observe the causal process and/or do not happen to know the specific designer or class of designer at work.

Thus, on empirically anchored inference to best explanation of the observed functionally specified complex information beyond the 500 - 1,000 or so bits range, we conclude the Rover is designed. BTW, even the text written on various bits and pieces of the Rover would probably cross this threshold.

On the nanotech of cell-based life, DNA is a known information storage medium, with 2 bits of capacity per nucleotide [4 states]. In all known lifeforms, it runs from about 500,000 to 3 - 4 bn elements. The storage capacity is so far beyond the range required that this alone is decisive -- save to one unduly influenced by the notion that science MUST explain based on chance + necessity only. That is called worldview-level question-begging, unless one can successfully and independently show that agents COULD NOT have existed at the relevant point in time and space.

In short, there is good reason to infer that life is in part the result of agency.

GEM of TKI
kairosfocus
December 4, 2007 at 12:18 AM PDT
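[Editor's note: to make the storage-capacity arithmetic in kairosfocus's comment above concrete, here is a minimal, purely illustrative Python sketch. The genome sizes and the 500-1,000 bit threshold are simply the figures cited in that comment, not independent data.]

```python
import math

# 4 possible bases per DNA position -> log2(4) = 2 bits of raw storage capacity.
BITS_PER_NUCLEOTIDE = math.log2(4)

# Lower end of the 500 - 1,000 bit threshold cited in the comment above.
THRESHOLD_BITS = 500

def genome_capacity_bits(num_nucleotides: int) -> float:
    """Raw information storage capacity of a DNA sequence, in bits."""
    return num_nucleotides * BITS_PER_NUCLEOTIDE

# Genome sizes quoted in the comment: roughly 500,000 to 3-4 billion elements.
for label, n in [("smallest genome cited", 500_000),
                 ("largest genome cited", 4_000_000_000)]:
    bits = genome_capacity_bits(n)
    print(f"{label}: {n:,} nucleotides = {bits:,.0f} bits "
          f"(exceeds {THRESHOLD_BITS}-bit threshold: {bits > THRESHOLD_BITS})")
```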
Reply to es58 (#68), who said "I am not shooting for IC, but SC here". Oh, SC and not IC. That's different. Seeing a rover that was obviously designed, the evidence would be pretty clear that the complexity of the rover was specified. There would even be a good clue that it was designed by intelligent beings.

But, then in the same post, you mentioned "if 1 or a few of these parts were removed or rearranged, the function would significantly degrade or be lost entirely". Am I wrong, or isn't this a question related to irreducible complexity? Losing or changing parts is about reducibility and not specificity, isn't it? Additionally, in your rover example, you didn't provide enough information to answer whether the function would even significantly degrade or be lost. As a simple example of what I mean (and not meant as the only way that my point could be manifest), a lens cap could have been designed onto the system, but with a poor opening mechanism. By simply removing the lens cap, the rover may actually operate better. If you had specified at the outset of your example that the rover was specified with an optimal design, only then would an answer to your specific question be available.

You also mention "the probability of these parts being arranged in this manner through natural forces is vanishingly small", similar to the arguments regarding the origins and permutations of life. Sure, the probability may be true, at least as it seems you intend it to mean. (In reality, that rover would have been built with natural forces of machinery, metallurgy, fingers manipulating screwdrivers, etc. I think you mean that the probability of it being arranged through non-intelligent forces is vanishingly small.) But, as with BarryA's example, these two examples diverge from the arguments about the validity of evolution. Because, with both the rover and Venter's life form, it is fully possible to understand who did the designing, how they implemented the design, etc., and all of this can be adequately explained in materialistic processes. Understanding the origins and evolution of life, as Dembski, Behe, et al. suggest, does not provide, or depend upon, an understanding of who the designer is, or what was done.
Q
December 3, 2007 at 7:02 PM PDT
GAW: How do you mean quantifying? One quantifier is that CSI contains a minimum of 500 informational bits. Is that what you meant by quantifying? Or are you asking how to calculate the informational bits?

For calculating the informational bits using 8-bit single-byte coded graphic characters, here is an example: "ME THINKS IT IS LIKE A WEASEL". The informational content of the individual items of the set is 16, 48, 16, 16, 32, 8, 48, plus 8 bits for each space. So aequeosalinocalcalinoceraceoaluminosocupreovitriolic would be 416 informational bits. The specification is that it is an English word with a specific function. That specific function does not have any intermediates that maintain the same function, so it is also IC. Here we have a situation where indirect intermediates are well below 500 informational bits and thus there is nothing to select for that will help much in reaching the goal. Thus this canyon must be bridged in one giant leap of recombination of various factors, making it difficult for Darwinian mechanisms.

For biology, Dembski is using the same methods as Haldane, I would assume: https://uncommondescent.com/intelligent-design/universe-tunes-itself/#comment-153125

In order to determine whether or not there exists sufficient chance for any given sequence to occur, it's helpful to know the maximum size of the probability space. The probabilistic resources are limited by the maximum number of action quanta, h_bar (Planck's quantum of angular momentum), available. If the universe is finite with total mass/energy Mc^2, then the maximum number, N, of action quanta is readily calculated by: N = G * M^2 / (h_bar * c) ~ 10^123, where M is ~ 10^56 gm. For physico-chemical processes, probability space is greatly reduced compared to the estimated maximum of 10^123. I've read estimates of 10^40 for biological events, plus or minus one order of magnitude (more likely minus).
"Patrick [33], I'm not sure whether my argument is 'like PZ Myers' or what it would mean if it were. Unlike Myers, I'm not an atheist."
Myers' position is not dependent on his personal beliefs in this case (although obviously it stems from them). He believes that in order for the design detection to be valid, the characteristics and/or the identity of the designer(s) must firmly be known. BTW, my discussion related to data integrity in digital storage mediums really is not relevant to arguments over common descent.
Patrick
December 3, 2007 at 3:24 PM PDT
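[Editor's note: as a quick illustration of the 8-bits-per-character tally described at the start of the comment above (one byte per character, spaces included), the following Python sketch reproduces the per-word figures and the 416-bit total for the long word. It is a toy capacity count, not a CSI measure.]

```python
# One byte (8 bits) per character, spaces included -- the tally used in the
# comment above. Purely illustrative; this counts raw storage capacity only.
def informational_bits(text: str) -> int:
    return 8 * len(text)

print(informational_bits("ME THINKS IT IS LIKE A WEASEL"))   # 232 (184 for letters + 48 for spaces)
print(informational_bits("aequeosalinocalcalinoceraceoaluminosocupreovitriolic"))  # 416
```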
Q, you wrote: "...yes if the rover were found, there would be clues that design occurred. But that would still say little about the tenets of intelligent design. Specifically, it would say nearly nothing about irreducible complexity, and would say nothing about whether IC was the best explanation of the origin of that rover."

What part of the following would not be applicable, and what other elements would be necessary to point to design, for either an IDist, or any other science perspective? The rover:

-) is composed of multiple parts
-) these parts are arranged in a very specific way that gives the *appearance* of function
-) if 1 or a few of these parts were removed or rearranged, the function would significantly degrade or be lost entirely
-) there is a vast number of alternative ways that those same components could be arranged that would provide no function whatsoever

and, combined with this, to a person who is well informed with regard to the state of the art of natural science, the realization that:

-) the probability of these parts being arranged in this manner through natural forces is vanishingly small

Again, what is missing? (I am not shooting for IC, but SC here)
es58
December 3, 2007 at 9:51 AM PDT
getawitness, you stated: "Jerry, try 'Mitochondrial evolution' by Gray, Burger, and Lang, Science 283.5407, 1476-1481. Briefly, there is strong evidence that mitochondria are related to the origin of eukaryotic cells, which would mean that we're all descended from bacteria. There are of course competing theories about how mitochondria evolved, with Lynn Margulis on one side and T. Cavalier-Smith on another. See Cavalier-Smith, 'The phagotrophic origin of eukaryotes and phylogenetic classification of Protozoa,' International Journal of Systematic and Evolutionary Microbiology (2002), 52, 297-354."

Thus I looked up your "solid proof" of common descent to see what you have so much confidence in as "solid proof" for our direct ancestry to bacteria.

Gray study: http://www.sciencemag.org/cgi/content/abstract/283/5407/1476?maxtoshow=&HITS=&hits=&RESULTFORMAT=&searchid=QID_NOT_SET&FIRSTINDEX=&minscore=50&journalcode=sci

"The serial endosymbiosis theory is a favored model for explaining the origin of mitochondria, a defining event in the evolution of eukaryotic cells. As usually described, this theory posits that mitochondria are the direct descendants of a bacterial endosymbiont that became established at an early stage in a nucleus-containing (but amitochondriate) host cell. Gene sequence data strongly support a monophyletic origin of the mitochondrion from a eubacterial ancestor shared with a subgroup of the alpha-Proteobacteria. However, recent studies of unicellular eukaryotes (protists), some of them little known, have provided insights that challenge the traditional serial endosymbiosis-based view of how the eukaryotic cell and its mitochondrion came to be. These data indicate that the mitochondrion arose in a common ancestor of all extant eukaryotes and raise the possibility that this organelle originated at essentially the same time as the nuclear component of the eukaryotic cell rather than in a separate, subsequent event."

Thus, Getawitness: Here we have the "hard evidence" of sequence similarity founding a whole lot of unsubstantiated conjecture about what must have occurred because evolution is considered true prior to investigation. (It is called a "philosophical bias" when you decide what the evidence must say prior to investigation, and is clearly the practice of very bad science, Getawitness!)

Your other study that you place so much faith in for "solid proof" of common descent:

T. Cavalier-Smith study: http://ijs.sgmjournals.org/cgi/reprint/52/2/297.pdf

Partial of intro: "These radical innovations occurred in a derivative of the neomuran common ancestor, which itself had evolved immediately prior to the divergence of eukaryotes and archaebacteria by drastic alterations to its eubacterial ancestor, an actinobacterial posibacterium able to make sterols, by replacing murein peptidoglycan by N-linked glycoproteins and a multitude of other shared neomuran novelties. The conversion of the rigid neomuran wall into a flexible surface coat and the associated origin of phagotrophy were instrumental in the evolution of the endomembrane system, cytoskeleton, nuclear organization and division and sexual life-cycles. Cilia evolved not by symbiogenesis but by autogenous specialization of the cytoskeleton. I argue that the ancestral eukaryote was uniciliate with a single centriole (unikont) and a simple centrosomal cone of microtubules, as in the aerobic amoebozoan zooflagellate Phalansterium.
I infer the root of the eukaryote tree at the divergence between opisthokonts (animals, Choanozoa, fungi) with a single posterior cilium and all other eukaryotes, designated 'anterokonts' because of the ancestral presence of an anterior cilium. Anterokonts comprise the Amoebozoa, which may be ancestrally unikont, and a vast ancestrally biciliate clade, named 'bikonts'. The apparently conflicting rRNA and protein trees can be reconciled with each other and this ultrastructural interpretation if long-branch distortions, some mechanistically explicable, are allowed for."

What a friggin story, Getawitness, and all the many conjectures are derived because of sequence similarities and, from what I can tell, only one morphological similarity, with absolutely no lab work proving that these transformations are even possible in the first place! (You do remember the fact that all mutation studies done so far don't offer any realistic hope for all these conjectures of his, don't you, getawitness?) This is quite the tale. From what I can see the only really hard science in the whole paper is the sequence similarities he alludes to, before he goes off on his fantastic tangent about how evolution may have occurred. Methinks your hard science is wanting something fierce! Or is that, "Methinks I see/smell a weasel"?

If I wanted fanciful conjectures, based on what is now known about genetic sequence similarities, I much prefer Eugene Koonin's Biological Big Bang Model, since it is #1, more recent and accurate to the sequence similarity data we now have, and #2, more sober, and realistic, in its analysis of the tremendous problems presented by the radical changes implemented, in, as far as we can tell, an instant, by the fossil record itself.

http://www.biology-direct.com/content/2/1/21 - The Biological Big Bang model for the major transitions in evolution, Eugene V Koonin

Getawitness, I thought you might actually have something when I saw you quote sources... But alas, it was just bedtime stories for the Darwinists' children.
bornagain77
December 3, 2007 at 7:12 AM PDT
PS: Your "eliminate the vowels" example illustrates: [1] the role of redundancy, [2] the capability of smart [functionally organised and complex!] receivers to decode near-enough words. In the relevant cases we have to account for the ORIGIN of such codes and processing machinery, as well as the algorithms that drive them! (Has a case of spontaneous -- Chance + Necessity only -- origin of such ever been observed for a CSI entity? Have we ever observed agents creating such, including exploiting necessity and chance or at least doing workarounds?)
kairosfocus
December 3, 2007 at 3:31 AM PDT
GAW: Re . . .
I’m still stuck on figuring out the relation between CSI and Shannon information, and determining a way of measuring CSI
Why not take a look here and here [both in my always linked] for a start, then come back to us? I note:

1] Shannon info can be calculated or measured fairly easily; that is in large part why it was specified.

2] Complexity can be measured or quantified, e.g. by estimating the scale of the relevant configuration space -- an aspect of the foundational thought in statistical thermodynamics' concept of phase space.

3] Compressibility can be defined and quantified, e.g. through Kolmogorov complexity [cf Dembski's discussion].

4] At the same time, we can identify specification in several ways, including of course K-compressibility.

5] In many relevant cases, it is far more reasonable to identify functionality and observe the effect of random perturbation [bearing in mind error detection and correction and the potential to saturate these if the code distance is high enough], which gives an idea of how large the island/archipelago of functionality is.

6] A comparison of the config space and the relative isolation of functional [which is easily macro-observable or describable] states within that -- especially where several information-rich components must act together -- easily leads to the implication that chance + necessity only is not credibly able to access functional states that require 500 - 1,000 or more bits of information storage capacity, on the gamut of the observed cosmos.

7] The systems of interest, for origin of life and for body-plan level biodiversity, easily exceed this limit by many orders of magnitude.

8] Additionally, in all observed cases where the Explanatory Filter rules "designed" and we can directly observe the causal process, systems that are beyond the UPB are designed by agents. [This post is a case in point.] That is, the EF is reliable when it rules "designed." (As I discuss in another response to you this AM, this is the case of material interest and revolutionary impact.)

GEM of TKI
kairosfocus
December 3, 2007 at 3:23 AM PDT
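[Editor's note: point 1 in the comment above says Shannon information is straightforward to compute. Here is a minimal, purely illustrative Python sketch of that calculation, using observed symbol frequencies; it is not a CSI metric, and will differ from the flat 8-bits-per-character tally used elsewhere in the thread.]

```python
import math
from collections import Counter

def shannon_bits(text: str) -> float:
    """Total Shannon information of a string, from observed symbol frequencies."""
    n = len(text)
    counts = Counter(text)
    # Entropy per symbol: -sum p_i * log2(p_i), then scale by string length.
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * h

print(round(shannon_bits("ME THINKS IT IS LIKE A WEASEL"), 1))
```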
getawitness, As I say, you only nitpick. The examples you cite add nothing to the debate but only raise an irrelevant objection. Are you trying to prove yourself clever or trying to understand? Don't you see that by raising irrelevant objections you are only conceding that ID is valid. If you had any objection of substance, it would have been raised a long time ago. Go out into cyberspace and see what other irrelevancies you can dig up. I can guarantee you we have probably seen them all. Maybe one of the moderators should make a list of the silly objections we get, like the way to disprove NDE is to find a rabbit in the Cambrian, as opposed to someone showing us how NDE ever gave rise to anything of consequence. Nitpicking, trivialities and clichés are all we get. Occasionally we get an interesting question, but it is infrequent. Keep chugging away.
jerry
December 3, 2007 at 1:31 AM PDT
getawitness: I must confess that I don't really understand your problem about quantifying CSI. If you have a clear understanding of what CSI is (and I think you have), then you know that a CSI unit is "any" piece of information which has the following 3 properties:

1) It is complex (that is, its chance to come out as a random result is lower than a conventional limit, which could well be Dembski's UPB, although I think that a less severe limit would certainly be more appropriate).

2) It is specified, in one of the many senses we have discussed in this blog. In particular, for biological information, specification is usually of the functional type.

3) There is no known mechanism which can explain that information in terms of necessity according to known material laws.

That clearly defines a "unit" of CSI. A unit of CSI is any piece of information, however long and complex, which exhibits the above 3 properties, and is explained by a "single" specification. To be more clear, a sequence of bits corresponding to prime numbers would be a single unit of CSI, however long it is.

In biological beings, you have billions, trillions, probably quadrillions of units of CSI. You have only to choose. You are richer than Uncle Scrooge. Any single functional protein, longer than a minimum (which should be, if I remember well, little more than 100 AA to satisfy Dembski's UPB), is a unit of CSI. Moreover, any protein network, where many different units interact to realize a meta-function, is a further unit of CSI. Any highly functional organisation (of organs, of systems, of the neural network) is a unit of CSI. The genetic code is a unit of CSI. The transcription system is a unit of CSI. The translation system is a unit of CSI. And so on, and so on. Practically everything, in a cell or in a multicellular being, is CSI. If you want to measure, you have just to count the functional units which are complex enough and for which you know no reasonable explanation. The result? Practically everything.

You seem to think that CSI should be measurable in terms of some measure unit like bits, like Shannon information. I don't think that is correct (although I leave the answer to Dembski or to people who have the qualifications to answer in more detail). In my opinion, CSI is a "property" of some global unit of information. You can measure the complexity of a unit of information, and verify that it is specified (in other words, that its complexity is functional). Finally, you can ascertain that no known necessary mechanism exists to explain that particular functional information. At this point, your result is: this information unit is a CSI unit. And you can count it in your general results.
gpuccio
December 2, 2007 at 11:52 PM PDT
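[Editor's note: a small check of gpuccio's parenthetical that a functional protein needs to be "little more than 100 AA" before its sequence space exceeds Dembski's universal probability bound of 1 in 10^150. This is a sketch under that stated bound, assuming the standard 20 amino-acid alphabet and treating all sequences as equally likely.]

```python
import math

UPB_EXPONENT = 150          # Dembski's universal probability bound: 1 in 10^150
AMINO_ACIDS = 20            # standard amino-acid alphabet

# Smallest N with 20**N >= 10**150, i.e. N >= 150 / log10(20).
min_length = math.ceil(UPB_EXPONENT / math.log10(AMINO_ACIDS))
print(min_length)           # 116 -- i.e. a little more than 100 amino acids
```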
Jerry, I think there's a lot more in nature that exhibits CSI than DNA. For example, all the living organisms which have DNA inside their cells! I'm still stuck on figuring out the relation between CSI and Shannon information, and determining a way of measuring CSI. As I said to BA77, not all DNA is CSI: a string of DNA containing a random mutation could decrease specificity and increase information. Or you could decrease the number of letters and have the same meaning. For example, most English words have vowels. And calculations of information include the vowels. But if I eliminate the vowels, y cn stll rd my wrtng prtty wll: th mnng stys lthgh th ttl nfrmtn my b rdcd.
getawitness
December 2, 2007 at 9:54 PM PDT
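[Editor's note: a toy Python sketch of getawitness's vowel-elimination point above: stripping the vowels shrinks the raw 8-bit-per-character tally while the message stays readable. The sample sentence is ours, chosen to match his example; this is illustrative only.]

```python
def strip_vowels(text: str) -> str:
    """Drop vowels, keeping consonants, spaces and punctuation."""
    return "".join(ch for ch in text if ch.lower() not in "aeiou")

msg = "you can still read my writing pretty well"
stripped = strip_vowels(msg)
print(stripped)                                  # y cn stll rd my wrtng prtty wll
print(8 * len(msg), "bits ->", 8 * len(stripped), "bits of raw capacity")
```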
CSI is very easy to quantify. How often that is actually done or not I do not know. I have a hard time following many of Dembski's examples, but Meyer uses the example of language, which is easily understandable and almost directly comparable with DNA. Quantifying it can be done as easily as it would be to estimate the chance reproduction of an English sentence or paragraph. As soon as you get to a paragraph you run out of time in the universe since its very beginning to reproduce it by chance or law.

There is nothing in nature that is CSI except for DNA. If CSI could arise from one of three possible methods, why prohibit the only one we know of that has been shown to produce it in other areas? No, you are arbitrarily limiting the most obvious answer because you do not want to accept it. You just repeat the same tired clichés. If you were sincere about it then you would consider it and compare it with the other possible answers. But everything you seem to do is like a constant obstacle course of irrelevant issues.

Essentially CSI is complex information that specifies something independent of itself that has function or meaning. As I said, language is the best example in our daily lives.
jerry
December 2, 2007 at 9:46 PM PDT
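[Editor's note: a rough sketch of jerry's "chance reproduction of an English sentence or paragraph" estimate above, assuming 27 equiprobable symbols (26 letters plus a space); the character counts are illustrative choices, not figures from the comment.]

```python
import math

ALPHABET = 27   # 26 letters + space, all assumed equiprobable

def odds_exponent(num_chars: int) -> float:
    """log10 of the number of equally likely strings of this length."""
    return num_chars * math.log10(ALPHABET)

for label, n in [("short sentence (~30 chars)", 30),
                 ("paragraph (~500 chars)", 500)]:
    print(f"{label}: about 1 chance in 10^{odds_exponent(n):.0f}")
```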
jerry, "I need to read more?" I've read a lot of ID, including several books* by Dr. Dembski, both of Dr. Behe's books, Wells's Icons, and four or five works by Phillip Johnson. In all that reading, I have never encountered a rigorous argument for quantifying the amount of CSI in a system: only whether it's there or not. Hell, I don't even know what a unit of CSI would be called. I've asked here for someone to point me to such an argument, and I've gotten BA77 going on and on about genetic entropy and you telling me I need to read more. (I should be grateful: at least I didn't get his list of "a million things theism didn't predict.") Meanwhile, nobody has pointed me to that elusive demonstration of measurement.

Of course DNA works in a very complex way and requires a number of other elements to drive development of the organism. I never said otherwise. And quantifying the information in any developed organism would be very difficult. But nobody's quantified CSI for me here, at all. And yet it's "quantifiable"?

You say I should prefer the statement "Until CSI is capable of being produced through natural processes, we should not accept that it can." But there's no evidence of intelligent intervention beyond CSI itself. No traces, which was my original point on this thread. Why should I write a natural history in which the normal rules of natural processes are suspended and the phrase "insert intelligent event here" substitutes? Reminds me of a famous scientific cartoon: I bet you know the one.

*ID: The Bridge . . . was my first ID book. Then I read TDI and NFL.
getawitness
December 2, 2007 at 9:00 PM PDT
getawitness, Thank you for the cite by Gray, Burger and Lang. We are well aware of Margulis's symbiosis hypothesis. I have downloaded the article and will see how much I can understand.
jerry
December 2, 2007 at 8:58 PM PDT
es58 (from 50), yes if the rover were found, there would be clues that design occurred. But that would still say little about the tenets of intelligent design. Specifically, it would say nearly nothing about irreducible complexity, and would say nothing about whether IC was the best explanation of the origin of that rover.
Q
December 2, 2007 at 8:18 PM PDT
getawitness, you said "incapable of being produced through natural processes." Why didn't you say "Until CSI is capable of being produced through natural processes, we should not accept that it can."? To me that would be a more logical statement. Why assume something can happen naturally when there is no evidence that there is any way it can. It just shows how Darwinists continually beg the question by assuming the assumptions they need no matter how illogical these assumptions may be. But without them they would have to fold their tent.

CSI is coherent, quantifiable and scientifically relevant. You really should read more. An example is English writing, on which this blog depends. All our posts are meaningless unless parsed through an English dictionary and English grammar to have a non-arbitrary relationship with thoughts in our head. Similarly, DNA is parsed through transcription and translation by mRNA, tRNA and ribosomes to produce very functional proteins. There is probably another parsing process we are completely unaware of that relates the regulatory DNA to actions we are also not aware of but which are necessary for gestation and the functioning of the organism. Stop nitpicking and try to learn.
jerry
December 2, 2007 at 8:09 PM PDT