Uncommon Descent Serving The Intelligent Design Community

Mathematically Defining Functional Information In Biology


Lecture by Kirk Durston, Biophysics PhD candidate, University of Guelph

[youtube XWi9TMwPthE nolink]

Click here to read the Szostak paper referred to in the video.

 HT to UD subscriber bornagain77 for the video and the link to the paper.

Comments
Prof. Olofsson- To be perfectly honest, the math is above my head. But having spent a couple of years lurking and listening to both sides of the debate, I still come down on the side of ID. I don't know if the odds are actually twenty gazillion to one against, or eighty-five buzillion-ding-dong-dillion to one against human biological information complexity coming about without any intelligent design involved, but I am pretty confident that (a) it can be fairly characterized as a pretty darned unlikely thing, and (b) a purely naturalistic process is a lot less likely than design. I'll let you and Dembski decide how many zeroes to add to the X-to-one-against figure, and maybe someday that number will tell us whether or not this theory jibes with a finite universe.

Can we not say this, though: that the burden of proof should properly lie on those making the affirmative claim of a purely naturalistic explanation for life, or on anybody advocating for ANY theory so unlikely, to show how it actually happened? We can guess and infer until the cows come home, but if life happened via purely naturalistic processes, then why has nobody been able to specify, observe, or demonstrate that event or process? I hear a lot of theoretical talk, but the actual specific method or set of exact steps by which spontaneous abiogenesis supposedly occurred seems never to have been laid out, even in theory, never mind actually observed to occur in nature. Am I wrong about that? And if I am not wrong, then I suppose ID is just as reasonable a school of thought as Darwinism, correct?

At any rate, I just wanted to say how very much I appreciate your serious and thoughtful contributions to Uncommon Descent. It's really refreshing to hear somebody coming from a Darwin-based standpoint debating with wits and fact-based arguments, as opposed to condescending to all ID advocates as a bunch of creationist nuts and hurling insults, as many of your tenured colleagues do elsewhere on the web, so thanks for that. The manner in which you conduct yourself does you and your viewpoint much credit, and listening to people like you debate from a standpoint of intellectual honesty instead of hate is really, really nice. I feel like I can try to learn from you because you don't seem to be ideology- or agenda-driven. It's that open-minded willingness to see what one sees, as opposed to what one wants to see, that landed a lot of us here at UD looking for truth as opposed to dogma to begin with.tyharris
January 30, 2009 at 08:23 PM PDT
jerry @ 93:
"You are assuming that law and chance are not part of the design. A designer does not have to design every detail but could very well allow chance to operate within a framework of initial and boundary conditions. And theoretically change some of these conditions over time."
As I said, my understanding of the Explanatory Filter was that it was claimed to be able to identify design regardless of the nature of the designer. That would mean that if, as you say, a designer chose to incorporate law and chance into a design, the EF would still be able to detect the design element. Law and chance were not assumed to be excluded.
"So I guess what the EF is doing is separating out those phenomena that are allowed to proceed by chance and law. Now I am not an expert or even well read on the EF but your objection seems to be irrelevant as far as I know."
Put very simply, the Explanatory Filter seems to proceed by a process of elimination. For any highly improbable event, if you can rule out both law and chance as sufficient causes then what remains must be design. Obviously, the trick is going to be to exclude law and chance with any degree of certainty.
"By the way no one is stopping you from investigating the nature of the designer. People have been doing that for several thousand years and you are welcome to join them."
I was not suggesting in any way that people should stop investigating the nature of the designer. I was just pointing out that ID proponents say the nature of a designer is not a necessary consideration for the detection of design. By all means continue to look for evidence of a designer.Seversky
January 30, 2009 at 07:50 PM PDT
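A minimal sketch of the Explanatory Filter's elimination logic described in the exchange above; the input probabilities and the cutoff are illustrative assumptions, with the default cutoff echoing Dembski's universal probability bound of about 10^-150.

```python
# A minimal sketch of the Explanatory Filter's elimination logic.
# The input probabilities and the cutoff are illustrative assumptions;
# the default cutoff echoes Dembski's universal probability bound.

def explanatory_filter(p_under_law, p_under_chance, bound=1e-150):
    """Classify an event by eliminating law, then chance."""
    if p_under_law > bound:       # a regularity (law) suffices
        return "law"
    if p_under_chance > bound:    # plausible under chance alone
        return "chance"
    return "design"               # what remains after elimination

# An event judged far too improbable under both law and chance:
print(explanatory_filter(0.0, 1e-250))   # -> design
```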
tribune[74], I didn't forget you...thanks for your nice words! I enjoy your presence too, you're always a good sport. The answer to your question: quite the contrary, we almost always have to base decisions on probabilities.Prof_P.Olofsson
January 30, 2009 at 07:30 PM PDT
Kirk, I thought your comment was fantastic up to the math, and then it lost me. I will have to read the math part again and it may sink in. But your description of islands of functional proteins was one of the best I have seen anywhere on this topic. Thank you. You have helped us a lot here and we will be able to use this information in the future in trying to describe the issues.jerry
January 30, 2009 at 06:02 PM PDT
R0b said: "I'm open to correction. What studies have been done that involve specified complexity? Specified complexity is purportedly a rigorous metric, so the claim that humans create it and law+chance doesn't should be empirically testable. Where are those tests published?"

Apparently you need a basic biology course. Functional specification of this complex data (DNA) was established in the early 1960s through what are known as the transcription and translation processes. Individual DNA strings specify protein polymers through these two processes, and these proteins have function. I believe Francis Crick of double helix fame first proposed the three-base DNA codon code and was proven right; then, one by one, the relationships between DNA codons and specific amino acids were discovered, along with the stop codons. Since that time there has been an immense amount of research on the functionality of these proteins found in life. And as Kirk Durston said above, these proteins are very rare. But I should stop here and recommend you take any high school biology book and start there. It will explain it for you better than I can.

As far as law and chance creating it, there is as of now not one example of it in the natural world outside of life and human activity. A diamond doesn't specify anything. You are maybe confusing complexity with functional complex specified information. They are not necessarily the same thing. As far as humans doing it, your comments here are an example of it, so that is not an issue. DNA does it, humans do it, but nothing else does it, and up in Boston, unfortunately, beans don't do it.jerry
January 30, 2009 at 05:58 PM PDT
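A toy illustration of jerry's point above that DNA strings specify functional proteins through the codon code; the sketch translates coding-strand codons directly (transcription skipped) and includes only six of the 64 real codons.

```python
# A toy illustration of how a DNA string specifies a protein:
# coding-strand codons are translated directly (transcription is
# skipped), using only six of the 64 real codons.

CODON_TABLE = {
    "ATG": "M",   # methionine (start)
    "TTT": "F",   # phenylalanine
    "AAA": "K",   # lysine
    "GGC": "G",   # glycine
    "TGG": "W",   # tryptophan
    "TAA": None,  # stop codon
}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3])
        if amino_acid is None:    # stop codon, or codon not in this toy table
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("ATGTTTAAAGGCTGGTAA"))   # -> MFKGW
```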
KD...if you have the time, I'd be curious to know your thoughts on frontloading, as opposed to intermittent/continual design. Given the hurdles for NS to create "ordinary" proteins, how could a frontloaded algorithm encapsulate all that complexity?WeaselSpotting
January 30, 2009 at 04:37 PM PDT
R0b:
No studies have ever been done involving specified complexity. There’s no consensus that specified complexity is even a coherent concept, or that law+chance doesn’t include human behavior.
jerry:
This statement is nonsense.
I'm open to correction. What studies have been done that involve specified complexity? Specified complexity is purportedly a rigorous metric, so the claim that humans create it and law+chance doesn't should be empirically testable. Where are those tests published? And when I speak of consensus, I'm not talking about just among ID proponents. Can you provide any evidence of a larger consensus on your claims?R0b
January 30, 2009 at 04:10 PM PDT
KD, thanks for the pointer to your paper. I certainly need to read it, although it appears to consist mostly of biological applications, which are way over my head. Your paper includes a crucial ingredient that your presentation does not, namely probabilities. Of course, those very probabilities are at issue in the ID debate. Best of luck in that arena. With regards to the diamond example, there seems to be an awful lot of ways that 3 grams of carbon dust could be configured without violating the laws of physics. Is your measure intended to be applied only to genetic sequences? Also, I assume that the function of Venter's watermarks is identification. When non-watermarked DNA is used for identification (which, of course, happens all the time), is it likewise functional?R0b
January 30, 2009 at 03:59 PM PDT
I ask about Genetic Entropy because you stated in "Measuring the functional sequence complexity of proteins": "although we might expect larger proteins to have a higher FSC, that is not always the case. For example, 342-residue SecY has a FSC of 688 Fits, but the smaller 240-residue RecA actually has a larger FSC of 832 Fits. The Fit density (Fits/amino acid) is, therefore, lower in SecY than in RecA. This indicates that RecA is likely more functionally complex than SecY." Thus from what I can gather from your paper, this looks like it may be sufficient to establish the principle of Genetic Entropy.bornagain77
January 30, 2009 at 03:52 PM PDT
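The Fit-density comparison quoted above can be checked with a few lines of arithmetic; the Fit values and residue counts are taken from the comment, not recomputed.

```python
# Fit density (Fits per amino acid) for the two proteins quoted above;
# values are (Fits, residues) as cited from Durston et al.

proteins = {"SecY": (688, 342), "RecA": (832, 240)}

for name, (fits, residues) in proteins.items():
    print(f"{name}: {fits / residues:.2f} Fits per residue")
# SecY: 2.01 Fits per residue
# RecA: 3.47 Fits per residue
```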
R0b said: "It's interesting how many ID proponents take these claims as a given, but the fact is that they aren't established at all. No studies have ever been done involving specified complexity. There's no consensus that specified complexity is even a coherent concept, or that law+chance doesn't include human behavior."

This statement is nonsense. They are most definitely established, and you are new here and are making assertions without reading all that has been said. DNA is functional complex specified information. This has come up at least a half dozen times in the last two weeks, so you are not reading everything. DNA is complex information that specifies something else, RNA and proteins, which are functional. If you deny that, then I suggest a beginning biology course.

Diamonds don't specify anything. They might be functional in some contexts, and so might a rock, if you are defending yourself or building a wall. Nowhere else on the planet does such functional complex specified information happen except with humans, and there it happens all the time. Some might stretch a point and say some animal constructions might qualify, but I don't think so. Now you can make up your own definitions and play with them, but what I just described briefly is what we are dealing with here. I haven't read Kirk Durston's long reply and do not know what he says about this, but the simple explanation above can do till I read what he says.jerry
January 30, 2009 at 03:50 PM PDT
KD, Thanks for the talk, as it has certainly generated a lot of interest here as well as on YouTube and GodTube (forgive my editing, as I had to edit for the 10 min. limit on YouTube). One question I had for you: do you think this approach is sufficient, or will be sufficient, to establish the principle of Genetic Entropy at the molecular level of biology? A principle to which many lines of empirical evidence are already overwhelmingly pointing on a semi-macro level (J.C. Sanford, M. Behe, etc.).bornagain77
January 30, 2009 at 03:45 PM PDT
Kirk: thank you for trying to clarify this lecture. I have to say, however, that I'm still puzzled - in your lecture, you're using the functional information equation I = -log2[M/N] which is fine for examples like your safe, but didn't make sense to me when applied to evolution, because you defined no function. I understand you're modelling evolution as a random walk to find one needle in a 10^42 sized haystack, but in my opinion arguments from improbability like this always miss out the details of the evolutionary process, even if they have matured from the old and laughably simplistic creationist 'whole cell forming by chance' notions. But every time I try to argue this it eventually turns into 'but what does sequence space really look like' and doesn't go anywhere, so I'll let other people fuss over that.Venus Mousetrap
January 30, 2009 at 03:22 PM PDT
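The equation Venus Mousetrap cites, I = -log2(M/N), is straightforward to compute directly; the safe example below is an illustrative assumption (one working combination out of 10^10 possibilities).

```python
import math

# Functional information per Szostak/Hazen: I = -log2(M/N), where M is
# the number of configurations that perform the function and N is the
# total number of configurations. The safe is an illustrative example:
# one working combination out of 10^10.

def functional_information(m, n):
    return -math.log2(m / n)

print(functional_information(1, 10**10))   # ~33.2 bits for a 10-digit safe
```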
KD[95], Thanks for joining the discussion and explaining your thoughts in more detail. I hope to contribute more later, but I just wanted to point out my initial objection to your talk. You claim that "It was about 10^80000 times more probable that ID was required..." which is a probability statement about ID. Yet, in your explanation, there are no such probability statements; ID only shows up to the right of the conditioning bar. Do you agree with my original criticism that your statement, as presented in the video, is inaccurately formulated?Prof_P.Olofsson
January 30, 2009 at 03:16 PM PDT
Peter[81], Thanks for the assessment of my mental faculties. On a factual note, the type of probability you mention is the conditional probability P(data|innocence). Without reasonable estimates of the other relevant probabilities in Bayes' rule, the value of this probability alone is not enough to convict. Just google and read about the case of Sally Clark and you will see what I mean.Prof_P.Olofsson
January 30, 2009 at 02:57 PM PDT
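Prof. Olofsson's point about P(data|innocence) can be made concrete with Bayes' rule; the numbers below are invented for illustration and are not the actual figures from the Sally Clark case.

```python
# Why a tiny P(data|innocence) is not, by itself, grounds to convict:
# a sketch of Bayes' rule with invented numbers.

def posterior_innocence(p_data_given_innocent, p_data_given_guilty,
                        prior_innocent):
    numerator = p_data_given_innocent * prior_innocent
    denominator = numerator + p_data_given_guilty * (1 - prior_innocent)
    return numerator / denominator

# One-in-a-million evidence under innocence, but a very high prior of
# innocence (one suspect drawn from tens of millions of people):
print(posterior_innocence(1e-6, 1.0, 1 - 1e-7))   # ~0.91
```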
Rob (94), I just noticed your discussion of diamonds. In my paper, I discuss the null state, the ground state and the functional state. The functional complexity/functional information is measured as the difference in function entropy from the ground state to the functional state, not the null state as you assume in your diamond example. Essentially, the ground state is a state determined by the laws of physics. The null state is completely random and is a special case of the ground state where physics imposes no constraints on the initial conditions. In the case of the diamond crystal lattice, the null state is not an option, as physics imposes a priori constraints on the crystal lattice structure. In other words, the ground state represents the possibilities permitted by nature before any additional functional constraints are imposed.KD
January 30, 2009 at 02:29 PM PDT
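KD's ground-state/functional-state distinction can be sketched numerically. The toy alignment below is invented, and for simplicity the ground state is taken to be the null state, log2(20) bits per site (i.e., physics assumed to impose no constraints), which is only a special case of Durston's actual measure.

```python
import math
from collections import Counter

# A toy version of Durston-style functional sequence complexity: the
# drop in per-site Shannon entropy from the ground state to the
# functional state estimated from an alignment. The three "aligned
# sequences" are invented; the ground state is simplified to the null
# state (log2(20) bits per site).

alignment = ["MKV", "MKI", "MRV"]

def site_entropy(column):
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

ground_state = len(alignment[0]) * math.log2(20)
functional_state = sum(site_entropy(col) for col in zip(*alignment))
print(f"FSC = {ground_state - functional_state:.1f} Fits")   # ~11.1 Fits
```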
R0b (94),
If, by “functional complexity”, you mean Durston’s functional information (which is crucially different from Dembski’s specified complexity), then nature certainly can produce it [in diamonds].
But isn't that functional information already encoded into the "fitness function" of nature? Crystals form due to the properties of atoms--not due to a blind search.Timothy
January 30, 2009 at 02:26 PM PDT
As I stated in my brief prior post, this particular talk was given about a year ago at the University of Edinburgh, Scotland, to a general student audience, so it was fairly non-technical, and time constraints made it difficult enough to even give an overview of my thinking on this subject, forget about expanding on technical points. At present, I am thinking through a new ID presentation that will be less broad but go deeper on what I feel are key technical points. With regard to the presentation in the video, I've briefly outlined below some of the probability factors. I only skim the surface here, but hopefully it will clarify some of the concerns that have been raised here (I have not been able to take the time to read more than half the comments posted above). Before the reader can adequately understand my approach, I must briefly discuss some key concepts. I welcome any constructive criticism from members here, especially those who may not be convinced of the need for any role for intelligence in the origin of the protein families.

Regarding stable proteins as 'targets' in sequence space: A key piece of background information has to do with treating folding, functional proteins as 'targets'. I get the impression from evolutionary biologists that the stable folding proteins are the products of biological life: that virtually any combination of amino acids confers some level of fitness upon the organism, so that natural selection can direct the search toward better proteins, and that in this manner evolution can 'climb Mount Improbable'. This is not so. It is physics that determines which combinations of amino acids can fold into stable folds. Function is an additional requirement and is a joint relationship between the system, in this case a life form, and the stable proteins permitted by physics. In other words, it is biology that must search amino acid sequence space to find the stable folds that are determined by physics. Thus, the stable, folding proteins represent very real, objective targets that are 'out there' in amino acid sequence space. Biological life does not make them up; it must find them, and physics holds the combinations.

The role of natural selection in searching sequence space: All papers I've read on this subject that actually deal with experimental results indicate that most amino acid combinations do not yield a stable, folded, functional protein. This is confirmed by my own research as well. Both published research and my own seem to indicate that virtually all of sequence space codes for non-stable proteins that are of no use to life. There is an infinitesimal subset of amino acid sequences that physics determines to be stable folded proteins. For simplicity, you can regard this subset as being made up of fold-set islands in an ocean of non-folding sequence space. I say 'fold-set' because some sequences may be able to provide more than one fold. To find a novel protein family by mutating an existing gene, the evolutionary track must cross non-folding sequence space. Because non-folding proteins are not useful to biological life, indeed they can be lethal, they have no phenotypic effect, and thus natural selection cannot help navigate the evolving gene through non-folding sequence space. (I am aware that about 30 percent of proteins are intrinsically disordered, but they tend to achieve some structure after binding, completing the folding process in most cases.) The only place natural selection can work is within a fold-set island, where the protein already exists and can be fine-tuned through selection. This is not to be confused with locating a novel protein family.

Bottom line: Physics determines which amino acid combinations produce stable folding proteins. These stable folding sequences appear to be extremely rare in sequence space and can be regarded as targets. The regions between the fold-set islands are non-folding, produce no phenotypic effect (except for harmful effects if the non-folding proteins begin to clump), and natural selection is of no use whatsoever in guiding the evolutionary trajectory as it random-walks across non-folding sequence space. If anyone thinks natural selection will be useful in finding a novel protein family, that is a huge assumption which flies in the face of the consensus of experimental results. The onus would be on such a person to provide experimental support. There is none at present, and plenty that says quite the opposite. Thus, the search for novel protein families is very much a random search. I cannot emphasize this enough: natural selection is of no help in discovering a novel protein family. Those who assume Darwinian evolution did it are in a position where their assumption is not only without any experimental support, but experimental results falsify it.

With the above in mind, here is the thinking behind the presentation shown in the video. Let e be a variable that represents a value of functional information. I noticed that some were talking about P(data), but I am not interested in any particular data set; I am only interested in e, the level of functional information for any data set or effect.

Given 10^42 possible trials, let B represent a target that occupies 10^-42 of sequence space. We will make the extremely generous assumption that B can be found with a probability that approaches 1 for 10^42 trials, i.e., P(B|10^42 trials) ≈ 1 and P(B) = 10^-42. In reality, a random walk of only 10^42 moves would be much less efficient, as there could be numerous instances of the evolutionary pathway producing the same sequence more than once in its random walk. Let 10^42 trials be a constant in any search for a novel protein family. In other words, I am making the very generous assumption that the full 10^42 trials were available and actually carried out, so that P(B|e) = 1. To clarify: for any e required by a protein family, a full search of 10^42 trials is necessarily carried out (again, a generous assumption that makes an evolutionary success more likely).

Also, since we know that intelligence already exists, and in the case of humans is capable of easily producing the levels of e that we observe in the protein families (e.g., one page of an essay typically contains a level of functional information that exceeds the level required to code for the average protein family), the existence of intelligent design is an a posteriori empirical fact. Therefore, P(ID) = 1 and P(e|ID) = 1 for values of e typically found in biopolymers. If there is a limit to the level of e that known intelligence can achieve, it is certainly larger than what is contained in a typical university library. Reminder: do not confuse data with e. It is e that is key in identifying effects that require intelligence, not data sets.

P(e) = target size / size of sequence space. This is an a posteriori probability computed from a set of sequences for a protein family, where the number of sequences is preferably greater than 1,000 to give an adequate sampling. (This requires its own discussion to show how it is done; for more info, see my paper.)

Recall Bayes' theorem: P(e|B) = P(B|e)*P(e)/P(B). Therefore, since P(e|ID) = 1,

P(e|ID)/P(e|B) = P(B)/(P(B|e)*P(e)).

Example: Given that RecA has a Fit value of about 832 Fits, P(e) ≈ 10^-250. Therefore,

P(e|ID)/P(e|B) ≈ 10^-42/10^-250 = 10^208.

In other words, if we had to choose the most likely option for an effect such as RecA, which required 832 Fits of information to produce, intelligence would be 10^208 times more likely than biological life, with its 10^42 trials, to achieve that level of functional information.

This gives you the rationale behind the probability numbers I used in my presentation a year ago. However, I've been too generous with what 10^42 trials can achieve, so in my next presentation I will be more conservative, possibly using Dembski's approach in his forthcoming paper, although I haven't read it yet but intend to do so in the next few days. I've not expanded upon Fits and their relation to Hazen et al.'s method of measuring functional information. However, if you look at my paper and equate my measure of functional complexity to his equation, you will see that it is straightforward to arrive at an estimate of his M(Ex).KD
January 30, 2009 at 02:20 PM PDT
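KD's RecA arithmetic can be reproduced in log space to avoid floating-point underflow; all inputs are the values asserted in his comment above, not independently derived.

```python
# KD's RecA likelihood-ratio arithmetic, redone in log10 space to avoid
# floating-point underflow. All inputs are the values asserted in the
# comment above.

log10_P_B   = -42    # P(B): chance of hitting a 10^-42 target in 10^42 trials
log10_P_B_e = 0      # P(B|e) = 1: the full search is assumed to be carried out
log10_P_e   = -250   # P(e) for RecA's ~832 Fits: log10(2^-832) is about -250

# With P(e|ID) = 1: P(e|ID)/P(e|B) = P(B) / (P(B|e) * P(e))
log10_ratio = log10_P_B - (log10_P_B_e + log10_P_e)
print(f"P(e|ID)/P(e|B) is about 10^{log10_ratio}")   # -> 10^208
```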
jerry @ 83:
Specified complexity arises all the time in human activity ... Now take law and chance - there is not one single instance where these forces have ever produced functional complexity.
It's interesting how many ID proponents take these claims as a given, but the fact is that they aren't established at all. No studies have ever been done involving specified complexity. There's no consensus that specified complexity is even a coherent concept, or that law+chance doesn't include human behavior. If, by "functional complexity", you mean Durston's functional information (which is crucially different from Dembski's specified complexity), then nature certainly can produce it. Let's take diamonds, which are naturally occurring and quite functional due to their hardness. A diamond's hardness is due to its diamond lattice structure. We don't know of any other configuration of carbon atoms that would result in this level of hardness. Given a 14K diamond, which has on the order of 10^23 atoms, how many configurations are conceivable? (Not just regular lattice structures, but any way that those 10^23 atoms could be configured.) The fraction of those configurations that represent the diamond lattice structure is infinitesimal, so we're talking an enormous amount of functional information in that structure.R0b
January 30, 2009 at 01:36 PM PDT
Seversky, You are assuming that law and chance are not part of the design. A designer does not have to design every detail but could very well allow chance to operate within a framework of initial and boundary conditions, and theoretically change some of these conditions over time. So I guess what the EF is doing is separating out those phenomena that are allowed to proceed by chance and law. Now I am not an expert or even well read on the EF, but your objection seems to be irrelevant as far as I know. By the way, no one is stopping you from investigating the nature of the designer. People have been doing that for several thousand years and you are welcome to join them.jerry
January 30, 2009 at 01:35 PM PDT
Peter @ 81
You are confused. The use of DNA in court cases is always based on the probability of a match between a sample and the defendant. Lawyers will always tell you that the probability of a match is say 1 in 50 million (P1). In a case where there are only two possible outcomes, the second outcome must have a probability 1 - P1. On this basis people go to jail. Also, DNA match is not circumstantial. It is the most respected form of evidence on which many wrongfully convicted people were released.
I think I am confused, too. My understanding was that, when a probability of something like 1 in 50 million is quoted in the context of DNA evidence, it means that there is only a 1 in 50 million chance that the sample could belong to someone other than the defendant. That is a very low probability but not an impossibility. It would not necessarily, on its own, make the guilt of the defendant certain, since highly improbable events happen all the time, but taken in combination with other evidence it could take the question of guilt out of the realm of reasonable doubt. Equally, where convictions have been set aside following the submission of new DNA evidence it is not always that innocence has been established, it is sometimes just that the new evidence has made the original verdict unsafe and so the benefit of the doubt is granted.Seversky
January 30, 2009 at 01:29 PM PDT
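Seversky's reading of the 1-in-50-million figure can be illustrated with a quick calculation; the pool size is an assumption chosen for illustration.

```python
# A 1-in-50-million random match probability is not a 1-in-50-million
# chance of innocence: in a large enough pool, several unrelated people
# are expected to match by chance. The pool size is an assumption.

match_prob = 1 / 50_000_000
pool = 300_000_000                     # assumed pool of possible sources

expected_random_matches = match_prob * pool
print(f"Expected random matches: {expected_random_matches:.0f}")   # -> 6

# With the true source plus ~6 coincidental matches in the pool, DNA
# evidence alone puts the probability of guilt near 1/7 under a uniform
# prior, not 1 - 1/50,000,000 -- hence the need for other evidence.
```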
Seversky[90]. But keep in mind that the explanatory filter is very explicitly non-Bayesian. Also note R0b's insightful comment #77.Prof_P.Olofsson
January 30, 2009 at 01:10 PM PDT
Peter @ 86
How bout: The designer existed before matter, time, and space and was intelligent enough to design a universe with numerous constants which are extremely fine tuned for life. Such a designer in my opinion would find creating life on earth as an almost trivial task compared to creating the universe.
Possibly, but how could you ever detect the imprint of such a designer? Any method of reliably detecting design, regardless of the nature of the designer, such as is claimed for the Explanatory Filter, must be premised on the possibility of being able to distinguish what is designed from what is not designed. But if the entire Universe is the product of some Original Intelligent Designer then the Explanatory Filter - if it works as claimed - should throw up nothing but positive results since there is nothing not-designed for it to filter out as a negative result. The problem is, how could we ever know whether or not the filter is working reliably given that, again, there is nothing not-designed on which to test it? The other consequence of assuming an Original Intelligent Designer is that, if the nature of the Designer is ruled out of consideration and if generic design itself cannot be reliably detected from within a fully-designed Universe then the Intelligent Design project becomes pointless and, hence, uninteresting from a scientific perspective.Seversky
January 30, 2009 at 12:59 PM PDT
P(life|intelligence) = 0 and P(life|law and chance) = 0. Interesting predicament we have here. The first assessment is based on ideology, which arbitrarily asserts no intelligence existed before life, and the second assessment is based on data, which confirms that law and chance do not have the power to produce the complexity of life. So where should we go with this? Is it time to fetch the Tooth Fairy to settle this? And for those of you who doubt the Tooth Fairy, my niece just got a $5 gift certificate from Dunkin Donuts for her first tooth. In my day I only got a quarter.jerry
January 30, 2009 at 12:40 PM PDT
# 86 "How bout: The designer existed before matter, time, and space and was intelligent enough to design a universe with numerous constants which are extremely fine tuned for life. Such a designer in my opinion would find creating life on earth as an almost trivial task compared to creating the universe." Well it is a start. At least we can distinguish this from rival hypotheses based on aliens or less talented (but still very impressive) deities. Care to give a basis for estimating the prior probability of this particular designer (a) existing (b) creating life? Bear in mind when estimating (b) that you should not dismiss a priori all rival hypotheses based on other forms of intelligence.Mark Frank
January 30, 2009 at 12:39 PM PDT
Prof. a priori states: "bornagain[75], I don't think you're getting my point which is that there is no empirical meat with which we can cook up priors for 'design' or 'chance.'" To which I refer:

Scientific Evidence For God Creating The Universe - Establishing the theistic postulation and scientific validity of John 1:1 ("In the beginning was the Word, and the Word was with God, and the Word was God") by showing transcendent information's complete, specific dominion of a photon of energy, as well as its integral relationship with the definition of a photon qubit. http://www.godtube.com/view_video.php?viewkey=f61c0e8fb707e76b0e20

Excerpt of description: From these findings, we can now draw this firm conclusion: An infinite amount of transcendent information is necessary for the photon qubit to have a specific reality; thus infinite transcendent information must exist for the photon qubit to be real. Since photons were created at the Big Bang, this infinite transcendent information must, of logical necessity, precede the light and "command" the light to "become real," thus demonstrating intent and purpose for the infinite transcendent information. Thus a single photon qubit, coupled with the Big Bang, provides compelling evidence for the existence of the infinite and perfect (omniscient) mind of God Almighty. (God is postulated to be infinite and perfect in knowledge in Theism.)

Quantum teleportation, coupled with the First Law of Thermodynamics (conservation of energy; i.e., energy cannot be created or destroyed, only transformed from one state to another), provides another compelling and corroborating line of evidence for the existence of infinite transcendent information, by demonstrating the complete transcendence of information to any underlying material basis, or even any underlying natural law, as well as demonstrating the complete, specific, and direct dominion of infinite transcendent information over a single photon qubit of energy. (Since energy cannot be created or destroyed by any known material means, any transcendent entity which demonstrates direct dominion over energy, of logical necessity, cannot be created or destroyed either. This is the establishment of the Law of Conservation of Information, i.e., information cannot be created or destroyed; i.e., all information that can possibly exist for all physical events in this universe already does exist.)

The main objection would be that you can have infinite information for the photon qubit yet still not complete and total infinite information (the infinite odd number vs. infinite even number hotel rooms enigma). (I think this objection, though reasonable to the overall principle that needs to be established for Theism, is superfluous to the main point of this proof in establishing infinite transcendent information's primacy over energy/material in the first place, and thus validating the theistic postulation of John 1:1.)

This should not be surprising to most people. Most people would agree that transcendent truths are discovered by man and are never "invented" by man. The surprise comes when we are forced by this evidence to realize that all transcendent information or "truths" about all past, present, and future physical events already exist. This is verification of the omniscient quality of God when we also realize that a choice must be made for a temporal reality to arise from a timeless reality, i.e., why should we expect a timeless reality to do anything at all otherwise?bornagain77
January 30, 2009 at 12:22 PM PDT
Mark Frank [85] "Until ID is prepared to say just something about the nature of the designer then there is no basis for a prior probability." How bout: The designer existed before matter, time, and space and was intelligent enough to design a universe with numerous constants which are extremely fine tuned for life. Such a designer in my opinion would find creating life on earth as an almost trivial task compared to creating the universe.Peter
January 30, 2009 at 10:56 AM PDT
Re Bayesian #64. I enjoy your comments. The foundations of statistics and probability are a passionate interest of mine (sad, I suppose). I will try to curb my enthusiasm and be concise, but it will be hard.

First, I am a great fan of Bayesian approaches, but my concern with the short video was not particularly Bayesian. The logic "Observed outcome B is highly improbable given hypothesis A. Therefore A is highly improbable" is fallacious under almost any approach to hypothesis testing. If you use a Bayesian approach then the equation is of course:

P(A|B) = P(B|A) * P(A)/P(B)

In our terms: A = chance, B = (something like) there exists a large number of proteins which support life after a billion years of earth's existence. P(B) is the prior probability of B. It is always tricky with priors to agree how much is known. In one sense P(B) is 1, but that is clearly not what we mean; I guess it is something like the chance of B given the initial conditions on earth. So:

P(chance|functional protein) = P(functional protein|chance)*P(chance)/P(functional protein)

Let us assume that P(B|A) is incredibly small. The trouble is we have no idea if P(B) is even smaller! You force P(B) to be relatively high by assuming P(ID) is relatively high and that P(B|ID) is 1. To me P(B|ID) is not so much low as meaningless. Let me explain. On P(ID) you wrote:

"So taking a very low P(ID) is very reasonable, perhaps P(ID)=10^-9 or even P(ID)=10^-150, which is Dembski's universal probability bound. And, as far as I'm concerned you may assume P(ID) as small as you like, provided you can give a reasonable explanation for your choice. For instance 'Because I want P(data|chance) to be larger than 10^-150' does not seem reasonable to me. And taking P(ID)=0 is entirely unreasonable, because then we are excluding ID a priori. Durston said that intelligence, e.g. human intelligence, is capable of producing proteins. In effect this means P(data|ID)=1. In other words: if someone or something with sufficient intelligence sets out to create the stuff (proteins/a primitive organism), it will succeed. I find this an entirely reasonable assumption."

It is true that a human has produced a protein. It is also true that a human with sufficient intelligence, equipment and motivation is very likely (but not certain!) to produce a specific protein given enough time. But I hope you agree the prior probability of humans producing the current set of functional proteins found in life is zero. So your hypothesis is: there is (or was) another form of intelligence with sufficient intelligence, equipment and motivation to certainly (P=1) produce a functional protein.

Of course this is only one of infinitely many hypotheses that involve intelligence. There are infinitely many hypotheses that posit intelligences that merely increase the chances of a functional protein without certainly producing one. So it cannot really be called the ID hypothesis; it is just one that falls under that broad umbrella. But it is also a very odd one. Odd because of the word "sufficient". It is similar to saying "my hypothesis is that magic exists that can always do the job," and then demanding that the sceptic give a prior probability for magic and justify it. If you define your hypothesis in terms of "that which gives rise to the data," you really haven't defined a hypothesis at all. Until ID is prepared to say at least something about the nature of the designer, there is no basis for a prior probability.Mark Frank
January 30, 2009 at 10:09 AM PDT
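Mark Frank's point about priors can be shown numerically: holding P(data|ID) = 1 fixed, the posterior for chance is driven entirely by the contested prior P(ID). All values in the sketch below are illustrative assumptions.

```python
# Holding P(data|ID) = 1 fixed, the posterior for chance depends
# entirely on the prior P(ID), which is exactly the contested quantity.
# Both the likelihood under chance and the priors are illustrative.

def posterior_chance(p_data_given_chance, p_id_prior):
    p_chance_prior = 1 - p_id_prior
    p_data = 1.0 * p_id_prior + p_data_given_chance * p_chance_prior
    return p_data_given_chance * p_chance_prior / p_data

for prior in (1e-9, 1e-150):
    post = posterior_chance(1e-208, prior)
    print(f"P(ID) = {prior:.0e}  ->  P(chance|data) = {post:.2e}")
# P(ID) = 1e-09   ->  P(chance|data) = 1.00e-199
# P(ID) = 1e-150  ->  P(chance|data) = 1.00e-58
```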
----Laminar: "That isn’t always true, you can be certain that an explanation for a particular event is wrong without having to postulate a correct explanation." Think of it this way. Darwinism can either be measured or it can't. A mathematician should be able to know that and comment one way or the other. With regard to intelligent design, one cannot recognize an allegedly flawed approach to applying mathematical models without having some idea about that which would be appropriate.StephenB
January 30, 2009 at 09:53 AM PDT
Since I abandoned my probability and mathematical endeavors in the distant past, I do not have the time to go back and relearn them. But if we are going to assign prior probabilities based on something concrete, we know two things: specified complexity arises all the time in human activity, and there are potential instances of humans creating specified complexity in living organisms by manipulating DNA. So based on that, the prior probability of intelligence creating specified complexity has to be higher than zero. The one problem is that we cannot identify any intelligence prior to the origin of life, which is the crux of the argument. If there were one solitary concrete fact that showed there was an intelligence prior to humans, then the game would be over.

Now take law and chance: there is not one single instance where these forces have ever produced functional complexity. You cannot say life, because that is begging the question; life is the issue under scrutiny. So it would be reasonable to assign a probability near zero for this case, since maybe it could happen but it has never been witnessed. What law and chance have over intelligence is that we know they existed prior to life. That is all they have going for them. Not logic, not science, not empirical data, nothing but the hope of some people that it might have happened. And not just one instance of functional complexity but millions of them, and a large subset of them are complementary to each other so that they produce an even greater effect. So those who choose law and necessity have the burden of believing all this without the support of even one simple example.

So if I were going to assign prior probabilities, it would be P(ID) very close to 1 and P(chance) very close to zero. It is the only logical assignment based on what we know today. Tomorrow may be different, but today we can only use what we know.jerry
January 30, 2009 at 09:42 AM PDT
KD Your findings seem to be very significant. Are they published? I am doing a presentation for a class I am taking on evolution and theology at TST. I would like to be able to quote your findings.Peter
January 30, 2009 at 09:30 AM PDT
