Uncommon Descent Serving The Intelligent Design Community

FAQ4 is Open for Comment


4. ID does not make scientifically fruitful predictions.

This claim is simply false. To cite just one example, the non-functionality of “junk DNA” was predicted by Susumu Ohno (1972), Richard Dawkins (1976), Crick and Orgel (1980), Pagel and Johnstone (1992), and Ken Miller (1994), based on evolutionary presuppositions. In contrast, on teleological grounds, Michael Denton (1986, 1998), Michael Behe (1996), John West (1998), William Dembski (1998), Richard Hirsch (2000), and Jonathan Wells (2004) predicted that “junk DNA” would be found to be functional.

The Intelligent Design predictions are being confirmed and the Darwinist predictions are being falsified. For instance, ENCODE’s June 2007 results show substantial functionality across the genome in such “junk DNA” regions, including pseudogenes.

Thus, it is a matter of simple fact that scientists working in the ID paradigm carry out and publish research, and they have made significant and successful ID-based predictions.

A more general and long-term prediction of ID is that the complexity of living things will be shown to be much higher than currently thought. Darwin thought the cell was a relatively simple blob of gelatinous carbon. He was wrong. We now know the cell is a high-tech information processing system, with superbly integrated functional machinery, error-correction-and-repair systems, and much more that surpasses the most sophisticated efforts of the best human mathematicians and mechanical, electrical, chemical, and software engineers. The prediction that living systems will turn out to be vastly more complicated than previously thought (and thus much less likely to have evolved through naturalistic means) will continue to be verified in the years to come.

Comments
Adel: Glad to know that the C-value paradox has been solved. The Wiki page you quote does not seem to agree with you: "It is unclear why some species have a remarkably higher amount of non-coding sequences than others of the same level of complexity. Non-coding DNA may have many functions yet to be discovered. Though now it is known that only a fraction of the genome consists of genes, the paradox remains unsolved."
gpuccio
April 30, 2009 at 07:24 PM PDT
Gpuccio:
IOW, even if we don’t know anything of the designer, we certainly know that he is a designer, and we know the things he has designed. That’s the reasonable origin of all our assumptions. And again, “to know nothing of the designer” does not mean that we don’t know that the designer is a designer. We know that. And a designer is a conscious intelligent being who acts out of purpose and intent. That we know. That is true of all designers.
As a meta-comment, I find this mode of reasoning completely bizarre. "We certainly know that he is a designer, and we know the things he has designed" are your starting assumptions? You can look high or low, but you'll never find a more stark example of "argument by definition," or of an argument that explicitly assumes its conclusions.
Diffaxial
April 30, 2009 at 04:44 PM PDT
Well said, jerry [178]. And there is also the C-value paradox: http://en.wikipedia.org/wiki/C-value_enigma which was not predicted by either evolutionary science or ID (but has been explained by one of those theories).
Adel DiBagno
April 30, 2009 at 02:02 PM PDT
gpuccio, you said, "Well, ID has to take its risks. I am certainly willing to take mine." You are certainly welcome to your risks, but I do not see why ID has to take on your risks, which is what this FAQ is about. This prediction is being identified with ID, with little corroborating evidence to support that it is true. My faith in ID will not be shaken one iota if most of the genome is not obviously functional. In fact, it will be strengthened. It will mean that the incredible complexity of the human organism and other organisms was accomplished with so small an instruction set, far beyond what any random happenstance could accomplish. And it will mean that the part of the genome that is supposedly unused may have another potential use of which we are completely unaware. Whoever designed everything had to think of a lot of contingencies, and the size of the genome may be due to some of these contingencies of which we as of now have no idea. I think it is a needless position for ID to take.
jerry
April 30, 2009 at 01:09 PM PDT
Hoki:
Let's just for the sake of argument say that you are right. CSI always comes from something intelligent (this is the weak ID or posterior probabilities argument). What I'm saying is that this is vastly different from saying that something intelligent will invariably cram its designs full of CSI (strong ID/likelihood).
Well, let's see. "CSI always comes from something intelligent." That's what we observe, in human design. And we have only two sets of CSI: in things designed by humans, and in biological information.

Now, here we have to discuss briefly what it is that makes CSI CSI. As you probably know, there are two components: one is specification, the other is complexity. Specification (in the case of FSCI, functional specification) is the true mark of design: design is always specified because it is always the projection of a conscious intent. Sometimes that specification, that intent, is easy to detect. Other times it could remain hidden, even forever. Complexity is not required for design, but it is what makes design detectable. It is important to note that in human design complexity is almost the rule, especially in design expressed by codes or language. IOW, simple design certainly exists, but complex design is the rule. It is fascinating how human design crosses the threshold of complexity, according to the ID definition, with the greatest facility, while any other causal mechanism (RV, necessity, or any mix of the two) never even approaches that threshold.

So, what I mean is that if a designer designs 3 gigabases of code, they will be specified. And if the apparent reason why that code has been designed is to guide vital processes in a living being (and there are many reasons to infer that, without knowing anything about the designer, but just reasoning on the designed object and its context), the simplest inference is that all the code is functional. Exceptions can be represented by parts of degenerated code, random errors, and so on. IOW, processes which do not depend on the activity of the designer.

Again, I object to words like "invariably". According to your definitions, I can only say that: a) what you call "weak ID" is the observational part of ID (the basis for inference); b) what you call "strong ID" is the inferential part of ID, the theory. There is only one ID. Both the processes you describe are part of it.
(here you just made a likelihood argument, btw)
Yes, and in this case it is really a weak likelihood argument, because it is an argument about the modalities of implementation. I have many times stated that there is not presently a lot of evidence to choose between different possible modalities of implementation.
You don’t know ANYTHING about the designer. How can you possibly make such an assumption?
Well, I know that this designer has designed functional proteins, functional cells, functional beings. That's something, IMO. That is information coming directly from the designed product. And I know that if humans want to design a new protein, they have two options: either they know how to write a primary sequence which will have a specific function (and they usually don't), so they can directly write the genotype and synthesize the protein; or they just know the function they want to obtain, and they make guided random searches and test for the obtained function, and go on by a process of intelligent selection. Or some mix of the two. That information doesn't come from any specific knowledge of humans, but from our knowledge of the cognitive process and of the design process. Even if we know nothing of a designer, we can well expect that he will solve problems by a cognitive approach, because that's our model of design, not of a specific designer. We could be wrong, but you know, then we could be wrong in anything we think we know. That's where I think both you and Diffaxial apparently equivocate: my assumptions don't come from any knowledge, true or imagined, that I have of the designer, but rather from the knowledge we have of the process of design in conscious beings such as we are. IOW, even if we don't know anything of the designer, we certainly know that he is a designer, and we know the things he has designed. That's the reasonable origin of all our assumptions. And again, "to know nothing of the designer" does not mean that we don't know that the designer is a designer. We know that. And a designer is a conscious intelligent being who acts out of purpose and intent. That we know. That is true of all designers.
I think it’s perfectly reasonable to speculate that the designer used multiple rounds of intelligent phenotypic selection to create the organisms we see today.
Phenotypic selection alone will not do. Intelligent selection of the genotype according to expression of function in the phenotype is a perfectly admissible modality, as I have argued. What it reasonably implies is that the designer knows the function he wants to obtain, but does not know directly how to implement it. That's certainly a possibility. In human design, there are important models of that kind. But my point is that, either the designer writes his code directly, or intelligently selects it after targeted random search, the result is functional, and neither modality explains a result of 98.5% useless code with 1.5% highly functional code interspersed. Even human protein designers, with all their ignorance, are much more efficient than that.
If you think that the designer would probably have done what you think it did, could you put some (very) rough numbers on that? 90%? 50%? 10%? 0.001%?
As I have said, I would not bet on the modalities of implementation; I am only sure, from the observation of the results, that a highly efficient modality has been used. For the same reason, I am ready to bet on a definite function for at least 80% of the genome, and I bet at 90%. Where and when can I pass by to take my money?
gpuccio
April 30, 2009 at 01:02 PM PDT
gpuccio:
That’s the empirical basis of the hypothesis: CSI is a product of design, and of design only. Please, take notice that such a hypothesis has never been falsified, not even once, which should mean something. The second part is: biological information, whose origin is unknown to us, does exhibit CSI. Indeed, a lot of CSI. Indeed, tons of CSI. Therefore, it seems very plausible to interpret it as the product of design. Moreover, all the other explanations given up to now are bogus.
I have not made any arguments for or against what you write above. Let's just for the sake of argument say that you are right. CSI always comes from something intelligent (this is the weak ID or posterior probabilities argument). What I'm saying is that this is vastly different from saying that something intelligent will invariably cram its designs full of CSI (strong ID/likelihood). In another post you wrote:
Your distinction about phenotype and genotype does not mean much. A designer will probably act on the genotype and measure on the phenotype, unless he can directly check the information implemented in the genotype.
(emphasis added) (here you just made a likelihood argument, btw) You don't know ANYTHING about the designer. How can you possibly make such an assumption? I think it's perfectly reasonable to speculate that the designer only acted on the phenotype. I think it's perfectly reasonable to speculate that the designer used multiple rounds of intelligent phenotypic selection to create the organisms we see today. Note: when I say that I find these things reasonable, I mean that I find them just as reasonable as some other assumptions that have to be used to form junk DNA predictions. If you think that the designer would probably have done what you think it did, could you put some (very) rough numbers on that? 90%? 50%? 10%? 0.001%?
Hoki
April 30, 2009 at 12:04 PM PDT
jerry: You are entitled to your opinions. And I am entitled to mine. You say: "What if it was found out that 90% or even 50% was junk? After all, there are much larger genomes than human genomes, and very few would point to most of these genomes as having any use. ID would not look very good if this prediction was not proven out." Well, ID has to take its risks. I am certainly willing to take mine. You can well keep a different stance. If what you suggest happens, I will certainly not look very good, and with me my personal conception of ID. Time will tell.
gpuccio
April 30, 2009 at 10:11 AM PDT
Alan Fox: Personally, I believe that most of it (let's say more than 90%) will be found to have function. But it's not only a question of quantity. In an ID perspective, the various parts of the genome have to be justified in some way. So, it is certainly not random (always in an ID perspective) that genes are so fragmented, and separated by introns. Both the fragmentation of genes into exons and the introns will be shown to be necessary and functional, almost certainly for the global regulation of the genome. In the same way, I don't believe that repetitive elements are only what they appear: senseless repetitions of code. I am sure that they are powerful tools for the regulation, and probably the gradual modification, of genomes, according to a very definite plan. I can accept that small parts of the genome may be the product of degradation or corruption of information (some pseudogenes or some ERVs could have that meaning). But I am not sure even of that. As far as I can understand, the genome is highly error checked. Errors can certainly still occur, and do occur, but I do not believe that they can involve 98.5% of the genome. Nor 50%. Nor 70%. And I could go further. Am I definite enough? I am waiting to be falsified.
gpuccio
April 30, 2009 at 10:07 AM PDT
gpuccio, I disagree. So far the so-called junk DNA has not been shown to have any function. I don't think anyone ever said that the only useful DNA was the coding regions and that all of the rest of it was nothing but junk. Every right-minded scientist thought there was more. Right now we can only point to a small percentage of the genome, less than 10%, as having possible function. What if it was found out that 90% or even 50% was junk? After all, there are much larger genomes than human genomes, and very few would point to most of these genomes as having any use. ID would not look very good if this prediction was not proven out. So as it stands right now, a large percentage, certainly most of the genome, is not functional in any way we know about. It may or may not be functional, but that is for science to determine in the future. To come along and say that ID predicted it would have use, and then not to be able to show any use, is not very smart and, as I said, can make ID look foolish. It may turn out that there is use for most of the DNA, but until that is shown to be true, do not hold it up as something ID predicted that came true.
jerry
April 30, 2009 at 08:33 AM PDT
That is a strong and useful prediction, which can be verified or not. Maybe in the end darwinists will show that most non-coding DNA is really evolutionary garbage. That would certainly be a heavy blow for ID.
Is there an upper bound to the proportion of non-coding DNA turning out to have some function related to the organism in which it is located that is favourable to ID?
Alan Fox
April 30, 2009 at 08:19 AM PDT
Hoki: thank you for the clarifications. I will try to understand your Bayesian arguments as soon as I am in the right frame of mind. I do have problems with the Bayesian scenario; that was not an excuse. But again, that's just my personal problem. However, I hope I have made my points clear enough to be understood (or countered) without the help of a Bayesian analysis. :-)
gpuccio
April 30, 2009 at 08:10 AM PDT
Adel:
Whenever one says, “either A or B,” one has declared a disjunction. A disjunction is a logical statement. gpuccio’s rephrasing maintains the disjunction when he says “we have only two theories.” What makes his disjunction fallacious is the fact that there are more than two theories.
This was my original comment: "We have to explain X. At present, we have only two theories, A and B. A does not work. B works. At present, B is the best explanation (always waiting for any possible C). This is empirical reasoning."

OK, I had not taken into consideration "New Earth Creationism, Young Earth Creationism, Sanford's Genetic Entropy, Davison's Semi-Meiosis, Remine's Message Theory, etc.". Some of them are not even scientific theories for me. I refer to "New Earth Creationism, Young Earth Creationism" and all forms of creationism. Indeed, while I can respect creationism and creation science, I don't consider them scientific theories. Just my view. But whatever they are, creationist theories are certainly design theories. Others, like Sanford's and ReMine's, do not look like alternatives to ID. I have not read Sanford's book, but I cite from a review of it: "The conclusion is that we were created perfect, have been headed downhill ever since and the human race cannot be a thousand generations old yet. Solutions are not in better technology but a relationship with God who will take us out of this decaying creation at the proper time." That does not look like a non-design theory. In the same way, although I do not know Davison's theories in detail, I doubt that they are non-design theories, but I could be wrong. Regarding ReMine, again I am not an expert, but again I cite from a review of one of his books: "However Remine's message theory is unique, in that he adds that life was intentionally designed to look unlike evolution." So, why are you quoting design theories as alternatives to ID? Those are ID theories, although each of them has specific peculiarities.

But to go back to the "logical fallacy" argument, I think kairosfocus has pointed it out very correctly: "For, a disjunction per purported demonstrative 'proof' — as GP has said — is very different from alternatives presented in the context of inference to best explanation [IBE] per competing empirically testable hypotheses. That is, IBE is a species of induction in which a cluster of puzzling facts F1, F2, ..., Fn is on offer and alternative explanations E1, E2, ..., Em are put. Per criteria of relative warrant, e.g. factual adequacy, coherence and explanatory power and elegance, the currently best explanation is preferred." I would like only to repeat that: a) the choice of the best explanation is not a logical choice, but an empirical choice. Therefore, a logical disjunction or any similar logical apparatus is in no way necessary to choose the best empirical explanation. b) There is no need that only two theories exist. That is just my perception of the current situation for our problem. But if you want to include other theories (possibly non-design theories; design theories are not in competition with ID), then it's fine for me. The reasoning becomes: We have to explain X. At present, we have only five theories, A, B, C, D and E (or any other number you like). A, B, C, D do not work. E works. At present, E is the best explanation (always waiting for any possible F). This is empirical reasoning. Or can you show me a non-design theory which works?
gpuccio
April 30, 2009 at 08:05 AM PDT
jerry: I don't follow you. This FAQ is about predictions, not about verified predictions. There are already many demonstrations that at least part of non-coding DNA has regulatory functions, and other evidence that almost everything is transcribed. I think ID can and must make the prediction that all or almost all non-coding DNA will be found to be functional, and regulatory, even if it will be, as you say, in ways that we cannot now imagine. That is a strong and useful prediction, which can be verified or not. Maybe in the end darwinists will show that most non-coding DNA is really evolutionary garbage. That would certainly be a heavy blow for ID. But I think we can accept the risk.
gpuccio
April 30, 2009 at 07:34 AM PDT
kairosfocus, I think this FAQ should be shelved till further work on the genome is complete. There is no evidence that most, let alone all, of the genome has current-day function. They are transcribed, that is all. There is no evidence at present that these nucleotides transcribed to RNA then actually do anything to monitor, regulate, repair, catalyze or anything else within the cell. We have to wait on that. So for ID to crow about the supposed functionality of the DNA may turn into crow to eat when they cannot find this functionality. It may turn out that a good bit of these transcribed nucleotides do have function, but ID should wait. It also may be that the transcribed elements have use in a way we cannot now imagine.
jerry
April 30, 2009 at 06:56 AM PDT
Onlookers (and participants): Most current discussion on this thread on the fourth corrective of weak anti-design arguments now seems tangential. One can take that as a sign that the basic point has been made. Namely, that precisely because the design view is a distinct paradigm, it led people to insist that it was unlikely we knew enough about DNA to infer confidently that by far and away most of it was essentially non-functional "junk," the accumulation of accidents across time. And now, as results have rolled in in recent years, there has been a wave of support for what was in certain quarters quite mockingly dismissed only a few short years ago. In that context, the basic point in WAC 4 is supported. Now, I do see a few points that I would comment further on:

1] JM, 158: Calculations of FSCI

First, a threshold value is a metric. And, it remains true that there are no observed cases where 1,000-bit functional data strings have been observed to originate by chance or chance + necessity. Second, on the flagellum, I presented a simple estimate that puts it beyond that threshold quite comfortably. So, on the background that we have a known and frequently observed cause of FSCI, and we also have a good explanation for why the other main observed source of highly contingent outcomes, chance, will not be reasonably likely to give rise to FSCI, we are entitled to draw the conclusion that the observed DNA basis for the flagellum is designed. (And, BTW, mechanical necessity is not the actual source of highly contingent outcomes: we see that under certain conditions, forces act that push processes or phenomena along predictable lines, often to astonishing degrees of precision. Thus, from reliable natural regularity we infer to forces of mechanical necessity; Newtonian mechanics being the classic case. We infer to the known causes of highly contingent outcomes when we see that that is a dominant factor: chance and intent.) So, I provided the simplest type of calculation in the thread above, despite the dismissive objection made. (And, the line that I am providing pointless verbiage has long passed its sell-by date.) I find, too, that the comment I found is a mere dismissal; it provides neither [1] an observed case of FSCI beyond 500 - 1,000 bits shown to have originated spontaneously by chance + necessity, nor [2] evidence that the flagellum is sufficiently within that threshold that we can be confident the search resources of our planet or cosmos as observed are adequate to provide a probabilistically reasonable explanation. [Recall, 1,000 bits in a functional string specifies a contingency space of 2^1,000 ~ 10^301 possible configs, or ten times the square of the number of plausible quantum states of our observed cosmos across its reasonably estimated lifespan. The flagellum is credibly at least an order of magnitude beyond the threshold, simply on the DNA to code the required proteins.] Furthermore, at the more sophisticated level, as pointed out in WAC no. 27, Durston et al have published, with details of underlying numerical considerations, since 2007, a list of 35 measured values of FSC [cf. the table they published]; this being both a more technical form of what the FSCI calculation is about and a manifestation of CSI [the superset]. In turn, this method traces to the underlying CSI approach, which rests on the principle of finding small targets in large contingent spaces, multiplied by explorations of the space of functionality of various protein families. I therefore consider that your challenge has been answered at WAC 27, long since, on the standing record at UD.

2] "Censor[ship]" accusation

The searched-out comment by JM as reproduced in another forum carries the direct import that I have contributed to, or acquiesced in, the silencing of those who merely differ with me. This is not so; and such a false, loaded insinuation would easily explain why the comment in question was removed for cause. It also shows me the wisdom of not subjecting myself to the exemplified tone of debate -- and notice, I almost never use that term in a positive sense -- likely to dominate such a forum. Moreover, since there is abundant evidence that discussion on the merits is freely permitted at UD, while invective, privacy violation and general nastiness occur abundantly in other fora [and mostly come from the evo mat and friends side], I see no reason to transfer discussion to such fora. In short, JM: the issue is answered on the merits, and if you have a substantial response [which includes of course the peer-reviewed Durston et al table of calculated FSC values for proteins and the underlying analysis], I have no doubt that it would be entertained here. [After all, in recent weeks discussion of the Weasel 86 program ran to over 1,000 comments across several threads. My closing summary in my always-linked is in what is now Appendix 7.]

3] Adel, 163: "Whenever one says, 'either A or B,' one has declared a disjunction. A disjunction is a logical statement. gpuccio's rephrasing maintains the disjunction when he says 'we have only two theories.' What makes his disjunction fallacious is the fact that there are more than two theories. Among them are New Earth Creationism, Young Earth Creationism, Sanford's Genetic Entropy, Davison's Semi-Meiosis, Remine's Message Theory, etc."

Pardon an intervention on the issue of epistemological warrant on abduction vs disjunctive reasoning in the logic of deduction. For, a disjunction per purported demonstrative "proof" -- as GP has said -- is very different from alternatives presented in the context of inference to best explanation [IBE] per competing empirically testable hypotheses. That is, IBE is a species of induction in which a cluster of puzzling facts F1, F2, ..., Fn is on offer and alternative explanations E1, E2, ..., Em are put. Per criteria of relative warrant, e.g. factual adequacy, coherence and explanatory power and elegance, the currently best explanation is preferred. In the relevant context, chance and necessity [blind spontaneity] is one major family of explanations, and design is another [and all the cluster of alternatives you suggested are post design, dealing with proposed mechanisms and candidate designers]. The overarching design explanation is proffered on the criterion that there are certain features of observed designed systems that are strongly correlated with design and NOT with spontaneous occurrences. For instance, functionally specific complex digital information. For similar instance, function dependent on multiple mutually coupled parts that is broken when, for a certain core subset, any one part is removed. Similarly, coded algorithms instantiated in physical systems that effect said procedures. And more.

On abundant evidence, it is held that these are observed to be consistently associated with design. For, in cases where we directly know the causal story, this is observed without significant counter-instance, once a reasonable threshold of complexity is seen. [Per the Dembski Universal Probability Bound, that is at about 500 - 1,000 bits for practical purposes; though in contexts that are much narrower, these are very generous margins.] Further to this, we can easily see that 10^150 states is a reasonable upper search limit for the observed cosmos across its reasonable lifespan. 1,000 bits specifies a config space of 10^301 states; i.e. well over the SQUARE of the reasonable search limit of the observed cosmos. So, as long as function is sufficiently specific that arbitrary configurations will not be functional -- i.e. so long as function is even moderately vulnerable to perturbation of the information [think of randomly reshuffling letters in sentences to see what I am getting at] -- then it is highly reasonable to see that random search strategies on the scope of the observed cosmos would be maximally unlikely to find the shores of islands of function. [Indeed, odds of 1 in 10^50 are a reasonable threshold of practical -- as opposed to logical -- thermodynamic impossibility used in statistical thermodynamics; i.e. this is the same basic thinking that underlies the statistical form of the second law of thermodynamics.] Further to this, we know from observation and experience of design that designers are able to use their intelligence and imagination to create initial configurations of complex entities that are close enough to functional that, with relatively modest investigation and modification, they are able to achieve function. That is, the active information that cuts down the search space to manageable proportions is known to originate in intelligent action, per empirical investigation. So, we have excellent reason to see that we have identified the best explanation, with a causal framework and a reason why we see what we see: that intelligence is the cause of the sorts of complex organisation we are discussing. This is generally uncontroversial. But, it is now a matter of hot debate in the science of origins because it tends to support the idea that intelligence may have had something to do with the origin of life and the observed, life-facilitating, fine-tuned cosmos. So, the root issue is a worldviews clash, not really a matter of whether the detection of design per empirical investigation is reasonably feasible. Our courts are abundant demonstration that yes, such is feasible, and that there are reliable signs of intelligence. GEM of TKI
kairosfocus
April 30, 2009 at 02:28 AM PDT
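The arithmetic in kairosfocus's comment above is easy to check directly. A minimal sketch in Python (the language is my choice; the 1,000-bit threshold and the 10^150 search bound are the figures quoted in the comment, and nothing else is assumed):

    import math

    bits = 1000                # the FSCI complexity threshold quoted above
    search_bound_exp = 150     # log10 of the quoted cosmic search limit, 10^150

    # log10 of the 2^1000 configuration space specified by a 1,000-bit string
    space_exp = bits * math.log10(2)

    print(f"2^{bits} is roughly 10^{space_exp:.0f} configurations")      # ~10^301
    print(f"Square of the search bound: 10^{2 * search_bound_exp}")      # 10^300
    print(f"Space exceeds the bound by ~10^{space_exp - search_bound_exp:.0f}")

Run as-is, this reproduces the 10^301 figure and confirms that it exceeds the square of the quoted 10^150 bound.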
Diffaxial: Thank you for clarifying better what you meant by "weak" and "strong". I can now better specify my position.
Did not Behe state at Dover that the only justified inference about the designer is that it is capable of design?
I agree with Behe, at the present state of knowledge. That does not mean that other aspects of the designer or of the designer's methods are not open to scientific inquiry or to scientific hypotheses. But the present evidence allows us more or less only to infer design.
Other ID advocates are much more bold, in that in addition to claiming the above, much more explicit assertions are made: the designer front-loaded information into one or a few of the first cells, to unfold for billions of years thereafter. The designer is God or otherwise supernatural. There can be only one designer. The designer acted by effecting "sudden appearances," etc. And so on.
My positions on those points (many times explicitly stated on this blog): a) "The designer front-loaded information." I respect front-loading as a possible scenario, but personally don't believe in it. Some evidence is in that sense, and much evidence is against. b) "The designer is God or otherwise supernatural." While I personally believe that the designer is God, that belief has nothing to do with ID. From a scientific point of view, I firmly and sincerely believe that with present evidence we cannot say who the designer is. And, as I have said many times, the word "supernatural" means nothing to me, unless more strictly defined. c) "There can be only one designer." There is absolutely no reason for that statement. Indeed, some evidence could point to different conclusions, but the point is absolutely open. d) "The designer acted by effecting 'sudden appearances.'" The chronological modalities of design implementation are very interesting, and IMO completely and immediately open to inquiry according to the data which constantly come from the study of natural history. As I have recently stated on another thread, my personal idea, according to what we know now, is that design implementation occurred both in more "acute" form (OOL, the Ediacaran and Cambrian explosions) and in more gradual form (speciation). But that's only my personal opinion. There's no reason for ID to prefer one modality or the other, beyond what comes from research. So, as you can see, from those points I would be rather a "weak" IDist, according to your standards. But I do maintain that the concept that a designer implements function and purpose in his designs remains true, and that is another matter altogether. That concept comes from the concept itself of design, and from what we know of the process of design in human design, the only observable model, which we can observe both subjectively and objectively. The process of design is "always" linked to purpose and function. Design is by definition teleological, because it is the motivated output of a conscious representation. We may not recognize and understand the purpose or the function, but it is there just the same. So, to those who ask why we should think that the designer of biological information has the same purposes as humans, I would answer: we don't know that, but we do know that he has purposes. And we, as humans, are potentially capable of understanding purposes, even if different from ours. Whether we are empirically capable of partially or completely achieving that result remains to be seen. So, to sum up, in this sense I am a weak IDist who believes that the expectation of function is implicit in the existence of a designer, and is not a further assumption about him. And who believes that further assumptions about the designer are open to a scientific approach, although at present they remain at the state of unsupported hypotheses.
I don't have any remarks to make regarding your many arguments by assertion: "not one single phenomenon attributable to design has ever been shown to have other explanations," "Structures which are recognized as designed are designed," "No false positives," your "predictions WILL be verified."
In order: a) "Not one single phenomenon attributable to design has ever been shown to have other explanations." That's simply true. I obviously mean "attributable to design" by the ID inference. Have you any counter-example? b) "Structures which are recognized as designed are designed." That's only the same point as a). c) "No false positives." Again the same point. Have you any example of false positives? I mean, outside biological information, which remains the controversial issue, can you offer an example where a design detection process with the quantitative threshold given by Dembski gave a false positive? d) My complete paragraph was: "So, what I am saying is not that we will find saltations: saltations are everywhere under our eyes, in the biological world. I am saying that, when the progress of biological research allows us to evaluate those saltations quantitatively, with all the necessary knowledge of the relationship between protein structure and function to definitely evaluate the target space of a search, and enough information about natural history to definitely build a model with some detail, it will be obvious that the observed saltations will not be explained by a random variation + NS model. That is a prediction, and it will be verified. For those verified saltations, quantitatively and qualitatively the same as those observed in human design products, design remains the best explanation, indeed the only available explanation." Well, that's a prediction, isn't it? That was what you were looking for. Obviously, the statement that it will be verified is just my opinion. But the prediction is there.
I don’t see any testable predictions arising uniquely from ID in the above. Offer testable predictions in these domains and you will strengthen this FAQ, and the work you describe may become something resembling science.
a) "Quantitative research about the size of protein function space, aimed to the qunatification of the target space." The target of functional space will be shown to be so small that the probability of achieving a target space in most, or all, hypothesized protein transitions independent from NS will be shown to be so low that those transitions will be obviously empirically impossible by RV alone. IOW, the "islands of functionality", of protein superfamilies at least, will be shown to be completely separated by oceans of non functional search space totally unsurmountable by RV, and not available to NS. b) "Analysis of as many genomes as possible, to try to asses in detail the natural history of protein emergence." That's fundamental for the establiahment of true natural history of protein emergence. The prediction? Once protein natural history is known enough in detail, it will be obvious that new proteins, totally unrelated to others, have constantly emerged in relatively short times, and that will make possible to apply to specific models the quntitative analysis suggested at point a). c) "Quantification of the functional information content in as many protein families as possible, by the Durston method or other approaches." The prediction? In most independent proteins (at least, protein superfamilies, the functional information content will be shown to be well higher than any possible RV engine can ever explain, given all the possible biological resources. d) "Research about the transcriptome dynamics in the process of differentiation." The prediction? That will show, sooner or later, where the regulation information is, and how big it is, Then it will be possible to apply quntitative analysis to that information too, and not only to the protein structure information. And the regulation information (the "procedures") will be shown to be much richer in functional information content than mere protein structure information, adding heavily to the credibility of the design explanation. e) "Research about the regulation of alternative splicing and of post translational modifications." The prediction: that will prove that further written information and procedures guide those processes, and will allow us to understand where that information is, and how big and complex it is. Same point as in d).
Claims that one possesses “the best explanation,” absent the process I describe (necessary entailments, empirical tests of same such that your theory or elements thereof are placed at risk of disconfirmation), are claims only, and your reports of certainty regarding same are reports of subjective confidence only.
I have tried to give you as many objective points as possible. My subjective confidence remains untouched, but that's only my problem. Perhaps I am a Bayesian after all, and like betting. :-)
gpuccio
April 30, 2009 at 12:02 AM PDT
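gpuccio's point c) above refers to the Durston method for measuring functional sequence complexity in "fits" (functional bits). A simplified sketch in Python of the site-wise entropy calculation that method rests on; the four-sequence alignment is a made-up toy, and the per-site formula (null entropy minus observed entropy, summed over sites) is a simplification of the published measure:

    import math
    from collections import Counter

    def site_entropy(column):
        """Shannon entropy (bits) of the residue distribution at one aligned site."""
        counts = Counter(column)
        total = sum(counts.values())
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    def fits(alignment):
        """Sum over sites of (null entropy - observed entropy), in functional bits."""
        h_null = math.log2(20)  # maximum per-site entropy over the 20 amino acids
        return sum(h_null - site_entropy(col) for col in zip(*alignment))

    # Toy alignment of four short "functional" sequences (illustrative only).
    toy_alignment = ["MKVLAG", "MKVLSG", "MRVLAG", "MKVIAG"]
    print(f"Estimated FSC: {fits(toy_alignment):.1f} fits")

Fully conserved sites each contribute nearly the full log2(20), about 4.3 bits, while variable sites contribute less, so longer and more conserved families score more fits.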
GP:
I still don't understand what you mean by "weak ID", least of all why you make such a strange statement. In your #69 you state: "'weak ID' (merely asserting the possibility of design detection) and 'strong ID' (positive assertions about design)"
In a considerable quantity of ID literature the assertion is repeated that ID is about design detection, only, and that ID makes no claims about the characteristics, powers, or methods employed by the designer. Demands for more are demands to provide the notorious "pathetic level of detail." Did not Behe state at Dover that the only justified inference about the designer is that it is capable of design? Other ID advocates are much more bold, in that in addition to claiming the above, much more explicit assertions are made: the designer front-loaded information into one or a few of the first cells, to unfold for billions of years thereafter. The designer is God or otherwise supernatural. There can be only one designer. The designer acted by effecting "sudden appearances," etc. And so on. Surely you are not denying the above. I chose to designate the first "Weak ID," and the second "Strong ID." You don't like my designations. So be it. I don't have any remarks to make regarding your many arguments by assertion: "not one single phenomenon attributable to design has ever been shown to have other explanations," "Structures which are recognized as designed are designed," "No false positives," your "predictions WILL be verified." A refutation with equal merit? "Meh. Sez you."
Some suggestions? Quantitative research about the size of protein function space, aimed at the quantification of the target space. Analysis of as many genomes as possible, to try to assess in detail the natural history of protein emergence. Quantification of the functional information content in as many protein families as possible, by the Durston method or other approaches. Research about the role of non-coding DNA, especially introns and transposons. Research about the transcriptome dynamics in the process of differentiation. Research about the regulation of alternative splicing and of post-translational modifications. And so on. All of that is research which is extremely pertinent to ID theory. And please, don't answer that most of that research is already in some way being done by conventional biologists. That was exactly my point.
I don't see any testable predictions arising uniquely from ID in the above. Offer testable predictions in these domains and you will strengthen this FAQ, and the work you describe may become something resembling science.
That’s not true. Best explanations are not subjective. If you have a set of data, either you can explain them or you can’t. If you have an explanatory theory, that’s quite different from not having any theory.
You misread my statement. Claims that one possesses "the best explanation," absent the process I describe (necessary entailments, empirical tests of same such that your theory or elements thereof are placed at risk of disconfirmation), are claims only, and your reports of certainty regarding same are reports of subjective confidence only. To firmly yoke your claims to evidence, you need to put them on the line in a testable manner.
Diffaxial
April 29, 2009 at 05:34 PM PDT
gpuccio, I'm not surprised if you didn't understand my post. It was a terrible one. I forgot to mention that weak ID was supposed to be a posterior probability argument and strong ID a likelihood one. ID predicting anything regarding junk DNA would be strong ID/likelihood. Note to self: read what I write.
Hoki
April 29, 2009 at 03:19 PM PDT
On logical fallacies: gpuccio [30] wrote:
There are not many “theories of origins”. If RV and NS are ruled out, what are we left with? I’ll tell you. Design, or no theory.
To which I responded [70]:
This is the logical fallacy of false disjunction
gpuccio countered [95]:
Not again, please… I think we had already clarified that this is not a logical point. And so there is no logical fallacy. I had clarified that even to Diffaxial: "Yes, it does amount to support for ID. Again, you forget that we are talking empirical science here, and not logical demonstrations. I am really surprised at how often I have to repeat this simple epistemological concept, which should be obvious to anybody who deals with empirical science." It is not: Either A or B. Not A. Therefore B (logical disjunction). But rather: We have to explain X. At present, we have only two theories, A and B. A does not work. B works. At present, B is the best explanation (always waiting for any possible C).
Whenever one says, "either A or B," one has declared a disjunction. A disjunction is a logical statement. gpuccio's rephrasing maintains the disjunction when he says "we have only two theories." What makes his disjunction fallacious is the fact that there are more than two theories. Among them are New Earth Creationism, Young Earth Creationism, Sanford's Genetic Entropy, Davison's Semi-Meiosis, Remine's Message Theory, etc. Moreover, as Allen MacNeill has pointed out, the so-called Modern Synthesis is dead (or dying) and a new theoretical framework in evolutionary biology is emerging. New and better scientific theories are always in the cards, as the history of science has demonstrated. Therefore, if current evolutionary theory is ruled out, a cornucopia of alternative theories is waiting in the wings, and ID is not proven thereby. As others have noted on this thread and elsewhere, ID still must prove itself in a positive way. To the laboratories!
Adel DiBagno
April 29, 2009 at 02:26 PM PDT
jerry, I also gave a reply to your ENCODE question, but it was held up in moderation for a long time, in case you missed it. It's at #149.
Dave Wisker
April 29, 2009 at 01:42 PM PDT
Hoki: Thank you for your argument. I am not sure I understand it fully (I don't know why, but when I read about Bayesianism I get a headache after one minute!). And I can't really see the relationship with Diffaxial's weak or strong ID. My limit, probably. So, I will try to do what I can do: explain myself (and, I hope, ID) more clearly. ID does not invent the relationship between CSI (or FSCI) and the process of design. That relationship is constantly observed in human design. Moreover, no CSI is ever observed outside of a design process (except for biological CSI, which is the one at issue). That's the empirical basis of the hypothesis: CSI is a product of design, and of design only. Please, take notice that such a hypothesis has never been falsified, not even once, which should mean something. The second part is: biological information, whose origin is unknown to us, does exhibit CSI. Indeed, a lot of CSI. Indeed, tons of CSI. Therefore, it seems very plausible to interpret it as the product of design. Moreover, all the other explanations given up to now are bogus. That's the simple reasoning. With all respect for Bayesians, whom I sincerely admire (as I admire all those who can understand what gives me a headache). :-)
gpuccio
April 29, 2009 at 01:41 PM PDT
Here is what I think Diffaxial means by weak vs strong ID. (I rarely tend to cut into other people's conversations, but Diffaxial's point is, I think, important [plus the fact that I have also brought up the same thing in this thread].) In Bayesianism, one differentiates between posterior probabilities and likelihoods. The likelihood of a hypothesis (H) is the probability that H confers on an observation O, while the posterior probability is the probability that O confers on H. Sounds cryptic so far, but bear with me. Say, for example, that you have lots of noises coming from your attic. You hypothesise that the noises emanate from gremlins playing bowling. This is a noisy endeavour, so your hypothesis has a high likelihood (i.e. given your hypothesis, you are very likely to get your observations). I doubt that anyone would argue that the converse is probable, however. I.e., given the observation that there is noise coming from the attic, the probability that this is due to gremlins playing bowling is low (i.e. the posterior probability is low). It seems to me like ID is framed as a posterior probability argument. Given an observation of, for example, CSI, ID says that there is a high probability (one, in fact) that this is due to intelligent intervention. However, I have argued that given intelligent intervention, the likelihood of any specific observation is very low. At least, I think this is what Diffaxial meant...
Hoki
April 29, 2009 at 01:28 PM PDT
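The posterior-versus-likelihood distinction Hoki draws above is just Bayes' rule. A minimal sketch in Python of the gremlin example; every probability below is a made-up illustrative number, not anything taken from the thread:

    def posterior(likelihood_h, prior_h, likelihood_not_h):
        """P(H|O) via Bayes' rule, from P(O|H), P(H) and P(O|not H)."""
        evidence = likelihood_h * prior_h + likelihood_not_h * (1 - prior_h)
        return likelihood_h * prior_h / evidence

    # Bowling gremlins would very likely produce attic noise (high likelihood)...
    p_noise_given_gremlins = 0.99   # P(O|H), assumed
    p_gremlins = 1e-9               # P(H): a vanishingly small prior, assumed
    p_noise_otherwise = 0.30        # P(O|not H): mundane causes of noise, assumed

    # ...yet the posterior probability of gremlins given the noise stays tiny.
    print(posterior(p_noise_given_gremlins, p_gremlins, p_noise_otherwise))
    # ~3.3e-09: a high likelihood does not buy a high posterior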
Hoki: #111 and #152
I might be misunderstanding you here, but are you saying that since some of the DNA is functional, we somehow assume that the rest of it is and therefore ID predicts that most DNA will have function. Sounds a tad circular to me.
It's not circular. By design detection, we infer that the protein-coding genome was designed, because we know its function and can evaluate its complexity. From that, we hypothesize, quite naturally, that the whole genome was designed. You may ask why: very simply, if a designer designed the protein-coding part, that is the most natural hypothesis. To hypothesize that the protein-coding part was designed, and the rest is the product of random variation, seems a little bit bizarre, at least to me. So, the most natural hypothesis under the design scenario is: all the genome is designed. We understand the function of 1.5% of it. The function of the rest will be understood in time. Your reference to human genetic engineering is interesting. Indeed, humans use a variety of techniques in protein engineering, for instance. Some of them benefit from the (scarce) understanding we have of the protein structure-function relationship. Partial random search is used, too. But above all, intelligent selection is used. And the results, in the end, are intelligent products. Your distinction about phenotype and genotype does not mean much. A designer will probably act on the genotype and measure on the phenotype, unless he can directly check the information implemented in the genotype. Anyway, the hypothesis is that the designer has enough control to get 20,000 protein-coding genes, perfectly functional, interspersed in a 3-gigabase genome, split into a great number of exons, controlled by procedures we still don't understand, capable of generating thousands of different ordered transcriptomes for different, ordered states and cell types, of checking for errors, of responding to a lot of different challenges, of generating the macroscopic form of the organism, and so on. That does not seem the kind of designer who accumulates 98.5% useless code. All that seems very much common sense to me. A child would probably see that without any difficulty. But darwinists have probably lost their common sense a long time ago...
gpuccio
April 29, 2009 at 01:01 PM PDT
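The 1.5% coding figure gpuccio works from above can be sanity-checked with back-of-the-envelope arithmetic. A rough Python sketch; the average coding-sequence length is an assumed round number, not a value cited in the thread:

    genes = 20_000               # protein-coding genes, as cited in the comment above
    avg_cds_bp = 1_500           # assumed average coding sequence (~500 codons)
    genome_bp = 3_000_000_000    # ~3 gigabases, as cited

    coding_fraction = genes * avg_cds_bp / genome_bp
    print(f"Coding fraction: {coding_fraction:.1%}")  # ~1.0%, the same order as the cited 1.5%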
To Kairosfocus: Since I'm being censored here, I invite you to continue the discussion in a neutral venue. If you go to http://groups.google.com/group/talk.origins and search for "Kairosfocus" you'll see where I've started the discussion anew. I sincerely hope you'll join me. I'd like to finish our chat. JJ
JayM
April 29, 2009 at 12:48 PM PDT
Attn: Kairosfocus and moderators
I see that a post I made refuting Kairosfocus' claim to have computed the CSI of a biological construct, taking into account known evolutionary mechanisms, has been removed. That post was perfectly polite and well within both the documented moderation guidelines and the behavior of others on this forum. So much for the new, open UD. JJ
JayM
April 29, 2009 at 12:10 PM PDT
Diffaxial #151:
most of ID proper, including this website, has most often advanced weak ID.
I still don't understand what you mean by "weak ID", least of all why you make such a strange statement. In your #69 you state: "'weak ID' (merely asserting the possibility of design detection) and 'strong ID' (positive assertions about design)". To be clear, ID in a strict sense is about how to detect design when it is possible to detect it. That does not mean that it does not make positive assertions about design. The only point is that simple design detection does not necessarily give information about the designer and his purposes or means of implementation. Design detection detects design. But the nature of design and its phenomenology is obviously a premise of ID. So, again, I think that your concept of "weak" or "strong" ID is completely specious. There is no such division. There is the science of design detection, with its quantitative and formal methods, and there is a broader science of design as a phenomenological reality. The two are strictly connected.
In my opinion you are mistaken. The phenomena that can be explained by means of "randomness or…necessity mechanisms" shift as research within evolutionary biology progresses, leaving for design explanations only those phenomena that are not swept up by the competing research project.
IMO you are mistaken. If you apply the correct methods of design detection, not one single phenomenon attributable to design has ever been shown to have other explanations. That's why we say that the method of design detection has practically no false positives. So, yours is only wishful thinking and mythology.
Hence the boundary beyond which design can be claimed arises not from necessary positive entailments of ID theory, nor from research motivated by ID theory, but from research that continues without taking notice of or receiving useful input from design theory.
There is no boundary. The structures which are recognized as designed are designed. See previous point. No false positives.
A moment ago you approached the threshold of making an empirically verifiable prediction arising from a “strong” form of ID (that designed forms must include saltations) but backed away to a “weak” position (the prediction of saltations arises from the logic of design detection, not as a necessary entailment of the design hypothesis.)
I must say that you really don't understand ID. So I shall try to be even more clear. If something is designed, there is always a saltation. When the designer inputs information, he always creates a saltation. The result he models would not have occurred by itself. It is marked by specification. The only problem is that if the information inputted by the designer is simple, then we can still recognize the specification, but we have no way, unless we have observed the design process or know the designer, to be sure that it is not a "pseudo-specification" arisen by some random process. That's why design detection has many false negatives: simple designs cannot be detected with certainty by the method of design detection used by ID. So, what I am saying is not that saltations will be there because my method recognizes saltations, as you seem to believe. I am saying that saltations will be there in all designed things, but that in all cases where design is complex enough, they will be recognizable with certainty. But the presence of saltations is a characteristic of design, and the presence of complex saltations is an intrinsic characteristic of all design whose complexity is beyond a certain threshold. Now, we know perfectly well that almost anything we can observe in living beings is highly complex. Proteins are just the first example. So, what I am saying is not that we will find saltations: saltations are everywhere under our eyes, in the biological world. I am saying that, when the progress of biological research allows us to evaluate those saltations quantitatively, with all the necessary knowledge of the relationship between protein structure and function to definitely evaluate the target space of a search, and enough information about natural history to definitely build a model with some detail, it will be obvious that the observed saltations will not be explained by a random variation + NS model. That is a prediction, and it will be verified. For those verified saltations, quantitatively and qualitatively the same as those observed in human design products, design remains the best explanation, indeed the only available explanation.
But as we’ve seen in this discussion, absent tests of necessary entailments of a given theory, “best explanation” remains entirely subjective, as IMHO design is an explanation that doesn’t explain within biology.
That's not true. Best explanations are not subjective. If you have a set of data, either you can explain them or you can't. If you have an explanatory theory, that's quite different from not having any theory. And design (which is an observed, empirical process) does explain designed things, both in biology and in all other fields.
Diffaxial: "Describe empirically testable entailments that arise from ID theory, such that failure to observe those predicted entailments places the theory at risk of disconfirmation."
I have done that. If, when more is known, the darwinist model can generate a credible quantitative explanation, ID is falsified. We only need the necessary information about protein functional space and natural history of protein emergence, and then we will be able to make detailed calculations.
Diffaxial: "Yet no one seems able to describe the research that would arise from such guidance. Such descriptions are free, and have ample venues on the internet within which they may be proposed. But...?"
Some suggestions? Quantitative research on the size of protein function space, aimed at quantifying the target space. Analysis of as many genomes as possible, to try to assess in detail the natural history of protein emergence. Quantification of the functional information content of as many protein families as possible, by the Durston method or other approaches. Research on the role of non-coding DNA, especially introns and transposons. Research on transcriptome dynamics in the process of differentiation. Research on the regulation of alternative splicing and of post-translational modifications. And so on. All of that is research which is extremely pertinent to ID theory. And please don't answer that most of that research is already in some way being done by conventional biologists. That was exactly my point.
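For the Durston-method item above, here is a minimal sketch of the core calculation, under the simplest reading of the method: functional bits (Fits) as the per-site drop from the null-state entropy (log2 20 for equiprobable amino acids) to the entropy observed across an alignment of functional sequences, summed over sites. The four-sequence alignment is a hypothetical toy; a real analysis would use a large curated protein family and handle gaps and sampling bias.

```python
import math
from collections import Counter

NULL_BITS = math.log2(20)  # null state: 20 equiprobable amino acids, ~4.32 bits

def column_entropy(column: str) -> float:
    """Shannon entropy (in bits) of one alignment column."""
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in Counter(column).values())

def fits(alignment: list[str]) -> float:
    """Functional sequence complexity in Fits: the reduction from the
    null-state entropy to the observed entropy, summed over all sites."""
    return sum(
        NULL_BITS - column_entropy("".join(seq[i] for seq in alignment))
        for i in range(len(alignment[0]))
    )

# Hypothetical toy alignment: a fully conserved site contributes the full
# ~4.32 bits; a variable site contributes less.
toy_alignment = ["MKVA", "MKIA", "MRVA", "MKVG"]
print(f"{fits(toy_alignment):.2f} Fits")  # ~14.85 for these four sequences
```

One feature of this measure worth noting: it depends only on observed sequence data, so it can be computed for any protein family without assumptions about how the family arose.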
Diffaxial: "But observations serving these other purposes, absent the relationship between theory and test I describe, are not alone sufficient to establish an investigation as a scientific investigation, nor to increase confidence in a theory by empirical means."
I deeply disagree with your view of science. I am happy that it is not you who gets to decide what establishes "an investigation as a scientific investigation", or other such things. For me, observations and the capacity to give a credible explanation of them remain the foundation of science. And the incapacity to explain observations, as shown so brilliantly by darwinian theory, remains the hallmark of non-science. Anyway, I have also given you the most important prediction which, IMO, will be able to discriminate between the two "explanations".
Diffaxial: "Similarly, reinterpreting others' data may have a role in science, but absent, for example, replication studies that test predictions that arise uniquely from your own, competing theory, what you are left with is data that is amenable to two interpretations, with no reason to prefer your reinterpretation over the paradigm that originally generated the data."
I state again that the progress of biology is bound to allow a clear confrontation between darwinian theory and ID theory. Your phrase about "the paradigm that originally generated the data" is for me very clear proof of your cognitive bias. Data are never generated by a paradigm. Data are data. You go on thinking that, if data were discovered by people who believe in a paradigm, or in the hope of supporting a paradigm, they are in some way the property of that paradigm. That is absolutely false, even offensive to any conception of science. If data are interpreted in an incorrect way (which is the rule in the darwinian paradigm), it is the duty and the privilege of anybody else to point to that error and to suggest a better interpretation. And there is always reason to prefer a theory which explains the data to a theory which doesn't.

gpuccio
April 29, 2009, 11:10 AM PDT
So it seems the argument for why ID predicts that there should be little junk DNA is that, in our experience, human designers make functional things. A favorite analogy seems to be the functionality of computer code. However, if the experience of human intelligence should be used in the first place, we should perhaps use human genetic engineering as a model instead. Here, genetic fragments are spliced together in test tubes, and the results of such procedures are far from predictable. Multiple insertions of the desired fragment (as well as of non-intended fragments) occur constantly, along with deletions and more. The easiest way to check whether you got your desired genetic fragment is to check the PHENOTYPE of the organism you are engineering. The phenotype will most likely say very little about the genotype, e.g. whether you got junk inserted. So, if we were to use the assumption that the designer did something like the above, we could reasonably predict that we should find junk DNA. Does anyone think that my assumption is better or worse than the one used by Dembski et al.? Can you even remotely put a number on the plausibility of either?

Hoki
April 29, 2009, 9:37 AM PDT
gpuccio at 110
gpuccio: "1) I understood what you meant by 'weak' and 'strong', but I have no room for 'weak' ID."
Then mine was a "garden path" error induced by your remark, "If you knew ID," because most of ID proper, including this website, has most often advanced weak ID.
gpuccio: "I am just referring to ID methodology, which uses complexity as a tool to avoid false positives due to random effects. That has nothing to do with any 'weak' conception of ID... That's completely wrong. The threshold is designated to avoid false positives due to randomness, not in light of a competing theory. Indeed, the concepts of design detection were not created by ID, and apply, as you know, to many other fields. Avoiding false positives due to randomness or to necessity mechanisms is an essential requirement of scientific design detection, and is not done 'in light of a competing theory'."
In my opinion you are mistaken. The set of phenomena that can be explained by means of "randomness or... necessity mechanisms" shifts as research within evolutionary biology progresses, leaving for design explanations only those phenomena that are not yet swept up by the competing research project. Hence the boundary beyond which design can be claimed arises not from necessary positive entailments of ID theory, nor from research motivated by ID theory, but from research that continues without taking notice of or receiving useful input from design theory.
gpuccio: "That's simply not true. We have done that many times. I was recently commenting on exactly that in the thread 'Extra Characters to the Biological Code'. Therefore, I see no 'embarrassing fact'."
Neither does the guy trailing TP from his pants unawares see an "embarrassing fact." :) Would you please provide some references that document the application of the explanatory filter to a biological structure of unknown origin (natural vs. necessarily designed), and that include computations of the relevant probabilities?
gpuccio: "I don't understand. One moment you seem to admit that my strong ID makes predictions; the next moment you deny it. I stick to the prediction I made about informational saltations."
A moment ago you approached the threshold of making an empirically verifiable prediction arising from a "strong" form of ID (that designed forms must include saltations) but backed away to a "weak" position (the prediction of saltations arises from the logic of design detection, not as a necessary entailment of the design hypothesis). The latter reminds me of my theory of lost coins: I predict that lost coins are distributed such that they will all be found near streetlights. My theory of coin detection specifies a nighttime search.
gpuccio: "And, in classical hypothesis testing, rejection of the null hypothesis does not comprise a test of the alternative hypothesis, but the alternative hypothesis can be affirmed if it is the 'best explanation'."
But as we've seen in this discussion, absent tests of necessary entailments of a given theory, "best explanation" remains entirely subjective, as IMHO design is an explanation that doesn't explain within biology. How to resolve such impasses? Describe empirically testable entailments that arise from ID theory, such that failure to observe those predicted entailments places the theory at risk of disconfirmation.
gpuccio: "I simply believe that ID could very well guide empirical research, if it were accepted..."
Yet no one seems able to describe the research that would arise from such guidance. Such descriptions are free, and have ample venues on the internet within which they may be proposed. But...?
gpuccio: "But if you really believe that observations merely serve as tests of existing theories, I am afraid you have a really strange conception of science and knowledge."
It does not follow from the assertion that the epistemological foundation of science lies in theoretical prediction, and in empirical tests of such predictions, that the only role for observation in science is to conduct such tests. Of course observations serve other purposes. But observations serving these other purposes, absent the relationship between theory and test I describe, are not alone sufficient to establish an investigation as a scientific investigation, nor to increase confidence in a theory by empirical means.
gpuccio: "But reinterpreting others' data is absolutely science, and a very essential part of it."
Similarly, reinterpreting others' data may have a role in science, but absent, for example, replication studies that test predictions that arise uniquely from your own, competing theory, what you are left with is data that is amenable to two interpretations, with no reason to prefer your reinterpretation over the paradigm that originally generated the data. Ultimately, you've got to roll up your sleeves in the manner I describe. Absent that, you aren't doing science.

Diffaxial
April 29, 2009, 9:26 AM PDT
Gentlemen: No one said that the issue of junk DNA was enough to establish design theory! It serves as a simple, direct example of a successful prediction, where the majority paradigm plainly led to decades of going down the wrong road in interpreting non-genetic-code DNA. (Go back up and see the actual citations . . . ) And design thinking pointed in the right direction, swimming strongly against the tide, precisely because thinking in terms of design allowed ID thinkers to expect what seemed unreasonable to those whose thought pattern was different. And now, against the odds, that odd-man-out view is being substantiated.

[For me this is a matter of direct memory: only several years ago I recall the reasons why Junk DNA is Junk being trotted out and thrown in my face with mocking scorn, as though this proved that I was an ignoramus butting in where I had no business being. Never mind that my background precisely equips me to think about digital information in algorithmically functional contexts, just what we see DNA and its co-molecules providing. So, when I see what looks a lot like "ho hum, so what else do you have . . .", I think that that too is telling.]

GEM of TKI

kairosfocus
April 29, 2009, 9:04 AM PDT
People, where do we draw the line? When do we say enough is enough? No more get-out-of-jail-free cards for evolution "theory"? Is the mere fact that evolution clouds our understanding not a good enough reason to abandon the sacred cow?

Polanyi
April 29, 2009, 8:06 AM PDT