Uncommon Descent | Serving The Intelligent Design Community

Friday Musings: The Credible Versus The Incredible


When considering design versus no design in both cosmology and biology, one thing seems strikingly obvious: The default position is backwards.

Concerning cosmology, the fine-tuning of the universe for life would appear to be prima facie evidence for design. One can either choose to believe (at least provisionally) that this is the case, based on some evidence, or one can choose to believe in an infinitude of hypothetical alternate universes, which are in principle undetectable, based on no evidence.

Concerning biology, complex information and information-processing systems, plus highly sophisticated, functionally integrated machinery, would appear to be prima facie evidence for design. One can either choose to believe (at least provisionally) that this is the case, based on some evidence, or one can choose to believe that life, its diversity, and its information, information-processing systems, and machinery arose spontaneously by purely materialistic means, based on no evidence, only speculation analogous to alternate-universe speculation. (Sorry, bacterial antibiotic resistance won’t do the trick.)

Thus, at least among many intellectual elites and others, the incredible is given precedence over the credible as the default position. How did we arrive at this curious state of affairs?

Comments
Continuation is in the pending bin; it has two hyperlinks in it. GEM
kairosfocus
April 2, 2007 at 12:15 AM PST
DS: I think there are points of agreement and points where a few additional remarks will be helpful. Accordingly:

1] Sensitivity: The core of the issue is that we have dozens of parameters which are locally quite sensitive in aggregate, i.e. slight [or, in some cases, modest] changes relative to the current values will trigger radical shifts away from the sort of life-habitable cosmos we observe. Further, as Leslie has noted, in some cases the Goldilocks-zone values meet converging constraints. That gives rise to the intuitions that we are looking at complex, co-adapted components of a harmonious, functional, information-rich whole, and that such an entity is not likely to have been arrived at by chance. Dr Robin Collins et al have sought to quantify that intuitive insight through the development of mathematical models based on commonly accepted principles of probabilistic reasoning. This I have noted on, linked, and summarised above. (BTW, this is quite similar, though in a different context, to what Dr Dembski has done with his explanatory filter and concept of complex, specified information, which incorporates Dr Behe's irreducible complexity as a special and important case.)

2] "If it were probability there would be no argument because there's no way to know the probability of any physical constant or law not being the sole unalterable value that we observe today." Here, the operative word is "know." This raises also the essence of probability: if something is certain, its probability is 1; if it is impossible, the value is 0. In between, there is an inherent uncertainty and/or ignorance about the issue. But that does not preclude the ability to make the credible claim that we have a "knowledge" of the relevant probability. (In a simplistic case, a die has six faces, and for practical purposes, once it tumbles, the uppermost one is a matter of probability in light of the Laplacian principle of indifference. Namely, a priori, each face has probability 1/6, and that before we carry out statistical studies.) Obviously, this value is provisional -- a statistical study may expose an imbalance due to inevitable accidental variation, or may lead us to suspect deliberate loading. However, once we recognise that there is such a thing as provisionality (and thus fallibility) in commonly accepted knowledge -- including factual, historical, scientific, and post-Gödel mathematical knowledge-claims -- your just-cited claim becomes problematic.

Pausing . . .
kairosfocus
April 1, 2007 at 11:55 PM PST
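As an aside on the die example just above: a minimal sketch (the weights are illustrative assumptions, not from the original exchange) of how the a priori 1/6 indifference assignment is checked, and possibly revised, by a statistical study:

```python
# Sketch of the Laplacian indifference assignment for a die, and of how
# a frequency study can expose loading. Weights here are illustrative.
import random
from collections import Counter

def face_frequencies(weights, trials=100_000):
    """Roll a (possibly loaded) six-sided die and tally face frequencies."""
    faces = [1, 2, 3, 4, 5, 6]
    rolls = random.choices(faces, weights=weights, k=trials)
    counts = Counter(rolls)
    return {f: round(counts[f] / trials, 3) for f in faces}

# A priori, indifference assigns each face probability 1/6 ~ 0.167.
print("fair die:  ", face_frequencies([1, 1, 1, 1, 1, 1]))
# A hypothetical loaded die: face 6 is three times as likely (3/8 = 0.375).
print("loaded die:", face_frequencies([1, 1, 1, 1, 1, 3]))
```

The fair run hovers near 0.167 on every face, while the loaded run puts face 6 near 0.375 -- the kind of departure that would lead us to redistribute the indifference-assigned values.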
kf

"[1] the root issue is sensitivity not probability"

Agreed. If it were probability there would be no argument, because there's no way to know the probability of any physical constant or law not being the sole unalterable value that we observe today. If any of the physical laws and constants were different by varying amounts (some are more sensitive than others), then life as we know it would not be possible. However, we don't know the probability of them being different.

"2] that the actions and arguments of the opponents of the inference to design on cosmological origins implies that they agree with the general probability estimate"

Not really. The strongest argument is that it's not possible to form any general probability estimate, because the mechanisms which force one law instead of another, one physical constant instead of another, are unknown. We don't know if it's even possible for laws or constants different from those observed to be realized. Multiverse arguments which seek to negate small odds of any particular set of physical constants with a correspondingly large number of trials to find it are as silly as the probability they seek to counter. The blind arguing with the blind about what they see. Funny stuff.

Try to at least make it brief. Quantity is no substitute for quality.
DaveScot
April 1, 2007 at 09:04 AM PST
DS: Re: Woolgathering, mind, physics and metaphysics.

I think it is fair comment for me to say that -- having first noted [1] that the root issue is sensitivity, not probability, and [2] that the actions and arguments of the opponents of the inference to design on cosmological origins imply that they agree with the general probability estimate -- I have given links to, and then a summary of, reasons how the relevant probabilities are estimated using generally acceptable principles for doing that. E.g., the range cited on gravitation gives a reasonable UPPER bound to the probability, i.e. a wider range would reduce the probability, as I noted in brief previously. Similarly, on the most discussed point, the cosmological constant, the range is set based on physical considerations, and the issues on finetuning arise out of the implications of twiddling the value even slightly, thence the sort of odds we see: 1 in 10^53.

Thus, it is fair for me to observe that the procedure used by Collins can reasonably be taken as conservative, given that we have no known physics that otherwise constrains the value of G, and that he is working within the ambit of the usually accepted physics on the cosmological constant. He then uses a reasonable principle of estimating probabilities in the absence of reason to infer to a non-uniform distribution, similar to how we estimate odds when we toss a die or a coin or shuffle cards.

Now, on the issue of empirical testing of whether the parameters of the cosmos reflect design rather than chance + necessity: we are first observing a pattern of sensitivity that points to intelligent action on an inference-to-best-of-competing-explanations basis. Second, the matter is indeed partly beyond empirical testing -- there being no empirics beyond the beginning of the space-time domain we experience. But that is hardly a mere matter of woolgathering; it simply means that on that point we are dealing with comparative difficulties across alternative world models, i.e. metaphysics. [And with metaphysics, the issue is not whether one has such a model, but whether it has been examined or simply rules unconsciously by default.] Insofar as mind is inaccessible to the empirical world, the same issue obtains there too. That is, we are here pointing to a border between the physical world and the full world. But, since mind can leave traces in matter -- a premise of our daily existence and reasoning -- we may therefore reasonably infer to mind/intelligent agency on cosmological matters as well as biological, forensic, medical, or common-sense ones.

It is in that general context that Hoyle -- not exactly a mindless wool-gathering fundy -- asked the pointed question cited above. Cheerio. GEM of TKI
kairosfocus
April 1, 2007 at 05:54 AM PST
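A minimal sketch of the arithmetic behind the "conservative upper bound" remark above (the first range follows the figures quoted from Collins; the wider ranges are hypothetical, added only to show that widening R can only shrink the estimate):

```python
# Under indifference, P <= r/R: the life-permitting range r over the
# assumed theoretically possible range R. Widening R shrinks P, which
# is why the quoted figure works as a conservative upper bound.
import math

r = 1e9  # life-permitting range for gravity, in units of G0 (per Collins)

for R in (1e40, 1e45, 1e50):  # 1e40 per Collins; wider Rs are hypothetical
    print(f"R = 10^{math.log10(R):.0f} G0 -> P <= r/R = {r / R:.0e}")
```

With R = 10^40 G0 this reproduces the 1/10^31 figure; any larger assumed range only drives the probability lower.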
I think it's all too easy to simply label supporters of naturalistic evolution god-haters. Many believe the idea of a god creating and/or allowing the many cruel creations to be blasphemous. And isn't this why the ID movement has gained so much support? It doesn't try to rationalize god. It doesn't have to explain why a god would eternally torment people in hell. It doesn't have to explain why this god allegedly supported, if not ordered, the massacres (leave none alive, kill even the animals, etc.) written down in the Old Testament. In fact, ID doesn't even necessitate there being a personal god. It could be merely an intelligent creative force, one perhaps we could someday demonstrate like we demonstrate gravity.
Acquiesce
April 1, 2007 at 05:38 AM PST
Gil

I agree the physical constants of the universe appear to be designed and that should be the default presumption, but we have no empirical way of confirming or denying the presumption. At least with biological ID we can experiment with various ways of creating or modifying organic constructs to gauge their sufficiency absent intelligent agency. We have no way to experiment with various ways of creating physical constants. In point of fact we don't know of any way, with or without intelligent agency, wherein a physical constant can be created or modified. Cosmological ID is interesting on a hypothetical basis, but as far as I can see there is no hope at all of it ever moving beyond the hypothetical. In other words, it's woolgathering.
DaveScot
April 1, 2007 at 03:01 AM PST
kf

"Thus, we see that there is in fact a reasonable process by which estimates for the relevant probabilities can be assigned."

We see nothing of the sort. Collins is just making up suppositions out of thin air, thus:
Suppose, for instance, that the “theoretically possible” range, R, of values for the strength of gravity is zero to the strength of the strong nuclear force between those protons
Upon what is he basing that supposition? It's based on nothing. A totally empty claim. Anyone can "suppose" anything they like.
DaveScot
April 1, 2007 at 02:29 AM PST
H'mm: TANGENTIAL, but illustrative: Let me give a link to the Montserrat debate, just for fun, to see how such issues currently work out on the ground when life and property are being bet on the outcomes. [My own solution: I do not go south of a reasonable line unless I have reason adequate to risk my life.] The underlying methods are discussed here, in a typical case across the eruption cycle. Here, the same expert, Aspinall, applies it in another context. [It seems EE dates back to about the time of the Challenger disaster.]

The idea here is that experts can be calibrated through reasonably "known" cases; then their judgements can be pooled to yield reasonable values on the unknown case in front of us. On this, note that the inference to the reality of finetuning and the associated estimates of probabilities are being made by many of the top-level figures in cosmology, so even though no formal EE has been done, we should not lightly brush aside their estimates. The debate on the cosmological constant is especially illuminating -- as Collins discusses:
There are other cases of the fine-tuning of the constants of physics besides the strength of the forces, however. Probably the most widely discussed among physicists and cosmologists—and esoteric—is the fine-tuning of what is known as the cosmological constant. The cosmological constant was a term that Einstein included in his central equation of his theory of gravity—that is, general relativity—which today is thought to correspond to the energy density of empty space. A positive cosmological constant acts as a sort of antigravity, a repulsive force causing space itself to expand. If the cosmological constant had a significant positive value, space would expand so rapidly that all matter would quickly disperse, and thus galaxies, stars, and even small aggregates of matter could never form. The upshot is that it must fall exceedingly close to zero for complex life to be possible in our universe. Now, the fundamental theories of particle physics set a natural range of values for the cosmological constant. This natural range of values, however, is at least 10^53—that is, one followed by fifty-three zeros—times the range of life-permitting values. That is, if 0 to L represent the range of life-permitting values, the theoretically possible range of values is at least 0 to 10^53 L. To intuitively see what this means, consider a dartboard analogy: suppose that we had a dartboard that extended across the entire visible galaxy, with a bull's eye on the dartboard of less than an inch in diameter. The amount of fine-tuning of the cosmological constant could be compared to randomly throwing a dart at the board and landing exactly in the bull's-eye!
In short, we see here indifference in the absence of known constraint used to estimate probabilities in the reasonable range for this constant. Hope this helps too. GEM of TKI
kairosfocus
April 1, 2007 at 02:01 AM PST
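A back-of-envelope sketch of the dartboard picture above (the galaxy size and light-year figures are standard values, not from the thread; the crude area ratio lands near 10^-46 rather than 10^-53, so treat it as an illustration of scale, not an exact reconstruction):

```python
# Crude scale check of the dartboard analogy, plus the indifference
# estimate itself. Physical figures are standard approximations.
LY_IN_M = 9.4607e15                              # metres per light-year
galaxy_diameter_in = 100_000 * LY_IN_M / 0.0254  # ~10^5 ly wide board, in inches
bullseye_in = 1.0                                # "less than an inch" bull's-eye

area_fraction = (bullseye_in / galaxy_diameter_in) ** 2
print(f"chance the dart lands in the bull's-eye: ~{area_fraction:.0e}")  # ~7e-46

# Collins' indifference estimate: a life-permitting range 0..L inside a
# natural range 0..10^53 L gives P ~ 1/10^53.
print(f"indifference estimate for the constant: {1 / 1e53:.0e}")
```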
Continuing . . .

4] Conditional EP: He adds: "The conditional epistemic probability of a proposition R on another proposition S—written as P(R/S)—can be defined as the degree to which the proposition S of itself should rationally lead us to expect that R is true. Thus, the statement that the fine-tuning of the cosmos is very improbable under the atheistic single-universe hypothesis makes perfect sense: it is to be understood as making a statement about the degree to which the atheistic single-universe hypothesis would or should, of itself, rationally lead us to expect cosmic fine-tuning." [And by the resort to multiverse ideas and to hoped-for superlaws, we know that their intuitive estimate is that the CEP is maximally low on this observationally referenced scenario, which by Occam should prevail unless/until observational evidence for multiverses turns up.]

5] Applying to cosmos-level physical parameters: Collins sums up the above thusly: "Suppose, for instance, that the 'theoretically possible' range, R, of values for the strength of gravity is zero to the strength of the strong nuclear force between those protons—that is, 0 to 10^40 G0, where G0 represents the current value for the strength of gravity. As we saw above [cf. previously linked article by Collins], the life-permitting range r for the strength of gravity is at most 0 to 10^9 G0. Now, of itself (specifically, apart from the knowledge that we exist), the atheistic single-universe hypothesis gives us no reason to think that the strength of gravity would fall into the life-permitting region instead of any other part of the theoretically possible region. Thus, assuming the strength of the forces constitute a real physical magnitude, the principle of indifference would state that equal ranges of this force should be given equal probabilities, and hence the probability of the strength of gravity falling into the life-permitting region would be at most r/R = 10^9/10^40 = 1/10^31." Note also that this is an example of an upper bound -- using the known range of physical forces to constrain the range of an indifference-assigned probability estimate.

6] The Bertrand conflicting-parameters paradox: Bertrand, as usual, has a paradox: "A famous example of the Bertrand paradox is that of a factory that produces cubes whose sides vary from zero to two inches, which is equivalent to saying that it produces cubes whose volumes vary from zero to eight cubic inches . . . [but] this leads to conflicting probability assignments—for example, using lengths, we get a probability of 0.5 of a cube being between zero and one inch in length, whereas using volumes we get a probability of 0.125." But: "one can easily avoid this objection either by restricting the applicability of the principle of indifference to those cases in which Bertrand Paradoxes do not arise or by claiming that the probability is somewhere between that given by the two conflicting parameters. This problem of conflicting parameters, however, does not seem to arise for most cases of fine-tuning."

Thus, we see that there is in fact a reasonable process by which estimates for the relevant probabilities can be assigned. Taken together, they easily lead to the inference that our cosmos is, on a single-universe scenario, most unlikely to have occurred by chance + necessity only. On a superlaw scenario, we have opened up the implication that the superlaw that sets the other laws to finetuned values is itself evidence of design.

On the multiverse scenario, we are looking at local isolation, and the issue of the setting of the parameter ranges of the "subuniverse factory." [Collins has an interesting survey discussion on inflationary scenarios deriving from Linde etc.] Now, too, this sort of trilemma, in which inferences to design catch you whichever way you leap, brings up a final issue: falsifiability as a criterion of "science." Of course, we are hardly arguing in a context of science here, but actually metaphysics, on the hoped-for superlaw and multiverse scenarios, so the issue is comparative difficulties, and the argument works quite well at that level -- there is a robust inference to design across the alternatives on the table. When we get back to the one-cosmos scenario, which is what is currently observationally supported, there is good reason to infer that the probabilities in question are sufficiently low, in aggregate, to raise what we can term the generalised issue of CSI. Namely, we see specification through finetuning for the sort of intelligent-life-facilitating cosmos we observe, and we have reason to infer that the probability of this by chance is exceedingly low. So, even though the Dembski-type estimated probability bound is inapplicable, we see that inference to design makes much better explanatory sense than the alternative that by happenstance the observed cosmos just happened to pop into being at a singularity some 13.7 or so BYA, out of nothing and for no sufficient reason.

Trust this is helpful. GEM of TKI
kairosfocus
April 1, 2007 at 01:41 AM PST
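The cube-factory numbers quoted in 6] can be checked directly; here is a minimal Monte Carlo sketch of the two competing priors:

```python
# The Bertrand cube-factory paradox: "uniform over side length" and
# "uniform over volume" are different priors on the same factory, and
# they assign different probabilities to the same event.
import random

N = 1_000_000

# Prior 1: side length uniform on [0, 2] inches.
by_length = sum(random.uniform(0, 2) <= 1 for _ in range(N)) / N

# Prior 2: volume uniform on [0, 8] cubic inches; side = volume**(1/3),
# and side <= 1 inch exactly when volume <= 1 cubic inch.
by_volume = sum(random.uniform(0, 8) <= 1 for _ in range(N)) / N

print(f"P(side <= 1 in), length-uniform prior: {by_length:.3f}")  # ~0.500
print(f"P(side <= 1 in), volume-uniform prior: {by_volume:.3f}")  # ~0.125
```

The two runs converge to the 0.5 and 0.125 of the quotation: the indifference principle needs a privileged parameterisation before it yields a unique answer.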
Hi again DS & GD:

I note your "nowhere did I see mention of the probability of any physical constant assuming the value that it did" [DS] and "[t]here is no way of calculating the probability that the constants of the universe are as they are" [GD]. I will respond.

First, as I already noted, the root of the argument on cosmology is not probability as such but sensitivity. That is the context of GD's intuition that "design seems to scream at us." Further to this, I made an en passant remark above: that there is an intuitive estimate of the probability of a single cosmos -- as we actually observe -- having in it the sort of parameter values we observe, in the anti-design resort to quasi-infinite multiverse arguments: nearly zero. [Similarly, the attempt to infer to a forcing superlaw has that implication too. In both cases the design inference still comes through, so the following is non-critical to the issue at stake, but I believe it will be helpful to us "interested laymen."]

Now, Collins as linked above makes brief reference to a way to reasonably estimate the probabilities of at least some of the parameters on a provisional basis [which last is typical for all of science . . .], which includes upper-bound values, i.e. not more than. That technique is related to the commonly used so-called subjective probabilities arrived at through expert elicitation, and used in actuarial science, operations research [decision support], management, and, of particular relevance to us here in Montserrat, to estimate the probabilities of various disaster scenarios with our friendly local volcano. [Right now there is a debate over whether a 1 in 160 chance over the next year of a lateral blast capable of wiping out the remaining southernmost settlement warrants its full rather than partial evacuation. The scientists and the managers evidently disagree . . .] In brief outline:

1] The Laplacian principle of indifference: in effect, once we can assign a reasonable range of scenarios in a situation, in the absence of reason to prefer any one value, we assign equiprobable values to all states. E.g. we routinely do this with a coin or a die or a properly shuffled deck of cards. [This is where we begin; then we redistribute values if we have reason to.]

2] General utility and twiddling: This principle is in fact fundamental to statistical mechanics, i.e. it is assumed that any one microstate available to a system is just as probable as any other. It is also used by Dembski in his analysis of probabilities. [NB: Experts commonly begin here, then move values around to different scenarios if one is deemed more or less probable than others. On this, too, ranges can be turned into scenarios by assigning typical values and finite probabilities to sub-ranges of a so-called continuous probability distribution function. But, too, the basic indifference approach is surprisingly powerful all over physics as well as mathematics, statistics, management, etc.]

3] A note on epistemic probabilities: Collins points out that apart from the frequentist approach [statistical studies] and the principle of [adjusted if necessary] indifference, we can interpret probabilities epistemologically: "Epistemic probability is a widely recognized type of probability that applies to claims, statements, and hypotheses—that is, what philosophers call propositions. Roughly, the epistemic probability of a proposition can be thought of as the degree of credence—that is, degree of confidence or belief—we rationally should have in the proposition. Put differently, epistemic probability is a measure of our rational degree of belief under a condition of ignorance concerning whether a proposition is true or false."

Pausing . . .
kairosfocus
April 1, 2007 at 01:15 AM PST
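As a minimal sketch of the expert-elicitation pooling idea described above (the calibration weights and estimates below are hypothetical, not Aspinall's actual Montserrat data):

```python
# Performance-weighted linear opinion pool: experts are first scored on
# "known" calibration cases, then their judgements on the unknown case
# are combined as a weighted average. All figures are hypothetical.
experts = [
    # (calibration weight, expert's P(lateral blast within a year))
    (0.5, 1 / 100),
    (0.3, 1 / 200),
    (0.2, 1 / 400),
]

total_weight = sum(w for w, _ in experts)
pooled = sum(w * p for w, p in experts) / total_weight
print(f"pooled annual probability: {pooled:.4f} (~1 in {1 / pooled:.0f})")
```

With these made-up inputs the pool lands near 1 in 143; the actual 1-in-160 figure cited for Montserrat would come out of the same kind of weighted aggregation.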
There is no way of calculating the probability that the constants of the universe are as they are. My point was that the default position should be design, until proven otherwise, because design seems to scream at us.
GilDodgen
March 31, 2007 at 07:23 PM PST
kairosfocus

You wrote a lot of words, but nowhere did I see mention of the probability of any physical constant assuming the value that it did. I'm going to assume your answer, stripped of obfuscation, was that you have no answer.
DaveScot
March 31, 2007 at 06:46 PM PST
Oops: 1] For many fundamental physical parameters/constants . . .
kairosfocus
March 31, 2007 at 03:38 PM PST
Hi Dave [and others]: I see finetuning is a topic that excites much inquiry. On constants and ratios, let's just say that the argument is not so much one on probability as on sensitivity. Hence the term finetuning. I originally simply noted en passant on probability that the resort to a quasi-infinite array of sub-cosmi implicitly accepts that the probability of such a tightly balanced configuration in a single cosmos such as we observe is vanishingly small. In short, my argument is that actions here speak louder than words on the intuitive estimate of the relevant probability by those who wish to infer to chance and/or necessity, but not agency, on cosmic origins. In a tight summary (cf. above linked and below for more details):

1] For many, there is no known law or principle that forces them to take any particular value, AND slight shifts trigger such deleterious consequences that it has given pause to the cosmologists and their philosopher interlocutors.
As Astrophysicist Hugh Ross reports, "[u]nless the number of electrons is equivalent to the number of protons to an accuracy of one part in 10^37, or better, electromagnetic forces in the universe would have so overcome gravitational forces that galaxies, stars, and planets never would have formed." [The Creator and the Cosmos, p. 109.] Similarly, John Leslie notes: "One striking thing about the fine tuning is that a force strength or a particle mass often appears to require accurate tuning for several reasons at once . . . Universes all obeying the same fundamental laws could still differ in the strengths of their physical forces, as was explained earlier, and random variations in electromagnetism from universe to universe might then ensure that it took on any particular strength sooner or later. Yet how could they possibly account for the fact that the same one strength satisfied many potentially conflicting requirements, each of them a requirement for impressively accurate tuning?" [Our Place in the Cosmos, 1998]
2] Some have suggested that there are overarching superlaws, i.e. a theory of everything. That simply puts the design issue up one level -- what does a law that forces such a diverse cluster of parameters to so odd and precise a configuration suggest, other than design?

3] Yet others infer to some form or another of randomly distributed physics across a quasi-infinite array of subuniverses. This leads to the countering point [by Leslie] that local finetuning is as impressive as global, and to the further inference that in the absence of empirical observation [however theory-laden] we are here dealing with ad hoc metaphysical resorts.

4] I think some versions of the Kalam cosmological argument and Aquinas' inferences give, on an inference-to-best-explanation basis, a powerful case pointing to the best explanation of the universe we do see: --> a beginning of the cosmos implies contingency and calls for a begin-ner. --> the contingency and finetuning as observed, which so happen to take on anthropic values, point to a purposeful order, i.e. to design to create a cosmos suitable for intelligent life. --> In turn this points to a designer of enormous intelligence and power beyond the cosmos as the best explanation. [Note this is not offered as a demonstrative proof but as an invitation to comparative-difficulties dialogue.]

A good example of the persuasiveness of the overall pattern of argument, starting with the issue of the finetuning "surprise," is one of the greatest cosmologists of the last century, Sir Fred Hoyle:
From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12 C to the 7.12 MeV level in 16 O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? Following the above argument, I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has "monkeyed" with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16]
Maybe Robin Collins' summary here will help us get our heads around the issue, after having a look at my own summary if you wish. These introductory remarks give a flavour of what is to come:
Almost everything about the basic structure of the universe—for example, the fundamental laws and parameters of physics and the initial distribution of matter and energy—is balanced on a razor’s edge for life to occur. As eminent Princeton physicist Freeman Dyson notes, “There are many. . . lucky accidents in physics. Without such accidents, water could not exist as liquid, chains of carbon atoms could not form complex organic molecules, and hydrogen atoms could not form breakable bridges between molecules" (1979, p. 251)—in short, life as we know it would be impossible. Scientists and others call this extraordinary balancing of the fundamental physical structure of the universe for life the “fine-tuning of the cosmos." It has been extensively discussed by philosophers, theologians, and scientists, especially since the early 1970s, with many articles and books written on the topic. Today, many consider it as providing the most persuasive current argument for the existence of God. For example, theoretical physicist and popular science writer Paul Davies claims that with regard to basic structure of the universe, “the impression of design is overwhelming” (Davies, 1988, p. 203).
Hope that helps. GEM of TKI
kairosfocus
March 31, 2007 at 03:17 PM PST
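The Ross quotation in 1] turns on the disparity between electromagnetic and gravitational strengths; a back-of-envelope check (standard constants, not figures from the thread) of the force ratio between two protons shows the scale:

```python
# Ratio of the Coulomb repulsion to the gravitational attraction
# between two protons; the separation distance cancels out.
k = 8.98755e9      # Coulomb constant, N m^2 / C^2
G = 6.67430e-11    # gravitational constant, N m^2 / kg^2
e = 1.60218e-19    # elementary charge, C
m_p = 1.67262e-27  # proton mass, kg

ratio = (k * e**2) / (G * m_p**2)
print(f"EM / gravity for two protons: {ratio:.2e}")  # ~1.2e36
```

This ~10^36 disparity is the background for sensitivity claims like Ross's, though his 1-in-10^37 figure comes from a more involved cosmological calculation not reproduced here.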
Borne

LOL!! (But why not Deiphobia? I like how it sounds. . . ) Is another definition for Teleophobia really - religious ceremony? I wish you hadn't introduced me to all those new terms. I have a fear of phobias. BTW, I loved your movies and own both of them. You are very ninja-like. Without the pajamas. Is this off topic? I just got here and would hate to be reprimanded so early (Poinephobia - fear of punishment).
Sa.jones97
March 31, 2007 at 10:23 AM PST
Sa.jones97

BTW, there is no such thing as deiphobia (unless you wish to invent a new one ;-) - the proper terms are Theophobia or Zeusophobia. Of course Teleophobia - 1) fear of definite plans; 2) religious ceremony - might be proper too. :-) Well, maybe it's actually Hadephobia or Stigiophobia - fear of hell!! Or Catagelophobia - fear of ridicule. Or Lilapsophobia - fear of tornadoes (in junkyards)!! Ok, ok, I've got it! Phronemophobia - fear of thinking. HA!
Borne
March 31, 2007 at 10:12 AM PST
kairosfocus

How many different values could the gravitational constant have assumed, and what is your evidence for any number you might give? How many different values could the mass of a proton and the mass of an electron have assumed, and how many different ratios between them could have formed? Again, if you give a number, what is your evidence for that number? This is a basic problem in trying to make a cosmological design inference based on probabilities. We have no idea what range of values the physical constants could have assumed, and without those ranges it is not possible to calculate the odds of them all assuming the values they did. As far as anyone knows, there is only one value they may have assumed, and that one value is the value we observe today.
DaveScot
March 31, 2007 at 07:48 AM PST
sa.jones

"My understanding is that the ID camp isn't even arguing from an origins perspective."

That's a bit ambiguous. Cosmological ID is a design inference from the physical constants of nature. The argument is that the physical laws and constants which govern the observable universe are so finely tuned and interdependent that if any of them were different by the slimmest of margins, life as we know it could not exist. Stars wouldn't form, atoms wouldn't form, heavier elements wouldn't form, and so on. No one knows why these constants took on the precise values they did instead of some other value. The range of explanations includes so-called multiverse theories, where there are a vast number of universes and some smaller number of them happened to form in such a way that life is possible; or that there is some unknown law that requires the constants be of the values they are; or that the universe itself is designed. This could arguably be called origins.

In biological ID, perhaps the most complex machinery of all is the ribosome and DNA, which together resemble nothing so much as a computer-controlled robotic assembler. DNA and ribosomes are common to all observed forms of life (barring viruses, which some say aren't really alive, but which in any case rely on hijacking existing ribosomes). Since this machinery is so basic, it's very close to arguing origins. However, in neither of these cases is there any specific inference about the originating mechanism, other than that it appears to require foresight and planning (intelligence) rather than any haphazard mechanism lacking the ability of abstraction into the future.

"We're inferring design in existing systems, not attempting to demonstrate how it got that way in the first place."

This is quite true. We are only looking at physical evidence: the arrangement of matter and energy, the physical laws and constants which govern interactions of matter and energy, and the patterns we can physically observe in and between them.
DaveScot
March 31, 2007 at 07:24 AM PST
Hi Jaredl:

Passed by and saw your points. Without getting into dozens of pp worth of data:

a] The cosmological inference to design is an issue that long predated Dembski's work on the explanatory filter, and is essentially not dependent on his particular model of complex, specified information. That is why the discussion is on finetuning, with dozens of parameters observed [insofar as astronomy at this level can be observational . . .] that are in some cases precise to within 1 part in 10^60. If one comes across an extremely precise, well-functioning system, which is critically dependent on dozens of finely adjusted parameters, one normally infers to intelligence as its best explanation.

b] Whether universes with other rule-sets are "possible" depends in part on what "possible" means. In this case, plainly, they are logically possible -- we are often talking about the same basic physical laws, but twiddling the parameters around. Rather slight twiddlings would simply not lead to the sort of cosmos we experience; e.g. the required atoms would not form, or the universe would collapse back in far too quickly, or the required abundance of key elements would vanish, etc. Cf my previously linked summary.

c] Further, note that -- as I discuss in the linked -- once we get into speculative quasi-infinite arrays of subcosmi, we are in the realm not of science proper, but metaphysics. In that land, the proper methodology is comparative difficulties, and in that context, as I noted in the linked and alluded to above, that there is but one cosmos as observed which exhibits fine tuning is plainly explanatorily superior by Occam's principle, in the absence of positive evidence for such infinite arrays.

d] Finetuning is in fact a generally accepted conclusion in cosmology; the issue is to explain it. Cf the already linked summary and similar discussions.

e] Probability is of course a concept that is relative to our degree of ignorance: 1 --> sure to happen, 0 --> sure not to happen; in between, not so sure. Such is obviously revisable, but on an inference-to-best-explanation basis, we have a very low probability, relative to origin by chance, of the cosmos in which we live.

f] My points 3 -- 4 above note just why. THE VERY ARGUMENT USED TO TRY TO MAKE THE ODDS SEEM MORE PLAUSIBLE IMPLIES THAT IF THERE IS JUST ONE COSMOS -- AS IS WHAT IS OBSERVED -- THE ODDS OF ITS ORIGIN BY CHANCE ARE NEGLIGIBLY DIFFERENT FROM ZERO. That is in material part why the attempt is made to speculate about an unobserved quasi-infinite array of sub-cosmi: to provide enough probabilistic resources that the odds don't look too incredibly thin. This is of course in large part an ad hoc inference.

g] To make matters worse, as John Leslie has pointed out, the real problem does not go away by rushing off to a quasi-infinite array. For, the point is that on the relevant parameters, our observed cosmos is LOCALLY RARE in the field of possible parameters. So, whether or no there are many flies on the wall in other places, locally there's just the one fly, and swat, out of nowhere a bullet hits it. So, is it chance or good marksmanship?

h] Observe, again, the context of discussion: inference to best current explanation, in a context of certain empirical observations and issues of coherence and explanatory power. We are unable to prove beyond dispute that we do not live in Plato's cave, or that we are not butterflies dreaming we are humans, or that we are not brains in vats, or that we are not living in the Matrix world. But by taking on board the principle that we provisionally trust the evidence we have, we can reasonably infer to the world in which we evidently live, on a basis of comparing alternative worldviews on factual adequacy, coherence, and explanatory power. In this case, it is a reasonable principle that since we have to use our senses and common sense to think, we should not take seriously such worldviews as imply the general untrustworthiness of our senses and common-sense reasoning faculties.

i] By extension, we are looking at factual evidence that points to one common universe that follows a common Hubble expansion process and the same physics, with the same classes of atoms etc. across its gamut; thus it evinces that it is a coherent whole. On the basis of that evidence, it had a beginning at a finitely remote time, implying that it is caused [and requiring a necessary being as its underlying explanation]. You may if you wish infer to a quasi-infinite cosmos as a whole that randomly throws up sub-cosmi with sufficiently diverse parameters to scan across the relevant range -- though where the universe-making machine comes from is another issue on that. But we still have to explain the locally isolated cosmos in which we live that is well suited for life as we know it.

j] And -- just as tellingly -- we have to face the issue of the famous razor: we have evidence that warrants one, and not more than one, cosmos.

k] You are right to note that in all known cases, we observe that such finely specified and functionally tightly balanced systems are the products of intelligent agency. So, relative to what we know, that is the best explanation. However, if someone insists, neither logic nor physics forbids such from happening by chance, no matter how long the odds. But, once odds fall sufficiently low, the fact is that except where certain worldview assumptions are at stake, we routinely infer to design. For instance, with this web post -- which, y'know, could be just plain old lucky noise. [Cf my discussion in the previously linked.]

GEM of TKI
kairosfocus
March 31, 2007 at 06:18 AM PST
Crandaddy:

"Design without mechanism!" That's the materialists' cry. Therefore, POOF! It's magic! God musta dun it, so it's not science!

Or: "Mechanism without design! That's the Darwinista's cry. Therefore, POOF! It's magic! NOTHING musta done it, so it's SCIENCE!"

With logic like this, the only explanation must be a healthy dose of deiphobia. Can someone shed some light on this for me (I'm new to the debate)? My understanding is that the ID camp isn't even arguing from an origins perspective. We're inferring design in existing systems, not attempting to demonstrate how it got that way in the first place. It's the materialists who are saying "POOF!" concerning the beginning of everything--or am I missing something?
Sa.jones97
March 31, 2007 at 05:48 AM PST
GEM of TKI -

1. I'm not sure how your point #1 - that the cosmological argument predates Dembski's work - is relevant to the concern I raise, which is that there does not exist evidence that universes with rule-sets other than the ones we observe are actually possible, and therefore one cannot utilize Dembski's design-detection framework to infer design, since we cannot affirm our rule-set has low probability with respect to the set of all possible rule-sets.

2. Your point #2 is an example of circular reasoning - you've presupposed fine-tuning, which is the very point at issue.

3. It would require exposure to a great number of alternative cosmoses, the vast majority having different rule-sets than our own, to be able to assess the actual improbability of our rule-set, if indeed it is improbable.

4. As long as you agree, as per your point #4, that your test of the probability of other universes with different rule-sets is only imaginary, because you have no evidence of such, you're in precisely the position of the materialist with respect to their imagining an infinitude of possible universes with differing rule-sets to explain the occurrence of this one, and you're unable to rigorously infer design, presuming Dembski's framework is the sole way to do so. Is there an alternative criterion for design detection to Dembski's?

I have had a thought, however, and I would appreciate Bill's input on this. Could we strengthen the design inference by noting that events exhibiting algorithmic compressibility, where the causal history can be confirmed, have not been observed to arise except as a result of intelligent agency? It seems to me that the requirement of low probability is to treat the laws of nature as a brute given; need it be so? Where has there ever arisen an algorithm without a programmer?
jaredl
March 31, 2007 at 04:52 AM PST
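jaredl's closing question turns on algorithmic compressibility; as a minimal illustration (zlib is only a crude, computable stand-in for true Kolmogorov complexity, which is uncomputable), a rule-generated string compresses enormously while a random one does not:

```python
# Compressibility as a rough proxy for "output of a short program":
# a law-like, rule-generated string compresses far better than a
# random string of the same length.
import os
import zlib

rule_generated = b"0110" * 2500   # 10,000 bytes from a tiny "program"
random_bytes = os.urandom(10_000) # incompressible, with high probability

for label, data in [("rule-generated", rule_generated), ("random", random_bytes)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label}: compressed to {ratio:.1%} of original size")
```

The rule-generated string shrinks to well under 1% of its size; the random string does not shrink at all, which is the asymmetry the comment's "algorithm without a programmer" question leans on.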
Hi Jared:

I see your "The cosmological argument for design is vacuous within Dembski's design-detection framework. One does not have warrant to infer design unless one has evidence that the outcome was low-probability." Not at all:

1] The cosmos-scale argument long predates WD's work. Scientists began to see that the increasingly accepted Big Bang cosmology not only entails an origin in time for the cosmos [raising the point that what begins to exist is caused . . .], but also . . .

2] that it is enormously and multidimensionally fine-tuned in a multitude of ways. That invited the very natural inference that the "tuning knobs" were more or less intelligently set.

3] Within that context, we can make a highly plausible case that the probability that a cosmos originated at random and just happened to have the parameters set up to facilitate intelligent life is vastly low. One of the best evidences for that is the fact that the alternative usually put forward nowadays is the idea that there is a quasi-infinite number of sub-cosmi in a larger universe as a whole.

4] Why is that? Simple: this argument is an attempt to dilute the evident odds by multiplying the number of imagined at-random tests. That the imagined number of sub-cosmi is quasi-infinite is equivalent to saying that the odds of just one cosmos by chance [its evident origin at a specific point in time implies that it is contingent, so only chance or agency can explain it] arriving at the parameters we observe are vanishingly small. [And yet, we only have observational evidence for just one cosmos, the one we observe.]

So, the argument that would apply Dembski's probabilistic explanatory filter has enormous plausibility. So, I think it is fair comment to say that it is compelling, on an inference-to-best-explanation basis.

GEM of TKI
kairosfocus
March 31, 2007 at 01:18 AM PST
"Design without mechanism!" That's the materialists' cry. Therefore, POOF! It's magic! God musta dun it, so it's not science! Why oh why do people think that mechanism is so necessary to design detection? Why can't they see that minds don't operate by any mechanism that we can understand and that mechanism resides in the absence of design. I've debated several people on this, and I get the written equivalent of blank stares and tail chases. It seems that some people just can't get it. (And some people are perhaps so thoroughly consumed by their ideologies that they have the mental equivalent of firewalls which just won't allow them to see as valid anything which isn't compatible.) Blind, purposeless chance and necessity--that's the only choice because we don't have a MECHANISM!!! (I do so hope that the denziens here know what I'm talking about...)crandaddy
March 30, 2007 at 11:45 PM PST
The cosmological argument for design is vacuous within Dembski's design-detection framework. One does not have warrant to infer design unless one has evidence that the outcome was low-probability - mere algorithmic compressibility is not sufficient. You must be able to demonstrate that things could have been different; what if time, space, and matter must, by logical necessity, relate in exactly the fashion we observe in our universe?
jaredl
March 30, 2007 at 10:12 PM PST
All true. But (it seems to me that) the *really* amazing thing is that most 'materialists' seem to expect that the rest of the world simply does not and cannot understand what lies behind their fabulous and justly famous credulity.
Ilion
March 30, 2007 at 08:56 PM PST
Call it the materialist "God-angst". Anything but God. Anything, however ridiculous, however obviously wrong. Anything at all. "A disbelief in God does not result in a belief in nothing; disbelief in God usually results in a belief in anything."

As Richard Lewontin said: "We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counterintuitive, no matter how mystifying to the uninitiated. Moreover the materialism is absolute, for we cannot allow a Divine Foot in the door."

With that kind of blind faith anything goes, as long as it doesn't imply any higher power or governing intelligence.
Borne
March 30, 2007 at 07:36 PM PST
We arrived at the present "state of affairs" primarily through the education system. We can also see a historical trend, from the time of the "enlightenment" period onwards, of a search for materialistic ontologies in order to create a loss of faith in religious power structures. Historically, the leading materialist philosophers and "scientists" of that period, as today, have been inspired in their speculative materialistic research and ontological preaching work by their fear and dislike of real or imagined religious power structures. Their goal has always been to replace or destroy as much as possible the basis for religious power structures by attacking the underlying foundation for religion, the belief that god exists.

As time went on, there arose a materialist establishment who referred to themselves as scientists and educators and researchers. They were part of a larger materialistic social and cultural milieu who considered themselves for the most part to be "freethinkers," "antiestablishment," "modern progressives." They sought to gain control of the political and cultural establishments, which they saw as being under the control of religious ideas or religious power structures. They wanted political and cultural freedom, and they saw religious ideas and power structures as standing in the way of their freedom. They formed organizations in order to further their political and cultural goals, the earliest, most prominent, and most influential being "The Royal Society" of Great Britain.

As time went on, and as this new consensus amongst the illuminated enlightened "freethinkers" (that religious ideas led to religious power structures which stifled freedom) gained more and more converts amongst the upper classes, there arose a new elitist cultural establishment. What began as antiestablishment thinking and activity eventually prevailed over the establishment and became a new establishment. They were particularly interested in creating, or changing and controlling, educational systems and institutions, which they in fact accomplished. By their efforts and control over education they were able to overturn establishment-supported existential ontologies, i.e. religious outlooks, and replace them with their anti-religious outlook. They were intent on destroying the power of religious thought; therefore they supported any kind of educational or research endeavor which furthered the elimination of god as a plausible reality.

They knew they couldn't attack theology because of the kind of emotional attachment the mass of people hold for their theological views. Just like today, if you criticize the theology of any religion, for the most part you will not be accepted as an authority to be taken seriously by the believers of that theology, because you are not a believer of that theology; it's a catch-22. Therefore the idea was to "prove" that we don't need god to explain anything, and to "prove" that what is written in scriptures is historically false, e.g. time spans, origin of life, origin of species, etc. Also, it became de rigueur in the education system to discuss god and religion in a belittling fashion; not just in the sciences but also in other fields, anti-god rhetoric was and is the respectable norm, while any type of non-atheistic thought or speech or research is taught or treated as being anywhere from boorish to dangerous.

So what began as an antiestablishment effort to undermine religious authority eventually grew until it became the establishment, at least the establishment which controls education. And as in the past, a serious goal is to diminish the power of religious thought. Although for many if not most who accept and promote the establishment materialistic ontology, it is less about fighting religious thought than it is simply about getting a degree, making a living by going along with the program, and exhibiting socially or politically correct behavior and thought modification due to conditioning from the educational system and cultural milieu (which is itself informed and conditioned [brainwashed] to a large degree by the education system).
mentok
March 30, 2007 at 07:31 PM PST
Sometimes the obvious is only admitted by science after an acceptable naturalistic explanation is found. For example, I live in the Lower Columbia Basin of the inland northwest US. I cannot walk one mile, or dig one foot deep in my back yard, without seeing "obvious" evidence of a great prehistoric flood. Back in the days of unchallenged uniformitarianism, such a thought was abhorrent to geologists. But beginning in the 1920s, J Harlen Bretz developed a theory that certain local features were caused by cataclysmic water flows. He spent the next forty years trying to convince his colleagues. The turning point came when Joseph Thomas Pardee suggested that the water came from a large lake formed by an ice dam, which eventually failed catastrophically. This was plausible enough that scientists finally began to accept the obvious. Now scientists are able to see evidence of great floods all over the world. Of course these are always explained as the result of "ancient seas" or "ice-age floods" - never The Flood. I think the same thing will happen with ID. Eventually someone will come up with a plausible explanation that excludes You-Know-Who, and scientists will trip all over each other to take credit for their part in this great scientific advance.
sagebrush gardener
March 30, 2007 at 05:58 PM PST
The materialists have no trouble believing in some infinite multiverse, but have trouble believing some kind of transcendent intelligence can exist. Go figure. Is it possible that they just hate the idea of God? I'm starting to wonder.
mike1962
March 30, 2007 at 05:46 PM PST
