Uncommon Descent Serving The Intelligent Design Community

Multiverse Mavens Hoisted on Own Petard


Several factors are combining to increase belief (of the “faith” variety, not the “demonstrated fact” variety) in the multiverse among materialists. Two of these factors are relevant to ID at the biological and cosmological levels. At the biological level, materialists are beginning to understand that the probability that life arose by random material processes is so low (estimated in this article, written by materialists, to be on the order of 10^-1018) that an infinite number of universes would be required for it to have occurred, the implication being that we just happen to live in the ever-so-lucky universe where it all came together.

At the cosmological level, the probability that the fine tuning of the universe necessary for the existence of life arose by sheer coincidence is so low that again the multiverse is invoked to provide infinite “probabilistic resources” to do the job (see here).

Of course, there is another possible explanation for both the emergence of life and the fine tuning of the universe. These phenomena may be the results of acts of a super powerful being whom we might call God.

Obviously, the whole reason materialists have invoked the multiverse in the first place is to avoid resorting to agency to explain the emergence of life and cosmological fine tuning. But isn’t it obvious that, given the very premises materialists invoke in their multiverse scenarios, we can just as easily conclude that God exists?

Here is how the logic runs: The materialist says, “Yes, the probability that life emerged through random material processes is vanishingly small, but in an infinite multiverse everything that is not logically impossible is in fact instantiated, and we just happen to live in the lucky universe where life was instantiated. Similarly, we happen to live in the Goldilocks universe (which, again, is one of infinitely many universes) where the physical constants are just right for the existence of life.”

But the theist can play this game too. “The existence of God is not logically impossible. In an infinite number of universes everything that is not logically impossible is in fact instantiated, and we just happen to live in one of those universes in which God is instantiated.”

I do not believe in the multiverse. The entire concept is a desperation “Hail Mary” pass in which logical positivists and their materialist fellow travelers are attempting to save a philosophical construct on the brink of destruction. The point is that materialists’ own multiverse premise leads more readily to the conclusion that God exists than to the opposite conclusion. Ironically, far from excluding the existence of God, the multiverse premise entails it: if the multiverse exists, God must also exist.

Comments
Dr Torley, Yes, part of the problem with establishing this range is knowing whether the sub-range that permits life is a single continuous region. At different points it seems that Collins allows that it might not be, that it actually might be necessary to take the sum of several non-contiguous sub-regions, but then he basically abandons this caution. He seems really over the top in agreeing with Leslie's fly-on-the-wall analogy that the local region is all that needs to be taken into account, even if the wall is thick with flies elsewhere. How does that make sense to someone trying to calculate the total area of the sub-regions? BTW, I see your quotes correct Collins' documents (on his website) from "Plank" to "Planck". I have to admit that while reading, this kind of error did not increase my confidence (or rational expectation) in his argument. Again, in other spots Collins toys briefly with the idea that the parameter G actually could have a negative value. But then this is abandoned. Why? I'm not worried that allowing negative values for G will increase the eventual appearance of fine tuning. I'm worried that Collins' method for arriving at a range is completely arbitrary. It seems to me that Collins' heuristics for arriving at a range are either completely grounded in our experience of this universe, applied at the wrong level of physical reality, or arbitrary. The possibility that a non-contiguous region holds parameter settings that support life seems to me to be strongly supported by 1D and 2D automata capable of universal computation, and by 2D CA replicators in the tradition of von Neumann.Nakashima
March 10, 2010 at 12:24 AM PDT
Sooner Emeritus (#49) I'd like to address your remark:
The fact is that I regard so-called proofs and disproofs of God as idolatry of reason.
Surprisingly, I think Robin Collins would agree with you. To support my point, I'd like to quote a few extracts from Collins' latest essay, "The Teleological Argument: An Exploration of the Fine-Tuning of the Universe" at http://commonsenseatheism.com/wp-content/uploads/2009/09/Collins-The-Teleological-Argument.pdf , where Collins is discussing the question of whether LPU (the existence of a life-permitting universe) lends evidential support to T, the theistic hypothesis that "there exists an omnipotent, omniscient, everlasting or eternal, perfectly free creator of the universe whose existence does not depend on anything outside itself", as opposed to NSU [the naturalistic single-universe hypothesis] and the naturalistic multiverse hypothesis:
7.1 The "who designed God?" objection [T]his objection [Who made God? - VJT] would arise only if either T were constructed solely to explain the fine-tuning, without any independent motivation for believing it, or one considered these other motivations as data and then justified T by claiming that it is the best explanation of all the data. Our main argument, however, is not that T is the best explanation of all the data, but only that given the fine-tuning evidence, LPU strongly confirms T over NSU... The existence of God is not a hypothesis that is being offered as the best explanation of the structure of the universe, and hence it is not relevant whether or not God is an explanatorily better (e.g. simpler) terminus for ultimate explanation than the universe itself. Nonetheless, via the restricted version of the Likelihood Principle (Section 1.3), the various features of the universe can be seen as providing confirming evidence for the existence of God. One advantage of this way of viewing the situation is that it largely reconciles the views of those who stress a need for faith in coming to believe in God and those who stress reason. They each play a complementary role. 8. Conclusion As I developed in Sections 1.3 and 1.4, the fine-tuning argument concludes that, given the evidence of the fine-tuning of the cosmos, LPU [the existing of a life-permitting universe] significantly confirms T [theism] over NSU [the naturalistic single-universe hypothesis]. In fact, as shown in Section 5.2, a good case can be made that LPU conjoined with the existence of evil significantly confirms T over NSU. This does not itself show that T is true, or even likely to be true; or even that one is justified in believing in T. Despite this, I claimed that such confirmation is highly significant – as significant as the confirmation that would be received for moral realism if we discovered that extraterrestrials held the same fundamental moral beliefs that we do and that such an occurrence was very improbable under moral antirealism... This confirmation would not itself show that moral realism is true, or even justified. Nonetheless, when combined with other reasons we have for endorsing moral realism (e.g. those based on moral intuitions), arguably it tips the balance in its favor. Analogous things, I believe, could be said for T [theism].
I hope that clears matters up, regarding the status of the fine-tuning argument for theism.vjtorley
March 9, 2010 at 01:24 PM PDT
Pelagius, Sotto Voce, R0b: All of you have discussed the concept of transfinite numbers in previous posts. How does this affect Collins' fine-tuning argument? It may interest you to know that Robin Collins is actually quite friendly to the idea of an infinite universe. Here is what he writes in his essay, "Universe or Multiverse? A Theistic Perspective" at http://home.messiah.edu/~rcollins/Fine-tuning/stanford%20multiverse%20talk.htm :
Indeed, the fact that the multiverse scenario fits well with an idea of an infinitely creative God, and the fact that so many factors in contemporary cosmology and particle physics conspire together to make an inflationary multiverse scenario viable significantly tempts me toward seriously considering a theistic version of it. This temptation is strengthened by the fact that science has progressively shown that the visible universe is vastly larger than we once thought, with a current estimate of some 300 billion galaxies with 300 billion stars per galaxy. Thus, it makes sense that this trend will continue and physical reality will be found to be much larger than a single universe. Of course, one might object that creating a fine-tuned universe by means of a universe generator would be an inefficient way for God to proceed. But this assumes that God does not have any other motive for creation - such as that of expressing his/her infinite creativity and ingenuity - than creating a life-permitting cosmos using the least amount of material. But why would one make this assumption unless one already had a preexisting model of God as something more like a great engineer instead of a great artist? Further, an engineer with infinite power and materials available would not necessarily care much about efficiency. (Emphases mine - VJT.)
In that essay, Collins goes on to argue that even if a multiverse generator of baby universes exists, the multiverse generator itself - whether of the inflationary variety or some other type - still needs to be "well-designed" in order to produce life-sustaining universes. He goes on to argue in detail that an inflationary multiverse generator would need: (i) A mechanism to supply the energy needed for the bubble universes (the inflaton field); (ii) a mechanism to form the bubbles (Einstein's equation of general relativity); (iii) a mechanism to convert the energy of the inflaton field to the normal mass/energy we find in our universe (i.e. Einstein's mass-energy equivalence relation, combined with an hypothesized coupling between the inflaton field and normal mass/energy fields); and (iv) a mechanism that allows enough variation in constants of physics among universes (e.g. superstring theory). Concludes Collins:
In sum, even if an inflationary multiverse generator exists, it along with the background laws and principles have just the right combination of laws and fields for the production of life-permitting universes: if one of the components were missing or different, such as Einstein's equation or the Pauli-exclusion principle, it is unlikely that any life-permitting universes could be produced.
In his more recent essay (2009), "The Teleological Argument: An Exploration of the Fine-Tuning of the Universe" at http://commonsenseatheism.com/wp-content/uploads/2009/09/Collins-The-Teleological-Argument.pdf , Collins discusses the multiverse scenario in much greater depth, in section 6. I'll quote some brief remarks Collins makes about using what he calls an unrestricted multiverse as an argument against fine-tuning. The unrestricted multiverse is one in which all possible worlds exist:
6.2. Critique of the unrestricted multiverse To begin our argument, consider a particular event for which we would normally demand an explanation, say, that of Jane’s rolling a six-sided die 100 times in a row and its coming up on six each time... DTx is the state of affairs of [die] D’s coming up 100 times in a row on six for that particular sequence of die rolls. Normally, we should not accept that DTx simply happened by chance; we should look for an explanation... Now, for any possible state of affairs S – such as DTx – UMU [the unrestricted multiverse - VJT] entails that this state of affairs S is actual... Hence, the mere fact that [UMU] entails the existence of our universe and its life-permitting structure cannot be taken as undercutting the claim that it is improbable without at the same time undercutting claims such as that DTx is improbable.
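For scale, the probability Collins has in mind for DTx is easy to compute; here is a quick check in Python (a sketch, using only the 1-in-6 chance per roll and the 100 rolls from the quoted example):

# Probability of a fair six-sided die coming up six 100 times in a row,
# the event DTx in Collins' example.
p_single = 1 / 6
p_dtx = p_single ** 100
print(p_dtx)   # roughly 1.5e-78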
So there you have it. "Hey, we live in an unrestricted multiverse! Sooner or later, 100 sixes in a row was bound to come up!" I wouldn't like to try that line in Vegas. Next, Collins addresses restricted multiverses (such as inflationary-superstring multiverse), which may still contain an infinite number of universes, but without being exhaustive of all possibilities. I'll quote a brief summary extract:
6.3. The inflationary-superstring multiverse explained and criticized 6.3.5 Conclusion The aforementioned arguments do not show that inflationary cosmology is wrong or even that scientists are unjustified in accepting it. What they do show is that the inflationary multiverse offers no help in eliminating either the fine-tuning of the laws of nature or the special low-entropic initial conditions of the Big Bang. With regard to the special low-entropic initial conditions, it can explain the special conditions of the Big Bang only by hypothesizing some other, even more special, set of initial conditions.
I think it is fair to conclude that the existence of the multiverse, even if confirmed, in no way undermines the fine-tuning argument.vjtorley
March 9, 2010 at 01:01 PM PDT
vjtorley, if we're interested in the size of the meteor, it seems that circumference, surface area, and volume are all valid measures. Collins' acknowledgement of Bertrand's Paradox shows that he's at least aware of the issue, and I'll have to read his work before I opine on whether he has resolved the paradox adequately in the case of fine-tuning.R0b
March 9, 2010 at 12:30 PM PDT
Sooner Emeritus (#38, #49) Thank you for your posts. Like you, I tend to highlight a lot. It helps if one is reading a lengthy post - you can skim more quickly that way. First, I'd like to offer my sincere apologies for assuming that you were a skeptic, in my earlier posts. My post in #40 was intended purely as a dig at the double standards of some skeptics who balk at the probabilities invoked by the fine-tuning argument, but then proceed to invoke far vaguer "probabilities" (based on nothing more than subjective hunches) when discussing the problem of evil, or the likelihood of there being an incorporeal Deity. Although probability is not my specialty in philosophy, I am certainly aware of the difference between epistemic and physical probabilities, which you mentioned in your last post (#49). So is Collins, whose lengthy essay I'm currently summarizing. Accordingly, I'll cite some remarks by Collins which address your objections. You object to the fine-tuning argument on the ground that it conflates epistemic and physical probabilities:
Only with a time machine could one observe an origin of life. Without a time machine, all one can do is to use information in the present to guess a process in the past, and then assign probability according to the guess (model). This probability is certainly not a physical entity. It derives from the modeler’s supposition as to what was going on 4 billion years ago... It is clearly wrong to infer that an intelligence intervened to change physical probability when the deficit in probability is merely subjective.
Collins responds in section 3.1 of his essay ("The need for epistemic probability") that on the contrary, epistemic probability is extensively used in scientific confirmation, and that it often precedes any application of physical and/or statistical probability:
Consider, for example, the arguments typically offered in favor of the Thesis of Common Ancestry, continental drift theory, and the atomic hypothesis. The Thesis of Common Ancestry is commonly supported by claiming that a variety of features of the world – such as the structure of the tree of life – would not be improbable if this thesis is true, but would be very improbable under other contending, nonevolutionary hypotheses, such as special creation.... Similar lines of reasoning are given for accepting continental drift theory. For example, the similarity between the animal and plant life on Africa and South America millions of years ago was considered to provide significant support for continental drift theory. Why? Because it was judged very unlikely that this similarity would exist if continental drift theory were false, but not if it were true. Finally, consider the use of epistemic probability in the confirmation of atomic theory. According to Wesley Salmon (1984, pp. 219–20), what finally convinced virtually all physical scientists by 1912 of the atomic hypothesis was the agreement of at least 13 independent determinations of Avogadro’s number based on the assumption that atomic theory was correct... Since some of the probabilities in the aforementioned examples involve singular, nonrepeatable states of affairs, they are not based on statistical probabilities, nor arguably other non-epistemic probabilities. This is especially evident for the probabilities involved in the confirmation of atomic theory since some of them involve claims about probabilities conditioned on the underlying structure and laws of the universe being different – e.g. atoms not existing. Hence, they are not based on actual physical propensities, relative frequencies, or theoretical models of the universe's operation. They therefore cannot be grounded in theoretical, statistical, or physical probabilities. (Emphases mine - VJT.)
All in all, I think Collins has made a good case that epistemic probabilities cannot be eliminated from science, and that their use may even precede the use of physical probabilities invoked by scientists. And now, over to you.vjtorley
March 9, 2010 at 12:26 PM PDT
Mark Frank (#44) and R0b (#46): You both raise some valid points regarding the scale that we should use when assessing whether a constant is fine-tuned. R0b writes:
Suppose we take his [Collins'] meteor example and look at the size of the meteor rather than its impact location. Do we assume uniform probability over possible circumferences, surface areas, or volumes?
Let's go back to gravity, which I discussed in an earlier post (#34). In that post, I quoted Collins as saying that stars with life-times of more than a billion years (as compared to our sun’s life-time of ten billion years) could not exist if gravity were increased by more than a factor of 3000. Now, I can sympathize with skeptics who might object that being able to increase the strength of gravity up to 3000 times doesn't sound like fine-tuning, unless you set it against the backdrop of a very large range (0 to 10^40). But actually, gravity is much, much more finely-tuned than that, as I found out when reading Collins' latest essay:
2.3.2 Fine-tuning of gravity There is, however, a fine-tuning of gravity relative to other parameters. One of these is the fine-tuning of gravity relative to the density of mass-energy in the early universe and other factors determining the expansion rate of the Big Bang – such as the value of the Hubble constant and the value of the cosmological constant. Holding these other parameters constant, if the strength of gravity were smaller or larger by an estimated one part in 10^60 of its current value, the universe would have either exploded too quickly for galaxies and stars to form, or collapsed back on itself too quickly for life to evolve. The lesson here is that a single parameter, such as gravity, participates in several different fine-tunings relative to other parameters. [Footnote: This latter fine-tuning of the strength of gravity is typically expressed as the claim that the density of matter at the Planck time (the time at which we have any confidence in the theory of Big Bang dynamics) must have been tuned to one part in 10^60 of the so-called critical density (e.g. Davies 1982, p. 89).] (Emphases mine - VJT.)
Look at that. "One part in 10^60." Amazing, isn't it? Wouldn't you agree that's pretty finely-tuned? But Collins isn't finished yet. He has anticipated Mark Frank's objection (#44) that the fine-tuning "surprise factor" diminishes markedly, if one suitably redefines the natural constant:
One could rewrite Newton’s laws based on a concept, call it rootforce, which is the square root of our current definition of force. In this case the rootforce between two objects of mass m1 and m2 = squareroot(G)*squareroot(m1*m2)/r. It is much more convenient to deal with force than rootforce – but in what sense is it more of a physical reality?
Robin Collins considers an even nastier example, involving 100-th roots:
3.3.2 Restricted Principle of Indifference In the case of the constants of physics, one can always find some mathematically equivalent way of writing the laws of physics in which (W_r)/(W_R) [the ratio of the restricted range of life-permitting values for a physical constant to the total range of values over which the constant can vary - VJT] is any arbitrarily selected value between zero and one. For example, one could write Newton's law of gravity as F = (U^100.m_1.m_2)/r^2, where U is the corresponding gravitational constant such that U^100 = G. If the comparison range for the standard gravitational constant G were from 0 to 10^100.G_0, and the life-permitting range were from 0 to 10^9.G_0, that would translate to a comparison range for U of 0 to 10.U_0 and a life-permitting range of 0 to (1.2).U_0, since 10.U_0 = 10^100.G_0 and (1.2).U_0 = 10^9.G_0. (Here G_0 is the present value of G and U_0 would be the corresponding present value of U.) Thus, using G as the gravitational constant, the ratio, (W_r)/(W_R), would be (10^9.G_0)/(10^100.G_0) = 1/10^91, and using U as the "gravitational constant," it would be (1.2.U_0)/(10.U_0), or 0.12, a dramatic difference! Of course, F = (U^100.m_1.m_2)/r^2 is not nearly as simple as F = (G.m_1.m_2)/r^2, and thus the restricted Principle of Indifference would only apply when using G as one's variable, not U. ...In the next section, however, we shall see that for purposes of theory confirmation, scientists often take those variables that occur in the simplest formulation of a theory as the natural variables. Thus, when there is a simplest formulation, or nontrivial class of such formulations, of the laws of physics, the restricted Principle of Indifference circumvents the Bertrand Paradoxes.
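As a quick numerical check on the arithmetic in the passage just quoted (a sketch in Python; the ranges 10^100.G_0 and 10^9.G_0 are Collins' illustrative figures, not independent estimates):

# Collins' G-versus-U example, where U**100 == G.
G0 = 1.0                  # present value of G, in arbitrary units
R_G = 1e100 * G0          # comparison range for G: 0 to 10^100 * G_0
r_G = 1e9 * G0            # life-permitting range for G: 0 to 10^9 * G_0
print(r_G / R_G)          # 1e-91, i.e. one part in 10^91

# The same ranges expressed in U = G**(1/100).
R_U = R_G ** (1 / 100)    # = 10 * U_0
r_U = r_G ** (1 / 100)    # = 10**0.09 * U_0, about 1.23 * U_0
print(round(r_U / R_U, 2))  # about 0.12, the "dramatic difference"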
Next, Collins argues that refusal to recognize the restricted Principle of Indifference would have the unacceptable consequence that highly accurate scientific predictions do not count as a valid reason for accepting a scientific theory:
3.3.3. Natural variable assumption Typically, in scientific practice, precise and correct novel predictions are taken to significantly confirm a theory, with the degree of confirmation increasing with the precision of the prediction. We shall argue, however, that the notion of the "precision" of a prediction makes sense only if one privileges certain variables – the ones that I shall call the natural variables. These are the variables that occur in the simplest overall expression of the laws of physics. Thus, epistemically privileging the natural variables as required by the restricted Principle of Indifference corresponds to the epistemic practice in certain areas of scientific confirmation; if scientists did not privilege certain variables, they could not claim that highly precise predictions confirm a theory significantly more than imprecise predictions.... From examples like the one cited earlier, it is also clear that W_R precision also depends on the choice of the natural variable, as we explained for the case of fine-tuning. [W_R is the total range of possible values for a constant of nature - VJT.] So it seems that in order to speak of the predictive SD [significant digit] or W_R precision for those cases in which a theory predicts the correct experimental value for some quantity, one must assume a natural variable for determining the known predictive precision. One could, of course, deny that there exists any nonrelative predictive precision, and instead claim that all we can say is that a prediction has a certain precision relative to the variable we use to express the prediction. Such a claim, however, would amount to a denial that highly accurate predictions, such as those of QED [Quantum Electro-Dynamics - VJT], have any special epistemic merit over predictions of much less precision. This, however, is contrary to the practice of most scientists. In the case of QED, for instance, scientists did take the astounding, known precision of QED’s prediction of the g-factor [gyromagnetic ratio - VJT] of the electron, along with its astoundingly accurate predictions of other quantities, such as the Lamb shift, as strong evidence in favor of the theory. Further, denying the special merit of very accurate predictions seems highly implausible in and of itself. Such a denial would amount to saying, for example, that the fact that a theory correctly predicts a quantity to an SD [significant digit] precision of, say, 20 significant digits does not, in general, count significantly more in favor of the theory than if it had correctly predicted another quantity with a precision of two significant digits. This seems highly implausible.
Collins is of course aware of cases where there may be some doubt as to which variable we should use:
[C]onsider the case in which we are told that a factory produces cubes between 0 and 10 meters in length, but in which we are given no information about what lengths it produces. Using our aforementioned principle, we shall now calculate the epistemic probability of the cube being between 9 and 10 meters in length. Such a cube could be characterized either by its length, L, or its volume, V. If we characterize it by its length, then since the range [9,10] is one-tenth of the possible range of lengths of the cube, the probability would be 1/10. If, however, we characterize it by its volume, the ratio of the range of volumes is: [1,000 − 9^3]/1,000 = [1,000 − 729]/1,000 = 0.271, which yields almost three times the probability as for the case of using length. Thus, the probability we obtain depends on what mathematically equivalent variable we use to characterize the situation.
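Collins' cube numbers are easy to reproduce; here is a minimal sketch in Python (the 0–10 m range and the 9–10 m interval are taken from the quoted example):

# Probability that the cube's length lies in [9, 10], under indifference
# over length versus indifference over volume.
p_length = (10.0 - 9.0) / (10.0 - 0.0)
print(p_length)            # 0.1

p_volume = (10.0**3 - 9.0**3) / (10.0**3 - 0.0)
print(p_volume)            # 0.271, nearly three times p_length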
Collins replies that in this particular case, there is a genuine ambiguity:
In analogy to Bertrand’s Cube Paradox for the Principle of Indifference, in the case of the aforementioned cube it seems that we have no a priori way of choosing between expressing the precision in terms of volume or in terms of length, since both seem equally natural. At best, all we can say is that the predicted precision is somewhere between that determined by using length to represent the experimental data and that determined by using volume to represent the experimental data.
Collins' admission that there may be some "hard cases" in no way undermines his fundamental point, that "the notion of the 'precision' of a prediction makes sense only if one privileges certain variables – the ones that I shall call the natural variables... [I]f scientists did not privilege certain variables, they could not claim that highly precise predictions confirm a theory significantly more than imprecise predictions." And returning briefly to the meteor example discussed by R0b above: I think it's pretty clear that area is the relevant variable to consider.vjtorley
March 9, 2010 at 11:37 AM PDT
vjtorley (40), You are using "probability" equivocally. I know that you hold a doctoral degree in philosophy, but you probably (pun, as well as example of usage, intended) have not studied the philosophy of probability. The article on interpretation of probability in the online Stanford Encyclopedia of Philosophy is an excellent place to start reading. I probably annoy people with my excessive use of highlighting, but here I go again:
A physicist commenting at another blog offered a fine slogan: “No probability without process.” As best I can recall from my reading in the philosophy of probability, everyone agrees that physical probability should be treated as the relative frequency of outcomes of a repeatable experiment.
The "probability" in your response to this was not physical. In fact, people do not assign numerical probabilities to propositions in that sort of discourse, and it's not clear to me that there is a rational way to do so. The attachment of "probably" to a proposition is often just a rhetorical trick for sidestepping subjective preference and giving the impression of objectivity. As for hoisting me on my own petard, you're making an error that is common at UncommonDescent, namely to jump to the conclusion that someone who objects to your argument objects to God. The fact is that I regard so-called proofs and disproofs of God as idolatry of reason. I should mention that Koonin comes perilously close to giving, and perhaps lays between the lines, an argument from improbability to an infinite multiverse. If he were to make this argument explicit, I would object to it just as I do IDists' arguments from improbability. No one should question whether "I calls 'em like I sees 'em."Sooner Emeritus
March 9, 2010 at 11:24 AM PDT
If there was a god, and there was a multi-verse, why bother fine-tuning any particular universe? We might simply have been put in a universe that is most beneficial to us.Toronto
March 9, 2010 at 11:02 AM PDT
Mr. Nakashima (#45) Thank you for your post. You raise the very pertinent issue of how the range should be defined for a natural constant of physics:
I’m sorry, I have no prior confidence or rational expectations at the QCD level. Collins does nothing to support an argument that because two forces are in ratio of 1:10^40 that it is appropriate to take 1 as lower bound and 10^40 as upper bound of the possible variation range in either one.
Actually, I believe Collins takes 0 rather than 1 as the lower bound, but that's a minor quibble. The underlying principle that Collins is appealing to here is that the domain of applicability for a physical concept (e.g. gravity, or the strong force) should be taken as the range of values over which we have no reason to believe that the concept would break down. As Collins points out in his essay, we do have reason to believe that at very high energies, our force-related concepts would break down. These high energies represent a natural cutoff point for the application of our force-related concepts. At these high energies, space and time are no longer continuous. Mr. Nakashima, in your post you mentioned the possibility that our universe is actually fully quantized, and that the Theory of Everything could be a set of cellular automata rules. If I read him correctly, Robin Collins would not be at all fazed if this idea, which has been floated by Ed Fredkin and Stephen Wolfram, actually turned out to be correct. Anyway, without further ado, I'll enclose a relevant quote from Collins' lengthy essay. I hope it addresses your concerns about range:
4.5. Examples of the EI region In the past, we have found that physical theories are limited in their range of applicability – for example, Newtonian mechanics was limited to medium-sized objects moving at slow speeds relative to the speed of light. For fast-moving objects, we require special relativity; for massive objects, General Relativity; for very small objects, quantum theory.... There are good reasons to believe that current physics is limited in its domain of applicability. The most discussed of these limits is energy scale.... The limits of the applicability of our current physical theories to below a certain energy scale, therefore, translate to a limit on our ability to determine the effects of drastically increasing a value of a given force strength – for example, our physics does not tell us what would happen if we increased the strong nuclear force by a factor of 10^1,000... Further, we have no guarantee that the concept of a force strength itself remains applicable from within the perspective of the new physics at such energy scales... Thus, by inductive reasoning from the past, we should expect not only entirely unforeseen phenomena at energies far exceeding the cutoff, but we should even expect the loss of the applicability of many of our ordinary concepts, such as that of force strength. The so-called Planck scale is often assumed to be the cutoff for the applicability of the strong, weak, and electromagnetic forces. This is the scale at which unknown quantum gravity effects are suspected to take place, thus invalidating certain foundational assumptions on which current quantum field theories are based, such as continuous space-time (see e.g. Peacock 1999, p. 275; Sahni & Starobinsky 1999, p. 44). The Planck scale occurs at the energy of 10^19 GeV (billion electron volts), which is roughly 10^21 times higher than the binding energies of protons and neutrons in a nucleus. This means that we could expect a new physics to begin to come into play if the strength of the strong force were increased by more than a factor of ~10^21... Effective field theory approaches to gravity also involve General Relativity's being a low-energy approximation to the true theory. One common proposed cutoff is the Planck scale...
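The factor of ~10^21 in the quoted passage checks out roughly (a sketch; the ~8 MeV-per-nucleon binding energy is a standard textbook figure supplied here, not a number given in the quote):

# Ratio of the Planck energy scale to typical nuclear binding energies.
planck_scale_gev = 1e19          # ~10^19 GeV, from the quoted passage
binding_per_nucleon_gev = 8e-3   # ~8 MeV per nucleon (assumed textbook value)
print(planck_scale_gev / binding_per_nucleon_gev)   # ~1.2e21, i.e. roughly 10^21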
vjtorley
March 9, 2010 at 10:38 AM PDT
Mr. Nakashima, Mark Frank and R0b: I've just been having a look at Robin Collins' latest 80-page essay (2009) on fine-tuning, which is available at http://commonsenseatheism.com/wp-content/uploads/2009/09/Collins-The-Teleological-Argument.pdf . It's actually chapter 4, "The Teleological Argument: An Exploration of the Fine-Tuning of the Universe," in The Blackwell Companion to Natural Theology, edited by William Lane Craig and J. P. Moreland (Blackwell Publishing Ltd, 2009), ISBN 978-1-405-17657-6. (From what I hear, the essays in this volume are all of a high caliber; for skeptics, it's definitely a "must-buy.") I'd strongly recommend that you have a look at Collins' latest essay, because his treatment of the subject is absolutely exhaustive. There is no objection to fine-tuning that he hasn't anticipated, and he rebuts them all. I'll deal with some of the objections that have been raised on this thread in the next few posts.vjtorley
March 9, 2010 at 10:17 AM PDT
vjtorley, quoting Collins:
The answer to this question is to require that the proportion used in calculating the probability be between real physical ranges, areas, or volumes, not merely mathematical representations of them. That is, the proportion given by the scale used in one’s representation must directly correspond to the proportions actually existing in physical reality.
Unfortunately, this approach is still arbitrary, as Mark Frank has pointed out. Suppose we take his meteor example and look at the size of the meteor rather than its impact location. Do we assume uniform probability over possible circumferences, surface areas, or volumes?R0b
March 9, 2010 at 08:12 AM PDT
Dr Torley, Thanks for the links to Collins' site. I think I've been through some of his stuff before. From the work you directed my attention to: “For now, we can think of the unconditional epistemic probability of a proposition as the degree of confidence or belief we rationally should have in the proposition; the conditional epistemic probability of a proposition R on another proposition S can roughly be defined as the degree to which the proposition S of itself should rationally lead us to expect that R is true.” Since this epistemic probability is grounded in our notions of confidence and rationality, I found it hard to use in spots where Collins freely admits that he has no idea what the free parameters are at the level of QCD, but continues blithely on anyway to assert that ranges are meaningful at other levels. I'm sorry, I have no prior confidence or rational expectations at the QCD level. Collins does nothing to support an argument that because two forces are in ratio of 1:10^40 that it is appropriate to take 1 as lower bound and 10^40 as upper bound of the possible variation range in either one. Surely the range of variation is an attribute of the free parameter itself, not the happenstance ratio of one free parameter to another. Why should the ratio of the mass of the electron to the muon establish a range, and not the electron to the tau, by his reasoning? Since we are on the subject of parameters for life-supporting universes, I'll mention that thinkers as diverse as Ed Fredkin and Stephen Wolfram have floated the idea that our universe is actually fully quantized, and that the Theory of Everything could be a set of cellular automata rules. So I think it is appropriate to bring back the CA-as-life discussion that I had hoped to have with you at the tail of another thread. What say you?Nakashima
March 9, 2010 at 06:05 AM PDT
vjtorley You will not be surprised to know that I disagree with Collins in all three regards. He expresses his arguments clearly but they are hardly new. I am going to duck the first one. It leads to some difficult discussions about what is a “possible” range for a physical constant and it is not necessary to win this debate to prove my point. The second and third are two sides of the same coin. To say that we don’t know what scale to use when comparing possible ranges of a physical constant is mathematically identical to saying we don’t know what probability distribution function to use when considering a particular scale. Collins tries to justify the choice of one scale by writing: “the proportion given by the scale used in one’s representation must directly correspond to the proportions actually existing in physical reality,” but this is not as straightforward as he makes it sound. Consider the analogy of the map that he gives. It may well be true that a meteorite is just as likely to hit any square mile of the earth’s surface as any other. We can estimate that because we know that meteorites come from pretty much any direction and behave fairly chaotically when they hit the atmosphere (i.e. we learn from experience that this is a good scale to use). But consider a radiation-based phenomenon from the Sun – say a burst of a particular type of radiation hitting the earth’s surface. Such a burst is far more likely to hit a square mile near the equator than a square mile near the poles. In this case the relevant “physical reality” is the disc that the earth presents to the Sun rather than the sphere of the earth’s surface. Something similar applies to G. It is true that G corresponds to the force between two unit masses a unit distance apart, but what claim has Force to be a physical reality? It is just a mathematically convenient way of treating the movement of objects. One could rewrite Newton’s laws based on a concept, call it rootforce, which is the square root of our current definition of force. In this case the rootforce between two objects of mass m1 and m2 = squareroot(G)*squareroot(m1*m2)/r. It is much more convenient to deal with force than rootforce – but in what sense is it more of a physical reality? This leads then into the principle of indifference. Why are we indifferent to equal ranges of G but not indifferent to equal ranges of root G? Although Collins says, “Several powerful general reasons can be offered in defense of the Principle of Indifference if it is restricted in the ways explained earlier,” he actually offers just two reasons in the passage you quote. First, it has an extraordinarily wide range of applicability. Second, in certain everyday cases, the Principle of Indifference seems the only justification we have for assigning probability. I would argue that the principle of indifference is actually just a heuristic to help us estimate or guess the underlying objective probability. Given that, the first defence is really irrelevant. If it is a good heuristic then it will work in a wide range of situations. The second case is more interesting. To take the example of the 20-sided die, Collins writes: “given that every side of the die is macroscopically symmetrical with every other side, we have no reason to believe that it will land on one side versus any other. Accordingly, we assign all outcomes an equal probability of one in 20.” But of course every side is not identical. They have different numbers on them. How do we know that this is not a relevant difference?
Because of our experience with other objects. We have learned what factors determine the probability of an object falling with a particular face upwards. Indeed if you were superstitious you might well believe that whatever is inscribed on the face of the die does determine the probability of that face landing upwards. You might argue that we could remove the numbers. But there will always be differences. One side would be the one closest to you before throwing it. One side will be the one that was previously on top, etc. The “principle of indifference” is really “the principle of relevant indifference”. And in the end the judge of relevance is experience. Which comes back to the fact that we have no experience to fall back on when it comes to fundamental physical constants. I don’t entirely agree with “No probability without process.” It seems to me that you can substitute observation of repeated trials instead of process. But if you have no possibility of repeated trials then you must have some idea of the process to make even a guess at the probability.Mark Frank
March 9, 2010 at 04:40 AM PDT
ROb, Infinity is not a number, it is a concept. The question asked had to do with a measurable number of physical objects. You can visit the question in comment #31. Perhaps you read set theory into the question. I don't.Upright BiPed
March 9, 2010 at 01:04 AM PDT
Perhaps I should have been slightly more precise in my last comment. I said that aleph null is not a set, but a number of developments of set theory define cardinal numbers as sets (for instance, 0 might be defined as the empty set, 1 as the set containing the empty set, 2 as the set containing 1 and the empty set, and so on). If you define cardinalities in this way then, yes, aleph null is a set, but so is 4. It remains the case that aleph null is just as much of a number as 4.Sotto Voce
March 8, 2010 at 11:36 PM PDT
Upright, As R0b said, aleph null is not a set, it is a cardinality. Just like the set {a, b, c, d} has cardinality 4, the set of all natural numbers has cardinality aleph null. It is just as much of a number as 4 is. As for your claim that 5 cannot be added to the infinite set of even numbers, I take it you mean that it cannot be added without changing the set. That is, once 5 is added, the resulting set is no longer the set of even numbers. This is true, of course, but irrelevant to tgpeeler's point. He/she said that we can always add another world to the set of worlds that are ostensibly part of the multiverse. My interpretation of this claim was that there are logically possible universes that are not physically possible, so there are universes "left out" of the multiverse. Of course, adding such a universe to the set will change the set, just like adding 5 to the set of even numbers, but why does this matter? Maybe I have misinterpreted tgpeeler's point (it was a bit opaque), in which case I would be glad to be corrected.Sotto Voce
March 8, 2010 at 11:27 PM PDT
Sooner Emeritus (#38) Thank you for your post. Be careful what you wish for - you just might get what you want. “No probability without process.” That's your epistemic maxim. Let's see where that leads. Improbability is the converse of probability. “No probability without process” entails “No improbability without process.” This means that many anti-theistic arguments from the evil in the world, of the form: "There's probably no God, because if there were a God, then He would have prevented evil X, Y and Z" are invalid, because they don't specify HOW God would have or should have done so. “No improbability without process.” It also means that anti-theistic arguments of the form: "There's probably no God, because all the intelligent beings I've seen are highly fragile, multi-part complex entities" are invalid, because they are based on induction - which is also rendered problematic by your maxim, because mere repetition of a phenomenon tells us nothing about the underlying process, if there is one. And since, by your maxim, there can be “No probability without process,” we can't calculate the probability of the existence of an intelligent Being which is simple (i.e. NOT complex), as most classical theists believe. Finally, not everything can happen by virtue of some underlying process. Some natural processes are basic - they just happen. Quarks exchange gluons. How do they do that? I don't know. That's just what a quark does. Does that mean we can't make probabilistic statements about quarks? At the very least, you will have to concede that anti-theistic probability arguments are as problematic as the theistic arguments you criticize. And I respectfully submit that your maxim does not supply a warrant for induction, or for the degree of trust you should place in your own cognitive processes. To borrow a phrase from Barry, skeptics are indeed hoisted on their own petard.vjtorley
March 8, 2010 at 11:27 PM PDT
Upright Biped:
So, there is no number that represents infinity;
Transfinite numbers do.
Aleph null is a set;
Aleph-null is a number -- a transfinite cardinal number.
The number 5 cannot be added to the infinite set of even integers.
Sure it can: {x:x/2∈Z} ∪ {5}R0b
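To make R0b's reply concrete in code (a sketch only; an infinite set can be represented by a membership test rather than by listing its elements):

# Represent a (possibly infinite) set by its membership predicate.
def evens(x):
    # The infinite set of even integers, as a membership test.
    return isinstance(x, int) and x % 2 == 0

def union(set_a, set_b):
    # Union of two predicate-represented sets.
    return lambda x: set_a(x) or set_b(x)

evens_plus_five = union(evens, lambda x: x == 5)
print(evens(5))             # False: 5 is not an even integer
print(evens_plus_five(5))   # True: the union contains 5
print(evens_plus_five(42))  # True: every even integer is still there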
March 8, 2010 at 09:22 PM PDT
Mark Frank, A physicist commenting at another blog offered a fine slogan: "No probability without process." As best I can recall from my reading in the philosophy of probability, everyone agrees that physical probability should be treated as the relative frequency of outcomes of a repeatable experiment. A repeatable experiment is, among other things, a known process. Only with a time machine could one observe an origin of life. Without a time machine, all one can do is to use information in the present to guess a process in the past, and then assign probability according to the guess (model). This probability is certainly not a physical entity. It derives from the modeler's supposition as to what was going on 4 billion years ago.
It is the idea that you can legitimately use the principle of indifference in almost any context to come up with an epistemic probability of an outcome and that this has some kind of objective justification independently of any kind of frequency-based evidence. Dembski uses it all the time and I am sure it is wrong.
It's not just the unbridled application of the Principle of Indifference, but more generally the fallacy of treating epistemic probabilities as physical probabilities. Dembski rejects natural causation of an easily described, prehistorical event in favor of intelligent design (increase of physical probability) when a hypothetical process has an outcome in the event ("target") with very low subjective probability. It is clearly wrong to infer that an intelligence intervened to change physical probability when the deficit in probability is merely subjective.Sooner Emeritus
March 8, 2010 at 08:05 PM PDT
Pelagius, Correct. So, there is no number that represents infinity; any number you sample can be increased - which was exactly TGP's point. Sotto, Aleph null is a set; a set is an ensemble of distinct objects. The number 5 cannot be added to the infinite set of even integers. That particular set does not include the number of DeSoto hubcaps either.Upright BiPed
March 8, 2010 at 07:11 PM PDT
Upright Biped, I hope you don't mind me answering on pelagius's behalf. The total number of integers is aleph null. Tgpeeler, You say you can "always add one more to the count". I suspect you are making the same error that Barry makes in his post. An infinite set doesn't have to include everything. It can leave things out, so you can add things to the set. Going back to my earlier example, the set of all even numbers does not contain 5, yet it is an infinite set. I could add 5 to it. So the mere fact that something can be added to a set does not show that it is not infinite. The curious thing about an infinite set is that adding one more member does not actually change the size of the set. Transfinite arithmetic is weird.Sotto Voce
March 8, 2010 at 03:49 PM PDT
Upright Biped, There are infinitely many integers.pelagius
March 8, 2010 at 03:42 PM PDT
Mark Frank (#30) Thank you for your post. It seems to me that Dr. Robin Collins has already answered many of the questions you ask. In particular, I would refer you to the article "God and Fine-tuning" by Collins at http://home.messiah.edu/~rcollins/Fine-tuning/The%20Evidence%20for%20Fine-tuning.rtf . It carefully avoids making inflated claims about fine-tuning, and even hoses down some of the poorly supported claims in the literature, after putting forward six solid cases of fine-tuning. Let's talk about gravity. You ask:
What is the possible range of values for G?
Here's what Collins has to say:
Now, the forces in nature can be thought of as spanning a range of G_0 to (10^40.G_0), at least in one widely-used dimensionless measure of the strengths of these forces (Barrow and Tipler, pp. 293 - 295). (Here, G_0 denotes the strength of gravity, with (10^40.G_0) being the strength of the strong force.).... [A]s shown in section B of the appendix, stars with life-times of more than a billion years (as compared to our sun's life-time of ten billion years) could not exist if gravity were increased by more than a factor of 3000. This would have significant intelligent life-inhibiting consequences. Of course, an increase in the strength of gravity by a factor of 3000 is a lot, but compared to the total range of strengths of the forces in nature (which span a range of 0 to (10^40.G_0) as we saw above), this still amounts to a one-sided fine-tuning of approximately one part in 10^36. On the other hand, if the strength of gravity were zero (or negative, making gravity repulsive), no stars or other solid bodies could exist. Thus, zero could be considered a lower bound on the strength of gravity. Accordingly, the intelligent-life-permitting values of the gravitational force are restricted to the range 0 to ((3x10^3).G_0), which is about one part in 10^36 of the total range of forces. This means that there is a two-sided fine-tuning of gravity of at least one part in 10^36.
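The "one part in 10^36" in the quoted passage follows from a single division (a sketch using only the figures Collins gives):

# One-sided fine-tuning of gravity, using the figures quoted from Collins.
total_range = 1e40    # force strengths span 0 to 10^40 * G_0
life_range = 3e3      # life-permitting: gravity at most ~3000 * G_0
print(life_range / total_range)   # 3e-37, roughly one part in 10^36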
Next, you object to the use of a linear scale, when comparing the range of life-permitting values, L, with the range of allowable values, R.
What scale to use for L and R? Why not logL/logR or L^2/R^2? Assuming it is possible to define R then each of these transformations would greatly change the ratio and it is possible to find a transformation to make the ratio almost any value you want.
This is equivalent to asking: why should we look at G? Why not log G or G^2? Here's what Robin Collins wrote in answer to your question, in an earlier 1998 paper at http://www.discovery.org/a/91 :
The answer to this question is to require that the proportion used in calculating the probability be between real physical ranges, areas, or volumes, not merely mathematical representations of them. That is, the proportion given by the scale used in one's representation must directly correspond to the proportions actually existing in physical reality. As an illustration, consider how we might calculate the probability that a meteorite will fall in New York state instead of somewhere else in the northern, contiguous United States. One way of doing this is to take a standard map of the northern, contiguous United States, measure the area covered by New York on the map (say 2 square inches) and divide it by the total area of the map (say 30 square inches). If we were to do this, we would get approximately the right answer because the proportions on a standard map directly correspond to the actual proportions of land areas in the United States. On the other hand, suppose we had a map made by some lover of the East coast in which, because of the scale used, the East coast took up half the map. If we used the proportions of areas as represented by this map we would get the wrong answer since the scale used would not correspond to real proportions of land areas. Applied to the fine-tuning, this means that our calculations of these proportions must be done using parameters that directly correspond to physical quantities in order to yield valid probabilities. In the case of gravity, for instance, the gravitational constant G directly corresponds to the force between two unit masses a unit distance apart, whereas U does not. (Instead, U corresponds to the square of the force.) Thus, G is the correct parameter to use in calculating the probability. (Emphases mine - VJT.)
Finally, you object to the epistemic principle of indifference. You write:
In the end classical/logical and epistemic probabilities are best guesses at hypothetical frequentist probabilities. An epistemic probability is subjective. It is my attempt to estimate, given certain conditions H (which may represent my current relevant knowledge or a hypothesis), how frequently we would get outcome X were we able to reproduce H many, many times.
I think this line of argumentation is unduly pessimistic, regarding what we can and cannot know. To cite Collins again from a recent paper ("The Teleological Argument: An Exploration of the Fine-Tuning of the Universe" in The Blackwell Companion to Natural Theology):
According to the restricted Principle of Indifference, when we have no reason to prefer any one value of a variable p over another in some range R, we should assign equal epistemic probabilities to equal ranges of p that are in R, given that p constitutes a "natural variable." A variable is defined as "natural" if it occurs within the simplest formulation of the relevant area of physics. When there is a range of viable natural variables, then one can only legitimately speak of the range of possible probabilities, with the range being determined by probabilities spanned by the lower and upper bound of the probabilities determined by the various choices of natural variables... Several powerful general reasons can be offered in defense of the Principle of Indifference if it is restricted in the ways explained earlier. First, it has an extraordinarily wide range of applicability. As Roy Weatherford notes in his book, Philosophical Foundations of Probability Theory, "an astonishing number of extremely complex problems in probability theory have been solved, and usefully so, by calculations based entirely on the assumption of equiprobable alternatives [that is, the Principle of Indifference]" (1982, p. 35). Second, in certain everyday cases, the Principle of Indifference seems the only justification we have for assigning probability. To illustrate, suppose that in the last 10 minutes a factory produced the first 20-sided die ever produced (which would be a regular icosahedron). Further suppose that every side of the die is (macroscopically) perfectly symmetrical with every other side, except for each side having different numbers printed on it. (The die we are imagining is like a fair six-sided die except that it has 20 sides instead of six.) Now, we all immediately know that upon being rolled the probability of the die coming up on any given side is one in 20. Yet we do not know this directly from experience with 20-sided dice, since by hypothesis no one has yet rolled such dice to determine the relative frequency with which they come up on each side. Rather, it seems our only justification for assigning this probability is the Principle of Indifference: that is, given that every side of the die is macroscopically symmetrical with every other side, we have no reason to believe that it will land on one side versus any other. Accordingly, we assign all outcomes an equal probability of one in 20.
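Incidentally, the 20-sided die in Collins' illustration is also a case where indifference and long-run frequency agree; a quick simulation of the symmetric model (a sketch; the simulation assumes the very fairness in question, so it only illustrates what the 1/20 assignment means as a long-run frequency, rather than providing independent evidence for it):

import random
from collections import Counter

# Simulate rolls of a fair 20-sided die and compare observed relative
# frequencies with the 1/20 assigned by the Principle of Indifference.
rolls = 200_000
counts = Counter(random.randint(1, 20) for _ in range(rolls))
for side in (1, 7, 20):
    print(side, round(counts[side] / rolls, 4))   # each close to 0.05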
I hope that answers your questions.
vjtorley
March 8, 2010 at 3:42 PM
Pelagius, If I give you 5 integers (2 4 19 23 7), then there are five numbers you have been given. What is the total number of integers?
Upright BiPed
March 8, 2010 at 3:18 PM
tgpeeler:
I’m curious about the concept of an “infinite” number of universes. This phrase gets thrown around all the time but the very idea of it is incoherent to me. How can there be an actual infinite number of anything physical? If these universes are physical, and at least one of them is, the one I live in, then I can always add one more to the count of how ever many there are. Ergo, no infinite number of universes.
tgpeeler, Your problem is not with physical infinity but rather with the concept of infinity itself. To see this, consider the following argument which contains the same mistake as the one you presented:
Mathematicians claim that the set of integers is infinite. This strikes me as incoherent. How can there be an actual infinite number of integers? I can always add one more to the count of how ever many there are. Ergo, no infinite number of integers.
Do you see the problem?
pelagius
March 8, 2010 at 2:15 PM
Sotto @ 12: "The claim is that you have an infinite number of universes instantiating different values for fundamental physical constants and different initial conditions."

I'm curious about the concept of an "infinite" number of universes. This phrase gets thrown around all the time but the very idea of it is incoherent to me. How can there be an actual infinite number of anything physical? If these universes are physical, and at least one of them is, the one I live in, then I can always add one more to the count of how ever many there are. Ergo, no infinite number of universes.

I'm even more curious about how anyone can actually consider the "multiverse" as a solution or explanation for anything but I don't expect an answer to that.
tgpeeler
March 8, 2010 at 11:40 AM
#28 vjtorley

Collins' "How to Rigorously Define Fine-Tuning" is interesting. It is a pity it is incomplete. However, at first glance I can't see that it adds anything new to the main philosophical debate.

In general, when someone attempts to estimate the probability of a physical constant supporting life it comes down to a ratio, L/R, where L is the range of values that support life and R is the range of "possible" values that constant might take. On one level there are two problems with this:

1) The definition of R. What is the possible range of values for G?

2) What scale to use for L and R? Why not logL/logR or L^2/R^2? Assuming it is possible to define R, then each of these transformations would greatly change the ratio, and it is possible to find a transformation to make the ratio almost any value you want.

Collins tries to get to grips with these but not very convincingly. However, I think these problems just reflect a deeper problem with the nature of probability, a problem that underlies much of the ID movement. It is the idea that you can legitimately use the principle of indifference in almost any context to come up with an epistemic probability of an outcome, and that this has some kind of objective justification independently of any kind of frequency-based evidence. Dembski uses it all the time and I am sure it is wrong.

The problems with the principle of indifference are well known. Most importantly, you can come up with different results for the probability of an outcome depending on what you are indifferent to. This is reflected in problem 2 above. Are we indifferent to the value of G, or log G, or G^2, or what? It may appear obvious that you can use the principle of indifference to estimate the probability of a die coming down with six uppermost, but actually this is based on our massive experience of dice-like objects. If in practice sixes came down more often in the long run, the principle of indifference would have to concede to the reality of actual frequencies.

In the end classical/logical and epistemic probabilities are best guesses at hypothetical frequentist probabilities. An epistemic probability is subjective. It is my attempt to estimate, given certain conditions H (which may represent my current relevant knowledge or a hypothesis), how frequently we would get outcome X were we able to reproduce H many, many times. Once it is phrased like that, then we realise how little we can say about the probability of a universe having certain fundamental values. We have no knowledge at all of what H is or how it relates to X.
Mark Frank
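A quick illustrative sketch of the scale problem raised above (the numbers are invented, not actual physical ranges): the same sub-interval occupies a very different fraction of the parent range depending on whether one treats x, log x, or x^2 as the variable over which to be indifferent.

import math

# Invented ranges, purely to show how the proportion depends on the chosen scale.
R = (1.0, 1000.0)    # hypothetical "possible" range for a constant
L = (10.0, 11.0)     # hypothetical life-permitting sub-range

def proportion(f):
    # Fraction of the transformed range R occupied by the transformed interval L.
    return (f(L[1]) - f(L[0])) / (f(R[1]) - f(R[0]))

print(proportion(lambda x: x))            # linear scale:      ~0.0010
print(proportion(lambda x: math.log(x)))  # logarithmic scale: ~0.0138
print(proportion(lambda x: x**2))         # quadratic scale:   ~0.000021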
March 8, 2010 at 1:57 AM
Nakashima (20):
At best, we can say that Dr Koonin was engaging in a Bayesian, not frequentist, probabilistic argument, correct?
It seems that Koonin regards the probability of a possible macroscopic history with property A to be the ratio of the (finite) number of possible histories with property A to the (finite) number of possible histories. See "Probability / chance / randomness" in Table 1, especially. This is frequentism, and it assumes that all possible macroscopic histories are equiprobable. But, to adapt Sotto Voce's remarks in 23, why should they be equiprobable? The Principle of Indifference is an outdated dictum to modelers. It does not behoove nature to exhibit indifference. Some modelers would introduce a measure of descriptive complexity on histories, and would say that histories of lower complexity are of higher probability than histories of higher complexity. In the appendix, Koonin starts out by making particular assumptions about the properties of an O-region, and this is essentially a uniformity assumption. Furthermore, these assumptions are based on observations of our own O-region, and I don't know why it should be regarded as typical. So I don't see anything objective in the probabilities he estimates. Koonin ends the appendix by saying,
The model considered here is not supposed to be realistic by any account. It only serves to illustrate the difference in the demands on chance for the origin of different versions of the breakthrough system (see Fig. 1) and hence the connections between these versions and different cosmological models of the universe. [emphasis mine]
I think his belief that he's getting at something objective is evident here. And, as I've said, he's not owning up to his assumptions.
Sooner Emeritus
March 7, 2010 at 11:50 PM
Hi everyone,

I would strongly recommend Robin Collins' Fine-Tuning Website at http://home.messiah.edu/~rcollins/Fine-tuning/ft.htm for anyone who wants to read a robust defense of the fine-tuning argument. It contains a number of valuable essays on the subject of fine-tuning.

For a good overview, I'd recommend Collins' paper, "A Theistic Perspective on the Multiverse Hypothesis," especially the section on the multiverse at http://home.messiah.edu/~rcollins/Fine-tuning/stanford%20multiverse%20talk.htm#_1_5 which argues that a multiverse generator would still need to be designed, and the concluding section at http://home.messiah.edu/~rcollins/Fine-tuning/stanford%20multiverse%20talk.htm#_1_6 which argues that the underlying beauty of the laws of nature is powerful evidence of their having been designed.

For readers who don't have a strong philosophical background, Collins' paper, "God, Design, and Fine-Tuning" might be more helpful, as it presents the basic fine-tuning argument, along with Collins' responses to some of the standard objections to the argument.

To address methodological concerns that some contributors (Sooner Emeritus, Sotto Voce, Mark Frank and Nakashima) have raised concerning fine-tuning, I'd recommend Collins' paper, "How to Rigorously Define Fine-Tuning." Finally, the paper, "Evidence for Fine-Tuning" is also well worth reading, as it highlights the strongest pieces of scientific evidence for fine-tuning and warns against some scientifically faulty fine-tuning arguments that are still circulating in the literature.

Sorry, but that's all I have time for at the moment. I shall return within the next 24 hours.
vjtorley
March 7, 2010 at 7:10 PM
I think a very solid argument can be made for the existence of God in this universe. As pointed out yesterday, from the double slit experiment, it is impossible for 3 dimensional material reality to give rise to the conscious observation that causes the infinite "higher dimensional" wave to collapse to its "uncertain" 3-D particle state, for it is impossible for 3-D material reality to give rise to that which it is absolutely dependent on for its own reality in the first place. Yet the "cause" of our conscious observation is not sufficient, in and of itself, to explain the "effect" of universal collapse, of the higher dimensional waves to their 3-D states, for each central conscious observer in the universe. Thus a higher cause of a universal consciousness must exist to sufficiently explain the cause for the effect we witness for universal wave collapse.

I believe that line of thought is very similar to this ancient line of reasoning: "The 'First Mover' is necessary for change occurring at each moment." Michael Egnor - Aquinas' First Way http://www.evolutionnews.org/2009/09/jerry_coyne_and_aquinas_first.html

I found that centuries old philosophical argument, for the necessity of a "First Mover" accounting for change occurring at each moment, to be validated by quantum mechanics. This is since the possibility for the universe to be considered a "closed system" of cause and effect is removed with the refutation of the "hidden variable" argument. i.e. There must be a sufficient transcendent cause (God/First Mover) to explain the quantum wave collapse to the "uncertain" 3D effect for "each moment" of the universe.

This following study solidly refutes the "hidden variable" argument that has been used by materialists to try to get around the Theistic implications of this instantaneous "spooky action at a distance" found in quantum mechanics.

Quantum Measurements: Common Sense Is Not Enough, Physicists Show - July 2009
Excerpt: scientists have now proven comprehensively in an experiment for the first time that the experimentally observed phenomena cannot be described by non-contextual models with hidden variables.
http://www.sciencedaily.com/releases/2009/07/090722142824.htm
(of note: hidden variables were postulated to remove the need for "spooky" forces, as Einstein termed them - forces that act instantaneously at great distances, thereby breaking the most cherished rule of relativity theory, that nothing can travel faster than the speed of light.)

Though the lack of a hidden variable finding is very nice "icing on the cake", the logic that material reality cannot precede the consciousness it is dependent on for its own reality is what solidly seals the argument for the Theistic position.
bornagain77
March 7, 2010 at 6:17 PM