Intelligent Design

The announced “death” of the Fine-tuning Cosmological Argument seems to have been over-stated


In recent days, there has been a considerable stir in the blogosphere, as Prof. Don Page of the University of Alberta has issued two papers and a slide show that purport to show the death of — or at least significant evidence against — the fine-tuning cosmological argument. (Cf here and here at UD. [NB: A 101-level summary and context for the fine-tuning argument, with onward links, is here. A fairly impressive compendium of articles, links and videos on fine-tuning is here. A video summary is here, from that compendium. (Privileged Planet at Amazon)])


However, an examination of the shorter of the two papers by the professor will show that he has apparently overlooked a logical subtlety. He has in fact only argued that there may be a second, fine-tuned range of possible values for the cosmological constant. This may be seen from p. 5 of that paper:

. . . with the cosmological constant being the negative of the value for the MUM that makes it have present age

t0 = H0^-1 = 10^8 years/alpha, the total lifetime of the anti-MUM model is 2.44 t0 = 33.4 Gyr.

Values of [L] more negative than this would presumably reduce the amount of life per baryon that has condensed into galaxies more than the increase in the fraction of baryons that condense into galaxies in the first place, so I would suspect that the value of the cosmological constant that maximizes the fraction of baryons becoming life is between zero and – L0 ~ 3.5 * 10^-122, with a somewhat lower magnitude than the observed value but with the opposite sign. [Emphases added, and substitutes made for symbols that give trouble in browsers.]

Plainly, though, if one is proposing a range of values that is constrained to within several parts in 10^-122, one is discussing a fairly fine-tuned figure.

That is, one is simply arguing for a second possible locus of fine-tuning on the other side of zero.

(And that would still be so even if the new range were 0 to minus several parts in 10^-2 [a few percent], not minus several parts in 10^-122 [a few percent of a trillionth of a trillionth of . . . ten times over]. Several parts in a trillion is roughly comparable to the ratio of the size of a bacterium to twice the length of Florida, or to the lengths of Cuba, Honshu in Japan, Cape York in Australia, Great Britain or Italy.)
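The parts-in-a-trillion comparison above can be sanity-checked with a few lines of arithmetic. This is a sketch only; the ~1 micrometre bacterium and ~720 km length of Florida are assumed round figures, not taken from the original.

```python
# Sanity check: a few parts in a trillion vs. the ratio of a
# bacterium's size to twice the length of Florida.
# Assumed round figures: bacterium ~1 micrometre, Florida ~720 km.
bacterium_m = 1e-6
florida_m = 720e3
ratio = bacterium_m / (2 * florida_m)
print(f"ratio ~ {ratio:.1e}")  # ~7e-13, i.e. parts-in-a-trillion territory
assert 1e-13 < ratio < 1e-11
```

The exact figure depends on which landmass one picks, but the order of magnitude (10^-12 to 10^-13) holds for all the examples listed.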

I take liberty to scoop out and highlight my response in Dr Torley’s thread, but first let me discuss the issue of the Multiverse by citing John Leslie’s famous analogy of the fly on the wall from his classic Our Place in the Cosmos:

. . . the need for such explanations [of our evidently fine-tuned cosmos set to an operating point favourable to Carbon-chemistry, cell based intelligent life] does not depend on any estimate of how many universes would be observer-permitting, out of the entire field of possible universes.

Claiming that our universe is ‘fine tuned for observers’, we base our claim on how life’s evolution would apparently have been rendered utterly impossible by comparatively minor alterations in physical force strengths, elementary particle masses and so forth. There is no need for us to ask whether very great alterations in these affairs would have rendered it fully possible once more, let alone whether physical worlds conforming to very different laws could have been observer-permitting without being in any way fine tuned.

Here it can be useful to think of a fly on a wall, surrounded by an empty region. A bullet hits the fly. Two explanations suggest themselves. Perhaps many bullets are hitting the wall or perhaps a marksman fired the bullet. There is no need to ask whether distant areas of the wall, or other quite different walls, are covered with flies so that more or less any bullet striking there would have hit one. The important point is that the local area contains just the one fly. . . . [Emphases and paragraphing added.]

In short, so long as there is a local sensitivity in the life-permitting cluster of parameters, that needs to be adequately explained.

In addition, a multiverse model has to be able to explain the existence of a multiverse set up so that it will search such a small domain sufficiently well that it is likely to capture the fine-tuned life-permitting set, rather than the equivalent of the badly set-up bread making machine that turns out burned hockey-pucks or half-baked doughy messes.

That too, arguably requires fine-tuning.

Or, as Robin Collins so memorably put it:

Suppose we went on a mission to Mars, and found a domed structure in which everything was set up just right for life to exist. The temperature, for example, was set around 70 °F and the humidity was at 50%; moreover, there was an oxygen recycling system, an energy gathering system, and a whole system for the production of food. Put simply, the domed structure appeared to be a fully functioning biosphere. What conclusion would we draw from finding this structure? Would we draw the conclusion that it just happened to form by chance? Certainly not. Instead, we would unanimously conclude that it was designed by some intelligent being. Why would we draw this conclusion? Because an intelligent designer appears to be the only plausible explanation for the existence of the structure. That is, the only alternative explanation we can think of–that the structure was formed by some natural process–seems extremely unlikely. Of course, it is possible that, for example, through some volcanic eruption various metals and other compounds could have formed, and then separated out in just the right way to produce the “biosphere,” but such a scenario strikes us as extraordinarily unlikely, thus making this alternative explanation unbelievable.

The universe is analogous to such a “biosphere,” according to recent findings in physics . . . .  Scientists call this extraordinary balancing of the parameters of physics and the initial conditions of the universe the “fine-tuning of the cosmos”  . . .  For example, theoretical physicist and popular science writer Paul Davies–whose early writings were not particularly sympathetic to theism–claims that with regard to basic structure of the universe, “the impression of design is overwhelming” (Davies, 1988, p. 203) . . .

The responses in Dr Torley’s thread follow:

[Continued here]

28 Replies to “The announced “death” of the Fine-tuning Cosmological Argument seems to have been over-stated”

  1. 1
    second opinion says:

    The fine tuning of the universe is actually an argument against ID and I found a reference that supports my claim:

    “A user’s guide to design arguments” Trent Dougherty and Ted Poston in Religious Studies (2008), 44: 99-110.

    They basically argue that the probability of the fine tuning being true is inversely proportional to the probability that life is intelligently designed. This is pretty much what I have been saying before.

  2. 2
    kairosfocus says:

    2nd O:

    You are a bit tangential, but that’s fine for the moment; as the issues are connected.

    I think you should read and watch here; then come back to us on the specifics.

    Cheers,

    GEM of TKI

  3. 3
    DrBot says:

    KF (apologies – just a note in passing – haven’t had time to follow up on our discussions on other threads)

    I’ve never been a big fan of fine tuning arguments myself, neither the ‘for’ nor the ‘against’ ones, mostly because I just don’t think we really know enough to make any sensible statements.

    Firstly, there is no accounting for luck! So although our universe might be improbable, it could also be just that – a chance event. It is a weak argument against fine tuning, but one that is universal and unavoidable – maybe we really did just beat the odds! Because this is an argument you can apply to anything, it is not worth considering for long, so I won’t say any more on it.

    The second thing is slightly more interesting and is about frames of reference. We can look at cosmological constants and speculate about what might have been if they were different, but there may be much ‘higher’ structure to the universe, or structure on which the universe depends, that we can’t see or measure and that actually requires a constant to be the way it is – in other words, from a different frame of reference (including, for example, God’s) a cosmological constant may be at the value it is because it can hold no other value – it is not a parameter to be tuned; it just looks like that from our perspective.

    In this sense some elements of our universe that appear fine tuned may not be in reality (i.e. the concept is meaningless) so trying to estimate the probability of them being at a certain level is also meaningless.

    Without a deeper understanding of our universe we can’t yet answer some of these questions so my position is simply to say, we just don’t know enough to be certain yet, although the inferences we can make based on what we DO know are compelling – I just prefer to try and avoid getting carried away when inferences imply that what I want to believe is actually true 😉

  4. 4
    kairosfocus says:

    Dr Bot:

    Please read the previously linked then address the issues specifically on the merits.

    The issues hinge, not on what we do not know about our cosmos, but on what we do know, especially once the big bang model became credible as the framework in which we must think about the world and how it came to be, and since we understood how stars work.

    If you think this is a big swath, just take on the foundations of water, H2O, in light of how deeply the key properties of water tie into foundational dynamics, parameters and laws of the cosmos.

    You will see that the key section on fine tuning keys in on water.

    Observe the following excerpts from the section on cosmological finetuning here:

    ____________________

    E1: Robin Collins on biospheres:

    >> Suppose we went on a mission to Mars, and found a domed structure in which everything was set up just right for life to exist. The temperature, for example, was set around 70 °F and the humidity was at 50%; moreover, there was an oxygen recycling system, an energy gathering system, and a whole system for the production of food. Put simply, the domed structure appeared to be a fully functioning biosphere. What conclusion would we draw from finding this structure? Would we draw the conclusion that it just happened to form by chance? Certainly not. Instead, we would unanimously conclude that it was designed by some intelligent being. Why would we draw this conclusion? Because an intelligent designer appears to be the only plausible explanation for the existence of the structure. That is, the only alternative explanation we can think of–that the structure was formed by some natural process–seems extremely unlikely. Of course, it is possible that, for example, through some volcanic eruption various metals and other compounds could have formed, and then separated out in just the right way to produce the “biosphere,” but such a scenario strikes us as extraordinarily unlikely, thus making this alternative explanation unbelievable.

    The universe is analogous to such a “biosphere,” according to recent findings in physics . . . . Scientists call this extraordinary balancing of the parameters of physics and the initial conditions of the universe the “fine-tuning of the cosmos” . . . For example, theoretical physicist and popular science writer Paul Davies–whose early writings were not particularly sympathetic to theism–claims that with regard to basic structure of the universe, “the impression of design is overwhelming” (Davies, 1988, p. 203) . . .

    [[Cf. also here. Short summary here. Elsewhere, Collins notes how noted cosmologist Roger Penrose has estimated that “[[i]n order to produce a universe resembling the one in which we live, the Creator would have to aim for an absurdly tiny volume of the phase space of possible universes — about 1/(10^(10^123)) of the entire volume . . .” That is, 1 divided by 10 followed by one less than 10^123 zeros. By a long shot, there are not enough atoms in the observed universe [~10^80] to fully write out the fraction.] >>

    E2, Sir Fred Hoyle, a lifelong atheist-agnostic, on the relationship between H, C and O:

    >> From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12C to the 7.12 MeV level in 16O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16] >>

    And:

    >> I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [[“The Universe: Past and Present Reflections.” Engineering and Science, November, 1981. pp. 8–12] >>

    E3, D. Halsmer, J. Asper, N. Roman, T. Todd on good old H2O:

    >> The remarkable properties of water are numerous. Its very high specific heat maintains relatively stable temperatures both in oceans and organisms. As a liquid, its thermal conductivity is four times any other common liquid, which makes it possible for cells to efficiently distribute heat. On the other hand, ice has a low thermal conductivity, making it a good thermal shield in high latitudes. A latent heat of fusion only surpassed by that of ammonia tends to keep water in liquid form and creates a natural thermostat at 0°C. Likewise, the highest latent heat of vaporization of any substance – more than five times the energy required to heat the same amount of water from 0°C-100°C – allows water vapor to store large amounts of heat in the atmosphere. This very high latent heat of vaporization is also vital biologically because at body temperature or above, the only way for a person to dissipate heat is to sweat it off.

    Water’s remarkable capabilities are definitely not only thermal. A high vapor tension allows air to hold more moisture, which enables precipitation. Water’s great surface tension is necessary for good capillary effect for tall plants, and it allows soil to hold more water. Water’s low viscosity makes it possible for blood to flow through small capillaries. A very well documented anomaly is that water expands into the solid state, which keeps ice on the surface of the oceans instead of accumulating on the ocean floor. Possibly the most important trait of water is its unrivaled solvency abilities, which allow it to transport great amounts of minerals to immobile organisms and also hold all of the contents of blood. It is also only mildly reactive, which keeps it from harmfully reacting as it dissolves substances. Recent research has revealed how water acts as an efficient lubricator in many biological systems from snails to human digestion. By itself, water is not very effective in this role, but it works well with certain additives, such as some glycoproteins. The sum of these traits makes water an ideal medium for life. Literally, every property of water is suited for supporting life. It is no wonder why liquid water is the first requirement in the search for extraterrestrial intelligence.

    All these traits are contained in a simple molecule of only three atoms. One of the most difficult tasks for an engineer is to design for multiple criteria at once. … Satisfying all these criteria in one simple design is an engineering marvel. Also, the design process goes very deep since many characteristics would necessarily be changed if one were to alter fundamental physical properties such as the strong nuclear force or the size of the electron. [[“The Coherence of an Engineered World,” International Journal of Design & Nature and Ecodynamics, Vol. 4(1):47-65 (2009). HT: ENV.] >>

    And,

    >> The explanation has to do with fusion within stars. Early [[stellar, nuclear fusion] reactions start with hydrogen atoms and then produce deuterium (mass 2), tritium (mass 3), and alpha particles (mass 4), but no stable mass 5 exists. This limits the creation of heavy elements and was considered one of “God’s mistakes” until further investigation. In actuality, the lack of a stable mass 5 necessitates bigger jumps of four which lead to carbon (mass 12) and oxygen (mass 16). Otherwise, the reactions would have climbed right up the periodic table in mass steps of one (until iron, which is the cutoff above which fusion requires energy rather than creating it). The process would have left oxygen and carbon no more abundant than any other element. >>
    __________________
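    As an aside on the Penrose estimate quoted in the footnote to E1: the claim that the observed universe's ~10^80 atoms could not write out the fraction is a matter of simple arithmetic, sketched below.

```python
# The Penrose fraction 1/10^(10^123), written out in decimal, would
# need ~10^123 zeros; the observed universe holds only ~10^80 atoms.
zeros_needed = 10 ** 123
atoms_in_universe = 10 ** 80
# Even one digit per atom falls short by a factor of 10^43.
print(zeros_needed // atoms_in_universe)  # 10^43
assert zeros_needed > atoms_in_universe
```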

    Ask yourself about the implications of the pattern you see above, in the context of the literally dozens of parameters that are left free in the physics, or the just plain brute facts of things like the ratio of matter to antimatter, and the balance of + and – charges in the cosmos to within 1 in 10^37.

    Or, think about how the mass of the cosmos had to be to within 1 in 10^60 or so to get the expansion just right, or the cosmos we enjoy would have been impossible. That is in effect the ratio of the number of atoms in one grain of sand to the number of atoms in the observed cosmos.
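    The grain-of-sand comparison can be checked with rough numbers. A sketch, assuming a ~1 mm^3 quartz grain and the conventional ~10^80 atom count for the observable universe (neither figure is from the original):

```python
# Rough check of the grain-of-sand comparison.
# Assumptions: ~1 mm^3 quartz (SiO2) grain, density ~2.65 g/cm^3,
# molar mass ~60 g/mol, 3 atoms per SiO2 formula unit,
# ~10^80 atoms in the observable universe.
AVOGADRO = 6.022e23
grain_mass_g = 2.65e-3                 # 1 mm^3 at 2.65 g/cm^3
moles = grain_mass_g / 60.1            # moles of SiO2
atoms_in_grain = moles * 3 * AVOGADRO  # ~8e19 atoms
atoms_in_universe = 1e80
ratio = atoms_in_grain / atoms_in_universe
print(f"atoms in grain ~ {atoms_in_grain:.0e}, ratio ~ {ratio:.0e}")
# ratio ~ 1e-60: the grain's atoms stand to the universe's atoms
# roughly as 1 stands to 10^60
assert 1e-62 < ratio < 1e-58
```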

    GEM of TKI


  5. 5
    second opinion says:

    I read your link but I fail to see the relevance to what I’m saying. I am simply pointing out that fine tuning and ID don’t go well together. I think from this post it should be pretty clear what I mean.

  6. 6
    DrBot says:

    KF,

    I have carefully considered these issues; my point about frames of reference still stands, but if you feel offended or threatened by it then I am happy to withdraw from the debate.

  7. 7
    vjtorley says:

    kairosfocus

    Thanks for this in-depth analysis. The “fly on the wall” analogy is certainly apposite here. Great article!

  8. 8
    kairosfocus says:

    Dr Bot and 2nd O:

    I see your remarks.

    There are some substantial issues on the table that need to be addressed substantially. So, I am not really satisfied to see how you are waving off the issues highlighted by men like Sir Fred Hoyle.

    Maybe, I need to be direct.

    Astrophysics and related cosmology are origins science done right, with serious models, backed up by abundant observational evidence.

    A massive molecular cloud collapsing into an H-ball under gravity and heating up will credibly ignite H fusion, and depending on the balance of mass, we get the patterns on the H-R diagram, which in turn are supported strongly by the evidence of clusters, which show the main sequence die-off [as I illustrate here in my discussion on the H-R diagram].

    There is a reason why I insisted on drawing Fig G.4 myself, to integrate the key observational details.

    In short the physics is there, the dynamics are adequate, and there are oodles of observational anchor points. (By sharpest contrast, my central concern for the OOL and origin of body plan level biodiversity cases, is that the suggested dynamics are simply not adequate to sustain the process of origin of functional coded information and integrated executing machinery.)

    And notice, my focus is not on cosmological constants or the like, but on just plain ordinary water. Water is based on H and O, and has pivotal properties for terrestrial planets and for life. Those properties are shaped by the core physics of the cosmos, and the abundances are again shaped by that same core physics.

    Add in C, with that tellingly close resonance, and you are 3/4 way there to life chemistry.

    Water is the acid test case study for fine tuning. It would be a designer’s nightmare to get a simple entity with such a cluster of interdependent and critical properties.

    And Carbon, the side-kick to water, is the connector block molecule of life.

    Put that with a physics that generates H, He [a nuclear building block nucleus], C and O as the four most abundant atoms in the universe.

    I smell exquisite, elegant cosmos-scale design there, as someone who has had to design systems in his time. Too many convenient coincidences.

    Waaaay too many.

    When I therefore see the balking, selectively hyperskeptical reaction to the case where origins science is done right, and contrast the credulity that so often happens when it is mushy at best, that flips up several warning flags for me.

    That is not a case of my taking umbrage, it is a case of my getting diagnostic on what seems to be going on.

    GEM of TKI

  9. 9
    second opinion says:

    Sorry kairosfocus to be so blunt, but you are not even taking note of what I am saying. I am not claiming that the universe is not fine tuned. The only thing I am saying is that a fine tuned universe and ID are incompatible. And that is because the same fine tuned processes or water molecules or whatever that are necessary to sustain life are the very same that are necessary to produce and to evolve life.

  10. 10
    kairosfocus says:

    2nd O:

    Pardon, I do not know if I was clear enough before: the claim “a fine tuned universe and ID are incompatible” is so far off from the balance of informed views on the matter [Hoyle — a Nobel-equivalent prize holder — is just primus inter pares], that it needs to be justified in detail.

    When for instance, you make claims like: “that is because the same fine tuned processes or water molecules or whatever that are necessary to sustain life are the very same that are necessary to produce and to evolve life,” that is an assertion, not a substantiation. More to the point, it highlights the question of the empirically supported causal factor that best explains functionally specific, complex organisation and associated information: intentional, intelligently directed configuration.

    The same issue holds for your remark in the other thread:

    Even if you demonstrate by other means that the laws of nature are insufficient to produce life or evolve life than you could only conclude that the fine tuning must be limited thus this limited fine tuning in turn can not become a positive argument for ID.[12, death of fine tuning thread]

    It seems to me that the underlying problem — pardon my being direct, I have to be specific — is that you have not adequately examined the empirically substantiated patterns of cause and their characteristic signs. Law is not the only substantiated cause, and if you mean mechanical necessity, that accounts only for low-contingency outcomes. High contingency comes from chance and/or design, and these have sharply distinct characteristics.

    Perhaps by law you mean to incorporate necessity and chance, but that does not account adequately for functionally specific, complex organisation and associated information. Your car is not a spontaneous outcome of chance and necessity in a junkyard, say by a tornado hitting it.

    For instance, the text in this thread is a case of FSCI, and is best explained on intelligent design. By direct contrast, a chaotic string of characters like — kifwijuggueh93ebykhgkftonfoibder — is easily enough explained on chance.

    So, the key issue is whether the cosmos and life in it show the distinctive feature best explained by intentionally and intelligently directed configuration: functionally specific, complex organisation and associated information.

    Thus, your hinted-at explanation misses the mark on the difference between, on the one hand, an observationally anchored physics of stars that can account for the formation of O, and a cosmology that accounts for the origin of H per a big bang — with evident fine-tuning — and, on the other, the gaping gap in accounting for the origin of biological information at the origin of life and the origin of body-plan level biodiversity.

    When it comes to the question of why that fine tuning is most credibly accounted for on design, let me lay out a few steps [going beyond mere links], to see if that summary will help:

    _________________

    1 –> the Hubble discovery of cosmological red shift scaling with distance points to an origin of our cosmos, now generally estimated at 13.7 BYA.

    2 –> Thus [on basic logic of cause, cf how a fire is based on the factors heat, fuel and oxidiser: each necessary, jointly sufficient], our cosmos, having a beginning, is contingent and is caused.

    3 –> Contingency of the observed cosmos, and of matter as we observe it — the latter is per E = m*c^2, inter-convertible into energy and so is contingent as well — implies at least one underlying necessary and external [as being self-caused would be a logical absurdity: that which hath as of yet no existence, plainly can have no causal power] causal factor.

    4 –> that is, we are pointing to an external factor that if “switched off” would block the contingent from emerging, and if “switched on” will be an enabling factor. [Think of what happens if there is no fuel in your car: its engine, a combustion device, cannot start.]

    5 –> One well-established way to look at the source of cause is to observe that there are three main patterns of causal factors: mechanical necessity; stochastic, credibly undirected contingency (chance); and intelligently and intentionally directed configuration (design). As explained in detail here, each leaves characteristic signs.

    6 –> In summary, necessity leads to low contingency, so it is not a credible explanation for the aspects of a phenomenon, process or object that are highly contingent. (Having an origin at a specific time is a contingent aspect of our observed cosmos, and the relevant atomic matter in it has an origin that is subsequent to that singularity.)

    7 –> High contingency (per massive observation) traces to chance and/or design. Of these, chance is associated with stochastic patterns, and is dominated by probability distributions such that the overwhelming trend is to be in observable macrostates that carry statistical weights that greatly exceed unusual macrostates. This, for instance, is the explanation for the second law of thermodynamics. E.g. it is logically possible for the O2 molecules in the room where you sit to spontaneously unmix and rush to one end, leaving you gasping, i.e. we would see an unmixed macrostate. However, the sheer number of mixed microstates [detailed, specific distributions of mass and energy across the air molecules] so overwhelms the number of unmixed microstates that this would not credibly be observed spontaneously even once in the history of the observed cosmos.

    7b –> So, if you see a room with the O2 molecules separated out, you have excellent reason to infer that the unusual and counter-flow condition is deliberately occasioned, not a spontaneous observation. [For more details cf Appendix 1 of my always linked, through my handle in the LH column.]

    8 –> That is, we here see a simple case of how presence at isolated islands of function in a large enough config space, is a strong sign of intentionally, intelligently directed configuration, aka design. (1,000 yes/no decisions to get to the relevant island is enough to overwhelm the spontaneous search capacity of our observed cosmos: 1.07*10^301 states vs 10^150 possible Planck-time states of the 10^80 atoms of our cosmos, across a lifespan of 50 mn times the usual estimated duration since the big bang.)

    9 –> That is, we are at the point highlighted by Wicken in 1979: functionally specific, complex organisation is in our observation the product of direct design. And, as Marks and Dembski have shown in their recent cluster of glorified common sense papers, for selective processes to reach such states, intelligent guidance is the only credible source, through active information that overwhelms the deep isolation of islands of organised function. These are absolutely general considerations.

    10 –> Our observed cosmos is a contingent, complex, functional entity with finely tuned conditions to achieve facilitation of C-chemistry, cell based, intelligent life, even through a multiverse [cf Leslie and his Fly on the Wall analogy].

    11 –> Unless a necessary being [i.e. one that is self-sufficient and independent of “switch on/off” external causal factors . . . in the days of the now defunct Steady State cosmology, this was imagined to be the observed cosmos, but that collapsed with the discovery of the 2.7 K background blackbody/cavity radiation from the Big Bang] with the requisite capacity is shown to be IMPOSSIBLE [and no-one has as of yet succeeded in showing such to be impossible], on cumulative inference to best explanation, this strongly points away from being a likely outcome of chance configs, to origin in a necessary being with the purpose, knowledge and power to effect such a cosmos. This is not the God of theism, but it is a cosmic architect — where the science [yes, science] aided by logic, points. (NB: The linked goes onwards from there to the worldview level import of our finding ourselves to be radically contingent, conscious, minded, enconscienced, morally governed creatures. That is warranted on inference to best explanation, but goes beyond the province of natural science.)

    12 –> By way of utter contrast, and in tight summary of the already linked: the attempt to infer that the complex, organised function of C-chemistry, cell based life is a spontaneous chance + necessity occurrence founders on the deep isolation of relevant islands of function. Origin of body plans similarly requires increments of functional organisation and information that are again simply not credible on the dynamics of chance and necessity.

    13 –> That is, on inference to best explanation of complex, highly organised functional configurations, the origin of our observed cosmos, of C-chemistry cell based life, and of embryologically feasible body-plan diversity from pond scum to us (a crucial test: even after the presumed initial feasibility estimates for viability, some 15% of gene knockout mice experiments end in spontaneous failure in the womb, and such exercises cost a lot more than the ordinary way to make baby mice with enthusiastic, unskilled, quite willing and happy labour!) are all best explained by design.
    _______________
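    The configuration-space arithmetic in step 8 above can be checked directly. A sketch: the 50-million multiplier, 10^80 atom count, and 13.7 Gyr age are taken from the text; the Planck time of ~5.39e-44 s is the standard value.

```python
import math

# 1,000 yes/no decisions span 2^1000 configurations.
configs = 2 ** 1000
print(f"2^1000 ~ 10^{math.log10(configs):.1f}")  # ~10^301.0, i.e. ~1.07e301

# Upper bound on states the observed cosmos could sample:
# 10^80 atoms, each changing state once per Planck time,
# over 50 million times the ~13.7 Gyr since the big bang.
planck_time_s = 5.39e-44
age_s = 13.7e9 * 365.25 * 24 * 3600   # ~4.3e17 s
planck_ticks = age_s / planck_time_s  # ~8e60 ticks per atom
total_states = 1e80 * planck_ticks * 50e6  # ~4e148, under 10^150
print(f"total states ~ 10^{math.log10(total_states):.0f}")
assert configs > 10 ** 150 > total_states
```

    So the 1.07*10^301 and 10^150 figures in step 8 are consistent with each other on these assumptions.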

    Kindly, notice the links and steps, i.e. I have given substantiating details for my claims; to date, you have not.

    Even a solid link or two would do. [Just copy and paste the URL from your browser’s address window.]

    For preference, I would appreciate a summary on points, even just a bullet-point list of topics. Kindly, focus on your inference to the best, empirically anchored explanation of the origin of functionally specific, complex organisation and associated information.

    Otherwise, what you are saying begins to come over as selectively hyperskeptical and dismissive of the actual balance of warrant on observation and theoretical explanation.

    G’day

    GEM of TKI

  11. 11
    gpuccio says:

    second opinion:

    Well, I have read your linked post, and I don’t see how that supports the view that “The fine tuning of the universe is actually an argument against ID”.

    Let’s see. You present three possibilities.

    “1) The universe is not fine tuned. This would not be too bad for ID (the intelligent design of biological life). ID could still be true.”

    Well, the conclusion is certainly true, but the premise is, IMO, wrong. It depends on whether or not one accepts the fine tuning arguments. I do accept them.

    “2) The universe is fine tuned. In this case the laws of nature are fine tuned not only to make life possible and to sustain life but also to produce life and to evolve life.”

    This is simply not true. It would correspond more or less to a TE position. I believe that all ID arguments falsify this scenario.

    “3) The universe is fine tuned but only to a certain degree. It is only fine tuned to make life possible and sustain life but not to produce or evolve life. This could very well be.”

    And this is what I believe to be true.

    But you go on:

    “But the problem is that this only becomes an argument in favor of ID if you can demonstrate that the fine tuning is actually limited.”

    Yes, and so? We can demonstrate that. The ID theory is exactly about showing that the laws of nature are not able to generate biological information without the active intervention of a designer.

    And then you say:

    “Even if you demonstrate by other means that the laws of nature are insufficient to produce life or evolve life”…

    which is exactly what ID does…

    “than you could only conclude that the fine tuning must be limited thus this limited fine tuning in turn can not become a positive argument for ID.”

    Why? Here I don’t follow you any more.

    Let’s try to be clear.

    The fine tuning arguments and the ID arguments (let’s call them “the biological ID arguments”, to avoid confusion) are different and independent. As you correctly say, one set of arguments could be true without logically implying the truth of the other.

    The reason for that is that they start from different empirical observations, and apply forms of reasoning which have striking similarities, but are anyway formally different.

    So, we agree on the independence of the two sets of arguments.

    But that does not mean that the conclusions are completely independent.

    What I mean is: both sets of arguments argue for design and a designer, the first for a designer of initial conditions of the universe, the second for a designer of biological information inside the universe and its history.

    Now, that’s quite a connection. Both sets of arguments conclude for the central importance of consciousness and intelligent design to explain reality.

    So, if we could conclude that design is central to the existence of the universe as we know it, how can you affirm that such a notion is not a strong general support to the independent conclusion that design is central to the existence of life as we know it?

    After all, both philosophy and science are about understanding general principles which explain what we empirically observe.

    Two independent sets of strong arguments for design as the most important shaping cause of reality are certainly a very good support to the general design approach to reality.

  12.
    DrBot says:

    KF,

    As I tried to elucidate, I have no objection to the idea of fine tuning and find the evidence compelling – there are however counter arguments that are worth considering – namely that the things we do not yet know (and the things we perhaps cannot know) could turn what we do know on its head.

    I am not ‘waving off’ the arguments, just lightly suggesting alternatives to be considered as part of a civilized and informal discussion. It is not clear from your responses whether you understood my points, though, as you don’t seem to have addressed them.

    If you disagree with these arguments that is fine, you don’t even have to respond, but I do object to the rude and patronizing manner of your response. I fully appreciate and respect your faith in the veracity of the fine tuning arguments, but please try and show some respect to others who are not as convinced by engaging with their arguments with charity, rather than employing rhetorical dismissals such as the selective hyper-skepticism slander.

    I have faith in God, but I make a habit of being skeptical about everything else – that is a good thing for a scientist to be – but if you are unable to deal with people holding different positions to your own (or just playing devil’s advocate in order to tease out some important issues) without recourse to uncivil rhetoric then there is little point in me continuing what I thought were going to be some enjoyable and polite discussions on a topic of great interest to me.

  13.
    kairosfocus says:

    GP:

    Excellent thoughts as usual.

    You will see from my outline above, and the major post on the subject here [the points raised in this need to be cogently addressed by objectors to ID], that I point to the central importance and significance of the inference to design, in the context of the three known causal factors.

    In that context, we can see, for instance:

    2 –> As an illustration, we may discuss a falling, tumbling die:

    Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance.

    But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!

    [Also, the die may be loaded, so that it will be biased or even of necessity will produce a desired outcome. Or, one may simply set a die to read as one wills.]

    The three factors are independent the one of the other, and are marked by contrasting characteristic outcomes. Whether or not the fine-tuning of the cosmos is limited, if the cosmos shows evident signs of being on a narrow island of function in a very large space of possible configs, that is sufficient to strongly point to design of the cosmos.

    And, we see on the subject of this thread, how in trying to object to the finely tuned value of the Cosmological Constant, professor Page ended up proposing another similarly finely tuned value. We are talking here of variations well within 1 part in 10^120.
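    As a purely illustrative sketch of what a window that narrow means (assuming, for the toy model only, that the constant could have ranged uniformly over a unit-width interval, an assumption the fine-tuning literature itself debates), one can compare it to a more familiar long shot:

    ```python
    from fractions import Fraction

    # Hypothetical toy model: a life-permitting window ~1 part in 10**120
    # of an assumed unit-width range of possible values.
    window = Fraction(1, 10**120)

    # A familiar long shot for comparison: blindly picking one pre-marked
    # atom out of the ~10**80 atoms of the observable universe.
    one_atom = Fraction(1, 10**80)

    # The window is narrower by a further factor of 10**40.
    assert window < one_atom
    assert one_atom / window == 10**40
    ```

    On those toy assumptions, hitting the window by a single uniform draw is forty orders of magnitude less likely than winning the one-atom lottery.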

    Similarly, as we look at the key H2O molecule, we find that its many crucial characteristics for C-chemistry cell based life, and the prevalence of both C and O in the cosmos, pivot on fundamental physical constants and conditions, which are co-tuned. As Sir Fred Hoyle said (and as was already highlighted at 4 above):

    From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12 C to the 7.12 MeV level in 16 O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16]

    Similarly, and as also cited, he noted on the stellar furnaces that forge the O and C atoms that with H make the core foundation for the chemical basis of life:

    I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [[“The Universe: Past and Present Reflections.” Engineering and Science, November, 1981. pp. 8–12]

    In short, Sir Fred as primus inter pares [here I nod to your esteemed ancestors], is pointing out that we have a pivotal set of atoms, that require some carefully balanced factors to make them form and form in adequate abundance in the cosmos as a whole. It turns out that these two atoms are — apart from H and He, the most abundant in the cosmos, and their abundance pivots on two closely matched resonances.

    The underlying factors are built into the underlying physics of the cosmos. And that subtle balance of physics then paves the way for the miracle molecule H2O. Extreme function, pivoting on finely tuned physics of the cosmos.

    Going beyond that, the overall pattern of a cosmos in which we can get terrestrial planets like ours in galactic habitable zones for spiral galaxies, runs into dozens and dozens of specific parameters and factors, cumulatively requiring astonishingly precise settings for the origin of the cosmos and its physics. In short, we have discovered Robin Collins’ biosphere that is carefully set up and balanced at an operating point for life: our planetary home.

    Highly specific and finely tuned function again.

    Not a credible product of undirected chance, even through a multiverse model, as was pointed out by Leslie and Collins: the lone fly on a stretch of wall swatted by a bullet, and the need for setting up a cosmos bakery to get nicely turned out loaves, not burned hockey pucks.

    Such, as has been explained in previously linked materials, and as outlined above, is not a credible product of chance plus necessity without direction.

    When it comes to origin of life, the same basic issue obtains: whence, finely tuned, complex and functionally specific organisation?

    There is a lot of blind faith that in some still little pond struck by lightning, life will spontaneously organise itself. Sorry, not well warranted.

    Life in the cell embeds a von Neumann self-replicator [vNSR], one that targets creation of a metabolising automaton using a digitally coded, stored blueprint that works with a cluster of effecting machines. The code is already FSCI, but the integration of metabolising automaton and vNSR is decisive:

    ___________________

    >> Now, following von Neumann generally (and as previously noted), such a machine uses . . .

    (i) an underlying storable code to record the required information to create not only (a) the primary functional machine [[here, for a “clanking replicator” as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility;

    (ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with

    (iii) a tape reader [[called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling:

    (iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by

    (v) either:

    (1) a pre-existing reservoir of required parts and energy sources, or

    (2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.

    Also, parts (ii), (iii) and (iv) are each necessary for and together are jointly sufficient to implement a self-replicating machine with an integral von Neumann universal constructor.

    That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [[Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]

    This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources.

    Immediately, we are looking at islands of organised function for both the machinery and the information in the wider sea of possible (but mostly non-functional) configurations.

    In short, outside such functionally specific — thus, isolated — information-rich hot (or, “target”) zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation. So, once the set of possible configurations is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature. >>
    ___________________
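    The architectural point about a stored self-description can be glimpsed in miniature in a software quine, a program whose output is its own source text. This is only a toy analogue (it replicates a description, not a metabolising machine), offered to make the "coded blueprint/tape" idea concrete:

    ```python
    # A minimal quine: the string s is the program's own "tape".
    # Substituting s into itself reproduces the full source text.
    s = 's = %r\nprint(s %% s)'
    print(s % s)
    ```

    Running it prints its own two lines verbatim; remove either the stored description or the rule for reading it and the self-reproduction fails, a toy echo of von Neumann's tape-plus-constructor pairing.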

    The only known and credible source of codes is an intelligent, language-using designer. The only known and credible source of algorithms and programs is similarly an intelligent programmer. The only empirically credible source of the communication protocols and integrated elements to make a digital tape work as a signal in a communication system is a communication engineer. The only credible source of a composite system of machines that through step by step information controlled processes creates objects under the control of a digital code is an engineer. The only known and empirically credible source of a flexible programmed digitally controlled machine is an engineer.

    So, when we find all of these integrated in the so-called simple living cell, the best explanation is not laws of mechanical necessity and chance circumstances and processes, but design. [Cf the thought exercises here and here.]

    Even something so comparatively simple as finding all the O2 molecules in a room at one end is best explained on design!
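    The O2 illustration can be made quantitative with a back-of-envelope sketch (assuming, as in the standard statistical-thermodynamics treatment, independent molecules each equally likely to sit in either half of the room):

    ```python
    import math

    # Probability that N independent molecules all occupy one chosen half:
    # each has probability 1/2, so the joint probability is (1/2)**N.
    def p_all_one_side(n_molecules):
        return 0.5 ** n_molecules

    # Even a laughably tiny sample of 1,000 molecules is already hopeless:
    p = p_all_one_side(1000)
    print(math.log10(p))  # about -301, i.e. 1 chance in ~10**301

    # A real room holds on the order of 10**27 molecules, so the exponent
    # would be around -3 * 10**26: an operational impossibility.
    ```

    The point is not that the configuration is strictly impossible, but that its probability collapses so fast with N that observing it would overwhelmingly warrant a non-chance explanation.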

    So, I must challenge 2nd O’s concept that “laws of nature” are sufficient to account for cell based life and body plan level biodiversity.

    Similarly, such laws of nature in some substratum cosmos bubbling up sub-cosmi at random, which somehow have a distribution of random parameters that just happen to search the zone of cosmological configs where we sit with an exceedingly fine grid, do not begin to adequately explain our observed cosmos. The operating point is too isolated, and too sensitive — recall, the single parameter being discussed in the main for this thread is a question of 3 or 4 parts in 10^-122.

    The exquisite fineness of the life-permitting range should already be telling us something. And this, on just one parameter.

    By the time he was finished, Penrose was talking of finding 1 cell in a phase space of 10^[10^123] cells. We could not even write that number fully out with the resources of the whole cosmos.
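    The "could not even write it out" claim checks out with simple integer arithmetic: writing 10^(10^123) in decimal requires 10^123 + 1 digits, while the observable universe contains only about 10^80 atoms (the usual rough estimate, not a precise datum), so even one digit per atom falls short by over forty orders of magnitude:

    ```python
    # Digits needed to write N = 10**(10**123) out in full decimal notation:
    digits_needed = 10**123 + 1

    # Rough standard estimate of atoms in the observable universe:
    atoms = 10**80

    # One digit per atom is nowhere near enough:
    assert digits_needed > atoms
    shortfall_exponent = 123 - 80  # short by a factor of ~10**43
    print(shortfall_exponent)  # 43
    ```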

    Nope, the evidence strongly supports that the best explanation for isolated islands of function, whether at cosmological or biological scales, is design. And not merely by assertion, but on empirical evidence.

    Evidence that needs to be faced and squarely addressed.

    GEM of TKI

  14.
    kairosfocus says:

    Dr Bot:

    Pardon, I have made no ad hominems, but have addressed substantial issues and called you to do the same.

    For, when one exerts an inconsistent degree of warrant for claims one is willing to accept and those one is evidently inclined to dismiss [even as a devil’s advocate], that is selective standards, and when there is good reason to see that the warrant for the rejected claims is reasonable relative to the possible and relevant degree of warrant, dismissal is hyperskeptical.

    So, pardon a direct rebuttal: the attempt to suggest that terms like “selective hyperskepticism” or “Cliffordian evidentialism” are abusive, is atmosphere poisoning rhetoric on your part, not mine.

    Yes, there is no shortage of evolutionary materialists out there who use red herrings, strawmen and ad hominems habitually, to reinforce a pattern of inconsistency in standard of warrant. The proper response to that, with all due respect, is to correct, not to mimic [on the assumption of devil’s advocacy].

    For, the pattern is inherently atmosphere clouding, poisoning and polarising.

    And, I therefore submit that to point out the rhetorical steps being used, to point out their objectively distractive and fallacious nature is not abusive but corrective. I have spoken to objectively verifiable verbal acts, not to persons.

    Now, at 3 you made some general remarks, and at 4 I asked you to substantiate.

    To date you have not, nor does it seem you intend to.

    I have long since spent a fair amount of time laying out why factors like chance and luck are not adequate to account for the fine-tuned operating point of the cosmos, and for the functionally specific complex organisation of life. Or, for FSCI and related complex organisation in general.

    My remarks above were premised on the accessibility of that foundational discussion.

    I did not try to specifically, step by step rebut on points in your at 3 [as I thought the general response and prior work was adequate], but plainly that is now necessary given your unwarranted accusations just above.

    Let me clip and comment on labelled points:

    ____________________

    I’ve never been a big fan of fine tuning arguments myself, neither the ‘for’ or ‘against’ ones,

    a –> a personal opinion

    mostly because I just don’t think we really know enough to make any sensible statements.

    b –> I specifically directed attention to the case of water, which is a matter of what we do know, in terms of its function, and in terms of the credible roots of the required Oxygen atoms to bond with H

    Firstly, there is no accounting for luck!

    c –> on the contrary, there is a whole field of mathematical study of chance

    d –> the field of statistical thermodynamics explores this at molecular and atomic levels

    so although our universe might be improbable, it could also be just that – a chance event.

    e –> the issue is an inference to best explanation, on empirical and analytical warrant. As shown in detail here, “chance” is not an adequate or cogent explanation for functionally specific complex organisation and associated information

    It is a weak argument against fine tuning though, but one that is universal and unavoidable – maybe we really did just beat the odds! – Because this is an argument you can apply to anything, it is not worth considering for long so I won’t say any more on it.

    f –> For good reason as just highlighted and linked, you abandon this line, having suggested it and thereby planted the “last resort” notion that we can always attribute outcomes to chance, an inference that would be incorrigible.

    The second thing is slightly more interesting and is about frames of reference. We can look at cosmological constants and speculate about what might have been if they were different but there may be much more ‘higher’ structure to the universe,

    g –> the inference to super-law or cluster of super-laws of necessity at the level of the observed cosmos, simply shifts the fine-tuning up one level as was specifically addressed in the linked discussion on cosmological fine tuning

    h –> More to the point, the focal issue between us is not cosmological constants [though the thread’s OP is on correcting a dismissal of the CC by inferring to another possible, equally fine tuned value], but something we do know a lot about that is firmly within the observed cosmos: WATER

    i –> Which in turn takes us straight to the requisites to set up a cosmos in which C and O are of the kind of relevant abundance as has been discussed.

    or that the universe depends on which we can’t see or measure,

    j –> Water is seen, is measurable, and its constituents are observable.

    k –> On a lot of stellar observation and relevant physical analysis, we have good grounds for inferring the source of C and O in stars, and the significance of the relevant resonances — which are written into the laws of physics, and thus demonstrate a key case of fine tuning.

    and which actually requires a constant to be the way it is

    l –> inference to super-law or cosmos-programming, i.e. legislating (or, in more modern terms, programming) the physics of the cosmos. In either case the fine-tuning is still preserved and points to the known source of complex, functionally specific organisation: design.

    – in other words from a different frame of reference (including for example Gods) a cosmological constant may be at the value it is because it can hold no other value

    m –> Super-law again, as already addressed, and as was sitting in the linked IOSE discussion all along. That a local law or parameter is or “must” be set at a given value points to a purpose that requires that value.

    [ . . . ]

  15.
    kairosfocus says:

    – it is not a parameter to be tuned, it just looks like that from our perspective.

    n –> Question-begging. If there is a cluster of super-laws that forces parameters etc. to certain values in our cosmos, that goes to the next level: whence and why those functionally specific super-laws that drive the organisation of our cosmos and put it at an operating point that facilitates C-chemistry, cell based, intelligent life?

    In this sense some elements of our universe that appear fine tuned may not be in reality (i.e. the concept is meaningless) so trying to estimate the probability of them being at a certain level is also meaningless.

    o –> In effect you are trying to conclude that by promoting fine tuning up to a higher level through super-laws, you have got rid of fine tuning. This is a turtles all the way down infinite regress.

    p –> It is cut away at one stroke by Occam’s razor: hypotheses are not to be multiplied without necessity.

    q –> There is another, simpler, empirically relevant explanation: we routinely and only observe functionally specific, complex organisation and associated information being caused by intelligence, and we have a good analysis on islands of function in vast config spaces that points to design as superior to chance, where also, high contingency is simply not reasonably explained in the end by mechanical necessity.

    r –> this is a commonplace of scientific reasoning, so why the resort to turtles all the way down, apart from aversion to the possibility of design as a superior explanation of FSCO/I — there, got a new abbreviation?

    Without a deeper understanding of our universe we can’t yet answer some of these questions so my position is simply to say, we just don’t know enough to be certain yet,

    s –> Red herring led, again, away to a strawman: whether in the IOSE or in the thread above, we have highlighted WATER, which is a case where we DO know, and which points straight to the hearts of stars and the cosmological foundational physics that makes H, He, C and O the four most abundant elements, and the elements that are most relevant to making a cosmos suitable for C-chemistry, cell based life. (He makes the list as it is a key nuclear building block, despite its chemical non-reactivity.)

    although the inferences we can make based on what we DO know are compelling –

    t –> This is the point long since raised by Locke in the beginnings of his essay on Human understanding, Intro Section 5:

    Men have reason to be well satisfied with what God hath thought fit for them, since he hath given them (as St. Peter says [NB: i.e. 2 Pet 1:2 – 4]) pana pros zoen kaieusebeian, whatsoever is necessary for the conveniences of life and information of virtue; and has put within the reach of their discovery, the comfortable provision for this life, and the way that leads to a better. How short soever their knowledge may come of an universal or perfect comprehension of whatsoever is, it yet secures their great concernments [Prov 1: 1 – 7], that they have light enough to lead them to the knowledge of their Maker, and the sight of their own duties [cf Rom 1 – 2 & 13, Ac 17, Jn 3:19 – 21, Eph 4:17 – 24, Isaiah 5:18 & 20 – 21, Jer. 2:13, Titus 2:11 – 14 etc, etc]. Men may find matter sufficient to busy their heads, and employ their hands with variety, delight, and satisfaction, if they will not boldly quarrel with their own constitution, and throw away the blessings their hands are filled with, because they are not big enough to grasp everything . . . It will be no excuse to an idle and untoward servant [Matt 24:42 – 51], who would not attend his business by candle light, to plead that he had not broad sunshine. The Candle that is set up in us [Prov 20:27] shines bright enough for all our purposes . . . If we will disbelieve everything, because we cannot certainly know all things, we shall do muchwhat as wisely as he who would not use his legs, but sit still and perish, because he had no wings to fly. [Emphases added. Text references also added, to document the sources of Locke’s biblical allusions and citations.]

    I just prefer to try and avoid getting carried away when inferences imply that what I want to believe is actually true 😉

    u –> It is appropriate to point to the provisionality or limitations of human reasoning, in science or elsewhere, but to go beyond that to complain of not having daylight when we have candlelight enough, is precisely the selective hyperskepticism issue already pointed out.

    v –> Beyond a certain point, playing at devil’s advocate [if that is what you imply by the smiley], reaches reductio ad absurdum, and becomes counter-productive, especially in a context where there is a veritable plague of self-referentially inconsistent selectively hyperskeptical thought in our day on this and linked subjects.

    w –> As you will see from the case of 2nd O, devil’s advocacy [if that is what it is] ends up giving confusing talking points to those who struggle to understand that we routinely and confidently infer to design on evidence of FSCO/I, for excellent reason, and that by sharpest contrast with OOL and OO body plans, the physics that explains what happens in stars is well understood and empirically well supported by direct observations.

    x –> When a key aspect of that physics, the synthesis of O and C, then leads forward to the setting up of terrestrial planets that are the bases for C-chemistry, cell based life, and back to the foundational physics of the cosmos, the import of that functionally specific physics leading to the basis for life needs to be faced.

    ____________________

    I trust that the above remarks on points will come across as it is intended: specific substantiating points, in a context where details have already been linked. Where breakdowns in lines of reasoning have been highlighted, it has been to correct error, not abuse persons.

    Good day

    GEM of TKI

  16.
    DrBot says:

    My dear KF, this is getting quite bizarre:

    Pardon, I have made no ad hominems

    And I have not accused you of attacking me personally.

    You suggested that I was dismissing your arguments without warrant – being hyperskeptical about your arguments but not about others (hence ‘selective’) – this is a totally wrong and unwarranted claim on your part, I have stated that I find your arguments compelling but that I am naturally skeptical of all (including my own).

    An ad hominem would be an attack on me personally, not a claim that I was being evasive in my arguments.

    You seem to be demanding that I reserve my skepticism for arguments other than your own – I hope this is not the case.

    Now, at 3 you made some general remarks, and at 4 I asked you to substantiate.

    To date you have not, nor does it seem you intend to.

    I’m not quite sure what you mean by this. I made two comments:

    We can never account for luck – this is self evidently true but of little value to the debate – it could be that the repeated results from ANY scientific experiments were actually a fluke, but in all probability they are not. Which is why I mention it only in passing. We could just be lucky, but in all probability we were not.

    My other comment is of a similar nature but a little more specific: we do not know everything about the universe, and we might not ever be able to know. These facts will always leave open the possibility that we have failed to understand something about the nature of the universe which has a bearing on how we calculate the probabilities of cosmological constants. We may in fact be underestimating how improbably fine tuned the universe is.

    They are just comments made in passing, not claims, just idle thoughts on the matter. You are free to dismiss them if you like but you seem to be reading far more into them, re my intentions and beliefs, than I intended when I made them. They are not claims, just thoughts so why should I need to substantiate them any further?

    I have long since spent a fair amount of time laying out why factors like chance and luck are not adequate …
    My remarks above were premised on the accessibility of that foundational discussion.

    Yes, which is why I premised my post with this: (apologies – just a note in passing ..) to indicate that I was making a comment in passing and hadn’t had time to read your extensive background material.

    I’ve never been a big fan of fine tuning arguments myself, neither the ‘for’ or ‘against’ ones,

    a –> a personal opinion

    Yes, I was offering a personal perspective on the matter and highlighted a couple of fairly insubstantial issues that are easily dismissed. I don’t understand why doing this is causing problems – I’m basically agreeing with you, just expressing my natural tendency to be cautious!

    WATER

    I AGREE!

    In effect you are trying to conclude that by promoting fine tuning up to a higher level through super-laws, you have got rid of fine tuning. This is a turtles all the way down infinite regress.

    No, that is not what I am saying – see above – Fine tuning could disappear at a higher level for some parameters, but it could also increase or appear for others. We don’t have enough information to know yet, so although we can make inferences based on what we do know there is simply a limit to the strength of any claim in this regard.

    Without a deeper understanding of our universe we can’t yet answer some of these questions so my position is simply to say, we just don’t know enough to be certain yet,

    s –> Red herring led, again, away to a strawman

    I was simply stating my personal position on the matter – you can tell me I’m wrong, that is fine, but you are accusing me of lying (that I am intentionally deploying red herrings and straw man arguments) without warrant.

    I AM NOT, please stop.

  17.
    Collin says:

    I think that Mr. Page does not distinguish between fine tuning and perfect tuning.

    Although if I played a violin and hit the right tempo to within 1 * 10^-22 seconds, I’d call that “perfect tuning.” No pun intended.

  18.
    kairosfocus says:

    Dr BOT:

    Pardon me, but kindly explain, from 12 supra:

    I do object to the rude and patronizing manner of your response. I fully appreciate and respect that your faith in the veracity of the fine tuning arguments but please try and show some respect to others who are not as convinced by engaging with their arguments with charity, rather than employing rhetorical dismissals such as the selective hyper-skepticism slander.

    When I looked back over the thread, I wondered even more where this outburst had come from. If this is what mere devil’s advocacy and a light hearted attitude/discussion are about, pardon my desire for considerably less of it. (There is an old Caribbean story about a frog talking to the small boy approaching, stones in hand: “fun fe yuh is death to me.” Please, moderate.)

    As touching selective hyperskepticism, you will notice that I made no personalisation of arguments as “mine” but instead addressed the question of specifically Cliffordian Evidentialism [most familiar in our time from Sagan’s popularised phrase about how “extraordinary claims require extraordinary [ADEQUATE] evidence”].

    You will see above how I have pointed out in this thread, in the linked discussion of the inference to design, and in the online course previously presented, our common inference on best explanation from sign to signified.

    When we have a strong pattern, like the known source of FSCO/I and the accompanying analysis of specific and isolated islands of function in configuration spaces, then the rejection or dismissal of, say, the implications of the close resonances of the C and O atoms (in the context of the significance of C and H2O for life and for terrestrial planets generally) is a case of sliding standards of warrant: easier for what one wishes to allow, and unreasonably harder for what one wishes to reject.

    In the case of 2nd O, who was clearly primarily in view in my above remarks, we can see the astonishing case of thinking that such fine tuning as exists is no warrant for inferring design at the level of the cosmos, whilst the absence of adequate dynamics for accounting for FSCO/I in OOL and OO body plans is no obstacle to accepting the idea that chance and necessity can account for them without intelligence.

    However, let me lay such to one side, in the hope that we can address substance.

    1 –> I repeat, we do account for “luck” through the mathematics of chance, and the associated patterns that show up in statistical distributions. When the outcomes are favourable, we claim to have been lucky, and when not, to have been unlucky. (But think of the difference between the consistent “luck” of skilled and unskilled anglers, and the factors that explain that difference. TGP will bear me out on this one!)

    2 –> In addition, we all routinely believe in cause and effect bonds, as we do not believe we inhabit a chaos in which things routinely happen for no reason, anywhere or out of nowhere and nothing.

    3 –> Our finitude and fallibility [as Locke highlighted] only mean that our inferences and our knowledge base are, on the whole, inevitably provisional and, one hopes, progressive. They do not warrant complaining of want of sunlight when we have candles enough for our work.

    4 –> In the case of H2O, we have a vast amount of very specific and reliable knowledge on its properties and functionality.

    5 –> Similarly, our understanding of the physics of fusion, fission and atoms is strong enough to not only build bombs [sadly] and hope to build fusion power plants [now several years out, a main problem being plasma confinement and control], but also to have an analysis of the stellar life cycles that is strongly anchored to a large body of direct — and supportive — observations.

    6 –> Thus, water is an apt case in point for our reflections on fine tuning. H, O, and stellar fusion, driven by the laws and parameters of physics and the parameters and initial conditions of the singularity.

    7 –> Super-law type explanations are in fact postponements of the fine-tuning, one level at a time; they tend to be empirically uncontrolled speculations, and are by implication turtles-all-the-way-down arguments.

    8 –> The lady in question, in her exchange with whoever the astronomer was, was in effect dismissing by successive increments. And whatever your intent, the effect on those who feed on the talking points is that they fall into the same trap.

    9 –> The Occam’s razor response and the insistence on inference to best explanation of FSCO/I are therefore well-warranted responses.

    10 –> As for red herrings and strawmen, I think on fair comment, most people who end up in that common trap do so by following tangents and inadvertently distorting opposed views and perceptions of persons.

    11 –> Ironically, the inference and accusation that I have suggested you are deliberately lying is itself a classic example.

    12 –> It also underscores just how dangerous the trifecta RdHerr –> StrMan –> AdHom is, as you have in fact implied that I indulged in a false accusation. That unnecessarily clouds the issue, confuses it, polarises the atmosphere, and poisons it.

    13 –> Devil’s advocacy that goes down the trifecta road is a very, very dangerous exercise in rhetoric. Especially when compounded by the sort of implications and insinuations that are now beginning to appear in your claims.

    ___________________

    G’day sir.

    GEM of TKI

  19.
    kairosfocus says:

    F/N: To change the tone for a fresher and more positive atmosphere, I suggest onlookers may wish to look at the compilation page here, which is linked in para 1 of the OP. Very relevant to the water issue.

    Here’s a snippet, from p. 1:

    ____________________

    >> Our “Tailor-made” Universe:
    New Scientific Study Begs the Philosophical Question, “Who’s the tailor?”

    http://www.arn.org/docs/pearce.....090200.htm

    Pick a universe, any universe. How many hypothetical universes would support life?

    Possibly only one, say the authors of a new study. Published in the July issue of Science, the report says that if the physical forces within stars were only slightly different, our universe would be almost devoid of carbon and oxygen, and life would not exist.

    The findings bring scientists face to face with the question of design. “I am not a religious person, but I could say this universe is designed very well for the existence of life,” said Heinz Oberhummer, astrophysicist at the University of Vienna, Austria.

    Mr. Oberhummer and his colleagues used computers to simulate the process by which helium burns to produce carbon and oxygen during the red-giant stage of a star’s life. They found that even slight changes in either the strong or weak nuclear force would destroy nearly all the carbon or oxygen inside stars, making life impossible.

    “The basic forces in the universe are tailor-made for the production of … carbon-based life,” Mr. Oberhummer told Space.com.

    It’s a new day when scientists who are not “religious persons” are compelled to use the language of design. Mr. Oberhummer’s discovery adds to the enormous number of “cosmic coincidences” uncovered by cosmology–intricate balances among the universe’s fundamental forces. For example, if the force of gravity were only slightly stronger, all stars would be red dwarfs, too cold to support life. If it were slightly weaker, all stars would be blue giants, burning too briefly for life to develop.

    In the atom, the mass of the neutron is delicately balanced with that of the proton; otherwise, protons would decay into neutrons, making life impossible.

    “Imagine a universe-creating machine, with thousands of dials representing the gravitational constant, the charge on the electron, the mass of the proton, and so on,” said Steve Meyer of Whitworth College. “Each dial has many possible settings, and even the slightest change would make a universe where life was impossible.” Yet each dial is set to the exact value needed to sustain life, for no known reason.

    As Mr. Oberhummer put it, “we have no idea why the strengths of the forces are fine-tuned” to support life. The reasonable answer seems to be that someone intended it that way.

    To avoid that surprising conclusion, cosmologists are scrambling to craft alternative explanations. Some adopt the “many worlds” hypothesis, suggesting that there exist an infinite number of universes. Most would be dark and lifeless, but by sheer probability a few might be suitable for life–and we happen to live in one.

    How do scientists account for these zillions of universes? Some say mini-universes crowd together within a larger universe like bubbles in foam. Others propose an oscillating universe–continually expanding, collapsing, then expanding again to form new universes with different physical laws. Strangest by far is physicist Hugh Everett’s notion that all possible states of a quantum interaction are actualized, so that slightly different versions of our universe are constantly splitting off–creating a near-infinitude of new universes at every moment.

    What’s the evidence for these other universes? There is none. By definition, they cannot be observed. Nor has anyone offered a plausible scientific explanation for how they arise. “There is no hint as to what causal mechanism would produce such a splitting,” complained philosopher John Earman–which renders it akin to a “miracle.”

    Moreover, the hypothesis violates the principle of simplicity. As Guillermo Gonzalez of the University of Washington told World, “Invoking an infinitude of unobservable universes to explain the one observable universe is a grotesque violation of Occam’s razor,” the principle that entities should not be multiplied unnecessarily.

    Other cosmologists try to explain design by a quasi-pantheistic philosophy that attributes intelligence and foresight to the universe itself. In The Fifth Miracle, Paul Davies says, “the laws of the universe are cunningly contrived to coax life into being”; they “somehow know in advance about life and its vast complexity.” This year’s Templeton prize-winner, Freeman Dyson, muses that “the universe in some sense must have known we were coming.”

    Of course, the idea of a conscious universe, or of unknowable universes sprouting like mushrooms, goes beyond science and into philosophy. This opens a new opportunity for Christians, says philosopher William Lane Craig. “Cosmology has broken down the boundary between physics and metaphysics,” he told World. “And once the door is opened to metaphysics, you can’t stop the theist from coming in the door, too.”

    If the universe appears “tailor-made” for life, perhaps the simplest explanation is that it was tailor-made. >>
    _____________________

    Tailoring is a sign strongly pointing to a tailor, or, applying our little expression:

    I: [si] –> O, on W

    Snip, snip.

  20.
    kairosfocus says:

    F/N 2:

    WLC responds to Carrier at no 20, p. 2 of the linked:

    http://www.reasonablefaith.org.....38;id=6123
    ___________________

    >> That the universe is fine-tuned for the existence of intelligent life is a pretty solidly established fact and ought not to be a subject of controversy. By “fine-tuning” one does not mean “designed” but simply that the fundamental constants and quantities of nature fall into an exquisitely narrow range of values which render our universe life-permitting. Were these constants and quantities to be altered by even a hair’s breadth, the delicate balance would be upset and life could not exist.

    Carrier is mistaken when he asserts that there are only about six physical constants in contemporary physics; on the contrary, the standard model of particle physics involves a couple dozen or so. The figure six may be derived from Sir Martin Rees’ book Just Six Numbers (New York: Basic Books, 2000), in which he focuses attention on six of these constants which must be finely tuned for our existence. But this is just a selection of the constants there are, and new constants, unknown in the 19th century, like the so-called cosmological constant, which must be fine-tuned to one part in 10^120 in order for life to exist, are being discovered as physics advances.

    In addition to these constants, there are also the arbitrary quantities which serve as boundary conditions on which the laws of nature operate, such as the level of entropy in the early universe, which are also fine-tuned for life. If one may speak of a pattern, it would be that fine-tuning, like a stubborn bump in the carpet, just won’t go away: when it is suppressed in one place, it pops up in another. Moreover, although some of the constants may be related so that a change in the value of one will upset the value of another, others of the constants, not to mention the boundary conditions, are not interdependent in this way. In any case, there’s no reason at all to suspect so happy a coincidence that such changes would exactly compensate for one another so that in the aftermath of such an alteration life could still exist. It appears that fine-tuning is here to stay.

    Now there are only three ways to account for this remarkable fine-tuning of the cosmos for intelligent life: physical necessity, chance, or design. The contemporary debate is over which of these is the best explanation of the observed fine-tuning. Carrier seems to prefer either of the alternatives to design.

    Physical necessity is the hypothesis that the constants and quantities had to have the values they do, so that the universe is of physical necessity life-permitting. Now on the face of it this alternative is extraordinarily implausible. It requires us to believe that a life-prohibiting universe is physically impossible. But surely it does seem possible. If the primordial matter and anti-matter had been differently proportioned, if the universe had expanded just a little more slowly, if the entropy of the universe were marginally greater, any of these adjustments and more would have prevented a life-permitting universe, yet all seem perfectly possible physically. The person who maintains that the universe must be life-permitting is taking a radical line which requires strong proof. But there isn’t any; this alternative is simply put forward as a bare possibility.

    Sometimes physicists do speak of a yet to be discovered Theory of Everything (T.O.E.), but such nomenclature is, like so many of the colorful names given to scientific theories, quite misleading. A T.O.E. actually has the limited goal of providing a unified theory of the four fundamental forces of nature, to reduce gravity, electromagnetism, the strong force, and the weak force to one fundamental force carried by one fundamental particle. Such a theory will, we hope, explain why these four forces take the values they do, but it will not even attempt to explain literally everything.

    For example, in the most promising candidate for a T.O.E. to date, super-string theory or M-Theory, the physical universe must be 11-dimensional, but why the universe should possess just that number of dimensions is not addressed by the theory. Moreover, M-Theory fails to predict uniquely the values of the constants of nature. It turns out that string theory allows a “cosmic landscape” of around 10^500 different universes governed by the present laws of nature but with different values of the physical constants. Moreover, even though there may be a huge number of possible universes lying within the life-permitting region of the cosmic landscape, nevertheless that life-permitting region will be unfathomably tiny compared to the entire landscape, so that the existence of a life-permitting universe [by chance] is fantastically improbable. Indeed, given the number of constants that require fine-tuning, it is far from clear that 10^500 possible universes is enough to guarantee that even one life-permitting world will appear by chance in the landscape!

    All this has been said with respect to the constants alone; there is still nothing to explain the arbitrary quantities put in as boundary conditions. The extraordinarily low entropy condition of the early universe would be a good example of an arbitrary quantity which seems to have just been put in at the creation as an initial condition. There is no reason to think that showing every constant and quantity to be physically necessary is anything more than a pipe-dream.

    So what about the alternative of chance? This is the “multiple universe” hypothesis mentioned by Carrier. The multiple universe hypothesis is essentially an effort on the part of partisans of chance to multiply their probabilistic resources in order to reduce the improbability of the occurrence of fine-tuning. (The more spins of the roulette wheel, the better the chances of your number coming up!) The very fact that otherwise sober scientists must resort to such a remarkable hypothesis is a sort of backhanded compliment to the design hypothesis. It shows that the fine-tuning does cry out for explanation. But is the multiple universe hypothesis as plausible as the design hypothesis?

    I’m not at all impressed by Carrier’s appeal to familiarity as an argument for preferring the multiple universe hypothesis. For we have no experience whatsoever of other universes—the multiple universe hypothesis is a bold venture in metaphysical cosmology. Our familiarity with our universe does nothing to warrant the appeal to other universes as familiar entities—at least not more so than the design hypothesis. For while we are likewise not familiar with designers of universes, we certainly are familiar with minds and the products of intelligent design, so that the appeal to a designer as the best explanation of the fine-tuning is an appeal to a familiar explanatory entity. Indeed, theists have sometimes been accused of anthropomorphism in this regard!

    Moreover, while we have no evidence of the existence of multiple universes, we do have independent reasons for believing in the existence of an ultramundane designer of the universe, namely, the other arguments for the existence of God, which I have defended elsewhere.

    Finally, Carrier is mistaken when he opines that we cannot know that multiple universes do not exist and therefore agnosticism is the only justified conclusion. (Interesting to compare this conclusion with the frequent atheist claim that in the absence of evidence for God we should conclude that God does not exist! Do you see the inconsistency?) He is unaware of the potentially lethal objections to the multiple universe hypothesis that have been lodged by physicists like Roger Penrose of Oxford University (The Road to Reality [New York: Alfred A. Knopf, 2005], pp. 762-5). Simply stated, if our universe is but one member of an infinite world ensemble of randomly varying universes, then it is overwhelmingly more probable that we should be observing a much different universe than that which we in fact observe.

    Penrose calculates that the odds of our universe’s low entropy condition obtaining by chance alone are on the order of 1:10^[10^(123)], an inconceivable number. The odds of our solar system’s being formed instantly by random collisions of particles is, on the other hand, about 1:10^[10^(60)], a vast number, but inconceivably smaller than 10^[10^(123)]. Penrose calls it “chicken feed” by comparison! So if our universe were but one member of a collection of randomly ordered worlds, then it is vastly more probable that we should be observing a much smaller universe. Observable universes like that are much more plenteous in the ensemble of universes than worlds like ours and, therefore, ought to be observed by us if the universe were but one random member of an ensemble of worlds.

    Or again, if our universe is but one random member of a world ensemble, then we ought to be observing highly extraordinary events, like horses’ popping into and out of existence by random collisions, or perpetual motion machines, since these are vastly more probable than all of nature’s constants and quantities falling by chance into the virtually infinitesimal life-permitting range. Since we do not have such observations, that fact strongly disconfirms the multiple universe hypothesis. Penrose concludes that multiple universe explanations are so “impotent” that it is actually “misconceived” to appeal to them to explain the special features of the universe.

    Since the alternative of chance stands or falls with the multiple universe hypothesis, that alternative is seen to be very implausible. It therefore seems that the fine-tuning of the universe is plausibly due neither to physical necessity nor to chance. It follows that the fine-tuning is therefore due to design, unless the design hypothesis can be shown to be even more implausible than its competitors. >>

    ___________________

    Snip, snip, snip . . .
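The probability figures quoted above (Penrose’s 1 in 10^(10^123), the 1 in 10^(10^60) solar-system odds, and the 10^500 landscape) are too large for direct machine arithmetic, but the comparisons become concrete if one works with base-10 logarithms. Here is a minimal sketch; the 26-constant count and the 10^-20 per-constant life-permitting fraction are illustrative assumptions for the landscape arithmetic, not figures taken from the quoted text:

```python
# Penrose's odds are double exponentials: 10**(10**123) overflows any float,
# so we manipulate the base-10 logarithms of the odds instead (Python ints
# are arbitrary-precision, so 10**123 is exact).
log10_odds_entropy = 10**123   # low-entropy initial condition: 1 in 10^(10^123)
log10_odds_solar = 10**60      # solar system by random collisions: 1 in 10^(10^60)

# "Chicken feed": the exponents themselves differ by nearly 10^123, so the
# first probability is smaller by a factor of 10^(10^123 - 10^60).
exponent_gap = log10_odds_entropy - log10_odds_solar
assert exponent_gap > 0.999 * 10**123

# Landscape arithmetic (illustrative assumptions: ~26 independent constants,
# each life-permitting over only 10^-20 of its possible range).
n_constants = 26
log10_fraction_per_constant = -20
log10_joint_fraction = n_constants * log10_fraction_per_constant  # -520

# Expected life-permitting vacua among 10^500 candidates: 10^(500 - 520).
log10_expected_count = 500 + log10_joint_fraction
print(log10_expected_count)  # -20: far fewer than one expected by chance
```

Under those assumed numbers, even 10^500 landscape vacua would be expected to contain about 10^-20 life-permitting worlds, which is the point WLC makes qualitatively.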

  21.
    DrBot says:

    KF:

    Please read the previously linked then address the issues specifically on the merits

    This was clearly a ‘slap down’ designed to put me in my place prior to a lecture that ignored both points I made. You could have simply ignored my points, or politely stated that you disagree. The slap down and the lecture were unpleasant and unwarranted.

    I said:

    Without a deeper understanding of our universe we can’t yet answer some of these questions so my position is simply to say, we just don’t know enough to be certain yet,

    You replied:

    s –> Red herring led, again, away to a strawman:

    1-> This implies that you think we can be certain – I disagree; I would say we can be almost certain.
    2-> A ‘red herring’ is a deliberate tactic designed to lead astray; a straw man is an argument deliberately constructed as a caricature of an opponent’s, which is then knocked down with the claim of defeating the opponent’s real argument.
    3-> I took these comments about herring and straw to be directed at me – they were accusations, claims that I was in some way being dishonest – that is the core nature of red herring and straw man arguments: dishonesty.
    4-> I was not being dishonest, and my comments were not red herrings or straw men. I object to the accusation, but apologise if you were not addressing them to me.

    Your latest comment at 18 re-states a long list of points that I have no objection to (and have not substantially objected to), and which underlie your premise – which I generally agree with! The fact that you have repeated these points again, in the face of my general support for them, suggests that you are not reading the substance of my posts (or that I am not writing them well enough, for which I would apologise). Because of this I will repeat myself, in the hope that you won’t feel the need to re-post this long list yet again: I find these arguments compelling and believe the inference to design is warranted – I am just cautious about ANY absolute claim.

  22.
    kairosfocus says:

    Dr Bot:

    Enough has already been said on the side issues that have come up to show the trifecta in action, and more exchanges will only add to the evident polarisation and distraction from the main issue, which is itself highly significant.

    Let us not forget: prof Page’s piece led to a stir on the web, and my observation concerned the inference to fine tuning he made on p. 5 of his paper, an inference he knew or should have known he was making, yet one not announced in the headline or abstract of his paper.

    So it was a significant point to draw attention to it.

    As touching red herrings, strawmen and ad hominems, I repeat: most often the distractions have been on tangential issues, the strawmen are based on misperceptions [often driven by expectations and apperception], and the resulting ad hominems are linked to the two.

    BTW, I should note that, contrary to your suggestion, an inference to best explanation on empirical evidence is not an absolute claim. That is a caricature, a strawman distortion of my view; but it was doubtless not deliberately projected unto me. (Never mind that I have been careful to highlight the difference between warrant on IBE and claimed absolute proof.)

    As to the deliberateness involved, it is fair comment for me to note that any time one diverts a discussion from the issues on the merits and the main theme, there is a certain willfulness involved, as one knows or should know that one is talking about a different subject. (Notice how, on a blog whose OP is on prof Page’s overstatement of his findings, we are here talking about red herrings, in a context tracing to increasingly tangential and poisoned issues.)

    Let me add, some examples from a definition page:

    # “We admit that this measure is popular. But we also urge you to note that there are so many bond issues on this ballot that the whole thing is getting ridiculous.”

    # “Argument” for a tax cut:

    “You know, I’ve begun to think that there is some merit in the Republican’s tax cut plan. I suggest that you come up with something like it, because If we Democrats are going to survive as a party, we have got to show that we are as tough-minded as the Republicans, since that is what the public wants.”

    # “Argument” for making grad school requirements stricter:

    “I think there is great merit in making the requirements stricter for the graduate students. I recommend that you support it, too. After all, we are in a budget crisis and we do not want our salaries affected.”

    None of these would likely be a calculated propaganda-style move. But they do reflect poor habits of inference and reasoning, and lend themselves to the trifecta pattern I have pointed out. They also show the involvement of the will in shifting subject.

    However, that involvement of the will is quite different from the issue of how cold-bloodedly calculated it is.

    I ask you to return to a focus on the primary issue at stake, starting with the issue in the OP.

    In that context, I will add a further footnote, that will help return focus to the primary issue.

    G’day

    GEM of TKI

  23.
    kairosfocus says:

    F/N 3: A third extract; this from no 21 in the same forum:

    ____________________

    >> The Universe Fine-Tuned for Life
    Taeil Albert Bai
    Stanford University, Stanford, CA 94305

    http://quake.stanford.edu/~bai/finetuning.pdf

    Einstein once said, “What really interests me is whether God had any choice in the creation of the world. This is a fundamental question.” Compared to this question, all other questions seem trivial. Yes, God would have had many choices if He had wanted to create a barren universe.

    However, in order to create a universe where life is possible, with the same set of natural laws as ours, it seems that He had only limited choices. According to recent findings, the values of the physical constants must have been fine-tuned to make the emergence of life in the universe possible. This was first noticed by Brandon Carter,[1] and the notion was recently popularized in several books.[2,3]

    There are many physical constants such as the speed of light c, the gravitational constant G, Planck’s constant h, and Boltzmann’s constant k. The electron mass, the proton mass, and the constants determining the magnitudes of the electromagnetic, strong, and weak interactions are also regarded as fundamental constants. We do not know why these fundamental constants have the actual values they do; we simply measure them to find their values. For example, we know that the speed of light, which is the maximum speed in the universe, is 300,000 kilometers per second (about 186,000 miles per second). But we do not know why the speed of light should have this particular value.

    To explain the theory of relativity and quantum theory to the public, George Gamow wrote a popular book entitled Mr. Tompkins in Wonderland.[4] To make relativistic and quantum effects noticeable in daily activities in Wonderland, Gamow set the value of c much smaller than its actual value and the values of G and h much larger than their actual values. For example, a bicyclist in Wonderland can see city blocks becoming shorter as he speeds up, because his speeds are relativistic (comparable to c). In Wonderland, hunters have difficulty shooting game animals because their positions are fuzzy due to quantum uncertainty.

    In Wonderland, the values of c, G, and h differ from their actual values by enormously large factors. If the value of any one of these physical constants had been set even slightly differently at the beginning of our universe, however, it would be a totally different place. Life could not have emerged in such a universe. In some cases, even if life had emerged, it would not be possible for intelligent life forms to emerge. I explain briefly only simple cases, because most of the arguments for this are highly technical. (To readers who are deeply interested in this subject, I recommend The Accidental Universe.)

    A brief explanation of the requirements for life on earth is necessary. All living things on earth are carbon-based. That is, carbon atoms, which have four chemical bonding hooks, act as chain links to make complex molecules. All living creatures depend directly or indirectly on photosynthesis. Ecosystems teeming with life were recently found on the deep ocean floors where no sunlight can penetrate; these organisms get energy from sulfur compounds emitted from hydrothermal vents. However, scientists conjecture that they feed on the carcasses of great whales on ocean floors (which indirectly depend on photosynthesis for life) while migrating along the sea floor from one thermal vent to another.[5]

    Visible light is necessary for photosynthesis. Each photon of infrared light has too low an energy for photosynthesis. On the other hand, each photon of ultraviolet light has too high an energy and is harmful to life. Life forms on other planets may utilize chemical reactions different from photosynthesis on earth, but the energy levels of chemical reactions of complex molecules are similar, being determined by the magnitude of the electromagnetic interaction. Therefore, we also expect that life forms on other planets are sustained by visible light.

    Can stars other than the sun support life? The intensity of light emitted by a given object depends on its wavelength or frequency. How the intensity changes as a function of frequency is called the spectrum of light. The spectrum of light emitted by a star is determined by its surface temperature, which is, in turn, influenced by the energy generation rate in the stellar core and by the surface area. The energy generation rate and the surface area are, in turn, determined by many physical constants such as the magnitudes of the strong, gravitational, and electromagnetic interactions, and by the electron mass, the proton mass, and the speed of light.

    We can divide main-sequence stars into two classes: blue giants and red dwarfs. Blue giants are massive stars, and the energy generated in the core of a blue giant is transported by the propagation of light through the stellar interior. Because blue giants emit copious ultraviolet light, they are not suitable for supporting life. Red dwarfs are low-mass stars, and the energy generated in the core of a red dwarf is transported mainly by convection. (In a heated pot, energy is transported from the bottom to the top by the convection of water.) Red dwarfs emit mainly infrared light, whose energy is too feeble to support life. In terms of their characteristics, sun-like stars fall between red dwarfs and blue giants: both convection and radiation play roles in transporting energy in such stars, and they emit most of their energy in the visible band, which supports photosynthesis. Because most stars happen to be situated near the boundary between the blue-giant regime and the red-dwarf regime, a slight change in the value of one of the above-mentioned physical constants, one way or the other, would push all stars to become blue giants or red dwarfs. In order to have sun-like stars in the universe which can sustain life, the values of these fundamental constants must be fine-tuned.
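The spectral point above, that red dwarfs peak in the infrared, sun-like stars in the visible band, and blue giants in the ultraviolet, can be roughly checked with Wien’s displacement law. This illustration is added here and is not a calculation from the quoted article; the surface temperatures are typical textbook values:

```python
# Wien's displacement law: lambda_peak = b / T, with b ~ 2.898e-3 m*K.
WIEN_B = 2.898e-3  # Wien displacement constant, metre-kelvins

def peak_wavelength_nm(temp_k):
    """Peak emission wavelength (nm) for a blackbody at surface temperature temp_k."""
    return WIEN_B / temp_k * 1e9

# Illustrative surface temperatures (rough values, assumed for this sketch).
stars = {"red dwarf (~3000 K)": 3000,
         "Sun-like (~5800 K)": 5800,
         "blue giant (~20000 K)": 20000}

for name, t in stars.items():
    lam = peak_wavelength_nm(t)
    band = ("visible" if 380 <= lam <= 750
            else "infrared" if lam > 750
            else "ultraviolet")
    print(f"{name}: peak ~{lam:.0f} nm ({band})")
```

A ~5800 K star peaks near 500 nm, squarely in the photosynthesis-friendly visible band, while the cooler and hotter cases peak outside it, matching the article’s qualitative argument.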

    Let us consider the consequences in a change of the magnitude of the strong force, as an example. If the magnitude of the strong interaction were slightly higher, the nuclear fusion rates inside stars would be higher than they are now. The star would expand because it would become
    hotter. The exact change in the stellar structure would have to be investigated by numerical
    simulations. Because of the increased fusion rate, however, the lifetimes of stars would decrease.

    Carbon, oxygen, and nitrogen are currently the most abundant chemical elements after hydrogen
    and helium. However, if the strong interaction were somewhat stronger than it is now, these
    elements would be less abundant because they would more easily fuse to form heavier elements in
    the stellar interior. Hence, heavy elements would be more abundant.
    With carbon less abundant, it is doubtful whether carbon-based life would arise in such a universe.

    If the magnitude of strong interaction were greater by only two percent, two protons could
    combine to form a nucleus made of just two protons. This process, which is governed by strong
    interaction, would be much more rapid than the deuteron formation, which is governed by weak
    interaction. In this case, all hydrogen would have been converted to helium during the Big Bang
    nucleosynthesis. Without hydrogen, stars would shine by fusing helium into carbon, and stellar lifetimes would be several million years instead of billions of years. Such stellar lifetimes are too short to
    allow the evolution of life, considering that it took about 800 million years for the earth to produce even the simplest organisms. However, this point is moot, because without hydrogen there would be no water, which is also a prerequisite for life.

    There are ninety-two natural elements. What determines the number of natural elements? The
    magnitudes of strong interaction and electromagnetic interaction determine the nuclear structure,
    and their relative magnitudes determine the number of natural elements.
    Strong interaction, an
    attractive force operating between nucleons (protons and neutrons), is a short-range interaction and
    operates only at distances shorter than 10^-13 centimeter (one ten-trillionth of one centimeter). On the other hand, electromagnetic interaction is a long-range interaction whose magnitude is inversely proportional to the square of the distance between two electric charges. Therefore, a proton in a heavy nucleus is pushed by the electric forces of all the other protons while it is pulled only by nearby nucleons in the nucleus. It follows that the electric repulsive force exerted on a proton increases
    as the number of nucleons in the nucleus increases; however, the attractive force due to strong
    interaction does not increase after the nucleon number exceeds a certain threshold.
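The saturation argument above (short-range attraction from nearby nucleons only, versus long-range Coulomb repulsion from all proton pairs) is the basis of the standard liquid-drop (Bethe–Weizsäcker) mass formula. The following is a rough, illustrative sketch only: the coefficients are common textbook values, the pairing term is omitted, and the `typical_Z` helper is a simplification introduced here, none of which comes from the excerpt itself.

```python
# Illustrative liquid-drop (Bethe-Weizsaecker) estimate of nuclear binding
# energy: the "volume" term from the short-range strong force grows only
# linearly with nucleon number A (saturation), while Coulomb repulsion
# grows roughly as Z^2 / A^(1/3). Coefficients in MeV are textbook values;
# the pairing term is omitted for brevity.

def binding_energy_mev(A, Z):
    a_v, a_s, a_c, a_a = 15.8, 18.3, 0.714, 23.2
    volume  = a_v * A                              # strong-force attraction, saturates per nucleon
    surface = -a_s * A ** (2 / 3)                  # surface correction
    coulomb = -a_c * Z * (Z - 1) / A ** (1 / 3)    # long-range electric repulsion
    asym    = -a_a * (A - 2 * Z) ** 2 / A          # neutron-proton asymmetry
    return volume + surface + coulomb + asym

def typical_Z(A):
    # Rough empirical proton number of the most stable isobar (illustrative fit).
    return round(A / (1.98 + 0.015 * A ** (2 / 3)))

# Binding energy per nucleon peaks near iron (A ~ 56) and declines for
# heavier nuclei as Coulomb repulsion outpaces the saturated attraction.
per_nucleon = {A: binding_energy_mev(A, typical_Z(A)) / A for A in (16, 56, 120, 238)}
for A, b in per_nucleon.items():
    print(f"A = {A:3d}: {b:.2f} MeV per nucleon")
```

On these coefficients, A = 56 comes out near 8.7 MeV per nucleon while A = 238 drops to roughly 7.6, mirroring the loose binding of very heavy elements described in the excerpt.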

    Therefore, very heavy elements are loosely bound and some of them decay naturally. Such
    elements are called radioactive. If the magnitude of the strong interaction were slightly weaker than it actually is, the number of stable elements would be smaller, and iron could be radioactive. Iron is a constituent of human blood cells. It is not clear whether other elements could substitute for iron in blood cells. Without heavy elements like calcium, however, big animals
    requiring bones to maintain their structure would not be able to emerge. If the magnitude of
    strong interaction were weak enough to make carbon, nitrogen, and oxygen radioactive, then, life
    would not be possible at all.

    A more dramatic change would occur in the nucleosynthesis process if the magnitude of strong
    interaction were decreased by five percent: a proton and a neutron would not be able to combine
    to form a deuteron. Deuteron formation is the first step of nuclear synthesis; thus, without the
    first step, nucleosynthesis would not be possible at all. Without a stellar energy source and heavy
    chemical elements, no life would be possible.

    [Let us consider the magnitude of the weak interaction. When the iron core [this, after cycles of burning heavier and heavier elements] of a massive star exceeds 1.4 times the mass of the sun, it suddenly collapses, and neutrinos emitted from the core
    push out the stellar envelope to cause a supernova explosion. The neutrino reaction within the
    stellar envelope is governed by weak interaction. Therefore, if the magnitude of weak interaction
    were slightly less than it is now, supernova explosions would not be possible.
    Supernova explo-
    sions expel heavy elements synthesized deep inside massive stars into interstellar space.
    Therefore, without supernova explosions, planets like earth would not have heavy elements, some of which are essential to life. In addition to carbon, nitrogen, and oxygen, sulfur and phosphorus are such elements.[6] Iron in hemoglobin in our blood cells is necessary to carry oxygen; calcium is required for making bones. Therefore, unless the magnitude of the weak force is fine-tuned, life could not emerge in the universe.

    If the gravitational constant were larger than its current value, stars would be more tightly bound, with their central temperatures increasing. The increase of the central pressure and temperature of the sun would increase the nuclear energy generation rate. In order to radiate more energy at the surface, the temperature and/or the area of the surface would have to increase. However, the stronger gravity would tend to decrease the surface area. Therefore, the surface temperature of the sun would have to be higher than it is now, emitting the bulk of its energy in ultraviolet radiation. Solar-mass stars would be like blue giants, unsuitable for supporting life. With stronger gravity, some low-mass stars would emit most of their energy in visible light, suitable for supporting life. However, such stars would not stay in the main-sequence stage long enough to preside over the long evolutionary history of life.
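The reasoning about surface area versus surface temperature in the paragraph above implicitly uses the blackbody relation L = 4πR²σT⁴ (the Stefan–Boltzmann law). A minimal numerical sketch follows; the factor-of-two luminosity increase and 10% radius decrease are hypothetical figures chosen purely for illustration, not values from the excerpt.

```python
import math

# Blackbody luminosity L = 4*pi*R^2 * sigma * T^4 (Stefan-Boltzmann law),
# the relation behind "radiate more energy => hotter and/or larger surface".
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def luminosity(radius_m, temp_k):
    return 4 * math.pi * radius_m ** 2 * SIGMA * temp_k ** 4

R_sun, T_sun = 6.96e8, 5772.0   # solar radius (m) and effective temperature (K)
L_sun = luminosity(R_sun, T_sun)

# Hypothetical stronger-gravity case: radius shrinks 10%, energy output doubles.
# The surface temperature must then rise to carry the extra luminosity:
T_new = (2 * L_sun / (4 * math.pi * (0.9 * R_sun) ** 2 * SIGMA)) ** 0.25
print(f"T_new / T_sun = {T_new / T_sun:.2f}")  # ~1.25: hotter, bluer surface
```

A roughly 25% hotter surface shifts the emission peak toward the ultraviolet, which is the direction of the blue-giant outcome described above.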

    Similarly, a slight change in the magnitude of the electric force, the speed of light, Planck’s
    constant, or Boltzmann’s constant
    would have dire consequences: the universe would not be able to produce life. A slight change in the mass of the electron would also be disastrous.] >>
    ____________________

    Snip, snip, snip, snip!

  24.
    second opinion says:

    gpuccio
    I made the observation that ID proponents use the apparent fine tuning of the universe as supporting evidence for ID (the intelligent design of biological life). I don’t think it is, and you seem at least partly to agree with me. In short: this kind of fine tuning argument basically says that the natural processes are designed to enable and sustain life. ID implicitly claims that natural processes are not designed to produce or evolve life. The former statement clearly does not support the latter.

    I think you could even go a step further and have a case that the fine tuning actually falsifies ID. I could claim that water is fine tuned for abiogenesis or the mutation rates on earth are fine tuned for evolution, for example, and then fine tuning and ID would be in conflict. Obviously you would have to demonstrate the fine tuning for the production and evolution of life as thoroughly as for the enabling and sustaining of life. But it is probably not worth discussing that because I am not an expert in this field.

  25.
    kairosfocus says:

    2nd O:

    The issue is getting clearer:

    [2nd O, 24:] This kind of fine tuning argument basically says that the natural processes are designed to enable and sustain life. ID implicitly claims that natural processes are not designed to produce or evolve life. The former statement clearly does not support the latter.

    1 –> Let us notice the highlighted contrast, and set it in context:

    (i) the cosmological design inference addresses a cosmos whose physical parameters are fine-tuned and as a result act “to enable and sustain” the chemistry required for cell based life.

    (ii) The biological design inference addresses the need for something extra than mere blind physical necessity and/or chance processes to explain digitally coded, functionally specific complex information in life forms, and

    (iii) since blind mechanical necessity and/or chance processes do not explain symbols and rules for their meaningful or prescriptive use, the physics from (i) is an underpinning for life but through the challenge at (ii) it is unable “to produce or evolve” life and its body-plan level biodiversity.

    2 –> That is, the two sides of the design inference are quite consistent and indeed mutually supportive, once we see that enabling and sustaining are not the same as producing or transforming at body-plan innovation level.

    3 –> A fine tuned cosmos, produces the materials required for, and sites on which life may come to be.

    4 –> But such a fine-tuned cosmos — per relevant plausibility issues on isolated islands of function in vast config spaces dominated by non-functional forms — does not then credibly spontaneously originate life, as the blind mechanisms and chance circumstances and processes are simply massively probabilistically incapable of credibly configuring the required molecules, codes, algorithms, and mechanisms such as von Neumann self-replicators, required for life to come to be.

    5 –> Such FSCO/I has only one empirically warranted, routinely observed, unquestionably known source: intentionally and intelligently directed configuration, i.e. design.

    6 –> So strong is this observation and the issue/analysis of islands of isolated function in vast config spaces, that we can confidently conclude that FSCO/I is a reliable sign signifying the presence of design as cause. [Cf. the previous UD post on the design inference, here, and the previously linked discussion on OOL.]

    7 –> Similarly, the origin of body plans by darwinian mechanisms is utterly implausible, for reasons discussed here, as repeatedly linked.

    8 –> Those reasons boil down to the same problem of accounting for islands of deeply isolated function on forces and processes of chance and mechanical necessity.

    9 –> Such forces are simply inadequate to cogently account for the diversity of life forms. FSCI, again, is a reliable sign of design as best explanation.

    _____________________

    In short, the two design inferences mutually reinforce. Failure to recognise that, is apparently driven by the assumption that chemical evolution on blind chance and necessity in a still warm electrified pond or more modern site, is a credible explanation. Also, that Darwinian type mechanisms can account for the tens of millions of bits worth of incremental FSCI required to account for new body plans.

    Neither is a credible claim, on inspection of the implications of deeply isolated islands of function in large config spaces.

    By contrast, design is a routinely observed source of dFSCI.

    GEM of TKI

  26.
    second opinion says:

    I’m sorry kairosfocus but the

    Failure to recognise that

    is driven by logic.

    The two statements reinforce each other in the same way as the statements “the car is green” and “the car does not have four doors”.

    But thank you for your reply and for trying to understand me.

  27.
    kairosfocus says:

    2nd O:

    I thank you for your prompt response.

    I believe I have already outlined why your perceived conflict is erroneous.

    I need only summarise here:

    1: If the key parameters and laws of the cosmos were slightly different, the cosmos would be radically hostile to C-chemistry, cell based life.

    2: For instance, there would not be anything more than H and He in the cosmos, or there might be so little of O or C or even H that C-chemistry, aqueous medium cell based life would not be possible.

    3: Other slight changes would have even more deleterious effects on the provision of materials and sites for cell based life.

    4: So, cosmological fine tuning addresses getting to the sites and materials used in life, i.e. the provision of causally and logically necessary material conditions for the sort of life we are discussing.

    5: In addition, cosmological fine tuning is best explained on the intentionally directed organisation of the physics, parameters and initial conditions of the cosmos we observe. Thus, the stage is set for life to be made.

    6: However, that does not at all get us to the causally sufficient conditions for such life to originate, once we see the significance of FSCO/I.

    7: For, blind chance plus necessity are not credibly capable of creating the 100 – 1,000+ k bits of FSCI required for the simplest cell based life; but that sort of quantum of dFSCI is routinely produced by intelligence.

    8: On the sign of dFSCI, we have excellent reason to infer that the living cell, made of the materials provided by a fine tuned cosmos, and on sites that are habitable based on the same fine tuning, is an artifact of design.

    9: Going further, the increments of 10 – 100+ Mbits of additional dFSCI required to account for dozens of major body plans (much less in the window of time associated with the Cambrian life revolution on the usual timelines), similarly are beyond the credible reach of blind chance + necessity. But such quanta of dFSCI are empirically known to be produced by intelligence. So, body plan level biodiversity is also credibly the product of design.
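    For what it is worth, the order-of-magnitude arithmetic behind points 7 and 9 can be sketched numerically. The figures below take the comment's own lower bound of 100 k bits together with conventional estimates of the observable universe's search resources (~10^80 atoms, ~10^43 state changes per second, ~10^17 s of cosmic history); this is an illustration of the claimed disparity as stated, not an independent derivation or endorsement.

```python
import math

# Claimed lower bound (from the comment above) on functionally specific
# information in a first living cell, in bits:
bits = 100_000

# log10 of the number of distinct configurations, 2^bits:
log10_configs = bits * math.log10(2)

# Generous upper bound on physical "trials": ~10^80 atoms, each changing
# state every ~10^-43 s (Planck time) over ~10^17 s of cosmic history:
log10_trials = 80 + 43 + 17

print(f"log10(configurations) ~ {log10_configs:.0f}")  # ~30103
print(f"log10(max trials)     ~ {log10_trials}")       # 140
```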

    ____________________

    In short, the two inferences are independent on investigative methods, but supportive on results and findings.

    Further to this, to overturn the inferences, objectors need to show that undirected chance plus necessity empirically produce functionally specific complex organisation and related information. That demonstration — if credibly achieved — would also arguably overthrow a considerable body of thermodynamics.

    As it stands, both observation and analysis of random walks in large config spaces containing islands of function, strongly indicate that such an observation will not be feasible.

    GEM of TKI

  28.
    kairosfocus says:

    Update:

    With some help from UD’s technical folks [HT: TCS], I have added a video clip that explains the design inference on fine tuning.

    GEM of TKI
