
ID Foundations: The design inference, warrant and “the” scientific method


[Continued, from here]

16 –> Dembski provides an answer, in his recent announcement of a vision/ purpose statement for the Evolutionary Informatics Lab:

Intelligent design is the study of patterns in nature best explained as the product of intelligence . . . Archeology, forensics, and the search for extraterrestrial intelligence (SETI) all fall under this definition. In each of these cases, however, the intelligences in question could be the result of an evolutionary process. But what if patterns best explained as the product of intelligence exist in biological systems? [That is, the digitally coded, functionally specific complex information needed to explain the origin of observed, cell-based life, and the origin of body-plan level biodiversity]  . . . By looking to information theory, a well-established branch of the engineering and mathematical sciences, evolutionary informatics shows that patterns we ordinarily ascribe to intelligence, when arising from an evolutionary process, must be referred to sources of information external to that process [nb: as it is not seriously credible that complex algorithmic or linguistic, specifically functional information comes about by in effect “lucky noise”]. Such sources of information may then themselves be the result of other, deeper evolutionary processes. But what enables these evolutionary processes in turn to produce such sources of information? Evolutionary informatics demonstrates a regress of information sources. At no place along the way need there be a violation of ordinary physical causality. And yet, the regress implies a fundamental incompleteness in physical causality’s ability to produce the required information. Evolutionary informatics . . . thus points to the need for an ultimate information source qua intelligent designer. [Emphases added.]
17 –> In short, the key problem is that, when applied to biological systems, the design inference points to intelligent design of key features of life. Thus, we see a challenge to the methodological naturalism that is a key plank of the evolutionary materialistic paradigm for origins science.
18 –> That paradigm posits that scientific explanation “must” be naturalistic, which raises the issue of the proper definition of science: is science based on empirical observation, modelling and analysis towards progressively more accurate descriptions and explanations of our world? Or must it be limited to naturalistic explanations that can only appeal to forces of chance and necessity on origins questions? And if the latter, does this structurally bias science against a possibly true explanation?
19 –> UD lead blogger Barry Arrington has put the dilemma quite well:

Today, for the sake of argument only, let us make two assumptions:

1.  First, let us assume that the design hypothesis is correct, i.e., that living things appear to be designed for a purpose because they were in fact designed for a purpose.

2.  Second, let us assume [presumably, by the “rule” of methodological naturalism] that the design hypothesis is not a scientific hypothesis, which means that ID proponents are not engaged in a scientific endeavor, or, as our opponents so often say, “ID is not science.”

From these assumptions, the following conclusion follows:  If the design hypothesis is correct and at the same time the design hypothesis may not be advanced as a valid scientific hypothesis, then the structure of science prohibits it from discovering the truth about the origin of living things . . . .

No one can know with absolute certainty that the design hypothesis is false.  It follows from the absence of absolute knowledge, that each person should be willing to accept at least the possibility that the design hypothesis is correct, however remote that possibility might seem to him.  Once a person makes that concession, as every honest person must, the game is up.  The question is no longer whether ID is science or non-science.  The question is whether the search for the truth of the matter about the natural world should be structurally biased against a possibly true hypothesis. [“What if it’s true?” Uncommon Descent, Aug. 6, 2010. (Emphasis added.)]

20 –> And, that leads to a clash with Lewontinian a priori materialism, which is dominant in scientific and related institutions. As Lewontin summarises the dominant school of thought:
. . . to put a correct view of the universe into people’s heads we must first get an incorrect view out . . .   the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth . . . . To Sagan, as to all but a few other scientists, it is self-evident that the practices of science provide the surest method of putting us in contact with physical reality, and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test . . . .

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.  [From: “Billions and Billions of Demons,” NYRB, January 9, 1997. Bold emphasis added.]

21 –> The loaded language shows that this is plainly an ideological a priori, and as Arrington pointed out, it has the effect of biasing conclusions in origins science. This raises the question: is scientific knowledge intended to be an accurate and well-warranted view of reality, i.e., as far as possible a truthful one? If so, then the imposition of a priori materialism is improper.
22 –> As Design thinker Philip Johnson therefore observed in response to professor Lewontin:
For scientific materialists the materialism comes first; the science comes thereafter. [Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”
. . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [Emphasis added.] [The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22–25.]
23 –> On balance, then, the explanatory filter is a reasonable approach, and one that fits into the general scientific method. The controversies that surround it — apart from the teething troubles of any new departure in the sciences — are evidently largely ideological.
24 –> So, whether or not one is inclined to accept the above Dembski formulation, one should be at least aware of this perspective and its rationale if one is to be properly educated on origins science.
(NB: similarly, one should know how informed Creationists have worked out their various Young Earth or Old Earth Creation views and how they respond to their critics, e.g. here, here, here, here and here.)

25 –> The design inference explanatory filter, however, is not confined to variants on complex, specified information. This becomes evident once we consider what such CSI [or FSCI or dFSCI] points to. For, as Wicken pointed out, the information in question occurs in the context of complex, functional organisation.

26 –> So, while we may prioritise CSI and FSCI, the same basic question extends to the evident irreducible complexity of many functional systems in life forms. For instance, ENV reports that, in the notorious Dover trial in 2005, ID researcher Scott Minnich testified about his gene knock-out studies on the bacterial flagellum, as follows:

One mutation, one part knock out, it can’t swim. Put that single gene back in we restore motility. Same thing over here. We put, knock out one part, put a good copy of the gene back in, and they can swim. By definition the system is irreducibly complex. We’ve done that with all 35 components of the flagellum, and we get the same effect. [Dover Trial, Day 20 PM Testimony, pp. 107-108.]

27 –> That is, across 35 proteins associated with the iconic bacterial flagellum that appears at the top of this blog, we see a pattern of well-matched, interacting parts that give rise to a composite function. That function is complex, and is based on 35 components that are each necessary and, once assembled, jointly sufficient to implement a working flagellum, a sort of outboard motor for bacteria that works by whirling around and pushing the bacterium forward. (When reversed, the motion causes tumbling, used to change direction of travel.)
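The knock-out logic in that testimony (every part necessary; restoring the part restores the function) can be mirrored in a toy sketch. This is purely illustrative: the "protein_i" names are hypothetical placeholders, and only the count of 35 components is taken from the testimony above.

```python
# Toy model of the knock-out logic described above: a system whose
# composite function appears only when every required part is present.
# The 35-part count follows the quoted testimony; the "protein_i"
# names are made-up placeholders, not real flagellar proteins.

REQUIRED_PARTS = frozenset(f"protein_{i}" for i in range(1, 36))  # 35 parts

def is_motile(parts_present):
    """Function (motility) requires all 35 parts; any missing part breaks it."""
    return REQUIRED_PARTS <= parts_present

full_set = set(REQUIRED_PARTS)
assert is_motile(full_set)                 # intact system "swims"

for part in REQUIRED_PARTS:
    knocked_out = full_set - {part}
    assert not is_motile(knocked_out)      # one part knocked out: no motility
    assert is_motile(knocked_out | {part}) # good copy back in: restored

print("all 35 single-part knockouts break the function")
```

In this toy sense the system is "irreducible": no proper subset of the 35 parts yields the function, which is exactly the pattern the knock-out experiments probe.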

28 –> But, couldn’t this evolve from the T3SS toxin injector? ENV (same article) is again helpful:

Ken Miller has been making the same objections about irreducible complexity and the bacterial flagellum for a long time. In his Dover testimony, his book Only a Theory, and in other writings he argues that irreducible complexity for the flagellum is refuted because about 10 flagellar proteins can also be used to construct a toxin-injection machine (called the Type-III Secretory System, or T3SS) that some predatory bacteria use to kill other cells . . . .

As New Scientist reported:

One fact in favour of the flagellum-first view is that bacteria would have needed propulsion before they needed T3SSs, which are used to attack cells that evolved later than bacteria. Also, flagella are found in a more diverse range of bacterial species than T3SSs. “The most parsimonious explanation is that the T3SS arose later,” says biochemist Howard Ochman at the University of Arizona in Tucson.

Second, the T3SS is composed of only about 1/4 of the proteins in the flagellum, and does not help one account for how the fundamental function of the flagellum–its propulsion system–evolved. The unresolved challenge that the irreducible complexity of the flagellum continues to pose for Darwinian evolution is starkly summarized by William Dembski: “At best the T[3]SS represents one possible step in the indirect Darwinian evolution of the bacterial flagellum. But that still wouldn’t constitute a solution to the evolution of the bacterial flagellum. What’s needed is a complete evolutionary path and not merely a possible oasis along the way. To claim otherwise is like saying we can travel by foot from Los Angeles to Tokyo because we’ve discovered the Hawaiian Islands. Evolutionary biology needs to do better than that.”36

29 –> In short, IC systems raise the issue that new functional organisation requiring multiple well-matched, co-ordinated parts soon demands such a degree of coordinated structure that chance variation loses plausibility as a source of the required configuration. This challenges the dominant [neo-]Darwinian model of evolution.

30 –> In short, the question of common descent [which Michael Behe, the originator of the IC concept as a plank of design theory, accepts] is now separated from that of the proposed Darwinian mechanism of undirected chance variation, natural selection on differential reproductive success, and similar “Blind Watchmaker” mechanisms.

31 –> Going further, the complex functional organisation approach extends to the discovery that the universe we observe credibly had a beginning [usually dated at ~13.7 BYA], and that it sits at a finely balanced, fine-tuned operating point that facilitates carbon-chemistry, cell-based life. For it is at least strongly arguable (even, through the multiverse model) that the best explanation of such organisation is intentionally and intelligently directed, purposeful organization of the physics and circumstances of the observed cosmos.

________________

So, we may freely conclude that the design inference is well warranted as a means of credibly assigning cause across necessity, chance and design. It fits in well with the generic scientific method, and comports well with the concept that a major purpose of science is that it should progressively seek to discover the well-warranted, empirically supported truth about our world, based on observation, experiment, analysis, explanatory modelling [aka theorising] and unfettered responsible discussion among the informed.

But, its implications for origins science are nothing short of revolutionary, and so it is controversial. That controversy goes to the heart of what science is and what it is meant to do.

Consequently, since science is so important to the modern world, we all need to understand the issues, and the design perspective.

36 Replies to “ID Foundations: The design inference, warrant and “the” scientific method”

  1.
    vjtorley says:

    Hi kairosfocus,

    Thanks very much for a great post. I have three questions for you.

    (1) Professor Dembski has remarked: “If we can explain by means of a regularity, chance and design are automatically precluded.” But of course there are many (including Dembski himself) who also believe that the regularities (more precisely, laws) of nature are themselves designed. So my question is: how would you explicate the difference between the way in which the laws of nature are designed, and the way in which FCSI is the product of design?

    (2) As you and I are both aware from a recent exchange of views with Aiguy, one objection to the case for cosmic ID is that the inductive evidence that all designers are physical is just as strong as the overwhelming inductive evidence that all instances of FCSI exceeding a certain threshold of complexity (1000 bits) were intelligently designed. (Or is it?) So it seems the only kind of Designer we’re entitled to reason to is an embodied one. How would you respond to that line of argument? One possible line of response that has occurred to me is that the case for design isn’t just built on inductive logic, but also on abductive logic – whereas the case for all designers being physical is an example of inductive logic.

    (3) By the way, how many bits of FCSI are needed to specify the values of the fundamental constants, to the degree of precision required for life to emerge? Has anyone calculated this number? Just curious.

    Thanks again for a great post, kairosfocus.

    ++++++++++

    ED: Fixed a bad tag — looks like if you reverse the solidus and the i, you push through an infinite italicisation.

  2.
    kairosfocus says:

    Dr Torley:

    Let’s see how the usual objectors respond to the post and the two background posts, in light of the build-up over the past week or so.

    You have also raised some significant concerns. Let me scoop and reply, point by point:

    1: how would you explicate the difference between the way in which the laws of nature are designed, and the way in which FCSI is the product of design?

    As p.2 of the post discusses briefly, the issue is one of the context of the design inferences.

    FSCI relates to inferring design of objects observed to incorporate it into their functionality, whether a computer [think microcoded MPU, or just the Word 2007 install — “I HATE the ribbon, uncle Bill!”] or a living cell.

    Recall, the inference on sign construct:

    I:[si] –> O, on W

    At this level, physical laws are a given, and we have no immediate reason to suspect that the laws of our observed cosmos are designed.

    But then, lift your eyes to the cosmos, and look at its credible beginning and fine-tuning to sit at an operating point that is locally — lone fly on the wall swatted by a bullet — isolated and facilitates the sort of C-chemistry, cell based life we experience.

    From cosmological observations, and related reasoning, we have warrant to infer that that beginning entails a beginner, and that the complex fine-tuning [BTW, the linked page has a table of five parameters that pushes us past the FSCI threshold] entails intelligently directed configuration of physics to create a habitat suitable for life.

    On one particular parameter, we are looking at a degree of tuning — 1 in about 10^60 — comparable to the ratio of one sand grain to the atomic matter of the observed universe. (And yes, that leaves off the dark matter. The point was to highlight just how finely tuned such a ratio is.)
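    That comparison can be rough-checked with back-of-envelope figures. The grain mass and baryonic-mass values below are assumed round numbers for illustration, not values taken from the comment:

```python
import math

# Assumed round figures (illustrative only, not from the comment):
grain_mass_kg = 1e-6        # roughly a ~1 mm quartz sand grain
baryonic_mass_kg = 1.5e53   # commonly cited ordinary-matter mass of the
                            # observable universe (dark matter excluded,
                            # as the comment notes)

ratio = baryonic_mass_kg / grain_mass_kg
print(f"ratio ~ 10^{math.log10(ratio):.0f}")  # prints: ratio ~ 10^59
```

    On these assumed figures the ratio lands near 10^59, i.e. within an order of magnitude of the 1-in-10^60 figure cited.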

    2: one objection to the case for cosmic ID is that the inductive evidence that all designers are physical is just as strong as the overwhelming inductive evidence that all instances of FCSI exceeding a certain threshold of complexity (1000 bits) were intelligently designed. (Or is it?) So it seems the only kind of Designer we’re entitled to reason to is an embodied one. How would you respond to that line of argument?

    Nope, it’s not a full induction; it is an analogical argument. So, immediately, it falls to the issue of what points of comparison are material: all higher animals are embodied, but only one class is fully, conceptually and abstractly linguistic. So, mere embodiment cannot explain abstract, verbal reasoning ability, a point critical to the required logico-mathematical and linguistic reasoning.

    Second, consider the PC you are working on. By far and away most people who use computers could not design or build them. So, mere brain size and verbal ability do not explain ability to design this class of systems.

    Computer engineers are deeply knowledgeable, highly talented, well-trained, intelligent people.

    In short, the issue is not crude physicality or possession of a brain, but possession of a MIND, with the knowledge and intellectual skills to carry out the relevant class of designs.

    And, while many would love to insist that we only consider embodied minds, the problem is that this is an expression of a priori Lewontinian materialism, not any empirically sound inference on evidence. It is a question-begging deduction resting on a bad analogy, not a cogent induction.

    As the Derek Smith model I often point to — and which Aiguy pointedly and repeatedly ignores — shows, the two-tier controller architecture is compatible with diverse possible ways of getting to the supervisory controller, even for embodied intelligences.

    Then, when we look at the cosmological design inference, it is reasonable to infer to a mind who is a necessary being, and is knowledgeable and powerful enough to build a cosmos. Such a mind would be prior to anything we have a right to call matter; which from mass-energy equivalence a la Einstein, is plainly contingent.

    3: how many bits of FCSI are needed to specify the values of the fundamental constants, to the degree of precision required for life to emerge? Has anyone calculated this number?

    My linked selected just five parameters and went past 1,000 bits.

    I think Penrose had it at 1 in 10^(10^123) as the precision of the Creator’s aim, which is a LOT more bits.

    BA 77 will recall the number and source better than I can just now.
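    The two bit-counts in play here can be related directly: a probability of 1 in N carries log2(N) bits of information. Since 10^(10^123) overflows any float, the sketch below works with the base-10 exponent instead; this is a standard log identity, not anything taken from the thread:

```python
import math

LOG2_10 = math.log2(10)  # about 3.32 bits per decimal order of magnitude

def bits_for_odds_of_one_in_ten_to_the(e):
    """Bits in odds of 1 in 10^e: log2(10^e) = e * log2(10)."""
    return e * LOG2_10

# The 1,000-bit FSCI threshold corresponds to about 10^301 configurations:
print(1000 / LOG2_10)                        # ~301 decimal digits

# Penrose's 1 in 10^(10^123): the exponent is itself 10^123, so the
# information measure is ~3.3e123 bits, dwarfing the 1,000-bit threshold.
print(bits_for_odds_of_one_in_ten_to_the(1e123))
```

    The point of the sketch is just the scale difference: the cited Penrose figure is about 10^120 times larger, in bits, than the 1,000-bit threshold.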
    _____________

    GEM of TKI

  3.
    DrBot says:

    KF, fascinating post!

    I watched an interesting talk on microprocessor design the other month and your post reminded me of it, particularly in relation to human designers.

    If you take a look at the silicon microarchitecture of modern processors, they are, apart from the orderly memory, a mess. The reason (and the reason this technology has progressed so fast) is that they are designed by computers. As designers we have some quite severe limitations, but we are clever enough to invent mechanistic processes to generate designs that exceed the capabilities of human designers alone. We use our intelligence to specify target behaviours and create processes to generate systems that meet those requirements, but the resulting systems can be difficult for us to understand.

    Of course, when it comes to God, if God is an all-powerful being then these limitations don’t apply, so I guess I was wondering (rather vaguely 🙂 ) how the limited abilities of human designers link into the chain of reasoning that allows us to infer that we were designed, and if it has any implications at all?

    For example, is it reasonable to infer that the creator might have created mechanisms to aid further creation – I realise I’m skimming dangerously close to the idea of theistic evolution here, but the question is independent of evolutionary arguments – there are plenty of other mechanisms we can conceive of that can aid a designer!

    One other note (Just skimming because I’m a bit busy so apologies if I missed the deeper reasoning):

    all higher animals are embodied, but only one class is fully, conceptually and abstractly linguistic.

    Is this warranted? How do we know that other higher animals can’t reason and use abstract symbols in the same way, just not to our level? In other words, could it be that we are just (much) further along a continuum of cognitive abilities (rooted in embodied brains), rather than on the other side of a wall necessitated by something extra? I guess the question this raises is: how can we tell?

  4.
    bornagain77 says:

    kf and Dr. Torley,

    “how many bits of FCSI are needed to specify the values of the fundamental constants, to the degree of precision required for life to emerge? Has anyone calculated this number?”

    This is a very interesting question in that the constants are ‘transcendent information’ in and of themselves and do not reduce to a material basis but instead dictate what the ‘material’ basis of energy-matter will do, a ‘material’ basis which reduces to transcendent information itself as clearly illustrated by Quantum Teleportation.,,,

    Ions have been teleported successfully for the first time by two independent research groups
    Excerpt: In fact, copying isn’t quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable – it is enforced by the laws of quantum mechanics, which stipulate that you can’t ‘clone’ a quantum state. In principle, however, the ‘copy’ can be indistinguishable from the original (that was destroyed),,,
    http://www.rsc.org/chemistrywo.....ammeup.asp

    Atom takes a quantum leap – 2009
    Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre.,,,
    “What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second.
    http://www.freerepublic.com/fo.....1769/posts

    A while back I recall someone tried to employ Szostak’s functional information equation to deduce approximately how many functional information bits would be required for the ‘Privileged Planet’ parameters, but I felt this approach was really a disservice to the problem we are facing since it failed to build a proper foundation for addressing the problem.,,,

    But to back up a bit, and to try to put this problem more fully in context: first, it must be remembered that materialists are loath to admit that the transcendent universal constants are even constant in the first place, since materialism presupposes variance of the transcendent universal constants. Yet it is shown that the ‘material’ basis of reality is in fact governed by these ‘transcendent’ universal constants, which are in fact CONSTANT:

    Stability of Coulomb Systems in a Magnetic Field – Charles Fefferman
    Excerpt of Abstract: I study N electrons and M protons in a magnetic field. It is shown that the total energy per particle is bounded below by a constant independent of M and N, provided the fine structure constant is small. Here, the total energy includes the energy of the magnetic field.
    http://www.jstor.org/pss/2367659?cookieSet=1

    Testing Creation Using the Proton to Electron Mass Ratio
    Excerpt: The bottom line is that the electron to proton mass ratio unquestionably joins the growing list of fundamental constants in physics demonstrated to be constant over the history of the universe.,,,
    http://www.reasons.org/Testing.....nMassRatio

    Latest Test of Physical Constants Affirms Biblical Claim – Hugh Ross – September 2010
    Excerpt: The team’s measurements on two quasars (Q0458- 020 and Q2337-011, at redshifts = 1.561 and 1.361, respectively) indicated that all three fundamental physical constants have varied by no more than two parts per quadrillion per year over the last ten billion years—a measurement fifteen times more precise, and thus more restrictive, than any previous determination. The team’s findings add to the list of fundamental forces in physics demonstrated to be exceptionally constant over the universe’s history. This confirmation testifies of the Bible’s capacity to predict accurately a future scientific discovery far in advance. Among the holy books that undergird the religions of the world, the Bible stands alone in proclaiming that the laws governing the universe are fixed, or constant.
    http://www.reasons.org/files/e.....010-03.pdf

    Dr Sheldon has written a defense of ‘non-variance’ of the ‘fine-structure’ constant here on one of your old threads Dr. Torley:

    http://www.uncommondescent.com.....ent-367471

    ,,, Yet how would one go about calculating the functional information inherent in the ‘transcendent information constants’ when the denominator for total possible values approaches infinity, if it is not in fact infinite???

    As Dr. Bruce Gordon explains:

    BRUCE GORDON: Hawking’s irrational arguments – October 2010
    Excerpt: Rather, the transcendent reality on which our universe depends must be something that can exhibit agency – a mind that can choose among the infinite variety of mathematical descriptions and bring into existence a reality that corresponds to a consistent subset of them.
    http://www.washingtontimes.com.....arguments/

    and it should also be remembered that we are dealing with far more than a few constants:

    Systematic Search for Expressions of Dimensionless Constants using the NIST database of Physical Constants
    Excerpt: The National Institute of Standards and Technology lists 325 constants on their website as ‘Fundamental Physical Constants’. Among the 325 physical constants listed, 79 are unitless in nature (usually by defining a ratio). This produces a list of 246 physical constants with some unit dependence. These 246 physical constants can be further grouped into a smaller set when expressed in standard SI base units.,,,
    http://www.mit.edu/~mi22295/co.....tants.html

    It should also be remembered, in trying to ascertain FCSI, that these constants are ‘irreducibly complex’:

    “If we modify the value of one of the fundamental constants, something invariably goes wrong, leading to a universe that is inhospitable to life as we know it. When we adjust a second constant in an attempt to fix the problem(s), the result, generally, is to create three new problems for every one that we “solve.” The conditions in our universe really do seem to be uniquely suitable for life forms like ourselves, and perhaps even for any form of organic complexity.” Gribbin and Rees, “Cosmic Coincidences”, p. 269

    So Dr. Torley and kf that is a very, very, basic outline of the problem to give you some food for thought,,

  5.
    bornagain77 says:

    kf are these the references you were talking about:

    Roger Penrose discusses initial entropy of the universe. – video
    http://www.youtube.com/watch?v=WhGdVMBk6Zo

    The Physics of the Small and Large: What is the Bridge Between Them? Roger Penrose
    Excerpt: “The time-asymmetry is fundamentally connected with the Second Law of Thermodynamics: indeed, the extraordinarily special nature (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the “source” of the Second Law (Entropy).”
    http://www.pul.it/irafs/CD%20I.....enrose.pdf

    How special was the big bang? – Roger Penrose
    Excerpt: This now tells us how precise the Creator’s aim must have been: namely to an accuracy of one part in 10^10^123.
    (from the Emperor’s New Mind, Penrose, pp 339-345 – 1989)
    http://www.ws5.com/Penrose/

    Infinitely wrong – Sheldon – November 2010
    Excerpt: So you see, they gleefully cry, even [1 / 10^(10^123)] x infinity = 1! Even the most improbable events can be certain if you have an infinite number of tries.,,,Ahh, but does it? I mean, zero divided by zero is not one, nor is 1/infinity x infinity = 1. Why? Well for starters, it assumes that the two infinities have the same cardinality.
    http://procrustes.blogtownhall.....rong.thtml

    This 1 in 10^10^123 number, for the time-asymmetry of the initial state of the ‘ordered entropy’ for the universe, also lends strong support for ‘highly specified infinite information’ creating the universe, since:

    “Gain in entropy always means loss of information, and nothing more.”
    Gilbert Newton Lewis – Eminent Chemist

    “Is there a real connection between entropy in physics and the entropy of information? ….The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental…”
    Tom Siegfried, Dallas Morning News, 5/14/90 – Quotes attributed to Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin in the article
    http://www.bible.ca/tracks/dp-lawsScience.htm

    Thermodynamic Argument Against Evolution – Thomas Kindell – video
    http://www.metacafe.com/watch/4168488

  6.
    bornagain77 says:

    Dr. Torley, it seems Dr. Sheldon’s ‘Infinitely Wrong’ article is no longer at that link I cited but may be found on this page, about the third article down:

    http://procrustes.blogtownhall.com/page1

  7.
    kairosfocus says:

    DrBot

    Thanks for a usefully stimulating comment, even if you had to snatch a few minutes from a busy day for it.

    Pardon a few notes:

    1: If you take a look at the silicon microarchitecture of modern processors they are, apart from the orderly memory, a mess . . . . they are designed by computers.

    I disagree, a bit. Algorithms are developed and programs are written by programmers; these are validated, then run, creating a constellation of interacting modules to allow speculative, out-of-order instruction execution, pipelines of astonishing depth, parallel processing, etc. The computers running these programs have no goals and no intentions; they simply execute instructions, closing and opening electrical circuits.

    The modern equivalent of Leibniz’s mill wheels grinding away at one another mindlessly. And so, programs have no common sense: GIGO still obtains, unless someone was clever enough to write an error trap that catches the problem before it wreaks havoc — like that reversed solidus in comment no 1.

    All of the intelligent, functional organisation came in from without. And though the rumours that Uncle Billy was seen buying up banana plantations over in Central America are not true, some would suggest that that is not too far from the truth.

    (And nope, I am actually allergic to raw bananas: they have to be boiled, baked or the like before I can eat them.)

    But even so, very imperfect design — including rather clumsy or convoluted text of posts — is still design and it is still detectable by the inference filter.
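For concreteness, the decision logic of that inference filter can be sketched in code. This is only a toy rendering: the function name, the probability inputs and the 500-bit threshold are my own illustrative choices, not Dembski’s formal apparatus.

```python
def explanatory_filter(p_by_law, p_by_chance, specified_info_bits,
                       threshold_bits=500):
    """Toy sketch of the design-inference (explanatory) filter.

    p_by_law: probability the pattern is forced by known lawlike necessity
    p_by_chance: probability the pattern arises by chance
    specified_info_bits: functionally specific information content, in bits
    threshold_bits: a conservative universal-probability bound (~500 bits)
    """
    # Step 1: does lawlike necessity credibly account for the pattern?
    if p_by_law > 0.5:
        return "necessity"
    # Step 2: is chance credible, given the probabilistic resources?
    if specified_info_bits < threshold_bits or p_by_chance > 2.0 ** -threshold_bits:
        return "chance"
    # Step 3: the pattern is both complex and specified -> infer design
    return "design"

# A simple repetitive pattern is assigned to necessity; a chance-improbable,
# functionally specific 1000-bit pattern passes the bound and reads "design".
print(explanatory_filter(0.9, 0.5, 10))             # prints: necessity
print(explanatory_filter(0.0, 2.0 ** -1000, 1000))  # prints: design
```

The point of the ordering is the one made in the original post: design is inferred only after necessity and chance have each had first claim.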

    2: We use our intelligence to specify target behaviours and create processes to generate systems that meet those requirements but the resulting systems can be difficult for us to understand.

    30+ years back, so was a hand-drawn complex circuit diagram for a storage tube cathode ray oscilloscope.

    3: I was wondering (rather vaguely 🙂 ) how the limited abilities of human designers link into the chain of reasoning that allows us to infer that we were designed, and if it has any implications at all?

    We are designers, and that is what is relevant. That we are finite and fallible does not change that fact — just, it means that we have to spend a fairly long time debugging and troubleshooting in a multiple, interacting fault environment to get the complex system right. (The echo of remembered frustration and long hours tracking down yet another subtle bug, is real.)

    4: is it reasonable to infer that the creator might have created mechanisms to aid further creation – I realise I’m skimming dangerously close to the idea of theistic evolution here but the question is independent of evolutionary arguments – there are plenty of other mechanisms we can conceive of that can aid a designer!

    Actually, modern Young Earth Creationists often believe that the ability to vary to fit niches within more or less taxonomic families [cats, dogs, etc.] is part of the original design. Much of that happens not so much by injection of additional information through mutations as by isolation and extraction of specialised sub-populations from an original blended population, which seems to be a good part of how dogs came from the original dog-wolf.

    The immune system seems to use targeted random search strategies.

    Robustness due to adaptability is a reasonable design goal, if you can get it. Hard for us to do just yet.

    5: How do we know that other higher animals can’t reason and use abstract symbols in the same way, just not to our level. In other words, could it be that we are just (much) further along a continuum of cognitive abilities (rooted in embodied brains), rather than on the other side of a wall necessitated by something extra.

    You will note that my point was that there is a relative difference, and that the spectrum’s extremes between typical animals and people shows that mere embodiment and having brains does not explain the ability to do technically sophisticated reasoning, analyses and designs that rely on proficiency with abstract symbols and concepts.

    Fundamentally, if we were using Tigerton’s laws of motion, or Dolphinstein’s theory of relativity, or Chimpck’s quantum theory, it would not affect the basic point. It is not the mere brain, but the quality of mind and knowledge that count.

    Notice how, when I turned to computer engineering, I pointed out how most people who use the machines don’t understand them in detail, nor are they able to design or develop them. That takes deep knowledge, high skill in analysis and synthesis, and years of experience in the disciplines. In short the issue is not having a brain or a body, but having a mind.

    And, the Derek Smith model — looks like I am going to have to do a mind-body issues and design theory foundation post at some point, DV — points out ways in which a two-tier controller model allows the brain-body subsystem to serve as an input-output, multiple-input multiple-output smart control loop that is then supervised by a higher order controller. I suspect that higher order controller can in some cases be done in silicon and software, in others may reside in different aspects of brains, and in yet others may be open to a mind of fundamentally different substance from atomic matter, one that influences matter through some sort of quantum gateway.
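Since the Smith model is stated in control terms, here is a minimal toy loop in code. To be clear, this is my own sketch of the general two-tier idea (an inner corrective loop whose goal is revised by a supervisory loop), not Smith’s actual model; all names and gains are illustrative.

```python
def inner_loop(state, setpoint, gain=0.5):
    """Lower-tier controller: fast proportional correction toward its setpoint."""
    return state + gain * (setpoint - state)

def supervisor(setpoint, performance_error, meta_gain=0.2):
    """Higher-tier controller: slower loop that revises the inner loop's goal."""
    return setpoint - meta_gain * performance_error

state, setpoint, target = 0.0, 5.0, 3.0   # the supervisor's real goal is 3.0
for _ in range(50):
    state = inner_loop(state, setpoint)             # embodied, moment-to-moment loop
    performance_error = state - target              # judged against the higher goal
    setpoint = supervisor(setpoint, performance_error)

print(round(state, 2))  # prints: 3.0 -- the supervised system settles on the higher goal
```

The inner loop only ever chases its current setpoint; the supervisory tier is what keeps revising that setpoint toward the larger goal, which is the division of labour the model proposes.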

    If you are inclined to doubt me on this, consider how dark matter swamps out the palpable atomic matter on the cosmic scale and seems to be non-electromagnetic, if the Bullet Cluster galaxy collision, with its dark halo separate from the X-ray emitting gas, is to be believed.

    Dark energy is similarly mysterious, and between the two, we are looking at about 4% of the detected cosmos that we know anything of serious substance about.

    Here is wiki on the Bullet cluster:

    The most direct observational evidence to date for dark matter is in a system known as the Bullet Cluster. In most regions of the universe, dark matter and visible material are found together,[29] as expected because of their mutual gravitational attraction. In the Bullet Cluster, a collision between two galaxy clusters appears to have caused a separation of dark matter and baryonic matter. X-ray observations show that much of the baryonic matter (in the form of 10^7–10^8 Kelvin[30] gas, or plasma) in the system is concentrated in the center of the system. Electromagnetic interactions between passing gas particles caused them to slow down and settle near the point of impact. However, weak gravitational lensing observations of the same system show that much of the mass resides outside of the central region of baryonic gas. Because dark matter does not interact by electromagnetic forces, it would not have been slowed in the same way as the X-ray visible gas, so the dark matter components of the two clusters passed through each other without slowing down substantially. This accounts for the separation. Unlike the galactic rotation curves, this evidence for dark matter is independent of the details of Newtonian gravity, so it is held as direct evidence of the existence of dark matter.[30] Another galaxy cluster, known as the Train Wreck Cluster/Abell 520, seems to have its dark matter completely separated from both the galaxies and the gas in that cluster, which presents some problems for theoretical models.[31]

    Frankly, we do not begin to know enough about the cosmos to be materialists with any confidence. The exotic stuff we are beginning to know about is already 25 times the familiar stuff we know!

    So, there is a lot of room for a real mind that has real interactions with a brain-body system.

    And, with our seeing — ever since Ein-/Dolphin-stein — that matter is interconvertible with energy, we know that matter is inherently contingent. Indeed, that is a part of our big bang model of the origins of atomic matter. That, in the end, calls for the root cause of a material cosmos (even through a multiverse) to be a necessary, non-material being. One who, on the local isolation and precision fine tuning of our cosmos, is capable of specific, complex and subtle intelligent design. A mind before all matter, in short, and the ground of all matter.

    Sure, such a being is mysterious. But in a cosmos riddled with dark matter and dark energy, we should be getting used to that by now.
    ________________

    But, anyway, this stuff is all based on the basic inference to design.

    The prime question is, does the original post help us understand that inference and its warrant in a scientific context?

    If there are gaps or obscurities, where and what do you think should be done about them?

    GEM of TKI

  8. 8
    kairosfocus says:

    BA 77:

    Thanks.

    Also, I found out the bug: I had manipulated the Adobe flash settings manager control page parameters a bit too aggressively, and set the caches on my PC to zero.

    When I thought on how I saw the problem in my no 2 browser, Safari, it dawned: it has to be in-common software for vids . . . Flash.

    And yes there is an Adobe page that will have in it your flash video downloads and look-ats etc. (What, you didn’t know that? Until not so long ago, neither did I — thought Flash lived completely on my machine, like a good little download. Better take a look and see what they have on you!)

    Anybody got a 3rd party Flash viewer that does not play games like that?

    GEM of TKI

  9. 9
    kairosfocus says:

    Folks:

    Did a quick Google search.

    Found this criticism as the first to show up under the post title:

    ID Foundations: The design inference, warrant and “the” scientific method

    Try using the explanatory filter on the Old Testament….in fact, you can’t because the data is obviously fiction, epic poetry. The irony is that a design sense (but not necessarily ‘intelligent’ design) is present in the Axial Age as a whole.

    Sounds like the old red herring led away to a Creationist strawman to be soaked in oil of ad hominems and ignited to me.

    On the technical point, the OT has in it hundreds and hundreds of pages of material, replete with dFSCI, leading to the inference that it is designed. Its text, whether in Hebrew and Aramaic, or Septuagint, or translations including English, is also directly known to be intelligently and intentionally configured to express a particular message.

    So, it is intelligently designed on inference and on observation; i.e. yet another case supporting the correctness of the design inference on signs such as FSCI.

    Of course the focus of the comment was a bit of mockery based on twisting the meaning of “intelligent,” showing the objector’s contempt. If that is what the objector has had to resort to, then the point in the original post above is well made.

    And, methinks there are many eminently qualified scholars who would beg to differ with the broad-brush dismissive evaluation of the Bible being given above.
    ________________

    ADDED: Cf Hugenberger on historicity of OT esp, here

    Also, Gaskell’s notes on Modern Astronomy, the Bible and Creation, here (for which he has been subjected to disgraceful “expulsion”)

    For the NT and gospel, cf here, noting here on the general question of building a sound worldview
    _________________

    But that is not a focus for this blog thread.

    GEM of TKI

    _____________

    (F/N: Sir Darwiniana, do kindly look here to the take-down of the NCSE for its endorsing the ID = Creationism smear. Onlookers, see why first we had to do some rubble clearing?)

  10. 10
    DrBot says:

    KF, thanks for the considered response; if you will forgive me, I’ll respond briefly on a few points (I’m up late, just finished writing a lecture on AI, now I need to sleep!)

    My phrasing ‘designed by computers’ was not implying that humans weren’t the ultimate source of design, I was highlighting how we use computers (and other tech) to perform design tasks for us – in particular, nowadays, design tasks that are seemingly intractable when approached with a pen and paper (i.e. created by a person from the bottom up). In the case of computers they, and their design synthesis and optimisation algorithms, can generate vastly complex systems from much simpler specifications (provided by us) using mechanistic rules (designed by us).

    I agree that we are ultimately the designer, but we can (or can we?) usefully use the word designer to refer to the automated system – we create the rules, the computer generates the design – from analysis of this design we can infer that at some point in the causal chain an intelligence was involved.

    Going back to computer cores for the moment – people don’t ‘design’ masks for etching microprocessors any more, we design systems that perform this task for us. The person who gave the lecture that highlighted this was the eminent computer scientist Professor Stephen Furber – he used the words (paraphrasing) ‘back when we created the ARM processor we designed these by hand but now they are just so complex that they have to be designed by computers – it is too hard for us’. Is it valid or useful to use this language?

    This leads to a couple of interesting questions. The first, already asked in a way, is: can we conceive of an ultimate creator that, rather than designing everything from the bottom up, creates mechanisms (they don’t have to be material in the sense of functioning in our material universe) to generate designs on its behalf – Can or does God use engines of creation?

    The second question following from this is: is it possible, when we infer design, to start to examine whether these hypothetical engines of creation were involved – is some of the complexity we see (and consist of) a result of, forgive the phrase, a ‘sub-designer’?

    Quickly, on this bit (because I’m rambling on more than I intended 😉 )

    the spectrum’s extremes between typical animals and people shows that mere embodiment and having brains does not explain the ability to do technically sophisticated reasoning

    I’m just not convinced that this reasoning holds up on its own – why can’t we just be further along a path – why does the extreme distance necessitate some extra (new and unique) stuff and not just some orders of magnitude more of the existing stuff?

    One thing I’ve learned from studying AI is that embodiment is (in some camps) regarded as critical for intelligence – it is needed to ground abstract symbols in the real world (But I’m not sure I agree yet!)

  11. 11
    bornagain77 says:

    Please excuse me, kf and DrBot, but I have a few points that may be of interest:

    The following is an excellent recent interview with Dr. Marks, beginning about 5 minutes into the podcast. Robert Marks explains exactly why Artificial Intelligence, as it was originally conceived, has been a failure: it turns out computers cannot create ‘information’ as ‘minds’ can…

    Robert J. Marks II interview with Tom Woodward, on “Darwin or Design?”
    http://podcast.den.liquidcompa.....38;event_i

    Here are a few of the papers Marks-Dembski have published:

    LIFE’S CONSERVATION LAW – William Dembski – Robert Marks – Pg. 13
    Excerpt: Simulations such as Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and Schneider’s ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them.,,, Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case.
    http://evoinfo.org/publication.....ation-law/

    Conservation of Information in Computer Search (COI) – William A. Dembski – Robert J. Marks II – Dec. 2009
    Excerpt: COI puts to rest the inflated claims for the information generating power of evolutionary simulations such as Avida and ev.
    http://evoinfo.org/publication.....nt-reason/

    Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism – Dembski – Marks – Dec. 2009
    Excerpt: The effectiveness of a given algorithm can be measured by the active information introduced to the search. We illustrate this by identifying sources of active information in Avida, a software program designed to search for logic functions using nand gates. Avida uses stair step active information by rewarding logic functions using a smaller number of nands to construct functions requiring more. Removing stair steps deteriorates Avida’s performance while removing deleterious instructions improves it.
    http://evoinfo.org/publication.....gic-avida/

    Evolutionary Informatics – William Dembski & Robert Marks
    Excerpt: The principal theme of the lab’s research is teasing apart the respective roles of internally generated and externally applied information in the performance of evolutionary systems.,,, Evolutionary informatics demonstrates a regress of information sources. At no place along the way need there be a violation of ordinary physical causality. And yet, the regress implies a fundamental incompleteness in physical causality’s ability to produce the required information. Evolutionary informatics, while falling squarely within the information sciences, thus points to the need for an ultimate information source qua intelligent designer.
    http://evoinfo.org/

    “Computers are no more able to create information than iPods are capable of creating music.”
    Robert Marks
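For readers new to the Dembski–Marks vocabulary used in the excerpts above, ‘active information’ is just the advantage, in bits, that an assisted search has over blind search. A minimal worked example (the numbers are purely illustrative):

```python
import math

def active_information(p_blind, q_assisted):
    """I+ = I_endogenous - I_exogenous = log2(q/p): bits of advantage that an
    assisted search enjoys over blind search for the same target."""
    endogenous = -math.log2(p_blind)     # difficulty of the unassisted search
    exogenous = -math.log2(q_assisted)   # residual difficulty with the aid in place
    return endogenous - exogenous

# Blind search for a single 100-bit target succeeds with p = 2^-100.
# Suppose a fitness oracle lifts the success probability to 2^-10: the oracle
# has contributed 90 bits of active information to the search.
print(active_information(2.0 ** -100, 2.0 ** -10))  # prints: 90.0
```

On the Dembski–Marks accounting, those 90 bits did not come from the search itself; they were smuggled in with the oracle, which is the point the papers press against WEASEL, Avida, ev and the like.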

    further note:

    The Law of Physicodynamic Insufficiency – Dr David L. Abel – November 2010
    Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.”
    http://www.scitopics.com/The_L.....iency.html

    further note:

    Though the authors of the ‘Evolution of the Genus Homo’ paper appear to be thoroughly mystified by the fossil record, they never seem to give up their blind faith in evolution despite the disparity they see first hand in the fossil record. In spite of their philosophical bias, I have to hand it to them for being fairly honest with the evidence though. I especially like how the authors draw out this following ‘what it means to be human’ distinction in their paper:

    “although Homo neanderthalensis had a large brain, it left no unequivocal evidence of the symbolic consciousness that makes our species unique.” — “Unusual though Homo sapiens may be morphologically, it is undoubtedly our remarkable cognitive qualities that most strikingly demarcate us from all other extant species. They are certainly what give us our strong subjective sense of being qualitatively different. And they are all ultimately traceable to our symbolic capacity. Human beings alone, it seems, mentally dissect the world into a multitude of discrete symbols, and combine and recombine those symbols in their minds to produce hypotheses of alternative possibilities. When exactly Homo sapiens acquired this unusual ability is the subject of debate.”

    The authors of the paper try to find some evolutionary/materialistic reason for the extremely unique ‘information capacity’ of humans, but of course they never find a coherent reason. Indeed, why should we ever expect a process which is utterly incapable of generating any complex functional information at even the most foundational levels of molecular biology to suddenly, magically, acquire the ability to generate our brain, which can readily understand and generate functional information? A brain which has been repeatedly referred to as ‘the Most Complex Structure in the Universe’? The authors never seem to consider the ‘spiritual angle’ for why we would have such a unique capacity for such abundant information processing.

    Genesis 3:8
    And they (Adam and Eve) heard the voice of the LORD God walking in the garden in the cool of the day…

    John 1:1
    In the beginning, the Word existed. The Word was with God, and the Word was God.

    The following video is far more direct in establishing the ‘spiritual’ link to man’s ability to learn new information, in that it shows that SAT (Scholastic Aptitude Test) scores for students declined steadily for seventeen years, from at or near the top spot in the world, after the removal of prayer from the public classroom by the Supreme Court in 1963. Whereas the SAT scores for private Christian schools have consistently remained at or near the top spot in the world:

    The Real Reason American Education Has Slipped – David Barton – video
    http://www.metacafe.com/watch/4318930

    The following video, which I’ve listed before, is very suggestive of a ‘spiritual’ link in man’s ability to learn new information, in that it shows that almost every, if not every, founder of each discipline of modern science was a devout Christian:

    Christianity Gave Birth To Science – Dr. Henry Fritz Schaefer – video
    http://vimeo.com/16523153

  12. 12
    kairosfocus says:

    DrBot:

    I will briefly follow up before going off to help a son with his math HW, having had to help with the astronomy part of geography just a bit earlier. [Turns out he had an impossible triangle construction to do.]

    We are saying the same thing on spectrums, just with different emphases: my point hinges on the fact of the spectrum and that whether we are within humans or going across species lines, it is not embodiment as such that is the crucial point of comparison but mental function. That — as I pointed out too — would hold in a world where the Newton analogue was a Tigeroid, the Einstein analogue a Delphinoid, and the Planck analogue a Chimpoid.

    When it comes to the designs, speaking of the PCs as designers is comparable to the people who talk to their PCs, pleading with Word to give them back their document. There ain’t no smarts dere dat wazn’t put in.

    PCs, as we both know, are passive, dynamically inert machines that are organised into complex combinations of parts that, under certain initial and onward intervening conditions, will carry out algorithms that we find useful.

    Smarts in, smarts out, and GIGO, too.

    As to the MPU designs, my understanding — haven’t been keeping in close touch since we started going well beyond 1 mn transistors on a chip — is that basically we have a hierarchy of modules, from gates up to subsystems, and we have algorithms for making sure the interconnexions are right. Yup, the masks used to be made by hand, and are so complex now they cannot be made by hand, and we probably have to interface as users at a very high level [BTW, how are the micro stripline techniques keeping up with the RF wave effects . . . or are there heuristics that allow us to set rules of thumb to minimise the issue?], but there is nowt there that is not in principle already there in any automation.

    The PC is carrying out a detailed programme, but it has no common sense. We have to set it up right, and make sure it keeps right every step of the way.

    On engines of creation, some would view the fine tuning of the cosmos as an engine of creation.

    Chance variation is simply not capable of generating FSCI for the reasons laid out above: too much config space, too fast. Smart heuristics or beacons or maps would have to be built in. And that is why most evolutionists are theistic evolutionists when pressed hard enough.
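The “too much config space, too fast” claim is back-of-envelope arithmetic and can be checked directly. The resource figures below are the commonly cited order-of-magnitude estimates (about 10^80 atoms, about 10^43 Planck-scale events per second, about 10^17 seconds), not precise values:

```python
import math

# Search resources of the observed cosmos vs. a 1000-bit configuration space.
atoms = 1e80                # rough count of atoms in the observable universe
events_per_atom_s = 1e43    # ~1 state change per Planck time, per atom
seconds = 1e17              # rough age of the cosmos

max_trials = atoms * events_per_atom_s * seconds    # ~1e140 elementary events

n_bits = 1000
log10_configs = n_bits * math.log10(2)              # a 1000-bit space: ~10^301 states

log10_fraction = math.log10(max_trials) - log10_configs
print(round(log10_fraction))  # prints: -161
# i.e. spending every event in cosmic history on the search samples about
# 1 part in 10^161 of the space -- the force of the "config space" objection.
```

The exact estimates do not matter much: shifting any resource figure by ten orders of magnitude still leaves the searchable fraction negligibly small.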

    In education circles, the issue of embodiment is that abstraction is based on the concrete. But, a PC should tell us that unless there is an abstraction engine there with the capacity to do it in the first place, no go, sir. Napoleon once took a complaining officer to some mules and told him: these two have been with me on every campaign, but are still mules. No ability for reflective observation, inference and warrant — as the b/g note discussed — means no capacity. This capability is distinctly mental.

    I suspect this may be a point in favour of Plato’s idea of forms and the world of forms.

    I repeat, we simply do not know enough about the cosmos to be materialists yet; and evolutionary materialism is inherently self referentially absurd, on multiple grounds. Materialism is a quasi-religion living off promissory notes that it simply cannot redeem.

    GEM of TKI

  13. 13
    kairosfocus says:

    F/N: Intelligent design, by itself as a scientific endeavour, has no commitment on the nature of intelligence. It is an inference to design as artifact, not to the intelligence behind the design. As a matter of philosophy, the cosmological and teleological issues on our contingent, fine tuned cosmos point to a necessary being who is architect of the cosmos. Since matter as we know it is contingent, that necessary being cannot be of material substance like that of our world.

  14. 14
    kairosfocus says:

    BA: useful points to ponder, as usual. Robert Marks in particular is one real bright boy. G

    PS: For those who were put off by the objection elsewhere that to excerpt and post significant materials is not an argument: it plainly is. I have learned some very useful things from BA’s video scoops and quotes.

  15. 15
    DrBot says:

    KF, briefly because I have a busy day and probably won’t get a chance to participate again for a few …

    I realised I answered my own question last night re does/did God create engines of creation. The answer is clearly yes, because we were created (designed) but are also capable of design (creating) – we are one of those engines. I guess from the perspective of computers and design the question is then – can we create designers? And more specifically, can we create designers that can create designers? (etc., etc.)

    To re-phrase – can we ‘put in’ to our creations this ingredient that allows us to function in the way we do?

    Relating this back to animals for a moment – I still don’t think your answer regarding our advanced mental abilities necessitates us being separate from, rather than just more advanced than, other animals – remember they were created as well and could have the same ‘ingredients’, just to a lesser degree or not fully enabled.

    Let’s go back a moment:

    the spectrum’s extremes between typical animals and people shows that mere embodiment and having brains does not explain the ability to do technically sophisticated reasoning

    This implies that something else is required for us to reason (beyond just a physical brain and body) – this something may also be present in other animals! This takes us back to classic issues of philosophy and the problems of introspection: how do we tell if something else has a mind (consciousness) when we have no empirical measure as yet? (e.g. John Searle and the Chinese Room problem in AI) All we can do at the moment is talk to other people and infer that they have minds like us; we can’t talk to animals in any meaningful way (yet) or, critically, experience their world – what it is truly like to be them.

    Animals do solve problems and some even create objects. I don’t think we are able to say with any certainty yet that they don’t use some form of reasoning, or even employ symbolic abstractions in some primitive way (given our tendency to anthropomorphise things in our world, it is also hard to study!). It is for this reason that your argument, that our mental abilities imply something extra because of separation, isn’t warranted – indeed, I don’t think it is necessary for your wider argument!

  16. 16
    gpuccio says:

    vjtorley:

    As usual, you raise very interesting points. I would like to add some personal comments about them to the very good work already done by kf.

    “So my question is: how would you explicate the difference between the way in which the laws of nature are designed, and the way in which FCSI is the product of design?”

    I would say that we have to distinguish between a designed algorithm which generates regular outputs, and its outputs.

    While the outputs, which exhibit some form of regularity, and are therefore compressible and explainable by a necessity mechanism (the algorithm), do not exhibit FSCI, the algorithm which generates them, if complex enough, does.

    IOWs, a computer, including its software, is certainly an example of a designed object, exhibiting a lot of FSCI.

    A computer’s output, even if very complex, can anyway be explained by the computer which generates it (including all the input information). Even if the output were more complex than the computer itself, it could anyway be explained by the system which has, by necessity, generated it, and therefore its K complexity would at most be equal to the initial complexity of the system.

    That’s, IMO, the fundamental limit of necessity: it cannot create new, truly original complex information.
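gpuccio’s compressibility point can be illustrated with an off-the-shelf compressor standing in as a crude, standard proxy for Kolmogorov complexity: a large but regular output shrinks to roughly the size of its generating rule, not its own length.

```python
import zlib

# A deterministic generator: a megabyte of "regular" output from a tiny rule.
output = ("ATCG" * 250_000).encode()   # 1,000,000 bytes of patterned data

compressed = zlib.compress(output, level=9)
print(len(output), len(compressed))
# The million-byte output compresses to on the order of a kilobyte: its
# (approximated) K-complexity is bounded by the generator, not the output size.
```

A genuinely incompressible megabyte, by contrast, would stay near a megabyte under any compressor, which is why regular algorithmic output and truly original complex information are not the same thing on this view.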

    Obviously, a follower of strong AI would object that humans are computers too, and that therefore their outputs are the result of necessity.

    But that is simply false. Strong AI is simply the most stupid theory ever conceived.

    Because the difference is in a simple word: consciousness.

    Consciousness cannot be explained in terms of mechanisms and necessity. It is a completely different level of reality.

    And the amazing ability of us humans to generate FSCI is absolutely related to our being conscious, intelligent, free beings. That is obvious in our direct perceptions, but is also supported by the fundamental observation that true FSCI is never found in any non-conscious system.

    So, just to go back to an old example: could a computer write Hamlet, or something similar?

    The answer is: no. Not without first having Hamlet as input, or as an oracle in its software.

    The reason is simple: Hamlet is a very complex bundle of meanings, feelings, purposes, and beauty. That is its true structure.

    Only a conscious intelligent being can have representations of meanings, feelings, purposes and beauty. Those concepts cannot even be defined without a reference to conscious representations.

    Therefore, a complex output whose intrinsic structure is fully dedicated to expressing those concepts in a rich and satisfying and unique form can never, never come out of any system which does not include a conscious, intelligent agent who can represent those states and then is able to express them through matter.

    So, my point is very clear: a computer will never become conscious, will never represent meanings and feelings and purposes, and threfore will never write Hamlet.

    IOWs, a computer will never generate new, truly original FSCI.

    So, to answer your original question: to speak of the laws of nature is a difficult task, because anyway it implies a regress to a “pre-observed universe” condition. It can be done, but it inevitably implies strong philosophical choices.

    That said, I could imagine the laws of nature as some algorithm which rules the manifested universe as its output (or at least the necessity part of it). I believe they are designed, but to affirm that in terms of the concept of FSCI is not a simple task, because we have really no definite idea of what those laws are, of how they work, and least of all of their complexity. The cosmological argument, in terms of the search space of the fundamental constants, is a very good argument, but IMO it still leaves many open problems. It is good, but not as good and purely empirical as the argument for design in biological beings.

  17. 17
    kairosfocus says:

    Hi GP:

    Quite good thoughts as usual.

    FSCI, especially when it is digitally coded — notice my addition overnight to point 10 of the original post [and the remarks in points i to k of b/g note 1 on the implications of the communication network as a system, once we have received, recognised and decoded a message: this is an inference to design in the face of the abstract possibility of “lucky noise”] — is a pretty direct index of mind at work, at some level in the chain of causes. That is, sufficiently complex and meaningful clusters of symbols of language [and phonemes are as much symbols as are letters or ideograms and numerals], whether used to communicate or to provide data and give instructions for an algorithmic process, bespeak an intentional, choosing, acting mind at work.

    And yes, the computer shows how we may automate the process, as the numerically controlled machine tool did before, or, going back to C18, the Jacquard loom. Similarly, the cam-bar driven mechanical device — going back to C18 [and beyond, to antiquity] and used to make automatons — is also a programmed entity, but the information there is analogue and non-verbal.

    (NB: That is a part of the motivation for my discussion here on how 2-d and 3-d networks of nodes, arcs and interfaces can be reduced to digitally coded FSCI. Indeed, Babbage’s analytical engine, 1837 – 1871, could be seen as a digitalisation of the cam-bar type automaton, as a general, programmable calculating device, i.e. a computer. Unfortunately, even though the attempt drastically advanced machining technology and was apparently at least marginally feasible, wiki very properly laments that “funding and political support” on an adequate scale were not there. The time for big, gov’t funded science was not yet.)

    We might even profitably discuss how an algorithm could be set up to establish the physics of a life-habitable cosmos. Or even to set up a multiverse that scans the domain of possibilities in the neighbourhood of our sub-cosmos, in such a way that we get life-viable sub-cosmi. (I make the detour through the multiverse in anticipation of a rhetorical counter; I hold that on Occam’s razor, in absence of direct evidence of such a multiverse, we have no good reason to infer to a multiverse. A quasi-infinite multiplication without necessity is cut away by Occam on steroids. That is as opposed to the possibility that the designer/architect and builder of our cosmos might have good reasons to build other cosmi. In that sense, the biblical view traditional in our culture is a multiverse view, as, e.g. heaven is obviously seen therein as another world that seems to be able to be present to and intersect with space-time in our own.)

    I agree that consciousness is a very distinct part of our experience (as well as that of some higher animals, it seems), and that it is certainly not true of present computers and those — digital or analogue [a cam-bar is a program! and, an analogue computer is a computer!] — programmable automata we call robots. I add, that in our case, it is joined to a superlative degree of verbal-linguistic, logical-analytical and imaginative ability. We can literally create model-worlds in our heads, and envision what it would be to live in them — BTW, a gateway to both the gedankenexperiment so beloved of Einstein, and today’s scientific visualisation simulations.

    [The potential problem being when we lose the ability to distinguish such an imaginative world from the one we live in; hence also that collective, manipulated madness that Plato described in his Parable of the Cave, on false vs true enlightenment and the implications for not only epistemology and metaphysics but the socio-political sphere. Was that what the AZ shooter of sad recent events, was thinking about on his conscious dreaming metaphor?]

    Since it is so mysterious, and in light of the Derek Smith model, I am not so sure that we will not be able to eventually find a way to trigger this. Our own existence — including here the eloquent testimony of the dFSCI in our DNA — shows that it is POSSIBLE to create a conscious, physically instantiated entity. Biological reproduction shows that it is possible for such entities to reproduce themselves, yielding future generations of such conscious creatures.

    How twerdun, I know not, but that would be a wonderful discovery for AI, if it ever can attain to that.

    The ultimate design, I would say: R Daneel Olivaw. (Though I rather doubt the need for positrons!)

    Certainly the Derek Smith model provides a general architecture for such an entity.

    GEM of TKI

  18. 18
    kairosfocus says:

    Onlookers (and Dr Bot):

    It will take a while to properly respond on points to Dr [I, Ro]Bot — I have already set up my Safari panel in a parallel window for step by step reference.

    But in the meanwhile, my remarks to GP are a foretaste of where I will be going.

    So, please, enjoy the onward links as a window on a fascinating area of intellectual and technological history.

    Pardon the time to respond in a way that does justice to DrBot’s thoughtful and positive contributions.

    One wishes that more UD threads would develop like this one is.

    G

    PS: Dr Bot, do you have an early prototype of R Daneel hiding in your lab basement? If you have that or anything of significant interest on the design and development of intelligent automata [and any notions of how consciousness can arise beyond, if we are a matter-energy world and consciousness arose spontaneously once, it can do so again — cf here for a sci-fi world that premises off that, the Dahak world and the notion of a galaxy-spanning imperium — the trilogy by Weber shows a moon-size ship that has a computer core that over 50,000 years becomes spontaneously conscious and becomes a pivotal character in a story; available in print and as ebook], why not tell us a bit of the story?

  19. 19
    kairosfocus says:

    Dr [I, Ro]Bot:

    [At least, I assume that is what you are hinting at, pardon. 🙂 .]

    I am thinking that a key part of the fear-factor is the meme that a DESIGN CENTRIC VIEW of science is a progress stopper. An examination of fig A in the OP will show that it should not be. Once a design is identified in nature, that opens up reverse-engineering and forward-engineering our own way. And so, science becomes an exercise in reverse-engineering the world: identifying the principles used to build it and make it work, with the confidence that if ‘twerdun once, we can do it too. And in fact, an honest survey of the rise of modern science will show that this is the basic view of the pioneers over the past 350 – 450 or so years.

    (I note that even the much-despised “fundy” pretrib premil eschatology has a variant by Bloomfield, where our planet’s story is phase I of in effect a cosmos development project. The redeemed humanity becomes a — why not a network of such sites for in effect a federation of races, including the mysterious Angels? — site for infinite expansion across the cosmos through endless ages. In that view, BTW, the New Jerusalem envisioned by John looks astonishingly like a large artificial satellite-port (probably of pyramidical design) as a gateway to the cosmos for our planet! In short, I am suggesting that we call a truce in the culture war and rethink a lot of hostile assumptions.)

    Okay, let us now look at a fascinating set of issues, step by step:

    1: I answered my own question last night re does/did god create engines of creation. The answer is clearly yes because we were created (designed) but we are also capable of design (creating) – we are one of those engines.

    Yes, and that is pregnant with import. It is possible to create embodied, creative designing agents. And in our case, we are also procreative, so the possibility of self-replicating agents a la von Neumann’s self-replicator arises.

    It may even be sensible to base such a self replicator — we are now at the Drexler self-replicating automaton — on a small modular, adaptable unit, the analogue of the living cell. And since carbon is a very versatile element, why not do it with C-tech nanomachines in an artificial cell with a built in storage bank?

    Cyborgs, in short, not just robots.

    But, robots would be interesting too. Just, I think the need for governance controls a la conscience will become vital. Asimov’s 3 laws are relevant. [Think of what a robot suicide bomber with a built in nuke weapon could do. Maybe, that it is hard to do such, is a safeguard to keep us from blowing ourselves up until we sort ourselves out on our dilemma of being finite, fallible, fallen and too often destructively ill-willed.]

    2: from the perspective of computers and design the question is then – can we create designers? and more specifically, can we create designers that can create designers? (etc, etc)

    Providing we can crack the imaginative, self-directing supervisory controller problem. It is plainly doable, for we are like that and to a limited extent so are higher animals.

    Notice, I am here explicitly putting us on a spectrum as autonomous, carbon technology robots that are self-replicating, through a sexual cycle that allows for genetic mix-match.

    This, explicitly, also includes the ability to observe, to infer, and respond actively to the world, taking in feedback on what works and what does not. The observer model in B/G note 1 is not confined to us:

    I: [si] –> O, on W

    Once we are able to observe and infer, we can construct world models and act on them, adjusting to increase success. Thus, we see how entities like that, on an internal education program, can become learning systems.

    (By contrast, we can speculate, the necessary being cosmos Architect would be already deeply knowledgeable, and would probably be able to access all space-time points through some sort of hyper-net. But, that is speculative as already said.)

    3: d mental abilities necessitates us being separate, rather than just more advanced, than other animals – remember they were created as well and could have the same ‘ingredients’, just to a lesser degree or not fully enabled.

    As you will see from my remarks this morning to GP, we agree here. My point was that the analogy used by AIG was fundamentally misdirected.

    Using Tigerton, Dolphinstein and Chimpck physics would not make a difference to the point that the locus of capability in imaginative, powerfully abstract conceptual thought is mental, not bodily. And among humans — with the same basic biological equipment and capacities — only the knowledgeable and skilled need apply for computer engineering jobs.

    Further to this, we know that we know very little about the cosmos as a whole: the dark matter conundrum is decisive. Notice, we have observational evidence from the Bullet Cluster [and the train-wreck cluster], that dark matter acts gravitationally, but apparently NOT electromagnetically. Even the atomic nucleus is as much an electrical as a strong force system: the neutrons dilute down electrostatic repulsions and contribute to the short-range gluing action of the strong force.

    And, Dark matter dwarfs atomic, electrically acting matter on the cosmic scale.

    So, why should it be suddenly so strange and derided to think that there is what we could call a mental substance capable of feeding into the brain-body system and interacting with it?

    Time for materialists to wheel and tun, and come again . . .

    4: This takes us back to classic issues of philosophy and the problems of introspection, how do we tell if something else has a mind (consciousness) when we have no empirical measure as yet? (e.g. John Searle and the Chinese Room problem in AI)

    So, we should keep an open mind, and accept the testimony of the first facts of our experience and observation: we are minded, conscious, enconscienced creatures with FSCI-rich, intricately designed bodies, in a world that also seems to be –SB would say: screams that it is — designed.

    It is only the pall cast by a priori materialism that holds back the force of that fairly obvious and common-sense view.

    5: Animals do solve problems and some even create objects. I don’t think we are able to say with any certainty yet that they don’t use some form of reasoning, or even employ symbolic abstractions in some primitive way

    I agree. I am only pointing to the spectrum to emphasise that it is mindedness, not embodiment, that is the locus of designing ability.

    6: your argument, that our mental abilities imply something extra because of separation, isn’t warranted

    That is not my argument. My point is that, embracing the higher animals as manifesting similar but more primitive forms of mental abilities and consciousness, and observing the diversity among human beings, we can see that it is not embodiment but mindedness that is the true locus of comparison for design.

    So, looking back to the Derek Smith model, we have a lower order input-output MIMO [multiple input, multiple output] control loop — with internal state, and orientation-in-the-world feedback through a proprioception cybernetic loop — supervised by a higher order controller. That cybernetic model is rich with possibilities. Once we have the loop, we can then integrate the higher order subsystem that senses and directs, without being locked up in the loop.
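    For concreteness, here is a toy sketch of such a two-tier loop (purely illustrative: the class names, gains and goal values are my own, not drawn from Smith’s papers). A lower order feedback loop drives the body-state toward a setpoint, while a supervisory controller re-targets that setpoint from outside the loop:

    ```python
    # Toy sketch of a two-tier (Smith-style) cybernetic architecture:
    # a lower-order MIMO-style feedback loop, re-targeted by a supervisor
    # that observes the loop's state but is not locked inside it.

    class LowerLoop:
        """Input-output loop: drives state toward a setpoint via simple feedback."""
        def __init__(self, state=0.0):
            self.state = state

        def step(self, setpoint, gain=0.5):
            error = setpoint - self.state      # proprioceptive feedback
            self.state += gain * error         # corrective action
            return self.state

    class Supervisor:
        """Higher-order controller: holds goals and re-targets the loop."""
        def __init__(self, goals):
            self.goals = list(goals)

        def current_goal(self):
            return self.goals[0] if self.goals else 0.0

        def review(self, state, tol=0.05):
            # When the current goal is achieved, move on to the next one
            if len(self.goals) > 1 and abs(state - self.goals[0]) < tol:
                self.goals.pop(0)

    loop, sup = LowerLoop(), Supervisor([1.0, -0.5])
    for _ in range(40):
        s = loop.step(sup.current_goal())
        sup.review(s)
    print(round(loop.state, 2))  # settles near the final goal, -0.5
    ```

    The design point is the separation: the lower loop only minimises error; choosing and sequencing the goals lives in the supervisor, which is the slot the discussion above reserves for the self-directing element.
    
    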

    Bring to bear the now more or less observational fact that we know there is at least one more class of substance in our cosmos, dark matter.

    Just for fun, put in the Penrose Hameroff hypothesis of gravitonic, influencing and informational interaction at neural microtubule level. (Maybe it works another way, but this allows us to at least think and discuss in terms of what we can observe to date. Remember, there is more dark matter around than atomic matter.)

    And, voila, we have a viable crude model for a minded, embodied entity that has a mind that is not merely emergent from the body and supervenes on it without causal efficacy.

    And, what if mind is another substance entirely, that still has the capabilities for informational interaction with the brain-body MIMO cybernetic entity?

    Just to be provocative, let us call that substance: SPIRIT or SOUL.

    Do we not see that it might be possible to integrate such with a brain-body loop through quantum level interfaces, along which qu-bits travel back and forth happily? Giving us massively parallel processing power.

    And, in the context of somehow being self-conscious and self-directing [taken as plausible facts of introspection of conscious being . . . on the Feyerabend principle that if it looks fruitful, add it to the scientific toolbox, without locking into any hard and fast set of tools, techniques and principles that define all and only scientific methods], do we not now see that agent cause is a reasonable thing?

    ______________

    At this point, we are deep into gedankenexperiment type speculations, but the SCIENTIFIC point is that if we do not re-open our imaginative space to think about possibilities and embrace credible and relevant facts, we cannot confidently infer to a truly reasonable best [albeit provisional] explanation.

    So, let us re-open our minds.

    GEM of TKI

    PS: I forgot, the 5th Imperium sci-fi series has also another class of less than virtuous conscious computing engines that captivate an entire race into a high tech Plato’s Cave world that turns them into cosmic scale destructive monsters. In short, once self-directing machines are in our imaginative prospect, ethics is dead centre as a serious issue.

  20. 20
    kairosfocus says:

    F/N: This, from my always linked online note App 8 [HT: Frosty], may also be stimulating:

    _______________________

    >> 7 –> Further, as UD commenter Frosty pointed out in the linked UD thread, Leibnitz long ago highlighted one of the key challenges to an emergentist, property- and/or emanation- of- matter view of perception [and thence consciousness etc.], in The Monadology, 16 – 17. So, giving a little context to see what Leibnitz means by monads etc, and without endorsing, let us simply reflect on what is now probably a very unfamiliar way to look at things; noting his astonishing remarks on the analogy of the mill in no 17:

    1. The monad, of which we will speak here, is nothing else than a simple substance, which goes to make up compounds; by simple, we mean without parts.

    2. There must be simple substances because there are compound substances; for the compound is nothing else than a collection or aggregatum of simple substances.

    3. Now, where there are no constituent parts there is possible neither extension, nor form, nor divisibility. These monads are the true atoms [i.e. “indivisibles,” the original meaning of a-tomos] of nature, and, in a word, the elements of things . . . .

    6. We may say then, that the existence of monads can begin or end only all at once, that is to say, the monad can begin only through creation and end only through annihilation. Compounds, however, begin or end by parts . . . .

    14. The passing condition which involves and represents a multiplicity in the unity, or in the simple substance, is nothing else than what is called perception. This should be carefully distinguished from apperception or consciousness . . . .

    16. We, ourselves, experience a multiplicity in a simple substance, when we find that the most trifling thought of which we are conscious involves a variety in the object. Therefore all those who acknowledge that the soul is a simple substance ought to grant this multiplicity in the monad . . . .

    17. It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions. Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain perception. It is accordingly in the simple substance, and not in the compound nor in a machine that the perception is to be sought. Furthermore, there is nothing besides perceptions and their changes to be found in the simple substance. And it is in these alone that all the internal activities of the simple substance can consist.

    8 –> We may bring this up to date by making reference to more modern views of elements and atoms, through an example from chemistry. For instance, once we understand that ions may form and can pack themselves into a crystal, we can see how salts with their distinct physical and chemical properties emerge from atoms like Na and Cl, etc. per natural regularities (and, of course, how the compounds so formed may be destroyed by breaking apart their constituents!). However, the real issue evolutionary materialists face is how to get to mental properties that accurately and intelligibly address and bridge the external world and the inner world of ideas. This, relative to a worldview that accepts only physical components and must therefore arrive at other things by composition of elementary material components and their interactions per the natural regularities and chance processes of our observed cosmos. Now, obviously, if the view is true, it will be possible; but if it is false, then it may overlook other possible elementary constituents of reality and their inner properties. Which is precisely what Leibnitz was getting at. >>

    _______________________

    Worth at least a thought or two.

  21. 21
    bornagain77 says:

    kairosfocus, since you are very good at math, and deal with extremely low probabilities all the time, I thought you might really appreciate this article trying to put 1 in 10^157 in context:

    The Case for Jesus the Messiah — Incredible Prophecies that Prove God Exists By Dr. John Ankerberg, Dr. John Weldon, and Dr. Walter Kaiser, Jr.
    Excerpt: But, of course, there are many more than eight prophecies. In another calculation Stoner used 48 prophecies (even though he could have used 456) and arrived at the extremely conservative estimate that the probability of 48 prophecies being fulfilled in one person is one in 10^157.
    How large is the number 10^157? 10^157 contains 157 zeros! Let us try to illustrate this number using electrons. Electrons are very small objects. They are smaller than atoms. It would take 2.5 times 10^15 of them, laid side by side, to make one inch. Even if we counted four electrons every second and counted day and night, it would still take us 19 million years just to count a line of electrons one inch long.
    But how many electrons would it take if we were dealing with 10^157 electrons? Imagine building a solid ball of electrons that would extend in all directions from the earth a length of 6 billion light years. The distance in miles of just one light year is 6.4 trillion miles. That would be a big ball! But not big enough to measure 10^157 electrons.
    In order to do that, you must take that big ball of electrons reaching the length of 6 billion light years long in all directions and multiply it by 6 x 10^28! How big is that? It’s the length of the space required to store trillions and trillions and trillions of the same gigantic balls and more. In fact, the space required to store all of these balls combined together would just start to “scratch the surface” of the number of electrons we would need to really accurately speak about 10^157.
    But assuming you have some idea of the number of electrons we are talking about, now imagine marking just one of those electrons in that huge number. Stir them all up. Then appoint one person to travel in a rocket for as long as he wants, anywhere he wants to go. Tell him to stop and segment a part of space, then take a high-powered microscope and find that one marked electron in that segment.
    What do you think his chances of being successful would be? It would be one in 10^157.
    Remember, this number represents the chance of only 48 prophecies coming true in one person (there are 456 total prophecies concerning Jesus).
    http://www.johnankerberg.org/A.....1103-3.pdf

  22. 22
    kairosfocus says:

    BA:

    That is one way to try to imagine the size and significance of a stupendously large number.

    By comparison, there are credibly some 10^80 atoms in the observable universe, about 10^60 times the number in a grain of sand.

    The configuration space of just 1,000 bits [125 bytes, or about 20 words worth] is 1.07*10^301, or about 10^150 times the number of Planck-time quantum states of the observed cosmos across its thermodynamic lifespan, in turn about 50 million times the time often held to have elapsed since the big bang, some 13.7 BYA.

    That is why 1,000 bits worth of linguistically or algorithmically functional text is well beyond the credible reach of our observed cosmos, on undirected chance plus blind mechanical necessity. A search in the cosmic haystack on the scope of our cosmos, would not even begin to be significant as a sample of the config space of just 1,000 bits.
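    The two headline numbers are easy to check with arbitrary-precision arithmetic (a sketch only: the cosmic state count uses the rough assumed figures of ~10^80 atoms, ~10^45 Planck-scale state changes per second, and ~10^25 s of thermodynamic lifespan):

    ```python
    # Quick check of the figures above: the configuration space of 1,000 bits,
    # and its ratio to a rough ~10^150 estimate of Planck-time quantum states
    # for the observed cosmos.

    config_space = 2 ** 1000                      # distinct 1,000-bit configurations
    print(f"{float(config_space):.3e}")           # 1.072e+301, matching 1.07*10^301

    # Assumed rough figures: ~10^80 atoms x ~10^45 state changes/s x ~10^25 s
    cosmic_states = 10**80 * 10**45 * 10**25      # ~10^150
    print(f"{config_space / cosmic_states:.1e}")  # ~1.1e+151: the space dwarfs the search
    ```
    
    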

    So, if you see 1,000 bits worth of digitally coded textual information, that is a message or is algorithmically and specifically functional, you can be highly confident that it is the product of intentional and intelligent configuration.

    That is, of design.

    And so, for instance, we can be confident that the DNA of living systems is designed, as the DNA starts at over 100,000 bits and goes up into billions, and is indisputably functional based on codes. And if you doubt that analysis, produce a case where dFSCI, of 1,000 or more bits — remember, about 20 words of typical English will do — has been credibly observed to have resulted from blind chance and mechanical necessity. If you do so, the design inference will collapse [and probably statistical thermodynamics with it].
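    As a toy illustration of applying the 1,000-bit yardstick (a sketch with hypothetical function names; note it measures raw information CAPACITY of a coded string — judging functional specificity is a separate step in the argument):

    ```python
    # Minimal sketch of the 1,000-bit capacity yardstick discussed above.
    import math

    THRESHOLD_BITS = 1000  # the 1,000-bit bound used in the post

    def capacity_bits(sequence: str, alphabet_size: int) -> float:
        """Upper-bound information capacity: length * log2(alphabet size)."""
        return len(sequence) * math.log2(alphabet_size)

    def beyond_threshold(sequence: str, alphabet_size: int) -> bool:
        return capacity_bits(sequence, alphabet_size) > THRESHOLD_BITS

    # DNA uses a 4-letter alphabet, i.e. 2 bits per base: 500 bases reach 1,000 bits
    dna = "ACGT" * 200                   # 800 bases -> 1,600 bits of capacity
    print(capacity_bits(dna, 4))         # 1600.0
    print(beyond_threshold(dna, 4))      # True
    ```
    
    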

    Of course, no such case is presented, and we can be quite confident on the above analysis, that none will be forthcoming.

    No wonder we see only silence or dismissive distractive red herring and strawman tactics from the ever present ID critics, once the above original post was put up.

    Silence can speak loudly indeed . . .

    The root reason this is disputed, is that many are in the grips of a priori evolutionary materialism, as Lewontin so plainly documents:

    It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.

    [From: “Billions and Billions of Demons,” NYRB, January 9, 1997.]

    GEM of TKI

  23. 23
    bornagain77 says:

    kf, it seems some of these probabilities just can’t even be properly fathomed by mere mortal minds:

    “The probability for the chance of formation of the smallest, simplest form of living organism known is 1 in 10^340,000,000. This number is 10 to the 340 millionth power! The size of this figure is truly staggering since there is only supposed to be approximately 10^80 (10 to the 80th power) electrons in the whole universe!”
    (Professor Harold Morowitz, Energy Flow In Biology pg. 99, Biophysicist of George Mason University)

    Probabilities Of Life – Don Johnson PhD. – 38 minute mark of video
    a typical functional protein – 1 part in 10^175
    the required enzymes for life – 1 part in 10^40,000
    a living self replicating cell – 1 part in 10^340,000,000
    http://www.vimeo.com/11706014

    Dr. Morowitz did another probability calculation, working from the thermodynamic perspective with an already existing cell, and came up with this number:

    DID LIFE START BY CHANCE?
    Excerpt: Molecular biophysicist, Harold Morowitz (Yale University), calculated the odds of life beginning under natural conditions (spontaneous generation). He calculated, if one were to take the simplest living cell and break every chemical bond within it, the odds that the cell would reassemble under ideal natural conditions (the best possible chemical environment) would be one chance in 10^100,000,000,000. You will probably have trouble imagining a number so large, so Hugh Ross provides us with the following example. If all the matter in the Universe was converted into building blocks of life, and if assembly of these building blocks were attempted once a microsecond for the entire age of the universe, then instead of the odds being 1 in 10^100,000,000,000, they would be 1 in 10^99,999,999,916 (also of note: 1 with 100 billion zeros following would fill approx. 20,000 encyclopedias)
    http://members.tripod.com/~Black_J/chance.html

  24. 24
    kairosfocus says:

    F/N: I have updated the OP point 6 to bring out the issues of the self-moved agent-designer implicit in e.g. my acting to compose and transmit a textual post in English.

    I particularly must draw attention to the following remarks by Plato on the self-moved agent, as he speaks in the voice of the Athenian Stranger in The Laws, Bk X:

    ____________________

    >> Ath. . . . when one thing changes another, and that another, of such will there be any primary changing element? How can a thing which is moved by another ever be the beginning of change? Impossible. But when the self-moved changes other, and that again other, and thus thousands upon tens of thousands of bodies are set in motion, must not the beginning of all this motion be the change of the self-moving principle? . . . . self-motion being the origin of all motions, and the first which arises among things at rest as well as among things in motion, is the eldest and mightiest principle of change, and that which is changed by another and yet moves other is second.

    [[ . . . .]

    Ath. If we were to see this power existing in any earthy, watery, or fiery substance, simple or compound-how should we describe it?

    Cle. You mean to ask whether we should call such a self-moving power life?

    Ath. I do.

    Cle. Certainly we should.

    Ath. And when we see soul in anything, must we not do the same-must we not admit that this is life?

    [[ . . . . ]

    Cle. You mean to say that the essence which is defined as the self-moved is the same with that which has the name soul?

    Ath. Yes; and if this is true, do we still maintain that there is anything wanting in the proof that the soul is the first origin and moving power of all that is, or has become, or will be, and their contraries, when she has been clearly shown to be the source of change and motion in all things? [he here moves to a form of cosmological argument]

    Cle. Certainly not; the soul as being the source of motion, has been most satisfactorily shown to be the oldest of all things. >>
    ____________________

    This raises the point that to act, we need to be able to freely choose, then to move say our fingers to type. In this case to thence compose that which has in it dFSCI. And the issue of freedom of action, to be self-moved or free enough in will to do so, comes to the fore.

    Thus, the issue of design, an empirical reality, raises serious questions about the source of designs, and onward — on the worldviews plane [not the scientific one addressed in the OP] — the source of the design and configuration of the world.

    GEM of TKI

  25. 25
    kairosfocus says:

    BA:

    The point is well made: however you calculate them, the odds against the spontaneous origin of life by chance and necessity triggering favourable chemistry in some still warm pond [or whatever scenario is being favoured today] are overwhelming.

    Simply on DNA, the odds of getting to OBSERVED life by chance and necessity only are staggering.

    Then, look at how DNA is a functional component in a metabolic system that embeds a von Neumann self-replicating facility.

    Such a vNSR as an additional facility requires:

    (i) an underlying storable code to record the required information to create not only (a) the primary functional machine [[here, for a “clanking replicator” as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility;

    (ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with

    (iii) a tape reader [[called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling:

    (iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by

    (v) either:

    (1) a pre-existing reservoir of required parts and energy sources, or

    (2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.

    Also, parts (ii), (iii) and (iv) are each necessary for and together are jointly sufficient to implement a self-replicating machine with an integral von Neumann universal constructor.

    That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [[Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]

    This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources.

    Immediately, we are looking at islands of organised function for both the machinery and the information in the wider sea of possible (but mostly non-functional) configurations.

    In short, outside such functionally specific — thus, isolated — information-rich hot (or, “target”) zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation. So, once the set of possible configurations is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature.
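    A toy formalisation of the irreducible-core point (a sketch only: I treat all five listed elements as the core set, with my own labels) checks that the full set suffices while removing any single part destroys the replication function:

    ```python
    # Toy model of the vNSR "irreducibly complex core": a set of jointly
    # necessary parts, each of which must be present for replication.

    VNSR_CORE = {
        "stored_code",   # (i)   storable code
        "blueprint",     # (ii)  coded blueprint / tape record
        "constructor",   # (iii) tape reader ("the constructor")
        "effectors",     # (iv)  position-arm machines with tool tips
        "resources",     # (v)   parts reservoir or metabolic machinery
    }

    def can_replicate(parts: set) -> bool:
        """Replication works only if every core part is present."""
        return VNSR_CORE <= parts

    print(can_replicate(VNSR_CORE))                 # True: jointly sufficient
    print(all(not can_replicate(VNSR_CORE - {p})    # True: each part necessary;
              for p in VNSR_CORE))                  # take one out, function ceases
    ```
    
    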

    This is found in your friendly local “simple” — what a misnomer — living cell.

    That is why the explanatory filter so strongly points to the cell as a product of design.

    Then, to move up to accounting for major body plans, starting with, say, the Cambrian fossil life revolution, we have to account for 10’s of millions of additional bits of bio-information and systems for embryological development.

    Dozens of times over.

    Again, the explanatory filter strongly implicates design.

    And, in reply we meet only a priori materialism.

    That is why Prof. Philip Johnson’s reply to Lewontin is so cutting:

    For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

    . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

    Now, let us hear the response on the merits from the champions of Darwinism and evolutionary materialism.

    GEM of TKI

  26. 26
    bornagain77 says:

    But kairos, we have already received our reply from ‘the champions of Darwinism and evolutionary materialism’ on the ‘information problem’. It is best stated thus:

    http://www.youtube.com/watch?v=CQFEY9RIRJA

  27. 27
    kairosfocus says:

    BA:

    As in “chirp, chirp, chirp . . . ” little cricket?

    Let’s see if they can take time from the NCSE talking points about creationism in cheap tuxedos — as already addressed — to answer on the merits.

    Waiting . . .

    (If no cogent answer is forthcoming on the merits in any reasonable time [it’s 2+ days on this post already . . . ], that strongly suggests that — atmosphere poisoning rhetorical distractors aside [cf comment no. 9 above] — the issue of the basic legitimacy of the inference to design as a properly scientific inference is over.)

    GEM of TKI

  28. 28
    kairosfocus says:

    Onlookers:

    You might find an interesting comparison at Climate Audit, on the balance of issues and rhetorical strategies. Especially, in light of my earlier remarks on the NCSE’s endorsement and hosting of the ID = Creationism smear.

    Wagon-circling, distractive atmosphere-poisoning and posing on one’s magisterial power do not address the issue on the merits.

    So, let us wait . . .

    GEM of TKI

  29. 29
    bornagain77 says:

    kf this hot off the press article from ENV should really get your dander up:

    Condescension, Sneers, and Outright Misrepresentations of Intelligent Design Pass For Scholarship in Synthese
    http://www.evolutionnews.org/2.....42641.html

  30. 30
    kairosfocus says:

    BA:

    Let’s just say that when a journal with pretensions to sober scholarship hands over its introductory essay to the deputy director of an agit-prop agency demonstrably pursuing an atmosphere-poisoning false-accusation smear, i.e. the NCSE, and then devotes the whole journal more or less to the party-line talking points that duck the main issues, then the state of scholarship is soberingly low; as in, it is not clear that the patient will make it out of the ICU.

    If the NCSE propagandists were really confident of their case at the level of a phil journal, what they would have done is invite a panel of ID and Creationism supporters to present their cases, in a context where there would be critiques from a Darwinist panel and responses to those critiques, and do likewise on the other side. Then, a panel of philosophers of science or, better yet, experienced jurists with knowledge of scientific matters would render their verdicts, with explanation.

    Instead, we saw a clear shoot ’em in the back bushwhacking.

    Shameless.

    But, ENV has caught a very interesting slip-up by Kelly C. Smith:

    “what we need to do is develop a single example of macroevolution which presents a representative sample of the evidence behind the construction of the series in a very simple, user-friendly fashion.”

    This, ten years after Wells’ book, Icons of Evolution blew up the ten leading icons of evolution over the past 150 years. What an eloquent inadvertent admission on the true state of the evidence on the claimed “fact” of evolution! (Cf. here and here on NSTA, NAS and NCSE on that claimed “fact.” Also cf the critical review here on OOL and here on origin of biodiversity.)

    Maybe several leading ID scholars should check out whether the journal has a circulation of any size in the UK — in one case, 23 copies sold was enough — and sue for libel there.

    But, on the merits, the evident failure to speak cogently to the substantial matters at stake, tells us that science is rapidly losing its integrity, and huge swathes of philosophy — the vaunted meta discipline — are happy to go along.

    Telling.

    GEM of TKI

  31. 31
    kairosfocus says:

    Breaking:

    Astronomer Gaskell was awarded US$ 125,000 in a settlement of his discrimination suit against U Kentucky. [HT: UD Thread, follow developments there.]

    ENV’s money shot comment:

    What this case shows is that if you express any form of doubt about Darwin–even if you are totally open to a theistic evolution position–you might be labeled a “creationist” and face discrimination in the academy. What you actually believe doesn’t matter. And whether your views are scientifically defensible doesn’t matter. What matters are the perceptions and fears of your colleagues and conforming to a climate of intolerance towards Darwin-skeptics. Sadly, this culture of intolerance cost a highly qualified astronomer an excellent job at UK.

    And that climate of hostility is being stirred up by the NCSE and ilk.

    For shame!

    GEM of TKI

  32. 32
    tragic mishap says:

    Quick question: Does the commonly cited number of 10^80 atoms in the universe include dark matter?

  33. 33
    tragic mishap says:

    I suppose it probably doesn’t, since it’s an estimate of hydrogen atoms anyway and most dark matter is supposed to be non-atomic.

  34. 34
    kairosfocus says:

    TM:

    Nope.

    Strictly, it is the number of baryons. (Counting them as atoms, at the scale involved, is being generous and conservative.)

    Dark matter is several times that scale, but of mysterious composition, as it interacts gravitationally [how it was and is detected] but apparently not electromagnetically.

    The Bullet Cluster case looks like a galactic cluster collision in which the atomic matter interacted — an X-ray source [high energy interactions!] — but the dark matter halos have evidently acted almost like ghosts, and so are displaced from the centre of the X-ray emissions.
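    The oft-quoted ~10^80 figure can be reproduced with a rough order-of-magnitude estimate (illustrative textbook values, not precise measurements): the critical density implied by the Hubble constant, a baryon fraction of about five percent, and an observable-universe radius of roughly 4.4 x 10^26 m.

```python
import math

# Order-of-magnitude estimate of the baryon count in the observable
# universe (rough textbook values; the point is the ~1e80 scale).

G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
H0  = 2.2e-18     # Hubble constant (~68 km/s/Mpc) in s^-1
m_p = 1.67e-27    # proton mass, kg

rho_crit = 3 * H0**2 / (8 * math.pi * G)        # critical density, ~9e-27 kg/m^3
omega_b  = 0.05                                 # baryon fraction of critical density
n_per_m3 = omega_b * rho_crit / m_p             # ~0.25 baryons per cubic metre

r_obs  = 4.4e26                                 # comoving radius, ~46 Gly in metres
volume = (4 / 3) * math.pi * r_obs**3           # ~3.6e80 m^3

N_baryons = n_per_m3 * volume
print(f"baryons ~ 1e{round(math.log10(N_baryons))}")  # prints: baryons ~ 1e80
```

    Any reasonable choice of textbook inputs lands within an order of magnitude or so of 10^80, which is why the figure is quoted so casually.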

    GEM of TKI

  35. 35
    kairosfocus says:

    Update:

    I have added a key cite from Dr Dembski on the design process that shows what intelligence means, how designers use it, and why the result often reflects functionally specific complex organisation and information. HT: ENV.

    GEM of TKI
