Uncommon Descent Serving The Intelligent Design Community

HeKS strikes gold again, or, why strong evidence of design is so often stoutly resisted or dismissed


New UD contributor HeKS notes:

The evidence of purposeful design [–> in the cosmos and world of life]  is overwhelming on any objective analysis, but due to Methodological Naturalism it is claimed to be merely an appearance of purposeful design, an illusion, while it is claimed that naturalistic processes are sufficient to achieve this appearance of purposeful design, though none have ever been demonstrated to be up to the task. They are claimed to be up to the task only because they are the only plausible sounding naturalistic explanations available.

He goes on to add:

The argument for ID is an abductive argument. An abductive argument basically takes the form: “We observe an effect, x is causally adequate to explain the effect and is the most common [–> let’s adjust: per a good reason, the most plausible] cause of the effect, therefore x is currently the best explanation of the effect.” This is called an inference to the best explanation.

When it comes to ID in particular, the form of the abductive argument is even stronger. It takes the form: “We observe an effect, x is uniquely causally adequate to explain the effect as, presently, no other known explanation is causally adequate to explain the effect, therefore x is currently the best explanation of the effect.”

Abductive arguments [–> and broader inductive arguments] are always held tentatively because they cannot be as certain as deductive arguments [–> rooted in known true premises and using correct deductions step by step], but they are a perfectly valid form of argumentation and their conclusions are legitimate as long as the premises remain true, because they are a statement about the current state of our knowledge and the evidence rather than deductive statements about reality.

Abductive reasoning is, in fact, the standard form of reasoning on matters of historical science, whereas inductive reasoning is used on matters in the present and future.

And, on fair and well warranted comment, design is the only actually observed and needle in haystack search-plausible cause of functionally specific complex organisation and associated information (FSCO/I) which is abundantly common in the world of life and in the physics of the cosmos. Summing up diagrammatically:

[Figure: csi_defn – FSCO/I summary diagram]

Similarly, we may document the inductive, inference to best current explanation logic of the design inference in a flow chart:

[Figure: explan_filter – the design inference explanatory filter flow chart]

Also, we may give an iconic case, the protein synthesis process (noting the functional significance of proper folding),

[Figure: Proteinsynthesis – overview of the protein synthesis process]

. . . especially the part where proteins are assembled in the ribosome based on the coded algorithmic information in the mRNA tape threaded through the ribosome:

[Figure: prot_transln – protein translation at the ribosome]

And, for those who need it, an animated video clip may be helpful:

[youtube aQgO5gGb67c]
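
For readers who prefer a concrete toy over a video, here is a minimal sketch of the codon-by-codon mapping the mRNA tape encodes. It is my own illustration, uses only a hand-picked subset of the standard genetic code, and the function and variable names are invented for the example:

# Illustrative only: a hand-picked subset of the standard genetic code.
CODON_TABLE = {
    "AUG": "Met",  # methionine; also the start signal
    "UUU": "Phe", "GGC": "Gly", "GAA": "Glu", "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read the tape three bases at a time from the first AUG until a stop codon."""
    start = mrna.find("AUG")
    if start == -1:
        return []
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")  # unknown codons flagged
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("GGAUGUUUGGCGAAUGGUAAUU"))  # ['Met', 'Phe', 'Gly', 'Glu', 'Trp']

Running it on the short sample string reads a Met-initiated chain off the tape until the UAA stop codon, which is the behaviour the figures above depict at the molecular level.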

So, instantly, we may ask: what is the only actually — and in fact routinely — observed causal source of codes, algorithms, and associated co-ordinated, organised execution machinery?

ANS: intelligently directed contingency, aka design, where there is no good reason to assume, imply or constrain such intelligence to humans.

Where also, FSCO/I or even the wider Complex Specified Information is not an incoherent mish-mash dreamed up by silly brainwashed or machiavellian IDiots trying to subvert science and science education by smuggling in Creationism while lurking in cheap tuxedos, but instead the key notions and the very name itself trace to events across the 1970’s and into the early 1980’s as eminent scientists tried to come to grips with the evidence of the cell and of cosmology, as was noted in reply to a comment on the UD Weak Argument Correctives:

. . . we can see across the 1970′s, how OOL researchers not connected to design theory, Orgel (1973) and Wicken (1979) spoke on the record to highlight a key feature of the organisation of cell based life:

ORGEL, 1973: . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]

WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [ –> i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [ –> originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.]

At the turn of the ’80s, Nobel-equivalent prize-holding astrophysicist and lifelong agnostic Sir Fred Hoyle went on astonishing record:

Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure or order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true.” [Evolution from Space (The Omni Lecture[ –> Jan 12th 1982]), Enslow Publishers, 1982, pg. 28.]

Based on things I have seen, this usage of the term Intelligent Design may in fact be the historical source of the term for the theory.

The same worthy is also on well-known record on cosmological design, in light of evident fine tuning:

From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of ¹²C to the 7.12 MeV level in ¹⁶O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16]

A talk given at Caltech (of which the above seems originally to have been the concluding remarks) adds:

The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn’t so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn’t give. The case of the enzymes is well known . . . If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it’s easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem – the information problem . . . .

I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe. So try as I would, I couldn’t convince myself that even the whole universe would be sufficient to find life by random processes – by what are called the blind forces of nature . . . . By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes . . . .

Now imagine yourself as a superintellect working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules? Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language supercalculating intellects use: Some supercalculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.
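
As an aside, the combinatorial arithmetic Hoyle gestures at is easy to check. A minimal sketch, using his 200-link, 20-option figures; the commonly cited order-of-magnitude estimate of about 10^80 atoms in the observable universe is my assumption, not a figure from the talk:

import math

links = 200    # Hoyle's "typical enzyme" chain length
options = 20   # amino-acid alternatives per link

log10_sequences = links * math.log10(options)  # log10 of 20^200
print(round(log10_sequences, 1))               # ~260.2, i.e. roughly 10^260 sequences

# Commonly cited rough estimate for atoms in the observable universe: ~10^80.
print(log10_sequences > 80)                    # True: the sequence space dwarfs it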

These words in the same talk must have set his audience on their ears:

I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [“The Universe: Past and Present Reflections.” Engineering and Science, November, 1981. pp. 8–12]

So, then, why is the design inference so often so stoutly resisted?

LEWONTIN, 1997: . . . to put a correct view of the universe into people’s heads we must first get an incorrect view out . . . the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [–> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting] . . . .

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [–> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [–> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [Billions and billions of demons, NYRB Jan 1997. If you imagine that the above has been “quote mined” kindly read the fuller extract and notes here on, noting the onward link to the original article.]

NSTA BOARD, 2000: The principal product of science is knowledge in the form of naturalistic concepts and the laws and theories related to those concepts [–> as in, Phil Johnson was dead on target in his retort to Lewontin, science is being radically re-defined on a foundation of a priori evolutionary materialism from hydrogen to humans] . . . .

Although no single universal step-by-step scientific method captures the complexity of doing science, a number of shared values and perspectives characterize a scientific approach to understanding nature. Among these are a demand for naturalistic explanations [–> the ideological loading now exerts censorship on science] supported by empirical evidence [–> but the evidence is never allowed to speak outside a materialistic circle so the questions are begged at the outset] that are, at least in principle, testable against the natural world [–> but the competition is only allowed to be among contestants passed by the Materialist Guardian Council] . . . .

Science, by definition, is limited to naturalistic methods and explanations and, as such, is precluded from using supernatural elements [–> in fact this imposes a strawman caricature of the alternative to a priori materialism, as was documented since Plato in The Laws, Bk X, namely natural vs artificial causal factors, that may in principle be analysed on empirical characteristics that may be observed. Once one already labels “supernatural” and implies “irrational,” huge questions are a priori begged and prejudices amounting to bigotry are excited to impose censorship which here is being institutionalised in science education by the National Science Teachers Association board of the USA.] in the production of scientific knowledge. [[NSTA, Board of Directors, July 2000. Emphases added.]

MAHNER, 2011: This paper defends the view that metaphysical naturalism is a constitutive ontological principle of science in that the general empirical methods of science, such as observation, measurement and experiment, and thus the very production of empirical evidence, presuppose a no-supernature principle . . . .

Metaphysical or ontological naturalism (henceforth: ON) [“roughly” and “simply”] is the view that all that exists is our lawful spatiotemporal world. Its negation is of course supernaturalism: the view that our lawful spatiotemporal world is not all that exists because there is another non-spatiotemporal world transcending the natural one, whose inhabitants—usually considered to be intentional beings—are not subject to natural laws . . . .

ON is not part of a deductive argument in the sense that if we collected all the statements or theories of science and used them as premises, then ON would logically follow. After all, scientific theories do not explicitly talk about anything metaphysical such as the presence or absence of supernatural entities: they simply refer to natural entities and processes only. Therefore, ON rather is a tacit metaphysical supposition of science, an ontological postulate. It is part of a metascientific framework or, if preferred, of the metaparadigm of science that guides the construction and evaluation of theories, and that helps to explain why science works and succeeds in studying and explaining the world. Now this can be interpreted in a weak and a strong sense. In the weak sense, ON is only part of the metaphysical background assumptions of contemporary science as a result of historical contingency; so much so that we could replace ON by its antithesis any time, and science would still work fine. This is the view of the creationists, and, curiously, even of some philosophers of science (e.g., Monton 2009). In the strong sense, ON is essential to science; that is, if it were removed from the metaphysics of science, what we would get would no longer be a science. Conversely, inasmuch as early science accepted supernatural entities as explainers, it was not proper science yet. It is of course this strong sense that I have in mind when I say that science presupposes ON. [In, his recent Science and Education article, “The role of Metaphysical Naturalism in Science” (2011) ]

In short, there is strong evidence of ideological bias and censorship in contemporary science and science education, especially on matters of origins, reflecting the dominance of a priori evolutionary materialism.

To all such, Philip Johnson’s reply to Lewontin of November 1997 is a classic:

For scientific materialists the materialism comes first; the science comes thereafter. [Emphasis original.] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

. . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [Emphasis added.] [The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

Please, bear such in mind when you continue to observe the debate exchanges here at UD and beyond. END

Comments
Thomas2: The problem with the comment @61 is that he misunderstands the concept of specification. The mere existence of a physical object, whether the rock or the imprint it leaves in mud, does not constitute a specification. Furthermore, an object does not contain information by its mere existence. (See my detailed posts on that issue.) The only information contained in nightlight's scenario @61 is the information that is (i) created by an intelligent being, (ii) as a result of examining the physical object with tools and instruments of investigation, and (iii) encoded in some kind of language. Once that information is created by the intelligent being (not by the rock) then, yes, that information can constitute a "specification" -- meaning it has some function, or meaning, or references something outside of itself. The specification exists in the information created by the intelligent being, not in the object itself. nightlight is completely wrong in thinking that the kind of information that exists in the cell -- functional, representative information -- has anything to do with the kind of "information" he imagines exists in a rock rolling down a hill and leaving an impression in the mud. It is a massive category mistake and a fatal logical error.
Eric Anderson
November 6, 2014 at 08:10 AM PDT
KF @ #137 Interesting exposition. You will appreciate that I was addressing the issues at a conceptual level rather than an analytical one, but I have also appreciated the more detailed treatments from you and others. I know that NL somewhat blinds himself to the weaknesses of his own position by an apparently rigid prior commitment to an unsustainable philosophical position (together with a consequent misplaced and unhelpful disdain for IDers in general and DI-IDers in particular), but nevertheless he makes some interesting points and I would also be interested in what your own response to NL @ #61 would be.
Thomas2
October 19, 2014 at 05:00 PM PDT
EA, a communication system always implies additional info and functionally specific complex organisation; the FSCO/I estimate on the coding is thus a conservative count. KF
kairosfocus
October 16, 2014 at 05:27 AM PDT
NL & Thomas2 (also, attn EA): Over the past few days, here we were side-swiped by a storm now turned serious hurricane headed for Bermuda (I hope it misses . . . ). Power and comms were lost for a while, and there's some catch-up. I note first that T2 is quite correct that the design inference in itself is not a universal design detector much less a designer detector. Nor is it a universal decoder. Given the limitations underscored by theory of computation, that is no surprise. Having noted that, NL, functionally specific, complex organisation and associated information (FSCO/I) -- which BTW is what is relevant to the world of cell based life per WmAD's note in NFL that in life forms specification is cashed out as function, and what was put on the table by Orgel and Wicken in OOL research in the '70's -- is observable and in principle measurable. In fact, three years ago, between VJT, Paul Giem and the undersigned, we put together a metric model that builds on Dembski's ideas and which is pretty direct and simple: Chi_500 = I*S - 500, bits beyond the solar system threshold I is any reasonable info-carrying capacity metric, defined most easily on the chain of structured Y/N questions required to specify functional configs in a space of possible configs (cf. ASCII strings or AutoCAD files . . . EA, I will come to you soon). Yes it is contextual, but so is any reasonable informational situation. It takes 7 bits to specify an ASCII character (bit # 8 is a parity check), and 16 a Unicode one, even if they look the same, as the latter comes from a much broader field. S is a so-called dummy variable connected to observations. Its default is 0, corresponding to high contingency being assumed to be due to chance. This is tied to the null hyp idea, that if blind chance can reasonably account for something you cannot justify rejecting it. But, on objective warrant for functional specificity, S goes to 1. If S = 0, the metric is locked at - 500 bits. If S = 1, you need 500 bits plus worth of complexity for the metric to go positive. That is what is required for something on the gamut of our solar system of 10^57 atoms interacting or acting at fast chem rates of 10^14/s, for 10^17 s to be credibly beyond the reach of chance discovering an island of function of that degree of complexity. In effect, give each atom in our solar system a tray of 500 coins, and flip them and read the pattern every 10^-14 s. For the duration of the solar system to date. Such a set of observations (a blind search of the config space for 500 bits) would in effect sample as one straw-size to a cubical haystack 1,000 light years across. About as thick as our barred-spiral galaxy at its central bulge; off in the direction of Sagittarius, which is now up in the night sky. If such were superposed on our galactic neighbourhood, we could say with all but utter logical certainty, we could certainly say with empirical reliability tantamount to practical certainty, that such a search strategy will be practically infeasible and will predictably fail. We would reliably only pick up straw never mind many thousands of stars etc. Similarly, precisely because of the strict demands of right components in the right places and overall arrangement and coupling to achieve specific function -- text in English, a steam turbine, a watch etc -- FSCO/I will come as very narrow zones in a much broader space of possibilities. That is why FSCO/I is astonishingly resistant to search strategies relying on blind chance and/or mechanical necessity. 
And, that is why, as an empirical fact on trillions of observations, it is a highly reliable sign of design as cause. That is, of intelligently, goal-oriented, directed configuration. Designers have knowledge, skill and creative capacity, as a matter of observation; i.e. designers (of whatever ultimate source) are possible. Design is a process that uses intelligence to configure components to achieve a desired functional end. Thus, we are epistemically entitled, as a matter of induction [specifically abductive inference to best explanation] to infer design as the causal process that best accounts for FSCO/I, absent empirical observation otherwise. And, to let the metaphysical chips fall and lie where they will. So, when we see a watch in a field, we easily infer design. Even if the watch has the additional property Paley proposed, of self replication, that would be additional FSCO/I. Likewise, when we see C-Chemistry, aqueous medium, molecular nanotech using cells that use codes and algorithms to effect a kinematic von Neumann self replication facility, we have excellent inductive reason to infer to design. And, on the plain testimony of Lewontin and others, including Dawkins, it is plain that on the contrary to the all too common turnabout accusation, it is a priori materialists who are begging big metaphysical questions and who in the teeth of the sort of analysis just outlined are inferring to chance of the gaps as magic-working demi-urge. Oops, a demi-urge is a designer; it is blind chance of the gaps. So, the talking points deployed above at 128 fall flat, NL. Time for fresh thinking. KF PS: EA, it is that possibility of 3-d info record that led me to broaden to FSCO/I several years ago, and to point out that per AutoCAD etc, discussion on coded strings is WLOG. You may wish to read here on in context, note point iii and onward links. PPS: Let me add from the same IOSE introsumm page: >> In simple terms, noted ID Scientist William Dembski, argues:
We know from experience that intelligent agents build intricate machines that need all their parts to function [[--> i.e. he is specifically discussing "irreducibly complex" objects, structures or processes for which there is a core group of parts all of which must be present and properly arranged for the entity to function (cf. here, here and here)], things like mousetraps and motors. And we know how they do it -- by looking to a future goal and then purposefully assembling a set of parts until they’re a working whole. Intelligent agents, in fact, are the one and only type of thing we have ever seen doing this sort of thing from scratch. In other words, our common experience provides positive evidence of only one kind of cause able to assemble such machines. It’s not electricity. It’s not magnetism. It’s not natural selection working on random variation. It’s not any purely mindless process. It’s intelligence . . . . When we attribute intelligent design to complex biological machines that need all of their parts to work, we’re doing what historical scientists do generally. Think of it as a three-step process: (1) locate a type of cause active in the present that routinely produces the thing in question; (2) make a thorough search to determine if it is the only known cause of this type of thing; and (3) if it is, offer it as the best explanation for the thing in question. [[William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, pp. 20-21, 53 (InterVarsity Press, 2010). HT, CL of ENV & DI.]
Philosopher of Science Stephen Meyer similarly argues the same point in more detail in his response to a hostile review of his key 2009 Design Theory book, Signature in the Cell:
The central argument of my book is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question . . . . In order to [[scientifically refute this inductive conclusion] Falk would need to show that some undirected material cause has [[empirically] demonstrated the power to produce functional biological information apart from the guidance or activity a designing mind. Neither Falk, nor anyone working in origin-of-life biology, has succeeded in doing this . . . .
He then gives even more details, with particular reference to the origin of cell-based life:
The central problem facing origin-of-life researchers is neither the synthesis of pre-biotic building blocks (which Sutherland’s work addresses) or even the synthesis of a self-replicating RNA molecule (the plausibility of which Joyce and Tracey’s work seeks to establish, albeit unsuccessfully . . . [[Meyer gives details in the linked page]). Instead, the fundamental problem is getting the chemical building blocks to arrange themselves into the large information-bearing molecules (whether DNA or RNA) . . . . For nearly sixty years origin-of-life researchers have attempted to use pre-biotic simulation experiments to find a plausible pathway by which life might have arisen from simpler non-living chemicals, thereby providing support for chemical evolutionary theory. While these experiments have occasionally yielded interesting insights about the conditions under which certain reactions will or won’t produce the various small molecule constituents of larger bio-macromolecules, they have shed no light on how the information in these larger macromolecules (particularly in DNA and RNA) could have arisen. Nor should this be surprising in light of what we have long known about the chemical structure of DNA and RNA. As I show in Signature in the Cell, the chemical structures of DNA and RNA allow them to store information precisely because chemical affinities between their smaller molecular subunits do not determine the specific arrangements of the bases in the DNA and RNA molecules. Instead, the same type of chemical bond (an N-glycosidic bond) forms between the backbone and each one of the four bases, allowing any one of the bases to attach at any site along the backbone, in turn allowing an innumerable variety of different sequences. This chemical indeterminacy is precisely what permits DNA and RNA to function as information carriers. It also dooms attempts to account for the origin of the information—the precise sequencing of the bases—in these molecules as the result of deterministic chemical interactions . . . . [[W]e now have a wealth of experience showing that what I call specified or functional information (especially if encoded in digital form) does not arise from purely physical or chemical antecedents [[--> i.e. by blind, undirected forces of chance and necessity]. Indeed, the ribozyme engineering and pre-biotic simulation experiments that Professor Falk commends to my attention actually lend additional inductive support to this generalization. On the other hand, we do know of a cause—a type of cause—that has demonstrated the power to produce functionally-specified information. That cause is intelligence or conscious rational deliberation. As the pioneering information theorist Henry Quastler once observed, “the creation of information is habitually associated with conscious activity.” And, of course, he was right. Whenever we find information—whether embedded in a radio signal, carved in a stone monument, written in a book or etched on a magnetic disc—and we trace it back to its source, invariably we come to mind, not merely a material process. Thus, the discovery of functionally specified, digitally encoded information along the spine of DNA, provides compelling positive evidence of the activity of a prior designing intelligence. This conclusion is not based upon what we don’t know. 
It is based upon what we do know from our uniform experience about the cause and effect structure of the world—specifically, what we know about what does, and does not, have the power to produce large amounts of specified information . . . . [[In conclusion,] it needs to be noted that the [[now commonly asserted and imposed limiting rule on scientific knowledge, the] principle of methodological naturalism [[ that scientific explanations may only infer to "natural[[istic] causes"] is an arbitrary philosophical assumption, not a principle that can be established or justified by scientific observation itself. Others of us, having long ago seen the pattern in pre-biotic simulation experiments, to say nothing of the clear testimony of thousands of years of human experience, have decided to move on. We see in the information-rich structure of life a clear indicator of intelligent activity and have begun to investigate living systems accordingly. If, by Professor Falk’s definition, that makes us philosophers rather than scientists, then so be it. But I suspect that the shoe is now, instead, firmly on the other foot. [[Meyer, Stephen C: Response to Darrel Falk’s Review of Signature in the Cell, SITC web site, 2009. (Emphases and parentheses added.)]
Thus, in the context of a pivotal example -- the functionally specific, complex information stored in the well-known genetic code -- we see laid out the inductive logic and empirical basis for design theory as a legitimate (albeit obviously controversial) scientific investigation and conclusion. >>kairosfocus
October 16, 2014 at 04:04 AM PDT
kairosfocus @134: Excellent point about AutoCAD files -- thanks for sharing that example. I guess that would mean that to the extent we have the relevant formal computational system in place (3D graphics representation, measurement parameters, spatial orientation parameters, and so on), then we can use that "language," if you will, to represent a physical object in three-dimensional space. That should allow us to calculate the "C" part of a physical object. This, of course, is not quite the same as calculating "C" in a vacuum, but perhaps that is never possible anyway. Perhaps it is the case that "C" can only be calculated once we have set up a formal representation and measurement system. After the background system is in place, then we can calculate that we need X "amount" of information in order to be able to represent object X. A similar situation exists with "Shannon information" and any other descriptive system. Anyway, thanks for bringing that example up. It helps tie a couple of fundamental issues together in my mind. ----- Also your point about reproduction (made previously) is right on the money. Materialists like to point to reproduction as though it solves the complexity problem, when in fact it contributes to the problem.
Eric Anderson
October 13, 2014 at 06:28 PM PDT
#128 nightlight – You have gone to some trouble to explain your position, but I only have time to respond in brief. First, you have not addressed my key questions: how would you scientifically describe/define mindful design, and how would you scientifically detect it, reliably and predictably? (In the context of ID, I mean detect design without prior knowledge of or access to the putative designing mind). Whether the mind of the "intelligent designer" is ultimately reducible to explanation/description by scientific laws or not would be irrelevant to these questions: you know that there are conscious minds which make decisions which can be physically applied directly to nature to produce results in nature which nature cannot otherwise directly produce by mindless unconscious operations. This is the domain in which ID works, not a hypothetical domain conceivably more remote. Turning to your response, (1) there is no "theology" in the methodology of ID: the theory stands or falls on its scientific merits alone (whatever the theological motivations of IDers or the theological implications of some of its findings/predictions). And the usual meaning of "god of the gaps" is a gratuitous invocation of the direct action of God/the supernatural based solely on the absence of evidence for a "natural" explanation of some physical data. ID, on the other hand, invokes the direct or ultimate action of a designer based upon the evidence of absence of an alternative "natural" cause together with positive evidence for design. Your formulation does not appear to me to fit into that description. Regarding the tricky terms micro-evolution -v- macro-evolution, the distinction is not that some parallel operations in nature are the proximate result of lawful activity and some the proximate result of design, but that design (macro-evolution) underpins law (micro-evolution) – perhaps comparable to the case of your chess programme designer and the chess programme’s subsequent playing. However, this point is really beside the point: what we are concerned with is nature as we find it, not as we insist it must be in order to accord with some prior conceived metaphysical or theological theory or belief which we have imposed on nature or on our methods for understanding nature. Dembski's ID theory attempts to unequivocally describe the properties of an intelligently designed entity (ie, provide a law describing it) in nature where these properties can be exclusively identified when present. Where those properties are exclusively found, a design inference can reliably be made (and, I personally would say, an associated testable hypothesis formulated). Finally in your point (1), if the designer were capricious, so what? Dembski's proposed scientific procedure looks for consistency in the design, not the designer; (it doesn't look for the designer at all!). Considering your point (2), I have studiously tried to avoid "semantics" - that is why, in part, I asked you to provide a definition of intelligent design.
What I am concerned with is the discernible nature of reality, and if you don't like my wording – change it: the concept of "design" is grounded in observational reality in nature – so it should be capable of scientific definition (leading to scientific investigation); indeed, it almost seems to me that it is you who are tripping yourself up with semantics framed around an a priori refusal to recognise the scientific reality of the existence of things which are intelligently designed (assuming that I haven't misunderstood or under-appreciated your case [as I did Barry's earlier in this chain]). Concerning your argument, I don’t consider that CSI "detect[s] some objective property, such as 'design' or 'intelligence'" – rather, it is an objective property of an intelligently designed object or phenomenon, and when it is observed/evaluated to be present to the methodical exclusion of other potential sources of CSI then you can reliably make a design inference. Questions of compression relate to the measurement or quantification of CSI, and hence to the confidence in making a design inference, but it is the presence of CSI (with the methodical exclusion of other potential non-intelligent sources of CSI) which marks intelligent design. CSI measures not gaps, therefore, but confidence. In that sense it could be said, perhaps, to be a measure of "inadequacy", but the confidence levels are set so high in Dembski's formulation that that should not be an issue. The rest of your response bears further examination, but, for the reason just given, I don't see that it undermines the science or reliability of Dembski’s basic method. [Post Script: on semantics: ID is a phrase with several different uses – which is an annoying weakness: when referring to Behe's or Dembski’s methodologies it is strictly scientific; when referring to arguments over MN etc, it is science philosophy; when used in its typical sound-bite definition, it is an overarching concept; or it can refer to other theories of design detection which fall within the scope of the preceding sound-bite definition; or it can refer to the movement which promotes ID; or, less frequently, it can simply be an 18th/19th/20th century philosophical phrase distinguishing the work in nature of a divine author/architect/creator from work in nature which isn't; or it simply refers to the act of a mind in planning a design, or to the physical result of such an act. In this discussion I have limited my use to the first two and the last two, and I have attempted to make it clear which use is intended by the context within which I have employed the term on each occasion. Note also that in this discussion, it is only in cases of the first usage that I claim that methodologically it is strictly scientific.]
Thomas2
October 12, 2014 at 05:44 PM PDT
HeKS: There is a bridge from one to the other of the two kinds of specified complexity you see, once functionality that depends on particular complex organisation is in play. That's why I have ever so often highlighted that discussion on strings is WLOG, once one reckons with say an AutoCAD dwg file for a system. That is, we may exactly describe a cluster of well matched, properly arranged, correctly coupled components through in effect a structured chain of Y/N -- one-bit of info-carrying capacity -- questions. Obviously, there is room for variation, as engineered systems must live with tolerances, clearances and whatnot. Hence, islands of function in a larger space of possible configurations. (And yes, configuration spaces are closely related to phase and state spaces in Math, Physics and control engineering.) Function, of course, is observed, and in this context may be inferred from performance and points to purpose and intelligently directed contrivance. As say Paley highlighted 200 years ago using the example of stumbling across a watch in a field. Paley also properly answered the but it replicates itself objection, by envisioning a time-keeping, self replicating watch as a possibility/thought exercise in Ch II of his Nat Theol . . . something I have yet to see dismissive objectors answer seriously on the merits. (Yes, looks a whole lot like a 200 year strawman argument!} This is an ADDITIONAL, extremely complex function, requiring in effect recording itself and the steps to assemble itself, as well as the effecting machinery. He of course used the language of what we would call mechanical analogue computing, where cam bars and followers store and "read" programs once set to spinning in synchronisation. (That's how the C18 automatons such as writing or game playing "robots" worked.) From about 1948, Von Neumann brought this to the digital world by envisioning the kinematic self-replicating machine with a "blueprint" stored in a control tape in effect. Which is of course what AutoCAD or the like does. But with implication of a language, algorithms, communication subsystems and controlled execution machinery. The living cell does all of this using molecular nanotech. Based on C-Chemistry, aqueous medium molecular nanomachines in a gated, encapsulated metabolising automaton. Where Yale-lock style prong height patterns are used to store info, e.g. in the classic 3-letter D/RNA codons such as AUG which means both start and load with methionine. Further code points are elongate and add amino acid XXX, then we have STOP. Where halting is a major algorithm design challenge. After this we look at chaperoned folding (note more stable but non functional 3-d shapes forming prions) to functional shape not simply deducible from AA sequences. Where just to fold properly we have deeply isolated fold domains in AA chain space within wider organic chemistry, and thousands of such with a large number being of a very few members. The notion that blind search mechanisms tracing to chance forces and those of blind mechanical necessity acting on the gamut of our solar system or the observed cosmos did that is maximally implausible. The ONLY empirically observed, needle in haystack search plausible candidate is design. But, we are dealing with an absolutist ideology utterly committed to their being no design in life or the cosmos beyond what their just so stories allow them to accept as emerging. 
That is, ideology and worldview level question-begging evolutionary materialist a prioris dominate in the academy, science institutions and the media as well as education, leading to locking in absurdities as orthodoxy. Evolutionary materialist orthodoxy. That's why in replying to Lewontin, Philip Johnson observed:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
Our job is to simply hold ground as a force in being that cannot be crushed by power enforcing absurdity. Often, in the face of misunderstanding, confusion, willful misrepresentation, no concession to IDiots attitudes, expulsion, smears, slander and abuse of institutional or even legal power. Eventually, the obvious truth wins because of its inherent merits. And the sort of desperate selective hyperskepticism we are seeing is a good sign. Amazingly "skeptic" circles are having to debate this, in the face of sex abuse scandals and demands to disbelieve victims unless they can provide absolute proof. The answer I see from women and others is that they make ordinary not extraordinary claims so only ordinary warrant is needed. All they need at first level is to ask themselves soberly, what is the ordinary explanation for FSCO/I, and why is that so for trillions of test cases without exception? Then, at second level, they need to ask, why are they making the distinction and demand an arbitrarily high and unreasonable degree of warrant for what they are disinclined to accept that they don't where something is not at stake. Then they can see that adequate, reasonable and consistent standards of warrant for matters of empirical fact or cause are what we need and can justify. But, when passions are engaged, such will be a struggle. But ever, we must struggle to live by reason not passion and bad habits of passion. KF All of this means that language, codes,kairosfocus
October 12, 2014 at 04:21 AM PDT
Sounds like a question from Drumming 101
Joe
October 11, 2014 at 07:04 PM PDT
Nightlight, what makes a symbol a symbol?
Upright BiPed
October 11, 2014 at 06:59 PM PDT
@Eric Anderson #124
HeKS: For example, when we compare a string of random characters and a Shakespearean sonnet, everyone can tell that there is an important difference between the two. And they can tell it immediately, without ever running any kind of mathematical calculation. That is because they are assessing the string at the “S” level, not the “C” level.
Agreed.
Indeed, it does not even matter whether the sonnet is more “complex” than the string of random characters. As long as the sonnet is adequately complex, which is readily apparent from a quick glance.
I think this is true, but I also think there's room for confusion here, because there could be two legitimate uses and measures/indicators of "complexity" here. On the one hand, a section of text meets the requirements for the more common meaning of "complex", which is, "consisting of many well-matched parts". We have individual letters that work together to form words, words to form sentences, sentences to form paragraphs, etc. Typically, meaning starts at the level of the word, consisting of multiple letters, but the meanings of many words placed together give us a kind of system that produces a concept that is in some way more than the sum of its parts. Furthermore, the context of the words working together can alter the meaning of the individual words, or even whole phrases, as in the case of idioms. So when we have a section of text we have a kind of self-contained little system made up of many well-matched parts, and therefore complex, and it happens to function to impart a meaning. On the other hand, the section of text at the scale of the individual letters forms a string that is "complex" in the sense of being "improbable", and this is something that can be calculated. I think there can also be further confusion as to the meaning of "information". Like I said earlier, when Dembski uses the term "information" in the context of CSI, he simply means the actualizing of some possibility to the exclusion of all others and the ensuing reduction of uncertainty. However, when dealing with a section of text, we have a different sort of "information" which is semiotic. It is specified according to the rules of, say, English grammar, for the purpose of imparting a message. So there are sort of two different contexts in which it makes perfect sense to use the term "Complex Specified Information", but between the two contexts, the only word that retains a consistent meaning is "specified".
So in the vast majority of cases in the real world we never even need to do a calculation of the “C” in order to determine design. Indeed, in some cases a calculation of “C” itself is quite challenging.
Yes. It almost seems like the best and most reliable method of design detection is for a human to look at something in its context apart from any governing ideology and determine whether they think it's designed. Weird, huh? :)
Part of the reason we tend to focus on strings of letters and Shannon information and Kolmogorov complexity and so forth is that we can deal with extremely simple systems and extremely simple calculations. Try, however, calculating the “amount” of complexity in my car’s transmission, for example — it is no small task.
Yes, and this is why in many (most?) cases it would probably be impossible to calculate an exact CSI value for a designed object. First of all, you'd need to recognize that it is a different type of CSI being measured than that being measured in relation to chance hypotheses, because the "C" would refer to 'many well-matched parts' rather than 'improbability'. And second, there is evidently no consistent way to measure that kind of complexity that is valid across all domains. I was talking to Winston Ewert about this, trying to figure out if there might be some way to consistently measure an amount of "complexity" across all multi-part systems, and an idea occurred to me: What if the complexity of a multi-part system could be measured by determining the amount of entropy/disorder that would be expected in a randomly occurring system of that size and then measuring the degree of divergence from that expected level of entropy in the system in question. The degree of divergence could then be a consistent form of measurement in any multi-part system because in any given case it would be a measure of the divergence from expectation in that particular system were it randomly occurring. Of course, keep in mind that, as I've said, I'm not a math guy at all and I wouldn't really know how you'd go about doing this. That said, after I had come up with that concept I started looking up some stuff on complexity and found that I had arrived at something pretty close to a method of measuring complexity that is already in use:
Predictive information (Bialek et al., 2001), while not in itself a complexity measure, can be used to separate systems into different complexity categories based on the principle of the extensivity of entropy. Extensivity manifests itself, for example, in systems composed of increasing numbers of homogeneous independent random variables. The Shannon entropy of such systems will grow linearly with their size. This linear growth of entropy with system size is known as extensivity. However, the constituent elements of a complex system are typically inhomogeneous and interdependent, so that as the number of random variables grows the entropy does not always grow linearly. The manner in which a given system departs from extensivity can be used to characterize its complexity. - (http://www.scholarpedia.org/article/Complexity)
I think that it might be worthwhile to pursue this idea, though I'm probably not in the best position to run with it considering my lack of a math background.
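
For what it's worth, the divergence-from-expected-entropy idea floated above can be sketched in a few lines. This is only a rough illustration of the concept (empirical Shannon entropy per symbol compared against the maximum for a uniform random source over the same alphabet), with invented names and no claim that it is the measure Bialek et al. or Ewert have in mind:

import math
import random
from collections import Counter

def entropy_bits_per_symbol(seq):
    """Empirical Shannon entropy of a sequence, in bits per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def divergence_from_random(seq, alphabet_size):
    """Gap between the uniform-random maximum and the observed entropy."""
    return math.log2(alphabet_size) - entropy_bits_per_symbol(seq)

random.seed(0)
english = "the quick brown fox jumps over the lazy dog and runs away"
scrambled = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ") for _ in range(len(english)))

# Structured text typically sits further below the uniform maximum than a
# same-length random string over the same 27-symbol alphabet.
print(round(divergence_from_random(english, 27), 3))
print(round(divergence_from_random(scrambled, 27), 3))
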
Thus, there are two related, but slightly different, criticisms that are often brought up against CSI but which, I believe, both miss the mark: First, we have some critics who demand a mathematical calculation of CSI itself, which reflects a misunderstanding of the “S” part of the concept. (Incidentally, the term “specification” is hard for some people to grasp; in most cases, thinking of the “S” as “substantive” is just as good.)
Yeah, as I've also said before, I don't consider myself an expert on this issue, but I've spent some time looking into it and having a lengthy discussion with Ewert to get, I think, a good grasp on the methodology and logic. But at the start, I thought maybe it would be possible to measure some kind of "degree of specification", but after thinking about it for a bit I realized that this would actually just amount to a further measure of improbability relative to the smaller space of possibilities matching the specification and that this would only be possible if the specification had an incredibly precise "ideal match" inside of a larger generalized match, so that we could measure how close the pattern under consideration was to the target specification relative to all the other configurations that would fall within the general range of match possibilities but were further away from the ideal target match. But this just isn't likely to be the case. For example, what string of English text would be an ideal match to the English language? None that I can think. Or at least none that would be capable of conveying meaning. And so, like you've said, the 'S' is not a calculable amount. There is either a match or there isn't.
Second, we have some critics (such as Elizabeth Liddle) who argue that if we cannot make a precise calculation of all improbabilities — with all parameters, with all particles involved, at an exhaustive level of detail, such as, say, the probabilities related to abiogenesis — then we cannot calculate the “C” with absolute precision and, therefore, the claim goes, we cannot conclude that we are dealing with CSI. This latter criticism is essentially a demand for omniscience, coupled with a head-in-the-sand denial of the way we actually draw inferences of complexity all the time in the real world.* * In fact, this approach of Liddle’s is much more nefarious. In practice it operates as a science-stopper in that it asserts we cannot make any progress or draw any reasonable inferences until we know absolutely everything. It is a refusal to even consider the possibility of design until we know everything.
Yes, this seems to be the standard approach. The design inference is not a design deduction. It cannot be made with absolute certainty and nobody that I've seen claims it can. Rather, it is the determination that design is overwhelmingly the most plausible explanation for some object, event, pattern, etc. And yet, many critics attempt to attack it as though it is claimed that the design inference is held with absolute, irrevocable certainty, and they insist that because we can't weigh every logically possible naturalistic explanation, both known and unknown, it is NEVER justified to infer that an intelligent cause is the best explanation based on everything know at this point in time. It is a naturalism-of-the-gaps argument that insists we assume, contrary to all evidence and the trend of evidence, that everything is explicable by reference to natural causes. And, of course, if we never find that naturalistic explanation we should eternally remain in a state of self-imposed ignorance rather than appeal to the one cause we know is capable of producing the effect, namely, intelligence. Liddle does it. Nightlight does it. Guillermoe does (did?) it. It's all the rage.
Anyway, I’m not outlining this so much for you, as I think you’ve laid out things in pretty good detail. Just wanted to flesh out some thoughts a bit.
It's appreciated. I'm interested in all the discussion and information on this issue that I can get (and have time for).

HeKS
October 11, 2014 at 5:07 PM PDT
Eric at 127, I'm looking forward to it. Mung at 118, I'll have my people call your people. HeKS at 117, his/her model is incoherent. The bluster helps conceal that fact.

Upright BiPed
October 11, 2014 at 4:15 PM PDT
You are first given two strings of symbols
Two strings of wha? What is a symbol? What makes a symbol a symbol?

Upright BiPed
October 11, 2014 at 3:59 PM PDT
#121 Thomas2
(1) Whether the actions of an "intelligent agency" are capricious or not, Dembski's method only detects "intelligent design" in certain circumstances: that doesn't mean that "design" is only present in those certain circumstances.
There are two major problems with that statement:

1) DI's ID definitely does not use CSI in the weaker sense you suggest, since they constantly talk about micro-evolution, which is supposedly the result of "natural laws", and macro-evolution, which is supposedly intelligently designed or guided. That's an exemplary instance of 'god of the gaps' theology.

2) Even your weaker CSI semantics is fundamentally flawed (the DI's strong CSI semantics is grossly flawed, though). Namely, CSI does not detect some objective property, such as 'design' or 'intelligence', of the observed phenomena; rather, it merely detects how inadequate some theory is in describing the observed data, i.e. CSI measures the inadequacy of a given theory about observed phenomena, not the phenomena proper.

I think the most transparent and ideologically neutral description of the CSI concept is in terms of compression and algorithmic complexity (this is based on Rissanen's MDL principle from the 1970s, which itself is based on 1960s algorithmic complexity), so let me explain the issue with #2 in those (MDL) terms. You are first given two strings of symbols:

* D = the data of the observed phenomena,
* S = the specification for D

The combined (concatenated) string X = S + D is compressible because the symbols of S predict (to some degree) the symbols of D.

For concreteness, let's use a simple example -- let D be the actual results of 1000 'coin tosses' (1 or 0 digits for heads and tails) that some stage magician claims he can telekinetically control with 100% accuracy with his mind. To prove it, he writes down his prediction string S of 1000 symbols 0 and 1 before the coin tosses. He then feeds the prediction S into his magic machine, which "mechanically" tosses a coin 1000 times onto a table built into it, recording the 1000 results. Experiments show that his predictions are always correct. Therefore the combined string of 2000 bits, X = S + D, is compressible, since it has 50% redundancy (you only need to receive the first 1000 bits of string X to reconstruct, i.e. decompress, the entire 2000 bits of X). Let's call CX the compressed form of X, which will have Length(CX) = 1000 bits.

Suppose now you have a theory T, an algorithm for simulating the tossing machine (via the physics of coin tosses) which aims to explain the observed process. You input into T the specification S, run your simulated tossing algorithm T, and get as output a string TD(S) of 1000 bits, which is the theoretical prediction by theory T of the outcomes D under specification S. Analogously to string X above, we form a combined string Y = S + TD(S) of 2000 bits. Then we run a compression algorithm on Y and get the compressed string CY, which has some Length(CY). Now we define:

Gap(X,Y) = Length(CY) - Length(CX)

A theory T which fully explains the process has Gap(X,Y) = 0. Any theory which explains it incompletely will yield a non-zero gap. If, for example, theory T just simulates random tosses, then string Y will be incompressible, hence Gap(X,Y) will be 1000 bits. (Dembski, in his CSI definition, sets the maximum gap allowed by the number of tries provided by the known universe to 120 bits or something of that order, but we won't use that aspect here.) If algorithm T can simulate physics with tighter control of the toss outcome, then it can get more correct tosses (matches with specification S) than a random toss, which has a 50% chance per toss, hence string Y will be somewhat compressible, e.g. Length(CY) may be 1400 bits, and thus Gap(X,Y) will be 400 bits.
It is the size of this gap that allegedly detects intelligence behind the machine (process). Namely, if the theory T simulates a tossing machine which makes random unbiased tosses, there will be a large Gap(X,Y) of 1000 bits, indicating that the tossing machine was rigged (by an 'intelligent agency', the stage magician) to match its outcomes to the specification S. Alternatively, one can view the Gap(X,Y) as a measure of the inadequacy of a theory T which seeks to model the tossing process.

Either way, the Gap(X,Y) merely measures the discrepancy between the theory T of the phenomena (the theoretical predictions of the tossing machine's results) and the phenomena (the actual results of the tossing machine). Hence the Gap(X,Y) is not a property of the machine itself, but merely a property of a relation between theory T of the machine and the machine. The existence of a non-zero gap merely tells us that theory T is not how the machine works.

Hence, problem #2 with your statement is that even your weaker semantics for CSI (which is proportional to the above gap, modulo Dembski's resource threshold R) interprets the Gap(X,Y) as a property of the object itself (of the tossing machine), while it is actually only a relation ('distance') between your theory and the object. Hence, even if you insist on an anthropomorphic characterization of the Gap as a measure of "intelligence", the most you can say from a positive non-zero gap is that the 'tossing machine' is more "intelligent" than your theory T. Yet even your weaker CSI semantics asserts (in effect) that a non-zero Gap measures the "intelligence" of the machine unconditionally, i.e. it claims that the Gap ("intelligence") is a property of the machine alone. That is the previously mentioned confusion between the map and the territory that DI's ID injected into the subject, i.e. elevating the epistemological entity Gap (which measures how bad your theory of the object is) into an ontological entity (a property of the object itself). While you distanced yourself from the capricious part-time "intelligent agency" of DI's ID, you (along with apparently most others here at UD) have still been taken in by their more fundamental sleight of hand.

In ideologically neutral language, all you can really say from the presence of a non-zero Gap (or CSI with Dembski's threshold) is that the machine (universe) doesn't work the way theory T assumes, plus that the Gap value quantifies (in Rissanen's MDL language) how bad the theory is. Dembski's CSI work was focused on showing the inadequacy of the neo-Darwinian algorithm (random mutation + natural selection) in explaining biological structure in the cell, and it successfully demonstrated that the neo-Darwinian algorithm is highly inadequate in explaining the observed structures. The same method was also applied to theories of the origin of life and the fine tuning of physical laws, albeit with much less impact, since those theories are recognized as being much weaker than neo-Darwinism (where the 'science is settled', allegedly). In any case, all those results show is that existing biological theories are inadequate in explaining observed biological artifacts. The CSI method doesn't yield anything dramatic in physics or chemistry since those theories are far more polished and better tuned to the observations than biological theories.

Discovery Institute's ID mischaracterizes the above epistemological distinctions in quality between different theories as ontological distinctions, by attributing them to nature itself, i.e.
it labels as "nature" the phenomena for which the present human theories are accurate by CSI criteria (physics, chemistry), while phenomena for which present human theories are inaccurate (such as biology) are somehow guided by some "intelligent agency" which is outside of "nature". Even more absurdly, this "intelligent agency" allegedly sneaks into the "nature" every now and then to help molecules arrange some other way than what the dumb "nature" (the phenomena that physics and chemistry can model accurately) was doing with them on its own. Yeah, sure, that's how it all works, since it makes so much sense.nightlight
October 11, 2014 at 1:56 PM PDT
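A minimal sketch of the compression-gap idea described in the comment above, assuming zlib as a crude stand-in for an ideal compressor and made-up bit strings; the byte counts are illustrative only, but a theory that misses the regularity in the data leaves a positive gap.

```python
import random
import zlib

def clen(bits: str) -> int:
    # Compressed length in bytes of an ASCII 0/1 string; zlib stands in
    # (imperfectly) for an ideal compressor, so the figures are illustrative only.
    return len(zlib.compress(bits.encode("ascii"), level=9))

random.seed(1)
S = "".join(random.choice("01") for _ in range(1000))   # magician's 1000-bit prediction
D = S                                                   # observed tosses: a perfect match

# A "theory" T that models the tosses as fair and random, ignoring S entirely.
TD = "".join(random.choice("01") for _ in range(1000))

X = S + D            # specification + actual data: 50% redundant
Y = S + TD           # specification + theory's simulated data: no redundancy

gap = clen(Y) - clen(X)
print("Length(CX):", clen(X), "bytes")
print("Length(CY):", clen(Y), "bytes")
print("Gap(X,Y):  ", gap, "bytes (positive: theory T misses the regularity in the data)")
```

With zlib the numbers are noisier than the idealized 1000-bit figure, but Length(CY) still comes out well above Length(CX), which is the point being made above.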
UB @125:
ID proponents and opponents alike have wasted hundreds of thousands of words (mistakenly) attempting to calculate the CSI of objects that are not even information to begin with.
Good way of putting it. Incidentally, I've got a short post already in the works that I hope in some small way may help people think through this issue. Maybe I can get it up tomorrow or Monday . . .

Eric Anderson
October 11, 2014 at 12:27 PM PDT
F/N: There is entirely too much strawman-tactic hyperskeptical dismissiveness about. I suggest NL et al cf here as to exactly how a metric in bits beyond a complexity threshold, set relative to the atomic and temporal resources of the solar system or the observed cosmos, can be developed. It is also closely linked to what we mean when we speak of the size of a Word file, and is connected to the discussion on functionally specific complex information. Chi_500 = I*S - 500, bits beyond the solar-system threshold, where I is an information-capacity metric (e.g. how many yes/no questions must be answered to specify the state of an entity), and S is a variable that defaults to 0 but, on objective warrant for functional specificity, is set to 1. KF

kairosfocus
October 11, 2014 at 10:53 AM PDT
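For readers who want the Chi_500 tally spelled out, here is a minimal sketch of the arithmetic as described in the comment above, using hypothetical inputs (a 300-residue protein with a deliberately crude capacity estimate of log2(20) bits per position); it illustrates the bookkeeping only, not a verdict on any particular molecule.

```python
import math

def chi_500(info_bits: float, functionally_specific: bool, threshold: float = 500.0) -> float:
    # Chi_500 = I*S - 500: bits beyond the solar-system threshold.
    # info_bits is the information-capacity metric I (yes/no questions needed
    # to specify the configuration); the S dummy variable is 1 only when there
    # is objective warrant for functional specificity, otherwise 0.
    s = 1 if functionally_specific else 0
    return info_bits * s - threshold

# Hypothetical example: a 300-residue protein with ~log2(20) bits per position
# if every position were treated as freely variable (a crude upper-bound capacity).
I = 300 * math.log2(20)
print(chi_500(I, functionally_specific=True))    # about +797 bits beyond the threshold
print(chi_500(I, functionally_specific=False))   # -500.0: S defaults to 0, no positive score
```

The one judgment the code does not make is the S flag itself, which is exactly what the surrounding discussion of specification is about.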
The “S” part is not, in my view, calculable, as some kind of mathematically-based construct.
Exactly. ID proponents and opponents alike have wasted hundreds of thousands of words (mistakenly) attempting to calculate the CSI of objects that are not even information to begin with. But when an analysis is made of genuine information, then the object under analysis is invariably the representation ... and representations CAN HAVE NO calculable physical relation to the objects that establish their specification.

Upright BiPed
October 11, 2014 at 10:22 AM PDT
HeKS: Further to #123, I should add that when people demand a mathematical calculation of "CSI" it is evidence that they do not understand what CSI is. This is part of the disconnect we often find when discussing different strings of characters. For example, when we compare a string of random characters and a Shakespearean sonnet, everyone can tell that there is an important difference between the two. And they can tell it immediately, without ever running any kind of mathematical calculation. That is because they are assessing the string at the "S" level, not the "C" level. Indeed, it does not even matter whether the sonnet is more "complex" than the string of random characters, as long as the sonnet is adequately complex, which is readily apparent from a quick glance.

So in the vast majority of cases in the real world we never even need to do a calculation of the "C" in order to determine design. Indeed, in some cases a calculation of "C" itself is quite challenging. Part of the reason we tend to focus on strings of letters and Shannon information and Kolmogorov complexity and so forth is that we can deal with extremely simple systems and extremely simple calculations. Try, however, calculating the "amount" of complexity in my car's transmission, for example -- it is no small task. The upshot of this is that in the real world we rarely even do a precise calculation of "C" to determine whether we have enough complexity. To be sure, it is useful for us to be able to do the calculations on simple strings of characters, or sequences of nucleotides or amino acids, in order to show that there is a solid foundation behind the principle, to give real examples of what we are talking about, and to provide some kind of baseline for the amount of complexity needed. But calculating "C" in other areas is much more challenging, even when the "C" is obviously there. (Note: I believe that, in principle, it is possible to precisely calculate "C". However, coming up with precise parameters and knowing all the physical aspects that need to be calculated is, in some cases, no small task.)

Thus, there are two related, but slightly different, criticisms that are often brought up against CSI but which, I believe, both miss the mark:

First, we have some critics who demand a mathematical calculation of CSI itself, which reflects a misunderstanding of the "S" part of the concept. (Incidentally, the term "specification" is hard for some people to grasp; in most cases, thinking of the "S" as "substantive" is just as good.)

Second, we have some critics (such as Elizabeth Liddle) who argue that if we cannot make a precise calculation of all improbabilities -- with all parameters, with all particles involved, at an exhaustive level of detail, such as, say, the probabilities related to abiogenesis -- then we cannot calculate the "C" with absolute precision and, therefore, the claim goes, we cannot conclude that we are dealing with CSI. This latter criticism is essentially a demand for omniscience, coupled with a head-in-the-sand denial of the way we actually draw inferences of complexity all the time in the real world.*

-----

Anyway, I'm not outlining this so much for you, as I think you've laid out things in pretty good detail. Just wanted to flesh out some thoughts a bit.

-----

* In fact, this approach of Liddle's is much more nefarious. In practice it operates as a science-stopper in that it asserts we cannot make any progress or draw any reasonable inferences until we know absolutely everything.
It is a refusal to even consider the possibility of design until we know everything.

Eric Anderson
October 11, 2014 at 10:18 AM PDT
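As a small illustration of why the "C" side is straightforward for short character strings (and only for them), here is a sketch comparing a naive Shannon-style capacity estimate for a line from a sonnet and a random line of the same length; the strings and the assumed 27-symbol alphabet are choices made for the example. The capacity figure comes out identical for both strings: the difference everyone notices at a glance is the "S", which the arithmetic does not capture.

```python
import math
import random
import string

def capacity_bits(text: str, alphabet_size: int = 27) -> float:
    # Naive Shannon-style capacity: length * log2(alphabet size),
    # treating every character as a free choice from the alphabet.
    return len(text) * math.log2(alphabet_size)

sonnet_line = "shall i compare thee to a summers day thou art more lovely and more temperate"
random.seed(2)
random_line = "".join(random.choice(string.ascii_lowercase + " ")
                      for _ in range(len(sonnet_line)))

print("sonnet line:", round(capacity_bits(sonnet_line), 1), "bits of capacity")
print("random line:", round(capacity_bits(random_line), 1), "bits of capacity")
# Both lines print the same figure; nothing in the calculation distinguishes
# the meaningful line from the gibberish.
```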
HeKS @120:
No, no, no! You do not calculate the CSI (the high improbability of the specificity) of an event that was designed. That would be calculating the probability, or rather improbability, of something that was done intentionally, which is incoherent. Probabilities are not calculated on intentional events. The probabilities are used to weigh whether the event in question might be something other than designed. This is why the design inference is the last step in the process. Once you have calculated the CSI on the basis of all relevant chance hypotheses and found that it is high on all of them, you eliminate chance as a likely explanation and you are done with the CSI calculations. You cannot calculate a specific CSI value for a designed event. You can only calculate specific CSI values for an event in relation to relevant chance hypotheses.
Well said. I would just add as a minor clarification, if I might, that we don't calculate "CSI." Ever. The "C" part can be calculated, per various parameters, none of which need be a be-all-and-end-all, but they need to be adequate to give us confidence that we have eliminated certain possibilities. I think you've stated that elsewhere, so just wanted to confirm. The "S" part is not, in my view, calculable, as some kind of mathematically-based construct. It is a recognition of what we see in the real world around us (and what we ourselves feel and do as intelligent beings); namely, it captures our recognition of the reality of such things as purpose, intent, goals, function, meaning and so forth. The substance, if you will, behind the complexity.

Eric Anderson
October 11, 2014 at 9:46 AM PDT
HeKS:
You do not calculate the CSI (the high improbability of the specificity) of an event that was designed.
You cannot calculate a specific CSI value for a designed event.
But as confusing as it can be, it is also understandable because, for example, it is perfectly sensible to speak of something like DNA having Complex Specified Information even after making a design inference IF one is using the term “Complex” according to its more common meaning of “having many well-matched parts”.
I'm going to let these statements be the last word in the debate. I enjoyed discussing this with you. (I mean that sincerely -- you're quite bright, articulate, and good-natured.)

R0bb
October 11, 2014 at 7:55 AM PDT
#115 Nightlight – Sadly, your response is pretty well what I expected. In response:

(1) Whether the actions of an "intelligent agency" are capricious or not, Dembski's method only detects "intelligent design" in certain circumstances: that doesn't mean that "design" is only present in those certain circumstances. It's not that there are two kinds of design, but that design, by Dembski's method, can only be reliably inferred in certain circumstances. [It is no different in this respect from Newton's Law of Gravitation, or the models I use every day in my work to describe and predict hydraulic behaviour.] And for those circumstances it proposes a uniform law-like description of the signs that conscious, deliberate, wilful, mindful activity is involved.

(2) If you think my definitions/descriptions of "design" are unscientific, then please propose a more scientific one. "Intelligent design" is real in nature (whether it be ultimately reducible exclusively to necessity and/or chance or not); it is not a mere anthropomorphic or philosophical concept. So define "design" scientifically, and then propose a scientific test for it.

(3) The DI's approach and Dembski's method do not, as evidence for design, grasp or invoke gaps between our theories and our knowledge: they invoke the positive evidence of specified complexity, but limit any design inference based upon recognising specified complexity to those cases where, from a scientific analysis, specified complexity cannot alternatively have been the result of mere law-like or chance processes. [Note that in the grand scheme of things "law-like" and "chance" behaviours may themselves be ultimately reducible to "design" (rather than the other way round); the DI's approach and Dembski's method simply do not pre-judge these issues: they deal with nature as it is found, not as we may presuppose it must be – hence their approach in principle is properly scientific.]

Whilst noting your distinction between epistemological and ontological categories, I would have thought that a good epistemology should be capable of getting the best attainable grasp on ontology, and that science is ultimately about describing and explaining nature in reality, not according to a preconceived and arbitrary framework (such as metaphysical naturalism) but as it actually is. Your take on CSI is truly interesting (and challenging), but in the larger picture you appear to be constrained by a philosophical straitjacket which attempts to tell reality how it should be rather than trying to reliably determine how it is.

You have a conscious mind: you think, you are aware of thinking, and you can think about yourself thinking; you can also make decisions, plan things and follow through on those plans such that parts of the physical world become changed from what they would otherwise have been if you had not so planned and acted, and if they had been left to chance and necessity alone. In short, your mind can design things which will result in different outcomes in nature from what would have happened to those parts of nature if left to undirected or law-like processes. "Intelligent design" is therefore a reality in your own experience: so if you think that the DI/Dembski approach is scientifically faulty, please present a genuinely scientific definition of design and a reliable method for detecting it.

Thomas2
October 11, 2014 at 4:47 AM PDT
@R0bb 119
That isn’t a misunderstanding of the challenge — it doesn’t refer to the challenge at all. It’s simply a consequence of your understanding of what it means for something to produce N bits of CSI. But what I said is ambiguous, so I’ll say it differently: If we know that EVENT_X is designed, how do we calculate the CSI in EVENT_X? According to your reasoning, we have to calculate it based on the hypothesis of design. But nobody has ever calculated CSI in this way, and Dembski says that it doesn’t make sense.
No, no, no! You do not calculate the CSI (the high improbability of the specificity) of an event that was designed. That would be calculating the probability, or rather improbability, of something that was done intentionally, which is incoherent. Probabilities are not calculated on intentional events. The probabilities are used to weigh whether the event in question might be something other than designed. This is why the design inference is the last step in the process. Once you have calculated the CSI on the basis of all relevant chance hypotheses and found that it is high on all of them, you eliminate chance as a likely explanation and you are done with the CSI calculations. You cannot calculate a specific CSI value for a designed event. You can only calculate specific CSI values for an event in relation to relevant chance hypotheses. Again, I explained all this in my last comment, and your counter-challenge is incoherent and seems to be based on a continued misunderstanding of terminology that you should not still have after reading my last post.
Conversely, in Scenario 2, if we know for a fact that EVENT-X was actually caused by HYP-B, then EVENT-X would actually exhibit 10,000 bits of CSI and we could then say that natural processes produced 10,000 bits of CSI in this instance. This follows from your assumption that the amount of CSI exhibited by an event is defined in terms of the actual probability of the event, i.e. the probability given the actual process that caused the event. My point continues to be that you can’t find anything in the ID literature to support this assumption. I invite you again to provide a reference.
I've already explained all this. What I've said follows as a matter of basic logic, and you've seen Winston Ewert, the very person you originally cited, confirm that I was "exactly right" on this. So are we now back to some assertion that Ewert just doesn't understand all this stuff?

I will say that I think your confusion on this issue is not entirely your own fault. Different proponents of ID have sometimes used the term Complex Specified Information in different contexts. But as confusing as it can be, it is also understandable because, for example, it is perfectly sensible to speak of something like DNA having Complex Specified Information even after making a design inference IF one is using the term "Complex" according to its more common meaning of "having many well-matched parts". In this case, one would be using CSI as a descriptive term for one or more features of a system rather than as a calculated value of the system's improbability on chance hypotheses. And if this is what one means, that a system has many well-matched parts, that it matches an independent specification, and that it has some kind of semiotic dimension, what descriptive term could be more apt than "Complex Specified Information"? Personally, I think this is the more intuitive context in which to use the term CSI, which is why I think it would be more helpful if the CSI related to improbability were renamed for clarity, replacing the "complex" with "highly improbable" or something of that nature.

HeKS
October 11, 2014 at 12:47 AM PDT
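A minimal sketch of the per-chance-hypothesis bookkeeping described above: the improbability of an event is expressed in bits separately under each relevant chance hypothesis, and chance is set aside only if the figure clears the threshold on all of them. The event, the hypotheses and their probabilities below are made up for illustration.

```python
import math

def improbability_bits(p_event_given_hyp: float) -> float:
    # Improbability of the event under one chance hypothesis, in bits: -log2(p).
    return -math.log2(p_event_given_hyp)

THRESHOLD_BITS = 500.0   # the commonly cited 500-bit style bound

# Hypothetical event: one specific 300-character sequence over a 4-letter alphabet.
chance_hypotheses = {
    "HYP-A: uniform independent draws": (1 / 4) ** 300,
    "HYP-B: some biased process (made-up figure)": 1e-60,
}

verdicts = []
for name, p in chance_hypotheses.items():
    bits = improbability_bits(p)
    clears = bits > THRESHOLD_BITS
    verdicts.append(clears)
    print(f"{name}: {bits:.0f} bits", "(clears threshold)" if clears else "(does not)")

# Chance is eliminated only if every relevant chance hypothesis clears the threshold;
# on these made-up numbers HYP-B does not, so chance would not be set aside here.
print("eliminate chance:", all(verdicts))
```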
Your conclusion about how to reword Barry’s challenge might as well have come right out of thin air.
It did -- I made it up. It's a counter-challenge, and I'm sorry I didn't make that clear. Because of your interpretation of what it means for something to produce N bits of CSI, you can't meet this counter-challenge. But according to typical ID claims, it should be a no-brainer. So how do you account for that contradiction?
Probabilities of its occurrence if it had been caused by some different process are irrelevant to the actual probability or improbability of its occurrence.
I agree 100%, and I've never claimed or implied otherwise.
Conversely, in Scenario 2, if we know for a fact that EVENT-X was actually caused by HYP-B, then EVENT-X would actually exhibit 10,000 bits of CSI and we could then say that natural processes produced 10,000 bits of CSI in this instance.
This follows from your assumption that the amount of CSI exhibited by an event is defined in terms of the actual probability of the event, i.e. the probability given the actual process that caused the event. My point continues to be that you can't find anything in the ID literature to support this assumption. I invite you again to provide a reference.
It is only under those circumstances that we can get an actual measure of the CSI associated with the event, because it is the only way we can get an actual rather than purely hypothetical measure of the improbability of the event, which is a calculation that is entirely dependent upon the chance process that actually brought it about.
You're saying that if we don't know what actually caused an event, we can't get an actual measure of the CSI associated with the event. Again, I invite you to present anything from the ID literature to support this claim.
Nothing in any of this suggests that “to determine whether an agent has produced CSI, you have to do the calculation in terms of the correct hypothesis [of] design.” This is simply a complete misunderstanding of the nature of the challenge, which is about demonstrating that a natural process is capable of producing a large amount of CSI.
That isn't a misunderstanding of the challenge -- it doesn't refer to the challenge at all. It's simply a consequence of your understanding of what it means for something to produce N bits of CSI. But what I said is ambiguous, so I'll say it differently: If we know that EVENT_X is designed, how do we calculate the CSI in EVENT_X? According to your reasoning, we have to calculate it based on the hypothesis of design. But nobody has ever calculated CSI in this way, and Dembski says that it doesn't make sense. So can you meet the counter-challenge? Show me one example – just one; that's all I need – of an intelligent agent creating 500 bits of complex specified information.

R0bb
October 10, 2014 at 11:46 PM PDT
Upright BiPed, if you haven't yet patented the spleen-vent, do I have your permission to do so?

Mung
October 10, 2014 at 8:51 PM PDT
@Upright BiPed I know, right? At this point every comment seems to consist of mischaracterizing some aspect of the design inference, asserting that everything is reducible to some kind of law (whether we know about it or not), talking about hypothetical universe-governing algorithms that are front-loaded (without addressing where these algorithms come from, or where the information that is front-loaded into them comes from, or how we might be able to reason out the answer to those questions), and then throwing in some kind of talk about theology and how it is 'fuzzy wuzzy'. I get a sense that I'm wasting my time. Again.

HeKS
October 10, 2014 at 8:34 PM PDT
Nightlight, just imagine how much more potent your rhetorical spleen-venting would be if you just had the stomach to actually address the evidence of design - without all the hot air. The simple fact of the matter is that you can't do it. The coherence would eat your lunch. Thus, the foot-stomping certainty is a must.

Upright BiPed
October 10, 2014 at 8:15 PM PDT
#114 Thomas2
A) That means its range of applicability is limited: it doesn't hold that the designer's actions are capricious and occasional, only that the occasions when they can be reliably detected by this method are limited [i.e., design in nature may be far more widespread and uniform than Dembski's method can detect]. B) What scientific method would you prefer for detecting design?
Even in the span of one post you end up with self-contradictory statements, which are the result of the Discovery Institute's superfluous, anti-scientific concoction wrapped around the pro-scientific CSI method for detecting and quantifying lawfulness (or compressibility of raw phenomena data).

Namely, if according to your proposition (A) the 'intelligent agency' (designer) is not capricious, coming in and out of "nature" to help out "natural laws" at its whim, then his actions are present in everything at all times. Hence, there aren't two separate kinds of artifacts or signs of the designer, his 'dumb chores' (i.e. the dumb "nature", the part compliant with "natural laws") and his 'intelligent designs' (the part not compliant with "natural laws") that needs to be specially detected as distinct from his 'dumb chores'. Yet in your proposition (B) you ask precisely for a way to detect such a distinction between the two kinds of artifacts or signs, a distinction that, according to your proposition (A), shouldn't exist.

A coherent position, if one wishes to use the anthropomorphic terms "design" and "intelligent agency", is that everything is intelligently designed and upheld in its lawful operation at all times and all places by the same intelligent agency, active throughout, leaving no gaps. There are not two of them, one dumb and the other smart, or alternatively one which is sometimes asleep (at which time "nature" and "natural laws" do the 'dumb chores' of the universe) and sometimes awake and getting involved with its "intelligent designs" to help the dumb "nature", which in its stupidity always gets stuck at some "irreducibly complex" puzzle while the 'intelligent agency' was asleep. That incoherence and cognitive dissonance between the valuable CSI foundation and DI's superfluous contraption on top of it is perfectly illustrated in your own post, between positions A and B, as it tries to defend the DI's ID.

The real detection of intelligence is detection of lawfulness (or compressibility or comprehensibility of nature), which is what the CSI method actually detects and quantifies (in bits). The lawfulness that manifests in already known natural laws thus indicates underlying intelligence all by itself. That's how Newton, Maxwell, Einstein and other great scientists saw it, recognizing the mind of God in the lawfulness of nature expressed as mathematically elegant and beautiful equations that captured the subtle patterns and regularities in the observed phenomena.

When CSI is applied to neo-Darwinism as the model for observed biological complexity, it detects that the neo-Darwinist algorithm (random mutation + natural selection) leaves a very large gap between the amount of lawfulness it predicts and the amount of lawfulness observed in biological artifacts. Hence, the neo-Darwinist algorithm is not how the actual nature works to produce biological complexity, since it is missing the real pattern (unsurprising for a 19th-century relic, despite its 20th-century facelift; it should already have been scrapped and stored in a museum right next to Ptolemaic astronomy when the DNA coding was discovered in the 1950s). Similar gaps (maybe even bigger) are detected by CSI in the theories for the origin of life and the fine tuning of physical laws for life.

This is unfortunately the very place where the Discovery Institute's ID took a wrong turn, parting ways with science and logic, debasing the CSI method as well as the otherwise nice terms 'intelligence' and 'design' in the muck it laid on top of them.
Namely, DI's ID grasped onto those gaps and declared -- there it is, that's where the 'intelligent agency' (God) did his work, in those gaps, while the "nature" did the rest (the 'dumb chores') via "natural laws". But those gaps are not gaps between the real nature (with its real natural laws) and the observed phenomena. They are gaps between our present theories of those phenomena (or theories of nature) and the actually observed phenomena (the way nature actually works). There cannot be any gap or conflict between how nature really works and the observed phenomena, since the latter is merely one manifestation of the former.

The DI's ID has in effect mischaracterized the epistemological categories (our present theories about biological complexity and natural laws) as ontological categories (as "nature" and the way that this "nature" works on its own, without the help of the transcendental 'intelligent agency' which is outside of this fictitious "nature" and which intervenes in this fictitious "nature's" workings only now and then, outside of "natural" laws and purely at its whim). Through this sleight of hand, the epistemological gaps were dressed up and recast as ontological gaps, leaving a bit of empty space in this misrepresented ontological realm for another ontological entity, the "intelligent agency" of DI's ID.

So this "strategy" is easily recognizable as the same old 'god of (ever shrinking) gaps' that the more traditional religions burned their fingers on centuries ago. This is why official Catholics, Orthodox Christians and Jews won't touch DI's ID with a ten-foot pole. Despite this rejection and shunning by those who should by all reason have been their natural allies, the Discovery Institute and its supporters are doubling down on their wrong turn, bumbling merrily toward the cliff, as if itching to learn those same lessons again, the hard way.

nightlight
October 10, 2014 at 7:56 PM PDT
#111 Nightlight - My own concept of the designer is not that which you attribute to the DI, and I doubt it is Dembski's either: what Dembski's proposal does is provide a "law" for describing/detecting intelligent design (the planned, intentional output of, ultimately, a wilful conscious thinking process/mind), [not designers], reliably and without making false positive identifications. That means its range of applicability is limited: it doesn't hold that the designer's actions are capricious and occasional, only that the occasions when they can be reliably detected by this method are limited [i.e., design in nature may be far more widespread and uniform than Dembski's method can detect]. It's thus not unlike Newton's law of gravitation, which mathematically describes the gravitational behaviour of masses without either sourcing or explaining gravity, and does so in a way which is only reliable over a limited or "capricious" range.

What scientific method would you prefer for detecting design? [Claiming that "design" isn't a coherent or scientific category would be evasion, not a valid answer. Darwin and Dawkins understood/understand the concept of design: it was/is not their argument that it does not exist, but that in biological systems it is only apparent and better explained by certain natural processes. So, how would you reliably identify it scientifically?]

Thomas2
October 10, 2014 at 4:24 PM PDT
#112 HeKS
In this context, Dembski defines "information" simply as the elimination of possibilities, the reduction of uncertainty, or the realization of one possibility to the exclusion of all others.
That's fine too, since no one knows what "all other possibilities" are. You always have to make an assumption, and that's what gives you the probability p, hence information as log(1/p). That's in essence no different than having to select the origin x=0 before you can say that some object is at x=500 yards. The absolute value CSI=500 bits has no significance on its own, just as the coordinate x=500 yards has no significance on its own. They are both relative quantities, meaningful only with respect to some previously chosen convention (the computational model for CSI or, in the analogy, the origin of the coordinate system x=0).

The usually cited CSI figures for biological systems thus only eliminate the neo-Darwinian model as the source of evolutionary CSI, since that model leaves an unaccounted-for gap of, say, 500 bits per some protein. But the neo-Darwinian model merely allows simple probabilistic distribution functions for the initial & boundary conditions (IBC) of the molecules in its "random mutation" algorithmic cog. If the real IBC (along with the physical laws) are non-probabilistic, being the result of some underlying computation, then the CSI computed relative to the IBC + physical laws assumed in the neo-Darwinian model is a useless figure.

The problem with Discovery Institute's ID (which you seem to be defending) is that from the above inadequacy of the neo-Darwinian model, they leap to a far-fetched "conclusion" that no model of any sort can be adequate, hence we must reject the scientific method altogether (the pursuit of more adequate algorithmic models) and accept their anti-scientific deus ex machina "solution" -- an "intelligent agency" which every now and then, at its whim, jumps in and out of "nature" to help the "natural laws" solve the puzzles of "irreducible complexity" or some such observed in biological systems, which allegedly no "natural laws" (of any kind) can ever solve.

That's a childish confusion between epistemological categories (the neo-Darwinian model for evolution and the presently known physical laws in the case of the origin and fine-tuning problems) and ontological categories (the actual algorithms and lawful processes behind the observed phenomena, which science presumes to exist, whatever part of them we presently know). The DI's ID essentially promotes those transient epistemological entities (the present knowledge of natural laws and the models we could come up with so far) into the totality of lawful ontological entities ever possible. Then, since the present epistemological or scientific models fail to describe observations, DI's ID declares that no lawful ontological entities that could be behind those phenomena can ever exist, and that therefore some transcendental, lawless (capricious) "deus ex machina" must be involved in producing those phenomena. If present natural laws and models are inadequate for explaining observations, real natural science doesn't leap to invoke lawless entities, but maintains faith in ultimate lawfulness all the way down and seeks to find improved lawful entities which could explain the observations.

nightlight
October 10, 2014 at 3:30 PM PDT
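To see the "relative quantity" point from the comment above in numbers, here is a minimal sketch: the same observed sequence gets a different log(1/p) value depending on which probability model is assumed, just as a coordinate depends on where x=0 is placed. The models and probabilities are assumptions chosen purely for illustration.

```python
import math

def info_bits(p_under_model: float) -> float:
    # "Information" of the observed outcome relative to an assumed model: log2(1/p).
    return math.log2(1.0 / p_under_model)

# Hypothetical observation: one particular 100-symbol sequence over a 4-letter alphabet.
# Its probability differs by model, so its bit value differs by model too.
models = {
    "uniform, independent symbols": (1 / 4) ** 100,
    "structured model that favours this sequence (made up)": 1e-10,
}

for name, p in models.items():
    print(f"{name}: {info_bits(p):.1f} bits")
# Same sequence, different reference model, different figure -- like quoting
# x = 500 yards relative to one origin and x = 20 yards relative to another.
```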
@nightlight #109
But if we want to know how much CSI is actually associated with an event produced by natural processes, we have to know the actual probability of the occurrence of that event. But in order to know the actual probability of the occurrence of the event, we need to know the actual chance-based process that caused the occurrence of the event in the first place, since this is what the probability calculation must be based on in order to have any relevance to reality.
You are perfectly correct here, but only as far as you go. What you have arrived at is the realization that the amount of “information” (which is the ‘I’ in ‘CSI’) is a relative quantity, like saying coordinate x of some object is 500 yards, which only means that object is 500 yards away from the arbitrarily chosen origin x=0 of the coordinate system (see earlier post on this point).
Well, actually, this is just another example of the terminology being somewhat confusing. In this context, Dembski defines "information" simply as the elimination of possibilities, the reduction of uncertainty, or the realization of one possibility to the exclusion of all others. When it comes to CSI, the inclusion of the term "information" in this specific context seems simply to be a way of signaling that we're dealing with an actualized reality out of a sea of possibility. It's not really a variable in the CSI calculation, which is why I focused my comment on the issues of complexity (high improbability) and specification. To use your 500 yards analogy, it's not that the 'I' in CSI is '500 yards away' from something. Rather, it's that the 'C' is '500 yards away' from something. But not simply from some single, arbitrarily chosen landmark. The 'C' is '500 yards away' (i.e. highly improbable) from (i.e. relative to) any known relevant chance process that might conceivably be able to explain it.
But then for some “mysterious” reason you pulled back, stopping short of the next natural reasoning step. We can solve the “mystery” if we follow up your truncated reasoning just few more steps where it will become clear why you had to abruptly halt it. Following up in your own words, is there a way to be certain that you truly “know the actual process” that “caused the occurrence of the event” ? In fact, you have no way of knowing that, unless perhaps you can prove that you are an omniscient being.
Can we truly know the actual process that caused the occurrence of the event? Umm, well, that depends now, doesn't it? If the process was actually observed causing the event then yes, of course we can. And that's what I was talking about in relation to meeting Barry's challenge (and the reason why he said no question-begging was allowed). However, what you seem to be trying to get at here is that even when a high CSI calculation results from all known, relevant chance hypotheses proposed to explain an event, ID can't conclusively use that fact to infer design with irrevocable certainty. Well, gee, welcome to the party. Nobody says you can. The inference is always subject to future falsification if some new naturalistic process is discovered and proposed as an explanation and the event in question turns out not to have a high CSI value on that new chance hypothesis. But the mere logical possibility of that happening does not mean that we should forever refrain from inferring a best explanation based on the current state of our knowledge, which is what a design inference is.
Consequently, what you are really saying about large CSI of the structures in live organisms that you computed is: “if God were as smart as I am presently, he would have needed to input this amount of CSI into the construction of this structure.”
No, what is being said is that, based on the entirety of human knowledge and experience up to this point in history, it is far, far more probable that this event is the result of design than that it is the result of some natural process. It is not a matter of putting "this amount of CSI into" an event, because that statement isn't even coherent. It translates to saying the designer, whether God or whoever else, put "this amount of high improbability into this event" that happens to match an independent specification. That's a nonsensical statement, because an event that is the product of intent is not improbable ... it is only improbable with reference to chance hypotheses that might be proposed to account for it.

Furthermore, a design inference proposes an ultimate explanation for some phenomenon, not a methodological one. So if it turned out that the event happened because it was intended to happen, but what allowed that intent to come to fruition was the existence of these undiscovered, hypothetical, unfathomably efficient, universe-ruling algorithms you're so fond of, consisting of a few brilliantly simple lines of code into which the information to cause the desired event was front-loaded, it would still be true that the event was a product of design, and at multiple levels no less, and so the design inference would still be valid.

As far as I can tell, the rest of your comment seems to be a mixture of misunderstanding the nature of a design inference and asserting that everything is reducible to natural laws; though laws that seem to be governed by brilliantly designed but so-far undiscovered algorithms into which the information for every desired effect is front-loaded.

HeKS
October 10, 2014 at 1:55 PM PDT
#110 Thomas2
you are (almost wilfully) missing the elephant in the room. If you don't think Dembski's done it, how would you propose that we can consistently and reliably detect design
I have no issue with using informal anthropomorphic metaphors such as design, intelligence, and consciousness in informal theological or philosophical discussions, and as personal heuristics. The problem is with Discovery Institute's ID misbranding such informal chit-chat as natural science that one should teach in science courses. What makes it much worse is the explicitly anti-scientific nature attributed to the 'intelligent agency' -- the 'intelligent agency' of DI's ID is a capricious being, apparently jumping in and out at its whim, to allegedly improve upon and fix this or that "inadequacy" of "natural" processes, to "help natural processes" solve some "irreducibly complex" design puzzle that otherwise stumped those "natural processes". This messy, scientifically incoherent picture offered by DI's ID is the result of hopelessly entangling, intertwining and conflating epistemological categories (our present knowledge and models of the processes in the universe, i.e. what DI calls "natural laws") with ontological categories (the real processes, possibly unknown, operating in the universe).

The actual scientifically valuable contribution of CSI detection and quantification in biology is that it reveals the inadequacy of the simple-minded neo-Darwinian algorithm, random mutation + natural selection, to account for the observed features of biological systems. An additional key contribution is the (no free lunch) realization that probabilistic models based on initial & boundary conditions satisfying simple distribution functions (such as Gaussian, Poissonian, Binomial, etc.) are inadequate for modeling not only the evolution of life, but also the origin of life and the fine tuning of the presently known physical laws for life. That inadequacy indicates that those initial and boundary conditions (IBC) are much more subtle than was imagined and are not expressible at all in terms of simple probabilistic distribution functions.

The next, more general (non-probabilistic) type of IBC is not some capricious anti-scientific 'intelligent agency' of DI's ID that sits outside of it all and jumps in and out at its whim, but rather the result of algorithmic processes performed by a computational substratum that underpins our presently known physical laws. The research seeking to uncover and reverse engineer/decompile these underlying computational processes and their algorithms is in fact well under way on multiple fronts, as sketched in an earlier post (general overview & links here).

nightlight
October 10, 2014 at 1:01 PM PDT