
HeKS strikes gold again, or, why strong evidence of design is so often stoutly resisted or dismissed


New UD contributor HeKS notes:

The evidence of purposeful design [–> in the cosmos and world of life]  is overwhelming on any objective analysis, but due to Methodological Naturalism it is claimed to be merely an appearance of purposeful design, an illusion, while it is claimed that naturalistic processes are sufficient to achieve this appearance of purposeful design, though none have ever been demonstrated to be up to the task. They are claimed to be up to the task only because they are the only plausible sounding naturalistic explanations available.

He goes on to add:

The argument for ID is an abductive argument. An abductive argument basically takes the form: “We observe an effect, x is causally adequate to explain the effect and is the most common [–> let’s adjust: per a good reason, the most plausible] cause of the effect, therefore x is currently the best explanation of the effect.” This is called an inference to the best explanation.

When it comes to ID in particular, the form of the abductive argument is even stronger. It takes the form: “We observe an effect, x is uniquely causally adequate to explain the effect as, presently, no other known explanation is causally adequate to explain the effect, therefore x is currently the best explanation of the effect.”

Abductive arguments [–> and broader inductive arguments] are always held tentatively because they cannot be as certain as deductive arguments [–> rooted in known true premises and using correct deductions step by step], but they are a perfectly valid form of argumentation and their conclusions are legitimate as long as the premises remain true, because they are a statement about the current state of our knowledge and the evidence rather than deductive statements about reality.

Abductive reasoning is, in fact, the standard form of reasoning on matters of historical science, whereas inductive reasoning is used on matters in the present and future.

And, on fair and well-warranted comment, design is the only actually observed, and needle-in-haystack search-plausible, cause of functionally specific complex organisation and associated information (FSCO/I), which is abundantly common in the world of life and in the physics of the cosmos. Summing up diagrammatically:

[Figure: CSI (complex specified information) definition diagram]

Similarly, we may document the inductive, inference-to-best-current-explanation logic of the design inference in a flow chart:

[Figure: the design inference explanatory filter flowchart]

Also, we may give an iconic case, the protein synthesis process (noting the functional significance of proper folding),

[Figure: overview of protein synthesis]

. . . especially the part where proteins are assembled in the ribosome, based on the coded algorithmic information in the mRNA tape threaded through it:

[Figure: protein translation at the ribosome]

And, for those who need it, an animated video clip may be helpful:

[youtube aQgO5gGb67c]
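For those who prefer a concrete toy illustration of what "coded algorithmic information" in the mRNA tape means, here is a minimal sketch in Python. It is only an illustration: the codon table below is deliberately partial (the real genetic code has 61 sense codons plus 3 stop codons), and the example mRNA string is made up for the demo.

    # A deliberately tiny subset of the 64-codon genetic code -- just enough
    # entries to run the toy example below.
    CODON_TABLE = {
        "AUG": "Met",  # methionine; also the start signal
        "UUU": "Phe", "GCU": "Ala", "AAA": "Lys", "GGC": "Gly",
        "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
    }

    def translate(mrna):
        """Read an mRNA string three letters at a time, mimicking the ribosome's
        stepwise reading of the tape: start at AUG, add one amino acid per codon,
        halt at a stop codon."""
        start = mrna.find("AUG")
        if start == -1:
            return ""                                        # no start codon, no protein
        chain = []
        for i in range(start, len(mrna) - 2, 3):
            residue = CODON_TABLE.get(mrna[i:i + 3], "???")  # placeholder for codons not in the toy table
            if residue == "STOP":
                break
            chain.append(residue)
        return "-".join(chain)

    print(translate("AUGUUUGCUAAAUAA"))   # Met-Phe-Ala-Lys

The point of the sketch is simply that the cell's protein-assembly step is code plus algorithm: a symbol table, a reading frame, a start, an elongation loop and a halt condition.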

So, instantly, we may ask: what is the only actually — and in fact routinely — observed causal source of codes, algorithms, and associated co-ordinated, organised execution machinery?

ANS: intelligently directed contingency, aka design, where there is no good reason to assume, imply or constrain such intelligence to humans.

Where also, FSCO/I, or even the wider Complex Specified Information, is not an incoherent mish-mash dreamed up by silly, brainwashed or machiavellian IDiots trying to subvert science and science education by smuggling in Creationism while lurking in cheap tuxedos. Instead, the key notions and the very name itself trace to events across the 1970s and into the early 1980s, as eminent scientists tried to come to grips with the evidence of the cell and of cosmology, as was noted in reply to a comment on the UD Weak Argument Correctives:

. . . we can see, across the 1970s, how OOL researchers not connected to design theory, Orgel (1973) and Wicken (1979), spoke on the record to highlight a key feature of the organisation of cell based life:

ORGEL, 1973: . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]

WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [ –> i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [ –> originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.]

At the turn of the ’80s, the Nobel-equivalent prize-holding astrophysicist and lifelong agnostic Sir Fred Hoyle went on astonishing record:

Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true. [Evolution from Space (The Omni Lecture [–> Jan 12th 1982]), Enslow Publishers, 1982, pg. 28.]

Based on things I have seen, this usage of the term Intelligent Design may in fact be the historical source of the term for the theory.

The same worthy is also on well-known record on cosmological design, in light of evident fine tuning:

From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of ¹²C to the 7.12 MeV level in ¹⁶O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16]

A talk given at Caltech (for which the above seems originally to have been the concluding remarks) adds:

The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn’t so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn’t give. The case of the enzymes is well known . . . If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it’s easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem – the information problem . . . .

I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe. So try as I would, I couldn’t convince myself that even the whole universe would be sufficient to find life by random processes – by what are called the blind forces of nature . . . . By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes . . . .

Now imagine yourself as a superintellect working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules? Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language supercalculating intellects use: Some supercalculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.
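As an editorial aside, Hoyle's combinatorial claim above is easy to check numerically. A minimal sketch, using only his own figures (a 200-link chain, 20 possibilities per link) together with the commonly cited rough estimate of 10^80 atoms in the observable universe, the latter being an assumption added here rather than a figure from the talk:

    # Hoyle's numbers from the quoted talk: 200 links, 20 options per link.
    arrangements = 20 ** 200            # all possible orderings of such a chain
    atoms_in_universe = 10 ** 80        # commonly cited rough estimate (assumption)

    print(len(str(arrangements)))                              # 261 digits, i.e. about 10^260
    print(arrangements // atoms_in_universe > 10 ** 150)       # True: dividing by the atom count still leaves > 10^150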

These words in the same talk must have set his audience on their ears:

I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [“The Universe: Past and Present Reflections.” Engineering and Science, November, 1981. pp. 8–12]

So, then, why is the design inference so often so stoutly resisted?

LEWONTIN, 1997: . . . to put a correct view of the universe into people’s heads we must first get an incorrect view out . . . the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [–> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting] . . . .

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [–> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [–> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [Billions and billions of demons, NYRB Jan 1997. If you imagine that the above has been “quote mined” kindly read the fuller extract and notes here on, noting the onward link to the original article.]

NSTA BOARD, 2000: The principal product of science is knowledge in the form of naturalistic concepts and the laws and theories related to those concepts [–> as in, Phil Johnson was dead on target in his retort to Lewontin, science is being radically re-defined on a foundation of a priori evolutionary materialism from hydrogen to humans] . . . .

Although no single universal step-by-step scientific method captures the complexity of doing science, a number of shared values and perspectives characterize a scientific approach to understanding nature. Among these are a demand for naturalistic explanations [–> the ideological loading now exerts censorship on science] supported by empirical evidence [–> but the evidence is never allowed to speak outside a materialistic circle so the questions are begged at the outset] that are, at least in principle, testable against the natural world [–> but the competition is only allowed to be among contestants passed by the Materialist Guardian Council] . . . .

Science, by definition, is limited to naturalistic methods and explanations and, as such, is precluded from using supernatural elements [–> in fact this imposes a strawman caricature of the alternative to a priori materialism, as was documented since Plato in The Laws, Bk X, namely natural vs artificial causal factors, that may in principle be analysed on empirical characteristics that may be observed. Once one already labels “supernatural” and implies “irrational,” huge questions are a priori begged and prejudices amounting to bigotry are excited to impose censorship which here is being institutionalised in science education by the National Science Teachers Association Board of the USA.] in the production of scientific knowledge. [[NSTA, Board of Directors, July 2000. Emphases added.]

MAHNER, 2011: This paper defends the view that metaphysical naturalism is a constitutive ontological principle of science in that the general empirical methods of science, such as observation, measurement and experiment, and thus the very production of empirical evidence, presuppose a no-supernature principle . . . .

Metaphysical or ontological naturalism (henceforth: ON) [“roughly” and “simply”] is the view that all that exists is our lawful spatiotemporal world. Its negation is of course supernaturalism: the view that our lawful spatiotemporal world is not all that exists because there is another non-spatiotemporal world transcending the natural one, whose inhabitants—usually considered to be intentional beings—are not subject to natural laws . . . .

ON is not part of a deductive argument in the sense that if we collected all the statements or theories of science and used them as premises, then ON would logically follow. After all, scientific theories do not explicitly talk about anything metaphysical such as the presence or absence of supernatural entities: they simply refer to natural entities and processes only. Therefore, ON rather is a tacit metaphysical supposition of science, an ontological postulate. It is part of a metascientific framework or, if preferred, of the metaparadigm of science that guides the construction and evaluation of theories, and that helps to explain why science works and succeeds in studying and explaining the world. Now this can be interpreted in a weak and a strong sense. In the weak sense, ON is only part of the metaphysical background assumptions of contemporary science as a result of historical contingency; so much so that we could replace ON by its antithesis any time, and science would still work fine. This is the view of the creationists, and, curiously, even of some philosophers of science (e.g., Monton 2009). In the strong sense, ON is essential to science; that is, if it were removed from the metaphysics of science, what we would get would no longer be a science. Conversely, inasmuch as early science accepted supernatural entities as explainers, it was not proper science yet. It is of course this strong sense that I have in mind when I say that science presupposes ON. [In, his recent Science and Education article, “The role of Metaphysical Naturalism in Science” (2011) ]

In short, there is strong evidence of ideological bias and censorship in contemporary science and science education on especially matters of origins, reflecting the dominance of a priori evolutionary materialism.

To all such, Philip Johnson’s reply to Lewontin of November 1997 is a classic:

For scientific materialists the materialism comes first; the science comes thereafter. [Emphasis original.] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

. . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [Emphasis added.] [The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

Please, bear such in mind when you continue to observe the debate exchanges here at UD and beyond. END

Comments
Thomas2: The problem with the comment @61 is that he misunderstands the concept of specification. The mere existence of a physical object, whether the rock or the imprint it leaves in mud, does not constitute a specification. Furthermore, an object does not contain information by its mere existence. (See my detailed posts on that issue.) The only information contained in nightlight's scenario @61 is the information that is (i) created by an intelligent being, (ii) as a result of examining the physical object with tools and instruments of investigation, and (iii) encoded in some kind of language. Once that information is created by the intelligent being (not by the rock) then, yes, that information can constitute a "specification" -- meaning it has some function, or meaning, or references something outside of itself. The specification exists in the information created by the intelligent being, not in the object itself. nightlight is completely wrong in thinking that the kind of information that exists in the cell -- functional, representative information -- has anything to do with the kind of "information" he imagines exists in a rock rolling down a hill and leaving an impression in the mud. It is a massive category mistake and a fatal logical error. Eric Anderson
KF @ #137 Interesting exposition. You will appreciate that I was addressing the issues at a conceptual level rather than an analytical one, but I have also appreciated the more detailed treatments from you and others. I know that NL somewhat blinds himself to the weaknesses of his own position by an apparently rigid prior commitment to an unsustainable philosophical position (together with a consequent misplaced and unhelpful disdain for IDers in general and DI-IDers in particular), but nevertheless he makes some interesting points and I would also be interested in what your own response to NL @ #61 would be. Thomas2
EA, a communication system always implies additional info and functionally specific complex organisation; the FSCO/I estimate on the coding is thus a conservative count. KF kairosfocus
NL & Thomas2 (also, attn EA): Over the past few days, here we were side-swiped by a storm now turned serious hurricane headed for Bermuda (I hope it misses . . . ). Power and comms were lost for a while, and there's some catch-up. I note first that T2 is quite correct that the design inference in itself is not a universal design detector much less a designer detector. Nor is it a universal decoder. Given the limitations underscored by theory of computation, that is no surprise. Having noted that, NL, functionally specific, complex organisation and associated information (FSCO/I) -- which BTW is what is relevant to the world of cell based life per WmAD's note in NFL that in life forms specification is cashed out as function, and what was put on the table by Orgel and Wicken in OOL research in the '70's -- is observable and in principle measurable. In fact, three years ago, between VJT, Paul Giem and the undersigned, we put together a metric model that builds on Dembski's ideas and which is pretty direct and simple: Chi_500 = I*S - 500, bits beyond the solar system threshold I is any reasonable info-carrying capacity metric, defined most easily on the chain of structured Y/N questions required to specify functional configs in a space of possible configs (cf. ASCII strings or AutoCAD files . . . EA, I will come to you soon). Yes it is contextual, but so is any reasonable informational situation. It takes 7 bits to specify an ASCII character (bit # 8 is a parity check), and 16 a Unicode one, even if they look the same, as the latter comes from a much broader field. S is a so-called dummy variable connected to observations. Its default is 0, corresponding to high contingency being assumed to be due to chance. This is tied to the null hyp idea, that if blind chance can reasonably account for something you cannot justify rejecting it. But, on objective warrant for functional specificity, S goes to 1. If S = 0, the metric is locked at - 500 bits. If S = 1, you need 500 bits plus worth of complexity for the metric to go positive. That is what is required for something on the gamut of our solar system of 10^57 atoms interacting or acting at fast chem rates of 10^14/s, for 10^17 s to be credibly beyond the reach of chance discovering an island of function of that degree of complexity. In effect, give each atom in our solar system a tray of 500 coins, and flip them and read the pattern every 10^-14 s. For the duration of the solar system to date. Such a set of observations (a blind search of the config space for 500 bits) would in effect sample as one straw-size to a cubical haystack 1,000 light years across. About as thick as our barred-spiral galaxy at its central bulge; off in the direction of Sagittarius, which is now up in the night sky. If such were superposed on our galactic neighbourhood, we could say with all but utter logical certainty, we could certainly say with empirical reliability tantamount to practical certainty, that such a search strategy will be practically infeasible and will predictably fail. We would reliably only pick up straw never mind many thousands of stars etc. Similarly, precisely because of the strict demands of right components in the right places and overall arrangement and coupling to achieve specific function -- text in English, a steam turbine, a watch etc -- FSCO/I will come as very narrow zones in a much broader space of possibilities. That is why FSCO/I is astonishingly resistant to search strategies relying on blind chance and/or mechanical necessity. 
And, that is why, as an empirical fact on trillions of observations, it is a highly reliable sign of design as cause. That is, of intelligently, goal-oriented, directed configuration. Designers have knowledge, skill and creative capacity, as a matter of observation; i.e. designers (of whatever ultimate source) are possible. Design is a process that uses intelligence to configure components to achieve a desired functional end. Thus, we are epistemically entitled, as a matter of induction [specifically abductive inference to best explanation] to infer design as the causal process that best accounts for FSCO/I, absent empirical observation otherwise. And, to let the metaphysical chips fall and lie where they will. So, when we see a watch in a field, we easily infer design. Even if the watch has the additional property Paley proposed, of self replication, that would be additional FSCO/I. Likewise, when we see C-Chemistry, aqueous medium, molecular nanotech using cells that use codes and algorithms to effect a kinematic von Neumann self replication facility, we have excellent inductive reason to infer to design. And, on the plain testimony of Lewontin and others, including Dawkins, it is plain that on the contrary to the all too common turnabout accusation, it is a priori materialists who are begging big metaphysical questions and who in the teeth of the sort of analysis just outlined are inferring to chance of the gaps as magic-working demi-urge. Oops, a demi-urge is a designer; it is blind chance of the gaps. So, the talking points deployed above at 128 fall flat, NL. Time for fresh thinking. KF PS: EA, it is that possibility of 3-d info record that led me to broaden to FSCO/I several years ago, and to point out that per AutoCAD etc, discussion on coded strings is WLOG. You may wish to read here on in context, note point iii and onward links. PPS: Let me add from the same IOSE introsumm page: >> In simple terms, noted ID Scientist William Dembski, argues:
We know from experience that intelligent agents build intricate machines that need all their parts to function [[--> i.e. he is specifically discussing "irreducibly complex" objects, structures or processes for which there is a core group of parts all of which must be present and properly arranged for the entity to function (cf. here, here and here)], things like mousetraps and motors. And we know how they do it -- by looking to a future goal and then purposefully assembling a set of parts until they’re a working whole. Intelligent agents, in fact, are the one and only type of thing we have ever seen doing this sort of thing from scratch. In other words, our common experience provides positive evidence of only one kind of cause able to assemble such machines. It’s not electricity. It’s not magnetism. It’s not natural selection working on random variation. It’s not any purely mindless process. It’s intelligence . . . . When we attribute intelligent design to complex biological machines that need all of their parts to work, we’re doing what historical scientists do generally. Think of it as a three-step process: (1) locate a type of cause active in the present that routinely produces the thing in question; (2) make a thorough search to determine if it is the only known cause of this type of thing; and (3) if it is, offer it as the best explanation for the thing in question. [[William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, pp. 20-21, 53 (InterVarsity Press, 2010). HT, CL of ENV & DI.]
Philosopher of Science Stephen Meyer similarly argues the same point in more detail in his response to a hostile review of his key 2009 Design Theory book, Signature in the Cell:
The central argument of my book is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question . . . . In order to [[scientifically refute this inductive conclusion] Falk would need to show that some undirected material cause has [[empirically] demonstrated the power to produce functional biological information apart from the guidance or activity of a designing mind. Neither Falk, nor anyone working in origin-of-life biology, has succeeded in doing this . . . .
He then gives even more details, with particular reference to the origin of cell-based life:
The central problem facing origin-of-life researchers is neither the synthesis of pre-biotic building blocks (which Sutherland’s work addresses) or even the synthesis of a self-replicating RNA molecule (the plausibility of which Joyce and Tracey’s work seeks to establish, albeit unsuccessfully . . . [[Meyer gives details in the linked page]). Instead, the fundamental problem is getting the chemical building blocks to arrange themselves into the large information-bearing molecules (whether DNA or RNA) . . . . For nearly sixty years origin-of-life researchers have attempted to use pre-biotic simulation experiments to find a plausible pathway by which life might have arisen from simpler non-living chemicals, thereby providing support for chemical evolutionary theory. While these experiments have occasionally yielded interesting insights about the conditions under which certain reactions will or won’t produce the various small molecule constituents of larger bio-macromolecules, they have shed no light on how the information in these larger macromolecules (particularly in DNA and RNA) could have arisen. Nor should this be surprising in light of what we have long known about the chemical structure of DNA and RNA. As I show in Signature in the Cell, the chemical structures of DNA and RNA allow them to store information precisely because chemical affinities between their smaller molecular subunits do not determine the specific arrangements of the bases in the DNA and RNA molecules. Instead, the same type of chemical bond (an N-glycosidic bond) forms between the backbone and each one of the four bases, allowing any one of the bases to attach at any site along the backbone, in turn allowing an innumerable variety of different sequences. This chemical indeterminacy is precisely what permits DNA and RNA to function as information carriers. It also dooms attempts to account for the origin of the information—the precise sequencing of the bases—in these molecules as the result of deterministic chemical interactions . . . . [[W]e now have a wealth of experience showing that what I call specified or functional information (especially if encoded in digital form) does not arise from purely physical or chemical antecedents [[--> i.e. by blind, undirected forces of chance and necessity]. Indeed, the ribozyme engineering and pre-biotic simulation experiments that Professor Falk commends to my attention actually lend additional inductive support to this generalization. On the other hand, we do know of a cause—a type of cause—that has demonstrated the power to produce functionally-specified information. That cause is intelligence or conscious rational deliberation. As the pioneering information theorist Henry Quastler once observed, “the creation of information is habitually associated with conscious activity.” And, of course, he was right. Whenever we find information—whether embedded in a radio signal, carved in a stone monument, written in a book or etched on a magnetic disc—and we trace it back to its source, invariably we come to mind, not merely a material process. Thus, the discovery of functionally specified, digitally encoded information along the spine of DNA, provides compelling positive evidence of the activity of a prior designing intelligence. This conclusion is not based upon what we don’t know. 
It is based upon what we do know from our uniform experience about the cause and effect structure of the world—specifically, what we know about what does, and does not, have the power to produce large amounts of specified information . . . . [[In conclusion,] it needs to be noted that the [[now commonly asserted and imposed limiting rule on scientific knowledge, the] principle of methodological naturalism [[ that scientific explanations may only infer to "natural[[istic] causes"] is an arbitrary philosophical assumption, not a principle that can be established or justified by scientific observation itself. Others of us, having long ago seen the pattern in pre-biotic simulation experiments, to say nothing of the clear testimony of thousands of years of human experience, have decided to move on. We see in the information-rich structure of life a clear indicator of intelligent activity and have begun to investigate living systems accordingly. If, by Professor Falk’s definition, that makes us philosophers rather than scientists, then so be it. But I suspect that the shoe is now, instead, firmly on the other foot. [[Meyer, Stephen C: Response to Darrel Falk’s Review of Signature in the Cell, SITC web site, 2009. (Emphases and parentheses added.)]
Thus, in the context of a pivotal example -- the functionally specific, complex information stored in the well-known genetic code -- we see laid out the inductive logic and empirical basis for design theory as a legitimate (albeit obviously controversial) scientific investigation and conclusion. >> kairosfocus
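As an editorial aside on the comment above: the Chi_500 expression quoted there is simple enough to sketch directly. A minimal reading, assuming the information measure I is just a bit count supplied by the user and S is a yes/no judgment of functional specificity; the search-resource figures are the ones given in the comment (10^57 atoms, 10^14 observations per second, 10^17 seconds):

    import math

    def chi_500(info_bits, functionally_specific):
        """Chi_500 = I * S - 500, per the comment above: I is an information-capacity
        estimate in bits, S is 1 only on objective warrant of functional specificity."""
        s = 1 if functionally_specific else 0
        return info_bits * s - 500

    # Solar-system search resources quoted in the comment.
    max_observations = 10**57 * 10**14 * 10**17     # = 10^88 possible blind samples
    configs_500_bits = 2 ** 500                     # ~3.3 x 10^150 possible 500-bit patterns

    print(chi_500(7 * 72, True))    # e.g. 72 ASCII characters at 7 bits each -> +4 (past threshold)
    print(chi_500(7 * 72, False))   # specificity not warranted -> locked at -500
    print(round(math.log10(configs_500_bits / max_observations)))  # ~63 orders of magnitude left unsampled

On this reading the metric is simply a thresholded bit count: all of the argumentative weight sits in how I is estimated and in the warrant for setting S to 1.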
kairosfocus @134: Excellent point about AutoCAD files -- thanks for sharing that example. I guess that would mean that to the extent we have the relevant formal computational system in place (3D graphics representation, measurement parameters, spatial orientation parameters, and so on), then we can use that "language," if you will, to represent a physical object in three-dimensional space. That should allow us to calculate the "C" part of a physical object. This, of course, is not quite the same as calculating "C" in a vacuum, but perhaps that is never possible anyway. Perhaps it is the case that "C" can only be calculated once we have set up a formal representation and measurement system. After the background system is in place, then we can calculate that we need X "amount" of information in order to be able to represent object X. A similar situation exists with "Shannon information" and any other descriptive system. Anyway, thanks for bringing that example up. It helps tie a couple of fundamental issues together in my mind. ----- Also your point about reproduction (made previously) is right on the money. Materialists like to point to reproduction as though it solves the complexity problem, when in fact it contributes to the problem. Eric Anderson
#128 nightlight – You have gone to some trouble to explain your position, but I only have time to respond in brief. First, you have not addressed my key questions how would you scientifically describe/define mindful design, and how would you scientifically detect it, reliably and predictably? (In the context of ID, I mean detect design without prior knowledge of or access to the putative designing mind). Whether the mind of the "intelligent designer" is ultimately reducible to explanation/description by scientific laws or not would be irrelevant to these questions: you know that there are conscious minds which make decisions which can be physically applied directly to nature to produce results in nature which nature cannot otherwise directly produced by mindless unconscious operations. This is the domain in which ID works, not a hypothetical domain conceivably more remote. Turning to your response, (1) there is no "theology" in the methodology of ID: the theory stands or falls on its scientific merits alone (whatever the theological motivations of IDers or the theological implications of some of its findings/predictions). And the usual meaning of "god of the gaps" is a gratuitous invocation of the direct action of God/the supernatural based solely on the absence of evidence for a "natural" explanation of some physical data. ID, on the other hand, invokes the direct or ultimate action of a designer based upon the evidence of absence of an alternative "natural" cause together with positive evidence for design. Your formulation does not appear to me to fit into that description. Regarding the tricky terms micro-evolution -v- macro-evolution, the distinction is not that some parallel operations in nature are the proximate result of lawful activity and some the proximate result of design, but that design (macro-evolution) underpins law (micro-evolution) – perhaps comparable to the case of your chess programme designer and the chess programme’s subsequent playing. However, this point is really beside the point: what we are concerned with is nature as we find it, not as we insist it must be to in order to accord with some prior conceived metaphysical or theological theory or belief which we have imposed on nature or on our methods for understanding nature. Dembski's ID theory attempts to unequivocally describe the properties of an intelligently designed entity (ie, provide a law describing it) in nature where these properties can be exclusively identified when present. Where those properties are exclusively found, a design inference and can be reliably be made (and, I personally would say, an associated testable hypothesis formulated). Finally in your point (1), if the designer were capricious, so what? Dembski's proposed scientific procedure looks for consistency in the design, not the designer; (it doesn't look for the designer at all!). Considering your point (2), I have studiously tired to avoid "semantics" - that is why, in part, I asked you to provide a definition of intelligent design. 
What I am concerned with is the discernible nature of reality, and I if you don't like my wording – change it: the concept of "design" is grounded in observational reality in nature – so it should be capable of scientific definition (leading to scientific investigation); indeed, it almost seems to me that it is you who are tripping yourself up with semantics framed around an a priori refusal to recognise the scientific reality of the existence of things which are intelligently designed (assuming that I haven't misunderstood or under-appreciated your case [as I did Barry's earlier in this chain]). Concerning your argument, I don’t consider that CSI "detect[s] some objective property, such as 'design' or 'intelligence'" – rather, it is an objective property of an intelligently designed object or phenomenon, and when it is observed/evaluated to be present to the methodical exclusion of other potential sources of CSI then you can reliably make a design inference. Questions of compression relate to the measurement or quantification of CSI, and hence to the confidence in making a design inference, but it is the presence of CSI (with the methodical exclusion of other potential non-intelligent sources of CSI) which marks intelligent design. CSI measures not gaps, therefore, but confidence. In that sense it could be said, perhaps, to be measure of "inadequacy", but the confidence levels are set so high in Dembski's formulation that that should not be an issue. The rest of your response bears further examination, but, for the reason just given, I don't see that it undermines the science or reliability of Dembski’s basic method. [Post Script: on semantics: ID is a phrase with several different uses – which is an annoying weakness: when referring to Behe's or Dembski’s methodologies it is strictly scientific; when referring to arguments over MN etc, it is science philosophy; when used in its typical sound-bite definition, it is an overarching concept; or it can refer to other theories of design detection which fall within the scope of the preceding sound-bite definition; or it can refer to the movement which promotes ID; or, less frequently, it can simply be an 18th/19th/20th century philosophical phrase distinguishing the work in nature of a divine author/architect/creator from work in nature which isn't; or it simply refers to the act of a mind in planning a design, or to the physical result of such an act. In this discussion I have limited my use to the first two and the last two, and I have attempted to make it clear which use is intended by the context I within which I have employed the term on each occasion. Note also that in this discussion, it is only in cases of the first usage that I claim that methodologically it is strictly scientific.] Thomas2
HeKS: There is a bridge from one to the other of the two kinds of specified complexity you see, once functionality that depends on particular complex organisation is in play. That's why I have ever so often highlighted that discussion on strings is WLOG, once one reckons with say an AutoCAD dwg file for a system. That is, we may exactly describe a cluster of well matched, properly arranged, correctly coupled components through in effect a structured chain of Y/N -- one-bit of info-carrying capacity -- questions. Obviously, there is room for variation, as engineered systems must live with tolerances, clearances and whatnot. Hence, islands of function in a larger space of possible configurations. (And yes, configuration spaces are closely related to phase and state spaces in Math, Physics and control engineering.) Function, of course, is observed, and in this context may be inferred from performance and points to purpose and intelligently directed contrivance. As say Paley highlighted 200 years ago using the example of stumbling across a watch in a field. Paley also properly answered the but it replicates itself objection, by envisioning a time-keeping, self replicating watch as a possibility/thought exercise in Ch II of his Nat Theol . . . something I have yet to see dismissive objectors answer seriously on the merits. (Yes, looks a whole lot like a 200 year strawman argument!} This is an ADDITIONAL, extremely complex function, requiring in effect recording itself and the steps to assemble itself, as well as the effecting machinery. He of course used the language of what we would call mechanical analogue computing, where cam bars and followers store and "read" programs once set to spinning in synchronisation. (That's how the C18 automatons such as writing or game playing "robots" worked.) From about 1948, Von Neumann brought this to the digital world by envisioning the kinematic self-replicating machine with a "blueprint" stored in a control tape in effect. Which is of course what AutoCAD or the like does. But with implication of a language, algorithms, communication subsystems and controlled execution machinery. The living cell does all of this using molecular nanotech. Based on C-Chemistry, aqueous medium molecular nanomachines in a gated, encapsulated metabolising automaton. Where Yale-lock style prong height patterns are used to store info, e.g. in the classic 3-letter D/RNA codons such as AUG which means both start and load with methionine. Further code points are elongate and add amino acid XXX, then we have STOP. Where halting is a major algorithm design challenge. After this we look at chaperoned folding (note more stable but non functional 3-d shapes forming prions) to functional shape not simply deducible from AA sequences. Where just to fold properly we have deeply isolated fold domains in AA chain space within wider organic chemistry, and thousands of such with a large number being of a very few members. The notion that blind search mechanisms tracing to chance forces and those of blind mechanical necessity acting on the gamut of our solar system or the observed cosmos did that is maximally implausible. The ONLY empirically observed, needle in haystack search plausible candidate is design. But, we are dealing with an absolutist ideology utterly committed to their being no design in life or the cosmos beyond what their just so stories allow them to accept as emerging. 
That is, ideology and worldview level question-begging evolutionary materialist a prioris dominate in the academy, science institutions and the media as well as education, leading to locking in absurdities as orthodoxy. Evolutionary materialist orthodoxy. That's why in replying to Lewontin, Philip Johnson observed:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
Our job is to simply hold ground as a force in being that cannot be crushed by power enforcing absurdity. Often, in the face of misunderstanding, confusion, willful misrepresentation, no concession to IDiots attitudes, expulsion, smears, slander and abuse of institutional or even legal power. Eventually, the obvious truth wins because of its inherent merits. And the sort of desperate selective hyperskepticism we are seeing is a good sign. Amazingly "skeptic" circles are having to debate this, in the face of sex abuse scandals and demands to disbelieve victims unless they can provide absolute proof. The answer I see from women and others is that they make ordinary not extraordinary claims so only ordinary warrant is needed. All they need at first level is to ask themselves soberly, what is the ordinary explanation for FSCO/I, and why is that so for trillions of test cases without exception? Then, at second level, they need to ask, why are they making the distinction and demand an arbitrarily high and unreasonable degree of warrant for what they are disinclined to accept that they don't where something is not at stake. Then they can see that adequate, reasonable and consistent standards of warrant for matters of empirical fact or cause are what we need and can justify. But, when passions are engaged, such will be a struggle. But ever, we must struggle to live by reason not passion and bad habits of passion. KF All of this means that language, codes, kairosfocus
Sounds like a question from Drumming 101 Joe
Nightlight, what makes a symbol a symbol? Upright BiPed
@Eric Anderson #124
HeKS: For example, when we compare a string of random characters and a Shakespearean sonnet, everyone can tell that there is an important difference between the two. And they can tell it immediately, without ever running any kind of mathematical calculation. That is because they are assessing the string at the “S” level, not the “C” level.
Agreed.
Indeed, it does not even matter whether the sonnet is more “complex” than the string of random characters. As long as the sonnet is adequately complex, which is readily apparent from a quick glance.
I think this is true, but I also think there's room for confusion here, because there could be two legitimate uses and measures/indicators of "complexity" here. On the one hand, a section of text meets the requirements for the more common meaning of "complex", which is, "consisting of many well-matched parts". We have individual letters that work together to form words, words to form sentences, sentences to form paragraphs, etc. Typically, meaning starts at the level of the word, consisting of multiple letters, but the meanings of many words placed together give us a kind of system that produces a concept that is in some way more than the sum of its parts. Furthermore, the context of the words working together can alter the meaning of the individual words, or even whole phrases, as in the case of idioms. So when we have a section of text we have a kind of self-contained little system made up of many well-matched parts, and therefore complex, and it happens to function to impart a meaning. On the other hand, the section of text at the scale of the individual letters forms a string that is "complex" in the sense of being "improbable", and this is something that can be calculated. I think there can also be further confusion as to the meaning of "information". Like I said earlier, when Dembski uses the term "information" in the context of CSI, he simply means the actualizing of some possibility to the exclusion of all others and the ensuing reduction of uncertainty. However, when dealing with a section of text, we have a different sort of "information" which is semiotic. It is specified according to the rules of, say, English grammar, for the purpose of imparting a message. So there are sort of two different contexts in which it makes perfect sense to use the term "Complex Specified Information", but between the two contexts, the only word that retains a consistent meaning is "specified".
So in the vast majority of cases in the real world we never even need to do a calculation of the “C” in order to determine design. Indeed, in some cases a calculation of “C” itself is quite challenging.
Yes. It almost seems like the best and most reliable method of design detection is for a human to look at something in its context apart from any governing ideology and determine whether they think it's designed. Weird, huh? :)
Part of the reason we tend to focus on strings of letters and Shannon information and Kolmogorov complexity and so forth is that we can deal with extremely simple systems and extremely simple calculations. Try, however, calculating the “amount” of complexity in my car’s transmission, for example — it is no small task.
Yes, and this is why in many (most?) cases it would probably be impossible to calculate an exact CSI value for a designed object. First of all, you'd need to recognize that it is a different type of CSI being measured than that being measured in relation to chance hypotheses, because the "C" would refer to 'many well-matched parts' rather than 'improbability'. And second, there is evidently no consistent way to measure that kind of complexity that is valid across all domains. I was talking to Winston Ewert about this, trying to figure out if there might be some way to consistently measure an amount of "complexity" across all multi-part systems, and an idea occurred to me: What if the complexity of a multi-part system could be measured by determining the amount of entropy/disorder that would be expected in a randomly occurring system of that size and then measuring the degree of divergence from that expected level of entropy in the system in question? The degree of divergence could then be a consistent form of measurement in any multi-part system because in any given case it would be a measure of the divergence from expectation in that particular system were it randomly occurring. Of course, keep in mind that, as I've said, I'm not a math guy at all and I wouldn't really know how you'd go about doing this. That said, after I had come up with that concept I started looking up some stuff on complexity and found that I had arrived at something pretty close to a method of measuring complexity that is already in use:
Predictive information (Bialek et al., 2001), while not in itself a complexity measure, can be used to separate systems into different complexity categories based on the principle of the extensivity of entropy. Extensivity manifests itself, for example, in systems composed of increasing numbers of homogeneous independent random variables. The Shannon entropy of such systems will grow linearly with their size. This linear growth of entropy with system size is known as extensivity. However, the constituent elements of a complex system are typically inhomogeneous and interdependent, so that as the number of random variables grows the entropy does not always grow linearly. The manner in which a given system departs from extensivity can be used to characterize its complexity. - (http://www.scholarpedia.org/article/Complexity)
I think that it might be worthwhile to pursue this idea, though I'm probably not in the best position to run with it considering my lack of a math background.
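A rough sketch of the idea HeKS floats above, under simplifying assumptions that are editorial rather than his: each "part" is modelled as a binary symbol, the extensive baseline is k times the single-symbol entropy, and entropies are plug-in estimates from observed frequencies. Note that this toy measure flags any statistical interdependence among parts, Wicken's simple "order" as much as "organization", so it is not by itself a design detector:

    import math
    import random
    from collections import Counter

    def block_entropy(seq, k):
        """Empirical Shannon entropy (bits) of the length-k blocks of seq."""
        blocks = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
        n = len(blocks)
        return -sum((c / n) * math.log2(c / n) for c in Counter(blocks).values())

    def extensivity_gap(seq, k):
        """How far the k-block entropy falls below the extensive (independent-parts)
        prediction k * H_1: near zero for i.i.d.-looking sequences, large when the
        parts are strongly interdependent."""
        return k * block_entropy(seq, 1) - block_entropy(seq, k)

    random.seed(0)
    random_seq = [random.randint(0, 1) for _ in range(2000)]
    ordered_seq = [i % 2 for i in range(2000)]              # 0,1,0,1,... simple algorithmic order

    print(round(extensivity_gap(random_seq, 8), 2))         # close to 0
    print(round(extensivity_gap(ordered_seq, 8), 2))        # close to 7 bits: the 8-blocks collapse to 2 types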
Thus, there are two related, but slightly different, criticisms that are often brought up against CSI but which, I believe, both miss the mark: First, we have some critics who demand a mathematical calculation of CSI itself, which reflects a misunderstanding of the “S” part of the concept. (Incidentally, the term “specification” is hard for some people to grasp; in most cases, thinking of the “S” as “substantive” is just as good.)
Yeah, as I've also said before, I don't consider myself an expert on this issue, but I've spent some time looking into it and having a lengthy discussion with Ewert to get, I think, a good grasp on the methodology and logic. But at the start, I thought maybe it would be possible to measure some kind of "degree of specification", but after thinking about it for a bit I realized that this would actually just amount to a further measure of improbability relative to the smaller space of possibilities matching the specification and that this would only be possible if the specification had an incredibly precise "ideal match" inside of a larger generalized match, so that we could measure how close the pattern under consideration was to the target specification relative to all the other configurations that would fall within the general range of match possibilities but were further away from the ideal target match. But this just isn't likely to be the case. For example, what string of English text would be an ideal match to the English language? None that I can think. Or at least none that would be capable of conveying meaning. And so, like you've said, the 'S' is not a calculable amount. There is either a match or there isn't.
Second, we have some critics (such as Elizabeth Liddle) who argue that if we cannot make a precise calculation of all improbabilities — with all parameters, with all particles involved, at an exhaustive level of detail, such as, say, the probabilities related to abiogenesis — then we cannot calculate the “C” with absolute precision and, therefore, the claim goes, we cannot conclude that we are dealing with CSI. This latter criticism is essentially a demand for omniscience, coupled with a head-in-the-sand denial of the way we actually draw inferences of complexity all the time in the real world.* * In fact, this approach of Liddle’s is much more nefarious. In practice it operates as a science-stopper in that it asserts we cannot make any progress or draw any reasonable inferences until we know absolutely everything. It is a refusal to even consider the possibility of design until we know everything.
Yes, this seems to be the standard approach. The design inference is not a design deduction. It cannot be made with absolute certainty and nobody that I've seen claims it can. Rather, it is the determination that design is overwhelmingly the most plausible explanation for some object, event, pattern, etc. And yet, many critics attempt to attack it as though it is claimed that the design inference is held with absolute, irrevocable certainty, and they insist that because we can't weigh every logically possible naturalistic explanation, both known and unknown, it is NEVER justified to infer that an intelligent cause is the best explanation based on everything known at this point in time. It is a naturalism-of-the-gaps argument that insists we assume, contrary to all evidence and the trend of evidence, that everything is explicable by reference to natural causes. And, of course, if we never find that naturalistic explanation we should eternally remain in a state of self-imposed ignorance rather than appeal to the one cause we know is capable of producing the effect, namely, intelligence. Liddle does it. Nightlight does it. Guillermoe does (did?) it. It's all the rage.
Anyway, I’m not outlining this so much for you, as I think you’ve laid out things in pretty good detail. Just wanted to flesh out some thoughts a bit.
It's appreciated. I'm interested in all the discussion and information on this issue that I can get (and have time for). HeKS HeKS
Eric at 127, I'm looking forward to it. Mung at 118, I'll have my people call your people. HeKS at 117, His/her model is incoherent. The bluster helps conceal that fact. Upright BiPed
You are first given two strings of symbols
Two strings of wha? What is a symbol? What makes a symbol a symbol? Upright BiPed
#121 Thomas2
(1) Whether the actions of an "intelligent agency" are capricious or not, Dembski's method only detects "intelligent design" in certain circumstances: that doesn't mean that "design" is only present in those certain circumstances.
There are two major problems with that statement:

1) DI's ID definitely does not use CSI in the weaker sense you suggest, since they constantly talk about micro-evolution, which is supposedly the result of "natural laws", and macro-evolution, which is supposedly intelligently designed or guided. That's an exemplary instance of 'god of the gaps' theology.

2) Even your weaker CSI semantics is fundamentally flawed (the DI's strong CSI semantics is grossly flawed, though). Namely, CSI does not detect some objective property, such as 'design' or 'intelligence', of the observed phenomena; rather, it merely detects how inadequate some theory is in describing the observed data, i.e. CSI measures the inadequacy of a given theory about observed phenomena, not the phenomena proper.

I think the most transparent and ideologically neutral description of the CSI concept is in terms of compression and algorithmic complexity (this is based on Rissanen's MDL principle from the 1970s, which itself is based on 1960s algorithmic complexity), so let me explain the issue with #2 in those (MDL) terms. You are first given two strings of symbols:

* D = represents the data of the observed phenomena,
* S = specification for D

The combined (concatenated) string X = S + D is compressible because the symbols of S predict (to some degree) the symbols of D. For concreteness, let's also use a simple example -- let D be the actual results of 1000 'coin tosses' (1 or 0 digits for heads and tails) that some stage magician claims he can telekinetically control with 100% accuracy with his mind. To prove that, he writes down his prediction string S with 1000 symbols 0 and 1, before the coin tosses. Then he feeds the prediction S into his magic machine, which then "mechanically" tosses a coin 1000 times onto the table built into it, recording the 1000 results. Experiments show that his predictions are always correct. Therefore the combined string of 2000 bits, X = S + D, is compressible, since it has 50% redundancy (you only need to receive the first 1000 bits of string X to reconstruct, i.e. to decompress, the entire 2000 bits of X). Let's call CX the compressed form of X, which will have length Length(CX) = 1000 bits.

Suppose now you have a theory T, an algorithm for simulating the tossing machine (via the physics of coin tosses) which aims to explain the observed process. You input into T the specification S, then run your simulated tossing algorithm T and get as output a string TD(S) of 1000 bits, which is the theoretical prediction by theory T of the outcomes D under specification S. Now, analogously to string X above, we form a combined string Y = S + TD(S) of 2000 bits. Then we run a compression algorithm on Y and get the compressed string CY, which has some Length(CY). Now we define:

Gap(X,Y) = Length(CY) - Length(CX)

A theory T which fully explains the process has Gap(X,Y) = 0. Any theory which explains it incompletely will yield a non-zero gap. If, for example, theory T just simulates random tosses, then string Y will be incompressible, hence Gap(X,Y) will be 1000 bits. (Dembski in his CSI definition sets the maximum gap allowed by the number of tries provided by the known universe to 120 bits or something of that order, but we won't use that aspect here.) If algorithm T can simulate the physics with tighter control of the toss outcome, then it can get more correct tosses (matches with specification S) than a random toss, which has a 50% chance per toss; hence string Y will be somewhat compressible, e.g. Length(CY) may be 1400 bits, and thus Gap(X,Y) will be 400 bits.
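[–> For concreteness, here is a minimal Python sketch of the gap computation just described. It is only an illustration, not part of the comment itself: zlib stands in for an ideal compressor, so the lengths come out in bytes, include a little header overhead, and only roughly approximate the idealized 1000-bit figure.]

    import random
    import zlib

    def clen(bits):
        # Compressed length, in bytes, of a string of '0'/'1' characters.
        return len(zlib.compress(bits.encode("ascii"), 9))

    random.seed(0)

    # S: the magician's 1000 predicted outcomes (the specification).
    S = "".join(random.choice("01") for _ in range(1000))
    # D: the actual tosses; the magician is always right, so D matches S exactly.
    D = S
    # TD: what a theory T of fair, unbiased tosses outputs (it ignores S entirely).
    TD = "".join(random.choice("01") for _ in range(1000))

    X = S + D    # observed combined string -- 50% redundant, compresses well
    Y = S + TD   # theory's combined string -- no redundancy between the halves

    print("Length(CX):", clen(X), "bytes")
    print("Length(CY):", clen(Y), "bytes")
    print("Gap(X,Y): ", clen(Y) - clen(X), "bytes, i.e. roughly", (clen(Y) - clen(X)) * 8, "bits")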
It is the size of this gap that allegedly detects intelligence behind the machine (process). Namely, if the theory T simulates a tossing machine which makes random unbiased tosses, there will be a large Gap(X,Y) of 1000 bits, indicating that the tossing machine was rigged (by an 'intelligent agency', the stage magician) to match its outcomes to the specification S. Alternatively, one can view the Gap(X,Y) as a measure of the inadequacy of theory T, which seeks to model the tossing process. Either way, the Gap(X,Y) merely measures the discrepancy between the theory T of the phenomena (the theoretical predictions of the tossing machine results) and the phenomena (the actual results of the tossing machine). Hence the Gap(X,Y) is not a property of the machine itself, but merely a property of a relation between theory T of the machine and the machine. The existence of a non-zero gap merely tells us that theory T is not how the machine works.

Hence, the problem #2 with your statement is that even your weaker semantics for CSI (which is proportional to the above gap, modulo Dembski's resource threshold R) interprets the Gap(X,Y) as a property of the object itself (of the tossing machine), while it is actually only a relation ('distance') between your theory and the object. Hence, even if you insist on an anthropomorphic characterization of the Gap as a measure of "intelligence", the most you can say from a positive non-zero gap is that the 'tossing machine' is more "intelligent" than your theory T. Yet, even your weaker CSI semantics asserts (in effect) that a nonzero Gap measures the "intelligence" of the machine unconditionally, i.e. it claims that the Gap ("intelligence") is the sole property of the machine. That is the previously mentioned confusion between the map and the territory that DI's ID injected into the subject, i.e. elevating the epistemological entity Gap (which measures how bad your theory of the object is) into an ontological entity (a property of the object itself). While you distanced yourself from the capricious part-time "intelligent agency" of DI's ID, you (along with apparently most others here at UD) have still been taken in by their more fundamental sleight of hand.

In ideologically neutral language, all you can really say from the presence of a non-zero Gap (or CSI with Dembski's threshold) is that the machine (universe) doesn't work as theory T assumes, plus that the Gap value quantifies (in Rissanen's MDL language) how bad the theory is. Dembski's CSI work was focused on showing the inadequacy of the neo-Darwinian algorithm (random mutation + natural selection) in explaining biological structure in the cell, and it successfully demonstrated that the neo-Darwinian algorithm is highly inadequate in explaining the observed structures. The same method was also applied to the theories of the origin of life and the fine tuning of physical laws, albeit with much less impact, since these theories are recognized as being much weaker than neo-Darwinism (where the 'science is settled', allegedly). In any case, all those results show is that existing biological theories are inadequate in explaining observed biological artifacts. The CSI method doesn't yield anything dramatic in physics or chemistry, since those theories are far more polished and better tuned to the observations than biological theories.

Discovery Institute's ID mischaracterizes the above epistemological distinctions in quality between different theories as ontological distinctions, by attributing them to nature itself, i.e. it labels as "nature" the phenomena for which the present human theories are accurate by CSI criteria (physics, chemistry), while phenomena for which present human theories are inaccurate (such as biology) are somehow guided by some "intelligent agency" which is outside of "nature". Even more absurdly, this "intelligent agency" allegedly sneaks into the "nature" every now and then to help molecules arrange some other way than what the dumb "nature" (the phenomena that physics and chemistry can model accurately) was doing with them on its own. Yeah, sure, that's how it all works, since it makes so much sense. nightlight
UB @125:
ID proponents and opponents alike have wasted hundreds of thousands of words (mistakenly) attempting to calculate the CSI of objects that are not even information to begin with.
Good way of putting it. Incidentally, I've got a short post already in the works that I hope in some small way may help people think through this issue. Maybe I can get it up tomorrow or Monday . . . Eric Anderson
F/N: There is entirely too much strawman-tactic, hyperskeptical dismissiveness about. I suggest NL et al cf here as to exactly how a metric in bits beyond a complexity threshold, set relative to the atomic and temporal resources of the solar system or the observed cosmos, can be developed. It is also closely linked to what we mean when we speak of what a Word file's size is about, and is also connected to the discussion on functionally specific complex information. Chi_500 = I*S - 500, bits beyond the solar system threshold, where I is an information capacity metric (e.g. how many yes/no questions must be answered to specify the state of an entity), and S is a variable that defaults to 0 but, on objective warrant for functional specificity, is set to 1. KF kairosfocus
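[–> To make the metric concrete, a minimal Python sketch follows. The 300-residue protein and the log2(20) ≈ 4.32 bits-per-position capacity figure are illustrative assumptions, not values given in the comment above.]

    import math

    def chi_500(i_bits, s_flag):
        # Chi_500 = I*S - 500: bits beyond the 500-bit solar-system threshold.
        # i_bits: information-capacity measure in bits (how many yes/no questions
        #         must be answered to specify the state of the entity).
        # s_flag: defaults to 0; set to 1 only on objective warrant for
        #         functional specificity.
        return i_bits * s_flag - 500

    # Illustrative case: a 300-residue protein at log2(20) bits of raw capacity
    # per position, judged functionally specific (s_flag = 1).
    print(chi_500(300 * math.log2(20), 1))   # ~797 bits beyond the threshold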
The “S” part is not, in my view, calculable, as some kind of mathematically-based construct.
Exactly. ID proponents and opponents alike have wasted hundreds of thousands of words (mistakenly) attempting to calculate the CSI of objects that are not even information to begin with. But when an analysis is made of genuine information, then the object under analysis is invariably the representation ... and representations CAN HAVE NO calculable physical relation to the objects that establish their specification. Upright BiPed
HeKS: Further to #123, I should add that when people demand a mathematical calculation of "CSI" it is evidence that they do not understand what CSI is. This is part of the disconnect we often find when discussing different strings of characters. For example, when we compare a string of random characters and a Shakespearean sonnet, everyone can tell that there is an important difference between the two. And they can tell it immediately, without ever running any kind of mathematical calculation. That is because they are assessing the string at the "S" level, not the "C" level. Indeed, it does not even matter whether the sonnet is more "complex" than the string of random characters, as long as the sonnet is adequately complex, which is readily apparent from a quick glance.

So in the vast majority of cases in the real world we never even need to do a calculation of the "C" in order to determine design. Indeed, in some cases a calculation of "C" itself is quite challenging. Part of the reason we tend to focus on strings of letters and Shannon information and Kolmogorov complexity and so forth is that we can deal with extremely simple systems and extremely simple calculations. Try, however, calculating the "amount" of complexity in my car's transmission, for example -- it is no small task. The upshot of this is that in the real world we rarely even do a precise calculation of "C" to determine whether we have enough complexity. To be sure, it is useful for us to be able to do the calculations on simple strings of characters, or sequences of nucleotides or amino acids, in order to show that there is a solid foundation behind the principle, to give real examples of what we are talking about, and to provide some kind of baseline for the amount of complexity needed. But calculating "C" in other areas is much more challenging, even when the "C" is obviously there. Note: I believe that, in principle, it is possible to precisely calculate "C". However, coming up with precise parameters and knowing all the physical aspects that need to be calculated is, in some cases, no small task.

Thus, there are two related, but slightly different, criticisms that are often brought up against CSI but which, I believe, both miss the mark:

First, we have some critics who demand a mathematical calculation of CSI itself, which reflects a misunderstanding of the "S" part of the concept. (Incidentally, the term "specification" is hard for some people to grasp; in most cases, thinking of the "S" as "substantive" is just as good.)

Second, we have some critics (such as Elizabeth Liddle) who argue that if we cannot make a precise calculation of all improbabilities -- with all parameters, with all particles involved, at an exhaustive level of detail, such as, say, the probabilities related to abiogenesis -- then we cannot calculate the "C" with absolute precision and, therefore, the claim goes, we cannot conclude that we are dealing with CSI. This latter criticism is essentially a demand for omniscience, coupled with a head-in-the-sand denial of the way we actually draw inferences of complexity all the time in the real world.*

-----

Anyway, I'm not outlining this so much for you, as I think you've laid out things in pretty good detail. Just wanted to flesh out some thoughts a bit.

-----

* In fact, this approach of Liddle's is much more nefarious. In practice it operates as a science-stopper in that it asserts we cannot make any progress or draw any reasonable inferences until we know absolutely everything.
It is a refusal to even consider the possibility of design until we know everything. Eric Anderson
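[–> A minimal Python sketch of the point about "C" versus "S": assuming a 27-symbol alphabet (26 letters plus a space), a line of a sonnet and a same-length string of gibberish have exactly the same raw capacity in bits, so the calculable "C" side cannot by itself distinguish them; the difference lies at the "S" level. The example strings are illustrative only.]

    import math
    import random
    import string

    def capacity_bits(s, alphabet_size=27):
        # Raw information-carrying capacity: length * log2(alphabet size).
        # This is a "C"-side measure only; it says nothing about specification.
        return len(s) * math.log2(alphabet_size)

    line = "Shall I compare thee to a summers day"
    random.seed(1)
    noise = "".join(random.choice(string.ascii_lowercase + " ")
                    for _ in range(len(line)))

    print(round(capacity_bits(line)))   # ~176 bits of capacity ...
    print(round(capacity_bits(noise)))  # ... and the same ~176 bits for gibberish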
HeKS @120:
No, no, no! You do not calculate the CSI (the high improbability of the specificity) of an event that was designed. That would be calculating the probability, or rather improbability, of something that was done intentionally, which is incoherent. Probabilities are not calculated on intentional events. The probabilities are used to weigh whether the event in question might be something other than designed. This is why the design inference is the last step in the process. Once you have calculated the CSI on the basis of all relevant chance hypotheses and found that it is high on all of them, you eliminate chance as a likely explanation and you are done with the CSI calculations. You cannot calculate a specific CSI value for a designed event. You can only calculate specific CSI values for an event in relation to relevant chance hypotheses.
Well said. I would just add as a minor clarification, if I might, that we don't calculate "CSI." Ever. The "C" part can be calculated, per various parameters, none of which need be a be-all-and-end-all, but need to be adequate to give us confidence that we have eliminated certain possibilities. I think you've stated that elsewhere, so just wanted to confirm. The "S" part is not, in my view, calculable, as some kind of mathematically-based construct. It is a recognition of what we see in the real world around us (and what we ourselves feel and do as intelligent beings), namely, it captures our recognition of the reality of such things as purpose, intent, goals, function, meaning and so forth. The substance, if you will, behind the complexity. Eric Anderson
HeKS:
You do not calculate the CSI (the high improbability of the specificity) of an event that was designed.
You cannot calculate a specific CSI value for a designed event.
But as confusing as it can be, it is also understandable because, for example, it is perfectly sensible to speak of something like DNA having Complex Specified Information even after making a design inference IF one is using the term “Complex” according to its more common meaning of “having many well-matched parts”.
I'm going to let these statements be the last word in the debate. I enjoyed discussing this with you. (I mean that sincerely -- you're quite bright, articulate, and good-natured.) R0bb
#115 Nightlight – Sadly, your response is pretty well what I expected. In response:

(1) Whether the actions of an "intelligent agency" are capricious or not, Dembski's method only detects "intelligent design" in certain circumstances: that doesn't mean that "design" is only present in those certain circumstances. It's not that there are two kinds of design, but that design, by Dembski's method, can only be reliably inferred in certain circumstances. [It is no different in this respect from Newton's Law of Gravitation, or the models I use every day in my work to describe and predict hydraulic behaviour.] And for those circumstances it proposes a uniform law-like description of the signs that conscious, deliberate, wilful, mindful activity is involved.

(2) If you think my definitions/descriptions of "design" are unscientific, then please propose a more scientific one. "Intelligent Design" is real in nature (whether it be ultimately reducible exclusively to necessity and/or chance or not); it is not a mere anthropomorphic or philosophical concept. So define "design" scientifically, and then propose a scientific test for it.

(3) The DI's approach and Dembski's method do not, as evidence for design, grasp or invoke gaps between our theories and our knowledge: they invoke the positive evidence of specified complexity, but limit any design inference based upon recognising specified complexity to those cases where, from a scientific analysis, specified complexity cannot alternatively have been the result of mere law-like or chance processes. [Note that in the grand scheme of things "law-like" and "chance" behaviours may themselves be ultimately reducible to "design" (rather than the other way round); the DI's approach and Dembski's method simply do not pre-judge these issues: they deal with nature as it is found, not as we may presuppose it must be – hence their approach in principle is properly scientific.]

Whilst noting your distinction between epistemological and ontological categories, I would have thought that a good epistemology should be capable of getting the best attainable grasp on ontology, and that science is ultimately about describing and explaining nature as it actually is, not according to a preconceived and arbitrary framework (such as metaphysical naturalism). Your take on CSI is truly interesting (and challenging), but in the larger picture you appear to be constrained by a philosophical straitjacket which attempts to tell reality how it should be rather than trying to reliably determine how it is.

You have a conscious mind: you think, you are aware of thinking, and you can think about yourself thinking; you can also make decisions, plan things and follow through on those plans such that parts of the physical world become changed from what they would otherwise have been if you had not so planned and acted, and if they had been left to chance and necessity alone. In short, your mind can design things which will result in different outcomes in nature from what would have happened to those parts of nature if left to undirected or law-like processes. "Intelligent Design" is therefore a reality in your own experience: so if you think that the DI/Dembski approach is scientifically faulty, please present a genuinely scientific definition of design and a reliable method for detecting it. Thomas2
@R0bb 119
That isn’t a misunderstanding of the challenge — it doesn’t refer to the challenge at all. It’s simply a consequence of your understanding of what it means for something to produce N bits of CSI. But what I said is ambiguous, so I’ll say it differently: If we know that EVENT_X is designed, how do we calculate the CSI in EVENT_X? According to your reasoning, we have to calculate it based on the hypothesis of design. But nobody has ever calculated CSI in this way, and Dembski says that it doesn’t make sense.
No, no, no! You do not calculate the CSI (the high improbability of the specificity) of an event that was designed. That would be calculating the probability, or rather improbability, of something that was done intentionally, which is incoherent. Probabilities are not calculated on intentional events. The probabilities are used to weigh whether the event in question might be something other than designed. This is why the design inference is the last step in the process. Once you have calculated the CSI on the basis of all relevant chance hypotheses and found that it is high on all of them, you eliminate chance as a likely explanation and you are done with the CSI calculations. You cannot calculate a specific CSI value for a designed event. You can only calculate specific CSI values for an event in relation to relevant chance hypotheses. Again, I explained all this in my last comment and your counter-challenge is incoherent and seems to be based on a continued misunderstanding of terminology that you should not still have after reading my last post.
Conversely, in Scenario 2, if we know for a fact that EVENT-X was actually caused by HYP-B, then EVENT-X would actually exhibit 10,000 bits of CSI and we could then say that natural processes produced 10,000 bits of CSI in this instance. This follows from your assumption that the amount of CSI exhibited by an event is defined in terms of the actual probability of the event, i.e. the probability given the actual process that caused the event. My point continues to be that you can’t find anything in the ID literature to support this assumption. I invite you again to provide a reference.
I've already explained all this. What I've said follows as a matter of basic logic and you've seen Winston Ewert, the very person you originally cited, confirm that I was "exactly right" on this. So are we now back to some assertion that Ewert just doesn't understand all this stuff? I will say that I think your confusion on this issue is not entirely your own fault. Different proponents of ID have sometimes used the term Complex Specified Information in different contexts. But as confusing as it can be, it is also understandable because, for example, it is perfectly sensible to speak of something like DNA having Complex Specified Information even after making a design inference IF one is using the term "Complex" according to its more common meaning of "having many well-matched parts". In this case, one would be using CSI as a descriptive term for one or more features of a system rather than as a calculated value of the system's improbability on chance hypotheses. And if this is what one means, that a system has many well-matched parts, that it matches an independent specification, and that it has some kind of semiotic dimension, what descriptive term could be more apt than "Complex Specified Information"? Personally, I think this is the more intuitive context in which to use the term CSI, which is why I think it would be more helpful if the CSI related to improbability was renamed for clarity to replace the "complex" with "highly improbable" or something of that nature. HeKS
Your conclusion about how to reword Barry’s challenge might as well have come right out of thin air.
It did -- I made it up. It's a counter-challenge, and I'm sorry I didn't make that clear. Because of your interpretation of what it means for something to produce N bits of CSI, you can't meet this counter-challenge. But according to typical ID claims, it should be a no-brainer. So how do you account for that contradiction?
Probabilities of its occurrence if it had been caused by some different process are irrelevant to the actual probability or improbability of its occurrence.
I agree 100%, and I've never claimed or implied otherwise.
Conversely, in Scenario 2, if we know for a fact that EVENT-X was actually caused by HYP-B, then EVENT-X would actually exhibit 10,000 bits of CSI and we could then say that natural processes produced 10,000 bits of CSI in this instance.
This follows from your assumption that the amount of CSI exhibited by an event is defined in terms of the actual probability of the event, i.e. the probability given the actual process that caused the event. My point continues to be that you can't find anything in the ID literature to support this assumption. I invite you again to provide a reference.
It is only under those circumstances that we can get an actual measure of the CSI associated with the event, because it is the only way we can get an actual rather than purely hypothetical measure of the improbability of the event, which is a calculation that is entirely dependent upon the chance process that actually brought it about.
You're saying that if we don't know what actually caused an event, we can't get an actual measure of the CSI associated with the event. Again, I invite you to present anything from the ID literature to support this claim.
Nothing in any of this suggests that “to determine whether an agent has produced CSI, you have to do the calculation in terms of the correct hypothesis [of] design.” This is simply a complete misunderstanding of the nature of the challenge, which is about demonstrating that a natural process is capable of producing a large amount of CSI.
That isn't a misunderstanding of the challenge -- it doesn't refer to the challenge at all. It's simply a consequence of your understanding of what it means for something to produce N bits of CSI. But what I said is ambiguous, so I'll say it differently: If we know that EVENT_X is designed, how do we calculate the CSI in EVENT_X? According to your reasoning, we have to calculate it based on the hypothesis of design. But nobody has ever calculated CSI in this way, and Dembski says that it doesn't make sense. So can you meet the counter-challenge? Show me one example – just one; that’s all I need – of an intelligent agent creating 500 bits of complex specified information. R0bb
Upright BiPed, If you haven't yet patented the spleen-vent do I have your permission to do so? Mung
@Upright BiPed I know, right? At this point every comment seems to consist of mischaracterizing some aspect of the design inference, asserting that everything is reducible to some kind of law (whether we know about it or not), talking about hypothetical universe-governing algorithms that are front-loaded (without addressing where these algorithms come from, or where the information that is front-loaded into them comes from, or how we might be able to reason out the answer to those questions), and then throwing in some kind of talk about theology and how it is 'fuzzy wuzzy'. I get the sense that I'm wasting my time. Again. HeKS
Nightlight, just imagine how much more potent your rhetorical spleen-venting would be if you just had the stomach to actually address the evidence of design - without all the hot air. The simple fact of the matter is that you can't do it. The coherence would eat your lunch. Thus, the foot stomping certainty is a must. Upright BiPed
#114 Thomas2
A) That means its range of applicability is limited: it doesn't hold that the designer's actions are capricious and occasional, only that the occasions when they can be reliably detected by this method are limited [ie, design in nature may be far more widespread and uniform than Dembski's method can detect]. B) What scientific method would you prefer for detecting design?
Even in the span of one post you end up in self-contradictory statements, which are the result of the Discovery Institute's superfluous, anti-scientific concoction wrapped around the pro-scientific CSI method for detecting and quantifying lawfulness (or compressibility of raw phenomena data).

Namely, if according to your proposition (A) the 'intelligent agency' (designer) is not capricious, coming in and out of the "nature" to help out "natural laws" at its whim, then his actions are present in everything at all times. Hence, there aren't two separate kinds of artifacts or signs of the designer, his 'dumb chores' (i.e. the dumb "nature", the part compliant with "natural laws") and his 'intelligent designs' (the part not compliant with "natural laws") that needs to be specially detected as distinct from his 'dumb chores'. Yet in your proposition (B) you ask precisely for a way to detect such a distinction between the two kinds of artifacts or signs, which according to your proposition (A) shouldn't exist.

A coherent position, if one wishes to use the anthropomorphic terms "design" and "intelligent agency", is that everything is intelligently designed and upheld in its lawful operation at all times and in all places by the same intelligent agency, active throughout, leaving no gaps. There are not two of them, one dumb and the other smart, or alternatively one which is sometimes asleep (at which time the "nature" and "natural laws" are doing the 'dumb chores' of the universe) and sometimes awake and getting involved with its "intelligent designs" to help the dumb "nature", which in its stupidity always gets stuck at some "irreducibly complex" puzzle while the 'intelligent agency' was asleep. That incoherence and cognitive dissonance between the valuable CSI foundation and DI's superfluous contraption on top of it is perfectly illustrated in your own post, between positions A and B, in trying to defend the DI's ID.

The real detection of intelligence is detection of lawfulness (or compressibility or comprehensibility of nature), which is what the CSI method actually detects and quantifies (in bits). The lawfulness that manifests in already known natural laws thus indicates underlying intelligence all by itself. That's how Newton, Maxwell, Einstein and other great scientists saw it, recognizing the mind of God in the lawfulness of nature expressed as mathematically elegant and beautiful equations that captured the subtle patterns and regularities in the observed phenomena.

When CSI is applied to neo-Darwinism as the model for observed biological complexity, the CSI detects that the neo-Darwinist algorithm (random mutation + natural selection) leaves a very large gap between the amount of lawfulness it predicts and the amount of observed lawfulness in biological artifacts. Hence, the neo-Darwinist algorithm is not how nature actually works to produce biological complexity, since it is missing the real pattern (unsurprising for a 19th-century relic, despite its 20th-century facelift; it should already have been scrapped and stored in a museum right next to Ptolemaic astronomy when the DNA coding was discovered in the 1950s). Similar gaps (maybe even bigger) are detected by CSI in the theories for the origin of life and the fine tuning of physical laws for life.

This is unfortunately the very place where the Discovery Institute's ID took a wrong turn, parting ways with science and logic, debasing the CSI method as well as the otherwise nice terms 'intelligence' and 'design' in the muck it laid on top of them.
Namely, DI's ID grasped onto those gaps and declared -- there it is, that's where the 'intelligent agency' (God) did his work, in those gaps, while the "nature" did the rest (the 'dumb chores') via "natural laws". But those gaps are not gaps between the real nature (with its real natural laws) and the observed phenomena. They are gaps between our present theories of those phenomena (or theories of nature) and the actually observed phenomena (the way nature actually works). There cannot be any gap or conflict between how nature really works and the observed phenomena, since the latter is merely one manifestation of the former.

The DI's ID has in effect mischaracterized epistemological categories (our present theories about biological complexity and natural laws) as ontological categories (as "nature" and the way that this "nature" works on its own, without the help of the transcendental 'intelligent agency' which is outside of this fictitious "nature" and which intervenes in this fictitious "nature's" workings only now and then, outside of "natural" laws and purely at its whim). Through this sleight of hand, the epistemological gaps were dressed up and recast as ontological gaps, leaving a bit of empty space in this misrepresented ontological realm for another ontological entity, the "intelligent agency" of DI's ID.

So, this "strategy" is easily recognizable as the same old 'god of (ever shrinking) gaps' that the more traditional religions burned their fingers on centuries ago. This is why official Catholics, Orthodox Christians and Jews wouldn't touch DI's ID with a ten-foot pole. Despite this rejection and shunning by those who should by all reason have been their natural allies, the Discovery Institute and its supporters are doubling down on their wrong turn, bumbling merrily toward the cliff, as if itching to learn those same lessons again, the hard way. nightlight
#111 Nightlight - My own concept of the designer is not that which you attribute to the DI, and I doubt if it is Dembski's either: what Dembski's proposal does is provide a "law" for describing/detecting intelligent design (the planned intentional output of, ultimately, a wilful conscious thinking process/mind), [not designers], reliably without making false positive identifications. That means its range of applicability is limited: it doesn't hold that the designer's actions are capricious and occasional, only that the occasions when they can be reliably detected by this method are limited [ie, design in nature may be far more widespread and uniform than Dembski's method can detect]. It's thus not unlike Newton's law of gravitation, which mathematically describes the gravitational behaviour of masses without either sourcing or explaining gravity, and does so in a way which is only reliable over a limited or "capricious" range. What scientific method would you prefer for detecting design? [Claiming that "design" isn't a coherent or scientific category would be evasion, not a valid answer. Darwin and Dawkins understood/understand the concept of design: it was/is not their argument that it does not exist, but that in biological systems it is only apparent and better explained by certain natural processes. So, how would you reliably identify it scientifically?] Thomas2
#112 HeKS
In this context, Dembski defines "information" simply as the elimination of possibilities, the reduction of uncertainty, or the realization of one possibility to the exclusion of all others.
That's fine too, since no one knows what the "all other possibilities" are. You always have to make an assumption, and that's what gives you the probability p, hence information as log(1/p). That's in essence no different than having to select the origin x=0 before you can say that some object is at x=500 yards. The absolute value CSI=500 bits has no significance on its own, just as the coordinate x=500 yards has no significance on its own. They are both relative quantities, meaningful only with respect to some previously chosen convention (the computational model for CSI or, in the analogy, the origin of the coordinate system x=0).

The usually cited CSI figures for biological systems thus only eliminate the neo-Darwinian model as the source of evolutionary CSI, since that model leaves an unaccounted-for gap of, say, 500 bits per some protein. But the neo-Darwinian model merely allows simple probabilistic distribution functions for the initial & boundary conditions (IBC) of the molecules in their "random mutation" algorithmic cog. If the real IBC (along with the physical laws) are non-probabilistic, being the result of some underlying computation, then the CSI computed relative to the IBC + physical laws assumed in the neo-Darwinian model is a useless figure.

The problem with Discovery Institute's ID (which you seem to be defending) is that from the above inadequacy of the neo-Darwinian model they leap to the far-fetched "conclusion" that no model of any sort can be adequate, hence we must reject the scientific method altogether (the pursuit of more adequate algorithmic models) and accept their anti-scientific deus ex machina "solution" -- an "intelligent agency" which every now and then, at its whim, jumps in out of "nature" to help the "natural laws" solve the puzzles of "irreducible complexity" or some such observed in biological systems, which allegedly no "natural laws" (of any kind) can ever solve.

That's a childish confusion between epistemological categories (the neo-Darwinian model for evolution and the presently known physical laws in the case of the origin & fine-tuning problems) and ontological categories (the actual algorithms and lawful processes behind the observed phenomena, which science presumes to exist, whatever the part we presently know may be). The DI's ID essentially promotes those transient epistemological entities (the present knowledge of natural laws and the models we could come up with so far) into the totality of lawful ontological entities ever possible. Then, since the present epistemological or scientific models fail to describe the observations, DI's ID declares that no lawful ontological entities that could be behind those phenomena can ever exist at all, and therefore some transcendental, lawless (capricious) "deus ex machina" must be involved in producing those phenomena. If present natural laws and models are inadequate for explaining observations, real natural science doesn't leap to invoke lawless entities, but maintains faith in ultimate lawfulness all the way down and seeks to find the improved lawful entities which could explain the observations. nightlight
@nightlight #109
But if we want to know how much CSI is actually associated with an event produced by natural processes, we have to know the actual probability of the occurrence of that event. But in order to know the actual probability of the occurrence of the event, we need to know the actual chance-based process that caused the occurrence of the event in the first place, since this is what the probability calculation must be based on in order to have any relevance to reality.
You are perfectly correct here, but only as far as you go. What you have arrived at is the realization that the amount of “information” (which is the ‘I’ in ‘CSI’) is a relative quantity, like saying coordinate x of some object is 500 yards, which only means that object is 500 yards away from the arbitrarily chosen origin x=0 of the coordinate system (see earlier post on this point).
Well, actually, this is just another example of the terminology being somewhat confusing. In this context, Dembski defines "information" simply as the elimination of possibilities, the reduction of uncertainty, or the realization of one possibility to the exclusion of all others. When it comes to CSI, the inclusion of the term "information" in this specific context seems to simply be a way to reference that we're dealing with an actualized reality out of a sea of possibility. It's not really a variable in the CSI calculation, which is why I focused my comment on the issues of complexity (high improbability) and specification. To use your 500 yards analogy, it's not that the 'I' in CSI is '500 yards away' from something. Rather, it's that the 'C' is '500 yards away' from something. But not simply from some single, arbitrarily chosen landmark. The 'C' is '500 yards away' (i.e. highly improbable) from (i.e. relative to) any known relevant chance process that might conceivably be able to explain it.
But then for some “mysterious” reason you pulled back, stopping short of the next natural reasoning step. We can solve the “mystery” if we follow up your truncated reasoning just few more steps where it will become clear why you had to abruptly halt it. Following up in your own words, is there a way to be certain that you truly “know the actual process” that “caused the occurrence of the event” ? In fact, you have no way of knowing that, unless perhaps you can prove that you are an omniscient being.
Can we truly know the actual process that caused the occurrence of the event? Umm, well, that depends now, doesn't it? If the process was actually observed causing the event then yes, of course we can. And that's what I was talking about in relation to meeting Barry's challenge (and the reason why he said no question-begging was allowed). However, what you seem to be trying to get at here is that even when a high CSI calculation results from all known, relevant chance hypotheses proposed to explain an event, ID can't conclusively use that fact to infer design with irrevocable certainty. Well, gee, welcome to the party. Nobody says you can. The inference is always subject to future falsification if some new naturalistic process is discovered and proposed as an explanation and the event in question turns out not to have a high CSI value on that new chance hypothesis. But the mere logical possibility of that happening does not mean that we should forever refrain from inferring a best explanation based on the current state of our knowledge, which is what a design inference is.
Consequently, what you are really saying about large CSI of the structures in live organisms that you computed is: “if God were as smart as I am presently, he would have needed to input this amount of CSI into the construction of this structure.”
No, what is being said is that, based on the entirety of human knowledge and experience up to this point in history, it is far, far more probable that this event is the result of design than that it is the result of some natural process. It is not a matter of putting "this amount of CSI into" an event, because that statement isn't even coherent. It translates to saying the designer, whether God or whoever else, put "this amount of high improbability into this event" that happens to match an independent specification. That's a nonsensical statement, because an event that is the product of intent is not improbable ... it is only improbable with reference to chance hypotheses that might be proposed to account for it. Furthermore, a design inference proposes an ultimate explanation for some phenomenon, not a methodological one. So if it turned out that the event happened because it was intended to happen, but what allowed for that intent to come to fruition was the existence of these undiscovered, hypothetical, unfathomably efficient, universe-ruling algorithms you're so fond of, consisting of a few brilliantly simple lines of code into which the information to cause the desired event was front-loaded, it would still be true that the event was a product of design, and at multiple levels no less, and so the design inference would still be valid. As far as I can tell, the rest of your comment seems to be a mixture of misunderstanding the nature of a design inference and asserting that everything is reducible to natural laws; though laws that seem to be governed by brilliantly designed but so-far undiscovered algorithms into which the information for every desired effect is front-loaded. HeKS
#110 Thomas2
you are (almost wilfully) missing the elephant in the room. If you don't think Dembski's done it, how would you propose that we can consistently and reliably detect design
I have no issue with using informal anthropomorphic metaphors, such as design, intelligence, consciousness, in informal theological or philosophical discussions, and as a personal heuristic. The problem is with Discovery Institute's ID misbranding such informal chit-chat as natural science that one should teach in science courses. What makes it much worse is the explicitly anti-scientific nature attributed to the 'intelligent agency' -- the 'intelligent agency' of DI's ID is a capricious being, apparently jumping in and out at its whim, to allegedly improve upon and fix this or that "inadequacy" of "natural" processes, to "help natural processes" solve some "irreducibly complex" design puzzle that otherwise stumped these "natural processes". This messy, scientifically incoherent picture offered by DI's ID is the result of a hopeless entanglement, intertwining and conflation of epistemological categories (our present knowledge and models of the processes in the universe, i.e. what DI calls "natural laws") with ontological categories (the real processes, possibly unknown, operating in the universe).

The actual scientifically valuable contribution of the CSI detection and quantification in biology is that it reveals the inadequacy of the simple-minded neo-Darwinian algorithm, random mutation + natural selection, to account for the observed features of biological systems. An additional key contribution is the (no free lunch) realization that probabilistic models based on initial & boundary conditions satisfying simple distribution functions (such as Gaussian, Poissonian, Binomial, etc.) are not only inadequate for modeling the evolution of life, but also the origin of life and the fine tuning of the presently known physical laws for life. That inadequacy indicates that those initial and boundary conditions (IBC) are much more subtle than was imagined and are not expressible at all in terms of simple probabilistic distribution functions.

The next, more general (non-probabilistic) type of IBC is not some capricious anti-scientific 'intelligent agency' of DI's ID that sits outside of it all and jumps in and out at its whim, but rather the results of algorithmic processes performed by a computational substratum that underpins our presently known physical laws. The research seeking to uncover and reverse engineer/decompile these underlying computational processes and their algorithms is in fact well under way on multiple fronts, as sketched in an earlier post (general overview & links here). nightlight
#109 Nightlight - Despite the really interesting stuff you can extract from CSI, I can't help thinking that you are (almost wilfully) missing the elephant in the room. If you don't think Dembski's done it, how would you propose that we can consistently and reliably detect design (I do not mean designers, who may be inaccessible to observation, but just whether something has been designed by a mind)? Thomas2
#107 HeKS
But if we want to know how much CSI is actually associated with an event produced by natural processes, we have to know the actual probability of the occurrence of that event. But in order to know the actual probability of the occurrence of the event, we need to know the actual chance-based process that caused the occurrence of the event in the first place, since this is what the probability calculation must be based on in order to have any relevance to reality.
You are perfectly correct here, but only as far as you go. What you have arrived at is the realization that the amount of "information" (which is the 'I' in 'CSI') is a relative quantity, like saying the coordinate x of some object is 500 yards, which only means that the object is 500 yards away from the arbitrarily chosen origin x=0 of the coordinate system (see earlier post on this point).

But then for some "mysterious" reason you pulled back, stopping short of the next natural reasoning step. We can solve the "mystery" if we follow up your truncated reasoning just a few more steps, where it will become clear why you had to abruptly halt it. Following up in your own words, is there a way to be certain that you truly "know the actual process" that "caused the occurrence of the event"? In fact, you have no way of knowing that, unless perhaps you can prove that you are an omniscient being. Consequently, what you are really saying about the large CSI of the structures in live organisms that you computed is: "if God were as smart as I am presently, he would have needed to input this amount of CSI into the construction of this structure." So what? What if God's IQ is different than your IQ? Shouldn't we allow for that possibility, perhaps? Wouldn't that make the actual CSI (from the actual process) different than the claimed figure?

In other words, CSI=500 bits to construct some object is as universally significant as saying the coordinate x of the object is 500 yards. Marveling at how some protein got to have CSI=500 bits is like marveling at how some rock got to have coordinate x=500 yards -- it got it because you happened to set the origin x=0 of the coordinate system 500 yards to the left of that rock, that's how. Similarly, that protein has got 500 bits of CSI because you, "HeKS", personally happened to be able to come up so far with a kind of computing system and an algorithm running on it that needs a 500-bit program (code+data) to reproduce it. Big to-do; let's trumpet that around the world.

Hence, for any quantitative CSI claim you make, you need to effectively retract it immediately with the qualifier "to the best of my (algorithmic) ingenuity". Of course, such a retraction transforms the alleged major "scientific discovery" of universal truths into merely a fanciful way to disclose the "state of your (algorithmic) ingenuity." While that disclosure may be of interest perhaps to your teacher or to your employer, it is certainly not a "scientific discovery" worth trumpeting in science courses around the world from now on. That also renders "Barry's challenge" scientifically vacuous.

But the more important (than the above vacuity) unfortunate side effect of wrapping the CSI concept in the wishful and superfluous concoctions of Discovery Institute's ID is that it buries and debases the genuinely valuable CSI findings by Dembski and others on whose research his work was built. As explained in a previous post, the real CSI finding of universal importance is that phenomena in nature are lawful or compressible, and how much lawful/compressible, i.e. they are computable using less front-loaded information than what the raw data of the phenomena would suggest. The CSI is then a way to quantify that difference, i.e. it is a way to quantify the lawfulness in nature. Note that unlike the vacuity of absolute claims like 'the rock has x=500 yards', this is a relative quantification, like saying the rock is 500 yards to the right of that cliff from which it broke off.
Of course, that finding is not only perfectly harmonious with the basic premise of natural science, the comprehensibility of nature, but it also corroborates its defining mission, which is none other than discovering nature's compression algorithms (natural laws), i.e. finding the 'go of it', as James Clerk Maxwell used to put it. nightlight
R0bb:
But to determine whether an agent has produced CSI, you have to do the calculation in terms of the correct hypothesis, namely design.
Cuz you say so or do you have a valid reason? What we do is to determine if CSI is present. That alone is evidence for design for the reason provided. That said if you or anyone else ever demonstrates that nature, operating freely, can produce CSI, the presence of CSI will no longer indicate intelligent design. Joe
@R0bb #102

Hi R0bb, I'm not really sure what's happening in the discussion between you and I, but I have to assume that there is some kind of serious, fundamental misunderstanding between us, because nothing you said in your comment has anything to do with what I said or in any way follows from it. Your conclusion about how to reword Barry's challenge might as well have come right out of thin air. Operating on the assumption that there is, indeed, some kind of fundamental misunderstanding going on here that has led to all this confusion, I'm going to try this once more, from the start, to reason this through with you. If I happen to dwell on some point you're already aware of, you'll have to forgive me, cause I don't want to chance further misunderstanding.

Now, let's start at the beginning. What does it mean to say that some object, pattern, event, system, etc. has "CSI", or Complex Specified Information? Well, it's primarily the first two words we need to concern ourselves with in terms of the methodology and logic of calculating CSI, so let's consider them individually.

Complex

The word "complex" is used in two primary senses. The first and most commonly used meaning is, "consisting of many well-matched parts". The second meaning of "complex" is, "improbable". When it comes to calculating a value of Complex Specified Information, "complex" refers to the second meaning, "improbable". [As a side point, a part of me thinks that some amount of confusion could be avoided if the name was changed from Complex Specified Information (CSI) to Highly-Improbable Specified Information (HISI).]

Now, recognizing that the "complexity" of CSI corresponds to improbability, there are a few things that are vitally important to understand.

First, it is incoherent to discuss improbability apart from a chance hypothesis. While we can talk about the probability of a quarter landing heads-up or a rolled die coming up 3, we don't talk about the probability of a person intentionally placing a quarter heads-up on a table, or of purposefully setting down a die so that it shows the number 3. These latter types of events are determined by intentional action rather than being governed in some respect by random or unforeseeable processes.

Second, improbability values do not exist in a vacuum, nor are they inherent to a pattern, event, etc. Rather, a measure of the improbability of some event, pattern, etc., is directly connected to a specific chance hypothesis that seeks to explain the event, and it is only valid in relation to that particular chance hypothesis used to make the calculation.

Let's consider a very simple example. Imagine a case where you are considering the occurrence of an event, which we'll call EVENT-X, for which two chance hypotheses, which we'll call HYP-A and HYP-B, have been offered to explain the occurrence of EVENT-X. After doing some math that I won't attempt, suppose we determine that the chances of EVENT-X happening given HYP-A as the proposed explanation are 1 in 3, while the chances of EVENT-X happening given HYP-B as the proposed explanation are 1 in 10,000. In determining this, we cannot say that EVENT-X is inherently either probable or improbable in and of itself. What we can say is that EVENT-X is probable on HYP-A, but is improbable on HYP-B.
Assuming for a moment that we don't already know what actually caused EVENT-X to occur, and assuming that HYP-A and HYP-B are the only known relevant chance hypotheses that might be able to account for the occurrence of EVENT-X, it is reasonable for us to conclude that HYP-A is the proper explanation, since the occurrence of EVENT-X is highly probable on HYP-A, while it would be highly improbable on HYP-B. But now let's change this up and say that we actually know what caused EVENT-X to occur and it really was HYP-A that got the job done. If this is the case, the occurrence of EVENT-X was not improbable, because it was actually very probable given the process that caused it. We cannot say that the occurrence of EVENT-X was actually highly improbable because the odds of it occurring would have been 1 in 10,000 if it had occurred as a result of HYP-B. The 1 in 10,000 odds have no validity apart from a calculation that assumes HYP-B was the cause, which it wasn't. But now let's change it up again, and suppose that we still know what actually caused EVENT-X to occur, but it was really HYP-B rather than HYP-A. In this case we can say that the occurrence of EVENT-X was highly improbable, because EVENT-X occurred as a result of HYP-B and the chances of it occurring on HYP-B were only 1 in 10,000. If HYP-B was the culprit, we cannot turn around and say that the occurrence of EVENT-X really wasn't improbable after all because the odds of it occurring would have been 1 in 3 if it had occurred as a result of HYP-A. Just as was the case before, the 1 in 3 odds have no validity apart from a calculation that assumes HYP-A was the cause, which it wasn't. So, to recap, while still assuming that the actual chance cause is really known, the probability or improbability of the occurrence of some event depends entirely on the chance process that actually brought it about. Probabilities of its occurrence if it had been caused by some different process are irrelevant to the actual probability or improbability of its occurrence. One cannot simply port probabilities between different chance hypotheses, nor can one smuggle the probability associated with an incorrect chance hypothesis over to the event itself to avoid the probability or improbability that is calculated on the basis of the correct chance hypothesis. Specified What does it mean to say that some event or pattern is specified? An event, pattern, object, etc. is considered to match the requirement of specification when 1) the configuration of its make-up or structure falls within a range of possibilities that is subject to a relatively simple, generalized description (i.e. the specification), and 2) where the pattern, object, event under consideration is an independent instantiation of the specification, which means that the specification itself cannot be in the causal chain of the instantiation. CSI We can now put this together to consider how the Design Inference is made and how something is determined to be an example of CSI. To say that something exhibits a high degree of CSI is to say that it constitutes a highly improbable match to an independent specification. But again, because this is a matter of probability/improbability, what it really means is that it constitutes a match to an independent pattern that would be highly improbable to arise through any known and relevant chance process. Before such a determination can be made, one must consider all known, relevant chance processes that might be capable of bringing about the pattern, event, etc. 
The only way that the pattern, event, etc. will be determined to exhibit a high degree of CSI is if it meets the necessary requirements for that designation under all relevant chance hypotheses that might be capable of explaining it. If some event exhibits very high CSI under some chance hypotheses (because of being highly improbable to occur by those processes) but exhibits little or no CSI under other chance hypotheses (because of being highly probable to occur by those processes), the event will not be considered to have passed muster and it will not be considered to exhibit a high degree of CSI. Instead, it will be assumed that the occurrence of the event is explainable by reference to one of the chance hypotheses that rendered its occurrence probable.

It is very important to understand that the event will not be considered to have a high degree of CSI simply because one or some of the proposed chance hypotheses might have led to a high calculation of CSI. To repeat, in order for an event to be determined to exhibit a high degree of CSI it needs to be highly improbable under all relevant chance hypotheses, not just some. If the event does meet all requirements - including a high degree of improbability - under all chance hypotheses, then it will be deemed to exhibit a high degree of CSI, which, once again, means that it will be deemed to be a match to (or an instantiation of) an independent specification that is highly improbable to occur by means of any known naturalistic processes. On this basis, it will be inferred that the event was a product of design.

That's how the situation plays out when the actual cause is not already known. But now let's flip things around and use our earlier example of EVENT-X on HYP-A and HYP-B to see how it would work to determine the CSI associated with EVENT-X when we already know the cause. Suppose now that we calculate the CSI associated with EVENT-X on the assumption of HYP-A as being 3 bits, but we calculate the CSI associated with EVENT-X on HYP-B as being 10,000 bits. Now let's picture two scenarios.

In Scenario 1, we know for a fact that EVENT-X is properly explained by HYP-A. How many bits of CSI do we then conclude are associated with EVENT-X? The answer is 3 bits. Why? Because the calculation of 3 bits is exclusively associated with HYP-A and is only valid under HYP-A, as it is based on a calculation of probability that is exclusively associated with and relevant to HYP-A. And if EVENT-X exhibits 3 bits of CSI and was brought about by the natural process connected to HYP-A, how many bits of CSI did a natural process produce in this instance? Again, obviously, 3 bits. We cannot appeal to the fact that HYP-B led to a calculation of 10,000 bits and claim that is how many bits of CSI natural processes really produced in this instance, because HYP-B didn't cause EVENT-X in this scenario, so we cannot smuggle over a calculation of 10,000 bits that is only valid and relevant to the discarded hypothesis (HYP-B) and which has no actual connection to reality.

Conversely, in Scenario 2, if we know for a fact that EVENT-X was actually caused by HYP-B, then EVENT-X would actually exhibit 10,000 bits of CSI and we could then say that natural processes produced 10,000 bits of CSI in this instance.
And just like before, we could not appeal to the fact that EVENT-X only exhibited 3 bits of CSI under HYP-A and claim that is the amount of CSI we should associate with EVENT-X, because we would know that EVENT-X was actually caused by HYP-B and was not caused by HYP-A, which means the 3 bits calculation was a purely hypothetical value strictly associated with a false hypothesis and has no connection to reality.

Revenge of the Challenge

In finally coming to Barry's challenge, we must properly understand the circumstances that are implied by it. And what are those circumstances? Well, it requires that we know the event we're measuring was actually caused by natural processes and that we know specifically what natural process brought about the event, pattern, object, etc. that we're calculating a CSI value on. It is only under those circumstances that we can get an actual measure of the CSI associated with the event, because it is the only way we can get an actual rather than purely hypothetical measure of the improbability of the event, which is a calculation that is entirely dependent upon the chance process that actually brought it about. If the CSI value of the event turns out to be over 500 bits when the calculation is made with reference to the chance process that was actually responsible for the event, then Barry's challenge will have been met. But if the CSI value turns out to be lower than 500 bits, the challenge will not have been met.

What one absolutely cannot do is come up with a high CSI calculation based on the assumption of a false hypothesis and then attempt to port that CSI value over to the event in order to claim that the challenge has been met. This simply doesn't work. Such a value would be completely, utterly, and absolutely irrelevant to the challenge. What it would be is simply a calculation of how much CSI would have been produced by a natural process if some other process that rendered the occurrence of the event highly improbable had actually been the one to produce it. One can imagine these kinds of scenarios all one likes, but such imaginings and hypotheticals are irrelevant to the challenge.

Nothing in any of this suggests that "to determine whether an agent has produced CSI, you have to do the calculation in terms of the correct hypothesis [of] design." This is simply a complete misunderstanding of the nature of the challenge, which is about demonstrating that a natural process is capable of producing a large amount of CSI. In order to demonstrate that a natural process can produce a large amount of CSI, the correct hypothesis obviously has to be a natural one, not one that appeals to design (if it was designed then it wasn't natural). But if we want to know how much CSI is actually associated with an event produced by natural processes, we have to know the actual probability of the occurrence of that event. And in order to know the actual probability of the occurrence of the event, we need to know the actual chance-based process that caused the occurrence of the event in the first place, since this is what the probability calculation must be based on in order to have any relevance to reality. You can't just choose whatever chance hypothesis you like because it happens to provide a high CSI calculation. The challenge is about what natural processes can actually do, not about what they might hypothetically be able to do.
If you want to meet a challenge asking you to demonstrate that natural processes can actually do something specific, then you need to demonstrate that they can actually do that specific thing. If the challenge is to show that natural processes can produce a large amount of CSI, then you can only point to events, objects, patterns, etc. that you know for a fact were produced by natural processes, which demands that you know what the actual process was that produced them. You must then show that the event, object, pattern, etc. in question is calculated to have a large amount of CSI when the calculation is made on the basis of that natural process that you know to have been the cause.

Honestly, beyond what I've written here, I don't know what else I could possibly say to make this any clearer. Take care, HeKS HeKS
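HeKS's point that the bit value is tied to the hypothesis actually responsible can be put in numbers. The following is a minimal sketch, assuming the usual convention that the information attached to an event under a chance hypothesis is -log2 of its probability under that hypothesis; the 1-in-3 and 1-in-10,000 odds are taken from the comment above, while the 3-bit and 10,000-bit figures there are illustrative placeholders rather than outputs of this formula.

```python
import math

def bits_under_hypothesis(p_event_given_hyp: float) -> float:
    """Surprisal of the event under one specific chance hypothesis."""
    return -math.log2(p_event_given_hyp)

# Probabilities of EVENT-X under the two candidate chance hypotheses.
P_EVENT_X = {"HYP-A": 1 / 3, "HYP-B": 1 / 10_000}

for hyp, p in P_EVENT_X.items():
    print(f"{hyp}: P(EVENT-X) = {p:.6f} -> {bits_under_hypothesis(p):.1f} bits")

# Only the figure computed under the hypothesis that actually produced the
# event says anything about what happened; the other value is hypothetical.
actual_cause = "HYP-A"
print("Bits chargeable to the actual cause:",
      round(bits_under_hypothesis(P_EVENT_X[actual_cause]), 1))
```

Under this convention the two hypotheses give roughly 1.6 and 13.3 bits respectively; which of those numbers applies is settled by which process actually operated, which is the whole point of the comment above.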
HeKS:
Here’s the problem: The fact that those presidents had faces did not cause the likeness of those faces to be carved into those rocks. The brute fact that George Washington had a face did not, as a matter of physical necessity, and through purely natural processes, cause the complex process to occur that resulted in a giant likeness of his face appearing on a mountainside in South Dakota.
Further, even if we granted that the face caused the likeness, there are many presidents all of whom had a face, and only four likenesses. I suppose we'll just have to wait to see if likenesses of the faces of the other presidents appear, and until then, just take it on faith that they will. There's no law against it, you know. Mung
#104 Nightlight - Noting that some things in nature are the result of intentional mindful design, Dembski has attempted to formulate and develop a scientific way of reliably describing/detecting such design. His religion-free proposal says that certain observations scrutinised and analysed in a specific manner can unequivocally justify an intelligent design hypothesis; and when such hypotheses are made, it seems to me that you should normally have material for further investigation and test in the natural world. Thus you have in Dembski's proposal a genuine natural scientific law. Your observation that the particular tool of CSI (an attempt to give improved precision and quantification to "specified complexity") actually detects lawfulness or compressibility in the observed phenomena might possibly be the case, but it does not replace the legitimate, repeatable and testable inference to a design hypothesis. [PS: When, in this context, Dembski is wearing his science hat, he does not employ "theological verbiage"; and as regards any philosophical reasoning he might use, it should be noted that without philosophy science cannot work - it is dead, going nowhere.] Thomas2
#103
Essentially ID proposes a scientific law for detecting design.
What Dembski's CSI method, stripped of the above fluff and scientifically vacuous theological/philosophical verbiage, actually detects is lawfulness or compressibility in the observed phenomena. Hence all it shows is that the observed phenomena can be computed using much smaller front loading than their raw (uncompressed) appearance would suggest. nightlight
#101 Nightlight- In a nutshell, ID as a would-be natural scientific theory states that where in nature an entity exhibits non-deterministic, appropriately statistically significant, tractable and conditionally independent specified complexity then an unequivocal intelligent design inference/hypothesis can be made, (where the resulting design hypothesis should then itself be subject to appropriate test and CSI is a particular way of defining and quantifying specified complexity). [Note that religious views don't come into it.] Essentially ID proposes a scientific law for detecting design. How is this not a form of natural science? Thomas2
HeKS, Thanks for contacting Ewert. I have to admit that I'm surprised at his answer. I would think that the ramifications of such an interpretation would be unacceptable to ID proponents. For example, consider Joe's statement in #80:
The only evidence that we have says that CSI only comes from intelligent agencies.
Joe, and every other IDist that I know, consider it uncontroversial that CSI comes from intelligent agents. But to determine whether an agent has produced CSI, you have to do the calculation in terms of the correct hypothesis, namely design. No IDist has ever done that, and Dembski argues that the concept doesn't even make sense. (See here where he says that there is "no reason to think that such probabilities make sense", and The Design Inference p. 39 where he says that explanations that appeal to design are not "characterized by probability".) So in saying that Ewert hasn't met Barry's challenge, you're consequently denying a fundamental ID claim. You're saying that the following modified version of Barry's challenge can't be met: Show me one example – just one; that’s all I need – of an intelligent agent creating 500 bits of complex specified information. That seems like an awfully high price to pay. R0bb
#98 HeKS
The fact that a method of detection seeking to identify the results of design over chance processes would be fine-tuned in a way that typically excludes, with high fidelity, the sorts of things we see happen by chance processes is unsurprising.
I see. While I was discussing whether Discovery Institute's ID is suitable as part of natural science, you were apparently talking about whether it is suitable as theological or philosophical or literary or conversational material. While there is no question that anyone can play with the semantics of CSI and weave some warm and fuzzy lines and shapes around it into passable stories in any of those other fields, there isn't a scrap's worth of natural science in any such narrative that could be legitimately taught in science class. Unfortunately, all that yarn fueled by religious zealotry (or maybe by plain fear of death in some) has completely buried under layers of muck a little gem worthy of scientific attention, which is the CSI as an intriguing mathematical abstraction yielding interesting results about the power of search algorithms and restating the older concepts of lawfulness and compressibility in the language of search algorithms.
3) If you don't see a difference between a rock making an imprint in mud that it happens to come into contact with and the creation of, say, a computer monitor, I don't think there's much I can say to help you.
I wasn't discussing what I can see or feel, but whether Discovery Institute's ID is a natural science (it's not). Namely, not everything that you or I can see or sense or feel is part of natural science. Natural science doesn't capture (as yet) the complete content of human experience. My point is that you can't just go out and peddle any odd feelings and sensations that come over you as a natural science. nightlight
nightlight:
My real point is that none of the CSI ID arguments prove that the observed phenomena of life and its evolution are not even in principle computable (e.g. by some underlying computational process), i.e. that they cannot be a manifestation of some purely lawful underlying processes.
If you are asking for a formal, deductive proof, then you are correct, we don't have proof of a negative. No-one has claimed to have such a proof. What we need to do is look at the overall weight of the evidence and draw an inference to the best explanation. For example, we have multiple, observable examples in the real world of engineering and design that are at least similar to some of the kinds of systems we see in biology. And we know that those required sophisticated and carefully coordinated programming and intelligence for their existence -- essentially across the board. And yet the best you can come up with is the assertion that there might be some unidentifiable, unknown, as-yet-undiscovered "few lines of code" that could produce everything we see in the living world? That doesn't even pass the smell test, much less come close to being the "best explanation" for living systems. You want to hold out hope for some as-yet-undiscovered natural algorithm that can produce everything? Fine, you have the prerogative to repose your blind faith wherever you wish, with the hope that at some distant day your faith will be confirmed. The rest of us prefer to look at the actual evidence on the ground today to see what the best explanation is. (Actually, it is much worse than that, because there are excellent reasons to affirmatively conclude that such a natural algorithm is not possible, even in principle.) So far the only examples you've been able to come up with are a couple of simplistic and poor analogies. You keep harping on chess, for example. Yet a chess program is written by intelligent beings, operating on an intelligently-designed operating system, on intelligently-designed hardware. There is nothing purely naturalistic about it. Furthermore, the fact that it can often beat its creator tells us nothing about whether it has somehow gone beyond its initial programming to create new things. The reason a computer can beat me at chess is because the program is set up to take advantage of what computers are stupendously good at: running myriad calculations per second and tracking possibilities. We are duly impressed by the speed and extent of its calculations and its ability to track move possibilities, but it isn't doing anything special beyond what it was programmed to do. You haven't provided any evidence that a simple algorithm could, even in principle, produce all the design we see in life. So as we look to draw a reasonable inference about best explanation for life, we have a stark contrast: Your proposal has no real-world examples and is based on a hoped-for future discovery of some undefined, unknown, heretofore unseen algorithm in nature. In contrast, intelligent design has billions of real-world examples and is based on what we do currently know about nature and the cause and effect relationships that exist in the world. ID wins hands down. It is not even a close call. Eric Anderson
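Eric Anderson's description of what a chess engine actually does (enumerate possibilities and score them, exactly as programmed) can be illustrated with a toy game-tree search. This is a minimal negamax sketch over a made-up two-ply tree; the tree, the scores, and the suggestion that real engines reduce to this are simplifications for illustration, not a model of any actual program.

```python
# Toy negamax search: a leaf is a static evaluation for the player to move at
# that node; an internal node is just the list of positions reachable in one move.
def negamax(node):
    if isinstance(node, (int, float)):
        return node
    # Choose the move whose outcome is worst for the opponent (hence the sign flip).
    return max(-negamax(child) for child in node)

# Hypothetical position: three candidate moves, each answered by a few replies,
# with invented evaluations at the leaves.
position = [
    [3, -2, 5],
    [-1, 0],
    [4, 4, -6],
]

print("Best score the mover can force:", negamax(position))   # -> -1 (second move)
```

The program "chooses" well only because it mechanically grinds through the tree its authors told it to search, which is the point about calculation speed rather than creativity.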
I think nightlight's point is this: "Since everything that exists is reducible to physical matter/processes, then all CSI is always the product of the same (and has the same origin). The causal-chain merely takes us back through physical mechanisms to the big-bang or the multiverse." But this idea breaks down in the evolutionary model and abiogenesis models when we ask for evidence of the natural origin of the first DNA or the first multicellular life or the first body plans, etc. You can't just assume reductionism - you have to prove it. If unproven, which it is, there remains the proposal that intelligence is not a product of, or determined by, physical processes alone. Silver Asiatic
@nightlight #97

I'll try to look at this in more depth tomorrow, but a few quick comments.

1) It's not clear to me that you actually understand the No True Scotsman fallacy in spite of your propensity for invoking it. The fact that a method of detection seeking to identify the results of design over chance processes would be fine-tuned in a way that typically excludes, with high fidelity, the sorts of things we see happen by chance processes is unsurprising. It is to be expected in principle. And yet it does not, by definition, eliminate the possibility of chance processes accomplishing these things. There is nothing fallacious in this methodology.

2) The matter of causal chain length has nothing to do with anything. There's no magic or hidden number of steps in the chain at which something changes. It's as simple as this: for the creation of an object, event or pattern to be an example of the creation of CSI, the specification must be independent of the instantiation, and by independent it is meant that the specification cannot be in the causal chain leading to the arising of the instantiation at all.

3) If you don't see a difference between a rock making an imprint in mud that it happens to come into contact with and the creation of, say, a computer monitor, I don't think there's much I can say to help you. HeKS
#96 HeKS
Your comment asserts a world of absolute determinism governed by unknown, allegedly-simple algorithms
Yes, anything we observe could have been computed by perfectly deterministic algorithms. There is no way to exclude such a possibility, since any finite sequence of observational data points can be generated (computed) by a suitable algorithm.
allowing every possible outcome we might observe to be the necessary product of natural laws.
This is a very common misunderstanding of "natural laws" at UD. Perfectly deterministic natural laws (such as physical laws and any computations) do not on their own determine future events. The deterministic laws are merely one part of the input into the "physics algorithm"; the second part of the input is the data representing initial and boundary conditions (IBC). Only the combined input of Natural Laws + IBC yields, via the "physics algorithm" (or general computations), the specific events or outcomes.

For example, a ball satisfies Newton's laws of motion and gravity. But those laws don't tell you or determine what the ball will do next. You also need to input (i.e. put in by hand) the initial position and velocity of the ball (as two 3-D vectors), plus any forces (such as intercepts, winds, friction, etc.) it will encounter during the flight. The latter two data sets are the arbitrary IBC data. If you consider the entire universe as the "infinite physical system" to which the natural laws are applied (hence there are no finite boundary effects), then you still need to specify the initial conditions of the universe. I.e. even then the natural laws, despite being fully deterministic (like computation), don't on their own determine the future of that system. Only the combination of inputs, Laws + arbitrary 'Initial Conditions', determines the actual outcome.

If you look at the natural laws as compression algorithms (for observational data), then the laws are the code (instructions) of the compression algorithm while the IBC are the compressed data that are being expanded (by the laws) into the detailed sequence of states or trajectory that the system will traverse. E.g. in the ball case, the full trajectory (thousands or millions of time-stamped coordinates) of a flying ball represents the raw, uncompressed data. The physical laws allow you to compress all this mass of thousands of trajectory numbers into just 6 numbers, the vector of initial position (x, y, z) and the vector of initial velocity (Vx, Vy, Vz). If you input those 6 numbers into the 'laws algorithm' it will expand them into the thousands (or millions) of numbers of the full trajectory that the ball will traverse.

In short, the fact that some process unfolds perfectly lawfully via deterministic laws does not mean there is just one way that process will unfold. There are in fact as many ways it will unfold as there are possible IBC inputs, which is generally infinitely many possible paths. Restating this in the compression perspective on natural laws, there are as many possible expanded data sequences (the full trajectories of the system) as there are possible compressed sequences (the IBC data sets), i.e. there are infinitely many. The lawfulness is in fact merely another way to restate the compressibility of the observed path/trajectory data points.

But that is precisely the same feature of data sequences that the presence of CSI identifies. Namely, in CSI the combined sequence X = D + S, where D = 'designed pattern' and S = 'specification pattern for D', is necessarily compressible since symbols from S predict (specify) symbols from D, hence D has some level of redundancy (how much depends on the tightness of specification). In fact the 'no free lunch' results of Dembski and others are merely trivial restatements or translations into search language of the older, well known incompressibility results (based on the pigeonhole principle) for 'random' or already compressed sequences (generally, of max entropy sequences).
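Nightlight's ball illustration is easy to make concrete. The sketch below, assuming ideal drag-free projectile motion purely for illustration, treats the kinematic equations as the "decompressor" and the six initial numbers (position and velocity vectors) as the compressed data they expand into an arbitrarily long trajectory.

```python
# "Laws + initial conditions" as a decompressor: six numbers expand into as
# many trajectory points as we care to generate (idealised, drag-free motion).
G = (0.0, 0.0, -9.81)   # gravitational acceleration in m/s^2

def expand_trajectory(p0, v0, dt=0.1, steps=50):
    """Expand initial position p0 and velocity v0 into a full list of points."""
    points = []
    for k in range(steps):
        t = k * dt
        points.append(tuple(p0[i] + v0[i] * t + 0.5 * G[i] * t * t
                            for i in range(3)))
    return points

# The entire "compressed" description of this particular flight:
initial_position = (0.0, 0.0, 1.5)
initial_velocity = (12.0, 3.0, 9.0)

trajectory = expand_trajectory(initial_position, initial_velocity)
print(len(trajectory), "points generated from 6 numbers; first three:")
print(trajectory[:3])
```

Feeding different initial conditions through the same laws yields a different trajectory, which is the sense in which the laws alone do not fix the outcome.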
Hence, the CSI not only does not disprove the lawfulness of the processes in observed phenomena, but is actually a restatement of lawfulness in the language of search algorithms. Since a more general perspective on 'lawfulness' (physical laws) is compressibility, or computability, the detection of CSI in patterns in nature points to a computational origin of such CSI sequences and their specification. It discovers in effect that the universe is rigged, i.e. that there is an underlying computational process which is much more economical than the superficial appearance of the phenomena would suggest. This observation has also been expressed (by Eugene Wigner) as "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".

This is precisely why you and others here were stumped at drawing a line which could exclude the 3-D mud image of a rock from the usual examples of CSI. You can't draw such a line because there isn't any such line. You can only play 'no true Scotsman' sleight of hand by shifting around the semantics of 'local independence' (for face-saving tapering of the "debate", it seems). But there really is no coherent way out, since all three concepts (lawfulness, compressibility and CSI) describe precisely the same property of the natural phenomena: the computability via more economical data than what is contained in the raw data of the observed phenomena.

Note also the additional parallel or 'coincidence' here: in classical philosophy and theology (and especially in mystery cults, such as those of Pythagoras, the gnostics and the neo-Platonists), the observed lawfulness of the universe was used to argue for the existence of God, just as ID argues the same from its restated lawfulness (the identification of CSI phenomena). The Discovery Institute's ID argument based on CSI is basically a warmed-over theological argument for the existence of God going back to the ancient Greeks (at least; more likely even farther, to ancient Egypt, Persia, China and India).
Here's the problem: The fact that those presidents had faces did not cause the likeness of those faces to be carved into those rocks. The brute fact that George Washington had a face did not, as a matter of physical necessity, and through purely natural processes, cause the complex process to occur that resulted in a giant likeness of his face appearing on a mountainside in South Dakota.
It caused it in the same sense that the 3-D image of the rock was imprinted in the mud. There was no necessity from physical laws for the rock to imprint its image in the mud. With any change in the initial or boundary conditions of the rock, mud or anything in between, there may not have been a 3-D image of the rock in the mud (there are infinitely many scenarios that could have happened with that rock and that mud, some with the image, some without). Which is exactly the situation with the connection, via a causal chain of lawful processes, between the Mount Rushmore images and the faces of the presidents. The only difference is in 'which specific interactions made up the corresponding chains' and the lengths of the chains. But neither you nor anyone else was able to draw a scientifically and logically coherent semantic line that can separate the two examples. All the defense amounts to is hand-waving and the 'no true Scotsman' song and dance. There is not an ounce of science in any of it.
In order for the paintings of the presidents to arise it required an intelligent agent to manipulate matter into a highly complex and improbable arrangement that matched the independent specification of their faces.
There you go: after the first 'true Scotsman' got pinned down on his back with no way out, now you bring in his brother, the 'intelligent agent', as the backup, for another round of the same semantic song and dance. Namely, the 'intelligent agent' is another one of those entities, like 'locally independent' or beauty or consciousness, that is in the eye of the beholder but scientifically sterile or vapid.

For example, if you try to work out exactly, via math and physics, the interaction of that rock with the mud, you will find that the exact actions that take place are far beyond the smartest physicists and mathematicians -- you can put all of them together with all the computers they ask for, and ask them to predict that outcome precisely, atom by atom, and they will be stumped and give up. So, the rock and mud were doing something so rich in content that no human intelligence and technology can fully comprehend or model it. The best we can do is provide extremely coarse-grained sketches of what's going on, but we can never reach the true richness of the phenomena achieved by the real masters of that particular realm, the rock and the mud. So, I may choose to call what the rock and the mud were producing there the action of an 'intelligent agent', since not even the smartest people on Earth could truly figure it out in all its richness.

Or take the other example, the 'machines inheriting the Earth' scenario sketched in the last post, where the robots take over and build their own version of Mount Rushmore with images of the Apple II and IBM PC. Are these computers 'intelligent agents'? If not, have you tried playing chess with a computer lately? They can beat, even when running on a smart phone, the best human players in the world.

There is no 'intelligent agent' in any scientific definition of CSI. Interjecting that (or consciousness) is pure evasion. The CSI is a mathematical concept (restating compressibility of phenomenal data), not a topic for a literary essay of free associations where you can dredge up anything, including the kitchen sink.
What points to design in the case of CSI is largely the need for a mind to be able to recognize a specification and then intentionally carry out steps to independently reproduce or in some way instantiate that specification, producing an outcome that would be incredibly improbable on any naturalistic hypothesis.
Ok, now we got yet another brother of the 'true Scotsman', the mind. Again there is no scientifically founded relation between 'mind' and CSI. Of course, in free association literary genre, anything goes with anything, whatever it feels like.
It seems that you're trying to stretch the concept and logic of CSI to cover cases where complex but unspecified patterns are reproduced through simplistic processes where the outcome of replication is highly probable (if not certain) and then trying to use this to discredit the entire concept of CSI as a reasonable indicator of intelligent activity.
Whoa Nelly, we got now the whole 'true Scotsman' family here, wife, kids and the rest, to help weave that free associations essay about CSI (well, only the Scotsman's cousin, "functional" CSI, is missing in your salvage crew). Basically, as explained above, the CSI (with related constraints on efficiency of search) is a purely mathematical result, equivalent to older compressibility results in information theory, or to lawfulness in physics. Therefore, no amount of semantic squirming and no army of 'true Scotsman' and his large family, will let you scientifically distinguish between the usual CSI examples parroted here at UD and the rock leaving its 3-D image in the mud example. There is no way to rigorously or scientifically distinguish between the two not because of some debating ineptitude of the DI's ID supporters, but because the CSI is merely a restatement of the lawfulness concept from physics (or compressibility concept from information theory) in the language of search algorithms. Behind the curtain of terminological conventions, the two (lawfulness and CSI) identify one and the same property of natural phenomena. nightlight
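Nightlight's repeated claim that specification implies redundancy, and hence compressibility, can at least be illustrated (not proven) with an off-the-shelf compressor. The snippet below uses zlib as a crude stand-in for the ideal compressors assumed in such information-theoretic arguments; it shows the pigeonhole contrast he mentions between a rule-describable sequence and a max-entropy one, without settling whether biological sequences fall on either side.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size (smaller means more redundancy)."""
    return len(zlib.compress(data, level=9)) / len(data)

patterned = b"the quick brown fox jumps over the lazy dog " * 200
random_bytes = os.urandom(len(patterned))   # max-entropy stand-in

print("patterned text:", round(compression_ratio(patterned), 3))
print("random bytes  :", round(compression_ratio(random_bytes), 3))
# The repetitive, rule-describable sequence shrinks dramatically; the random
# sequence does not (it typically grows slightly), per the pigeonhole principle.
```

Whether CSI really reduces to this kind of compressibility is exactly what HeKS and others dispute in the surrounding comments; the code only illustrates the property nightlight is appealing to.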
@nightlight #93

I repeat, it's not about the length of the causal chain. It's about the nature of the causal chain. I'm not trying to be rude or anything, but your comment seems to be an exercise in question begging, which also happens to misrepresent the logic of CSI determination. Your comment asserts a world of absolute determinism governed by unknown, allegedly-simple algorithms allowing every possible outcome we might observe to be the necessary product of natural laws. And if you're not asserting this then your whole point seems to immediately break down before it gets going.

You give a purely deterministic / naturalistic account of some process leading from the faces of presidents to the carvings of those faces at Mount Rushmore. Here's the problem: The fact that those presidents had faces did not cause the likeness of those faces to be carved into those rocks. The brute fact that George Washington had a face did not, as a matter of physical necessity, and through purely natural processes, cause the complex process to occur that resulted in a giant likeness of his face appearing on a mountainside in South Dakota. In order for the paintings of the presidents to arise it required an intelligent agent to manipulate matter into a highly complex and improbable arrangement that matched the independent specification of their faces. But their faces did not, as a necessary result of natural laws, cause the paintings of their faces to arise. And the same is true of Mount Rushmore. Likewise, the specification pattern provided by the grammar of the English language does not cause any piece of English literature to arise. Rather, a piece of literature is an independent instantiation of a pattern that conforms to the specification of the English language.

Now, it goes without saying that there obviously has to be some connection between a pattern and its specification, otherwise there could be no specification in the first place. That is why you will often hear people speak of "local independence". If there was a requirement for absolute independence then the only way Mount Rushmore could display CSI would be if the faces had been carved in the likeness of the presidents' faces without knowing about or ever having seen them or any other faces, which is silly. On the other hand, local independence is essentially used to mean that there is no simple, deterministic process that leads necessarily from the specification itself to a pattern that corresponds to the specification. A simplistic process that ineluctably results in a duplication of some pattern where that duplication is a highly probable event given that simplistic process cannot be said to be a case of natural forces generating CSI. We might marvel at a fabulously complex design etched into a stamp, but we do not marvel at the creative power of natural law if a wind comes along, knocks the stamp off its ink pad, and the stamp leaves an imprint of its design on the floor.

What points to design in the case of CSI is largely the need for a mind to be able to recognize a specification and then intentionally carry out steps to independently reproduce or in some way instantiate that specification, producing an outcome that would be incredibly improbable on any naturalistic hypothesis.
It seems that you're trying to stretch the concept and logic of CSI to cover cases where complex but unspecified patterns are reproduced through simplistic processes where the outcome of replication is highly probable (if not certain) and then trying to use this to discredit the entire concept of CSI as a reasonable indicator of intelligent activity. But these cases you're trying to bring under the umbrella of CSI wouldn't end up getting high CSI calculations anyway, due to the incredibly simple naturalistic hypotheses that explain them and the correspondingly high probability of the outcomes, so I don't really see what the point is. And to take issue with the fact that the method of calculating CSI ends up ruling out such simple naturalistic events as creating CSI is simply to take issue with the fact that the method of calculation has been tuned to reliably indicate designed events and rule out natural ones in cases where we already know the cause, which should be considered a point in its favor. HeKS
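HeKS's stamp example can be given toy numbers. The sketch below invents a small black-and-white imprint pattern and two chance hypotheses purely for illustration: the relevant one, in which the fallen stamp simply transfers its own pattern, and an irrelevant one in which the same pattern assembles dot by dot at random.

```python
import math

def bits(p: float) -> float:
    return -math.log2(p)

cells = 16 * 16   # hypothetical 16x16 black/white imprint pattern

# Relevant chance hypothesis: the inked stamp fell and transferred its pattern.
p_copy = 0.99                 # assume the transfer almost always reproduces it
# Irrelevant hypothesis: every cell of the pattern set independently at random.
p_random = 0.5 ** cells

print("bits under the copying hypothesis    :", round(bits(p_copy), 4))
print("bits under the random-dots hypothesis:", round(bits(p_random), 1))
# Because a simple, relevant chance hypothesis makes the imprint highly
# probable, the imprint is not credited with a large amount of CSI.
```

The second number is large, but on HeKS's account it is beside the point, since that hypothesis is not the one that explains how imprints actually form.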
Because his conceptualization of information has no basis in reality. His model has everything to do with the argument he wants to present and nothing to do with the way information operates in the natural world. :| Upright BiPed
Why is it that to much of what nightlight writes I am so tempted to respond with a mere... So? Mung
#90 HeKS
I'm not sure why you think it's the length of the causal chain that matters. What matters is that the pattern under investigation arises independent of the specification used to describe it.
The problem is that "independent of the specification" doesn't exist other than as a wishful, arbitrary definition or semantic game. Consider the Mount Rushmore statues with the images of presidents. The sculptors shaping them were not "independent" of the specifications, since they saw the paintings of those presidents. Without an unbroken chain of lawful interactions between the specification and its CSI pattern, no statues in the likeness of those presidents would have been produced. The interaction chain started with photons scattering from the faces of those presidents into the retinas of painters; then the brains of the painters, based on those signals, computed actions for their hands, how to pick and apply the paints to the canvases. Then, years later, the retinas of sculptors captured photons scattered from those paintings, processed the signals and computed the actions of their hands that shaped the molds for the statues. Then construction teams interacted with the molds (again via photons scattering on the molds, the retinas, computations in their brains directing their hands and voices), finally continuing the chain of the interactions down to workers, their retinas, brains and hands operating machinery that carved the faces in the rocks.

So an interaction chain of lawful interactions of that particular length connecting the specification and its CSI pattern is claimed here at UD to be long enough to qualify for "independence" between the specification pattern (faces of presidents) and the CSI pattern (images in the rocks). Thus that is an example of 'true CSI'. But the other chain, of a rock striking the mud and leaving its detailed 3-D image in the mud, is apparently not a long enough causal chain of lawful interactions to qualify for "independence"; thus, according to UD wisdom, it is not an example of 'true CSI'.

So where exactly is the threshold length of the chain of lawful interactions between 'specification' and its 'CSI pattern' beyond which you call it "independent", allowing you to declare the 'candidate CSI pattern' as 'true CSI', in contrast to an improper one, like the rock imprint in the mud? There must be some threshold value of chain length in order for you to make such a distinction. Namely, there is no question that in all CSI cases there is always a causal chain of lawful interaction between the two patterns, the specification pattern and the CSI pattern. So the only issue of contention is the semantics of the "independence" attribute -- causal chains of lawful interactions shorter than a certain secret length (in the spirit of the 'no true Scotsman' fallacy) are disqualified as not being 'true Scotsman' chains (or not 'independent' enough for 'true CSI'), while chains longer than this secret length qualify as 'true Scotsman' chains, so they result in 'true CSI'. What I am asking is: what is this secret threshold length of a causal chain of lawful interactions that allows two patterns at the two ends of the causal chain to be called "independent", hence the 'true CSI' case (as opposed to the mud imprint by a rock, which apparently is not 'true CSI' due to the shortness of the causal chain between the two patterns)?

Note also, before anyone starts in with the "consciousness" talk evasion: imagine a future in which robots take over the Earth and carve statues to their own 'founding fathers' (say, the Apple II and IBM PC) on their own Mount Rushmore. In that case, the interaction chains remain in substance the same kind as those that produced our Mount Rushmore (photon scattering, image processing algorithms, motoric instructions, etc.).
The hardware and software differ, but the high-level algorithms would work the same way. In this case the causal chain of lawful interaction between the 'specification' pattern and the rock images is fully explicit (consisting of programs running on deterministic hardware). Would the rock images have large CSI in this case? If they would, wouldn't that then satisfy Barry's request for an example of CSI produced by a causal chain of lawful interactions? Note that no humans ever programmed these robots to build their own Mount Rushmore; the specification pattern and the CSI pattern are connected exclusively by an explicitly known chain of lawful interactions. nightlight
@R0bb re: my comment #77 I don't know where the question marks came from in my quote of Ewert. It should have just said: "Yes. You're exactly right." HeKS
nightlight,  
You can’t derive “flight response” for that rabbit with an already operational “flight response” mechanism and that fox. It took many generations of not just rabbits and foxes but their ancestors (possibly going back to single cell organisms, since they all have it) to program their “flight” response into their (genetically) built-in response repertoire.
  This is a non-response. Darwinian evolution is not even the issue here, and doesn’t even exist until the system I’ve described is in place. To suggest that Darwinian evolution is responsible for the organization of the system is to say that a thing that does not yet exist on a pre-biotic earth caused something to happen. (which is obviously false)
That’s merely a variant of my example with fox fur color adaptation — the cold weather with upcoming snow doesn’t change genes to turn fox fur white in any simple push-button way. That takes many generations of foxes and snow, probably with epigenetic imprint happening first which eventually gets transferred and hardwired into genetic record (via Shapiro’s “natural genetic engineering”).
You seem to be missing the issue. Everything you say to support your position requires the functional organization that only comes from the translation of information. The requirements I am pointing out to you are the necessary material conditions for that translation to occur. If A requires B to exist, then A cannot be the source of B. I am interested in the source of B, not the operation of A.
Regarding your explanation, it seems your “local independence” is “no true Scotsman” fallacy — it can shift the definition to exclude lawful processes (such as computations by biochemical networks) by definition as you wish.
You are mistaken. I clearly stated that purely deterministic forces are not excluded from being the source of the system. They are simply required to explain the system as it actually is, not as someone might wish it to be. If, on the other hand, the reality of the system is physically and conceptually inconvenient, then I suggest that one might want to adopt a different approach to the problem.
That in turn renders Barry’s challenge into a vapid semantic game — you can’t show that “locally independent” lawful process can produce CSI, since the exact level of “local independence” that is “required” is apparently a secret definition invoked and tailored to exclude whatever you want.
  On the contrary, I stated exactly what the issues are, and why they are the way they are. If the aaRS establishes the effect of the codon while preserving the discontinuity between the codon and the effect, then it is not me tailoring the options for a rational explanation of the system -  it’s reality itself.   Frankly, I wouldn’t have it any other way.  
There is no definition stating what exactly is the minimum length of causal chain of lawful interactions between generations of foxes and rabbits, or foxes and winter snows from my example, and computations by their biochemical networks, which would qualify outcomes of such chain of lawful interactions as “locally independent”. It’s an empty verbiage.
  Since you failed to actually address any specific material observation I made, yet have concluded the observations are empty, then I suppose you needn't fool with it any longer. cheers     Upright BiPed
@nightlight #89
There is no definition stating what exactly is the minimum length of causal chain of lawful interactions ... which would qualify outcomes of such chain of lawful interactions as “locally independent”. It’s an empty verbiage.
I'm not sure why you think it's the length of the causal chain that matters. What matters is that the pattern under investigation arises independent of the specification used to describe it. Obviously, something cannot be its own independent specification. However improbable the surface of any given rock may be, the chance of that shape being left behind when it makes contact with some soft surface is high, as is the chance that some hardening material poured into the imprint will harden into the shape of part of the rock. There are straightforward chance hypotheses to explain these events on which the outcomes are not considered to be improbable at all. Furthermore, none of these events or outcomes correspond to a specification pattern that did not directly cause or lead to their existence.

Does this mean, then, that natural processes are eliminated from being able to produce CSI by definition? No, not at all. All it means is that, in order to do so, natural processes would need to cause some pattern, object or event that corresponds to an independently recognizable specification. For example, if a windstorm whipped a bunch of leaves into an improbable pattern that corresponded to an English word or sentence, that would be a case of natural processes creating CSI. Or if an earthquake was followed by a volcano and a tsunami and this happened to organize some scrap materials into a kind of machine-like contraption capable of fulfilling some mechanical function, that would also be an example of natural processes creating CSI. The key feature here is that the specification pattern (an English word or sentence, or a mechanical function) is not causally responsible for the particular pattern that matches it arising. The specification and the pattern that matches it are independent of each other. HeKS
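HeKS's windstorm-and-leaves example lends itself to a toy calculation. The sketch below assumes a deliberately simple chance hypothesis (each leaf independently lands as one of 26 equally likely letter shapes), invented purely for illustration; a real chance hypothesis for wind-blown leaves would be far messier.

```python
import math

def bits_for_word(word: str, alphabet_size: int = 26) -> float:
    """Improbability, in bits, of leaves spelling this exact word, assuming each
    position is an independent, uniform draw from the alphabet."""
    p = (1 / alphabet_size) ** len(word)
    return -math.log2(p)

for word in ["ODD", "DANGER", "ESTABLISHMENT"]:
    print(f"{word:>13}: {bits_for_word(word):5.1f} bits")
# Even a 13-letter word comes to only about 61 bits under this toy hypothesis,
# nowhere near the 500-bit threshold discussed earlier in the thread.
```

The point of the example is the independence of the specification, not the size of the number: the word exists as an English specification whether or not any leaves ever happen to match it.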
#88 Upright Biped
As an example, a rabbit sees a fox coming up from behind him. The rabbit responds with the "flight response" which includes increased breathing and heart rate, heightened sensory awareness, and motor function in its legs. What happened? The specialized organization of the rabbit's visual system physically transcribed the image of the fox into a neural representation, which then travels through the optical nerve to the visual cortex and brain. But you cannot derive the "flight response" of a rabbit from the arrangement of the neural representation in the optical nerve.
You can't derive "flight response" for that rabbit with already operational "flight response" mechanism and that fox. It took many generations of not just of rabbits and foxes but their ancestors (possibly going back to single cell organisms since they all have it), to program their "flight" response into their (genetically) built in response repertoire. That's merely a variant of my example with fox fur color adaptation -- the cold weather with upcoming snow doesn't change genes to turn fox fur white in any simple push-button way. That takes many generations of foxes and snow, probably with epigenetic imprint happening first which eventually gets transferred and hardwired into genetic record (via Shapiro's "natural genetic engineering"). But computations (intelligence) needed to reshape operation of DNA of similar complexity are routinely done by the cellular biochemical networks in the processes of reproduction, ontogenesis, immune response, etc. These networks are distributed self-progamming computers which are far more intelligent and knowledgeable about molecular scale bioengineering than all the human exerts and their biotechnology taken together. After all, ask human molecular biologists to synthesize a live cell from simple molecules. They would have no clue how to even get started on synthesizing one live organelle of a cell from simple molecules, let alone whole live cell, to say nothing of organizing trillions of cells into a live organism. Yet, the biochemical networks of your own cells have accomplished this humanly unachievable technological feat of bioengineering (synthesizing live cell from simple molecules) thousands of times as you read this paragraph. The human level of expertise in this realm is not even close to that of cellular biochemical networks. But we know that humans can already genetically engineer some useful features into the live organisms (GMO technology). The biochemical networks which are the real masters of molecular engineering light years ahead of the human molecular biologists, could surely then do thousands of times more complex transformations. Regarding your explanation, it seems your "local independence" is "no true Scotsman" fallacy -- it can shift the definition to exclude lawful processes (such as computations by biochemical networks) by definition as you wish. That in turn renders Barry's challenge into a vapid semantic game -- you can't show that "locally independent" lawful process can produce CSI, since the exact level of "local independence" that is "required" is apparently a secret definition invoked and tailored to exclude whatever you want. There is no definition stating what exactly is the minimum length of causal chain of lawful interactions between generations of foxes and rabbits, or foxes and winter snows from my example, and computations by their biochemical networks, which would qualify outcomes of such chain of lawful interactions as "locally independent". It's an empty verbiage. nightlight
nightlight,
What is “local independence from physical determinism”?
Information depends on two interdependent things to exist: representation and specification. Firstly, information requires an arrangement of matter as a medium (i.e. a representation). That medium is translated to produce physical effects in nature. But the effects produced cannot be derived from the arrangement of the medium. It requires a second arrangement of matter to establish (i.e. specify) what the effects will be. Therefore, there is a natural discontinuity between the arrangement of the medium and its post-translation effects. That discontinuity must be preserved by the system, or else the system becomes locked into physical determinism and cannot produce the effects in question. In other words, if the effects of translation were derivable from the arrangement of the medium, it would be so by the forces of inexorable law, and those inexorable forces would limit the system to only those effects that can actually be derived from the arrangement of the medium – making the production of effects not derivable from the medium impossible to obtain.

As an example, a rabbit sees a fox coming up from behind him. The rabbit responds with the “flight response” which includes increased breathing and heart rate, heightened sensory awareness, and motor function in its legs. What happened? The specialized organization of the rabbit’s visual system physically transcribed the image of the fox into a neural representation, which then travels through the optical nerve to the visual cortex and brain. But you cannot derive the "flight response" of a rabbit from the arrangement of the neural representation in the optical nerve. It requires a second arrangement of matter in the visual cortex/brain to specify what the response will be. In other words, the survival of a rabbit (“run away and hide”) is not something that can be derived from inexorable law, so a natural discontinuity will exist in any system that produces such an effect. This is accomplished by having two arrangements of matter; one to serve as a physical representation and another to physically establish (specify) the effect. Preserving the discontinuity between the arrangement of the medium and its post-translation effect is therefore a physical necessity of the system, and this discontinuity establishes a local independence from physical determinism. This architecture can be found in any such system.

Since DNA is generally the topic here, we can easily analyze the genetic translation system and find the exact same architecture. During protein synthesis, the arrangement of bases within codons is used to evoke specific amino acids to be presented for binding. But there is nothing you can do to a codon to relate it to an amino acid except translate it (which is what the cell does). The arrangement of the codon evokes the effect, but does not physically determine what the effect will be. That effect is physically determined in spatial and temporal isolation by a second arrangement of matter in the system (the protein aaRS) before the tRNA ever enters the ribosome – thus preserving the discontinuity between the codon and its post-translation effect, while simultaneously specifying what that effect will be.
You can’t derive via physical law an amino acid from a codon; you can’t derive a survival response from a neural impulse; you can’t derive the “ahh” sound from the letter “a” or the paper it’s written on; you can’t derive which direction a bird should fly to catch a grasshopper; you can’t derive “middle C” from a pin on a music box cylinder; you can’t derive “defend the mound” from the atoms of a pheromone. You can’t derive any of these effects of information by the arrangements of the matter that evoke them. They all require a second arrangement of matter to establish specification upon translation, and they all require the discontinuity to be preserved. Thus, the translation of information produces lawful effects in nature which are not locally determined by inexorable law. They are only derivable from the organization of the systems that use and translate the information.
the above shift of goalposts renders ridiculous Barry’s challenge
My comments are only related to your position regarding law.
Either lawful processes are allowed as the mechanism for the CSI or Barry’s “challenge” is a pointless word play.
The issue is not whether lawful processes are allowed as a potential source of CSI, they are. The issue is that the systems that translate information must preserve the physicochemical discontinuity between the arrangement of a medium that evokes an effect, and the effect itself. The advocates of materialism will simply have to account for this within their models. It cannot be denied without sinking into absurdity. Upright BiPed
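The codon-to-amino-acid relationship Upright BiPed describes can be pictured as a lookup table that the cell's aminoacyl-tRNA synthetases physically implement. The handful of assignments below are from the standard genetic code; everything else in the sketch is a deliberately simplified illustration of "translation via a second arrangement of matter", not a model of the ribosome.

```python
# A small slice of the standard genetic code (RNA codons -> amino acids).
# The association is held in the translation machinery (the aaRS enzymes),
# not derivable from the chemistry of the codon itself.
CODON_TABLE = {
    "AUG": "Met",   # also serves as the start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "GAA": "Glu",
    "UAA": "STOP",
}

def translate(mrna: str) -> list:
    """Read an mRNA string three bases at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "?")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCGAAUAA"))   # -> ['Met', 'Phe', 'Gly', 'Glu']
```

Swap the dictionary for a different (hypothetical) code table and the same codon string yields a different peptide, which is the sense in which the effect is specified by the table rather than by the medium.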
Jerry,

Functionality is observable, and more so function dependent on particular organisation. When the organisation to effect function requires more than 500 to 1,000 Yes/No structured questions to answer it, that gives a config space beyond the plausible reach of the atomic and temporal resources of our solar system or the observed cosmos going at fast chem rxn rates. Thus a blind chance and/or mechanical necessity based search will be maximally implausible to arrive at such islands of function. That is plain, so plain that every effort is exerted to obfuscate it.

Worse, it traces to Wicken, and to direct implications of the context of use of specified complexity by Orgel. As in, 1979 and 1973, nigh twenty years before Dembski and five to fifteen years before the first technical ID book, by Thaxton et al. (That timeline gives the lie to the NCSE etc. tale about ID being invented to deflect impacts of US Supreme Court decisions of 1987.) WmAD sought to build on the general concept of specified complexity, noting in NFL that for biological systems it is tied to function, giving yardsticks. I think much of the problem with that generalisation is that it opened the way for obfuscators. I actually find it quite sensible at core, whatever quibbles one may have on points.

But the nature of the real problems comes out when we see people pretending that the genetic code is not a directly recognisable case of machine language, or that chance and randomness can be emptied of meaning, or that function is meaningless etc etc etc. At this point, I conclude that too often we are not dealing with intellectually serious or constructive people but with zero-concessions-to-IDiots ideologues who have no regard for truth, fairness or genuine learning. Such intellectual nihilism will do us no good if it is allowed to gain the upper hand and have free course to do what it will. Not all critics are like that, but too many are. KF kairosfocus
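Kairosfocus's 500-to-1,000-bit threshold can be put in rough numbers. The figures below are the round, order-of-magnitude estimates commonly used in these threads (about 10^57 atoms in the solar system, roughly 10^17 seconds of cosmic time, and a generous 10^14 fast chemical-interaction events per atom per second); they are assumptions for illustration, not measurements.

```python
# Order-of-magnitude comparison: the configuration space implied by 500 bits
# versus a generous estimate of the solar system's atomic and temporal resources.
config_space      = 2 ** 500        # distinct 500-bit configurations
atoms             = 10 ** 57        # rough atom count for the solar system
seconds           = 10 ** 17        # rough age of the cosmos in seconds
events_per_second = 10 ** 14        # generous fast-chemistry event rate per atom

max_samples = atoms * seconds * events_per_second
fraction_searchable = max_samples / config_space

print(f"configurations      : ~10^{len(str(config_space)) - 1}")
print(f"possible samples    : ~10^{len(str(max_samples)) - 1}")
print(f"searchable fraction : {fraction_searchable:.1e}")
```

On these assumptions the blind-sampling resources cover only about 3 x 10^-63 of the space, which is the "needle in a haystack" claim in quantitative form.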
Sorry - reformatted
#74 nightlight
There is no coherent universal definition of “functional” (CSI) — anything does something, changes something, downstream, via interactions. When is that “something” deserving of label “functional” effect?
I think Jerry answers all of this, including the history of the development of CSI measurements in the ID world in post #82.
But if a rock breaks off and leaves its imprint in the mud, that is not “S.A.-functional” since it does nothing S.A. cares about.
The rock falling is not causing a complex, specified functional-operation as a result. It’s causing a determined, predictable result – which is explainable by natural law or chance. There is no need for a design inference here. You could make it simpler – every pile of rocks … is that CSI? No, because it’s not a specified result. Specification implies a future state. DNA code or any language is a classic example. The code functions for a future complex operation as a result. There’s a communication network between sender and receiver – requiring a translation/de-coding and interpretation of symbol. In the end, an operation occurs. Visual images are the weakest evidence of CSI (faces in rocks on the moon for example).
In short “functional” is an arbitrary label without intrinsic or universal meaning or definition.
Many terms in science have no intrinsic or universal meaning (species?, nature?, life?, mind?). For the sake of understanding the natural world, we use concepts that have generally understood meanings. Functional and non-functional are terms applied in various contexts. They do have specific meaning (“Three weeks after death, the heart is non-functioning”).
One can attach it or not attach it to any effects of any interactions as one wishes.
“Three weeks after death, the heart is still functioning”? That doesn’t work.
The upshot is, such semantic games are not going to prove God or convince anyone to teach any such word play at schools as natural science.
ID is more than word games. ID doesn’t attempt to prove God. ID has convinced many scientists who are very accomplished in their field of study. Silver Asiatic
#74 nightlight
There is no coherent universal definition of “functional” (CSI) — anything does something, changes something, downstream, via interactions. When is that “something” deserving of label “functional” effect?
I think Jerry answers all of this, including the history of the development of CSI measurements in the ID world in post #82.
But if a rock breaks off and leaves its imprint in the mud, that is not “S.A.-functional” since it does nothing S.A. cares about.
The rock falling is not causing a complex, specified functional operation as a result. It's causing a determined, predictable result, which is explainable by natural law or chance. There is no need for a design inference here. You could make it simpler: every pile of rocks ... is that CSI? No, because it's not a specified result. Specification implies a future state. DNA code, or any language, is a classic example. The code functions for a future complex operation as a result. There's a communication network between sender and receiver, requiring translation/decoding and interpretation of symbols. In the end, an operation occurs. Visual images are the weakest evidence of CSI (faces in rocks on the moon, for example).
In short “functional” is an arbitrary label without intrinsic or universal meaning or definition.
Many terms in science have no intrinsic or universal meaning (species? nature? life? mind?). For the sake of understanding the natural world, we use concepts that have generally understood meanings. Functional and non-functional are terms applied in various contexts. They do have specific meaning ("Three weeks after death, the heart is non-functioning").
One can attach it or not attach it to any effects of any interactions as one wishes.
"Three weeks after death, the heart is still functioning"? That doesn't work.
The upshot is, such semantic games are not going to prove God or convince anyone to teach any such word play at schools as natural science.
ID is more than word games. ID doesn't attempt to prove God. ID has convinced many scientists who are very accomplished in their field of study. Silver Asiatic
#76 Upright Biped
nightlight: "My real point is that none of the CSI ID arguments prove that the observed phenomena of life and its evolution are not even in principle computable (e.g. by some underlying computational process), i.e. that they cannot be a manifestation of some purely lawful underlying processes." The translation of information requires a local independence from physical determinism. It accomplishes this by preserving the necessary discontinuity between the arrangement of the medium and its post-translation effect. It's a coherent system, made coherent by this independence. It could not function without it.
What is "local independence from physical determinism" ? You can't coherently define CSI to exclude by definition causal interactions connecting the two harmonized patterns, then claim as ID "discovery" that causal interactions cannot yield CSI so defined. That's a trivial tautology, not a "discovery". Similarly, the above shift of goalposts renders ridiculous the Barry's challenge asking anyone to show CSI produced by lawful processes, when CSI now excludes by definition any harmonized patterns that are result of lawful processes. You can't have it both ways. Either lawful processes are allowed as the mechanism for the CSI or Barry's "challenge" is a pointless word play. Back to problem proper. There is actually no real independence from lawful physical interactions, ever, between CSI pattern (e.g. encoded in the DNA) and the properties of other systems in the environment the pattern is harmonized with. As to how the causal chain of lawful interactions actually creates such CSI (or harmonization of patterns), consider for example the fur color of polar fox which is programmed to turn white in the winter and darker in the summer. Here the CSI in the DNA of the fox controlling the fur color is harmonized with the environmental colors, and this connection is not physically independent. Namely, countless previous generations of foxes have interacted with that same environment and passed genetic information (including any environmental interaction induced changes) to their offspring. So, while DNA of the current fox has not interacted yet with the environment of the upcoming winter, yet it is still synchronized with the environmental colors of that upcoming winter, the past generations of the foxes have interacted with the winter colors and their genetic and epigenetic code has imprinted or harmonized with this pattern and passed it on to the following generations (or even altered it epigenetically in the existent generation for the rest of the current winter). Of course, neo-Darwinian "random mutation" is a non-starter as the mechanism for such change. But the James Shapiro's "natural genetic engineering" or generally the computations by the cellular biochemical networks can and does accomplish such kind of targeted changes (epigenetically and/or genetically; e.g. see adaptive mutation experiments). The cellular biochemical networks are networks with adaptable links, hence they are a distributed self-programming computer of the same kind as human or animal brain (which are also an adaptable networks, but made of neurons). These cellular biochemical network are unrivalled experts of molecular bioengineering, far ahead of human brains and technology in that field. E.g. these networks manufacture routinely new live cells from scratch (from simple molecules), the task that human molecular biologist can only dream of achieving some day. Hence, the fur color harmonization in polar animals is the result of physical interactions essentially in the same way that the colors of military uniforms are harmonized with the environmental colors in which soldiers are deployed. Namely, soldiers uniforms also haven't interacted with environment of the upcoming winter, yet the colors of the uniforms are synchronized with the colors of that upcoming winter. How can that be? Well, the military staff in charge of uniforms design knows a bit about winter colors and advantages of having uniform colors blending into the environment, hence their brains designed (i.e. 
adaptable networks of their neurons computed) uniform colors to match the expected colors at the place and time of the deployment. With polar animals, their cellular biochemical networks (adaptable networks of molecules) similarly computed the advantageous fur colors for their environment and altered the DNA to cycle the fur colors in harmony with seasons. So, the adaptation is intelligently guided, but the intelligence that achieved this doesn't require some Jewish or Sumeran tribal sky god figuring it all out up there in the heavens, then sending his angels or ancient aliens down to Earth to muck with the DNA of the foxes. That fairytale "mechanism" is absurd and unnecessary when the needed intelligence and expertise in molecular engineering is readily available in the systems at hand locally. Namely, the well known and plentifully demonstrated intelligence of cellular biochemical networks (e.g. clearly evident in processes of reproduction & ontogenesis) suffices for such design and computation. If human molecular biologist can archive some desired adaptation in GMO plants and animals, then the cellular biochemical networks which are light years ahead in mastery of molecular bioengineering can easily do it without any help from tribal sky gods. Of course, the next natural question is how did distributed self-programming computers, such as the above cellular biochemical networks, come to be at all? This is the origin of life problem. Taking also into account the related fine tuning problem, the most plausible hypothesis is that the computational processes by adaptable networks didn't start with biochemical networks, but are also the underlying processes behind the physical laws of matter-energy and space-time. Such computational 'pregeometry' models based on adaptable networks at Planck scale have been descrihbed and discussed at UD in several longer threads (see second half of this post for hyperlinked TOC of these discussions). In this kind of bottom-up computational models, the cellular biochemical networks are a large scale technology designed (computed) and built by these Planck scale networks (via the intermediate technologies of physical fields and particles), just as biological organisms are a large scale technology designed and built by the cellular biochemical networks, or as industrial societies, manufacturing, internet, etc. are large scale technologies designed and built by certain kinds of these organisms (humans). Since this question always comes up here, the bottom level Planck scale networks are a front loaded ontological foundation from which everything else follows. The key difference between this approach and theological or religious perspectives is that this front loading is far simpler and more economical than the omnipotent and omniscient front loading of the religions. The 'chief programmer of the universe' who front loaded the Planck scale networks need not have any clue what the networks will compute eventually, just as human programmer who wrote a program to compute million digits of number Pi has no clue what digits the program will spew out (beyond perhaps the first few). nightlight
By the way, if one goes to the link I provided above and looks at the last comment, which is by KF, you will see the sentiment I am referring to. Also, the commenter just before KF's comment last year on that thread is the only truly honest Darwinist I have ever met. He soon stopped commenting here, but he was an evolutionary biologist. jerry
There has always been a problem with the definition of CSI. We had a long thread about this 7 years ago, which KF knows well because it is where he first chose to comment here. https://uncommondescent.com/biology/michael-egnor-responds-to-michael-lemonick-at-time-online/
It is a long thread, but it shows that the powers that be at UD really did not have a good definition of CSI. It led to a focus on the relationship between A and B, where A leads directly to B and B is functional. So the term Functional Complex Specified Information came into being, with a couple of variants. KF has done a lot of the building of this. Some obvious examples were language and writing, computer programming and of course DNA: extremely complex information that led to something else that had function. The anti-ID people have since been led to inanity trying to undermine so obvious a concept. It is actually CSI on steroids, but it has the advantage of being so obvious and understandable. FCSI, or whatever the proper abbreviation is, is simple and obvious.
But CSI is a little different. It has been undermined by the lack of a good definition and the mind-numbing mathematics behind Dembski's calculations, along with those of his cohorts. The main complaint about it is that it cannot be quantified in a meaningful way, which is why we get people like R0bb ranting on from time to time about this. The math, if it could be applied in any easy way, would indicate such large numbers that 500 bits would seem child's play, but we get the usual assault: that there are no calculations, and that because of this CSI is nonsense. Well, their complaints are what is really nonsense, and they all know it, but their purpose in life, it seems, is to be critical in any way they can and never to be constructive.
CSI is more nebulous because the relationship is less clear than it is for FCSI, which has a direct link between one complexity and an independent complexity. For CSI there is often no direct link. Take, for example, the often used Mt. Rushmore. There is no direct link between any of the faces on the mountain and the presidents represented other than what is in our heads. If we didn't know that the sculptor used the likenesses of these men, we would only be speculating that that was the origin. Here we have two independent patterns, one of which preceded the other, and which are closely linked in a way that could not have been chance or law. Even if we did not know of this relationship, we would know the link between the rock formation and typical human faces, even if we did not know whose the faces were. Try to calculate the odds of this and one gets into numbers so large that there are not enough zeros in all the printing devices on the planet to illustrate them. But we will get the usual malcontents challenging the concept for a lack of clarity in the calculations. What childish behavior. But this is all they have. jerry
#68 Nightlight – Further to my earlier response at #71, the point made by other contributors (and consistent with Barry's original challenge) is that the stone itself has no complex specified information, only complex information. Your point that it nevertheless provides a specification for an imprint or mould, so that the subsequent impression now has CSI, fails because that impression is only a duplication/transmission of the original information: it is not specified by the stone, only copied from it, so the goalposts haven't changed. Upon further consideration I would agree with that. Unless they are considered another way of saying the same thing, my previous points would still hold in addition: the information impressed into the cast is a necessary (deterministic) direct product of the casting process (which discounts a design inference), and it is not in the least independent of the "specification" (again discounting a design inference).
Regarding your overall approach: science is based upon what we currently know (or have reason to consider to be the case), not on what we don't. It requires positive evidence for its claims, not unsupported speculations, and the onus of proof lies with whoever positively asserts. Thomas2
R0bb – The only evidence we have says that CSI comes only from intelligent agencies. No one has ever observed nature, operating freely, produce anything close to CSI. Joe
nightlight – The problem is that there isn't any evidence of unguided processes producing CSI. Unguided processes break and deform; that is it. Joe
F/N: From the conclusion, OP: >> . . . there is strong evidence of ideological bias and censorship in contemporary science and science education on especially matters of origins, reflecting the dominance of a priori evolutionary materialism. To all such, Philip Johnson’s reply to Lewontin of November 1997 is a classic:
For scientific materialists the materialism comes first; the science comes thereafter. [Emphasis original.] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.” . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [Emphasis added.] [The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
Please, bear such in mind when you continue to observe the debate exchanges here at UD and beyond. >> The above has abundantly underscored this. Let the onlooker of contemporary origins science and the student under contemporary science education beware. KF kairosfocus
@R0bb #65
Regardless, you can look at any example of CSI-based design detection that you like, whether it be Hamlet or coins or DNA or ballot headers. In no case does an IDist base the calculation on the correct hypothesis of design, and in no case does the IDist say “This would have N bits of CSI if my chance hypothesis were the correct one.”
I don't claim to be an expert on the issue of CSI and Dembski's methods of calculation, largely because, as I've said, math is not my forte. In fact, for whatever areas I have in which I excel, my lack of ability to deal with remotely complex math will certainly serve to keep me humble (until I get a chance to learn it and take over the world ... muahahahah).
That said, on the matter of what would be required to meet Barry's challenge, the answer has been absolutely clear to me from square one. In fact, even a very basic understanding of how CSI is used within the Design Inference to eliminate chance hypotheses, combined with very basic logic, makes the answer clear.
Now, you cited Ewert's article to say that Barry's challenge has already been met, because there was a chance hypothesis addressed by Ewert that did not produce the pattern in Liddle's image but which resulted in a high CSI calculation. In turn, I explained that this obviously does not answer Barry's challenge, because in order to show that a chance process can produce 500 bits of CSI, you would need to show that the natural pattern claimed to have 500 bits of CSI had that amount of CSI when the calculation was based on the natural process known to have actually produced it. If the CSI attributed to a pattern changes with, and depends upon, the chance hypothesis used to explain it, then a natural process that has produced some pattern can only be said to have produced 500 bits of CSI if the CSI calculation that yields 500 bits is based on the process known to have produced the pattern. If the CSI value is contingent on the chance hypothesis in question, you can't simply port over the CSI value from a calculation contingent on a different chance hypothesis that didn't produce the pattern.
You seemed to think that this was nonsense, that I was misunderstanding how this is supposed to work, and evidently that I was misunderstanding Ewert. In light of this, I contacted Ewert. We've been having an interesting exchange that is still ongoing as I try to get a better handle on the logic of various aspects of CSI and the Design Inference. However, on the point of dispute between you and me, specifically over what would be required to meet Barry's challenge and demonstrate that a natural process had produced something like 500 bits of CSI, the discussion was very brief. I reproduce it here:
HeKS: So, for the purposes of Barry's challenge asking ID opponents to give an example of natural processes actually producing 500 or more bits of CSI, it seems to me that the only way one could meet this challenge is to point to some object that was actually produced by natural processes where that object demonstrates 500 or more bits of CSI in a calculation that is made on the basis of that particular natural process that actually produced it. Isn't this correct? Do you get what I'm saying?
Ewert: Yes. You're exactly right.
Dare I hope that you will now agree that I was accurately representing Ewert and the way that the calculation of CSI relates to Barry's challenge? HeKS
My real point is that none of the CSI ID arguments prove that the observed phenomena of life and its evolution are not even in principle computable (e.g. by some underlying computational process), i.e. that they cannot be a manifestation of some purely lawful underlying processes.
The translation of information requires a local independence from physical determinism. It accomplishes this by preserving the necessary discontinuity between the arrangement of the medium and its post-translation effect. It's a coherent system, made coherent by this independence. It could not function without it. To say that it's all a result of underlying law is an assertion without evidence; it flies in the face of the evidence.
Additionally, the representations within genetic encoding are dimensional in nature, which means they exist independently of the minimum total potential energy principle (which otherwise applies to all physical objects). This separates the individual representations from the thermodynamic properties of the medium, which in turn enables effective long-term memory (i.e. it limits the set of objects required to encode memory; moreover, the primacy of the pattern makes the representation transferable among media). Consequently, this also means that the system requires an entirely separate set of additional protocols to establish the dimensional operation of the system itself. None of this is locally derivable from the material that makes up the system. Which is to say that you'll need a whole set of interdependent causes at work in your mysterious unknown laws.
On the other hand, we can posit that the most likely cause of all this is the same as it is in the only other example of such a system found in the cosmos: the altogether common use of recorded language and mathematics. - - - - - - - - - - (...which, by the way, provides incontrovertible physical evidence that the material conditions required for recorded language and mathematics existed at the origin of the very first living cell on earth). Upright BiPed
#73 Eric Anderson "Is your point really just that, yes, design is required, but that a smaller amount of original design is required than some people think? If that is the real point you are making, then fine." My real point is that none of the CSI ID arguments prove that the observed phenomena of life and its evolution are not even in principle computable (e.g. by some underlying computational process), i.e. that they cannot be a manifestation of some purely lawful underlying processes. The CSI arguments prove nothing of the sort. They carry no more weight or significance than someone looking at an endless sequence of digits of the number Pi and proving that these digits cannot be obtained as the decimal expansion of a division A/B of two integers A and B. The Discovery Institute's ID then leaps from such a non-controversial and correct result to the "conclusion" that no arithmetic algorithm of any kind can produce that sequence of digits, hence they must be coming from a 'conscious intelligent agency' that is beyond arithmetic algorithms (i.e. that no lawful process can exist that could generate life and its evolution, hence life must be the result of action by a 'conscious intelligent agency').
How much computational front loading (hardware + software, i.e. rules of the automata + initial configuration) is needed to produce the phenomena of physical laws, fine tuning, life and its evolution is another matter. I happen to believe (along with a number of others working on pregeometry models) that a relatively simple front loading of the right kind of computational building blocks would suffice. But whatever the answer eventually turns out to be (simple or complex front loading), no one has proven that no amount of such computational (or lawful-process-based) front loading is sufficient, as claimed in the Discovery Institute's ID mythology. nightlight
#72 Silver Asiatic "Also, as I mentioned, the presence of functional CSI (information that does something or communicates some meaning or shows a purpose) is another means of saying 'higher CSI'." There is no coherent universal definition of "functional" (CSI) -- anything does something, changes something, downstream, via interactions. When is that "something" deserving of the label "functional" effect? If it makes you (S.A.) money, or keeps you (S.A.) alive, you (S.A.) may call it "S.A.-functional" since it does something S.A. happens to care about. But if a rock breaks off and leaves its imprint in the mud, that is not "S.A.-functional" since it does nothing S.A. cares about. In short, "functional" is an arbitrary label without intrinsic or universal meaning or definition. One can attach it or not attach it to any effects of any interactions as one wishes. The upshot is, such semantic games are not going to prove God or convince anyone to teach any such word play at schools as natural science. nightlight
nightlight: Ignoring your religious aspersions as irrelevant, we note that your hypothesis that "the amount of CSI needed for life or proteins or fine tuning by a computing process underlying the universe and its laws may be relatively trivial . . ." is largely unsupported by anything other than your assertion. Nevertheless, even at that, you still aren't answering the question of where those "few lines of code" came from. Is your point really just that, yes, design is required, but that a smaller amount of original design is required than some people think? If that is the real point you are making, then fine. We note the lack of empirical support, but, sure, probably less design is required than some people think.
Therefore declaring some figure, say 500 bits, as the CSI of some protein, and asking how it got to have all these 500 bits of 'complex specified information', is as "profound" as declaring that some object has an x coordinate of 500 yards and asking how it got to have all these 500 yards of x coordinate. It has got x=500 yards because you chose x=0 to be 500 yards away from it, that's how. The same goes for the 500 bits of CSI -- a sequence has got 500 bits of CSI because you assumed a kind of computer and algorithm which needs a 500-bit program to generate that string.
This raises a couple of interesting issues. First, your primary point (some simpler algorithm might have generated the 500 bits) is, again, little more than an assertion, and is based on, we might be forgiven for pointing out, the mere possibility that there might exist some algorithm of fewer bits that could produce the protein and that might someday be discovered. So you are essentially asking us to rely on a non-existent, unproven, unknowable, as-yet-undiscovered algorithm that could generate a protein, and not just any protein, but essentially all proteins, and protein complexes, and molecular machines, and cells, and tissues, and organs and organisms. All from a simple "few lines of code." This is an argument built firmly on a lack of evidence.
Second, and somewhat more interesting, is the question of whether some number of bits, say 500, has any inherent meaning. No, it really doesn't, at least not in terms of function or information or specification. I'm in the middle of a post on that very issue, which hopefully will get posted in the next couple of days, but suffice it to say that the question of "bits" in a string is really only relevant for ruling out determinism and identifying a threshold of complexity. It does not help us identify specification. (This is precisely the point I have been making on the other thread, "KF Cuts to the Chase (Again).") Eric Anderson
The ‘specification’ pattern can be anything. It is the correlation with a second pattern that makes the second pattern have ‘specified information’. I can’t find “echoing” exception (or its definition) in any CSI definition.
An echoed pattern is not the original generation of information. Gene duplication is not considered the creation of new CSI.
Namely, how do you know that biological CSI isn’t also “echoing” merely with a longer chain of interactions than rock and mud.
We don't know, but we don't observe any echo. How do we know that biological CSI isn't a replication of something the intelligent designer created? Again, we know what we observe - and if we observe Hamlet's soliloquy reflected off the surface of water, we don't consider the water to be the generator of CSI.
But there is nothing in the CSI definitions that sets some threshold on the length of chain of interactions below which it is called “echoing” and above which it becomes CSI phenomenon.
We're looking for the original source of the CSI. It's a question of origins. When you press the wet ink of a printed page on a flat surface, the surface shows that CSI -- but it's not the origin of it. The wet paint of an illustration can press against a surface and leave a print -- but the surface is not the origin of the CSI in the illustration.
Would that be long enough chain of interaction to get us beyond the “echoing” threshold you just made up above?
Again, it's a question of the original generation of CSI and not copies of the same.
What about a robot or a sculptor watching the rock and shaping the mud (or amber) into the same 3-D imprint? Is there CSI in either of these imprints?
Interesting question, and I would say "no". There are some gray areas in discerning CSI. For example, there are Jackson Pollock styled paintings which are "designed" but use a randomizing technique, making them virtually indistinguishable from non-designed splotches. Is there CSI in the designed version and not in the random one? In the same way, we could be receiving high-CSI-content radio signals from space every day, but we can't match them to specified patterns at present, so we can't recognize the CSI in them. Sure, a sculptor could carve a rock to make it look like a piece of rock that broke off another, or could deliberately break the rock. But a rock doesn't provide much by way of a recognizable pattern. Also, I think it's problems like this, which you raise, that moved the discussion to functional CSI (FCSI), a later development from Dr. Dembski's work.
Of course, instead of rock, it could have been human face that struck the mud, which then turned into amber sculpture, which gets us closer to the favorite Mount Rushmore example of CSI.
Or a fossil, of course. But again, it's a search for origins.
Specifically, which kinds of interaction chains between CSI rich object features and its specification are allowed and which disallowed (give precise characterization of interactions so a robot can measure and deduce which is which), for declaring that something demonstrates presence of high CSI?
I think it's dependent on the quantity of CSI in the matched pattern, for one thing. Also, as I mentioned, the presence of functional CSI (information that does something or communicates some meaning or shows a purpose) is another means of saying "higher CSI". Silver Asiatic
#68 Nightlight – Thanks for the response. In reply: first, Barry hasn't changed the goalposts. My query was not necessarily consistent with Barry's point, if I understood it correctly (which is by no means certain): I am not persuaded that bare low-probability CSI is sufficient to justify a design hypothesis (which is what, on the face of it, Barry's case seemed to imply), and I was in part therefore questioning that; however, I note that the explanatory filter in the original post does include other considerations (necessity and chance), so Barry's CSI might just be shorthand for the whole package (in which case I'm not sure what in particular Barry is driving at). Following the way the discussion was going, I was questioning whether these other components were being overlooked.
Secondly, I do not need to prove that there was no law-like (or chance) process which produced intelligent designer organisms: I observe that the appearance of design can result from necessity, chance or intentional conscious mindful design, or a combination of these. ID, as I understand it, provides an unequivocal law-like method of identifying certain tell-tale signs of intentional conscious mindful design within a certain range of defined conditions; on the basis of those signs an intelligent design hypothesis is formulated which can then be subject to further enquiry and test. I simply observe that personal design exists and consider that ID, as a scientific methodology formulated along the above lines, is a reliable way to detect it (within its defined range of limitations). If you propose that the intelligent designer is in fact ultimately the product of exclusively chance and necessity (and so intelligent design is reducible to chance and necessity), then it is up to you to prove it: "he who asserts must prove".
Again, I am not at all convinced that natural (non-conscious/non-mindful) agents cannot produce CSI: however, the production of CSI is characteristic of minds at work, and what ID does (in my view) is separate the ultimately conscious/intentional/mindful sources of CSI production from the ultimately non-conscious/non-intentional/non-mindful sources of CSI. The majority of the rest of your post seems to be an appeal to our ignorance to justify a conclusion of non-intentional design (or to justify a conclusion that intentional design cannot be reliably hypothesised); but we cannot rely upon "what ifs" to get along. ID as formulated above claims reliability in identifying intentional design: it provides positive justification for formulating design hypotheses. In each hypothesised case it is up to us to then examine and test it. There may be other (even better) explanations/descriptions, but it is up to their advocates to justify and test them, not merely raise them as speculative objections or assertions (which is what many 'big name' critics of ID do). Science is grounded in observation, description, analysis, hypothesis and test – not mere speculation beyond those observations, descriptions and analyses.
Finally, in rough terms, tractable and conditional independence looks at the proximity and remoteness of potential causal linkages: in short, relevance (at least, that's how I perceive it, which, again, might be a misperception on my part). Yes, in some way everything is conditionally dependent to a degree on something else before or contemporary with it, but this point on your part really seems to be a case of obfuscation; the point is that the specification should not directly and necessarily trigger or physically cause the thing specified. In the case of your stone, the antitype or impression (with its CSI) was directly caused by the specification – the contours or type – of the stone: it wasn't remotely independent. Anyway, thanks for giving me food for thought. Thomas2
#69 Eric Anderson "Assuming for a moment that much of nature can be understood as the result of computational processes, the question is: Where did those computational processes come from?" Any deductive system DS1 needs some starting premises (postulates) that are taken for granted within that system DS1. Of course, there could be some other deductive system DS2 in which some of the premises of DS1 are deduced, but still only from the premises of DS2. A deductive system DS0 with no premises is an empty system -- it says nothing, since nothing follows from it.
The ontological counterpart of the 'starting premises' of a DS (an epistemological category) is the 'front loading' of the physical system PS being described by that DS. Hence natural science at any stage of its development always takes for granted the existence of a (front loaded) physical system PS (universe) satisfying the postulates (starting premises) of that science. How the PS came to have its particular starting properties is not something the given DS can answer. Only some new DS2 can explain the starting premises of some other DS1.
The interesting question is how much front loading one needs, and how complicated it has to be, to explain the known phenomena in the universe (including fine tuning & life). Religious dogmas commonly declare that the action of an all-knowing and all-powerful being is the minimum front loading needed to explain the known facts about the universe. Curiously, though, the priesthoods making and peddling such proclamations always somehow happen to have a secret, invisible channel to that being, through which the instruction allegedly came down that you, as a mere mortal, owe them obedience plus 10% or some such of all the fruits of your labors, or the being will get angry at you and bad things will happen. Well, how convenient.
As explained over previous posts in this thread (e.g. here), what research into computational processes has shown is that far simpler front loading (than the actions of an omniscient and omnipotent being) suffices. Namely, networks consisting of simple finite state automata connected via adaptable links work as distributed self-programming universal computers, i.e. they can compute anything that is conceivably computable (including anything we presently compute with our computing technology). Their intelligence is additive, i.e. it grows as more of the same kinds of simple nodes & links are added to the network. Hence the minimum front loading needed to explain the universe is far less powerful and wise than the omnipotent and omniscient being of conventional religions or theism/deism. The 'chief programmer of the universe', i.e. the 'front loader', need not have any more clue as to what such a system of networked automata will eventually come up with than a human programmer who wrote a simple few-line program to compute a million digits of Pi would have about the digits that will turn up. Chess programmers are quite familiar with this phenomenon (of the child exceeding the parent) -- their own chess programs are as a rule much stronger chess players than they are.
"nightlight completely misses the question at hand, and thus stumbles over himself to provide a red herring non-answer to something other than the real question at issue." Just because you didn't get it doesn't make it a "red herring." I am still awaiting a coherent answer to the CSI questions I asked, and an explanation of why the counter-examples provided don't work. All I have got back instead so far are ad hominem or non sequitur responses.
My basic issue with CSI is that most people here apparently don't understand that "information" (including CSI) is a relative quantity, like velocity or the coordinates x, y, z. The coordinate x of some object is always meant as the distance from some point arbitrarily defined as the point with x=0. Similarly, the amount of "information" S(A) in some sequence of symbols A is always meant as the minimum length (in bits) of a program P on some computer C that can generate sequence A. For example, if someone gave you 1 million bits of Pi, and you had no idea what the number Pi is, much less how to compute it, you would declare that the sequence has a million bits of information. Yet the program that produced it on a regular PC may have been only 500 bits long.
Therefore declaring some figure, say 500 bits, as the CSI of some protein, and asking how it got to have all these 500 bits of 'complex specified information', is as "profound" as declaring that some object has an x coordinate of 500 yards and asking how it got to have all these 500 yards of x coordinate. It has got x=500 yards because you chose x=0 to be 500 yards away from it, that's how. The same goes for the 500 bits of CSI -- a sequence has got 500 bits of CSI because you assumed a kind of computer and algorithm which needs a 500-bit program to generate that string.
As suggested above, the research into computational processes which might explain our physical laws (including space-time) suggests that far less and far simpler front loading is needed than is usually proclaimed by theologians and religions. Hence, the amount of CSI needed for life or proteins or fine tuning by a computing process underlying the universe and its laws may be relatively trivial (like a few lines of code for the automata rules, as Wolfram believes), just as the front loading needed to compute billions of digits of Pi, or of the square root of 2, etc. on a conventional computer is a few-line program. nightlight
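nightlight's Pi analogy is easy to make concrete. As an illustrative sketch only (this uses Gibbons' well-known unbounded spigot algorithm; the specific code is an editorial illustration, not something from the thread), a program of a few lines can emit an arbitrarily long stream of digits, so the length of the output by itself says little about the length of the shortest program that generates it:

```python
def pi_digits():
    """Yield decimal digits of Pi one at a time (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

# A tiny program, yet it can print a million digits (or more) on demand:
gen = pi_digits()
print("".join(str(next(gen)) for _ in range(50)))  # 31415926535... (first 50 digits)
```

This only illustrates nightlight's own point that a long, seemingly information-rich output can come from a very short generator; whether that analogy carries over to biological CSI is exactly what the rest of the thread disputes.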
nightlight @6 claims that nature can be completely understood in terms of computational processes. Assuming for a moment that much of nature can be understood as the result of computational processes, the question is: Where did those computational processes come from? They certainly didn't come from law-like necessity; that is anathema to computational processes. So we're left with blind dumb luck as the answer. And so the question remains: are functional computational processes more likely the result of blind dumb luck or the result of intelligent purpose and planning? It isn't even a close call.
The "creator" of the materialist creation myth is a lame, pathetic, incompetent bunch of particles bumping into each other -- something that has never been shown to produce anything even close to the kinds of computational processes nightlight refers to. No one in the ID camp is arguing that processes in living organisms do not follow the rules of chemistry and physics. The question is: Where did the computational process that mediates the inexorable effects of chemistry and physics come from? nightlight completely misses the question at hand, and thus stumbles over himself to provide a red herring non-answer to something other than the real question at issue. Eric Anderson
#66 Thomas2 "If that is a fair summary, then the stone impression may indeed exhibit significant CSI, but that would not be sufficient to draw a design inference because the stone impression will have been the result of law-like (deterministic) cause" Well, first Barry asks for some example of CSI that is produced by a law-like process. I provide an example, but now the goalposts have moved and any law-like process/cause is apparently excluded by definition (whose definition?). But let's say, just for fun, that I go along and approve the amendment of no law-like process/cause for any CSI.
But then, how do you prove that there was no law-like process behind the transformations of matter-energy that started with non-live matter-energy and transformed it into live matter-energy (cells and other organisms)? Namely, if there is a law-like process/cause that can produce such a transformation (from non-live matter-energy to live), then by the above amended CSI-exclusion rule there is no CSI in live organisms, hence there is nothing to debate as to who created the CSI of live organisms, since it is not there.
In fact, that level of transformation of matter-energy from non-live into live by law-like processes isn't all that far fetched -- it happens all the time. For example, we know that the law-like processes of metabolism, reproduction and ontogenesis can transform energy, food and water into cells and live organisms, such as new humans that didn't exist before and which are built of previously non-live matter (the atoms of the food & water making up their cells). Hence law-like processes that can achieve that degree of transformation are not impossible in principle; they exist and we know them. In other words, there exist initial conditions for some chunk of matter-energy (= the state of 'parent' organisms and their food & energy) which yield that kind of law-like transformation of surrounding matter-energy within a short time relative to the age of the universe. Once you know that at least one set of initial conditions can produce a transformation of matter-energy from non-live to live via lawful processes, the question is: how do you know that there are no other initial conditions that can do the same?
A note for those unfamiliar with the term "initial conditions": the natural laws by themselves don't determine what some system will do. E.g. while a ball does obey Newton's laws of motion and gravity, these laws don't tell you how the ball will move or where and when it will land. To deduce what the ball will do, you also need to input into the "physics algorithm" (the equations) the "initial conditions" of the ball, such as its starting position and velocity. Only the combination of these two sets of data, the 'natural laws' + 'initial conditions', yields the definite behavior of the ball. With that explanation, the previous example of transformation of matter-energy from non-live to live via lawful or law-like processes is saying that the 'initial conditions' of the matter-energy which are known to yield such a transformation are the physical state of the 'parent' organisms plus the state of their food & energy used in the transformation into new organisms. No evidence has ever been found of anything violating natural laws within such processes of transforming non-live matter-energy into live organisms.
Hence, having never observed non-lawlike interventions in the above known examples of such transformation processes, we can conclude that some future, more advanced technology will be able to run a computerized process that performs the entire transformation of non-live to live matter-energy while operating under explicitly known law-like processes, in every detail and step by step. But then, as explained in earlier posts in this thread, if some computing processes like that future technology can produce such a transformation, some other computing processes (such as networks of simple cellular automata) underlying the operation of the universe and its physical laws can also produce the existent life without human programmers or technology involved.
"and even more because it is not at all conditionally independent of the specification." Nothing is "conditionally independent" of anything that shares a common backwards light cone, i.e. everything on Earth is in principle dependent (via physical interactions) on anything else in the universe within a sphere of 14+ billion light years around Earth. nightlight
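nightlight repeatedly points to networks of simple automata, and (via Wolfram) to automaton rules "a few lines of code" long. Purely as an illustration of how small such a rule is in code (a hedged sketch only; rule 110 is a standard elementary cellular automaton known to be computationally universal, and nothing here bears on whether such systems produce CSI), a minimal update loop looks like this:

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton (periodic boundary)."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right   # neighborhood encoded as 0..7
        nxt.append((rule >> idx) & 1)               # rule bit for that neighborhood
    return nxt

# A single 'on' cell, evolved for a few generations:
cells = [0] * 31
cells[15] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The entire "law" here is the integer 110; how much such simple rules can or cannot account for is the point in contention between nightlight and his critics above.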
Barry:
There is no need to form any hypothesis whatsoever to meet the challenge.
Since Dembski defines CSI in terms of a hypothesis, you must be using some other definition of CSI which you haven't shared with us. That makes for a pretty safe challenge. R0bb
Nightlight - I seem to have missed something. In a nutshell (as I understand it), ID as a would-be scientific theory states that where in nature an entity exhibits non-deterministic, appropriately statistically significant, tractable and conditionally independent specified complexity, an unequivocal design inference/hypothesis can be made, and that CSI is a particular way of defining and quantifying specified complexity. If that is a fair summary, then the stone impression may indeed exhibit significant CSI, but that would not be sufficient to draw a design inference, because the stone impression will have been the result of a law-like (deterministic) cause and, even more, because it is not at all conditionally independent of the specification. Thomas2
HeKS:
I don’t have that book by Dembski
You can see the passage here.
Dembski is here providing the calculation of the mere complexity of the sequence, not its specified complexity or degree of specification.
In Dembski's old way of measuring CSI, specificity was binary, and if an event was specified, then its amount of CSI was simply its amount of complexity. His current definition folds the degree of specificity into the CSI measure, along with probabilistic resources. (And he shifts the "design threshold" accordingly -- it's now zero bits rather than 500 bits.) Also, note that in the passage above, he's doing the calculation in order to demonstrate "CSI holism". Regardless, you can look at any example of CSI-based design detection that you like, whether it be Hamlet or coins or DNA or ballot headers. In no case does an IDist base the calculation on the correct hypothesis of design, and in no case does the IDist say "This would have N bits of CSI if my chance hypothesis were the correct one." R0bb
#62 Silver Asiatic "The rock itself is not specified information since its shape is formed randomly." The 'specification' pattern can be anything. It is the correlation with a second pattern that makes the second pattern have 'specified information'. I can't find an "echoing" exception (or its definition) in any CSI definition.
"The image of the rock in the mud also does not create new CSI -- it merely echoes whatever information is in the rock." Just labeling it "echoing" doesn't distinguish it from the usual examples of CSI. Namely, how do you know that biological CSI isn't also "echoing", merely with a longer chain of interactions than rock and mud? There is nothing in the CSI definitions that sets some threshold on the length of the chain of interactions below which it is called "echoing" and above which it becomes a CSI phenomenon. All that CSI asks for are the two correlated sequences (e.g. values of some physical variables), one of which is called the 'specification' for the other.
Imagine for example that the rock landed on a log in a river after bouncing off the mud where it left its 3-D image. Then the rock on the log got carried by the river and ocean currents far away, say to another continent. At the same time, tree resin dripped into and filled the mud imprint, hardening over time into a big chunk of amber. Now someone finds this chunk of amber, say in Africa, then travels to America and finds there a rock that closely matches (in millions of bits of CSI) the amber block. Would that be a long enough chain of interactions to get us beyond the "echoing" threshold you just made up above?
What about a robot or a sculptor watching the rock and shaping the mud (or amber) into the same 3-D imprint? Is there CSI in either of these imprints? The interactions between rock and hole were different in the three cases, but there is nothing in CSI "theory" that distinguishes one from the other. Of course, instead of a rock, it could have been a human face that struck the mud, which then turned into an amber sculpture, which gets us closer to the favorite Mount Rushmore example of CSI.
What precisely are the semantic lines of CSI-rich phenomena? Specifically, which kinds of interaction chains between a CSI-rich object's features and its specification are allowed and which disallowed (give a precise characterization of the interactions, so that a robot can measure and deduce which is which), for declaring that something demonstrates the presence of high CSI? nightlight
Worth repeating, as this objection to her (mis-)characterization of the argument for ID came up repeatedly here at UD:
Liddle’s primary objection is that we cannot calculate the P(T|H), that is, the “Probability that we would observe the Target (i.e. a member of the specified subset of patterns) given the null Hypothesis.” However, there is no single null hypothesis. There is a collection of chance hypotheses. Liddle appears to believe that the design inference requires the calculation of a single null hypothesis somehow combining all possible chance hypotheses into one master hypothesis. She objects that Dembski has never provided a discussion of how to calculate this hypothesis. But that is because Dembski’s method does not require it. Therefore, her objection is simply irrelevant.
Mung
Interesting example, but it doesn't quite work. The rock itself is not specified information since its shape is formed randomly. The image of the rock in the mud also does not create new CSI -- it merely echoes whatever information is in the rock. It would be like the image of something reflected in smooth water. The water isn't creating new CSI. Or the echo of musical notes in a canyon - the canyon is not creating new CSI. Silver Asiatic
#59 "My challenge will be met when someone shows a single example of chance/law forces having been actually observed creating 500 bits of CSI." Say a chunk of rock on some slope breaks off and falls onto a muddy bank, then bounces off and lands nearby. At this point you have a highly detailed and accurate 3-D mud imprint of the face of the rock, with megabits worth of information (e.g. if you tried to scan, digitize and compress it), plus you also have its 'specification' nearby: the rock which matches this 3-D mud image. So purely lawful behavior yielded millions of bits of CSI in front of your eyes.
Of course, a robot could have created a similar 3-D imprint of the rock, too. Or a human sculptor. In all 3 cases it is the interaction between rock and mud, through whatever chain of intermediary objects and interactions (e.g. photons bouncing off the rock's face if a robot or human were viewing the rock to create its imprint in the mud, plus any tools they used to shape the mud), that produced the CSI.
With the CSI examples in live organisms, we don't know the chain of interactions that produced it as we do for the above mud imprint of the rock, but there is no scientific law or principle or mathematical theorem that precludes the existence of such a chain of interactions, Dembski's no free lunch reasoning notwithstanding. Namely, Dembski's 'no free lunch' argument, translated into the above rock & mud imprint example, amounts to restricting the possible interaction chains by wishfully adding arbitrary assumptions about the 'allowed interaction chains', then showing that within his 'allowed set of interaction chains' there is none that could have yielded the imprint with that amount of CSI. nightlight
@R0bb #57
Consider, for example, page 174 of Dembski’s book Intelligent Design: The Bridge Between Science and Theology, where he is explaining “CSI holism” and using “METHINKS IT IS LIKE A WEASEL” as an example. He says, “Moreover, because the sentence is a sequence of 28 letters and spaces, its complexity comes to -log2(1/27^28) = 133 bits of information.” This is based on a chance hypothesis of uniform randomness, even though the correct hypothesis is that the phrase was designed. Why do you not consider this confused and nonsensical, as you do when we’re talking about natural processes rather than design?
I don't have that book by Dembski so I can't check that reference or speak particularly intelligently about the larger context of what he's saying. However, I do notice this: "Moreover, because the sentence is a sequence of 28 letters and spaces, its complexity comes to -log2(1/27^28) = 133 bits of information." Unless he says something in the larger context to contradict the normal distinctions in terminology that are typically used on this matter (and which Ewert uses in his articles), Dembski is here providing the calculation of the mere complexity of the sequence, not its specified complexity or degree of specification. Consider what Ewert says in one of the relevant articles:
Under Dembski's definition, an event is complex if it is improbable, and specified if it matches an independent pattern. Events that have both properties are said to exhibit CSI. We may infer that an event that exhibits CSI under all possible chance hypotheses is a product of design.
And...
If a given object is highly improbable under a given hypothesis, i.e. it is unlikely to occur, we say that the object is complicated. If the object fits an independent pattern, we say that it is specified. When the object exhibits both of these properties we say that it is an example of specified complexity. If an object exhibits specified complexity under a given hypothesis, we can reject that hypothesis as a possible explanation.
And...
At a couple of points, Liddle seems to misunderstand the design inference. As I mentioned, two criteria are necessary to reject a chance hypothesis: specification and complexity. However, in Liddle's account there are actually three. Her additional requirement is that the object be "One of a very large number of patterns that could be made from the same elements (Shannon Complexity)." This appears to be a confused rendition of the complexity requirement. "Shannon Complexity" usually refers to the Shannon Entropy, which is not used in the design inference. Instead, complexity is measured as the negative logarithm of probability, known as the Shannon Self-Information. But this description of Shannon Complexity would only be accurate under a chance hypotheses where all rearrangement of parts are equally likely. A common misconception of the design inference is that it always calculates probability according to that hypothesis. Liddle seems to be plagued by a vestigial remnant of that understanding.
In any case, I've reached out to Ewert to ask for some clarification on this issue. We'll see if he responds. In the meantime, I need to get some actual work done :) HeKS
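As a quick arithmetic check on the figure quoted above (an illustrative sketch, not part of either comment): a 28-character sentence over a 27-symbol alphabet (26 letters plus the space) has, under a uniform chance hypothesis, a Shannon self-information of 28 × log2(27), which is about 133 bits, matching the value cited from Dembski:

```python
import math

# Complexity (Shannon self-information) of "METHINKS IT IS LIKE A WEASEL"
# under a uniform chance hypothesis over 27 symbols (A-Z plus space):
sentence = "METHINKS IT IS LIKE A WEASEL"
assert len(sentence) == 28

bits = -math.log2((1 / 27) ** len(sentence))   # equals 28 * log2(27)
print(round(bits, 1))                          # ~133.1 bits
```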
R0bb and HeKS, here is my challenge from above:
Show me one example – just one; that’s all I need – of chance/law forces creating 500 bits of complex specified information. [Question begging not allowed.] If you do, I will delete all of the pro-ID posts on this website and turn it into a forum for the promotion of materialism. [I won’t be holding my breath.] I understand your faith requires you to taunt us like this; yours is a demanding religion after all; but until you can do that, your taunts seem premature at best and just plain stupid at worst.
I thought my challenge was plain enough, but I will clarify for R0bb. There is no need to form any hypothesis whatsoever to meet the challenge. The provenance of the example of CSI that will meet the challenge will be ACTUALLY KNOWN. That is why I put the part about question begging in there. It is easy for a materialist to say “the DNA code easily has more than 500 bits of CSI and we know that it came about by chance/law forces.” Of course we know no such thing. Materialists infer it from the evidence, but that is not the only possible explanation. Let me give you an example. If you watch me put 500 coins on a table and I turn all of them “heads” up, you will know that the provenance of the pattern is “intelligent design.” You do not have to form a chance hypothesis and see if it is rejected. You sat there and watched me. There is no doubt that the pattern resulted from intelligent agency. My challenge will be met when someone shows a single example of chance/law forces having been actually observed creating 500 bits of CSI. Barry Arrington
P.S. Sorry for the blockquote mess-up in the last part. The sentence "No, because what we’re talking about is the second step in the Design Inference" is HeKS's. R0bb
HeKS, I don't have much time, so it's good that we're getting to the heart of our disagreement:
In any given case, ‘natural processes’ can only be said to have produced the amount of CSI that is calculated on the basis of the specific natural process that actually produced the effect in question.
On what do you base this claim? In everything that has been written about CSI, can you show me anything to support this? When Dembski and other ID proponents say that intelligent designers create CSI, are they talking about CSI calculated on the basis of the hypothesis of design? Of course not. They're talking about CSI calculated on the basis of some chance hypothesis H, even though H didn't actually produce the effect. Consider, for example, page 174 of Dembski's book Intelligent Design: The Bridge Between Science and Theology, where he is explaining "CSI holism" and using "METHINKS IT IS LIKE A WEASEL" as an example. He says, "Moreover, because the sentence is a sequence of 28 letters and spaces, its complexity comes to -log2(1/27^28) = 133 bits of information." This is based on a chance hypothesis of uniform randomness, even though the correct hypothesis is that the phrase was designed. Why do you not consider this confused and nonsensical, as you do when we're talking about natural processes rather than design?
R0bb, that is EXACTLY what Barry’s challenge was asking for. That just IS Barry’s challenge in different words. Any other challenge would be ludicrous, since it is ridiculously easy to generate high CSI values when using the WRONG chance hypothesis, as Ewert’s article amply demonstrates.
Barry's challenge is an "other challenge", since he never says that the calculation must be based on the correct hypothesis, and there is nothing in the ID literature that says or implies anything about basing CSI calculations on the correct hypothesis, nor are there any examples of Dembski or anyone else doing this. I honestly have no idea where you got the idea from. And I agree that the challenge is ludicrous.
Ewert is accurately representing Dembski’s work, which involves identifying ALL relevant chance hypotheses that might explain some object, pattern, event, etc. See Ewert’s third article on this subject, where he addresses this misconception you seem to have about calculating CSI with respect to only one chance hypothesis.
What misconception? I said that "when IDists calculate CSI in practice, it is almost always with respect to only one chance hypothesis, that of white noise." Are you saying that this isn't true?
- You've made some remarks regarding the design inference. But Barry's challenge and my response to it have nothing to do with the design inference. Do you agree?
No, because what we’re talking about is the second step in the Design Inference. Here is Barry's challenge: "Show me one example – just one; that’s all I need – of chance/law forces creating 500 bits of complex specified information. [Question begging not allowed.]" Note: 1) The challenge refers only to CSI, not to the design inference. 2) Dembski's definition of CSI makes no reference to the design inference. 3) Therefore, we don't need to refer to the design inference in order to understand Barry's challenge, and, in the interest of managing the scope of our discussion, I propose we leave it out. If you think that Barry meant something more than what he actually said, then we should bring Barry into the discussion. (I wish he'd join it anyway.) R0bb
@R0bb #54 We seem to actually agree on a lot of points this time so I'm going to give this another try...
Ewert did not give a calculation of CSI inherent in the image itself,
Again, you’re talking as if there is such thing as “CSI inherent in the image itself”, as distinct from “CSI in the image with respect to chance hypothesis H”. Can you tell me how to calculate the CSI inherent in a thing itself?
No, I'm saying exactly that there is NOT such a thing as "CSI inherent in the image itself" because the CSI can only be calculated with reference to the backdrop of a particular proposed naturalistic hypothesis. I was saying that Ewert did not give such a calculation of the CSI inherent in the image itself because it seemed to me, based on the way you were talking, that you thought he somehow had. At least that was one of only two ways I thought you could be interpreting him that made your claim seem coherent to me.
The only high-CSI values Ewert offered for the image were based on two natural mechanisms that did not produce the pattern and you agree that these processes did not produce any high-CSI pattern.
I’ll say it again: The question of which process actually produced the pattern is completely irrelevant to CSI calculations. If you disagree with this, then please say so explicitly. If you agree with it, then why do you keep bringing up the fact that the processes described by the chance hypotheses didn’t actually produce the pattern?
This is where I start to feel like I'm entering the Twilight Zone. Look again at Barry's challenge:
Show me one example … of chance/law forces creating 500 bits of complex specified information.
Let me add a word to this which does not change but merely draws out the obvious meaning of what Barry said:
Show me one example … of chance/law forces [ACTUALLY] creating 500 bits of complex specified information.
In order for this challenge to be met we would need a naturalistic process, called X, to produce some object / pattern / event that had 500 or more bits of CSI when the calculation was made with respect to process X! I don't know how I can be any more clear. You cannot meet the challenge by saying, for example:

1) Process X produced a pattern that, when calculated with respect to Process X, is found to have 2 bits of CSI.
2) When a CSI calculation is performed on the pattern produced by Process X under the hypothesis that it was produced by Process Y, the pattern is found to have 3,000 bits of CSI.
3) Therefore, a natural process has produced over 500 (i.e. 3,000) bits of CSI.

This just doesn't make any sense. It is completely confused. The way that the above example is properly interpreted is that, in this example, a natural process produced 2 bits of CSI. NOT 3,000 bits of CSI!

The value produced by a CSI calculation tells us how much CSI the object or pattern would be found to have IF it was produced by the naturalistic hypothesis upon which the calculation was based. You cannot take a value for the amount of CSI a pattern WOULD have IF it was produced by a process that DIDN'T produce it and then say that some natural process produced that amount of CSI. Again, that is completely confused. If the naturalistic hypothesis that was used to calculate the high CSI value for a pattern is not the hypothesis that was actually responsible for producing the pattern then that hypothesized process did not produce that high amount of CSI.

So, coming back to your comment, you said:
The question of which process actually produced the pattern is completely irrelevant to CSI calculations. If you disagree with this, then please say so explicitly.
Whether this is true or false depends on how you mean it. It is obviously true that CSI calculations can be performed with respect to naturalistic hypotheses that did not create the object, pattern, etc. under consideration, since the whole point of the CSI calculation is to determine if the hypothesis can be eliminated from consideration as being the correct one. So being the correct hypothesis is not a precondition for simply performing a calculation to find out how much CSI a pattern WOULD have IF it was produced by that hypothesis. HOWEVER, whether a particular naturalistic hypothesis is the correct one is ABSOLUTELY, UTTERLY, COMPLETELY relevant and central to the question of how much CSI a natural process has ACTUALLY produced in any given case. In any given case, 'natural processes' can only be said to have produced the amount of CSI that is calculated on the basis of the specific natural process that actually produced the effect in question.
1) When compared to some natural processes that did not produce the pattern, the pattern is calculated to have high CSI.
2) When compared to some other natural processes that did not produce the pattern, the pattern is calculated to have low CSI.
3a) Therefore, the pattern itself has high CSI. OR
3b) Therefore, when compared to the natural process that actually produced the pattern, the pattern has high CSI.

3a is a complete non-sequitur.
Actually, 3a follows from 1. From 2, it follows that the pattern has low CSI. It has multiple values of CSI, one value for each chance hypothesis.
No, 3a does not follow from 1. 3a is equivalent to saying "Therefore, the pattern inherently has high CSI." And just to avoid confusion, I think it's too muddy to say that the pattern "has multiple values of CSI, one value for each chance hypothesis." Rather, it would be more accurate to simply say that each chance hypothesis leads to a different CSI calculation for the pattern, but the pattern itself does not have any of these CSI values (unless, of course, the CSI value is calculated on the basis of the correct hypothesis).
But Ewert did not say that the pattern had a high CSI value when calculated on the hypothesis that it was produced by volcanic eruptions. Nor did he say the pattern had a high CSI value when calculated on a single hypothesis called “by nature”. If he had said that then you would have a point, but he didn’t.
Barry’s challenge said nothing about “a high CSI value when calculated on the chance hypothesis that is, in fact, the correct hypothesis”
R0bb, that is EXACTLY what Barry's challenge was asking for. That just IS Barry's challenge in different words. Any other challenge would be ludicrous, since it is ridiculously easy to generate high CSI values when using the WRONG chance hypothesis, as Ewert's article amply demonstrates. I mean, for goodness sake, the entire design inference relies on first finding high CSI values for all relevant chance hypotheses in order to identify those hypotheses as incorrect! Finding high CSI values on WRONG hypotheses is exactly what is expected under Dembski's method. Barry would have to be completely insane to challenge someone to find an instance where the comparison of a pattern to a WRONG hypothesis resulted in a CSI calculation over 500 bits. You fall over those every which way you turn, which is kinda the whole point.
But I get your drift. For objects that aren’t designed, one of the chance hypotheses may in fact be the correct hypothesis. But as I said, when IDists calculate CSI in practice, it is almost always with respect to only one chance hypothesis, that of white noise. Ewert’s article is almost the sole exception.
Ewert is accurately representing Dembski's work, which involves identifying ALL relevant chance hypotheses that might explain some object, pattern, event, etc. See Ewert's third article on this subject, where he addresses this misconception you seem to have about calculating CSI with respect to only one chance hypothesis. Wrapping this up...
From my perspective, here is what you’ve done: - You’ve pointed out repeatedly that none of the chance hypotheses chosen by Ewert actually produced the pattern in question. But this fact is irrelevant to the validity of his CSI calculations, or any CSI calculations. Do you agree?
No. See my explanation above. It is absolutely relevant to determining how much CSI any natural process has actually produced and is the only thing relevant to Barry's challenge. In light of the way that CSI calculations are used in the second step of the design inference, anything else would be obviously and utterly ridiculous.
- You’ve said that “Ewert did not give a calculation of CSI inherent in the image itself”. But from your summary above, it seems that you understand that there is no such thing as CSI that inheres in the image alone. Every CSI calculation is with respect to a chance hypothesis. Ewert gave us four such calculations. There is no such thing as a CSI calculation that is unlike Ewert’s calculations in that it inheres “in the image itself”. Do you agree?
Yes.
- You've made some remarks regarding the design inference. But Barry's challenge and my response to it have nothing to do with the design inference. Do you agree?
No, because what we're talking about is the second step in the Design Inference. Actually making a design inference is the third step, which comes after all relevant chance hypotheses identified in the first step have been found to result in high CSI values at the second step. Do you get it now? Please tell me you get it. If not then we may need to get someone else to jump in here and pick up the ball because I don't think there's much point in me spending several more hours writing out the same thing in different words. HeKS
R0bb: You have been looking at the file size of an image, and that such a file is FSCO/I is unquestionable. You seem to be going back to the old debate on whether a raw event per se is information, or is potentially informational. Even if so, we are not dealing with empirically shown, functionally specific organisation and/or associated information. To see that, ask: is the ash-and-snow pattern constrained to be close to what it is, or will some function fail? If so, exactly what function is so sensitive to configuration? Then, you can ask whether describing that config sufficiently to specify the relevant config requires a chain of at least 500 - 1,000+ yes/no questions, which of course is expressible in a suitably coded string. (This is essentially the question answered by an AutoCAD DWG file.) By contrast, if the image is to faithfully reflect reality as projected to a 2-d surface, it is functionally constrained. And I am sure you know that accounting for the eye per blind chance and mechanical necessity through chance variation and incremental culling on differential reproductive success or the like, with adequate empirical observations and no a priori Lewontinian materialist impositions or just-so stories dressed up in lab coats, is a major unanswered challenge to Darwinists and fellow travellers. KF

PS: You have long since known of reasonable metric models for FSCO/I using complexity thresholds as above and using observed functional specificity as a control dummy variable on counting info, e.g. Chi_500 = Ip*s - 500, bits beyond the solar system threshold. kairosfocus
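For concreteness, here is a minimal sketch of the Chi_500 expression as stated in the comment above, reading Ip as an information measure in bits and s as a 0/1 dummy variable recording observed functional specificity. That reading, and the numbers below, are illustrative assumptions, not a definitive implementation:

```python
# Minimal sketch of the stated metric:  Chi_500 = Ip * s - 500
# Ip: information measure in bits (assumed); s: 0/1 specificity flag (assumed).

def chi_500(ip_bits, specific):
    """Bits beyond the 500-bit threshold; positive values pass it."""
    s = 1 if specific else 0
    return ip_bits * s - 500

# Hypothetical values, purely for illustration:
print(chi_500(1000, True))    # 500  -> beyond the threshold
print(chi_500(1000, False))   # -500 -> not judged functionally specific
print(chi_500(300, True))     # -200 -> specific but below the threshold
```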
Ewert did not give a calculation of CSI inherent in the image itself,
Again, you're talking as if there is such thing as "CSI inherent in the image itself", as distinct from "CSI in the image with respect to chance hypothesis H". Can you tell me how to calculate the CSI inherent in a thing itself?
nor did he give a calculation of the CSI on the assumption that the pattern was produced by the successive eruptions of a volcano
If you mean that he never used the volcanic process as a chance hypothesis, then of course he didn't. I never said that he did.
The only high-CSI values Ewert offered for the image were based on two natural mechanisms that did not produce the pattern and you agree that these processes did not produce any high-CSI pattern.
I'll say it again: The question of which process actually produced the pattern is completely irrelevant to CSI calculations. If you disagree with this, then please say so explicitly. If you agree with it, then why do you keep bringing up the fact that the processes described by the chance hypotheses didn't actually produce the pattern?
He also offered very low-CSI values for the image based on different natural processes. Why are you not adopting these as the CSI value for the image?
I've never rejected those values -- they're just as valid as the high values. As I said before, for a given semiotic agent, the image has multiple values of CSI, one for each chance hypothesis considered.
The simple fact of the matter is that none of these CSI values are inherent in the pattern itself.
Again, can you tell me how to calculate the CSI inherent in the pattern itself?
1) When compared to some natural processes that did not produce the pattern, the pattern is calculated to have high CSI.
2) When compared to some other natural processes that did not produce the pattern, the pattern is calculated to have low CSI.
3a) Therefore, the pattern itself has high CSI. OR
3b) Therefore, when compared to the natural process that actually produced the pattern, the pattern has high CSI.

3a is a complete non-sequitur.
Actually, 3a follows from 1. From 2, it follows that the pattern has low CSI. It has multiple values of CSI, one value for each chance hypothesis. When we talk as if CSI inheres in the pattern (which, as I said, is understandable since even Dembski does it), we're stuck with the fact that the CSI for a given pattern is multivalent. Dembski himself says so, as I pointed out already. If we want univalent CSI, we have to talk about "the CSI in pattern T with respect to chance hypothesis H and semiotic agent S". And we can talk about a different value: "CSI in the pattern T with respect to chance hypothesis J and semiotic agent S", and yet another value of "CSI in the pattern T with respect to hypothesis H and semiotic agent W". But if you think that "the CSI in pattern T" has a single value, then again, please tell me how to calculate it.
But Ewert did not say that the pattern had a high CSI value when calculated on the hypothesis that it was produced by volcanic eruptions. Nor did he say the pattern had a high CSI value when calculated on a single hypothesis called “by nature”. If he had said that then you would have a point, but he didn’t.
Barry's challenge said nothing about "a high CSI value when calculated on the chance hypothesis that is, in fact, the correct hypothesis" or "a high CSI value when calculated on a single hypothesis called 'by nature'". Once again, the validity of a CSI calculation doesn't depend on choosing a chance hypothesis that turns out to be the correct hypothesis, nor on choosing a chance hypothesis called "by nature".
You keep pointing out that the chance hypothesis employed to calculate the CSI of the pattern is not the actual process that produced the pattern. But that is how CSI is always calculated. It’s calculated with a null hypothesis, not the actual process that produced the pattern.
That is just not correct. A CSI value is calculated for each naturalistic hypothesis that is proposed to have produced the object or event in question, not on the basis of a single specific null hypothesis.
Each naturalistic hypothesis that is proposed to have produced the object or event in question is a single specific null hypothesis. Each CSI calculation is based on a single null hypothesis, but there are multiple null hypotheses for a given object or event. It seems that we agree on this, which is encouraging.
In fact, after writing that last sentence I just went and looked up Ewert’s other articles on this subject and here’s what he says about this very issue:
Ewert: ... However, there is no single null hypothesis. ... the design inference requires that we calculate probabilities, not a probability. Each chance hypothesis will have its own probability, and will be rejected if that probability is too low.
Again, we agree that there is no single null hypothesis, which is great. And again, Barry's challenge and my response to it have nothing to do with the design inference.
But CSI is not defined in terms of the process that actually produced the result. If you think that it is, then why do IDists always calculate the CSI in designed artifacts with respect to a non-design hypothesis (almost always white noise, as I said in #41)?
The CSI value is determined/calculated in terms of each of the naturalistic (i.e. non-teleological) hypotheses proposed to explain some object or event, one of which might actually be the process that produced the object or event.
Thank you for answering. By definition, a naturalistic process cannot produce a designed artifact. (I realize that "designed artifact" is redundant, but I wanted to emphasize that I'm talking about cases in which the object is in fact designed.) But I get your drift. For objects that aren't designed, one of the chance hypotheses may in fact be the correct hypothesis. But as I said, when IDists calculate CSI in practice, it is almost always with respect to only one chance hypothesis, that of white noise. Ewert's article is almost the sole exception.
The concept of CSI is used exclusively to rule out the naturalistic hypotheses proposed to explain some object or event. Each of the proposed naturalistic hypotheses offers a backdrop, baseline or context against which the object or event might be compared and found to have either a high or a low degree of CSI relative to that backdrop. If any proposed naturalistic hypothesis leads to a very low calculation of CSI for the object/event, then a design inference is not made. A design inference is only made when all proposed naturalistic hypotheses lead to a calculation of high CSI. And even then the design inference is held tentatively, allowing the possibility that some naturalistic hypothesis might be proposed in the future that leads to a low CSI calculation.
That's an excellent summary of Dembski's work on CSI -- I mean that sincerely. But again, Barry's challenge and my response to it have nothing to do with the design inference.
I’ve done just about all I can to help you understand that it was mistaken. If you want to go on insisting that it wasn’t mistaken, then I don’t think there’s anything else I can do for you.
From my perspective, here is what you've done:

- You've pointed out repeatedly that none of the chance hypotheses chosen by Ewert actually produced the pattern in question. But this fact is irrelevant to the validity of his CSI calculations, or any CSI calculations. Do you agree?
- You've said that "Ewert did not give a calculation of CSI inherent in the image itself". But from your summary above, it seems that you understand that there is no such thing as CSI that inheres in the image alone. Every CSI calculation is with respect to a chance hypothesis. Ewert gave us four such calculations. There is no such thing as a CSI calculation that is unlike Ewert's calculations in that it inheres "in the image itself". Do you agree?
- You've made some remarks regarding the design inference. But Barry's challenge and my response to it have nothing to do with the design inference. Do you agree?

I've probably missed something, and I hope you'll tell me what it is. R0bb
@R0bb #48
They were two chance hypotheses that actually had nothing to do with the generation of the image content, so to say that they created high levels of CSI in the image, thereby answering Barry’s challenge, is literally nonsensical.
I agree completely. That’s why I never said that those two hypotheses created high levels of CSI. As I said in #27, it was a volcano that created the high-CSI pattern.
But R0bb, you've been arguing that, according to Ewert, the pattern given by Liddle has sufficient CSI to answer Barry's challenge. This is not what Ewert said and nothing that Ewert said supports this claim. Ewert did not give a calculation of CSI inherent in the image itself, nor did he give a calculation of the CSI on the assumption that the pattern was produced by the successive eruptions of a volcano, since he was not supposed to know where the pattern came from. To say "it was a volcano that created the high-CSI pattern" is to say something that Ewert did not say at all.

The only high-CSI values Ewert offered for the image were based on two natural mechanisms that did not produce the pattern and you agree that these processes did not produce any high-CSI pattern. Yet you use these values (values which are only valid under the specific chance hypotheses to which they were attributed) to claim that natural processes produced a high-CSI pattern. This simply doesn't make any sense. He also offered very low-CSI values for the image based on different natural processes. Why are you not adopting these as the CSI value for the image?

The simple fact of the matter is that none of these CSI values are inherent in the pattern itself. The claim you are making is a non-sequitur. You are basically arguing like this:

1) When compared to some natural processes that did not produce the pattern, the pattern is calculated to have high CSI.
2) When compared to some other natural processes that did not produce the pattern, the pattern is calculated to have low CSI.
3a) Therefore, the pattern itself has high CSI. OR
3b) Therefore, when compared to the natural process that actually produced the pattern, the pattern has high CSI.

3a is a complete non-sequitur. 3b is not in evidence, since no CSI calculation has been provided for the pattern when compared to the actual natural process of successive volcanic eruptions.
Again, I agree completely. Barry’s challenge is obviously not met by the Mona Lisa, because the Mona Lisa was not produced by nature. But the ash bands that Ewert analyzed were produced by nature.
But Ewert did not say that the pattern had a high CSI value when calculated on the hypothesis that it was produced by volcanic eruptions. Nor did he say the pattern had a high CSI value when calculated on a single hypothesis called "by nature". If he had said that then you would have a point, but he didn't.
You keep pointing out that the chance hypothesis employed to calculate the CSI of the pattern is not the actual process that produced the pattern. But that is how CSI is always calculated. It’s calculated with a null hypothesis, not the actual process that produced the pattern.
That is just not correct. A CSI value is calculated for each naturalistic hypothesis that is proposed to have produced the object or event in question, not on the basis of a single specific null hypothesis. In fact, after writing that last sentence I just went and looked up Ewert's other articles on this subject and here's what he says about this very issue:
Ewert: As I emphasized earlier, the design inference depends on the serial rejection of all relevant chance hypotheses. Liddle has missed that point. I wrote about multiple chance hypotheses but Liddle talks about a single null hypothesis. She quotes the phrase "relevant [null] chance hypothesis"; however, I consistently wrote "relevant chance hypotheses." Liddle's primary objection is that we cannot calculate the P(T|H), that is, the "Probability that we would observe the Target (i.e. a member of the specified subset of patterns) given the null Hypothesis." However, there is no single null hypothesis. There is a collection of chance hypotheses. Liddle appears to believe that the design inference requires the calculation of a single null hypothesis somehow combining all possible chance hypotheses into one master hypothesis. She objects that Dembski has never provided a discussion of how to calculate this hypothesis. But that is because Dembski's method does not require it. Therefore, her objection is simply irrelevant. .... Liddle objects that we cannot calculate the probability necessary to make a design inference. However, she is mistaken because the design inference requires that we calculate probabilities, not a probability. Each chance hypothesis will have its own probability, and will be rejected if that probability is too low. - Pink EleP(T|H)ants on Parade: Understanding, and Misunderstanding, the Design Inference
Also, in pointing out that 'the chance hypothesis employed to calculate the CSI of the pattern is not the actual process that produced the pattern', I was not saying that as a matter of general truth. I was making a specific statement about the case under consideration. Generally speaking, it could obviously very well be the case that one of the natural hypotheses proposed to explain some event or object was the one that actually produced it, but when this is the case, the calculated CSI value invariably turns out to be very low.
But CSI is not defined in terms of the process that actually produced the result. If you think that it is, then why do IDists always calculate the CSI in designed artifacts with respect to a non-design hypothesis (almost always white noise, as I said in #41)?
The CSI value is determined/calculated in terms of each of the naturalistic (i.e. non-teleological) hypotheses proposed to explain some object or event, one of which might actually be the process that produced the object or event. The concept of CSI is used exclusively to rule out the naturalistic hypotheses proposed to explain some object or event. Each of the proposed naturalistic hypotheses offers a backdrop, baseline or context against which the object or event might be compared and found to have either a high or a low degree of CSI relative to that backdrop. If any proposed naturalistic hypothesis leads to a very low calculation of CSI for the object/event, then a design inference is not made. A design inference is only made when all proposed naturalistic hypotheses lead to a calculation of high CSI. And even then the design inference is held tentatively, allowing the possibility that some naturalistic hypothesis might be proposed in the future that leads to a low CSI calculation.

I don't want to be rude here or anything, but I really don't know what else I can say. Your original claim that Ewert's article showed Barry's challenge had been met was simply mistaken. I've done just about all I can to help you understand that it was mistaken. If you want to go on insisting that it wasn't mistaken, then I don't think there's anything else I can do for you. HeKS
Sorry, option 1 above is poorly worded. Substitute: 1) Even if the data was never stored as an image, it was still collected by some kind of optical sensor, and optical sensors are man-made! R0bb
KF and UB, Here are the data points that Ewert analyzed. Please show me how his calculations would have been different if those data points had been taken from the glacier itself rather than from an image of the glacier. For your convenience, you can respond by referring to one of the following by number:

1) But it was a camera that collected that data from the glacier, and cameras are man-made!
2) But that's a computer file that you linked to, and computers are man-made!
3) But those data points are Hindu-Arabic numbers, and the Hindu-Arabic numeral system is man-made!
4) Other: _____________________________________________________

R0bb
KF
Why have you presented a case of an IMAGE, created by artifice of man ... as an example of natural generation of CSI?
Upright BiPed
nightlight, If you'd like to conflate local regularity with inexorable law, in the face of incontrovertible evidence to the contrary, then that is certainly your prerogative. It remains a flawed perspective all the same. Upright BiPed
HeKS:
I’m not sure how many more times we can go around this. Here is Barry’s original challenge:
Show me one example … of chance/law forces creating 500 bits of complex specified information.
You responded by saying:
Winston Ewert calculates that a pattern posted by Elizabeth Liddle has 1,068,017 bits of CSI, or 593,493 bits, or -11,836 bits, or -3,123,223 bits, depending on which chance hypothesis is used.
There are two problems here. 1) This is not exactly what Ewert said. I've already described more precisely what Ewert said in my previous comments and I find myself wondering if you actually read his article. If you want to take the position that Ewert himself simply doesn't understand how CSI relates to a design inference, you are free to take that up with him, but that's not what you originally claimed.
I've read Ewert's article, thanks, and we both know precisely what he said, but we apparently disagree on his meaning. With regards to me wanting "to take the position that Ewert himself simply doesn’t understand how CSI relates to a design inference", I have no idea where that came from. You correctly quoted Barry's challenge and my response, and neither have anything to do with "how CSI relates to a design inference".
They were two chance hypotheses that actually had nothing to do with the generation of the image content, so to say that they created high levels of CSI in the image, thereby answering Barry’s challenge, is literally nonsensical.
I agree completely. That's why I never said that those two hypotheses created high levels of CSI. As I said in #27, it was a volcano that created the high-CSI pattern.
It would be like testing the chance hypothesis that a bunch of cans of paint tipped over, spilled onto a canvas, and created the Mona Lisa and finding that, lo and behold, this chance hypothesis leads to an incredibly high calculation of CSI, and so Barry’s challenge has been answered. But this obviously makes no sense, because that isn’t how the Mona Lisa came to exist, so that chance hypothesis did not actually create a high degree of CSI, as it didn’t even happen in the first place.
Again, I agree completely. Barry's challenge is obviously not met by the Mona Lisa, because the Mona Lisa was not produced by nature. But the ash bands that Ewert analyzed were produced by nature. You keep pointing out that the chance hypothesis employed to calculate the CSI of the pattern is not the actual process that produced the pattern. But that is how CSI is always calculated. It's calculated with a null hypothesis, not the actual process that produced the pattern. In #41, I said that if you know of any exceptions to this, then please share. That invitation is still open. I think it would also help resolve our miscommunication/disagreement if you would answer the question I posed at the end of #44. I'll repeat it: But CSI is not defined in terms of the process that actually produced the result. If you think that it is, then why do IDists always calculate the CSI in designed artifacts with respect to a non-design hypothesis (almost always white noise, as I said in #41)? I know this is a semantically murky subject, so thanks for your patience, HeKS. R0bb
NL - You do seem to come across as holding natural science to be based and operating upon an exclusive and dogmatic assumption of universally applicable naturalism - which would seem to reduce it to blind-faith religious atheism (or possibly deism) - unless you really mean that the assumption of naturalism is only an initial and correctable working assumption. The problem with science-as-dogmatic-universal-and-exclusive-naturalism is that such "science" is front-loaded with prior religious/metaphysical conclusions which make it impossible to let the evidence speak for itself, and impossible to reliably describe, understand and explain nature: if you have decided in advance what science may or may not discover then you are not actually doing science at all - hence IDers' rejection of MN as a valid scientific philosophy. Nevertheless (as I understand it), in scientific practice ID identifies design (within a limited range of applicability) employing MN exclusively! - it proposes a basically mathematical method of unequivocally identifying and describing purposeful design by detecting certain law-like properties in designed entities for a defined class of cases: its output is a design inference/hypothesis which can then be subject to further scientific test. Essentially ID proposes a scientific law for detecting design. Thomas2
@R0bb #44 I'm not sure how many more times we can go around this. Here is Barry's original challenge:
Show me one example ... of chance/law forces creating 500 bits of complex specified information.
You responded by saying:
Winston Ewert calculates that a pattern posted by Elizabeth Liddle has 1,068,017 bits of CSI, or 593,493 bits, or -11,836 bits, or -3,123,223 bits, depending on which chance hypothesis is used.
There are two problems here.

1) This is not exactly what Ewert said. I've already described more precisely what Ewert said in my previous comments and I find myself wondering if you actually read his article. If you want to take the position that Ewert himself simply doesn't understand how CSI relates to a design inference, you are free to take that up with him, but that's not what you originally claimed.

2) Barry asked for an example of a chance/law process actually creating 500 or more bits of CSI. But the two chance hypotheses Ewert covers that would lead to a very high CSI calculation were not the ones that created the image. They were two chance hypotheses that actually had nothing to do with the generation of the image content, so to say that they created high levels of CSI in the image, thereby answering Barry's challenge, is literally nonsensical. It would be like testing the chance hypothesis that a bunch of cans of paint tipped over, spilled onto a canvas, and created the Mona Lisa and finding that, lo and behold, this chance hypothesis leads to an incredibly high calculation of CSI, and so Barry's challenge has been answered. But this obviously makes no sense, because that isn't how the Mona Lisa came to exist, so that chance hypothesis did not actually create a high degree of CSI, as it didn't even happen in the first place.

Consider some excerpts from Ewert:
The subject of CSI has prompted much debate, including in a recent article I wrote for ENV, "Information, Past and Present." I emphasized there that measuring CSI requires calculating probabilities. At her blog, The Skeptical Zone, writer Elizabeth Liddle has offered a challenge to CSI that seems worth considering. She presents a mystery image and ASKS FOR A CALCULATION OF CSI. The image is in gray-scale, and looks a bit like the grain in a plank of wood. Her intent is either to force an admission that SUCH A CALCULATION IS IMPOSSIBLE or to produce a false positive, detecting design where none was present. But as long as we remain in the dark about what the image actually represents, CALCULATING ITS PROBABILITY IS INDEED IMPOSSIBLE. Dembski never intended the design inference to work in the absence of understanding possible chance hypotheses for the event. Rather, the assumption is that we know enough about the object to make this determination.
If the CSI value was simply a measure of the CSI inherent in the object/image itself, it would not be impossible to calculate the CSI value of the image without knowing what it represented or what chance hypotheses could be tested against it. Ewert continues (and I'll just bold this stuff without comment):
Let's review the design inference... There are three major steps in the process:

1. Identify the relevant chance hypotheses.
2. Reject all the chance hypotheses.
3. Infer design.

.... Specified complexity is used in the second of these steps. In the original version of Dembski's concept, we reject each chance hypothesis if it assigns an overwhelmingly low probability to a specified event. Under the version he presented in the essay "Specification," a chance hypothesis is rejected due to having a high level of specified complexity.

.... The criterion of specified complexity is used to eliminate individual chance hypotheses. It is not, as Liddle seems to think, the complete framework of the process all by itself. It is the method by which we decide that particular causes cannot account for the existence of the object under investigation.

.... Specified complexity as a quantity gives us reason to reject individual chance hypotheses. It requires careful investigation to identify the relevant chance hypotheses. This has been the consistent approach presented in Dembski's work, despite attempts to claim otherwise, or criticisms that Dembski has contradicted himself.
As I said above, if you want to go argue with Ewert that he doesn't understand how CSI factors into the design inference, you're free to do that. What I'm telling you is that Ewert does not make any statement in that article that can be cited in support of your claim that Barry's challenge has been met. I freely admit that I'm no math wizard, but in this case this is not a question of math. It's a question of reading comprehension. HeKS
#43 logically_speaking
"science operates under assumption that universe operates lawfully from some front loaded foundation". Laws require a law maker, just an observation.
That's a topic for metaphysics or theology, not for natural science. By definition, the postulates (which are the epistemological counterpart of the ontological element 'front loaded foundation') are assumptions that are taken for granted by natural science ('for the time being' at any point). Of course, no starting assumption is cast in stone, and what is taken for granted today may be explained in the future under some more economical starting assumptions (i.e. under a more succinct set of postulates). But you cannot get around the basic requirement that no matter how far the science advances, some starting assumption taken for granted must be accepted before anything can be deduced within that science. The hypothetical 'ideal theory' with an empty set of initial assumptions, i.e. with an empty scientific-statement-generating algorithm, generates an empty set of scientific statements. Science has moved beyond that point by now.

The ontological restatement of the above epistemological requirement is that science always presumes the existence of some front loaded system (universe) that plays or operates by the rules described by its postulates. The advance of science consists of reducing that 'front loading' to fewer and simpler elemental entities which can explain as much or more phenomena than before. The computational or algorithmic approach to natural science (exemplified by Wolfram's NKS, sketched in this post) examines the above process, which normally unfolds implicitly, more systematically and deliberately, treating it as an object of scientific research and formalization in its own right.

One remarkable finding of such analysis (by Wolfram and others) is that extremely simple elemental building blocks, such as 2-state automata (states denoted as 0 and 1) with very simple rules of state change, can be universal computers (capable of computing anything that is computable by an abstract Turing machine, which is a superset of what any existent, concrete and finite computer can compute). Hence, the most economical front loading will eventually advance (or shrink) into the form of a network of very simple elemental automata with a few rules for their state/link changes.

While many such systems with universal computer capabilities do exist (many are also distributed self-programming universal computers), the hard part is finding one that also requires only a very simple initial state of the system. Namely, the initial state is the 'starting program' (the 'universe program', as it were) being executed by this simple front loaded distributed computer. One complication in this type of modeling is that this 'starting program' is a self-modifying program, i.e. its outputs are themselves the instructions that will/may get performed at later stages in the run of the 'universe program'. Ideally, one would want to find the rules of the automata network for which the initial state itself has very low algorithmic information/complexity (i.e. a short length of program needed to generate it), yet remains capable of computing/reproducing all observed phenomena. Otherwise, with a single-minded focus on simplifying just the building blocks (the network of automata), one may merely be shifting the front loading from the rules of operation of the elemental building blocks into the initial state of the system (the 'starting program' of the universe), without necessarily reducing the total amount of front loading (which consists of the rules of operation/hardware + the initial state/software).
You say, "By definition you can't have science presuming a capricious, laws violating entity interfering with the phenomena (e.g. arranging molecules into "irreducibly complex designs")". But we arrange molecules in irreducibly complex designs all the time, does that mean we violate the laws of the universe, of course not.
This is a very common misunderstanding (of science and natural laws) in the ID debates, often on both sides. The natural laws are not the entire input needed to compute or make a prediction about (or describe) phenomena or events. A natural law, such as Newton's laws of motion and gravity, merely constrains the events but doesn't single out the one that will happen. For example, while a kicked ball does move according to Newton's laws, these laws don't tell you where and when it will land or what its trajectory will be.

To compute the actual behavior of the ball, you need to input into the 'physics algorithm' not just data representing the natural law but also the data representing the initial and boundary conditions (IBC), i.e. the initial velocity and direction of the ball (initial conditions), plus any forces affecting the ball during the flight (boundary conditions). Only this combined set of data or numbers yields predictions about actual events: algorithmic instructions of the law + numbers for initial conditions + numbers for boundary conditions. The latter two sets of numbers, the IBC, are not themselves a natural law, but are some numbers 'put in by hand', as it were, into the physics algorithm (and are not specified by the algorithm). The law itself merely constrains or compresses the description of events -- e.g. instead of recording/specifying all points of the ball trajectory, you just need to specify the initial velocity and position (plus any ball intercepts in flight by other players), and the law algorithm computes the rest of the trajectory. Hence instead of describing events using, say, a million numbers for high-res trajectory points, you just need 6 numbers (3 for initial 3-D position + 3 for initial 3-D velocity).

Similarly, when a biochemist arranges molecules into a complex design, he is not violating any law of chemistry or quantum physics. He is merely setting the initial and boundary conditions for the molecules, aiming to make them form some desired arrangement. That's just like a player adjusting the initial speed and direction of the kick, aiming to make the ball enter the goal. No law is violated at any point in either case.

An important detail about boundary conditions (through which one can control the behavior of a system without violating any natural laws) is that they don't refer only to physical conditions on the outer surface of the system, but also to physical conditions on any inner surface of the system. For example, you control a car by adjusting the boundary conditions on its internal surfaces (steering wheel, gas pedal, etc.). In other words, objects can be controlled via boundary conditions without grabbing or pushing or manipulating them from outside. And all that can be done while remaining perfectly within natural laws throughout the process. Hence no violation of natural laws is needed for objects to perform arbitrarily complex actions, and no one has to manipulate objects from outside or in any way that is observable from outside. This implies that the 'chief programmer of the universe' doesn't need to reach down from the heavens, or intervene in any way that anyone would notice directly from outside, in order to make molecules do something clever, such as arrange into proteins or live cells, while playing strictly by the rules (natural laws) throughout the entire construction process.
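The kicked-ball example can be made concrete: the law (here, Newtonian gravity with air resistance ignored) plus the six initial-condition numbers is enough to compute the whole trajectory. A minimal Python sketch, with made-up numbers standing in for the kick:

```python
# "Law + initial conditions -> trajectory": projectile motion under gravity,
# no air resistance. The six initial-condition numbers are purely illustrative.
g = 9.81  # m/s^2, gravitational acceleration (the "law" data)

def position(p0, v0, t):
    """Position at time t given initial position p0 and velocity v0."""
    x0, y0, z0 = p0
    vx, vy, vz = v0
    return (x0 + vx * t,
            y0 + vy * t,
            z0 + vz * t - 0.5 * g * t ** 2)

p0 = (0.0, 0.0, 0.0)      # initial 3-D position (m)
v0 = (20.0, 5.0, 15.0)    # initial 3-D velocity (m/s): the "kick"

for t in (0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0):
    print(t, position(p0, v0, t))
```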
Note also that any operations you can do with objects, such as arranging molecules in complex ways, a robot can in principle do as well (or, generally, a program with suitable electro-mechanical transducers), including the operations needed to build that 'chemist' robot, or the operations needed to build the second robot that built the chemist robot, ... etc. Such a (potentially unlimited) hierarchy or chain of robots, each generation building the next, more advanced (in the relevant aspects) generation, can in principle reach a stage of technology with robots that can arrange molecules in complex ways, such as those constructing live cells.

The fascinating insight of the algorithmic approach to natural science sketched earlier is that starting robots as simple as 2-state automata connected into a network with adaptable links can build robots not just as complex as any we can build, but as complex as any we can conceive of ever building. As discussed earlier, a hierarchy of 'robots' starting with the most elemental ones at the Planck scale would pack 10^80 times more computing power (hence intelligence) into any volume of matter-energy than what we can presently conceive of building in that same volume using our elementary particles as building blocks (e.g. for logic gates).

Further, this unimaginably powerful underlying computation (intelligence) can be operating each elementary particle, at all moments and in all places, as their galactic scale robotic technology, which in turn operates cells as their galactic scale technology, which finally operate us as their galactic scale technology. Eventually, human civilizations will build and operate their own galactic scale technologies, extending the harmonization process to an ever larger scale. As explained above, all such control can be done while everyone at every level is playing strictly by the rules (the natural laws of that level) throughout, via control of the internal boundary conditions of those objects, hence without any apparent or directly observable external manipulation of any of the objects in the hierarchy.

Of course, these 'rules of operation' or 'natural laws' that the different levels are playing by are not what we presently call or understand as natural laws. The latter laws capture only a few outermost coarse grained features or regularities of the much finer, more subtle patterns computed by the hierarchy of underlying computing technologies. The front loaded ground level system computing it all in full detail, and the 'natural laws' by which it works, are those simple elemental automata and their simple rules operating together as adaptable networks (societies) at the Planck scale. The topic of Planck scale networks and their implications for ID was described and discussed in great detail in an earlier, longer thread at UD. The hyperlinked TOC of that discussion is in the second half of this post. nightlight
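One concrete instance of the "very simple 2-state building blocks" invoked above is the elementary cellular automaton known as Rule 110: two states, a purely local update rule, and proved computationally universal (a result due to Matthew Cook, presented in Wolfram's NKS). A minimal Python sketch of its update rule, offered only as an illustration of how simple such rules are:

```python
# Elementary two-state cellular automaton, Rule 110.
RULE = 110  # the 8-entry lookup table, encoded as an integer

def step(cells):
    """One synchronous update of a row of 0/1 cells (wrap-around edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right   # neighbourhood as 0..7
        out.append((RULE >> index) & 1)               # look up the new state
    return out

# Start from a single 'on' cell and print a few generations.
row = [0] * 31 + [1] + [0] * 31
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```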
HeKS:
If you look at Ewert’s article, the amount of calculated CSI changes based on the naturalistic explanation being considered. This could not be the case if the CSI values Ewert gives were simply a measure of the CSI present in the image itself. If it were, there would only be one CSI value.
You're under the illusion that CSI inheres in the entity alone. This is understandable since most ID proponents, including Dembski himself, often talk as if it does. But consider Dembski's current definition of specified complexity:

Χ = -log2[10^120 ⋅ Φ_S(T) ⋅ P(T|H)]

Χ is a function of three variables, namely T, H, and S. So CSI does not inhere in the observed instance of the pattern (T) alone, but rather inheres in the observed instance of the pattern (T) in combination with the chance hypothesis (H) and the semiotic agent (S) who observes the instance of the pattern. So yes, entities have multiple values of CSI. For a given semiotic agent, an entity has a CSI value for every chance hypothesis. To detect design, says Dembski, we calculate the CSI of the entity for every "relevant" chance hypothesis, and infer design only if all of the CSI values meet the threshold. If Barry understood the definition of specified complexity, he would not have issued the challenge that he did. Nature produces all kinds of stuff that has high CSI with respect to a chance hypothesis of white noise.
Instead, the CSI value in each case is calculated on the assumption that the hypothesis currently under consideration was actually the one that produced the image.
You seem to think that if the hypothesized process turns out to not be the one that actually produced the image, then the calculated number is not "a measure of the CSI present in the image itself". But CSI is not defined in terms of the process that actually produced the result. If you think that it is, then why do IDists always calculate the CSI in designed artifacts with respect to a non-design hypothesis (almost always white noise, as I said in #41)? R0bb
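A small numerical sketch may make the multivalence point concrete. Using the definition quoted just above, Χ depends on the chance hypothesis H through P(T|H) and on the semiotic agent S through Φ_S(T); the numbers below are made up solely to show that the same pattern T gets different Χ values under different hypotheses:

```python
# Sketch of  Chi = -log2( 10^120 * Phi_S(T) * P(T|H) )  from the quoted
# definition. All numeric values are hypothetical, chosen for illustration.
import math

def chi(phi_s_of_t, p_t_given_h):
    return -math.log2(1e120 * phi_s_of_t * p_t_given_h)

phi_s_of_t = 1e9   # assumed descriptive-complexity count for T under agent S

# Same pattern T, two different chance hypotheses H1 and H2:
print(round(chi(phi_s_of_t, 1e-300)))  # ~568 bits: H1 would be rejected
print(round(chi(phi_s_of_t, 1e-60)))   # ~-229 bits: H2 would not be
```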
Nightlight, You said, “science operates under assumption that universe operates lawfully from some front loaded foundation”. Laws require a law maker, just an observation. You say, “By definition you can’t have science presuming a capricious, laws violating entity interfering with the phenomena (e.g. arranging molecules into “irreducibly complex designs”)”. But we arrange molecules in irreducibly complex designs all the time; does that mean we violate the laws of the universe? Of course not. For example, we can take the ingredients of eggs, flour and milk and make a cake. Now the cake, while probably tasting terrible, is irreducibly complex. Anyway, the point is that the designer does exactly what we do: he takes the ingredients (molecules) and turns them into something new. logically_speaking
@R0bb #41, If you look at Ewert's article, the amount of calculated CSI changes based on the naturalistic explanation being considered. This could not be the case if the CSI values Ewert gives were simply a measure of the CSI present in the image itself. If it were, there would only be one CSI value. Instead, the CSI value in each case is calculated on the assumption that the hypothesis currently under consideration was actually the one that produced the image.

So, if we assume that the image "was generated by choosing uniformly over the set of all possible gray-scale images of the same size", the formula to determine the amount of CSI out of the total amount of Shannon Information would give "a result of approximately 1,068,017 bits". However, if the image was "generated by a process biased towards lighter pixels", then the formula to determine the amount of CSI out of the total amount of Shannon Information would give a result of "approximately 593,493 bits". Ewert concludes in both of these cases that the naturalistic hypothesis under consideration results in a calculation of CSI that is too high for the hypothesis to be plausible and so both hypotheses are rejected as the correct explanation for the image. And, as it turns out, neither of those hypotheses was the correct explanation for the image.

In the case of the other two hypotheses that Ewert considers, however, they result in CSI calculations of "approximately -11,836 bits" and "approximately -3,123,223 bits" respectively, both of which are far too low for the hypotheses to be ruled out by the concept of specified complexity. This, of course, does not necessarily mean that the hypotheses are correct, but the existence of natural hypotheses capable of accounting for the image that result in a very low calculation of CSI is sufficient to rule out the necessity of a design inference. HeKS
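To make the per-hypothesis logic just described concrete, here is a minimal Python sketch (not from the original exchange) that applies the 500-bit threshold from Barry's challenge to the four values HeKS cites from Ewert's article. The labels for the last two hypotheses are placeholders, since they are not named above; the decision rule follows the prose: a chance hypothesis is rejected only if its CSI is high, and a tentative design inference would follow only if every relevant hypothesis were rejected.

```python
# Per-hypothesis CSI values cited above, with the 500-bit threshold applied.
THRESHOLD_BITS = 500

csi_by_hypothesis = {
    "uniform over all gray-scale images of the same size": 1_068_017,
    "process biased towards lighter pixels":                 593_493,
    "third chance hypothesis (not named above)":             -11_836,
    "fourth chance hypothesis (not named above)":         -3_123_223,
}

rejected = {h: bits > THRESHOLD_BITS for h, bits in csi_by_hypothesis.items()}

for h, is_rejected in rejected.items():
    print(h, "->", "rejected" if is_rejected else "not rejected")

# A (tentative) design inference would require every hypothesis to be rejected.
print("design inference:", all(rejected.values()))   # False for this image
```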
HeKS:
For the two positive bit calculations you cite, Ewert is saying that the image would contain that amount of CSI if the image had been produced by the chance hypothesis under consideration and that this amount of CSI is sufficient to rule out that chance hypothesis
(Emphasis mine.) Actually, no. Whenever Dembski or another ID proponent calculates the CSI in target T, it is always with respect to a chance hypothesis H (which, in practice, is often tacit, and almost always uniformly random noise). If the result of the calculation is N bits, the ID proponent says that T has N bits of CSI. They don't say "T would have N bits of CSI if it were actually produced by H." If you know of any exceptions to this, then please share. Otherwise, can I assume we're on the same page? R0bb
#39 Not all regularities are lawful. Some are locally systematic; specifically not lawful. Lawfulness, regularities and patterns are essentially synonymous in this context -- algorithmically they all express redundancy, compressibility of the data, which is what science does from an algorithmic perspective. When you systemize some data set, you are also creating an algorithm, whether you notice it or not, which can reconstruct the data from the more concise data set/model of the data (e.g. as the systemizing rule plus the list of exceptions to the rule). That is in essence the same thing one does with a mathematical formula expressing, say, Newton's or Maxwell's laws. nightlight
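The "rule plus exceptions" picture can be made concrete with a small sketch (purely illustrative; the data and the crude encoding are invented): a mostly regular sequence is stored as a generating rule plus a short exception list, and the original is reconstructed exactly from that shorter description.

```python
def compress(data):
    """Store a near-regular sequence as (rule, length, exceptions): here the 'rule' is
    simply the most common value, and exceptions record where the data departs from it.
    The more lawful the data, the shorter the exception list."""
    rule = max(set(data), key=data.count)                      # the regularity
    exceptions = [(i, v) for i, v in enumerate(data) if v != rule]
    return rule, len(data), exceptions

def reconstruct(rule, length, exceptions):
    """Rebuild the original data from the rule plus its exceptions."""
    data = [rule] * length
    for i, v in exceptions:
        data[i] = v
    return data

# A 'lawful' sequence with a couple of departures from the regularity.
observed = [0] * 50 + [1] + [0] * 48 + [1]
model = compress(observed)
assert reconstruct(*model) == observed
print(model)   # (0, 100, [(50, 1), (99, 1)]) -- far shorter than listing all 100 values
```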
The essence of natural science is research of the lawful aspects of nature. Its objective is to discover and model regularities and patterns (laws) in natural phenomena.
Not all regularities are lawful. Some are locally systematic; specifically not lawful. They are established by contingent organization. Like biology. Upright BiPed
#36 William J Murray phenomenon which indicates that the known natural computational power of the universe is not sufficient to account for it, and what we call intelligence is known to trivially produce similar artifacts. There is no reason to assume that what is presently "known" is all there is. Even within the present outer limits set by the fundamental physics, which breaks down at the Planck scale (lengths < 10^-35 m or times < 10^-44 s), there are at least as many orders of magnitude of scale between the Planck scale and our smallest known building blocks (elementary particles, ~ 10^-15 m) as there are between those 'elementary' particles and us (humans). Hence, as discussed earlier, the smallest building blocks at the Planck scale would yield 10^80 times more computational power packed in any volume of matter-energy than the most powerful computer conceivable built from our present smallest building blocks, elementary particles (which is itself still far ahead of the actual computational technology we have today). Vague or not, ultimately "natural" or not, what we refer to as intelligence is obviously the necessary cause for some artifacts we find in the universe - the novel War and Peace, for example. There's no more logical reason to avoid the term "intelligent design" in science than there is to avoid the term "evolution" or "natural law" or "entropy" or "time" or "random variation" or "natural selection". The term is merely a symptom of the problem (the parasitic infestation and degradation of natural sciences by emoters from left and right), not the problem itself. The question that matters is what comes next, after the informal observation of the obvious, i.e. how do you model or formalize "intelligence" so it can become a productive element of the model space of natural science (the formal/algorithmic part of science that computes its predictions)? Natural science models intelligence not by fuzzying and softening it further via even more vague and emotional terms (god, agency, consciousness) but through computational processes and algorithms. That's what real scientists like Stephen Wolfram, James Shapiro and researchers at SFI are doing, and that is where the advance of the science will take place. nightlight
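For readers wondering where the 10^80 figure comes from, the back-of-envelope arithmetic below reproduces one way of getting it from the round-number scales quoted in the comment (it is bookkeeping on those quoted scales, not a physical claim): shrinking the building blocks from ~10^-15 m to the Planck length packs (10^20)^3 times more units into a given volume, and letting each unit switch states at the Planck-time rate makes each one ~10^20 times faster.

```python
# Back-of-envelope recovery of the 10^80 factor, using the round scales from the comment.
particle_length = 1e-15   # metres, rough size scale of elementary particles
planck_length   = 1e-35   # metres
length_ratio    = particle_length / planck_length   # ~1e20

volume_factor = length_ratio ** 3   # ~1e60 more building blocks per unit volume
speed_factor  = length_ratio        # ~1e20 faster stepping at the Planck time
print(f"{volume_factor * speed_factor:.0e}")   # ~1e80 more raw operations per second per volume
```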
Humbled at 4 & 5 hits the nail on the head as regards the early parts of my own life story from ages ~19 to 36. I was the world's leading authority and expert on that Book I had never bothered to open, and was quite evangelical in my Atheism. I didn't realize it at the time, but that Book I hadn't opened held a certain amount of fear for me. The fear was "what happens if I find out that Book is true? If true, then what are the implications for me personally ... and my lifestyle?" And other such very personal and existential questions. "Would it be a mirror to my own very unflattering life?" (It was.) I don't think my experience was anything unique, especially among young American males supposedly searching for the 'truth.' The popularity of Hugh Hefner's 'Playboy Philosophy' and other Atheistic and hedonistic world views tells me that what Humbled says above in 4 & 5 is a valid observation ... even today. In the 7 decades of my life I have seen quite a number of destructive waves crash onto the shores of American culture; Hefner was/is such a wave, Timothy Leary is another, easy divorce is another, Bertrand Russell was another, the almost total cultural embrace of homosexuality is another, Occupy Wall Street is another, radical feminism is another, the deconstruction and demonization of American history is another, multiculturalism is another, the New Atheism is another ... and on and on and on. ayearningforpublius
These two ‘armies’ of overly passionate empathizers, who are to natural science what a bicycle is to a fish, should take their silly little war to the social and gender studies or other humanities, and just get the heck out of natural science. Everyone would be better off from such a shift, including the battling emoters themselves, since everyone benefits from the real advances in natural sciences and technology.
The problem with the a priori position that science is only about understanding the natural computational abilities of the universe arises when it runs smack into a phenomenon which indicates that the known natural computational power of the universe is not sufficient to account for it, and what we call intelligence is known to trivially produce similar artifacts. Vague or not, ultimately "natural" or not, what we refer to as intelligence is obviously the necessary cause for some artifacts we find in the universe - the novel War and Peace, for example. There's no more logical reason to avoid the term "intelligent design" in science than there is to avoid the term "evolution" or "natural law" or "entropy" or "time" or "random variation" or "natural selection". It is a classification of a category of causal agency, whether (ultimately) natural/computational or not, which can leave quantifiable, recognizable evidence. The only reason to avoid using the term "intelligent design" in such cases is political/ideological. William J Murray
#26 drc466 Shorter nightlight: 1) I have a philosophical predisposition to MN ("universe operates lawfully"), and refuse to accept any other theory a priori The essence of natural science is research of the lawful aspects of nature. Its objective is to discover and model regularities and patterns (laws) in natural phenomena. Another way of stating it is to note that natural science is a 'compression algorithm' for natural phenomena. Regular compression algorithms identify some regularity, pattern, repetitiveness, predictability... aka lawfulness, in the data stream and use it to encode the data in fewer bits, e.g. by using a shorthand (shorter codes) for the most common data elements. The 'lawless' or 'patternless' sequence (a symbol sequence drawn uniformly from all possible symbol sequences) is incompressible, i.e. there is no compression algorithm for those. A compression algorithm for 'lawless' data is an oxymoron. Similarly, a natural science of 'lawless' phenomena is an oxymoron. (I will call it childish and anti-science and congratulate myself on my intellectual superiority). The real problem is that too many people from arts, humanities, gender 'studies',... have blundered into and over time completely parasitised natural sciences (as well as technologies with major government funding, such as the space program; e.g. NASA went rapidly downhill after the mental illness of political correctness took over). There is an interesting classification of human cognitive styles into empathizers and systemizers, or E and S dimensions or scale (interesting article): "Empathizers identify with another person's emotions, whereas systemizers are driven to understand the underlying rules that govern behavior in nature and society." The general observation from the above research about placement along the E and S dimensions is that liberals and conservatives are empathizers, while libertarians are systemizers. While natural science is a product of and natural domain for the systemizing cognitive style, the leftist takeover of academia has resulted in major empathizer infestation and degradation of the natural sciences. A natural reaction to that takeover was a more recent rise of the opposing force, empathizers from the right, such as Discovery Institute with its ID "theory" based on a capricious, part-time deity (which is as useless to natural science as what it fights against, neo-Darwinism). These two 'armies' of overly passionate empathizers, who are to natural science what a bicycle is to a fish, should take their silly little war to the social and gender studies or other humanities, and just get the heck out of natural science. Everyone would be better off from such a shift, including the battling emoters themselves, since everyone benefits from the real advances in natural sciences and technology. nightlight
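As a concrete stand-in for the compression analogy in the comment above, the short sketch below (illustrative only; a general-purpose compressor is used as a proxy for any "law-finding" procedure) compares a patterned byte stream with a uniformly random one: the compressor shrinks the former dramatically and cannot shrink the latter, which is the sense in which a "compression algorithm for lawless data" fails.

```python
import os
import zlib

# A 'lawful' stream (a repeating pattern) versus a 'lawless' one (uniform random bytes).
lawful  = b"ABCD" * 2_500            # 10,000 bytes with an obvious regularity
lawless = os.urandom(10_000)         # 10,000 bytes drawn uniformly at random

print(len(zlib.compress(lawful)))    # a few dozen bytes: the pattern is the 'law'
print(len(zlib.compress(lawless)))   # ~10,000 bytes or slightly more: nothing to exploit
```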
The computational power of the universe, it seems, is not without a sense of irony. William J Murray
NL: Your problem seems to be hostility to the mere idea of a Creator-God who is a maximally great and necessary being, the root and sustainer of reality who is Reason Himself. Such a being simply will not be thoughtlessly or irresponsibly impulsive, which is what caprice is about. Purposeful, thoughtful decision is not caprice. One does not have to accept that such a Being exists to have a fair view of the character identified for such. Fictional characters can have just that, recognisable character. Much less, the God of the universe. KF kairosfocus
R0b @ 28: Why have you presented a case of an IMAGE, created by artifice of man [and which was sourced in a few minutes by UD'ers . . . ], as an example of natural generation of CSI? Are you unaware that this was yet another of the attempted counter-examples shot down and shown to inadvertently demonstrate what they sought to overturn? So, this is not even a strawman. The ash and snow pattern is produced by intersecting forces of chance and necessity. It is complex, but no functional specificity is tied to that complexity; if the pattern were wildly different it would make no difference, it is just an event on the ground. The image made from it is indeed functionally specific and complex: it reasonably accurately portrays the pattern. But obviously, blatantly, such an image is a product of intelligent design using machinery that is intelligently designed to capture such images. The pivot, again, seems to be confusion about joint complexity AND specificity attaching to the same aspect of an observed entity, leading you to collapse the matter into mere complexity. That is a strawman. Please, take time to examine the flowchart and infographic in the OP, with intent to actually understand them in their own terms rather than to find targets for counter-talking points by snipping and sniping. A process very liable to set up and knock over strawmen. BTW, this case also illustrates how refusal to acknowledge cogent correction at the time by objectors to design thought leads to far and wide insistent or even stubborn propagation of errors, misunderstandings, false claims and strawman caricatures. KF kairosfocus
@R0bb #27 If I'm reading you right, you seem to have misunderstood Ewert's article at ENV. Liddle's image doesn't meet Barry's challenge at all. For the two positive bit calculations you cite, Ewert is saying that the image would contain that amount of CSI if the image had been produced by the chance hypothesis under consideration and that this amount of CSI is sufficient to rule out that chance hypothesis. As it turns out, the calculation of CSI was accurate in eliminating these particular chance hypotheses, as the image was not actually generated by either of them. Barry's challenge is not asking for some natural process that would have been found to have produced a large amount of CSI if it had actually been responsible for the origin of a high-CSI object but was not. His challenge is asking for a natural process that actually produced a high-CSI object as calculated in the light of that particular natural process. HeKS
Nightlight: Do you have a blog of your own? I want to read your perspectives in further detail. pobri19
NL: Regarding a related issue often brought up in this context — while present chess programs are written by humans, nothing in principle precludes another program B from writing a chess program A, then another program C from writing program B, then a program D from writing program C, etc.
To create a program, an overview is needed - a plan. How can a program come up with a (new) plan of its own? How can it create something new - do something other than it has been instructed to do? Programs that write programs merely mimic design. You are suggesting that there is an accumulation of creative intelligence and understanding going on, but Searle's Chinese Room teaches us that there is no intelligence or understanding going on when manipulating symbols according to rules.
NL: It is also perfectly plausible, or at least conceivable, that anything we (humans) are doing is a result of underlying computational processes. Any finite sequence (of anything, actions, symbols etc) can be replicated by a computational process, hence any actions and production of any human can be replicated by such processes.
Not plausible at all, since computational processes don't replicate but merely mimic understanding. Box
nightlight: Your replies are excellent learning material, thank you. Can you recommend to me any blogs or reading material with a world view congruent with yours (and thus, mine)? It doesn't necessarily have to be related to ID or non-ID, but simply interesting information to digest. I'm always on the lookout for new information to further my understanding of the world, human nature and the universe, but it's important to find and follow the correct path. You introduced me to Conway's Game of Life, and that was really intriguing. As an example, a great blog I stumbled onto a few months ago: meltingasphalt.com. As a programmer, I found it really easy to understand and agree with your perspective. Hope to hear back from you. pobri19
Barry:
Show me one example – just one; that’s all I need – of chance/law forces creating 500 bits of complex specified information. [Question begging not allowed.] If you do, I will delete all of the pro-ID posts on this website and turn it into a forum for the promotion of materialism.
Here Winston Ewert calculates that a pattern posted by Elizabeth Liddle has 1,068,017 bits of CSI, or 593,493 bits, or -11,836 bits, or -3,123,223 bits, depending on which chance hypothesis is used. The pattern was produced naturally by a periodically erupting volcano leaving ash bands on a glacier. A million bits of CSI certainly meets your challenge, but please don't delete all pro-ID posts from this site. R0bb
Shorter nightlight: 1) I have a philosophical predisposition to MN ("universe operates lawfully"), and refuse to accept any other theory a priori (I will call it childish and anti-science and congratulate myself on my intellectual superiority). 2) I don't have an answer for your objection that I can't provide an example of design that doesn't start with a designed object. I just have my "just-so" story that must be true because of #1 above. Response to point 1: Describe, while remaining within your requirement of "universe operates lawfully", how matter and energy originated. Either of your two possible responses ("has always existed", "from nothing") violates your own requirement. Ergo, at some point in the history of the universe, unlawful behavior occurred. Response to point 2: Simply provide an example of de novo complexity that doesn't start from "take a designed object, and then..." drc466
dang you kf! Now I keep seeing advertisements for Swing-Away can openers! Mung
#20 KF actions of an all-wise Creator would reflect “caprice” It is 'caprice' with respect to the lawful unfolding of the universe, which is what natural science takes as the definition of its task. Some epistemological system which rejects lawfulness may be fine as theology or poetry or politics etc., but it is not a natural science. If Discovery Institute wants to speculate about a deity which comes in every now and then to "fix" some "irreducibly complex" molecule that its own laws and initial-boundary conditions botched for some reason, fine with me; it's their time and their money to squander as they wish. But what they are talking about is just not a science by definition (as a discipline that deals with the lawful aspects of the universe), that's all I am saying. nightlight
#22 "Out of curiosity, are you even aware that your "front-loading" hypothesis assumes what you are attempting to prove - namely, that "processes built-in or front-loaded into the universe are capable of generating design" Any scientific theory needs some postulates (rules of the game) to build anything. Ontologically this translates into presumed front loading of a system satisfying those postulates. What makes theory scientific, in contrast to say, poetry or theology or casual chit chat or Discovery Institute's ID, is that science operates under assumption that universe operates lawfully from some front loaded foundation. To what extent our present scientific theories capture the ultimate/true foundation (if there is any such) is a separate issue (metaphysics). The presumed lawfulness is what differentiates scientific approach from religion or other approaches. By definition you can't have science presuming a capricious, laws violating entity interfering with the phenomena (e.g. arranging molecules into "irreducibly complex designs") in the universe as Discovery Institute's ID assumes. The mission of science is research of the lawful aspects of phenomena. Whether there are any other kind of aspects we can't ever know with certainty since any rules defining presently understood 'lawfulness' are provisional and what seems outside of the current rules may be deducible from some other rules discovered in the future. But science is limited by definition to what is lawful (whatever the front loaded foundation it starts with may be). There is no problem of course in pursuing and experiencing other aspects ('unlawful' by the presently known rules of the game). But one can't misbrand and sell such pursuits as 'science' and insist on teaching them as such in science class, as Discovery Institute's ID seeks to do. DI's ID is anti-scientific with its part time capricious deity somehow jumping in and out of the universe to rearrange some molecules into 'irreducibly complex' forms that its own laws and initial-boundary conditions allegedly can't manage. It is a completely childish and incoherent position, based on fundamental misunderstanding of science (at least), hence it is justifiably excluded from science. Regarding your assertion that scientific front loading assumes that which needs to be proven (e.g. origin of life, biological complexity, etc), that is again misunderstanding of how computational perspective works. For example, a simple chess program containing few pages of C code, can produce millions of highly creative and complex chess games beyond understanding of the chess programmer (modern chess programs easily beat the best chess players in the world, let alone their programmers). The outward apparent complexity is the result of program (algorithms) + computational process executing the program. The objective of computational approach to natural science (such as Wolfram's NKS) is to assume front loading with very simple basic computational units (as simple as binary on/off state) along with simple rules of their state change, and seek to reproduce what we presently know as laws of physics (including the space-time aspects). Many systems with simple binary cellular automata were shown to have capability of universal computer (they can compute anything that is computable with any Turing machine equivalent abstract computer). 
Some such extremely simple systems can reproduce several fundamental laws of physics, such as the Schrödinger, Maxwell and Dirac equations (hence covering basic quantum mechanics and electromagnetism). Wolfram believes it is conceivable that the basic computational unit for the universe and its most fundamental laws is describable with one line of code for the rules of some networked cellular automaton. So, front loading some very simple computational building block (a finite state machine) and its rules for connecting with other such blocks, which allow the combined computational power to be additive (as in neural networks), could in principle compute all of the present physical laws, with their fine tuning for life, as well as the origin and evolution of life. While no one has yet worked out the most fundamental computational system of this kind for the whole universe, numerous bits and pieces of that gear have surfaced at different levels of science, from the several laws of fundamental physics to the 'natural genetic engineering' of James Shapiro and the 'biochemical networks' and related computational models from the Santa Fe Institute for Complexity Science at the level of biological systems. In such an approach, what we presently see as physical laws are merely some aspects or regularities of the patterns computed by this underlying computing substratum. The biological systems would be separate aspects of the computed patterns which are not reducible to the 'physical laws' regularities but are merely consistent with physical laws. Namely, physical laws on their own don't determine what a physical system will do -- you also need to input into the 'physics' algorithm (equations) the initial and boundary conditions (like the initial angle and velocity of a billiard ball, plus all its interactions with the table & other balls as boundary conditions) in order to compute what the physical system will do. Further, with the physical laws as presently known, not even the physical laws + initial-boundary conditions determine what the system will do. Namely, our present fundamental physical laws (quantum field theory) are probabilistic, hence with all the data input, laws + initial-boundary conditions, the physics algorithm only yields the probabilities of different events for that system but not which of the events will occur. This is suggestive of the existence of some more complex underlying system for which our present laws of physics are merely an approximate statistical description (like the general laws of an economy vs. what individual buyers and sellers are doing). If this implied underlying system is computed, as assumed in the computational approach to natural science (the NKS), then any level of biological complexity can be the result of this underlying computation, which is vastly more powerful than anything we can conceive based on our present computing technology (a 10^80 times more powerful computing process, if one assumes elemental computational building blocks at the Planck scale -- see the second half of this post for a hyperlinked TOC to discussion of this topic on UD). In terms of a chief programmer of the universe, what this means is that the CPoU could have created some very simple computing blocks and set up rules for their combination into networks of blocks (so that their computing power is additive), but he has no idea what will come out of the computation, just as the programmer of a little program that computes a million digits of Pi has no idea, beyond the first few digits, what digit its creation will spew next. 
Similarly, the computational approach does not need to presume that which needs to be shown (e.g. the origin of life, biological complexity), as you assert -- it only needs to assume a much simpler computational substratum and show that such a system can itself compute the observed biological complexity. Obviously, this is still a work in progress (e.g. see James Shapiro's and the SFI's papers). nightlight
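As a concrete miniature of the "simple rules, complex output" claim in the comment above, here is an elementary one-dimensional cellular automaton (Wolfram's Rule 110, which is known to be Turing-complete): the update rule fits in one line, yet the evolved pattern is far richer than the rule itself. This is only an illustration of the kind of system being referred to, not a model of physics or biology.

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton: each cell's next state depends
    only on itself and its two neighbours (wrap-around boundary).  The rule number's
    bits encode the output for each of the eight possible neighbourhoods."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single 'on' cell and print a few generations.
row = [0] * 63 + [1]
for _ in range(30):
    print("".join(".#"[c] for c in row))
    row = step(row)
```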
nightlight, Out of curiosity, are you even aware that your "front-loading" hypothesis assumes what you are attempting to prove - namely, that "processes built-in or front-loaded into the universe are capable of generating design", and therefore ID's claim that "the rules of physics in the universe + chance is not capable of generating design" must be wrong? You're assuming the can-opener. One essential argument against your position is that you cannot even provide a single example that would support your position. Your example of a chess program that can play an infinite # of chess games starts with a chess program that was intelligently designed to play an infinite # of chess games. Your Turing machines would be (as none currently exist) programs designed to mimic human behavior, and would be restricted to mimicking the human behavior in the fashion designed into the code. Neural networks and fuzzy logic are nonetheless goal-oriented, with the goal and processes for reaching the goal designed into the code. In order for your position to have merit, you have to assume that whatever chemical "computational process" required to generate life is "front-loaded" into the universe's initial conditions. While this is not technically impossible, there are several problems with it: 1) Lacking a supernatural intelligence to front-load those processes, how'd they get there? You're still arguing that chance, free of front-loaded process, created front-loaded process. 2) Lack of affirmative evidence wrt life. If the generation of life as a computational process was built into the universe, how come all evidence is that it was a singular event unreproducible even with the assistance of intelligent design (us)? 3) Lack of affirmative evidence universally. There is not a single unambiguous, experimentally reproducible example of "front-loaded computational processes" generating de novo design (ref BA's 500 bits of complex information) that does not start with "take a designed object, and then..." drc466
I'm curious... being, among other things, a computer programmer (primarily of User Interfaces), is there some meaningful description that can be offered of how I program that does not consist primarily of the thought processes I use to plan out the logic of the interface functionality or the details of how the medium in which my design plan is instantiated happens to carry out my instructions? HeKS
NL: Let us hear the thinking of a great theistic scientist on a designer and architect of the cosmos. Yes, Sir Isaac Newton, in the General Scholium to the Principia:
. . . This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent and powerful Being. And if the fixed stars are the centres of other like systems, these, being formed by the like wise counsel, must be all subject to the dominion of One; especially since the light of the fixed stars is of the same nature with the light of the sun, and from every system light passes into all the other systems: and lest the systems of the fixed stars should, by their gravity, fall on each other mutually, he hath placed those systems at immense distances one from another. This Being governs all things, not as the soul of the world, but as Lord over all; and on account of his dominion he is wont to be called Lord God pantokrator, or Universal Ruler; for God is a relative word, and has a respect to servants; and Deity is the dominion of God not over his own body, as those imagine who fancy God to be the soul of the world, but over servants. The Supreme God is a Being eternal, infinite, absolutely perfect; but a being, however perfect, without dominion, cannot be said to be Lord God; for we say, my God, your God, the God of Israel, the God of Gods, and Lord of Lords; but we do not say, my Eternal, your Eternal, the Eternal of Israel, the Eternal of Gods; we do not say, my Infinite, or my Perfect: these are titles which have no respect to servants. The word God usually signifies Lord; but every lord is not a God. It is the dominion of a spiritual being which constitutes a God: a true, supreme, or imaginary dominion makes a true, supreme, or imaginary God. And from his true dominion it follows that the true God is a living, intelligent, and powerful Being; and, from his other perfections, that he is supreme, or most perfect. He is eternal and infinite, omnipotent and omniscient; that is, his duration reaches from eternity to eternity; his presence from infinity to infinity; he governs all things, and knows all things that are or can be done. He is not eternity or infinity, but eternal and infinite; he is not duration or space, but he endures and is present. He endures for ever, and is every where present; and by existing always and every where, he constitutes duration and space. Since every particle of space is always, and every indivisible moment of duration is every where, certainly the Maker and Lord of all things cannot be never and no where. Every soul that has perception is, though in different times and in different organs of sense and motion, still the same indivisible person. There are given successive parts in duration, co-existent parts in space, but neither the one nor the other in the person of a man, or his thinking principle; and much less can they be found in the thinking substance of God. Every man, so far as he is a thing that has perception, is one and the same man during his whole life, in all and each of his organs of sense. God is the same God, always and every where. He is omnipresent not virtually only, but also substantially; for virtue cannot subsist without substance. In him are all things contained and moved [i.e. cites Ac 17, where Paul evidently cites Cleanthes]; yet neither affects the other: God suffers nothing from the motion of bodies; bodies find no resistance from the omnipresence of God. It is allowed by all that the Supreme God exists necessarily; and by the same necessity he exists always, and every where. [i.e. accepts the cosmological argument to God.] 
Whence also he is all similar, all eye, all ear, all brain, all arm, all power to perceive, to understand, and to act; but in a manner not at all human, in a manner not at all corporeal, in a manner utterly unknown to us. As a blind man has no idea of colours, so have we no idea of the manner by which the all-wise God perceives and understands all things. He is utterly void of all body and bodily figure, and can therefore neither be seen, nor heard, or touched; nor ought he to be worshipped under the representation of any corporeal thing. [Cites Exod 20.] We have ideas of his attributes, but what the real substance of any thing is we know not. In bodies, we see only their figures and colours, we hear only the sounds, we touch only their outward surfaces, we smell only the smells, and taste the savours; but their inward substances are not to be known either by our senses, or by any reflex act of our minds: much less, then, have we any idea of the substance of God. We know him only by his most wise and excellent contrivances of things, and final cause [i.e. from his designs]: we admire him for his perfections; but we reverence and adore him on account of his dominion: for we adore him as his servants; and a god without dominion, providence, and final causes, is nothing else but Fate and Nature. Blind metaphysical necessity, which is certainly the same always and every where, could produce no variety of things. [i.e. necessity does not produce contingency] All that diversity of natural things which we find suited to different times and places could arise from nothing but the ideas and will of a Being necessarily existing. [That is, implicitly rejects chance, Plato's third alternative and explicitly infers to the Designer of the Cosmos.] But, by way of allegory, God is said to see, to speak, to laugh, to love, to hate, to desire, to give, to receive, to rejoice, to be angry, to fight, to frame, to work, to build; for all our notions of God are taken from the ways of mankind by a certain similitude, which, though not perfect, has some likeness, however. And thus much concerning God; to discourse of whom from the appearances of things, does certainly belong to Natural Philosophy.
All I will say for now beyond that is, that if your picture of how a theistic scientist thinks cannot at least match up to Newton, you are painting, scorning and knocking over a strawman. KF PS: You seem to be caught up in the fixed, propagandistic notion that the actions of an all-wise Creator would reflect "caprice" ---
caprice (kəˈpriːs) n 1. a sudden or unpredictable change of attitude, behaviour, etc; whim 2. a tendency to such changes 3. (Classical Music) another word for capriccio [C17: from French, from Italian capriccio a shiver, caprice, from capo head + riccio hedgehog, suggesting a convulsive shudder in which the hair stood on end like a hedgehog's spines; meaning also influenced by Italian capra goat, by folk etymology] Collins English Dictionary – Complete and Unabridged © HarperCollins Publishers 1991, 1994, 1998, 2000, 2003
-- That, it seems to me, would be the exact opposite of a God who would be reason himself, a maximally great being, and would have infinitely wise purpose. Instead, I suggest that C S Lewis, in Miracles and in other essays, is far closer to home. His point was that God would use miracles as signposts standing out from the usual order of creation, and as such there would necessarily be a usual order amenable to understanding and science. But it would be open to necessarily rare actions beyond the usual order for good purposes of the Creator's. And besides, there is no necessity of the miraculous in the creation or diversification of cell-based life, even on the part of God. Why wouldn't God use a molecular nanotech lab to create and diversify? And if not, and something happened beyond the course of nature or ordinary art, how different is that really from our own intelligent and purposeful creativity? If that is what he wished to do, would that be irrational or whimsical or merely impulsive? I suggest to you, not. (And in the Christian frame, reflect here on the God who would in love hang on a cross as a wounded healer redeemer.) It seems to me you are caught up in dismissive strawman fallacies and linked polarisation. Kindly, think again. kairosfocus
nightlight is pure entertainment bwahahahaha thanks nightlight! Vishnu
#16 NL: Has it registered that a chess program is a highly complex, carefully programmed software entity, tracing directly to a highly skilled and knowledgeable intelligent designer? There is nothing in principle that precludes another program B from writing a chess program A. Similarly, there is nothing in principle that precludes some program C from writing program B, etc. All these programs are finite sequences of symbols, hence computable objects by a Turing machine (or universal computer). As noted previously, any action that the human programmer of a chess program performed is a finite sequence of actions that a suitably programmed android robot could have executed (which in turn could have been produced by another android robot, etc.). For all you or anyone else knows, the human chess programmer, along with the rest of us, could all be some artifacts of an underlying computation, like those gliders and glider guns in Conway's Game of Life. nightlight
#12 "There is nothing in the science literature or history that ID objects to if it is supported by evidence of naturalistic processes." How does DI's ID explain "irreducible complexity" i.e. how exactly does their so-called "intelligent agency" enter the picture of the lawfully unfolding universe to "solve" the "irreducibly complex" puzzle? How does it make molecules (e.g. of the first live organism) arrange themselves into something they wouldn't have otherwise done by natural laws (presently known or some we will discover in the future)? Does it do it lawfully (without violating the rules of the game, whatever they may be) or does it override the laws? If the latter, than this is anti-scientific approach, since natural science is based on faith in complete lawfulness of all phenomena in the universe. Note that 'rules of the game' above are not a synonym with 'natural laws as presently known' since these are always subject to change, from smaller refinements to complete overturning. Hence, the real or ultimate 'rules of the game' may never be known. nightlight
NL: Has it registered that a chess program is a highly complex, carefully programmed software entity, tracing directly to a highly skilled and knowledgeable intelligent designer? That, if the designer blunders badly enough, it will fail, or, almost worse, partly work and then fail when it is counted upon? KF kairosfocus
#11 Show me one example – just one; that’s all I need – of chance/law forces creating 500 bits of complex specified information. A chess program which only has the basic rules front-loaded into the code can compute/produce millions of interesting, high-quality chess games (better than or equal to any that humans have created) that its programmer never dreamed of. Each game contains from a few hundred to a few thousand bits of CSI. Note though that "information" in CSI is a vague quantity, unless one specifies the computational model with respect to which the information is measured (how much code one needs to specify an algorithm that produces it within that model). E.g. you can look at a sequence of 1,000,000 binary digits and declare that it has 1 million bits of "information". That is true if the program producing it is just a simple printf("%s\n",array) statement. But if the program is a random number generator function, the "information" in that sequence may be only a few dozen bits (for the initial state of the generator). Of course, the same ambiguity and arbitrariness holds for any amount of CSI attributed to any biological system in the ID literature. If you take some dumb algorithm (such as random trial and error or a simple stochastic search, such as a GA), the CSI will appear numerically large. But if you knew the underlying algorithm that actually computed it (analogous to knowing the random number generator behind the above million bits of "information"), it may be small, like the one that produces complex-looking fractals or the little program that spews millions of digits of Pi. Present science doesn't know the minimum amount of "information" contained in any biological system, or in the universe, relative to all possible generating algorithms. There are only guesses relative to some generating models that particular authors were able to come up with. Hence all such CSI claims by Dembski and others are subjective and carry little scientific weight, i.e. all they are effectively saying amounts to "if God were as smart as William Dembski, then he would need to front load this much CSI into this phenomenon." So what? Who cares. Regarding a related issue often brought up in this context -- while present chess programs are written by humans, nothing in principle precludes another program B from writing a chess program A, then another program C from writing program B, then a program D from writing program C, etc. It is also perfectly plausible, or at least conceivable, that anything we (humans) are doing is a result of underlying computational processes. Any finite sequence (of anything, actions, symbols etc) can be replicated by a computational process, hence any actions and production of any human can be replicated by such processes. All that is a completely non-controversial, trivial observation. The scientifically interesting question is how much computational front loading one needs to replicate the present universe. Some researchers who play with fundamental models of physics, pregeometry (see post & links on Wolfram's NKS), believe that very little 'intelligence' or CSI needs to be front loaded (ontologically, or postulated on the epistemological side), as long as the basic building blocks (which are front loaded) support additive intelligence (computational power), such as neural networks of simple automata. Of course, some front loading (or postulates) is needed as a starting point for any science. 
The key trait of natural science is that it assumes lawfulness of construction from whatever front loading is taken as its basis. In that respect Discovery Institute's ID is anti-scientific, since it insists on postulating a capricious, part-time entity jumping in and out of the creation at will to "fix" or "improve" upon this or that flaw in the lawful behavior of its creation. In contrast, natural science is based on faith in the complete lawfulness of the universe (our present 'natural laws' are not necessarily the last word on that 'lawfulness'). nightlight
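The million-bit example above can be made concrete with a short sketch (assumption: a seeded pseudo-random generator stands in for the unknown underlying algorithm). The raw sequence is a million bits long, yet the program plus its seed that regenerates it exactly is only a few lines and a few dozen bits of state, which is the sense in which "information" here is relative to the generating model.

```python
import random

def generate(seed, n_bits=1_000_000):
    """Deterministically regenerate the 'million-bit' sequence from a tiny seed."""
    rng = random.Random(seed)
    return [rng.getrandbits(1) for _ in range(n_bits)]

sequence = generate(seed=42)
print(len(sequence))                   # 1,000,000 bits of apparent 'information'
print(sequence == generate(seed=42))   # True: fully specified by a ~6-bit seed plus a short program
```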
CHANCE: >> chance (chăns) n. 1. a. The unknown and unpredictable element in happenings that seems to have no assignable cause. b. A force assumed to cause events that cannot be foreseen or controlled; luck: Chance will determine the outcome. 2. The likelihood of something happening; possibility or probability. Often used in the plural: Chances are good that you will win. Is there any chance of rain? 3. An accidental or unpredictable event. 4. A favorable set of circumstances; an opportunity: a chance to escape. 5. A risk or hazard; a gamble: took a chance that the ice would hold me. 6. Games A raffle or lottery ticket. 7. Baseball An opportunity to make a putout or an assist that counts as an error if unsuccessful. adj. Caused by or ascribable to chance; unexpected, random, or casual: a chance encounter; a chance result. v. chanced, chanc·ing, chanc·es v.intr. To come about by chance; occur: It chanced that the train was late that day. v.tr. To take the risk or hazard of: not willing to chance it. Phrasal Verb: chance on/upon To find or meet accidentally; happen upon: While in Paris we chanced on two old friends. Idioms: by chance 1. Without plan; accidentally: They met by chance on a plane. 2. Possibly; perchance: Is he, by chance, her brother? on the off chance In the slight hope or possibility. [Middle English, unexpected event, from Old French, from Vulgar Latin *cadentia, from Latin cadēns, cadent-, present participle of cadere, to fall, befall; see kad- in Indo-European roots.] Synonyms: chance, random, casual, haphazard, desultory These adjectives apply to what is determined not by deliberation but by accident. Chance stresses lack of premeditation: a chance meeting with a friend. Random implies the absence of a specific pattern or objective: took a random guess. Casual often suggests an absence of due concern: a casual observation. Haphazard implies a carelessness or a willful leaving to chance: a haphazard plan of action. Desultory suggests a shifting about from one thing to another that reflects a lack of method: a desultory conversation. See Also Synonyms at happen, opportunity. The American Heritage® Dictionary of the English Language, Fourth Edition copyright ©2000 by Houghton Mifflin Company. Updated in 2009. Published by Houghton Mifflin Company. All rights reserved. >> I would suggest that events of high contingency of outcomes under closely similar initial circumstances, reflective of stochastic distributions and the resulting range of expectations, or similar models of contingencies leading to simulated or real-world Monte Carlo-like patterns of outcomes, are generally ascribed to chance. Such as, the total achieved by tossing a pair of ordinary dice . . . a classic case in point. Zener or sky noise is another. In the former case, clashing uncorrelated cause-effect chains, amplified by sensitive dependence on initial conditions, drive a butterfly-effect-influenced outcome predictable only up to a distribution. In the latter case Q-mech effects and patterns point to a random influence. As the infographic in the OP shows, such would be maximally unlikely to hit on FSCO/I islands, on the gamut of the solar system or observed cosmos. Mechanical necessity, by contrast, is low contingency, e.g. F = ma and the case F = mg. Intelligently directed contingency is widely seen to be responsible for designs. Now, I really gotta get a move on. KF kairosfocus
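To put the "predictable only up to a distribution" point in concrete terms, here is a small Monte Carlo sketch of the two-dice example (illustrative only): no single throw is predictable, yet the distribution of totals is.

```python
import random
from collections import Counter

# Monte Carlo estimate of the distribution of totals for a pair of fair dice:
# individual throws are 'chance', but the distribution they settle into is stable.
N = 100_000
throws = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(N))
for total in range(2, 13):
    print(total, round(throws[total] / N, 3))   # peaks near 7, tapers toward 2 and 12
```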
I wonder if nightlight will accept the logic of those cogent rebuttals. Who can doubt that he will find a way of, at least privately, rejecting them, on whatever fanciful grounds he can imagine? Axel
The capricious part-time deity of Discovery Institute’s ID is an anti-scientific dead end.
This is one of the more nonsensical statements made here. There is nothing in the science literature or history that ID objects to if it is supported by evidence of naturalistic processes. Name one. jerry
Nightlight:
[ID is] a silly, childishly naive, incoherent, tautologically anti-scientific position that by definition has no chance of ever becoming part of natural science.
Show me one example – just one; that’s all I need – of chance/law forces creating 500 bits of complex specified information. [Question begging not allowed.] If you do, I will delete all of the pro-ID posts on this website and turn it into a forum for the promotion of materialism. [I won’t be holding my breath.] I understand your faith requires you to taunt us like this; yours is a demanding religion after all; but until you can do that, your taunts seem premature at best and just plain stupid at worst. Barry Arrington
NL: I gotta go now, but I just want to tell you that computational substrates blindly carry out programmed-in or built-in instructions driven by GIGO. And behind every such, is intelligently directed contingency, AKA design. If you doubt me, go ask Uncle Gill Gates about his payroll. I assure you it is not a list of monkeys pounding at keyboards and eep eeping for bananas. And, if you look in the above, you will see that Sir Fred spoke to the programming of the laws of the cosmos. Where, the per aspect design inference explanatory filter is specific to settings where from observable features of an entity, design is warranted on empirically reliable sign. KF kairosfocus
nightlight #3
Anything that can be informally described as ‘intelligent’ behavior can in principle be modeled via computation
For a thing to be modelled, it needs a modeller.
Programs can also write other programs
But you need to show that a non-program can write a program. Silver Asiatic
(I should point out that I am not saying that ID postulates "non-lawful" behavior at any point. Just that, as an explanation for apparent design, nightlight's "nature is lawful" arguments are wishful thinking at best and an irrelevant non sequitur at worst.) drc466
Nightlight @3, You seem to be saying that what we perceive as "design" is simply the natural outcome of "lawful" processes. To put it politely, to assert this is to assume what you claim to prove - you have no evidence that unguided lawful processes can produce what we perceive as design.
Computers operate completely via contingency (lawful behavior), yet they can play chess better than the human world champion. Programs can also write other programs.
Your two attempts to provide examples of how computational process can produce apparent design serve to majestically illustrate your flawed premise. Any person with common sense, let alone a software developer, will tell you that you can't take an empty computer, flip it on, sit back and wait for a chess program or program-writing-program to magically appear from random "computational process". Only the design and implementation of such a program will produce the desired result. And even then a "program-writing-program" can only write programs for which the design is built in - it cannot write, for example, a chess-playing program if it has not been provided the rules for playing chess, or some method of learning them. Your entire premise falls cleanly into the category of "just-so story". Until and unless you can experimentally demonstrate "lawful" processes creating de novo design, you're just another evo-carny barker preaching up your magic elixir, hoping to deceive the gullible with lots of hand-waving and pseudo-scientific babble. Or, in other words - if "Nature points to computational process as the source of initial & boundary conditions", how come we cannot observe these computational processes producing design today? Why can we look at a car and immediately conclude "designed", not "computationally processed by nature"? drc466
#5 The harsh reality is that nature screams design Nature points to computational process as the source of initial & boundary conditions, not to 'chance' (the simple probability distributions). You can call that kind of initial-boundary conditions informally 'design' or 'intelligence' in popular presentations, but computational process suffices for natural science to work with, i.e. to explain and model such phenomena (as it already does to some extent; see the link). The warm and fuzzy, woozy design-talk or consciousness-talk doesn't add anything constructively useful to 'computational process'. The capricious part-time deity of Discovery Institute's ID is an anti-scientific dead end. nightlight
I submit post #3 as evidence of the mental gymnastics materialists will employ to dodge and duck the uncomfortable questions. The harsh reality is that nature screams design and people like nightlight are terrified. They hide kicking and screaming and will employ any tactic possible in an attempt to escape this reality. humbled
The ramifications of accepting design pose a huge challenge in the minds of the materialists and are the root cause of people rejecting common sense and logic as well as their own intuition. If they accept ID they open the door to scary questions like "Who am I", "Why was I created", "Do I have a purpose", "Who or what is my Creator(s)" and "What does s/he, they, it want / expect from me", and of course other scary issues surface as well, like responsibility, accountability, etc. These are the real reasons they resist/reject ID. They themselves on many occasions have admitted design in nature only to dismiss it as only having the "appearance" of design. Instead they disable their critical thinking skills, ignore logic, reason and sound judgement, and create materialist myths and fairy tales in an attempt to hide from the unpleasant likely reality that we are the result of an intelligent mind. humbled
Your flowchart is overly simpleminded. Computers operate completely via contingency (lawful behavior), yet they can play chess better than the human world champion. Programs can also write other programs. Lawfulness and 'design' are thus not mutually exclusive (or even in comparable realms). You simply don't need subjective, anthropomorphic 'design-talk' to scientifically describe or model what informally we call 'intelligent' behavior, when computation suffices. Anything that can be informally described as 'intelligent' behavior can in principle be modeled via computation (whether we can presently write a suitable program for some specific behavior is another issue), i.e. via a perfectly lawful system. Similarly, 'chance' is merely a term of convention for a special kind of initial-boundary conditions (the handful of simple probability distributions) in a lawful system. Computation is another kind of initial-boundary conditions (non-probabilistic) which are not covered by the simple probability distributions that we informally label as 'chance'. You are weaving way too much speculation on top of some informal, vague and anthropomorphic language. Nothing scientific really follows from your flowchart. The real issue of Discovery Institute's ID (DI-ID) is not 'methodological naturalism' but whether nature operates lawfully or not. Natural science assumes it does, i.e. its domain is by definition that of the lawful phenomena (which includes lawful systems with 'chance' and 'computation' describing their initial-boundary conditions). In contrast, DI-ID insists on the unscientific notion of an unlawful, capricious part-time designer, which jumps in every now and then to solve some puzzle of 'irreducible complexity' that the 'laws' cannot solve. It's a silly, childishly naive, incoherent, tautologically anti-scientific position that by definition has no chance of ever becoming part of natural science. The capricious, anti-lawful DI-ID's deity is the fiction created by the religious priesthoods (churchianity) claiming to itself the exclusive connection to that unlawful deity (the lawfulness is much too democratic and egalitarian for the elitist priesthoods). nightlight
"Abductive arguments [--> and broader inductive arguments] are always held tentatively because they cannot be as certain as deductive arguments [--> rooted in known true premises and using correct deductions step by step], but they are a perfectly valid form of argumentation and their conclusions are legitimate as long as the premises remain true, because they are a statement about the current state of our knowledge and the evidence rather than deductive statements about reality." The no-free-lunch theorem and the law of conservation of information are mathematical proof that biological information cannot arise naturally. Jim Smith
Why is it that, in the face of strong appearance of and empirical plausibility of design as cause of FSCO/I, such is so often so stoutly resisted? kairosfocus
