Uncommon Descent Serving The Intelligent Design Community

Michael Egnor Responds to Michael Lemonick at Time Online


In a piece at Time Online, "More Spin from the Anti-Evolutionists," senior writer Michael Lemonick attacks ID, the Discovery Institute, the signatories of the Dissent From Darwin list, and Michael Egnor in particular.

Dr. Michael Egnor (a professor of neurosurgery and pediatrics at State University of New York, Stony Brook, and an award-winning brain surgeon named one of New York’s best doctors by New York Magazine) is quoted: “Darwinism is a trivial idea that has been elevated to the status of the scientific theory that governs modern biology.” You can imagine the ire this comment would provoke from a Time science journalist.

The comments section is very illuminating as Dr. Egnor replies to and challenges Lemonick.

Egnor comments:

Can random heritable variation and natural selection generate a code, a language, with letters (nucleotide bases), words (codons), punctuation (stop codons), and syntax? There is even new evidence that DNA can encode parallel information, readable in different reading frames.

I ask this question as a scientific question, not a theological or philosophical question. The only codes or languages we observe in the natural world, aside from biology, are codes generated by minds. In 150 years, Darwinists have failed to provide even rudimentary evidence that significant new information, such as a code or language, can emerge without intelligent agency.

I am asking a simple question: show me the evidence (journal, date, page) that new information, measured in bits or any appropriate units, can emerge from random variation and natural selection, without intelligent agency.

Egnor repeats this request for evidence several times in his comments. Incredibly, Lemonick not only fails to provide an answer, he retorts: “[One possibility is that] your question isn’t a legitimate one in the first place, and thus doesn’t even interest actual scientists.”

Lemonick goes on to comment: “Invoking a mysterious ‘intelligent designer’ is tantamount to saying ‘it’s magic.'”

Egnor replies:

Your assertion that ID is “magic,” however, is ironic. You are asserting that life, in its astonishing complexity, arose spontaneously from the mud, by chance. Even the UFO nuts would balk at that.

It gets worse. Your assertion that the question, “How much biological information can natural selection actually generate?” might not be of interest to Darwinists staggers me. The question is the heart of Darwinism’s central claim: the claim that, to paraphrase Richard Dawkins, “biology is the study of complex things that appear to be designed, but aren’t.” It’s the hinge on which the argument about Darwinism turns. And you tell me that the reason that Darwinists have no answer is that they don’t care about the question (!).

More comments from Egnor:

There are two reasons that people you trust might not find arguments like mine very persuasive:

They’re right about the science, and they understand that I’m wrong.
or
They’re wrong about the science, and they’re evading questions that would reveal that they’re wrong.

My “argument” is just a question: How much new information can Darwinian mechanisms generate? It’s a quantitative question, and it needs more than an ad hominem answer. If I ask a physicist, “How much energy can fission of uranium generate?” he can tell me the answer, without much difficulty, in ergs per mass of uranium per unit time. He can provide references in scientific journals (journal, issue, page) detailing the experiments that generated the number. Valid scientific theories are transparent, in this sense.

So if “people you trust” are right about the science, they should have no difficulty answering my question, with checkable references and reproducible experiments, which would get to the heart of Darwinists’ claims: that the appearance of design in living things is illusory.

[…]

One of the things that has flipped me to the ID side, besides the science, is the incivility of the Darwinists. Their collective behavior is a scandal to science. Look at what happened to Richard Sternberg at the Smithsonian, or at the sneering denunciations of ID folks who ask fairly obvious questions that Darwinists can’t answer.

The most distressing thing about Darwinists’ behavior has been their almost unanimous support for censorship of criticism of Darwinism in public schools. It’s sobering to reflect on this: this very discussion we’re having now, were it to be presented to school children in a Dover, Pennsylvania public school, would violate a federal court order and thus be a federal crime.

There’s lots more interesting stuff in the comments section referenced above. I encourage you to check it out. I was pleasantly surprised at the number of commenters who stood up for ID and challenged Darwinian theory along with Dr. Egnor.

[HT: Evolution News & Views]

Comments
Hi All: You have made me interested enough to unlurk. First, on definition: precise descriptive statements that give necessary and sufficient conditions for an entity are quite hard to make; same, for definition by genus/difference [cf taxonomy in biology]. But, as the above shows, recognition of a pattern by pointing out examples and recognition of "family resemblance" is much easier. Indeed, one can argue that precising definitions are logically subsequent to that intuitive recognition/identification by example and/or counter example. (We usually argue over definitions by saying whether or no they include all and only instances of the recognised entity, and exclude all and only non-instances.) For instance, kindly supply a generally accepted precise definition of LIFE that meets this criterion. (Of course, that is to show that the subject matter of the overarching discipline for biological ID is itself subject to the same issues of definition, so we should not be selectively hyperskeptical.) Be that as it may, we should distinguish the ability to identify/distinguish intuitively, from the specifications [!] by formal definitional wording, hopefully within Sewell's 1,000 word limit. The classic distinction between: [1] "fffffffff . . ." [here, assumed non-contingent, and obviously not complex], [2] "nfgrduywornfgfkdyre . . ." [assumed contingent and complex but random] & [3] "this is a functionally specified, complex statement" should not be forgotten. (Cf discussion in TBO's TMLO, 1984, Ch 8, etc.) Similarly, 500 coins neatly lined up, all heads or all tails or alternating h,t, etc are specified and complex, and function in the context of recognisable patterns. [I emphasise "functional" as well as specified, as I have found that this helps us eliminate a major set of issues: first, let the alleged information actually function in a communicative context (i.e. 
fit in with signal sources, encoders, transmitters, channels, receivers, decoders and sinks, physical and/or abstract), then discuss its specification and complexity. A rock slide or erosional feature is indeed complex, but is non-functional in communicative situations, absent someone's analysis that derives, say, a bit pattern from observations of it. (Such patterns start with, say, retinal patches of light/dark and colour, and/or real-time frequency patterns in our cochlear sensing hairs.) On the other side of the issue, going to an example the late great Sir Fred Hoyle used to discuss, it is logically and physically possible that a tornado passing through an aero industry junkyard could assemble a fully functioning 747, but that is so overwhelmingly improbable that it exhausts the probabilistic resources of the observed cosmos, say, 10^80 atoms and 13.7 BY. Oddly, Mr Dawkins cites the same example and notes that such functional outcomes are sparse indeed in the available configuration space for such a random shuffling, but then insists that the appearance of complex design can be deceiving, due to that bare possibility. The problem is that it is a routine principle of statistical mechanics that we look at the issue of microstates [here shufflings of aero-parts, i.e. we are looking at giant “molecules”] compatible with a given macrostate [here a functional aircraft] and infer relative likelihood from the proportions of the so-called statistical weights of the relevant macrostates. This is in fact the basis for pointing out why, though it is logically and physically possible for the molecules of oxygen in a room to all rush to one end without intelligent intervention etc., it is so maximally improbable that fluctuations on that scale are simply not observed. Similarly, TBO's analysis in Ch 8 of TMLO turns on this same basic principle, captured in the Boltzmann expression s = k ln w, w being the statistical weight of the macrostate.
Apply the concept of Brillouin on the link between entropy and information [there is still a school in physics that speaks of such, following Jaynes; cf. Harry Robertson's Statistical Thermophysics], and use the resulting measure of information in a biofunctional molecule, and the relevant Gibbs free energy, to deduce equilibrium concentration in a generous pre-biotic soup, and we see that it is vanishingly small. [10^-338 molar for a 101-monomer protein.] More modern arguments, such as Trevors and Abel's, use probabilistic and related thinking and arrive at the same basic result. No wonder Honest Shapiro has recently re-stirred the OOL pot! (I think Meyer has a serious point on the similar challenge to get to step-changes in complex biofunctional information through "lucky noise" in life forms required by NDT to drive, say, the Cambrian life revolution -- i.e., the challenge of body-plan level macroevolution, as his now famous paper argued.) That is, the functionally specified outcomes are so maximally improbable that they exhaust the available probabilistic resources, relative to an assumption of chance [and necessity] only. If we see a room in which all the oxygen molecules are at one end, we infer intelligent agency. If we see a jumbo jet, we do the same. If we see an intelligible post in a blog thread, we do the same. Why then do many – absent worldview-level question-begging [often labelled here, methodological naturalism] – infer from the even more complex functionally specified information in the nanotechnology of life at the cellular level that it is explicable in terms of chance plus necessity, so we can rule out agency, even ahead of time? Is this not grossly inconsistent? Then, having thought a bit about that underlying context, let us look at ongoing mathematical attempts to define what we observe in nature and recognise intuitively, e.g. as Mr Dembski has done, as models, not the reality that the models seek to capture.
That way, we can be objective about the success/failure of the models [I view Mr Dembski's work as work in progress, with great promise and interesting potential applications], without losing sight of the underlying reality. (NB: I find that evolutionary materialism advocates are often guilty of using that confusion to dismiss the underlying intuitive point, and then gleefully pounce on debates over the matter to assert that the concept is "hopelessly confused" and can be brushed aside. But, in modern educational psychology, I long ago learned from the pioneer cognitivist, Richard Skemp, that a CONCEPT and its verbal expression are quite distinct. Mathematical descriptions are of course an extension of such verbal descriptions.) So, let us keep this issue in due proportion. Sorry on length. Cheers, GEM of TKI

kairosfocus
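The gas-in-a-room argument above can be made concrete with a toy calculation (an illustrative sketch, not part of the original comment): the probability that N molecules, each independently equally likely to sit in either half of a room, all end up in the same half is 2 × (1/2)^N.

```python
from math import log10

def log10_prob_all_one_half(n_molecules: int) -> float:
    """log10 of the probability that n independent molecules, each equally
    likely to be in either half of a room, all sit in the same half.

    P = 2 * (1/2)**n  (either half counts; each molecule contributes 1/2)
    """
    return (1 - n_molecules) * log10(2)

for n in (10, 100, 1000):
    print(n, log10_prob_all_one_half(n))
```

Even at N = 1000 the exponent is below -300; a real room holds on the order of 10^27 molecules, which dwarfs the ~10^80-atom probabilistic resources mentioned above, so such fluctuations are simply never observed.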
February 22, 2007 at 01:43 AM PDT
Great_ape, I like your description of it. The rarity of the second target was addressed by Sewell. (This is something I had trouble with initially as well.) He says, again:
There are so many simply describable results possible that it is tempting to think that all or most outcomes could be simply described in some way, but in fact, there are only about 2^30000 different 1000-word paragraphs, so the odds are about 2^999970000 to 1 that a given result will not be that highly ordered—so our surprise would be quite justified.
When we limit the number of bits (words) in our description, we get a limited number of possible descriptions. We can then compare how many of the event patterns match one of those descriptions. It is this ratio that defines the "rarity" of the specification (the second target). Now, sure, we can begin to nitpick at this informal definition I just gave, because it is informal. This is why Dembski formalizes and quantifies the idea. Tribune7, the entire scenario may (or may not) have CSI, since it seems to match a pattern (prophet performs miracle in judgment), but the individual pieces, like the thunderstorm in isolation, do not necessarily.

Atom
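Sewell's quoted figures can be checked in a couple of lines (a sketch: the size of the raw outcome space is an assumption inferred from the quoted odds, and the ~2^30000 paragraph count is his):

```python
# Rarity ratio behind Sewell's quoted figures. The 10^9-bit outcome space
# is an ASSUMPTION inferred from the quoted odds; the ~2**30_000 count of
# distinct 1000-word paragraphs is taken from the quote above.
log2_outcomes = 1_000_000_000   # assumed: 2**(10**9) possible raw results
log2_descriptions = 30_000      # ~2**30000 distinct 1000-word paragraphs

# log2 of the odds against a random result matching any such description
log2_odds_against = log2_outcomes - log2_descriptions
print(log2_odds_against)  # 999970000, matching the quoted 2^999970000-to-1 odds
```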
February 22, 2007 at 12:07 AM PDT
Specified complexity is when you coat marbles with contact cement, blindfold yourself, throw 10,000 of them a handful at a time against a wall from a distance of thirty feet, and they stick together into a statue of Elvis Presley. There's exactly the same probability of that particular arrangement of marbles as any other arrangement. Does this require subjective knowledge on the part of the observer? You bet. So does the Copenhagen interpretation of quantum mechanics. If needing an observer for probability waves to collapse into certain events is okay for the most widely accepted fundamental theory of physics, I don't see why needing an observer to find specification should be a problem for ID.

DaveScot
February 21, 2007 at 10:09 PM PDT
"So what characteristic is it about something that would make it so it had to be a specifiee?" ==Jerry

I think they are suggesting that the fact that DNA or similar systems can _be described_ in a relatively compact fashion, using the vocabulary/grammar of the English language and/or human engineering concepts, is an indicator of "specification". Of all possible random patterns generated from a theoretical random pattern generator, very few should have this characteristic of compressibility or "compact describability". We could argue about just how random the "theoretical random generator" is for organic life in this scenario, but that's a whole other discussion. Many things are subject to simplistic description. Dembski uses the single letter 'A' as an example. It is specified. What supposedly sets CSI systems apart is their complexity. He suggests the complete works of Shakespeare are both specified and complex. I'm still confused, of course. I think I have sort of a handle on their usage of "specification" and why it is statistically rare among all conceivable patterns. And I think I have a rough handle on complexity/information in the Shannon sense. And I agree in principle that some patterns possess both of these attributes. I remain confused as to why the possession of high Shannon information by any given specified pattern would be considered rare. Sal's example, "500 coins, all heads", suggests that *any* specified pattern given for something within a largish space (whatever the technical term is for 500 coin flips) would yield a high Shannon information content, however simple the pattern was in K-C complexity. So while I see the rarity of the "specifiable" part, I don't see the rarity of the complex part (Atom's second target, above). As such, I so far can't see how the two concepts (specification and complexity) can be joined to indicate some extra-extra-unlikely event. (I suppose one could argue the observance of such a large flip space (500 coin flips) is itself indicative of complexity, but that can't be, because nature is rife with large spaces that could be construed as samples (1 billion raindrops, 10,000 meteor strikes, untold grains of sand) that have high Shannon information.) So again I'm back to "specification" as the true rarity.

great_ape
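great_ape's point about the 500-coin example can be sketched numerically (an illustration, not from the original thread): under a fair-coin model, every fully specified 500-flip sequence carries exactly the same Shannon self-information, however simple its pattern looks.

```python
from math import log2

def self_information_bits(n_flips: int) -> float:
    """Shannon self-information, in bits, of one exact sequence of n fair coin flips."""
    p = 0.5 ** n_flips   # probability of any one specific sequence
    return -log2(p)

# "All heads" and a random-looking sequence score identically:
print(self_information_bits(500))  # 500.0 bits either way
```

The quantity depends only on the probability of the exact outcome, not on its describability, which is why the "complex" half of CSI does not by itself distinguish patterned from random sequences.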
February 21, 2007 at 09:55 PM PDT
Atom, Imagine a sunny day, with a beautiful blue cloudless sky and a rising barometer. Now imagine a fellow in a dirty hairshirt and a long white beard walking through campus to the administration building. He comes to it and shouts YOU HAVE SINNED! He points his staff at it, and out of the clear blue sky a bolt of lightning crashes into it, followed by a loud peal of thunder. All the while not a cloud in sight and the barometer rising. Then huge hailstones fall on it, destroying what was left by the lightning bolt. I would say that would be a thunderstorm with some serious CSI.

tribune7
February 21, 2007 at 09:26 PM PDT
"Only if you can do it in 100 words or less and it makes sense." OK, how about this: "Content requires contingency." Here's another one: "For there to be information, there must be a multiplicity of distinct possibilities, any one of which might happen."

tribune7
February 21, 2007 at 09:15 PM PDT
Hey Jerry, Macroscopically describable means (IMHO) that you can reproduce the event in question from scratch based on the description (the specification). Kinda like the specifications I get at work (I'm a software engineer): two programmers can come up with the same functional design based on a detailed spec. So the specification contains all the "info" necessary, even if it doesn't spell it out bit-for-bit. How does it do this? By relying on my programmer's background knowledge. They know I know what Dijkstra's algorithm is, so they can use that concept in their specification without writing it out explicitly. And the final result is an event (my source code) which is much larger than the specification that defines it. (Again, a simple description of a complex event.) Back to Mt. Rushmore: I can tell you "Carve four presidents into the rock, starting at this altitude, at this scale, at these angles to one another..." and in a few paragraphs fully specify a very complex final product. How do we know our low-bit specification contains all the necessary info (combined with our background knowledge)? Easy: give two separate sculptors the same spec, and they'll come up with the same final product. To reproduce a given thunderstorm, you'd have to specify each lightning arc. Thus, your specification would be equivalent in bits to the event in question. Unless your specification is intentionally vague, so that your specification target is large ("Any old thunderstorm arrangement will do."). In that case your specification target does not sufficiently constrain the possibilities, so there is a large probability that you will hit that "target" by luck, rendering it useless as a specification. Again, it is like hitting two extremely small targets simultaneously. That is a plain-English explanation. For the relevant maths, see Dembski's paper.

Atom
February 21, 2007 at 07:29 PM PDT
great_ape, Thank you for your admission. Now I don't feel like the "Lone Ranger" on the CSI definition. I didn't think that my take on bFast's explanation of CSI was what Dembski and Sal mean by it, but I thought it was insightful. It didn't seem to be what they meant, especially when they keep talking about coin tosses, and coin tosses don't specify anything except who kicks off. As you said, somehow DNA must be specified by something to fit their definition. But what does it mean to be specified without being able to identify what is doing the specifying? To use street language, DNA must be the specifiee as opposed to just a specifier. So what characteristic is it about something that would make it so it had to be a specifiee? I don't know if you can follow what I am talking about, especially since you think there is no specifier for DNA. What is it about something that could rule out chance or law as an explanation for its existence? What makes DNA different from other self-organizing physical phenomena? Does any other self-organizing physical phenomenon specify something else? Anyway, I am still confused.

jerry
February 21, 2007 at 07:23 PM PDT
"The DNA specifies something else outside itself just as an audio signal specifies something else outside itself. Noise doesn't. A thunderstorm doesn't. That I can relate to." ==Jerry

Jerry, While DNA does specify numerous things beyond itself, I don't think this is quite what Dembski/Sal have in mind. What they are saying is that the dna/protein/cell machine *itself* conforms to or resembles certain human engineering structures and, in that sense, it is specified by a pattern (or patterns) already known to exist. This much, at least, I think I understand, but someone please correct me if I'm wrong. I don't want to contribute further to the confusion.

great_ape
February 21, 2007 at 06:05 PM PDT
"Great_ape, you have presented a rather Shannonish definition of information, but your definition of CSI seems to fully ignore the complex and specified bits." ==bFast

And here I actually thought my definition was more based on K-C information... "To understand what information is, we must simply look at its root, to 'inform'. Dictionary.com: 'to give or impart knowledge...' Information, therefore, is a set of details which gives or imparts knowledge. It has a quantifiable characteristic, which is that it is somewhat compressible -- but compressibility is not, in itself, information." ==bFast

That reminds me of an old Jack Handey quote: "Maybe in order to understand mankind, we have to look at the word itself: 'Mankind'. Basically, it's made up of two separate words - 'mank' and 'ind'. What do these words mean? It's a mystery, and that's why so is mankind." I'm going to go way out on a limb here and guess you're not a mathematician or engineer? "Now, CSI is complex information that specifies something. It's that simple." ==bFast Sigh. If only it were so. I don't mean to be harsh, but I have a strong suspicion you don't have any more of a clue about CSI than I do. Then again, not having a clue myself, I'm not really in a good position to judge that. OK guys, I'm just a biologist (i.e. caveman), but I know enough to know that no two folks here have offered the same definition of CSI. I still need to read through again and try to digest some of the more recent posts, particularly Sal's. In my scientific experience, though, I can't recall running across a working *concept* previously that seemed so difficult to verbalize.

great_ape
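The compressibility contrast being debated here can be illustrated with an off-the-shelf compressor standing in, crudely, for Kolmogorov (K-C) complexity (a sketch, not anyone's formal definition in the thread):

```python
import random
import zlib

# A patterned string ("500 coins, all heads") versus 500 random bytes:
# the pattern compresses to a short description, the random data does not.
# zlib is only a rough stand-in for Kolmogorov complexity.
patterned = b"H" * 500
rng = random.Random(0)  # seeded so the example is reproducible
scrambled = bytes(rng.getrandbits(8) for _ in range(500))

print(len(zlib.compress(patterned)))   # far smaller than 500
print(len(zlib.compress(scrambled)))   # close to, or above, 500
```

Both inputs are 500 bytes and, under a uniform model, equally improbable; only the description length separates them, which is the sense in which "compressibility is not, in itself, information."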
February 21, 2007 at 05:46 PM PDT
PaV, What does "right-ordering" mean? Is a specific outcropping of a rock formation highly improbable? I think so, and it is also complex. So I guess right-ordering is what makes the difference, but I do not know what you mean by the term. Atom, What does "macroscopically describable" mean? How does that work for Mt. Rushmore? What is the implication of 1000 words? Could you describe a thunderstorm in 1000 words or less? Could you describe the entire DNA molecule in 1000 words or less? Thank you for your help.

jerry
February 21, 2007 at 05:36 PM PDT
Again, Sewell's definition: something is specified if it is macroscopically describable in 1000 words or less. That doesn't work for you, Jerry?

Atom
February 21, 2007 at 04:39 PM PDT
Jerry: Here's a definition of "specification" in plain English: "The right-ordering of a complex, highly improbable object."

PaV
February 21, 2007 at 03:44 PM PDT
tribune7, Only if you can do it in 100 words or less and it makes sense. I've seen the Darwinists challenge the CSI concept in several places, and I saw no intelligent rebuttal from anyone, which I thought was strange. But now I am starting to understand why: when no one can explain it easily, it is hard to defend. I understand all the examples and intuitively see the differences between what is called specified and what is not, but find it curious that no one has been able to put an easily explainable layman's definition to the examples. So far I like bFast's take the best.

jerry
February 21, 2007 at 03:18 PM PDT
"The specific addressing of my comment was a paper that had close to 8000 words, uses variations of the word information 177 times, specific 68 times, possibilities 67 times, actualized 46 times, etc." The objection you raised, along with a specific (great word, hee hee) definition of information, was one of the fundamental points of the paper. Would you like me to cut and paste?

tribune7
February 21, 2007 at 02:42 PM PDT
Barrett1: "Scheesman invoked the word 'abstract' in his history lesson post to describe what we know about the genetic code. And in the Time Magazine debate linked above, an important pro-ID person invoked the word 'information.' I'm still quite confused about what it is about genetics that invokes these descriptive words. Do you see the genetics or the genetic code as 'abstract' and 'information'? And why? Thank you in advance."

I located the original comment using the useful "Search" feature up top: https://uncommondescent.com/evolution/the-emerald-cockroach-wasp/ [27] "...Today things are stood on their head. We have complex proteins, instinctive behaviours that must somehow be encoded into DNA, which is itself a code, an abstraction, ..." I used the term "abstraction" in the simple sense that something concrete is expressed as something intangible, without a direct link to the original. A chance-worshipper is left with the task of explaining not only the code, but the machinery for transcribing it into what it encodes for. This is made doubly difficult when the instructions for the machinery itself are inside the same code!

SCheesman
February 21, 2007 at 10:18 AM PDT
tribune7, "your objection is addressed specifically (no pun intended) in Dembski's paper." The specific addressing of my comment was a paper that had close to 8000 words, uses variations of the word information 177 times, specific 68 times, possibilities 67 times, actualized 46 times, etc. I love "actualized." Does that mean something happened? A term (CSI) that no one seems to be able to define clearly was used 57 times. Let me say that I will pay a whole lot of money for certainty, which supposedly has no information value because of all the information that I will possess. For example: who is going to win the 4th race at Gulfstream today? That is certainty that is worth a lot of money. A couple of pieces of certainty like that and we could bankroll the whole ID movement with so-called zero information. I love the way the term "specific" gets used. Let me count the ways? Now, I do not say that I will not be more knowledgeable and possess more information once I digest Dembski's paper, but to say my comment was addressed specifically... This whole process sounds like the "overwhelming evidence" that supports Darwinism. Somehow you ask for simple answers and you get treatises that are hard to evaluate.

jerry
February 21, 2007 at 09:09 AM PDT
"If something is certain, that is a very important piece of information that enables me to eliminate thousands of alternatives. So to say that it contains no information is not true in the normal meaning of the word." Jerry, your objection is addressed specifically (no pun intended) in Dembski's paper.

tribune7
February 21, 2007 at 07:47 AM PDT
Gpuccio, Thanks for your expansion on the subject. I too enjoyed the paper regarding Chaitin, et al., from Progetta? You hit on a point I was going to make as well, and I think it deserves repeating in this forum: Dembski is ahead of the curve. It's not all there yet, but he never said it was. He is leading a new path, however, that is causing entire fields to stand up and take notice; indeed, world-leading scientists have recognized that the gauntlet has been thrown down and are openly acknowledging this fact with the Origins of Life Prize. No Free Lunch and The Design Inference are a much-needed breakthrough of ideas, built upon others from the past and brought into a modern mathematical synthesis. I do think fundamentals are coming into view regarding information, and that Dembski leads in this charge. How close are we to fundamental laws? I do not know. But leading scientists now recognize the importance of ID to this debate. Despite all the naysayers to the contrary, we are in wonderful new territory of science, and I like it.

Michaels7
February 21, 2007 at 07:37 AM PDT
Salvador, Thanks for the quick note and tie-in to Trevors and Abel. Am I right to say CSI = FSC? Also, you state:
"That is actually a fairly bold statement!!! ID proponents claim biology is optimized to correspond to human engineered artifacts like computers and robots and manufacturing control systems, sensing, navigation, error correction, etc.!!"
This is true - very bold! But not just ID proponents. People like Bill Gates recognize the genetic code as optimized technology for computational algorithms and storage mechanisms! So do IBM and other major hardware/software players. This is what evolutionary biologists have failed to recognize. The paradigm has gone from random to designed inquiry of life's Turing machine.

Michaels7
February 21, 2007 at 07:17 AM PDT
tribune7, The placement of every molecule and its movement in a thunderstorm requires a fantastic amount of information to "specify". I do not know how big the file would be, but it would be immense. No one would probably ever want to do this, but maybe someone would want summary information on all the parts of a thunderstorm to try to understand it, and even this summary could contain an immense amount of information. Similarly, a rock outcropping would require a large amount of information to specify exactly. I am not sure if Mt. Rushmore would actually require more information or less. My guess is less. So nature produces some very complex things that require a lot of information to specify. They are obviously the result of random physical forces. If something is certain, that is a very important piece of information that enables me to eliminate thousands of alternatives. So to say that it contains no information is not true in the normal meaning of the word. After reading all that has been presented here, I am not sure we are close to a definition of CSI that the average person can comprehend. Certainly after reading the comment from gpuccio, who is a very bright and knowledgeable person, I am sure of it. We can point to many examples, but not to a definition that would lead someone to know the difference. As for your last comment, "Now, what are the conditions for life to occur?" - that would be a meaningless question for the Darwinists, because all they would say is that it will be solved some day, and just because we can specify the causes of a thunderstorm does not mean we will not someday be able to specify the causes of DNA. I would also bet we are far from fully understanding thunderstorms.

jerry
February 21, 2007 at 07:14 AM PDT
scordova: thanks for all your very competent and detailed explanations about CSI. Although I certainly lack the technical knowledge to discuss these subjects in detail, I would like to express my limited understanding of their general meaning, and if possible to deepen it with your or everybody else's help. First of all, I must say that I have always found Dembski's work about CSI really exciting and stimulating, even for a non-mathematician like me. I cannot always understand all the details, but my feeling is that Dembski has addressed a problem of fundamental importance, even beyond the framework of the ID debate. Maybe Dembski's theories don't explain everything, maybe they are sometimes incomplete or evolving, but the objection that many critics express - that his views are rather isolated in the field of information theory - is in my opinion just a demonstration of his greatness, in the sense that he is practically the only one (as far as I know) who has systematically addressed a fundamental problem which others have chosen to ignore, probably because at present our theoretical tools are insufficient to completely understand or even define it. What is that problem? Dembski calls it CSI, but in my personal, non-technical language, I would call it the problem of meaning. Obviously, we have a lot of cultural discussions about meaning in philosophy, in semantics, and so on, but who has tried to define meaning in a scientific, mathematical way? I see the question in these terms. We have a lot of work about information, but information, I believe, is always defined (I may be wrong) in terms of complexity (number of bits) or, more specifically, of non-compressibility (minimum number of bits to transmit the information). OK, that's all very interesting, and I am fascinated by the implications of it all (I was really fascinated, for instance, by the very good paper by Chaitin linked, I think, from this blog). But still, something important is lacking.
And that is the difference between meaningful and not meaningful information. Which, in other words, is the field of the CSI theory. Now, I understand that in Dembski's thought specification is the whole magic to distinguish between meanimg and not meaning. I also understand that rigorously defining specification is the most difficult part, and I think that even Dembski's approach has been evolving about that, although always with great consinstency. My impression is that meaning, or specification, is a rather deep subject, which at present is beyond our full comprehension. That does not mean that we cannot try to understand it better. Above all, that does not mean that we cannot use it in our theoretical frames. I think that meaning is a concept of the same class as causation and probability. Neither of these two concepts, as far as I know, can be completely and universally defined and understood (see all the epistemologic and philopophical debates about them), but that has never prevented us from building a whole scientific paradigm on them. My feeling is that meaning is strictly linked to the concept of consciuosness, and that's why it is so elusive. A very general way to describe meaning is that it is any information which, in the appropriate circumstances, can be recognized as "different" from general, random information by a conscious observer. It is interesting that even the conscious observer usually (but not always) needs a previouly known specification pattern to recognize meaning. That's the case for language. In the very good example of the 100 letters english phrase, knowledge of the english language is the mechanical algorithm necessary to recognize if, in a single sequence of letters, there is any meaning encoded by that language, and to understand that meaning. But the mechanical algorhytm is not the same thing as the recognition of meaning. A conscious observer is always necessary to recognize meaning. 
So, we can say that a mechanical algorithm never "recognizes" a meaning, but can, if previously programmed by a conscious observer, "identify" or "select" it for a conscious observer (or for some further process). Searle's "chinese room" argument remains a very good examplification of this difference. Penrose's interpretation of the non algorithmic nature of (some) conscious processes on the basis of the godel theorem is another one. But in the end, what makes some kind of information "recognizable" to a conscious observer? That's the real problem, and it is very similar to the problem of specification. One answer could be: pre-specification. If a conscious observer knows a pattern in advance, he can identify it when it occurs, even if it is completely random. But that answer is not very good, because a programmed algorithm can do the same, and in this case the conscious observer only acts algorithmically, "comparing" the input data with a reference (the pre-specified random information), let's say from a notebook. This is exactly the same situation as the chinese room setting, where a conscious observer "answers" to chinese sentences algorithmically, without unserstanding their meaning. But conscious recognizion is another thing. In language recognizion, it is not only a problem of knowing the code (the language rules), but mainly to understand that what is written in a sentence by that code has a meaning, in other words that it is a translation of a conscious content generated by another conscious mind. The "translation" could have been done by another code (another language), but the meaning remains the same. In a sense, I think conpressibility has something to do with meaning, because conscious beings regularly utilize compressible information to express conscious patterns. Real randomness is really difficult to conceive, remember and identify consciously. Specific, compressible patterns are often easily recognized. 
Scientific laws are, in a sense, a very specific way to "compress" the description of regular phenomena in nature, if I understand well Chaitin. In that sense, no algorithm can recognize a law, although any algorithm can identify it if it has been intelligently programmed to do so. Another parameter which can help to recognize meaning is the observation of function. That's, in a nutshell, the classic watchmaker argument. If we find a machine we think it was designed by intelligent agents (unless we are part of that strange folk, darwinists...), but the recognizion that a machine is a machine does not depend only from an observation of a complex, even specially ordered structure, but usually is made obvious by the obervation of a specific function performed by the machine. And here we have another problem: how can we recognize function? I don't know, but I know that this is another concept that conscious beings can easily understand and apply, even without defining it (Ah, the wonders of conscious, non algorithmic processes!). The interesting aspect of function as a form of meaning is that it can well be used as a specification "a posteriori", and so it is very useful in the ID framework. (Obviously, if Dembski or someone else can give a mathematical definition of it, that would help). And who can deny that "functional" CSI is extremely (and I mean extremely!) abundant in the living world? (I know, I know... darwinists).gpuccio
February 21, 2007, 02:04 AM PDT
"Can you explain in plain English why DNA is CSI and a thunder storm is not?"

DNA (and the accompanying machinery that processes it) conforms to a linguistic processing system, and language processors scream design. The specification is that DNA systems conform to the pattern of a linguistic system. Thunderstorms are complex, but they do not conform or correspond to a highly specific pattern within the repertoire of engineering.

scordova
February 20, 2007, 10:56 PM PDT
"Can you explain in plain English why DNA is CSI and a thunder storm is not? Both are complex. Both contain large amounts of information."

Why would a thunderstorm contain a large amount of information? Per Dembski's paper, if something is certain, there is no information. As far as I can tell, given the right humidity and the right temperature, a thunderstorm is inevitable. Now, what are the conditions for life to occur?

tribune7
February 20, 2007, 10:49 PM PDT
Let me give an illustration of calculating an approximate level of complexity for CSI in an English sentence of 100 characters. From the Deniable Darwin:

Linguists in the 1950's, most notably Noam Chomsky and George Miller, asked dramatically how many grammatical English sentences could be constructed with 100 letters. Approximately 10 to the 25th power, they answered. This is a very large number. But a sentence is one thing; a sequence, another. A sentence obeys the laws of English grammar; a sequence is lawless and comprises any concatenation of those 100 letters. If there are roughly 10^25 sentences at hand, the number of sequences 100 letters in length is, by way of contrast, 26 to the 100th power. This is an inconceivably greater number. The space of possibilities has blown up, the explosive process being one of combinatorial inflation.

Each sentence in isolation has a Shannon information of about log2(26^100) ≈ 470 bits. CSI with respect to the English grammar specification is log2(26^100 / 10^25) ≈ 387 bits.

We all have the experience of reading an English sentence we've never read before, but still recognizing a grammatical pattern and knowing it is not gibberish. This gives an idea of how to measure CSI for a given alphabet and grammar, and of how to estimate the improbability of even sentences we have never seen before. Extending the idea to biology, we have an alphabet of molecules and then acceptable grammatical constructs. But English is a human language; how, then, can we find grammar in a non-human language, especially one we did not invent? Suffice to say, it is actually doable if one can build a correspondence to known and used systems in human engineering. That is actually a fairly bold statement! ID proponents claim biology is optimized to correspond to human-engineered artifacts like computers and robots and manufacturing control systems, sensing, navigation, error correction, etc.

Consider the fact that before hieroglyphics and cuneiform were deciphered, we knew these inscriptions still had a syntax and grammar, in other words, design! Even before we knew the meaning of such designs, we knew they were designed. Without going into the messy details, there is a parallel situation with living systems in characterizing grammatical systems in languages we may not yet have completely deciphered. But a similar CSI calculation can be carried out even for such cases, if the Intelligent Designer was willing for his works to be discovered one day!

One linguistic construct (loosely speaking) is the notion of a computer or Turing machine. One can take a computer and write software which will function as a Turing machine. That's right, a computer is a Turing machine, and one can further write a piece of software which will itself be another Turing machine. One can take this Turing machine and write yet more software that will create yet another Turing machine, and so on. One can take the laws and materials that are in our universe and build Turing machines (like a Dell computer). Think of these machines (Dell computers, Apple computers, SGI computers, etc.) as analogous to a grammatically correct sentence: all of them, though constructed of different materials, still obey a basic form. One can see that, in principle, a calculation similar to the one I did above for English sentences might be carried out to estimate the CSI of the computers and Turing machines found in life.

What Trevors and Abel have shown is that the construct of such a Turing machine, especially a self-replicating Turing machine, is rare. I personally have been curious to get an idea of how many bits are involved, but suffice to say, even modest estimates by others are astronomical. More on Trevors and Abel's peer-reviewed paper is here: Perfect architectures which scream design

scordova
February 20, 2007, 10:38 PM PDT
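Salvador's sentence calculation above can be checked in a few lines of Python. This is just arithmetic on the figures quoted in the comment (the 10^25 sentence count is the Chomsky/Miller estimate, not something computed here):

```python
import math

# Space of all 100-letter sequences over a 26-letter alphabet,
# versus the Chomsky/Miller estimate of ~10^25 grammatical sentences.
shannon_bits = 100 * math.log2(26)            # log2(26^100)
csi_bits = shannon_bits - 25 * math.log2(10)  # log2(26^100 / 10^25)

print(round(shannon_bits))  # 470
print(round(csi_bits))      # 387
```

Working in logarithms avoids ever forming the astronomically large numbers 26^100 and 10^25 directly.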
tribune7, I am sorry, but I do not understand one thing you said. Maybe if you expressed it in a practical example. The archer example is not something I can relate to, because it has no reality: no one has ever done this. I don't believe examples that have never happened, or never will happen, are very useful. If an idea cannot be expressed in situations that are likely to happen, then maybe the idea is not clearly understood.

Let's take an example that is relevant to evolution. Can you explain in plain English why DNA is CSI and a thunderstorm is not? Both are complex. Both contain large amounts of information. Both are organized systems. One, the thunderstorm, is the result of basic physical forces operating randomly, so that basic molecules eventually form an organized pattern that causes such things as lightning, loud noises, rain, hail, and high winds. There are zillions of possible combinations of molecules, but only a small subset are thunderstorms. Why isn't DNA similar? I have no idea how the phrase "stated explicitly" is relevant here.

I use this example because someone used it a year ago to question the usefulness of CSI for implying there was intelligence behind DNA, and no one could answer the person. So I am curious. I intuitively know that DNA is in some way above a thunderstorm, and I like bFast's explanation of specificity: the DNA specifies something else outside itself, just as an audio signal specifies something else outside itself. Noise doesn't. A thunderstorm doesn't. That I can relate to. I am not sure what Sal's 500 heads specifies, though I know it is an extremely rare event (smaller than the limits used to accept chance) and as such is not likely to happen unless it was rigged. I have no trouble understanding that, but what about it is CSI? I like bFast's explanation because it leads me to differentiate between DNA and a thunderstorm, but is that what Meyer and Dembski mean by CSI?

jerry
February 20, 2007, 10:07 PM PDT
Salvador, in your upcoming CSI post, would it be fruitful to cross-pollinate definitions of complexity with Trevors and Abel's Three Subsets of Biopolymeric Information? I keep trying to get people to read their paper because it simplifies the issues of complex information, and it uses a common construct of falsification via null-hypothesis challenge. CSI = FSC would be the cross-platform introduction for the point of self-organization. The other two positions, random and ordered, are trivial. What they show is that rules and law alone do not equal CSI. The boundary is set between the lower two, and the upper one is intelligent exchange across boundaries. SETI is a perfect example of searching for CSI/FSC, whether they want to admit it or not. What is amazing is that this exists in our body, in all life forms. What SETI searches for outside of earth's blue body is inside our own.

Another reason I'd tie the Dembski papers to Trevors and Abel is their recognition of Dembski and Behe citations. I love Dembski's detailed work; he is far in advance of most people's understanding in his treatment of information theory related to life. Trevors and Abel's simple constructs serve to open the door to his detailed analysis. They in turn, along with other leading world scientists, now recognize that the quandary for evolution by material processes is exposed to its severest test by Dembski's work. Thus the Origin of Life Prize recognizing him in opposition. He detailed CSI before them; what they have done is build upon his work. They have helped me to appreciate Dembski's insights.

Michaels7
February 20, 2007, 09:23 PM PDT
"Does this mean that one of the possible options happened?"

You have to understand that something happened out of several possibilities.

"Does this mean that we have to know which possible options did not happen?"

See above.

"'What's more, the competing possibilities that were excluded must be live possibilities' . . . Does this mean the possible options"

It means they had to be legitimate, and numerous. It would have been less confusing if you didn't break the sentence.

"Why is something highly improbable from what has preceded this . . ."

You're taking it a little out of context; it's explained in the article. If the event is the necessary result of nature, then the outcome is basically certain (probability = 1), so whether it's specified or not it's going to happen, and hence you cannot determine design. There is no information in certainty, as the article notes. Hence, if there are the many realistic possibilities required for information, and one of them is specified, and that one happens, then you can be sure an intelligence is at work. Specified means stated explicitly.

tribune7
February 20, 2007, 08:15 PM PDT
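Tribune7's point that "there is no information in certainty" is standard Shannon self-information, -log2(p), which goes to zero as an event becomes certain. A minimal sketch:

```python
import math

def self_information(p: float) -> float:
    """Shannon self-information of an event with probability p, in bits."""
    return -math.log2(p)

# A certain event carries zero bits; rarer events carry more.
assert self_information(1.0) == 0.0            # certainty: no information
assert self_information(0.5) == 1.0            # one fair coin flip
assert self_information(2.0 ** -500) == 500.0  # one specific 500-flip sequence
```

The 500-bit figure for a specific sequence of 500 coin flips is the same quantity discussed in Sal's earlier "500 heads" example.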
I think we need to break CSI down into its three components to understand it. Great_ape, you have presented a rather Shannonish definition of information, but your definition of CSI seems to fully ignore the "complex" and "specified" bits.

Firstly, we need to understand Shannon a bit. It is my understanding that he worked for Bell Labs, AT&T's research arm. The purpose of the Shannon paper was to determine a way of distinguishing between the "information" part and the "noise" part of an electronic signal modulated by a voice, or by a stream of computer-generated data. As such, the algorithm is quite effective. However, Shannon was never attempting to define information; he was attempting to detect information by establishing a detectable characteristic of it. To understand what information is, we must simply look at its root, to "inform". Dictionary.com: "to give or impart knowledge...". Information, therefore, is a set of details which gives or imparts knowledge. It has a quantifiable characteristic, which is that it is somewhat compressible; but compressibility is not, in itself, information.

Secondly, let's consider the "complex" of CSI. Complex simply means not simple. In a Shannon world, it is what is left after the compression has happened. If you take the ASCII code of this diatribe and compress it, say through a zip program, you will get a size. That is the complexity of this discussion. Dembski defines the threshold between "complex" and "not complex" with the UPB. If information can be compressed so small that it could reasonably be derived by random chance, then Dembski says that it is not complex.

Lastly, "specified". This is the root of the word "specification". Dictionary.com: "a detailed description or assessment of requirements, dimensions, materials, etc." CSI specifies something: it describes something external to itself, to the point where knowing the information is sufficient to make the resultant thing. DNA specifies the amino acid sequences in proteins. (It surely specifies a bunch of other stuff too.)

Now, CSI is complex information that specifies something. It's that simple.

bFast
February 20, 2007, 08:06 PM PDT
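bFast's "compress it through a zip program and you get a size" measure can be sketched with Python's zlib (DEFLATE, the same algorithm inside zip). The compressed size is only an upper bound on the true Kolmogorov complexity, but it cleanly separates an ordered string from a random one:

```python
import random
import zlib

def compressed_size(text: str) -> int:
    """Upper-bound complexity estimate: bytes after DEFLATE at max effort."""
    return len(zlib.compress(text.encode("utf-8"), 9))

random.seed(0)
ordered = "H" * 500                                       # "500 coins, all heads"
noisy = "".join(random.choice("HT") for _ in range(500))  # a random 500-flip record

# The highly ordered string compresses to almost nothing;
# the random record retains most of its bulk.
print(compressed_size(ordered), compressed_size(noisy))
```

Exact byte counts depend on the zlib build, so only the ordering of the two sizes is meaningful here.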
Great_ape asks: Does this not suggest that compressibility (i.e. simplistic description) and complexity are at odds with each other? Yet aren't these concepts joined in CSI?

In contrast to the "500 coins all heads", there is also CSI which is likely K-complex as well as complex in the Shannon sense:

1. An mp3 file
2. 500 coins arranged to specifically duplicate a previous random roll of 500 coins
3. A zipped file of Shakespeare's Hamlet

When I say likely K-complex, it is actually prohibitive to know for sure that something is K-complex; we can only make an estimate from a practical standpoint. For example, a compression algorithm tries to make the compressed file as K-complex as possible. There is a remote chance it could be compressed further, but that is hard to actually know. All the above examples are estimates that the items are K-complex, or approximately so.

Sal

scordova
February 20, 2007, 08:01 PM PDT
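Sal's remark that a compressor "tries to make the compressed file as K-complex as possible" can be illustrated by compressing twice: the second pass yields little or no further shrinkage, because the first pass has already squeezed out the detectable pattern. A sketch with zlib (the sample text is arbitrary):

```python
import zlib

# Highly patterned text compresses well...
text = ("to be or not to be that is the question " * 100).encode("utf-8")
once = zlib.compress(text, 9)
# ...but its compressed form is already near-incompressible,
# so a second pass cannot shrink it meaningfully further.
twice = zlib.compress(once, 9)

print(len(text), len(once), len(twice))
```

This is exactly why the compressed size is only an estimate of K-complexity: we can observe that further compression fails in practice, not prove that no shorter description exists.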