Nuances in understanding NFL Theorems — some pathological “counterexamples”

NFL theorems are stated in terms of the average performance of evolutionary algorithms, but ID proponents must be mindful whenever the word AVERAGE is used, because it implies there may be above average performers, and I’m surprised Darwinists have been slow to seize refuge in the possibility of above average outcomes.

To illustrate, the house edge (casino edge) in the game of dice (craps) is a mere 1.41% for the “passline” wager. So on average we expect the casino to win, but not immutably. I asked one pit boss, “what was the longest winning streak by the players?” He said something on the order of 15 wins in a row, and the casino lost over \$140,000 in a few hours as a result. 😯 Just because the casino on average has the edge over dice players, doesn’t mean there will not be above average performers. There will be, it is guaranteed!

All this illustrates that even if the NFL theorems say evolution will not increase CSI on average, they do not preclude pathological examples where CSI increases above the average. In fact, such above-average outliers are guaranteed whenever there is any deviation around the average.
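The casino point can be made concrete with a quick Monte Carlo sketch (the 49.295% pass-line win probability is the standard figure; the 100-bet session length and player count are assumptions for illustration). The average player loses, yet a large fraction of individual players finish their session ahead:

```python
import random

P_WIN = 0.49295  # approximate pass-line win probability per decision (assumption)

def session(n_bets, rng):
    """Net units won over n_bets even-money pass-line wagers."""
    return sum(1 if rng.random() < P_WIN else -1 for _ in range(n_bets))

rng = random.Random(42)
results = [session(100, rng) for _ in range(10_000)]
mean_net = sum(results) / len(results)
ahead = sum(1 for r in results if r > 0) / len(results)

print(f"average net after 100 bets: {mean_net:+.2f} units")  # close to -1.41
print(f"fraction of players ahead:  {ahead:.0%}")            # a sizable minority
```

The expected loss per 100 bets is 100 × (2 × 0.49295 − 1) ≈ −1.41 units, matching the 1.41% house edge, yet roughly four players in ten walk away winners.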

What is the best pathological example I can think of? How about a thought experiment first. Consider a robot whose sole mission is to take existing materials and build Rube Goldberg machines.

In principle, this robot can build something far more complex than itself. That is to say, the final artifact doesn’t have to be as intelligent as the robot, but it could have more interlocking parts than the robot. The only task of such Rube Goldberg machines might be to turn on a light. But if the robot builds enough of them with sufficient variety, then there will be more CSI in the end than what we started with. At the very least, the CSI of the robot combined with the CSI of the Rube Goldberg machines it builds is greater than the initial CSI of the robot by itself before it started on its task.

We see small illustrations of this if we start out with a small population of beavers. Let them loose, let them multiply, and CSI will increase as they build more and more dams. Or how about bees with honeycombs? But what matters is not the quantity of CSI, it is the quality. Building more honeycombs (and hence more CSI) is not the sort of CSI the ID/Evolution debate is really interested in. Unfortunately, the NFL theorems do not distinguish quality of CSI from quantity.

For that matter, take a robot in a room full of coins that have random heads-tails configurations. The robot orders them all heads. The final CSI inside the room (the robot’s CSI plus the coins’ CSI) is now greater than what we began with!
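One way to see why this example is slippery: under the chance hypothesis the all-heads outcome is maximally improbable (high surprisal), yet once the robot is done there is no uncertainty left in the room (zero entropy), and both quantities get called “Shannon information.” A minimal sketch, assuming a 500-coin room (the number is an assumption for illustration):

```python
import math

N = 500  # number of coins in the room (illustrative assumption)

# CSI-style count: surprisal of the all-heads outcome under the fair-coin chance hypothesis
surprisal_bits = -N * math.log2(0.5)  # 500.0 bits: all heads is maximally improbable by chance

# Entropy of the room after the robot is done: every coin shows heads with certainty
p_heads = 1.0
entropy_after = 0.0 if p_heads in (0.0, 1.0) else -N * (
    p_heads * math.log2(p_heads) + (1 - p_heads) * math.log2(1 - p_heads)
)

print(surprisal_bits, entropy_after)  # 500.0 0.0
```

Whether the robot “increased” the information in the room depends entirely on which of these two quantities one is counting.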

Because biology has agents that can be deemed to possess weak AI (Artificial Intelligence), they can in principle increase CSI from their initial state if they are pre-programmed to do so. Front-loaded evolution, or the evolution James Shapiro envisions, may take advantage of this fact. Do I believe this is what happened? No. IMHO, the evidence indicates we do not live in a world that is occupied by such pathological examples. But I highlight these issues for the sake of completeness. Care should be taken when arguing ID using NFL theorems. I illustrate how I argued it in The mootness of anti-NFL arguments.

Perhaps the moral of this essay is that NFL theorems do not distinguish the quality of CSI from the quantity. In the case of the robot increasing CSI in a room of coins, it doesn’t improve the CSI of the robot itself. Be careful using NFL arguments to defend ID.

23 Replies to “Nuances in understanding NFL Theorems — some pathological “counterexamples””

1. 1

Sal:

You seem to be in a slump lately. 🙂

. . . and I’m surprised Darwinists have been slow to seize refuge in the possibility of above average outcomes.

That is the refuge they always seize upon. “Of course,” they acknowledge, “things in nature don’t happen that way normally. But just give us enough time and eventually a wildly improbable, extraordinary event will occur. Then natural selection will carefully preserve it.”

The whole theory is built upon the idea that these wildly improbable outliers can and do happen, given enough time. The math is so far off it is a joke, but that is their whole mechanism: improbable events, outliers, sheer dumb luck.

As for CSI — dare I repeat myself yet again? — you keep conflating the reproduction of pre-existing CSI, with the creation of novel new CSI. They are not the same thing!

2. 2
Mapou says:

Anderson:

That is the refuge they always seize upon. “Of course,” they acknowledge, “things in nature don’t happen that way normally. But just give us enough time and eventually a wildly improbable, extraordinary event will occur. Then natural selection will carefully preserve it.”

The problem with this is that they assume that whatever potentially useful complexity has been achieved along the way will be conserved until the needed subsequent outliers show up to complete it. This will never happen because subsequent random changes are guaranteed to destroy all existing sequences. There is no way for NS to “know” that a given unselected sequence is potentially useful in order to conserve it for eventual completion.

Potentially successful outliers must be cumulative over a very long period of time but NS cannot know they are winners until final completion. Thus they are invariably destroyed. This is one of the biggest problems with the RM/NS scenario, in my opinion. And this is why CSI is such a problem for Darwinian evolution. Randomness implies chaos, not order. Always.

3. 3
scordova says:

As for CSI — dare I repeat myself yet again? — you keep conflating the reproduction of pre-existing CSI, with the creation of novel new CSI. They are not the same thing!

CSI is only in evidence when it emerges in a physical artifact. I believe the coins going from a random configuration to all heads qualifies as novel.

4. 4

First, what CSI do you see in coins that are all heads? That smells as much like a necessity explanation as anything.

Second, if the robots are programmed to do x (in this case pick up a coin, examine it, and then put it tails down), then all the CSI for x exists in the programming up front. The fact that the robot carries out the instructions that were already contained in the CSI-filled programming hardly demonstrates that the robots “created” any new novel CSI.

This is the same fallacy Dawkins ran into with his Weasel program in which he claimed the “Darwinian” process produced a meaningful phrase. It didn’t. It was programmed in up front. Every byte and bit of information that showed up in the output already existed in the input. Not one thing new was created.

5. 5
kairosfocus says:

SC: The key issue is blind search in a vast config space with resources that only allow a next to zero fraction to be searched. In such a case unless an intelligently designed cause is ruled out a priori, it is by far and away the better explanation. KF

6. 6
selvaRajan says:

Crop Circles & CSI!
Consider a paddy field. Its CSI is low.
Now consider 2 similar crop circles, one man-made and another natural. Both have complex designs, so they have high CSI.
Would you be able to tell them apart?
If you look closer, you will find that the paddy stalks of the natural crop circle are bent (as against the broken stalks of the man-made crop circle) and the circumference of the natural crop circle is smoother than that of the man-made crop circle (which is always a lot more jagged).
So the natural crop circle is more complex than the man-made crop circle!
The point is that uniformity or complexity of a design cannot be equated with intelligence.

7. 7
scordova says:

First, what CSI do you see in coins that are all heads?

The number of bits is exactly the number of coins. As far as I know, I did the calculation the standard way any ID proponent would calculate it.

Second, if the robots are programmed to do x (in this case pick up a coin, examine it, and then put it tails down), then all the CSI for x exists in the programming up front. The fact that the robot carries out the instructions that were already contained in the CSI-filled programming hardly demonstrates that the robots “created” any new novel CSI.

This is the same fallacy Dawkins ran into with his Weasel program in which he claimed the “Darwinian” process produced a meaningful phrase. It didn’t. It was programmed in up front. Every byte and bit of information that showed up in the output already existed in the input. Not one thing new was created.

There is a similar paradox with a compressed MP3 or any compressed file. If the MP3 file in its compressed state is 2 gigs but uncompressed is 60 gigs, how much information is really in the 60 gig uncompressed file? The Shannon metric does not make a distinction; it will say 60 gigs. CSI is measured with Shannon metrics. If I asked an ID proponent how much CSI is in the uncompressed file (without telling him it had previously been compressed), and further, if there were a few copies of the decompressed file lying around, most ID proponents would say 60 gigs.

Of course the information was front-loaded in Weasel, and of course it was front-loaded in the robot, but CSI, which is stated in terms of Shannon metrics, doesn’t actually track this. The distinction between front-loaded and novel is not in the definition of CSI as far as I can tell.

Let’s say the sole job of the robot is to build statues of existing objects (kind of like Mount Rushmore). Standard ID would say the statues are novel CSI. That’s because CSI is measured according to the configuration of a physical artifact, not a conceptual one. Again, Shannon metrics.

I’m only pointing out that anti-ID proponents will confront us with such objections and questions (like how much CSI is in that 60 gig file).

The robot example is something I raised with Bill Dembski way back in the ISCID days.

The robot example is an extension of my notion that weak AI (such as in robots) can look very much like RI (real intelligence).

If we suppose the ovum cell from a mother and the sperm cell from a father are non-sentient beings, but rather AI-like entities, we have a paradox. These mindless entities join in the womb and then develop into a thinking, conscious human. In some respects, a mechanical computational process exceeded the construction and fabrication abilities of a conscious being; in fact, it made a conscious being! Where then did the real intelligence originate, since as far as we can trace, it came from two unconscious cells from the mother and father?

Personally, I believe consciousness has a non-material origin from God, but it’s awfully hard to demonstrate that just by looking at the developmental process. CSI and a lot of ID arguments deal with probability and mechanical evolution (or lack thereof), not the souls and immaterial consciousness of humans.

If we were to be formalists, we’d be tempted to say human mental processes were front-loaded from the cells that were involved in conception, and that the CSI humans make (like Mt. Rushmore or space shuttles) isn’t any more novel than what mindless robots make. Therefore all CSI was front-loaded from the very beginning of life. That is also an ID inference, but it conflicts with some people’s notions (myself included) of free will and human creativity.

This view (of front-loaded from the start) is still an ID view, but one that isn’t palatable to most IDists, particularly creationists and others who believe in free will and moral responsibility.

Personally I believe in consciousness, free-will, and souls made by God, and moral responsibility. But that’s kind of hard to demonstrate as proceeding from the definitions and theorems of CSI and ID literature.

I’m stating some of these concerns now because the Darwinists have totally lost the debate with ID. The most pointed concerns about ID are now only from ID proponents themselves.

You said I’m in a slump. If you like, I could just keep highlighting all the long-refuted stupidity of the evolutionists, but I think that would be boring and beating a dead horse.

I find the current topics (like AI and NFL) more challenging.

Measuring the amount of front loaded information is a serious challenge. I don’t know that it can be done with the way CSI is stated, particularly for lossy-decompressed information such as in the robot building statues. That is because CSI uses Shannon metrics, not algorithmic information metrics.

I’m putting it on the table, because I think the topic needs to be considered and debated.

8. 8

Sal:

I believe I’ve pinned down what is going on here after thinking about this a bit more last night.

The crux of your approach is that when an inanimate thing instantiates some CSI in matter (a printing press prints another copy of a book, a cell reproduces itself, a robot places coins all heads) you are saying that “more” CSI now exists in the world. So the real question is whether having another copy, or another physical instantiation, of something means that there is “more” CSI than there was before.

I can guarantee you that none of the main proponents of intelligent design view another copy of, say, a book as “more” CSI than one copy. Be that as it may, however, perhaps they are wrong and we really should view the second copy of the book that rolls off the printing press as “more” CSI than there was moments earlier. So let’s think it through.

If we take that approach, then a reasonable person might also point out that there is a difference between creating “more” CSI de novo and creating “more” CSI by making another copy of pre-existing CSI. Thus, in order to have a clear discussion they might (and rightly so) insist that when speaking of the former we refer to “more new CSI” and when speaking of the latter we refer to “more pre-existing CSI.”

Once we have clarified those definitions then we can get back to the substance of the matter, which is whether there is any known source for “more new CSI” other than intelligent agents. That analysis consists of two parts: (i) are intelligent agents known to create “more new CSI,” and (ii) are purely natural and material causes ever known to create “more new CSI”?

The analysis is then precisely the same one that Dembski, Behe, et al. have been focusing on all along. Yes to the former, no to the latter.

In summary, at best your approach is an exercise in definitional semantics; at worst it adds an unnecessary step to the process without changing the ultimate analysis.

9. 9
selvaRajan says:

ID proponents must be mindful whenever the word AVERAGE is used, because it implies there may be above average performers

Nothing controversial here. Absolute fact. I agree.

In summary, at best your approach is an exercise in definitional semantics; at worst it adds an unnecessary step to the process without changing the ultimate analysis

Agree.

10. 10
scordova says:

I can guarantee you that none of the main proponents of intelligent design view another copy of, say, a book as “more” CSI than one copy.

It is not more information in the Algorithmic sense, it is in the Shannon sense. There is ambiguity about whether CSI uses the Shannon metric or Algorithmic metrics. You’ve helped me at least state the problem more succinctly.

I don’t have a resolution to the problem by the way. It is difficult if not impossible to actually say how many bits exist in the algorithmic sense because that is observer dependent.

I’ve said that if biology implements things like login/password, lock-and-key systems, then as NFL shows, passwords can’t be resolved by Darwinian processes any better than random processes on average. I’m okay using NFL in that way, I gladly use it in that sense. Hence, complex protein binding sites (like passwords) cannot evolve via Darwinian processes. We’ll win that argument every time.
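The password point can be made concrete with a toy search (the 6-character password, the 36-symbol alphabet, and the hill-climbing loop below are all illustrative assumptions). A login-style oracle that only answers match/no-match gives an evolutionary search nothing to climb, while Weasel-style per-character feedback solves the same problem in a few hundred steps:

```python
import random
import string

TARGET = "S3CRET"  # hypothetical 6-character password
ALPHA = string.ascii_uppercase + string.digits  # 36 symbols: 36^6 ≈ 2.2 billion states
rng = random.Random(0)

def fitness_login(guess):
    """Login-style oracle: one bit of feedback, match or no match."""
    return 1 if guess == TARGET else 0

def fitness_weasel(guess):
    """Weasel-style oracle: rewards every correct character (a smooth gradient)."""
    return sum(a == b for a, b in zip(guess, TARGET))

def hill_climb(fitness, max_steps=100_000):
    """Mutate one character at a time, keeping mutants that score at least as well."""
    cur = [rng.choice(ALPHA) for _ in TARGET]
    for step in range(max_steps):
        if "".join(cur) == TARGET:
            return step
        mut = list(cur)
        mut[rng.randrange(len(mut))] = rng.choice(ALPHA)
        if fitness("".join(mut)) >= fitness("".join(cur)):
            cur = mut
    return None  # search failed

steps_weasel = hill_climb(fitness_weasel)
steps_login = hill_climb(fitness_login)
print("with per-character feedback:", steps_weasel)  # typically a few hundred steps
print("with login-style feedback:  ", steps_login)   # almost surely None: a blind search
```

With all-or-nothing feedback the fitness landscape is flat everywhere except the target, so selection has nothing to work with and the search is effectively blind.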

I’m not comfortable however with a blanket statement saying CSI cannot increase via evolutionary processes. NFL precludes Darwinism from evolving proteins, but not because CSI doesn’t increase via evolutionary processes, but because of the difficulty of blind search.

With respect to the robot building Rube Goldberg systems, the search isn’t blind, he knows both sides of the matching parts. That is, he can make matching parts like making up a password for a new account.

However, with the protein binding problem, he cannot solve it in this way because the protein binding problem is like trying to solve a password for an existing account. He can’t make up the password. He’s stuck with a blind search.

For example, if a system needed insulin, you can’t just make it up, you’d have to figure out the system needed insulin, and then you’d have to figure out how to construct an insulin molecule. No computer to date can solve it without pre-existing knowledge.

In sum, I think Darwinism is refuted by NFL, but for different reasons than what other ID proponents believe.

11. 11
PaV says:

Sal:

You write this:

I asked one pit boss, “what was the longest winning streak by the players?” He said something on the order of 15 wins in a row, and the casino lost over \$140,000 in a few hours as a result.

My follow-up to this is: How many rolls of the dice has this “pit-boss” been witness to? Let’s say he’s been around the casino for twenty years, working five days a week, and 8 hours a shift, with rollers rolling, on average (there are high volume days/hours and low volume days/hours) 12 rollers an hour.

So, 20 x 52 x 5 x 8 x 12 = (approx) 500,000 rollers.

So that 15-win run is a 1-in-500,000 occurrence.

Here’s another way of calculating:

Let’s say that this lucky roller rolled the dice 20 times an hour, and that his/her streak lasted three hours. Well, on the first ‘roll’, there is a possibility of winning, but none of losing.

So, out of the 60 = 20 x 3 ‘rolls’, we can throw out 15 of them, leaving 45. The chance of losing is roughly equal to that of winning on any particular roll. So, the chance of NOT losing in 45 ‘rolls’ is 1 in 2^45, or about 1 in 3.5 x 10^13. So the odds of this happening are 5 x 10^5/3.5 x 10^13 = approx. 1.4 x 10^-8, or about a one in 70 million chance.

But what if it was 18 rolls/hr. and only two-and-one-half hours? This equals 45 ‘rolls’ - 15 ‘rolls’ (as before) = 30 ‘rolls’, and the odds of such a streak occurring are 1 in 2^30, or about 1 in 10^9. Then, given that the pit-boss has witnessed 5 x 10^5 ‘rollers’, each with a chance of pulling off such a stunt, this means the odds of that pit-boss ‘seeing’ such a streak are 5 x 10^5 ‘rollers’ / 10^9 = 5 x 10^-4, or a 1 in 2,000 chance of the pit-boss seeing this over the twenty year period he worked.

These give us some rough estimates of the improbability of such a ‘streak’ ever happening. It might not be all that rare.

However, when it comes to genetics, and the search spaces we’re looking at in the case of these extreme improbabilities, what is the probability of that “above-average” search taking place? I would think it would be astronomically small.

12. 12
PaV says:

Sal:

You wrote:

I’m not comfortable however with a blanket statement saying CSI cannot increase via evolutionary processes.

Don’t you really mean to say: “I’m not comfortable however with a blanket statement saying CSI cannot increase via biological processes.”

IOW, there are biological processes—per Shapiro’s point-of-view—that are algorithmic, and which then could increase CSI. But is that ‘process’ really “evolutionary”? Or is it simply ‘biological,’ something built into what cells naturally do, but which is present via ‘design’ methods?

13. 13
scordova says:

PaV,

Nice to hear from you. Long time. 🙂

I think CSI can increase via biological processes, but not high quality CSI. Example, a population of beavers. They breed and multiply and build more dams, hence more CSI.

But that doesn’t mean the beavers increase their own CSI (like new proteins). The problem is that CSI’s definition doesn’t distinguish quality. Clearly the CSI of interest is whether the beavers can evolve into more complex creatures.

As I thought on it more, the problem of algorithmic information is very very difficult.

Two people might look at the same 10 gigabit string and conclude very different amounts of information exist.

One might say a stream has 10 gigabits of algorithmic information, and another that it has very little algorithmic information (if he realizes that the bit stream is the digits of pi or the Champernowne sequence).

That is the problem when we’re trying to estimate the true information content of a system (like trying to estimate the front-loaded information in Weasel, etc.). Algorithmic information is very difficult to place objective measures on, and the measure is in the eye of the beholder.
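The pi example can be made concrete. The few lines below (Gibbons’ well-known unbounded spigot algorithm) generate as many digits of pi as you like, so a bit stream that looks incompressible to one observer has almost no algorithmic information to an observer who recognizes what generates it:

```python
def pi_digits(n):
    """Return the first n decimal digits of pi (Gibbons' unbounded spigot)."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    out = []
    while len(out) < n:
        if 4 * q + r - t < m * t:
            # The next digit is settled; emit it and rescale the state
            out.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # Consume another term of the series to narrow the interval
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return out

print(pi_digits(10))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

A dozen lines of arithmetic stand in for an unbounded stream of “random-looking” digits, which is exactly the observer-dependence problem: the Shannon-style count of the stream is enormous, the program that produces it is tiny.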

Shannon information, by contrast, is objective, but it doesn’t make the distinction in quality. It does not distinguish garbage from highly meaningful information. CSI does make distinctions through use of specification, but the bit counts are still basically via Shannon, hence it can’t really measure algorithmic information.

CSI and NFL have made good advances in stating the difficulty of blind search, and I will go so far as to say CSI will not increase in biological systems if we are talking about the CSI of proteins. But I don’t feel comfortable saying that for all forms of CSI.

One might say the CSI from the dams of a growing population of beavers isn’t novel. To which I say: therein lies the problem. It is very difficult to actually state what is and isn’t novel when one is talking about algorithmically measured information. It’s like asking how many bits of information are in a decompressed file; the answer depends on who you are talking to. This is particularly true of lossy decompression, where novel information is thrown in on the fly.

NOTE:
MP3 files are lossy compressed. That means when they are decompressed they don’t exactly represent the original. Distortion is added. Is this distortion novel information?

The problem posed by MP3 files is analogous to a population of beavers breeding and building more dams. The breeding beavers are inexact copies of the parental forms, and so are the new dams. Are these variations novel information or not? The dam does have some matching parts in a sense, so it could be said to have some novel CSI.

14. 14
scordova says:

So that 15-win run is a 1-in-500,000 occurrence.

I calculate it to be far less than that.

Chance of player win = 49.295%

(49.295%) ^ 15 = 1 out of 40,546

If we estimate about 30 decisions per hour, one craps table will see:

30 * 24 * 365 = 262,800 decisions per year. That particular casino had 3 craps tables.
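Redoing the arithmetic in a couple of lines (using the 49.295% figure above; overlapping streak windows are ignored, so the final number is only a rough expectation):

```python
p_win = 0.49295          # pass-line win probability per decision (figure from the comment)
p_streak = p_win ** 15   # a given span of 15 consecutive decisions all going to the players
print(f"15-win streak: 1 in {1 / p_streak:,.0f}")   # about 1 in 40,500

decisions_per_year = 30 * 24 * 365 * 3              # 30/hour, round the clock, 3 tables
expected_streaks = decisions_per_year * p_streak    # crude expectation, ignoring overlaps
print(f"decisions per year: {decisions_per_year:,}")
print(f"expected 15-streaks per year (rough): {expected_streaks:.1f}")
```

So even a 1-in-40,000 streak should show up many times a year across three busy tables, which is why the pit boss had a story to tell.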

The reason I think this stood out in the pit boss’s mind was the fact that several players kept reinvesting their winnings to make ever larger wagers. The only reason the casino probably wasn’t wiped out is that they limit the maximum bets; otherwise a lucky streak like this would wipe the casino out.

PS:
By the way, there is a certain Catholic priest who is a hero of mine who was a skilled casino gambler:

http://articles.latimes.com/20.....me-fahey21

Also my good friend Dr. R. Michael Canjar, who taught math at a Catholic school and recruited a Catholic nun as part of his casino activities, raised a lot of charity money from the casinos:

http://en.wikipedia.org/wiki/R._Michael_Canjar

I remember he and I reminiscing about Turtle Creek casino in Michigan. I told him I was thrown out of Turtle Creek after winning \$6,000. He smiled and said, they threw him out after he won \$60,000!

15. 15
PaV says:

Sal:

Nice to see you back as well.

About that priest, Fr. Joseph Fahey, I wonder if that class he taught on how to beat blackjack was the basis of that Matt Damon(?) movie?

As to “algorithmic information,” I was referring to the fact that we know there are things that genomes do in reaction to what is happening to the organism, as well as in normal cell division and reproduction. For example, the error-correction mechanisms of cells, which, given the right circumstances, can either speed up or slow down, with differing results. I don’t consider these types of processes “evolutionary processes,” but simply “biological processes”: things that cells simply do. Calling them “evolutionary” is almost question-begging since we still don’t know how true species divergence happens at the major taxonomic levels.

I’m just cautioning about the use of the word, “evolutionary.”

16. 16
PaV says:

Sal:

As to the 15 wins at the craps table, from the link you provided, they show that the average number of rolls per win is 3.38. 15 x 3.38 = approx. 51 rolls, at a 49% chance (≈ 1/2) of winning each decision. This gives roughly the numbers I posted before. I’m sure it was a rare event, but nothing outlandish. Winning the lottery is far less likely.

17. 17

Sal @10:

It is not more information in the Algorithmic sense, it is in the Shannon sense.

How so? We don’t just measure the amount of “information” in the world, with each bit independent of all other bits in the world. The Shannon metric is applied to a particular instance, a particular artifact, a particular pipeline of information carrying capacity.

If I measure the Shannon information* in one copy of War and Peace and then measure the Shannon information in another copy of War and Peace, do I now have twice as much information as I had? No. I don’t have twice as much information. The only new information I have is that I have the original information twice. All I have done is confirm that two physical artifacts have the same Shannon information.

This can be seen even in a compression sense — one of the key aspects of information theory. If we string a thousand copies of War and Peace together in a file, we most certainly do not end up with 1000 times more information than we would have with a single tome. Indeed we need but the tiniest amount of additional information to jump from 1 copy to 1000 copies: “one copy; repeat 1000 times.” The file is highly compressible, and therefore the pipeline required to transmit it (which is what Shannon is really all about) is practically identical to what we had with one copy of the book.
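The compressibility claim is easy to check with an off-the-shelf compressor. The sketch below uses the opening line of War and Peace, repeated, as a stand-in for the whole book; deflate’s 32 KB window keeps zlib from literally reaching “one copy; repeat 1000 times,” but the compressed size still grows dramatically more slowly than the raw size:

```python
import zlib

# Opening line of War and Peace as a stand-in "book" (repeated to give it some bulk)
copy = b"Well, Prince, so Genoa and Lucca are now just family estates of the Buonapartes. " * 40

raw_1000 = len(copy) * 1000                 # raw size of 1000 concatenated copies
zlib_1 = len(zlib.compress(copy, 9))        # one compressed copy
zlib_1000 = len(zlib.compress(copy * 1000, 9))

print(f"raw, 1000 copies:  {raw_1000:,} bytes")
print(f"zlib, 1 copy:      {zlib_1:,} bytes")
print(f"zlib, 1000 copies: {zlib_1000:,} bytes")  # far closer to one copy than to the raw size
```

A compressor with a longer history window (e.g. `lzma`) gets even closer to the theoretical “one copy plus a repeat count.”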

—–

One other question that isn’t clear to me: when you talk about “Shannon information” in the context of CSI, do you think the Shannon metric applies to the ‘C’ or the ‘S’ or the ‘I’? I think it matters for the discussion.

—–

* Shannon information should not even be called “information,” but rather a “measurement” or a “metric,” but for purposes of discussion I’ll use the more common terminology.

18. 18
scordova says:

If I measure the Shannon information* in one copy of War and Peace and then measure the Shannon information in another copy of War and Peace, do I now have twice as much information as I had?

It depends!

CSI is usually stated in terms of a priori improbability. The War and Peace illustration you gave has a subtlety. Was it known in advance that one was the copy of the other? If so, there is no uncertainty in the content of the duplicate book; thus the Shannon information for the first book is X bits, and for the 2nd book is 0 bits, since there is no uncertainty about the 2nd book’s contents.

However, if a priori we had no knowledge of what was in the 2nd book (even though it was a copy), then total Shannon information is the sum of the information of the two books, or 2X bits.

The amount of information that exists is somewhat in the eye of the beholder. When we use Shannon metrics, we are talking pretty much in terms of pure probability of symbol arrangements.

Let’s extend the War and Peace illustration to something even simpler. We have one set of 500 coins all heads, and another set of 500 coins all heads. One could argue one set is a copy of the other, and thus the information of the 1000 coins could be argued to be only 500 bits. But this is problematic because the total improbability of both sets of 500 coins being all heads is 1000 bits. Shannon metrics are a measure of the improbability of events; they do not measure the most compact way (the algorithmic way) of representing symbols.

So if I asked what the probability is of 2 sets of coins (each set being 500 coins) being all heads, I would calculate that as 1000 bits. And that is also the CSI calculation, since CSI is stated in terms of improbability, not in terms of the most compact representation of the symbols (algorithmic information). The algorithmic information, on the other hand, is small.

CSI measures improbability. Algorithmic information metrics (as in how compactly can we represent a set of symbols) is not a measure of improbability.
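The two metrics can be put side by side for the coin example: the improbability-style count comes out to 1000 bits, while a compressed (algorithmic-style) representation of the very same outcome is a handful of bytes:

```python
import math
import zlib

# Two sets of 500 coins, all heads, written out as a 1000-symbol outcome
outcome = "H" * 1000

# Improbability-style (CSI / Shannon surprisal) count under the fair-coin hypothesis
surprisal_bits = -1000 * math.log2(0.5)  # 1000.0 bits

# Algorithmic-style count: how compactly the same outcome can be represented
compressed_size = len(zlib.compress(outcome.encode(), 9))  # a handful of bytes

print(surprisal_bits, compressed_size)
```

The same physical configuration yields 1000 bits by one accounting and a couple dozen bytes by the other, which is the ambiguity being debated in this thread.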

19. 19
EDTA says:

scordova #10

I don’t have a resolution to the problem by the way. It is difficult if not impossible to actually say how many bits exist in the algorithmic sense because that is observer dependent.

In the case of the robot and coins, it’s easy to make a relative determination: after the robot has erased the information that existed in the random coins, the amount of algorithmic information went down to near zero because the sequence is now highly compressible. (Any reasonable Turing machine model should do here.) The amount of Shannon information went to near zero because the probability of a coin showing heads is now completely determined. I.e., seeing each new coin gives us no new bits; they’re all the same, predictably. So the robot hasn’t increased anything.

This is the point that I think EA is trying to make also: if a duplication of a physical object means more CSI, then bacteria are creating scads of it every second all over our planet. But we don’t use the term CSI that way. (And don’t want to.) That’s why I consider gene duplication a very weak form of CSI creation, and why Darwinists want to consider it a very significant form of CSI creation. Nature is somewhat good at duplicating things, i.e., at creating strict order, the classic example being a crystal. That’s why we care about how duplication affects our metrics.

20. 20

Sal, thanks.

It seems the Shannon measurement is only relevant to the ‘C’ part of CSI. It is not necessary to view complexity in terms of the Shannon measurement, but it is a simple, easy-to-define surrogate for the Complexity part, and so we can often use it as a basic starting point for thinking about complexity.

However, if a priori we had no knowledge of what was in the 2nd book (even though it was a copy), then total Shannon information is the sum of the information of the two books, or 2X bits.

I’m not sure this is the best or only way to look at it, but I’ll grant it for the moment, as there is a larger issue.

In all the cases you’ve cited (cells reproducing, robots sorting coins, etc.), it is a very simple evidentiary matter to look at the source of the information and see that the information pre-existed the new instantiation in matter. Thus, even (in your example) if we didn’t know a priori that a cell came from another cell with the same information, it is a very easy matter to quickly make that determination and so we are right back to the question of whether we are dealing with “more” CSI or not.

And that is really the crux of the matter. Do purely natural processes have the ability to create new CSI? Again, if we define “new” or “more” so broadly as to encompass every instantiation of information in matter, regardless of whether we can easily pinpoint the pre-existing source of the information, then the words become so broad as to be unhelpful and we will just need to come up with a new term to describe those situations in which we are talking about truly new, novel information, rather than just a reproduction or a repeated instantiation of pre-existing information in matter. We can call it whatever we want — it doesn’t matter what term we use — but a new term we will need.

The bottom line is that we can play with definitions, but we are still left with our original analysis.

—–

Finally, let me say that I do appreciate your effort to formulate things in a way that will appeal to the ID skeptic. I don’t know, perhaps I am just a bit too jaded.

I’ve been at this a long time and, if memory serves, you’ve been at it even longer. The general outline of issues by the prominent ID proponents is pretty clear. In my experience, there are not a lot of ID critics out there who are sincerely attempting to understand the issues and who would readily join the ID ranks if only we could clarify the ID position a bit or tweak a definition here or there. Quite the contrary. The ID position is very simple and easy to understand. All the demands for comprehensive definitions of “intelligence” or “agent,” all the feigned questions about what constitutes “information,” are just attempts to avoid dealing with the real issues and to avoid following the evidence where it leads — in many cases because the implications are quite uncomfortable to some.

I guess I just don’t think an effort to come up with a revised conception of CSI or to change terminology is going to alter things in any substantive way, and it seems like an exercise in semantics to me.

21.
scordova says:

I guess I just don’t think an effort to come up with a revised conception of CSI or to change terminology is going to alter things in any substantive way, and it seems like an exercise in semantics to me.

But if I’m out there defending ID theories, I don’t like defending them in ways I’m uncomfortable with.

For example I don’t use 2nd law arguments to defend ID.

Because there are so many notions of CSI out there, I’m thinking simple probability arguments are the way to go. The CSI and information arguments are creating too many distractions. Rather than adding clarity, they are creating confusion.

I would prefer to go back to the basics of the explanatory filter.

As far as NFL goes, why the big fuss? No Darwinian algorithm can guess passwords better than random search without extra information. Clear as day.
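A minimal simulation of that claim, with a hypothetical two-letter password and toy search routines chosen only to illustrate an all-or-nothing fitness landscape:

```python
import random
import string

random.seed(0)
ALPHABET = string.ascii_lowercase
TARGET = "id"  # hypothetical password: 26**2 = 676 possibilities

def random_guess():
    return "".join(random.choice(ALPHABET) for _ in TARGET)

def tries_random_search(max_tries=100_000):
    # Blind search: draw fresh random guesses until one matches.
    for t in range(1, max_tries + 1):
        if random_guess() == TARGET:
            return t
    return None

def tries_mutation_search(max_tries=100_000):
    # "Evolutionary" search whose only feedback is pass/fail. With an
    # all-or-nothing fitness there is no gradient to climb, so mutating
    # the current string is no better than drawing a fresh guess.
    current = random_guess()
    for t in range(1, max_tries + 1):
        if current == TARGET:
            return t
        pos = random.randrange(len(current))
        current = current[:pos] + random.choice(ALPHABET) + current[pos + 1:]
    return None
```

Averaged over many seeds, both routines need on the order of 676 guesses; the mutation-based search gains nothing because the pass/fail oracle supplies no partial information to exploit.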

But the real problem is that of extravagance.

Also few have focused on all the rhetorical tricks and equivocations that Darwinists resort to. I’ve tried to explore those at UD.

As far as the present discussion, I don’t feel comfortable with all the pronouncements of whether information can increase or not. That is not the way to frame the problem; it doesn’t feel like the best way to defend ID, imho.

I’ve come to appreciate Behe’s ideas more and more over time. They are not as math intensive as the CSI arguments, but they are more accessible and clear. The ways I defend ID of late are just variations of that.

The next best route is the population genetics arguments of Sanford and ReMine.

Last but not least are the OOL arguments.

An area not sufficiently highlighted is the rhetorical tricks and cherry-picking of data that evolutionists resort to in order to give the appearance of proof. The supposed mountains of evidence are mountains of falsehoods and misinterpretations.

The CSI/NFL arguments I don’t feel completely comfortable with. Perhaps my strongest expression of that was:
Siding with Mathgrrl on a point. But even then, the Darwinists could not let well enough alone, and couldn’t even be grateful I somewhat took their side! They ended up criticizing me by saying:

if you have 500 flips of a fair coin that all come up heads, given your qualification (“fair coin”), that is outcome is perfectly consistent with fair coins,

And I’ve never let them off the hook for that statement. 🙂
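For scale, the probability being defended in that quoted statement is straightforward to compute; this is a quick sketch using nothing beyond the 500 fair-coin flips mentioned in the exchange:

```python
from math import log2

# Probability that 500 independent flips of a fair coin all come up heads.
p_all_heads = 0.5 ** 500        # 2**-500, roughly 3e-151

# Shannon surprisal of that outcome: 500 bits.
surprisal_bits = -log2(p_all_heads)
```

Every specific 500-flip sequence is equally improbable, of course; the dispute in the exchange is whether an independently specifiable pattern like “all heads” warrants rejecting the chance hypothesis rather than shrugging it off as “consistent with fair coins.”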

22.

Sal at the end of #21:

Well, that is quite a confirmation of what I said in comment #1, to wit:

That is the refuge they always seize upon. “Of course,” they acknowledge, “things in nature don’t happen that way normally. But just give us enough time and eventually a wildly improbable, extraordinary event will occur.”

Look, the reality is that no committed materialist will ever accept any CSI argument, because they are not interested in the truth or in an inference to the best explanation. They are interested in preserving a materialist outlook on all things. They will believe in outrageous coincidences, rather than consider the possibility of design. Their whole theory is built upon the sheer logical possibility of wildly improbable outliers. Once a person has gone down that path — and in the process abandoned all reason — it is nigh impossible to bring them back through reason alone.

Fortunately, there are some folks on the fence who are willing to at least consider the possibility that materialism might be wrong or incomplete. Hopefully, over time, some of them are listening . . .

23.
kllrDogThermo says:

Let me try a post here. First, I am disappointed that Sal Cordova seems to be bailing on ID. Is ID scientific? Yes, and I don’t see a good reason to drop it. Can we use it to discern the natural from the intelligently designed? Of course we can! Are there difficulties in calculating probabilities? Yes, but I don’t think these difficulties are insurmountable. I even think approximations of the probabilities will suffice.

Sal seems to be rigorously evaluating the concept in detail, whereas I see it working in a general sense: I look at the probabilities of processes, patterns, or events happening, though I probably should make sure that aligns precisely with Dembski’s information formulation.

Arguments about the information content of two books seem like arguing how many angels can dance on the head of a pin. Common sense approaches should suffice. Is the information content of a hundred bacteria the same as that of one bacterium? If the hundred were exact copies, it would be; since they are not, their combined information content would be more than that of one, yet a factor significantly lower than 100 times — how much lower would depend on the variation in the genomic information. As long as we clearly define our assumptions going into our CSI calculations, then I think we will be OK.
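One hedged way to make the hundred-bacteria intuition concrete is compression: exact duplicates add almost nothing to the compressed size of a message. The random-byte “genome” below is a stand-in chosen for illustration, not real sequence data:

```python
import random
import zlib

random.seed(0)
# Stand-in "genome": 10,000 random bytes, essentially incompressible alone.
genome = bytes(random.getrandbits(8) for _ in range(10_000))

one_copy = len(zlib.compress(genome))
hundred_copies = len(zlib.compress(genome * 100))

# A hundred identical copies compress to far less than 100x one copy:
# the duplicates are predictable from the first and carry little new
# information.
ratio = hundred_copies / one_copy
```

On this toy run the hundredfold duplication costs only a small multiple of one copy, nowhere near 100x — mirroring the point that a hundred near-identical bacteria do not carry a hundred times the information content of one.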

Sal is recommending that the second law of thermodynamics argument not be used in challenging evolution, a recommendation I see as clearly bogus. His post shows he doesn’t fully understand the argument, which goes into the need for a thermodynamic mechanism to account for entropy-decreasing processes moving away from equilibrium. I have atheists referencing your article, so I don’t appreciate a post that clearly doesn’t even show an understanding of the argument.