# Can Computation and Computational Algorithms Produce Novel Information?

As some UD readers are aware, one of my interests is artificial-intelligence computer programming, especially game-playing AI.

In producing retrograde endgame databases for the game of checkers with massive computational resources (two CPUs each performing approximately a billion integer operations per second over a period of two months, for a total of about 10,000,000,000,000,000 [ten thousand trillion] mathematical calculations), I obtained some very interesting results, including corrections of human play that had been in the books for centuries. But did the program produce any new information? Well, yes, in a sense, because the computer found things that no human had ever found. But here’s the real question, which those of us at the Evolutionary Informatics Lab are attempting to address: was the “new information” supplied by the programmer and his intelligently designed computational algorithm, or did the computer really do anything original on its own, in terms of information generation?
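As a rough check of that figure (assuming roughly 60 days of runtime, as described above):

```python
# Back-of-the-envelope check of the operation count quoted above.
cpus = 2
ops_per_second = 1_000_000_000      # ~1 billion integer ops per CPU per second
seconds = 60 * 24 * 60 * 60         # roughly two months (60 days), in seconds
total = cpus * ops_per_second * seconds
print(f"{total:.2e}")               # ~1.04e+16, i.e. on the order of 10^16
```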

The answer is that computers do not generate new information; they only reshuffle it and make it more easily accessible. Here’s an example:

Assume that a census has been taken, and we have data (information) about the annual income of every individual in the world — about 6.6 billion people. Suppose that we would like to know the average (mean) income. A computer program, given the data, could generate this new “information” almost instantaneously, but it would be impossible for humans to do this calculation with pencil and paper because it would simply take too long, and the probability of error would be too high.
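For illustration, such a program is little more than a running sum; here is a minimal sketch (with tiny made-up incomes, not real census data):

```python
# Minimal sketch: a streaming mean needs only a running total and a count,
# so it never has to hold billions of records in memory at once.
def mean_income(incomes):
    total = 0
    count = 0
    for income in incomes:          # `incomes` can be any iterable or stream
        total += income
        count += 1
    return total / count

# Tiny made-up sample, not real census data:
print(mean_income([30_000, 45_000, 60_000]))   # 45000.0
```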

Would any new information be generated by such a computer program? It might appear that the answer to this question is yes, but the real answer is no; all the information was included at the beginning, in the data and the computational algorithm. The computer program only made the information more easily accessible and understandable in a reasonable amount of time.

I’ve coined a phrase for this phenomenon. I call it “the elegance of brute force.”

The relevance of this dissertation to the origin of biological information should be obvious. Can random variation (mutational or of any other stochastic variety), filtered by natural selection, produce novel information, especially the information that created humans who design computer programs and create real novel information?

## 28 Replies to “Can Computation and Computational Algorithms Produce Novel Information?”

1. 1
Mapou says:

Would any new information be generated by such a computer program? It might appear that the answer to this question is yes, but the real answer is no; all the information was included at the beginning, in the data and the computational algorithm. The computer program only made the information more easily accessible and understandable in a reasonable amount of time.

Interesting. In this vein, using the same rationale, what would be the answer to the question, “Can the human brain produce new information?” Are we just reshuffling information that already exists in our environment and is picked up by the senses?

2. 2
larrynormanfan says:

Mapou, this is a good question. I’ve been stumped for a while by Dr. Dembski’s Law of Conservation of Information in No Free Lunch. Sometimes I get the impression that, in some real sense, he’s saying that all the information that ever was, was there at the beginning. Can anybody help me with this? Some of you will know that I am not an ID advocate myself, but I am trying to understand the material.

3. 3
Mapou says:

GilDodgen, I just took a quick look at your Gothic Chess invention. It’s an awesome modification to the traditional game. Congratulations. I don’t want to stray too much from ID, but how would your additions affect the effectiveness of the brute-force alpha-beta approach? I guess what I’m asking is, what is the estimated number of moves (per turn or in total) in Gothic Chess as opposed to regular chess?

4. 4
Mapou says:

larrynormanfan, I don’t quite understand Dr. Dembski’s argument yet, either. I remember the No Free Lunch theorem being used back in the nineties by advocates of symbolic AI to cast a dark eye on those who thought that the neural network approach was better than the symbolic one. I did not buy that argument. I do think that the neural approach is indeed better, and that symbolic AI is a complete failure at achieving general intelligence.

The reason that I asked the question about the brain’s ability to produce new information is as follows. I suspect that if it can indeed be shown that a Turing machine cannot produce new info and that the human brain can, then one could deduce that there is more to the brain than what is computable. In other words, it would lend credence to the religionists’ claim that a “spirit” inhabits the brain.

Can the human brain create new information? I would say so. I mean, did Beethoven or Mozart create new info that did not yet exist in nature? The answer seems to be yes. But I’ll defer to the information theory experts.

5. 5

Mapou: “Using the same rationale, what would be the answer to the question, “Can the human brain produce new information?” Are we just reshuffling information that already exists in our environment and is picked up by the senses?”

The difference between artificial computers and human brains is fundamental. The brain is indeed also a computer, but the human mind is connected, by means of the intellect, to an infinite information source (God). In a sense the brain is a tool of the mind, which in turn is a tool of the intellect, which is a direct probe of that infinite information source. For this very reason humans can create new information or, better said, can discover and express new information coming from the infinite source. We reshuffle information that already exists in God, picked up by the intellect. In this sense the creation of really *new* information is an impossibility.

6. 6
Gerry Rzeppa says:

I think remarks #12 and #13 here:

with my replies (#20 and #21) exemplify and resolve a common misunderstanding in this matter.

7. 7
gpuccio says:

Mapou:

“I suspect that if it can indeed be shown that a Turing machine cannot produce new info and that the human brain can, then one could deduce that there is more to the brain than what is computable”

The point is, the human brain is “connected” to a consciousness, and a Turing machine is not, and never will be. In the case of the brain, the operating “I” is the intelligent consciousness of an intelligent agent, which allows the two fundamental processes typical of human consciousness, that is:

1) Conscious representation and conscious feeling, which allow perception of meaning and purpose.

2) Free will, which allows it to generate an output that is in no way completely computable from the input, or from the input plus random variables.

These two magical levels of interaction between consciousness and brain are the source of the new specified information which human beings are constantly producing in the world, adding their creative intervention to that of the original Designer.

The non-computability of human knowledge is nothing new: independently of any ID argument, Penrose has already stated the same concept from a purely mathematical background, with his controversial, but in my opinion extremely brilliant, demonstration based on Gödel’s theorem. In brief, the essence of his argument is that a conscious mathematician can “see” a specific mathematical truth (meaning) which a purely algorithmic formal system cannot know, and that this is true of any possible formal system. In my opinion, that is explained only by the very nature of consciousness, that is, the ability of the perceiving “I” to always detach itself, at a meta-level, from what it is observing. In other words, the perceiving “I” is always, at the basic level, transcendental.

8. 8
Shazard says:

Actually, what is information? It is something very strange that makes sense only in its context. Its defining feature is that information is presented by an arrangement of matter in a very specific state. Information needs an “interpreter”, or its context, without which the number 3.1415926 is just a number. To say that computers do not produce information merely because they rearrange matter in different ways is an error. Producing information from some letters is just that: rearranging letters into some meaningful sentence. But this meaning is not a property of the letters themselves; meaning is a function of the letters and an interpreter, or context.
And if there is some context that asks or orders some permutation generator to rearrange matter so as to match some other properties, then yes, a computer can produce novel information!

The problem is that a computer can’t do it without a human mind, which supplies the interpreter component for the produced results and the means of producing those results.

9. 9
Timothy V Reeves says:

Hello. I’m a Christian who currently favors evolutionary explanations of life, but I am interested in ID and believe it needs some serious consideration.

What is meant by ‘New Information’ in this context? An original literary work written by an author presumably qualifies as ‘new information’, inasmuch as the world has never seen this configuration of symbols before. But in principle it is possible to arrive at this particular unique combination of symbols using a simple brute-force algorithm (for example, an elementary counting algorithm) that works through all the possible combinations of symbols. Thus, given such a systematic algorithm, all the books of the world, those written and those yet to be written, are implied by it.

So what is ‘New Information’? If it simply means a configuration of symbols that has never been seen before, then in principle even the simplest algorithm can ‘generate new information’. If you stipulate that ‘New Information’ only exists if the pattern is ‘meaningful’ in relation to human intelligence, then again an algorithm can in principle generate meaningful patterns. After all, it often happens that some cue from our environment, in the form of, say, a fortuitous pattern of symbols that may have no origins in ‘intelligence’, is enough to trigger some new idea. ‘Intelligence’ is a phenomenon that does not have clear-cut boundaries.
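The “elementary counting algorithm” mentioned above can be sketched directly (a two-letter alphabet is used for brevity):

```python
# Sketch of an elementary counting algorithm: enumerate every string over
# an alphabet, shortest first. Any finite text appears eventually.
from itertools import count, product

def all_strings(alphabet):
    for length in count(1):                      # lengths 1, 2, 3, ...
        for combo in product(alphabet, repeat=length):
            yield "".join(combo)

gen = all_strings("ab")
first_six = [next(gen) for _ in range(6)]
print(first_six)   # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```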

10. 10
tribune7 says:

In this vein, using the same rationale, what would be the answer to the question, “Can the human brain produce new information?”?

The brain can certainly produce information new to the human mind. It is impossible, however, for a computer to produce information new to the computer.

11. 11
gpuccio says:

Timothy V Reeves:

“What is meant by ‘New Information’ in this context?”

Information is a tricky word. It has many different meanings. In the ID context, the relevant concept of information is usually CSI, Complex Specified Information, as defined by Dembski. CSI is the kind of information which is never produced by random causes, but is always the product of an intelligent agent. All human agents produce CSI. Outside of human artifacts, CSI can be found, and in extreme abundance, only in the biological world: that’s biological information, biological CSI. That’s the basis of the design inference in the living world.

Information theory considers information in a different way, relating mainly to its computational properties. CSI has a computational aspect, which is the “Complexity”. CSI must be complex, in the sense that it must be so unlikely that no computational resources in the universe could generate it randomly in a systematic way. That concept is made explicit by Dembski’s Universal Probability Bound (UPB): we can speak of CSI only if the observed result is improbable enough that it is not reasonable to suppose it could be the product of mere chance. Dembski conventionally puts his UPB at 1:10^120 (or something like that), which is in any case a very generous, and very conservative, threshold.
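For a sense of scale, the arithmetic behind “beyond the UPB” claims can be sketched (the 1:10^120 figure is taken at face value from the discussion above; the 100-residue protein length is an illustrative assumption):

```python
# The number of amino acid sequences of length L is 20^L; compare its
# order of magnitude with a 10^120 bound. The 100-residue length is an
# illustrative assumption, not a figure from the discussion.
import math

def sequence_space_log10(length, alphabet_size=20):
    """log10 of the number of possible sequences of the given length."""
    return length * math.log10(alphabet_size)

log_space = sequence_space_log10(100)
print(round(log_space, 1))   # ~130.1, i.e. 20^100 ≈ 10^130 > 10^120
```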

But in the concept of CSI there is a second, fundamental aspect, which is in essence “semantic”: CSI must not only be complex, but also specified: in other words, it must have some form of “meaning”. That is a very difficult topic, and I advise you to read carefully something from Dembski about specification. For now, we can say that the kind of specification we observe in biological information is functional specification: biological information is complex (unlikely enough) and functional (it has a specific meaning in terms of doing something useful in a specific context).

Finally, CSI must not be the product of known necessary laws, in other words it must not be explicable in terms of necessity.

Summing it up, biological information, in almost all its instances:

1) Is complex (most single proteins are well beyond Dembski’s UPB; the general biological context is almost infinitely beyond that)

2) Is functionally specified: almost all single proteins have very well defined functions in a well defined context. The general biological context has almost infinitely more functions, at multiple levels of organization.

3) Cannot, in any known or conceivable way, be the product of necessity. Indeed, even Darwinian theory fundamentally holds that biological information is created in the first place by various kinds of random variation. Necessity has no role in that.

So, biological information is a huge repository of CSI. The only one in nature, together with human artifacts. The logical consequence? Design inference. The only reasonable possible explanation for that? Design.

12. 12
jerry says:

This seems like a silly discussion. Once given a system such as English writing, one can imagine all the permutations that could possibly exist. Therefore one can say that nothing new can be produced that isn’t already there. It seems like a tautology and, as I said, a meaningless discussion.

It is certainly different from listing all the financial reports for everyone on the planet. Now we could do the same for DNA and list all the possible combinations.

All these analogies are theoretically possible but in reality exceed all the computing possibilities that can be imagined. But I fail to see anything meaningful in this. Maybe someone could make a case for this being an intelligent question.

13. 13
PannenbergOmega says:

This sounds vaguely reminiscent of an article I read by Mike Behe in Time Magazine.

“Some think God set up the universe following the Big Bang to unfold like a computer program”

http://www.evolutionnews.org/2.....tuall.html

14. 14
bFast says:

larrynormanfan:

I’ve been stumped for a while by Dr. Dembski’s Law of Conservation of Information in No Free Lunch.

I tend to agree with you. If we have a working gene, if that gene gets duplicated, if the duplicate gene gets coopted for a task for which it is functional but not ideal, and if time + mutation pulls off optimizations, how can we claim that the information content has not increased?

Or to put it in more practical terms: I researched the langur monkey, which has a cow-like digestive system. Apparently the langur uses a gene designed for bacterial defence to digest the bacteria that process the first stage of its meal. When the gene is compared to that of the horse and the baboon (which use it only for defence) and to that of the cow (which uses it for digestion), lo and behold, the langur’s gene is like the cow’s where the horse’s and baboon’s are not. The langur’s gene has clearly been modified to become a better digester than its cousins’. Is there no new information here? I think there is.

(Btw, I did this study because Spetner suggested that the langur had an example of a mutation that couldn’t have happened by chance. I did the calculations, and the langur’s mutation was totally within the scope of what chance could do. I hate it when I put the claims of people on my side to the test and they are found wanting – painfully wanting.)

15. 15
Borne says:

bFast: I think you make claims (painfully wanting etc.) too quickly.

“if the duplicate gene gets coopted for a task for which it is functional but not ideal, and if time + mutation pulls off optimizations, how can we claim that the information content has not increased”

That’s a lot of important ifs – that will inevitably lead to speculation.

So:
2. Show how they require your conclusions
3. Optimization is a nice word – define it in your context and then explain how time + mutation pulls it off
4. How does change of role = increased information?

Also you state that

“The langor’s gene has clearly modified to become a better digestor than its cousin’s”

This sounds like speculation à la Darwin.
How do you know there was a modification?
Upon what grounds is it “better”?

Thanks

16. 16
GilDodgen says:

Mapou: GilDodgen, I just took a quick look at your Gothic Chess invention. It’s an awesome modification to the traditional game. Congratulations.

Gothic Chess was invented by my AI programming colleague Ed Trice, not by me. (Ed collaborated with me on the checkers endgame database project. He designed the sparsely populated matrix indexing functions and did the play annotation, tasks for which I had no patience.) My involvement in Gothic Chess has been as coauthor with Ed of our Gothic Chess program, Gothic Vortex.

I have not been involved in checkers programming for the last few years, primarily because there is no remaining human competition, and the same thing has happened in conventional chess. The highest rated PC chess programs are approaching 3100 Elo at tournament time control. The highest rated human player in history was Garry Kasparov, who was rated about 2850 at his peak.

Chess is suffering the same fate as checkers, with an excessive percentage of draws and the need to spend years memorizing mountains of opening play to be competitive. Gothic Chess, with its two new pieces (the chancellor and the archbishop, which combine the moves of rook and knight, and of bishop and knight, respectively, just as the queen combines the moves of bishop and rook), solves both of these problems. All conventional chess opening play is out the window, and draws are rare with all the extra firepower on the board. Tactical combinations in Gothic Chess are stunning and common.

In 2004 Gothic Vortex walked away with the gold at the computer world championship with no losses or draws. However, the game is attracting top programming talent, and in 2007 a program called Pulverizer tied for first place with Vortex. Vortex lost one game to Pulverizer on time after getting in a weak position, and Vortex beat Pulverizer in one game. (Pulverizer was written by Stefan Meyer-Kahlen, author of the Shredder chess program, which has won 12 different World Computer Chess Champion titles since 1996.)

A new site is just up where you can play Gothic Chess online: http://www.gothic-chess.com/

By the way, I have no financial interest in either checkers or Gothic Chess. It’s just been a fun, intellectually challenging hobby.

17. 17
bFast says:

Borne:

My method of calculating was quite simple. I worked from the presumption (I can hunt down the reference, but the rule of thumb may be somewhat obsolete) that nucleotide point mutations happen at a rate of 1% per million years per lineage. I used this rate, the fact that it required 8 nucleotide mutations to pull off the 6 amino acid changes that occurred, and the fact that there are about 20 differences between the cow’s gene and the baboon’s gene to work with. (It is assumed that the baboon’s gene is most like the original langur’s gene, because the baboon is the langur’s nearest genetic relative.) I then calculated how many lineages it would take to experience the required mutations — around 1000, if I recall. This is well within what is available from chance. (Spetner states strongly that the mutation in question is well beyond chance, even though most of the information I used for my calculations comes straight from the original paper on the subject. I located the paper by following the citation in Spetner’s book.)

2. Show how they require your conclusions.

I don’t know that they require my conclusions. What they do is permit my conclusions. I.e., though I may not have proved that random mutations did provide the raw information, I did show that random mutations reasonably could have done so.

3. Optimization is a nice word – define it in your context and then explain how time + mutation pulls it off

Borne, even if you are an IDer, as I am, you must find a way to put on a “neo-Darwinian” thinking cap. To the extent that the neo-Darwinian model is feasible, I think that we are obligated to consider that it happened that way. Our only option is to experimentally prove that it didn’t happen that way. In this case, we could take langurs, reverse the mutation, then insert the mutations one by one, competing mutated langurs against demutated langurs to prove that each mutation would spread.

However, I do not think that the phenomenon presented here is beyond the edge of evolution. I see no reason to claim that there is a knowledge gap here, when it is statistically reasonable to suggest that it happened by chance.

4. How does change of role = increased information?

If the langur has a modified gene that is now more effective at digesting bacteria than the one its ancestors had, how is this not an increase in information?

18. 18
gpuccio says:

bFast:

Are you sure of your calculations? I have not read the Spetner article (is it available online?), but it seems strange to me that, if I understand correctly, you find a functional six amino acid mutation in baboons or similar species “well within what is available from chance”, while Behe finds a functional two amino acid mutation extremely unlikely in the malaria parasite, both theoretically and empirically. Could you please specify whether, in your calculations, you have assumed that each correct amino acid mutation must be fixed in the population? And if you have, why should it be so? Otherwise, you have to calculate the probability of all 6 amino acid mutations occurring in the same single individual or lineage, among all the possible 6 amino acid substitutions in the whole genome of the species. Have you done that? And if you have, could you please give us the numbers?

19. 19
bFast says:

gpuccio:

you find a functional six amino acid mutation in baboons or similar species “well within what is available from chance”, while Behe finds a functional two amino acid mutation extremely unlikely in the malaria parasite

There is a vast difference between my calculations and Behe’s. In the scenario I calculated, I assumed (reasonably) that any one of the six mutations added a little advantage to the monkey. In Behe’s calculation, the assumption was that there was no advantage until both mutations had happened. The difference between consecutive mutations and simultaneous mutations is HUGE!
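The difference can be sketched with toy numbers (the mutation rate below is an assumed round figure, not taken from any paper):

```python
# Toy comparison (assumed round numbers) of requiring n mutations
# simultaneously versus one selectable step at a time.
p = 1e-8        # assumed chance of one specific point mutation per replication
n = 6           # number of mutations needed

simultaneous = p ** n          # all six in a single replication event
sequential = n * (1 / p)       # expected replications if each step is
                               # advantageous and preserved before the next

print(f"{simultaneous:.0e}")   # 1e-48
print(f"{sequential:.0e}")     # 6e+08
```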

I believe that when there is a very smooth trail up Mount Improbable, classic evolution can get there. However, I respect Behe’s argument that mounts have been climbed where there is no smooth trail up. I agree with Behe that such mounts will not be climbed via RV + NS. As such, it is the obligation of evolutionary biologists to prove that challenging mounts such as the bacterial flagellum really do have a very smooth path up them.

I love Behe’s question, where is the edge of evolution?

20. 20
bFast says:

gpuccio, in response to the question of whether each mutation is fixed in the population before the next happens: I did not factor this into the calculation. However, though it is necessary for the subsequent mutation to happen in the same lineage as the first, the first does not need to be fully fixed before the second can happen. If, for instance, the first mutation is 1/4 of the way to being fixed, then the subsequent mutation has a 25% chance of happening in the lineage enhanced by the first. Once the 2nd mutation happens there, the doubly enhanced gene will spread through the population even faster than it did when there was just one mutation. As such, your point would challenge my calculations somewhat, but probably not by a ridiculous amount.
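That reasoning can be written out as a one-line sketch (all numbers below are illustrative assumptions):

```python
# If the first mutation is at frequency f in the population, a second
# mutation arising at random has probability ~f of occurring in an
# already-enhanced lineage.
def p_second_in_enhanced_lineage(f, p_mut):
    return f * p_mut

p = p_second_in_enhanced_lineage(0.25, 1e-8)   # f = 1/4 fixed, as above
print(p)   # 2.5e-09
```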

21. 21
gpuccio says:

bFast:

I find your answers quite correct, but I would like to understand whether you have any reason for your assumption “that any one of the six mutations each added a little advantage to the monkey”, and would therefore be fixed, either totally or partially.
I say that because it could be exactly the reason for the difference between your assertion and Spetner’s (although, as I said, I have not read his book, so I’m just guessing). In other words, if Spetner considered the six mutations as non-selected, and therefore needing to be reached simultaneously or at least in the same lineage (which, as you correctly say, is practically the same thing), then his conclusion that the mutation was far beyond the range of mere chance is correct.

In general, I think that we concede too easily to natural selection the assumption that single aminoacid mutations can confer benefits. I think that is very rarely true, and in those rare cases the benefits are not necessarily strong enough to be selected through a survival advantage. And fixation, as we well know, even in the cases where it happens, requires time. And time, especially in higher mammals, is not there in large amounts.

As far as I can understand, the concept of the smooth path is really a myth: any assumption of a smooth path should really be proven, at least in principle, on a sound biochemical and molecular basis. Again, I can’t see any reason why a new function, or an increase in function, should be regularly decomposable into single useful amino acid substitutions. Logic and data are completely against such a concept.

I would like to remark again that the only observable model of spontaneous but guided evolution of a specific protein, which is antibody maturation after the primary immune response, does not work that way. It starts from a pre-existing functional protein (the low-affinity primary antibody) and carries out a process of intensive, targeted random mutation, strictly selecting the results according to a precise measure of the affinity of the mutated molecule for the target antigen, whose memory is probably stored in the antigen-presenting cells. Here we have random mutation, but strictly targeted and absolutely induced, and a very intelligent selection by pre-existing information (we are, in other words, exactly in the framework of Dawkins’ “methinks it is like a weasel” model).

Interestingly enough, a recent paper cited on this blog, whose reference I don’t remember at present, described a similar approach in trying to build an artificial new functional enzyme: the authors started with a vast array of proteins already selected for a basic structure (a zinc finger, if I remember correctly) compatible with the function to be developed, and targeted them with random mutations, strictly measuring for the emergence of the new function. After years of attempts, they had finally “evolved” a protein with a weak activity, probably not sufficient to give any benefit in vivo, but certainly measurable in vitro. That was considered an important first step to be refined, in time. That should tell us a lot about how difficult it can be to reach a new “island” of function in the search space, even when we are trying to do that by intelligent trials, and we start from already selected compatible structures, and we already know what we want to attain, and can measure it step by step.

22. 22
bFast says:

gpuccio:

I would like to understand if you have any reason for your assumption that “that any one of the six mutations each added a little advantage to the monkey”

The only reason these mutations were presumed beneficial was that they made the enzyme more like that of the bovine.

it could be exactly the reason for the difference between your assertion and Spetner’s.

I actually think that Spetner did interpret the evidence as requiring the six mutations to be simultaneous. Further, as there are 20 differences between the baboon and the bovine, the paper suggested that any of the twenty would provide benefit for the langur, yet only six converged to the bovine. However, in light of the lack of evidence that these mutations must be simultaneous, I don’t see Spetner’s approach as valid.

As far as I can understand, the concept of the smooth path is really a myth: any assumption of a smooth path should really be proven, at least in principle, on a sound biochemical and molecular basis.

In general I would very much agree with you. I really think that experimental evolutionary biology almost totally doesn’t exist. How much real world advantage must a mutation have for it to fix? How fast will it fix? Certainly there is a tight advantage/rate ratio, but in real terms, what is it? I have seen a calculation stating something to the tune of: given a population of X, a gene needs to offer a Y% advantage, and the Y% is surprisingly low. But in the real world, how do you measure the percentage of advantage?

I believe that the solution to this problem is in experimental science. Let’s create a population of fruit flies that is devoid of a mutation known to be advantageous, then let’s induce the advantageous mutation and watch it spread through the population. This should not be rocket science.

I do think that there is some evidence of just how significant a mutation must be to fix, however. It is in the inverse: the disease-producing mutation. It seems to me to be the mirror problem, i.e., the same calculation. (If two populations merge, one having a less advantageous gene than the other, the better gene should fix in time. If the “less advantageous gene” is disease-producing, the non-disease-producing gene it is competing against should fix at the same moment that the disease-producing gene is purged.)

There was a discussion in ISCID’s Brainstorms a while back which discussed that there are at least 100 such disease producing mutations that are common between man and chimp. Why have these obviously dysfunctional mutations not been purged? I would suggest that it is because the amount of advantage/disadvantage that a mutation has to offer to produce fixation/purging is really quite large. If so, RV+NS fails.

The only problem with this argument is “I would suggest”. I cannot point to the experimental science that proves it! Someone must do the science. Sometimes I wish I had become an experimental biologist instead of a software developer.

23. 23
ericB says:

bFast: “I really think that experimental evolutionary biology almost totally doesn’t exist. How much real world advantage must a mutation have for it to fix? How fast will it fix? Certainly there is a tight advantage/rate ratio, but in real terms, what is it?”

I cannot read these words without wondering if the ideological prejudice dominant within Darwinist thought is actually inhibiting the development of evolutionary science in this way.

Most hard science leans heavily upon mathematics. Yet, in the past when mathematicians have tried to inform evolutionists about the implausibility of their beliefs, what was the response? That the mathematicians must have their math wrong.

Answers could be obtained, but if they were, would they be acceptable?

Who is on the forefront of trying to establish the edge for RM+NS? Behe. Yet the Darwinists would like to strangle such ideas in the cradle.

One of the ironic benefits of ID will be that it also frees evolutionary thinking to become more empirical and more truly scientific. Evolutionary science will no longer be hostage to ideology.

24. 24
bFast says:

ericB:

I cannot read these words without wondering if the ideological prejudice dominant within Darwinist thought is actually inhibiting the development of evolutionary science in this way.

I absolutely agree with you. Even if life unfolded as neo-Darwinian evolution describes, allowing ID to have a seat at the table will spur the scientific community into much deeper and richer proofs than they currently have. I think that this would be very valuable for science. That said, I think that serious experimental evolutionary biology will produce a profound disbelief in the neo-Darwinian hypothesis.

25. Borne says:

Just a remark on your

“you must find a way to put on a “neo-Darwinian” thinking cap”

Indeed! I do so occasionally but generally I avoid doing that because Darwinian thinking cripples the mind. 😉 Darwinists tend to become immune to logic. Just like atheists.

“To the extent that the neo-Darwinian model is feasible, I think that we are obligated to consider that it happened that way.”

Why? Moreover, why for any specific trait in any organism?

Also,

“I did prove that random mutations reasonably could have done so”

Did you not leave out of your calculations the effect of deleterious mutations?
I didn’t notice anything on that.

The fact that some (rare) “beneficial” mutations may be undone by subsequent deleterious ones? That would lower the probability of consecutive beneficial mutations.

Remember that virtually every disease and deformity known is caused by a mutation.

Of course (thankfully) not all mutations are bad; many are silent.

“Scientists estimate that every one of us has between 5 and 10 potentially deadly mutations in our genes; the good news is that because there’s usually only one copy of the bad gene, these diseases don’t manifest. Cancer usually results from a series of mutations within a single cell.”

I suppose you know that mutation/disease data banks are now numerous online.

In any case, calculations on mutational benefits ought to include the effects of other synchronous or subsequent mutations. Don’t you agree? You cannot assume the absence of these.

26. mike1962 says:

What you’re offering is essentially the Platonic view of Penrose (The Emperor’s New Mind, etc.), except that Penrose doesn’t call the Platonic Reality “God”.

27. bFast says:

Borne:

Did you not leave out of your calculations the effect of deleterious mutations?

I actually very much factored this in. When I did the calculation, I assumed that the gene would be hit by a random mutation at a rate of 1% per million years. As such, it appears to have thrown off quite a lot of mutations, presumably because they were deleterious.
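To make that assumption concrete: at 1% per million years, the number of hits a gene takes over a long span can be treated as Poisson-distributed. A minimal sketch (the 300-million-year span is my illustrative choice, not a figure from the discussion):

```python
import math

rate_per_myr = 0.01   # assumed: 1% chance of a mutational hit per million years
elapsed_myr = 300     # illustrative time span, not a measured divergence time

expected_hits = rate_per_myr * elapsed_myr  # mean of the Poisson process
p_untouched = math.exp(-expected_hits)      # probability the gene was never hit
print(f"expected hits: {expected_hits:.1f}, P(no hit): {p_untouched:.3f}")
```

With these numbers the gene is expected to be hit about three times and has roughly a 5% chance of escaping untouched, which is why a conserved gene suggests that the hits it did take were purged.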

“To the extent that the neo-Darwinian model is feasible, I think that we are obligated to consider that it happened that way.”

Why? Moreover, why for any specific trait in any organism?

Because this is science. Why do we assume that lightning is caused by an electrical charge build-up in the clouds? Because if there is a natural explanation, we must accept it or quit pretending to be scientific. I am quite happy to see someone produce experimental or discovered evidence that is not explainable via the neo-Darwinian hypothesis. (I think there’s a bunch of it already on the table.) However, I am too much of a scientist to throw out every piece of evidence that fits the Darwinian paradigm just because I hold to a different view.

28. ericB says:

bFast: “To the extent that the neo-Darwinian model is feasible, I think that we are obligated to consider that it happened that way.”

Borne: “Why? Moreover, why for any specific trait in any organism?”

I agree with bFast, but since directed causes are also properly within science, I would describe the reason slightly differently.

The advance needed in science is to allow unbiased recognition of directed causes (a.k.a. design or intelligent causation).

However, directed causation is too powerful an explanation to use indiscriminately. It could be abused to explain anything at all, including those effects that do not need it.

Consequently, priority must be given to undirected, natural process explanations, wherever those are plausibly applicable.

This sense of priority is explicit in Dembski’s explanatory design filter. If something can be explained through combinations of law and/or chance, that is considered the inference to the best explanation.

In general, when the evidence we have indicates that an effect is beyond the plausible reach of undirected causes, then an inference to directed/intelligent causes can be justified as the best available explanation.