Uncommon Descent Serving The Intelligent Design Community

On a stochastic algorithm and its asymptotic behaviour


While most people agree that simple laws/rules per se cannot create information, some believe that algorithms are capable of doing so. This seems an odd idea, because algorithms, i.e. sets of instructions, can after all be considered complex laws/rules, or sets of rules, a sort of generalization of rules.

The usual and simplest example some evolutionists offer to prove that algorithms can produce information is a stochastic algorithm that, by randomly choosing characters from the English alphabet over a number of trials, finally outputs the phrase “methinks it is like a weasel” (or any other meaningful phrase). This way it seems to them that information is produced by randomness + laws, or even created from nothing. Let’s admit, for the sake of argument, that the phrase “methinks it is like a weasel” so produced is information. The questions are: (1) is that truly creation of information from nothing? (2) what really produces it?

Consider the following schema of such an algorithm:


We have on the left a pseudo-random number generator, PRNG (like the Mersenne Twister), that provides random numbers. In the middle, a set of instructions labelled “FORMAT” takes the numbers, converts them into characters of the English alphabet (26 characters + space), and finally formats them into a series of 28-character strings listed vertically in a file (on the right). The total possible 28-character strings are Os = ~1.2*10^40, corresponding to a Shannon information of nearly Oi = ~4.5*10^42 bits. Only a very small subset of those strings are English sentences with a meaning (an interpolation based on a power law suggests Es = ~10^9 sentences). Therefore only a very small amount of those combinatorial Shannon bits Oi is meaningful English information, Ei = ~4.2*10^10 bits. (By the way, the recognition of the English sentences is a job requiring a linguistic intelligence, in both the syntactic and the semantic sense, that is hard to simulate mechanistically.)
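The PRNG-plus-FORMAT pipeline just described can be sketched as follows. This is a minimal illustration, assuming Python, whose built-in random module happens to use the Mersenne Twister mentioned above; the names ALPHABET, format_string, Os and bits_per_string are illustrative, not from the post.

```python
import math
import random
import string

ALPHABET = string.ascii_lowercase + " "   # 26 letters + space = 27 symbols
STRING_LEN = 28

def format_string(rng):
    """FORMAT step: turn PRNG output into one 28-character string."""
    return "".join(rng.choice(ALPHABET) for _ in range(STRING_LEN))

rng = random.Random()           # Python's PRNG is a Mersenne Twister
sample = format_string(rng)

Os = 27 ** STRING_LEN                          # total possible strings, ~1.2*10^40
bits_per_string = STRING_LEN * math.log2(27)   # ~133 Shannon bits per string
```

Each run of format_string draws one of the Os equally likely strings, which is what makes the combinatorics below straightforward.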

Here we are interested in the asymptotic behaviour of the system, that is, how it works as the running time tends to infinity. Given enough time, our algorithm will output all possible 28-character strings, and thus all 28-character English sentences (“methinks it is like a weasel” included). If the program runs for 10^29 seconds (10^12 times the age of the universe) on the fastest computer available today, it will output ~10^41 sequences, and any given sequence has a 0.99976 probability of occurring. With this asymptotic behaviour in mind, let’s finally ask where Ei comes from.
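The 0.99976 figure can be reproduced with logarithms, since (1 - p1)^T is not directly computable on a calculator for T = 10^41. A sketch, assuming Python; the trial count is the post's figure, and the variable names are illustrative:

```python
import math

Os = 27 ** 28          # ~1.1973*10^40 possible 28-character strings
p1 = 1.0 / Os          # probability of one given string in a single trial
T = 10 ** 41           # sequences output in 10^29 seconds (per the post)

# (1 - p1)**T underflows any calculator, so work with logarithms:
log_never = T * math.log1p(-p1)               # log of P(string never appears)
p_at_least_once = 1.0 - math.exp(log_never)   # ~0.99976
```

math.log1p keeps full precision for arguments this close to zero, which a naive math.log(1 - p1) would lose entirely.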

Ei comes from the potentiality that the system contains from the very beginning. This potentiality is somehow front-loaded into the system and unfolds as the system runs. The potentiality will develop partially or totally depending on the running time. The information potentiality, measured in the Shannon sense, is Oi bits; measured in the English sense, it is Ei bits, as shown above. This system (like any closed system) cannot unfold more than its potentiality allows, because after the production of Os the sequences repeat and no new sequence is produced. The potentiality is entirely accounted for by the instructions pre-loaded into the algorithm (and obviously by the pre-existing computing environment available to run the program). These instructions (hosted and processed by the computing infrastructure) define, in a compressed way, the potentiality and how it can develop. About the concept of potentiality, see here.

Answer to question #2: given that this potentiality is entirely due to the program (plus the computer), what really produces Ei is not the algorithm but the cause of its design, the designer. The information provided by the designer exactly accounts for what the program actually does, what it could potentially do, and its asymptotic tendency. The programmer provides all the information in potential/compressed form, by design, from the beginning; the algorithm per se creates zero information. Since the algorithm and its potentiality come from their programmer, Ei also comes from the programmer. The algorithm, its potentiality and all effectively produced (or potentially producible) strings are due to the system designer. The guidance given by the programmer is exactly the program, no more, no less.

Therefore one can say that nothing new can be produced that isn’t already there. Nowhere is there something coming from nothing. Nowhere is there more arising from less. No “free lunch”, as ID theory puts it.

At this point a Darwinist could argue: “since your example of randomness + laws creating information is analogous to my chance + necessity creating organisms, you yourself have proved that Darwinism can work”. This argument is flawed for two main reasons:
(a) the stochastic system I described is entirely designed;
(b) organisms contain advanced organization, a far higher and qualitatively different thing, whose vertical, hierarchical, functionally complex specified information is incomparable to the horizontal, flat serialization of characters produced by my algorithm.

But this is a point worthy of analysis in another thread.

Paul Giem #27 Yours is an excellent mathematical demonstration of why we obtained two almost identical results via different formulas. By the way, you are right that mine is impracticable for such large values of T (yes, pocket calculators overflow even for far smaller values). One needs logarithms, and in fact that is what I did. I know that you love, among other things, both metaphysics/theology and mathematics/science. This is a curious thing we share with some other IDers/UDers (if you look at my 48 previous UD posts you will find topics related to those different fields). I am proud that you read my post, and I hope to exchange ideas with you in the future too via this priceless ID blog. Again, thank you. niwrad
Niwrad, Your procedure is exactly correct mathematically. However, calculators have trouble with such large exponents on numbers so close to 1, so for estimation purposes it is worth recalling that for large x, (1 + 1/x)^x approximates e (and (1 - 1/x)^x approximates 1/e), accurate to roughly as many places as x has digits in its exponent, that is, in this case 40 digits, which is plenty accurate for our purposes. That means that if we say that p1 is the probability that we will get the desired result in 1 trial (so that 1/p1 is a very large number), and T is the number of trials, as you did above, our 1 - (1 - p1)^T becomes 1 - (1 - p1)^((1/p1)*(T*p1)) = 1 - ((1 - p1)^(1/p1))^(T*p1), which can be approximated (to 40 places) by 1 - (1/e)^(T*p1) = 1 - e^(-(T*p1)), which is the end result of using the Poisson distribution. Paul Giem
So, is natural selection eliminative? Or is it not?
It is eliminative and it doesn't operate. Joe
niwrad: Thank you for your answer. I agree. I have tried to specify that my perspective was merely empirical and scientific, as it always is in my discussions about ID, because I believe that ID is exactly that: a scientific theory. However, ID certainly has metaphysical implications, and I am obviously interested in them too. So, I really appreciate your contribution. gpuccio
gpuccio #22 Thanks for reading that old post of mine. We say different things, but it is likely they are compatible, why not. Your viewpoint is cosmological; mine (in that old post) tries to be metaphysical. You argue in terms of effects, results, proofs, history, scientific evidence... I argued in terms of final causation, relations between the cosmos and its designing principle, time and non-time, potentiality and its deployment... In general, any speculation involves objects and the perspectives from which we look at them. Almost always we define the former; almost never do we perfectly specify the latter. niwrad
Paul Giem #20 Thanks for your calculations by Poisson. I calculated the probability of occurring at least once in T (10^41) trials as 1 - (1 - p1)^T, where p1 is the probability of occurring in a single trial. Probably, given the big exponent, I rounded too much. niwrad
niwrad: I have read now that older post of yours. It's very good, but I have some objections to the reasoning. Indeed, the problem is always the same: ID deals with scientific facts, and the ID interpretation of some facts requires a specific design intervention in time and space. I will be more clear with my favourite scenario: the appearance, in time and space of natural history on our planet, of completely new protein superfamilies. So, let's take, just as an example, the myc protein, a very important transcription factor that appears in vertebrates, probably about 400 my ago. It is a very important protein, highly connected to many fundamental cell cycle processes, and it is quite long (439 AA in humans). Now, here we have a specific time in natural history on our planet where that protein appears for the first time. In vertebrates, 400 my ago (in terms of natural history, quite recently). First of all, one could suggest that the information for that protein was in some way "front-loaded". That is a legitimate hypothesis, but I ask: what facts are there in favor of it? And the answer is: absolutely none. There is no fact that suggests that something material containing the information for the myc protein ever existed, before the time of its appearance. So, I must say that the front-loading hypothesis, while scientifically legitimate, is scientifically unsupported by facts. You say that God is out of time. That's certainly true. But if He intervenes in time, then the results of that intervention must be visible, from inside time, at definite times. To be more clear, we have two possible scenarios: a) God (outside time) inputs the information for the myc protein at the Big Bang. OK, that's fine for me as an hypothesis, but then we have to find some evidence for the presence of that information in the material world from the Big Bang to 400 my ago. Again, that evidence is completely lacking. 
b) God (outside time) inputs the information for the myc protein on our planet, 400 my ago. That is perfectly compatible with the facts we observe. But, and that is my point, for us it is not scientifically important that God starts His intervention "outside time". The important thing, scientifically, is that the result of that intervention happens in time, 400 my ago on our planet, and that we can find evidence for that. So, I stick to my previous statement: I am absolutely for design intervention, in space and time, many times, through an interface. (And the designer may well start his intervention outside time, if we prefer to think that way). gpuccio
Paul: It's always a pleasure to hear from you! gpuccio
Niwrad, it appears that you need another small correction. Your assertion that
The total possible 28 character strings are Os = ~1.2*10^40
is correct, but not quite your assertion that
If the program runs 10^29 seconds (10^12 times the age of the universe) on the fastest computer available today it will output ~10^41 sequences and any given sequence has 0.9999 probability of occurring.
This is because with such large numbers it is highly probable that some sequences will be represented more than once, and the number of times any one sequence is represented very closely follows the Poisson distribution p(k) = (lambda^k) / (k! * e^lambda), where lambda is the expected number of times a given sequence will be found. In your example, lambda = 10^41 / (1.19725*10^40), which is approximately 8.352. Selecting k = 0, we have p(0) = 0.0002358 and 1 - p(0) = 0.99976. This is not much different, but you have seen how pedants can pick at irrelevancies. It is better to have them corrected beforehand. Paul Giem
Joe @ 10 So, is natural selection eliminative? Or is it not? e·lim·i·na·tive, adjective; e·lim·i·nate: to remove or get rid of; < Latin ēlīminātus turned out of doors (past participle of ēlīmināre), equivalent to ē- + līmin-, stem of līmen threshold + -ātus -ate CLAVDIVS
William J Murray, gpuccio Maybe I already dealt with the Designer's-intervention-in-time problem here: https://uncommondesc.wpengine.com/intelligent-design/when-does-the-programmer-install-the-software/ niwrad
William J Murray:
or use an interface to “enter” the program as it is running and do various things provided by the nature of the running program.
That's my idea. I am absolutely for design intervention, in space and time, many times, through an interface. gpuccio
So, we can say - in some sense - that the universe is the computer and that the so-called "natural laws" represent an operating system. Life would represent a program running on that operating system. The user can act on the system in various ways - they can change the physical features of the computer, alter the operating system, alter the programs running on the operating system (like, say, life) or use an interface to "enter" the program as it is running and do various things provided by the nature of the running program. William J Murray
Joe: I agree. Is that in any way different from what I said in my post? gpuccio
gpuccio, Natural selection is just differential reproduction due to heritable random (as in happenstance) variation.
“Natural selection is the result of differences in survival and reproduction among individuals of a population that vary in one or more heritable traits.” Page 11 “Biology: Concepts and Applications” Starr fifth edition
“Natural selection is the simple result of variation, differential reproduction, and heredity—it is mindless and mechanistic.” UBerkley
“Natural selection is the blind watchmaker, blind because it does not see ahead, does not plan consequences, has no purpose in view.” Dawkins in “The Blind Watchmaker”
“Natural selection is therefore a result of three processes, as first described by Darwin: Variation Inheritance Fecundity which together result in non-random, unequal survival and reproduction of individuals, which results in changes in the phenotypes present in populations of organisms over time.”- Allen McNeill prof. introductory biology and evolution at Cornell University
Non-random in that not every organism has the same probability for survival. Joe
KF: OK, I was aware of that other possible meaning, although I have never really found it anywhere. So, just to understand: a) Billion = 10^9 is the common meaning, and maybe American in origin? b) Billion = 10^12 is the old, British meaning? gpuccio
gpuccio Thanks. I largely agree with your comments. Yes, "information" is a biggg tent, a tent where one of the jobs of IDers is to make many important distinctions. For example, I tend to use the word "organization" to avoid unpleasant misunderstandings. In this post I distinguish between Shannon and English information, but when one speaks of biology - the "temple" of organization - it seems to me that "information" is too poor a word. niwrad
Joe and CLAUDIUS: I would say that NS is not "something", but simply the description of a complex effect that takes place in a very specific scenario (replicators in an environment, competing for resources). In a sense, it is the algorithmic consequence of the process of replication, and therefore a consequence of the information already present in the replicators. There is great misunderstanding about NS, mainly as the result of the reification and mythology built around it by neo-darwinism. For example, many are not aware that different causes can generate the negative or positive selection of a variation in information. Everybody seems to think that NS is mainly the result of some "fitness function" in the environment. But that is not always true. The environment need not be involved in the process. For example, a mutation which inactivates some fundamental metabolic process is in itself incompatible with life, and therefore "negatively selected", but the environment has nothing to do with that. Finally, positive selection, although extremely limited, does exist. It is eliminative in the sense that it works through the elimination of the "old" information and the expansion of the "new" variant. The result, however, is the expansion of new information. It happens, even if it is always simple, and almost always "degenerative" in essence (Behe's "burning the bridges" concept), and it usually requires some extreme selecting condition in the environment. But it exists. The simple forms of antibiotic resistance are probably the best example, as very well described by Behe. gpuccio
So it has to operate on it in order to eliminate it? Something can't be eliminated without being operated on? The Origin of Theoretical Population Genetics (University of Chicago Press, 1971), reissued in 2001 by William Provine:
Natural selection does not act on anything, nor does it select (for or against), force, maximize, create, modify, shape, operate, drive, favor, maintain, push, or adjust. Natural selection does nothing….Having natural selection select is nifty because it excuses the necessity of talking about the actual causation of natural selection. Such talk was excusable for Charles Darwin, but inexcusable for evolutionists now. Creationists have discovered our empty “natural selection” language, and the “actions” of natural selection make huge, vulnerable targets. (pp. 199-200)
Thanks for the honesty Will. Joe
Joe @ 7
Umm natural selection is a result and NS doesn’t operate on anything. Natural selection is eliminative, Graham2.
How can natural selection be "a result" that "doesn't operate on anything", and also be "eliminative"? If it's eliminative, then its operating on something to eliminate it, is it not? CLAVDIVS
Genetic and evolutionary algorithms, such as Dawkins' "weasel" utilize a goal-oriented targeted search via cumulative selection towards the target phrase. Remove the target phrase from the program and the program would never hit that target if all else is left the same. Joe
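Joe's point about the "weasel" program can be illustrated with a sketch. This assumes Python; the population size, mutation rate and the parent-retention detail are illustrative choices, not Dawkins' original parameters. Note that the selection step scores every offspring against the target phrase, so removing the target leaves selection with nothing to measure:

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "methinks it is like a weasel"

def weasel(target, rng, copies=100, rate=0.05):
    """Cumulative selection toward an explicit target phrase."""
    parent = "".join(rng.choice(ALPHABET) for _ in target)
    generations = 0
    while parent != target:
        # Each generation: mutate the parent into many copies...
        offspring = [
            "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                    for c in parent)
            for _ in range(copies)
        ]
        # ...then select the copy closest to the target. The fitness
        # function is distance to the target phrase itself: delete the
        # target and there is nothing left to score against.
        parent = max(offspring + [parent],
                     key=lambda s: sum(a == b for a, b in zip(s, target)))
        generations += 1
    return generations
```

Retaining the parent each generation (a common variant) makes progress monotone, so the target is typically hit in on the order of a hundred generations, versus the ~10^40 candidate strings of a blind single-step search.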
Is there any equivalent of natural selection operating on the results ?
Umm natural selection is a result and NS doesn't operate on anything. Natural selection is eliminative, Graham2. Joe
niwrad: Very interesting post! A few comments. a) First of all, for clarity, I think we should always specify that we are not speaking of information in the "Shannon" sense, but of specified (or functional) information, in the ID sense. In your examples, the specification is "Being a phrase with meaning in the English language". It is interesting to observe that, as I have said many times, no functional specification is really possible without some reference to a conscious intelligent observer. In your example, no purely non-conscious system could ever recognize "a meaning", even if, obviously, some objective algorithm to recognize English sentences can be programmed into a system by a conscious intelligent designer. So, whether the specification is based on a recognition of "meaning" or of "purpose" (function), it requires some reference to a conscious intelligent observer who can recognize that meaning or purpose. At that point, functional complexity is easily defined as the minimal number of bits required to implement the function by a random search. In most cases, as in your example, that corresponds to the ratio of the target space to the search space. b) What I believe is that a random system can generate outputs that can appear functional to a conscious intelligent observer, but those outputs will never be complex, in the sense of having high functional complexity. You say that the random system itself is designed. That is true in your example, but not in all scenarios. For example, I can observe some truly non-designed random system, for example the days when the weather is prevalently sunny, and read them as a binary outcome (I can decide, for example, to read the sunny days as 1, and all the others as 0). And then I can transmute those data into ASCII characters, and decide if the sequence has meaning in the English language.
Now, it is possible that I obtain some simple sequence that is correct in English (for example, "I am"), but that sentence will never be long. "I am" is 4 characters, and I leave to your "damn neurons" :) the computation of the probabilities, and of how many days would be necessary. "Methinks it is like a weasel", in that system, is definitely beyond the computational resources. That is to say, the real reason why a purely random system cannot output complex functional information is the probabilistic barrier. c) There is the possibility that the system includes some selection algorithm which helps overcome the probabilistic barriers. In that case, I strongly believe that the "no free lunch" rule applies. Intelligent selection can easily overcome probabilistic barriers, but it requires intelligent information in the system, and that information must be about the function to be found, and the search for it. d) So, to sum up: random non-conscious systems can output simple, apparently functional information, but never complex functional information; intelligently designed algorithms can output complex functional information, but only about the function defined in them (directly or indirectly) and according to the computational abilities defined in them for that function. e) What an algorithm can never do, even if designed, is to output true new complex functional information about some new function that was never defined, either directly or indirectly, in its program. The reason is simple: complex functional information is beyond the probabilistic resources of any realistic random system, and a non-conscious algorithm will never be able to recognize a meaning or a function which was never programmed into it, either directly or indirectly, least of all make computations about that meaning and function.
f) Therefore, only a conscious intelligent being can generate true new complex functional information, using his representations of meaning and purpose to guide his computations and his algorithmic search. gpuccio
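gpuccio's definition of functional complexity (bits required to hit the target by random search, i.e. the ratio of target space to search space) can be applied to the post's own figures. A sketch in Python; Es = 10^9 is the post's power-law estimate, and the variable names are illustrative:

```python
import math

Os = 27 ** 28     # search space: all 28-character strings (~1.2*10^40)
Es = 10 ** 9      # target space: meaningful English sentences (post's estimate)

# Functional complexity as minus log2 of the target/search ratio:
functional_bits = -math.log2(Es / Os)   # ~103 bits
```

So by this measure the specification "meaningful English sentence" carries roughly 103 bits of functional complexity out of the ~133 Shannon bits per string.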
Okie, there is another (British) sense of "billion", 10^12. I guess that is dying. KF kairosfocus
RexTugwell Right! Thank you! Sorry, I had in mind the power law and so I argued somewhat logarithmically, ah my damn neurons... :) Now I have to correct my mistake in the post, again thanks. niwrad
I'm not the sharpest knife in the drawer and math isn't my forte so forgive me if I'm wrong but isn't 10^29 seconds a trillion times the age of the universe? 10^29 / 10^17 (seconds since big bang) RexTugwell
Graham2 Welcome, first commenter. Thanks for reading my post so early, and for the question. In a sense, it could be the English recognizer I cited. I said that it implies "linguistic intelligence hard to mechanize", and this reinforces my anti-Darwinian thesis (I am niwrad, after all :) ). I could add it as point "c". niwrad
Is there any equivalent of natural selection operating on the results ? Graham2
