Over at Telic Thoughts Bradford resurrected a discussion based on my UD essay, Writing Computer Programs by Random Mutation and Natural Selection. In reference to the quote, “The set of truly functional novel situations is so small in comparison with the total possible number of situations that they will never occur, which is the point of the original post,” I commented as follows:
That was the main point of my essay, that combinatorics produce such huge numbers so quickly and totally swamp islands of function. My 66-character program, assuming only the 26 lower-case letters, produces 2.4 x 10^93 possible outcomes, or the number of subatomic particles in 10 trillion universes.
In fact, the C programming language is case sensitive and uses all 92 characters on a standard keyboard, which produces 4 x 10^129 possible combinations in a 66-character program, or the number of subatomic particles in roughly 10^49 universes.
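These counts are straightforward to verify directly. A quick sketch in Python (the program length and alphabet sizes are taken from the text above; nothing else is assumed):

```python
# A quick check of the combinatorics in the text: the number of possible
# 66-character programs over a 26-letter and a 92-character alphabet.
length = 66

lowercase_outcomes = 26 ** length   # lower-case letters only
keyboard_outcomes = 92 ** length    # full case-sensitive keyboard

assert 2.0e93 < lowercase_outcomes < 3.0e93     # ~2.4 x 10^93
assert 3.5e129 < keyboard_outcomes < 4.5e129    # ~4 x 10^129

print(f"{lowercase_outcomes:.1e}, {keyboard_outcomes:.1e}")
```

Python's arbitrary-precision integers compute these exactly; the assertions confirm the orders of magnitude quoted in the text.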
Evolutionary biologists put blind faith in chance and necessity and arbitrarily invoke “deep time” to make the impossible imaginarily possible. The problem is that deep time is not actually all that deep. There are only about 10^17 seconds in five billion years.
Hard numbers put things in perspective. The probabilities are not a close call; they are catastrophically lopsided.
It can always be argued that we don’t know how many possible amino acid sequences are functional, but we do know that this number would have to be astronomical for chance and necessity to have any realistic probability of coming up with something biologically workable, even for an absurdly short protein. This is why the simple mathematics of combinatorics renders the random-variation thesis simply not credible when it comes to the molecular machinery and information content of even the simplest cell.
A Math Problem to Solve
I’m currently working on an as-yet-undisclosed computational algorithm for which I need to solve a math problem. I will send a free set of my three classical piano CDs (with works by Chopin, Liszt, Rachmaninoff, and Gershwin, along with program notes on the works and their composers) to anyone who can solve the following problem. (My e-mail can be found at the Evolutionary Informatics Lab on the People page.)
The sum of the consecutive integers 1 to n (1 + 2 + 3 + 4 + … + n) is given by n(n+1)/2, or (n^2 + n)/2. n(n+1) is always divisible by 2, since either n or n+1 must be even. Given a generating function, kn + p, where k is an interval and p is an initial offset, it is easy to calculate sums that skip numbers in regular intervals. For example, to sum 3n + 2 for n = 1 to 4, we have 5 + 8 + 11 + 14, or (3+2) + (6+2) + (9+2) + (12+2). That is one 3, plus two 3’s, plus three 3’s, plus four 3’s, plus four 2’s, or 3(1 + 2 + 3 + 4) + 4(2) = 3(10) + 8 = 38. In general, then, the formula k(n(n+1)/2) + np gives the sum of kn + p over the range 1 to n.
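The closed form above can be checked against the worked example with a short sketch (`progression_sum` is my own illustrative name, not from the original):

```python
# Closed form from the text: sum of (k*i + p) for i = 1..n
# equals k * n(n+1)/2 + n*p.
def progression_sum(k, p, n):
    """Sum (k*1 + p) + (k*2 + p) + ... + (k*n + p) via the closed form."""
    return k * n * (n + 1) // 2 + n * p

# The worked example: 3n + 2 for n = 1 to 4 gives 5 + 8 + 11 + 14 = 38.
assert progression_sum(3, 2, 4) == sum(3 * i + 2 for i in range(1, 5)) == 38
```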
I give this by way of background because I must find an analogous general formula for the sum of cubes. I need to sum cubes because Fermat’s Last Theorem tells us that there are no powers greater than 2 for which there are any integer solutions for a^n + b^n = c^n. I use the power of 3 because it’s the smallest power that guarantees that no two integers raised to the third power will ever sum to any other integer raised to the power of 3, which is a requirement of my algorithm.
Fortunately, there is a surprising and beautiful identity, which is that the sum of consecutive cubes of 1 to n (1^3 + 2^3 + 3^3 + 4^3 + … + n^3) is the sum of 1 to n, quantity squared — that is, (1 + 2 + 3 + 4 + … + n)^2, which equals (n(n+1)/2)^2. For example, 1 + 8 + 27 + 64 = (1 + 2 + 3 + 4)^2 = 100.
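The identity is easy to confirm numerically (a sketch; the function name is illustrative):

```python
# Nicomachus's identity from the text: 1^3 + 2^3 + ... + n^3 = (n(n+1)/2)^2.
def sum_of_cubes(n):
    """Closed form for the sum of the first n cubes."""
    return (n * (n + 1) // 2) ** 2

# The example from the text: 1 + 8 + 27 + 64 = 100.
assert sum_of_cubes(4) == 1 + 8 + 27 + 64 == 100

# Check against a direct summation for a range of n.
for n in range(1, 200):
    assert sum_of_cubes(n) == sum(i ** 3 for i in range(1, n + 1))
```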
So, I need a formula that will sum the cubes (kn + p)^3 in the same way that k(n(n+1)/2) + np sums kn + p to the first power. In practice, p will be in the range 0 to k-1, since p is the result of a modulo divide by k.
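For anyone attempting the problem, a brute-force reference implementation is useful for testing a candidate closed form (this is not the closed form being asked for, just a direct summation to check answers against; the function name is my own):

```python
# Brute-force reference for the target quantity:
# (k*1 + p)^3 + (k*2 + p)^3 + ... + (k*n + p)^3.
def cube_progression_sum(k, p, n):
    """Directly sums the cubes of the progression kn + p for n = 1..n."""
    return sum((k * i + p) ** 3 for i in range(1, n + 1))

# Example with k = 3, p = 2, n = 4:
# 5^3 + 8^3 + 11^3 + 14^3 = 125 + 512 + 1331 + 2744 = 4712.
assert cube_progression_sum(3, 2, 4) == 4712
```

Any proposed closed-form formula in k, p, and n should agree with this function for all k, all p in 0 to k-1, and all n.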