In modern mathematics, fractals are complex objects generated from simple formulas. Forms that appear fractal have also been found in biology. Faced with the astonishing geometric shapes of fractals, one might argue as follows: since the complexity of fractal geometries arises from simple formulas, biological fractal complexity could analogously come from simplicity, and consequently intelligent design is not necessary to explain it.

For example, James Gleick in his book “Chaos” writes:

How did nature succeed in developing a [bio] architecture so complex? Mandelbrot’s thesis is that the complications exist only in the context of traditional Euclidean geometry. As fractals, branched [bio] structures can be described with simplicity, with few bits of information. Perhaps the simple transformations that gave rise to the shapes invented by Koch, Peano and Sierpinski have their analogues in the instructions encoded in the organism’s genes. Doubtless the DNA cannot specify [analytically, in detail] the great number of bronchi, bronchioles and alveoli or the particular spatial structure of the resulting tree, but it can specify a repetitive process of bifurcation and development.

Although there is some truth in the above quote, I think it may also implicitly contain a misunderstanding worth considering here. A short philosophical “excursus” may help to introduce and focus the topic.

The general concept that a “form” can come from a “formula” is remarkable per se and is not new. The words themselves are closely tied because they share the root “form” (this is true in other languages too: French “formules et formes”; German “Formeln und Formen”; Spanish “fórmulas y formas”; Portuguese “fórmulas e formas”; Italian “formule e forme”). Formulas are related to numbers, while forms are related to geometric shapes. In the history of Western philosophical thought the former can be linked to Pythagoreanism and the latter to Platonism. Significantly, both of these philosophical frameworks used mathematical concepts to symbolize higher realities. As such they were orthodox and in agreement with the traditional conception of the sciences, which considers them mainly as indirect rational supports or symbols for achieving direct knowledge of higher supra-rational principles, and only secondarily as tools to investigate nature and solve practical problems. Pythagoreanism considered numbers, which are quantitative and substantial entities, to symbolize, by inverse analogy, the more qualitative and essential principles. Platonism considered geometric shapes, which in a sense are the “clothes” of numbers, as cosmological models and symbols of metaphysical realities. Formulas and forms are generalizations of numbers and geometries respectively, and as such were potentially present in the thought of Pythagoras and Plato.

In the history of mathematics, algebra arose as an extension of arithmetic, and so-called analytic geometry was developed as a bridge between algebra and geometry exactly according to this form-from-formula paradigm. Analytic geometry provides a way to visualize algebraic formulas and their inner potentialities. Here is a very simple example: in analytic geometry the quadratic equation x^2+y^2 = 1 displays, as we vary the values of the variables x and y, a circle of radius 1 centered on the origin (0, 0) of the Cartesian axes. In a sense we can say, in the conceptual framework of analytic geometry, that the formula implicitly contains the circle and, conversely, the circle explicitly manifests the formula. Since I have cited Pythagoras and Plato, and I don’t want to seem to exclude another philosophical “pillar” of Western thought, Aristotle, I suggest that the formula is like the “unmoved mover” or “fixed archetype” with respect to the variability or becoming it potentially contains, just as the center is the principle of all points on the circle, which radiate from it by means of the radii. In a formula able to generate forms, the symbolic variables (x, y, z, …) represent fixed tags or placeholders for entire ranges of variability, i.e. infinite sets of values. In other words, a formula serves to synthetically fix what is variable, and conversely a form serves to analytically express in variation what is fixed. We could say, analogically speaking, that formulas are metaphysical while forms are physical. Indeed, the rigor of infinitesimal calculus is based precisely on this fundamental distinction between fixed and variable entities (see René Guénon, Les Principes du calcul infinitésimal).

Fractals are more complex implementations of the above form-from-formula paradigm, because their geometric shapes are generated by particular algorithms in which some parameters of a core formula are varied iteratively and in specific ways. In fractals (much more than in the simple example of the circle above) a relatively simple formula can generate figures of remarkable complexity. However, given the huge number of calculations necessary to plot their images, fractals could be discovered and studied (mainly by Benoit Mandelbrot in the 1970s) only after the advent of the computer era. Mandelbrot also studied fractals involving chance, which introduces irregularities into their geometries. Since those fractals combine determinism and statistical fluctuation, they are suitable for describing natural phenomena in which both chance and necessity operate.

One of the properties of some fractals is called “self-similarity” or “self-likeness”: examined at different zoom scales, these figures exhibit similar patterns or configurations. This property is also called “scale invariance”: the structures appear similar at every scale at which we explore them. The analysis could be extended indefinitely, and we would always find similar structures.

Another property of some fractals is that the complexity of the structure doesn’t decrease when we zoom in: the fractal always exhibits complex details, independently of the zoom factor. In other words, it shows detailed fine structure at every level. This is by no means a general property of all pictures. For example, a page of a book does not have this property. When we examine it at the macroscopic level we see hundreds of characters, which entail some complexity (in terms of information content, x characters imply roughly x bytes of information). When we examine a single character with a lens, the complexity decreases (information = 1 byte = 8 bits). If we examine it at the microscopic level with a microscope, we may see half black and half white (information = 2 bits), or even all black or all white (information = 1 bit); the complexity has decreased still further. To sum up, the complexity and information of a written page are not scale-invariant.

Some biological structures show fractal patterns. A tree is a typical fractal structure: the trunk divides into many branches, each branch in turn divides into many small twigs, and so on gradually down to the leaves. The leaves themselves show fractal patterns: their veins repeat at a smaller scale the ramifications of the tree. One particular fractal is called the “fern” because it astonishingly resembles a real fern. See the following picture:
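The “fern” fractal is typically produced by an iterated function system: a handful of affine maps applied at random, over and over. Here is a minimal Python sketch (the coefficients are Barnsley’s published ones, assumed to match the fern pictured; plotting is omitted, the function only computes the points):

```python
import random

# Barnsley's four affine maps (x, y) -> (a*x + b*y + e, c*x + d*y + f),
# each applied with a fixed probability.
MAPS = [
    # a,     b,     c,    d,    e,   f,    probability
    (0.0,   0.0,   0.0,  0.16, 0.0, 0.0,  0.01),  # stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.6,  0.85),  # successively smaller leaflets
    (0.2,  -0.26,  0.23, 0.22, 0.0, 1.6,  0.07),  # largest left leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # largest right leaflet
]

def barnsley_fern(n_points=10000, seed=0):
    """Return n_points (x, y) pairs tracing a fern-like attractor."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        r = rng.random()
        acc = 0.0
        for a, b, c, d, e, f, p in MAPS:
            acc += p
            if r <= acc:
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        else:
            # floating-point guard: probabilities may not sum to exactly 1.0
            a, b, c, d, e, f, _p = MAPS[-1]
            x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

pts = barnsley_fern()
```

Plotting pts as dots reveals the fern; note how few numbers (24 coefficients) suffice to describe the whole figure.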

In higher organisms, the bronchial tree of the lungs and the branching of the circulatory system have fractal patterns. The fractal structure of some organs allows a great surface to be condensed into a minimal volume. The respiratory capacity of an organism is proportional to the surface area of its lungs; the pulmonary surface area of a human is greater than a tennis court. Likewise, the fractal structure of the circulatory system allows the capillaries to reach every volumetric portion of the body, so it is a good solution to the fundamental problem of blood circulation, which is to feed all tissues. On the relations between length, area and volume in this context, see Mandelbrot’s “The Fractal Geometry of Nature”, chap. IV.

To say that a system is fractal doesn’t mean it doesn’t need design. Quite the opposite: the design of these biological fractal structures and their parameters must be accurate and fine-tuned (on these biological applications of fractals and the comparison between their mathematical theory and biological reality, see ibid., chap. V).

At this point one could ask: given such spectacular biological examples of fractals (whose kernel is a simple formula), why is it misleading to believe that biological complexity can arise from simplicity after all?

One of the reasons is implicitly contained in the very fact we noted above about their discovery, and it can be synthetically expressed as follows: fractals (or, more generally, any geometry arising from formulas) need computer programming. To explain this concept in the simplest way, let’s return to the easy example of the circle and its equation. The symbolic equation x^2+y^2 = 1 per se displays nothing, not a single point in space. The equation alone is entirely unable to generate the plot of the circle, and looking at it one would hardly say at first sight that it can generate a circle. To obtain the plot of the circle we need two additional important things: (1) a space (a plane) with a distance measure and a system of Cartesian x-y axes in the plane, and (2) a specific algorithm to calculate the coordinates and plot the circle on the Cartesian plane.

The algorithm must do this job: (a) start from a given x; (b) obtain the related y from the equation; (c) plot the point (x, y) on the plane; (d) increment x by a step z; (e) go to (b). This sequence has to be iterated a large number of times; when sufficient points have been calculated, the dots are connected. This is exactly what a student does when his math teacher asks him to draw the graph of a function from its equation, and exactly what a computer does when it must perform the same task.
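The steps (a)-(e) can be sketched in a few lines of Python (a minimal illustration, covering the upper half-circle; the lower half is symmetric):

```python
import math

def circle_points(n_steps=200):
    """Trace the upper half of the unit circle x^2 + y^2 = 1:
    start from x = -1, obtain y from the equation, record the
    point, increment x, repeat."""
    points = []
    for i in range(n_steps + 1):
        x = -1.0 + 2.0 * i / n_steps          # steps (a) and (d): step x across [-1, 1]
        y = math.sqrt(max(0.0, 1.0 - x * x))  # step (b): y from the equation
        points.append((x, y))                 # step (c): "plot" the point
    return points

pts = circle_points()  # 201 points; a plotting library would then connect the dots
```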

Fractals, too, need to be generated by iterative processing of their formulas by means of computer graphics technology. To generate a fractal plot, its formula is not enough: a computing engine and software are necessary to map all its indefinite variability into two- or three-dimensional space. Mathematicians use computer programming to make this passage from synthesis (the formula) to analysis (the form). In practice the formula is only one instruction in a program of many instructions, iteratively re-run with different input parameters.

Our algebraic formula x^2+y^2 = 1 is able to symbolically synthesize the entire infinite set of pairs (x, y) of points on a circle. This set has a low algorithmic complexity measure (Kolmogorov complexity) because the algorithm capable of generating it is far shorter than the sequence itself; the set is said to be “compressible”. Therefore the ability to synthesize infinities of values is not peculiar to fractals but is intrinsic, in a sense, to many mathematical formulas containing symbolic variables (x, y, z, …). Certainly fractals, given their complex pictures, are spectacular examples of this characteristic, but even complex fractals have a relatively low Kolmogorov complexity, because in practice they are generated by iterating simple formulas, as, for example, the formula for the famous Mandelbrot set: f(z) = z^2+c, where z is a complex variable and c is a complex constant (a “complex number” is expressed as a + ib, where i is the square root of -1).
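To illustrate how little code the iteration f(z) = z^2 + c requires, here is a minimal Python sketch (the escape radius 2 and the iteration cap are the conventional choices; coloring each pixel of the plane by the returned count produces the familiar pictures):

```python
def mandelbrot_iterations(c, max_iter=100):
    """Iterate f(z) = z^2 + c from z = 0 and return the number of
    steps taken before |z| exceeds 2; points that never escape
    (count == max_iter) are taken to belong to the Mandelbrot set."""
    z = 0 + 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

inside = mandelbrot_iterations(0 + 0j)   # never escapes
outside = mandelbrot_iterations(1 + 0j)  # escapes after a few steps
```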

Some could argue as follows: if biological systems encode the information of their organizational complexity by means of formulas (fractal or of other mathematical kinds), their complexity is only apparent. This conclusion is incorrect for the following reasons. Aside from the fact that not all biological CSI is encoded this way, if it is true that the formulas generating fractals can be simple and that the Kolmogorov complexity of their developments is relatively low, it is also true that, in general, using such a formula (instead of some other means) to obtain a particular result is itself a sign of intelligence. In mathematics, the search for formulas capable of symbolically expressing the solution of a problem is, in general, a hard problem whose solvability is not at all guaranteed. The mathematicians who specialize in this kind of difficult problem are called algorists. Besides, supposing we have found the formula, we must still exploit all its potentiality by calculating all the necessary values. In Aristotelian terms, the passage “from potency to act” has to be accomplished, and this passage shows that real complexity is not free. To accomplish it, it is necessary to develop a program, and to run the program we have to design at least a minimal computer or instruction-processing engine. To sum up, when we consider the system in its entirety (formula + program + computer), and eventually its physical/chemical implementation in biology, we are led to a scenario that is not at all simple. In this scenario intelligent design (ID) is undeniable, inasmuch as all three of its components demand intelligence.

For the reasons above, the existence in biological systems of compressed information, i.e. information encoded in an apparently simple way, cannot be used to deny intelligent design or to justify oversimplified explanations, such as the self-organization or creation of information from nothing typical of evolutionism. Moreover, fractals explain only some aspects of certain biological systems, usually their external form or shape. Many other things are not explained this way: for instance, their internal workings and processes (which imply sophisticated information processing) usually are not explained by formulas, and these represent the major part of their overall functional information. In general, functional information cannot be obtained from formulas. In any case, if it is true that formulas explain a lot, it is even more true that only intelligence can invent formulas and all their products (forms included). In this way the correct causal relationship, in which the lesser comes from the greater, is fully acknowledged (intelligence creates formulas, and formulas in turn create forms). To think otherwise is to put the cart before the horse and to believe, wrongly, that more comes from less.

I think what this OP demonstrates is that ID can no longer claim that the probability of any object can simply be based on that object’s final form. As an example, randomly guessing a binary input value of 2^64 is significantly more probable than guessing an output bitmap of 2^1000000.

I’ll give you that the probability expansion processor component is undefined, but done once, i.e., a generic one, it could serve as the expansion process for a variety of end results.

This still means that the expander must now be taken into consideration when calculating FSCI.

A simple count of the number of bits of information in the output, can no longer can be used as the sole parameter for probability.

I agree with your main thesis, but I have a couple points of contention. First, what I agree with:

Exactly. An elegant solution that ideally solves a complex problem is not what I would expect accidents to discover. I would put that ability out of the intellectual range of most humans, as well.

Take a physiology class and you will be struck not so much by the fractal nature of the circulatory and respiratory systems as by the ingenious microbiological functions that exist in them and make life possible. These are irreducible in their complexity and information value, and are an even larger hurdle for reductionism to explain than the wonderful elegance of biological fractals.

That is not all I agreed with but I’ll skip to my questions:

I’m not immediately convinced that all three require intelligence. Is not the program simply the physical laws, which autonomously process biological information into function, and is not physical space the “space” or computer required to generate the form? That is ignoring the argument that the lawfulness of nature is inherently intelligent and purposeful in itself, but I’m less apt to understand or argue from such metaphysical viewpoints.

Even so, the formula certainly did not come from a blind watchmaker.

Toronto @1,

That should say,

“I’ll give you that the probability of the expansion processor component is undefined, ..”

Toronto @1,

Sigh..

” ..can no longer be used as the sole parameter for probability.”

Thanks niwrad, that was an excellent read.

I agree with Toronto that we can’t examine the phenotype and simply say “How complicated, it must be designed.” But I think most of the ID debate does not rely on this level of argument.

Some fractal generation systems, such as L-systems, are explicitly based on biological models. As with Koch curves and other historical work on fractals, this work predates the use of computers as a tool of science. (The same is also true of much of the work on CAs.)
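For readers unfamiliar with them, L-systems generate structure by parallel string rewriting; interpreting the resulting string graphically yields branching forms. A minimal Python sketch of the rewriting step, using Lindenmayer’s original algae model (rules A → AB, B → A):

```python
def lsystem(axiom, rules, n):
    """Rewrite every symbol of the string in parallel, n times."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae model: the string lengths follow the Fibonacci numbers
algae = lsystem("A", {"A": "AB", "B": "A"}, 5)  # "ABAABABAABAAB"
```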

I think there are some aspects of genotype/development/phenotype that can be aligned with formula/program/picture, but the analogy is limited.

Nakashima @6,

Agreed.

My whole point is addressed to those on the ID side, and it’s not all of them, who claim that an object described in 2^1000 bits of information has a probability of 1 in 2^1000 of occurring.

This OP shows that the odds against it could be much lower.

Toronto #7

Consider this scenario. Let’s call FPC a system composed of (formula + program + computer), such as those necessary to generate fractals. An FPC is an intelligent design. As such, say it contains x bits of CSI (complex specified information). Eventually this FPC generates an output of y bits, where y is far greater than x. Question: can we really say that (y-x) bits of CSI have been created for free?

The answer is no. The y bits are not bits of CSI. To be CSI an output must be both specified and complex, while the output of an FPC can indeed be complex but not specified. In other words, the y bits don’t entail contingency (they are deterministically generated), and without contingency there is no CSI. At best we could say that the specification of the output is never greater than the specification of its generator (whatever the output). In philosophical terms: an effect cannot contain more essence/quality than its cause. This explains why there are no (y-x) bits of CSI created for free in this scenario. In general, not a single bit of CSI is costless (“no free lunch”).

Mr niwrad,

If I understand you correctly, you can’t expect the CSI of the bacterial flagellum to be larger than the CSI of its generator, is that correct?

I have to say that I find the claim that you need a computer to generate a fractal somewhat bizarre. Just as we see some simple forms in nature, such as spheres and circles, we also see many fractals. That was the point of Mandelbrot’s chapter on How Long Is the Coast of Britain? I spent time in college measuring the fractal nature of vegetation patches in the Okefenokee Swamp.

But as your fern illustration shows, you are really talking more about iterated function systems than fractals. Our physical bodies are built by repeated application of the same set of rules over and over again. Because these rules are context dependent, one cell type can build many cell types, and precursors can become complex organ systems. Some aspects of some of those organ systems exhibit a fractal dimension – the surface of our lungs and brains, the branching of our blood vessels.

I am at a loss to understand how you can claim “fractals need computer programming” after quoting from a book entitled The Fractal Geometry Of Nature.

niwrad @8,

Some ID’ers have suggested that the cell functions like a computer that generates output based on DNA and possibly other inputs. The living cell, however, is primarily what this whole debate is all about. Since that is the point under contention, the EVO side can claim that it is an example of an FPC that has not been intelligently designed.

If a cell is an FPC, and the output of an FPC does not contain CSI, then we humans, the resultant output of a single cell, do not contain CSI.

Nakashima #9

Yes. This is true in general for any generator and its outputs.

The fractals in nature are generated by the physical/chemical laws. In a sense these laws are the instructions continually executed by that giant computer which is the universe. This is one of the claims of so-called “digital philosophy” (see, for example, the works of G. Chaitin).

Toronto #10

As a rough approximation, we could say that a cell is a sort of FPC containing some formulas and a huge amount of software. Only a very small part of this software serves to iterate formulas to generate fractal structures (or simply iterated, repetitive, modular stuff). Most of it serves to create organization (CSI) that is impossible to obtain simply by iterating formulas (as Nakashima #6 rightly says, in biology what can be fractalized and modularized is limited). We see that in human technology, hardware and software are always the output of intelligence. It must be the same for biological technology. This is a basic claim of ID: there cannot be a double standard, i.e. human technology is designed but natural bio-technology is not. This double standard is the error of the EVO side. Nowadays, when human and natural technologies are before everyone’s eyes, this error is even more serious than in Darwin’s time.

The output of an FPC (and in general of any system able to output something) cannot contain more CSI than the FPC itself contains. A system can develop only what it potentially contains within itself. In even more general terms, an effect is virtually in its cause. In a sense, the CSI of the effect is nothing other than the CSI of its cause. It is legitimate to speak of the CSI of the effect; the error occurs when one doesn’t recognize its causal counterpart and thinks that the effect comes from less, or even from nothing (as evolutionism does). In particular, the immense CSI of the human body is entirely contained in its germ. Notice that here I say generically “germ”, not specifically “DNA”, because DNA is only a small part of the CSI involved in the “human project”. So you are right when you say “DNA and possibly other inputs”.

niwrad @12,

But that is precisely what the ID/Evo debate is about. We say that the hardware and software (our brain and mind?), for example, are the results of natural processes.

You cannot use the assertion that we are wrong, as evidence to prove we are wrong.

As far as the origins of life, I don’t know, but starting with a very basic first life, evolution is the description of the process that allowed life to fine-tune itself to the universe.

I think your OP clearly shows, however, that 2^1000 bits of information does not imply a probability of 1 in 2^1000 of randomly arriving at that information.

Are we agreed here?

niwrad, do your “calculations” fit anything Dr. Dembski has published?

Toronto #13

Sorry, but I think not. First, your numbers seem a little odd to me. The standard measure of the information I in an event of probability p is I = log(1/p), where log is the base-2 logarithm. Hence 1000 bits of information imply a probability of 1 in 2^1000.
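The relation can be checked mechanically (a trivial sketch of I = log2(1/p)):

```python
import math

def information_bits(p):
    """Self-information I = log2(1/p), in bits, of an event of probability p."""
    return math.log2(1.0 / p)

# 1000 bits of information correspond to a probability of 1 in 2^1000
bits = information_bits(2.0 ** -1000)  # 1000.0
```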

My OP doesn’t state that x bits of CSI have a probability of occurring greater than 1 in 2^x. It suggests that an FPC system containing x bits of CSI can eventually generate y bits, where y may be greater than x, but these y bits are only apparently CSI bits. The probability of the FPC randomly occurring is 1 in 2^x, while the probability of its output is 1 in 2^y.

Of course these observations, based on the standard concepts of probability and information, don’t exclude a priori that, for other reasons, the FPC system has an even lower probability, or even a probability of zero, of randomly occurring.

niwrad @15,

They seem odd to me too when I read what I wrote! 🙂

You’re absolutely right.

My point was that there are some ID’ers who claim x bits have a random probability of 1 in (2^x).

I hope we can agree that that claim is not valid.

osteonectin #14

My article doesn’t contain calculations properly speaking. However, I got my first inspiration from Dembski’s book “Intelligent Design”, especially chapter 6.2. I have tried, although imperfectly, to provide some comments on what Dembski masterfully wrote there about fractals and CSI.

uoflcard #2

Good question.

In nature many fractals arise simply from physical laws. In this case the pair (formula + program) defaults to the laws. As for the third element, the “computer”, as I already wrote, these laws are instructions run by the processing engine of the cosmos.

As for biological fractals, by contrast, I think that physical laws alone are not enough to explain them. Bio-fractals need more intelligence, and in this case the FPC paradigm is real in all its components.

I agree that there is some truth in your “the lawfulness of nature is inherently intelligent and purposeful itself”, but on two conditions: (1) this embedded “intelligence” is low-level and can account only for the simplest forms; (2) the highest forms (like the biological ones) need additional, huge injections of organization from an intelligent designer. These injections far transcend the natural laws.