I’d like to begin my post by inviting readers to watch a short video of a 2010 TED talk titled, The roots of plant intelligence, by Professor Stefano Mancuso, who works in the Department of Plant, Soil & Environmental Science at the University of Florence. Professor Mancuso, who describes himself as a plant neurobiologist, is a leading advocate of the radical idea that plants are intelligent, and that their intelligence is embodied not in a brain but in a highly resilient distributed network whose information processing is, in its own way, just as sophisticated as that found in animals. Reporter Michael Pollan interviewed Mancuso and his colleagues for a thought-provoking article, titled, Are Plants Intelligent?, which was published in the New Yorker last month (see also here). Pollan’s article makes for compelling reading, and I would warmly recommend it to anyone with an interest in Intelligent Design.
Personally, I thought that some parts of Professor Mancuso’s stimulating talk were better argued than others, but it was definitely one of the most interesting talks I’ve listened to in a while. The part I liked best was about biomimicry – or bioinspiration, as Mancuso prefers to call it – and I’ll be discussing that, at the end of my post. I was not so impressed with Professor Mancuso’s definition of “intelligence,” which I thought was too simplistic, but he did make a good point when he argued that plants have a much wider range of capacities than people give them credit for. While I certainly wouldn’t say they have minds, they can sense and remember.
What got Professor Mancuso interested in plants?
The laboratory run by Professor Mancuso is like no other: it is the only lab in the world that does research on plant intelligence. Situated about seven miles outside Florence, Italy, the International Laboratory of Plant Neurobiology (LINV) was christened back in 2004. According to a Wired.com report by Nicole Martinelli (30 October 2007), “Mancuso decided to use the controversial term ‘plant neurobiology’ to reinforce the idea that plants have biochemistry, cell biology and electrophysiology similar to the human nervous system.”
In his report for the New Yorker, Michael Pollan tells the story of how the new field of plant neurobiology took off in 2006, following the publication of a controversial article in Trends in Plant Science by Eric D. Brenner and five colleagues (including Mancuso), which proposed the creation of a whole new field of scientific inquiry:
The six authors — among them Eric D. Brenner, an American plant molecular biologist; Stefano Mancuso, an Italian plant physiologist; František Baluška, a Slovak cell biologist; and Elizabeth Van Volkenburgh, an American plant biologist — argued that the sophisticated behaviors observed in plants cannot at present be completely explained by familiar genetic and biochemical mechanisms. Plants are able to sense and optimally respond to so many environmental variables — light, water, gravity, temperature, soil structure, nutrients, toxins, microbes, herbivores, chemical signals from other plants — that there may exist some brainlike information-processing system to integrate the data and coördinate a plant’s behavioral response. The authors pointed out that electrical and chemical signalling systems have been identified in plants which are homologous to those found in the nervous systems of animals. They also noted that neurotransmitters such as serotonin, dopamine, and glutamate have been found in plants, though their role remains unclear.
Hence the need for plant neurobiology, a new field “aimed at understanding how plants perceive their circumstances and respond to environmental input in an integrated fashion.” The article argued that plants exhibit intelligence, defined by the authors as “an intrinsic ability to process information from both abiotic and biotic stimuli that allows optimal decisions about future activities in a given environment.” Shortly before the article’s publication, the Society for Plant Neurobiology held its first meeting, in Florence, in 2005. A new scientific journal, with the less tendentious title Plant Signaling & Behavior, appeared the following year. (Emphases mine – VJT.)
Pollan describes Mancuso as “the poet-philosopher of the movement, determined to win for plants the recognition they deserve and, perhaps, bring humans down a peg in the process… Mancuso is fiercely devoted to plants — a scientist needs to ‘love’ his subject in order to do it justice, he says.”
Mancuso dates his conviction that plants are smarter than we think back to an early “Star Trek” episode called “Wink of an Eye,” which he watched as a teenager. In that episode, a race of aliens who live in an accelerated time frame arrive on Earth. Unable to detect any movement in human beings, they conclude that humans are lifeless, and then proceed to utilize them for their own purposes, just as we do with plants. Could it be, Mancuso wondered, that plants are as smart as we are, but just slower? Are humans guilty of colossal arrogance in their treatment of plants?
Plant neurobiologists revere plants as the Earth’s dominant life-form: one plant neurobiologist interviewed by Pollan spoke of humans and other animals as “just traces,” when compared to plants, which comprise the bulk of the Earth’s biomass. Indeed, Pollan’s article states that plants make up 99% of the total biomass, but the true figure appears to be closer to 55%, according to recent research conducted in 2012, with microbes making up 30% of the total. (Animals, including insects, comprise less than 1%.)
Although he does not object to eating plants, Professor Mancuso argues that we do have certain obligations towards them, including the duty to protect their environment, refrain from manipulating their genes, and avoid practices that stunt their growth, such as bonsai.
Has Mancuso discredited Aristotle’s psychic hierarchy?
Bust of Aristotle. Marble, Roman copy after a Greek bronze original by Lysippos from 330 BC; the alabaster mantle is a modern addition. Courtesy of Jastrow, Ludovici Collection and Wikipedia.
In his De Anima, Book II, part 3, the philosopher Aristotle (384-322 B.C.) described what has become known as the psychic hierarchy of living things, with plants on the bottom, animals in the middle, and man on top:
Of the psychic powers above enumerated some kinds of living things, as we have said, possess all, some less than all, others one only. Those we have mentioned are the nutritive, the appetitive, the sensory, the locomotive, and the power of thinking. Plants have none but the first, the nutritive, while another order of living things [animals – VJT] has this plus the sensory. If any order of living things has the sensory, it must also have the appetitive; for appetite is the genus of which desire, passion, and wish are the species; now all animals have one sense at least, viz. touch, and whatever has a sense has the capacity for pleasure and pain and therefore has pleasant and painful objects present to it, and wherever these are present, there is desire, for desire is just appetition of what is pleasant… Certain kinds of animals possess in addition the power of locomotion, and still another order of animate beings, i.e. man and possibly another order like man or superior to him, the power of thinking, i.e. mind. (Emphases mine – VJT.)
In his TED talk, The roots of plant intelligence, Professor Mancuso took aim at Aristotle’s psychic hierarchy, with its denigration of plants. As we have seen, the research conducted by Mancuso and his associates has shown convincingly that plants have a wide variety of sensory powers: they are able to sense light, water, gravity, temperature, soil structure, nutrients, toxins, microbes, herbivores, and chemical signals from other plants. Does that mean that Aristotle’s distinction between plants, which lack sensory powers, and animals, which possess them, can no longer be defended?
In my 2007 thesis, The Anatomy of a Minimal Mind (see pages 157-173), I addressed claims made in the scientific literature that plants possess sensory powers (although I was not then aware of Professor Mancuso’s research), as well as powerful counter-arguments by the late Dr. Rodney Cotterill that only animals with nervous systems could be said to possess true senses. I concluded that on a broad definition of “sense”, any organism possessing (i) sensors that can encode and store information relating to a stimulus, and (ii) a built-in capacity to measure the degree of change in the sensor’s state when it encounters the stimulus, can be said to sense the stimulus. On that definition, plants unquestionably possess senses – indeed, even bacteria do. I then added that on a narrower definition, the verb “sense” can be restricted to organisms whose sensors are dedicated receptor cells, which trigger a distinctive, built-in, rapid-response motor pattern (i.e. a reflex) which is specific to the signal and independent of the organism’s internal state.
Plants have “enormous numbers of plant cell surface receptors,” according to an article by Niko Geldner and Silke Robatzek, titled, Plant Receptors Go Endosomal: A Moving View on Signal Transduction (Plant Physiology, August 2008, vol. 147, no. 4, 1565-1574). But there’s more:
Although overlooked in the past, current knowledge supports the idea of receptor signaling, not only from the surface but also from endosomes. [The endosome is a membrane-bounded compartment inside eukaryotic cells – VJT.] In plants, pioneer studies on receptors that show ligand-induced as well as constitutive endocytosis provide evidence for the accumulation of active receptors in endosomes and uncover complex trafficking routes leading to recycling and degradation… As such, translocation of plasma membrane (PM) resident receptors into endosomes can be seen as a means to extend limited signaling surface, adding plasticity and modularity to the PM and ensuring a robust and efficient cellular signaling system. (Emphases mine – VJT.)
Geldner and Robatzek’s paper certainly establishes that plants have senses, in a more robust sense than scientists had until recently imagined. On the other hand, I have been unable to locate any evidence in the scientific literature that plants possess rapid-response reflexes which are specific to one particular kind of chemical signal, as most animals do.
I conclude that some degree of sensitivity is universal to living things, and that in this respect, plants are far more like animals than scientists and philosophers previously believed. Aristotle’s concept of a purely vegetative soul, devoid of sensory powers, therefore has to be discarded.
But sensation is one thing; intelligence, quite another. What are we to make of Professor Mancuso’s claims that plants possess an intelligence of their own, albeit one which is far slower than ours in its ability to process information?
What does Mancuso mean by “intelligence”?
Any sensible discussion of “intelligence” has to start with a definition of the term. In his interview with Professor Mancuso for the New Yorker, reporter Michael Pollan got straight to the point and asked him to provide a definition:
Early in our conversation, I asked Mancuso for his definition of “intelligence.”… Most definitions of intelligence fall into one of two categories. The first is worded so that intelligence requires a brain; the definition refers to intrinsic mental qualities such as reason, judgment, and abstract thought. The second category, less brain-bound and metaphysical, stresses behavior, defining intelligence as the ability to respond in optimal ways to the challenges presented by one’s environment and circumstances. Not surprisingly, the plant neurobiologists jump into this second camp.
“I define it very simply,” Mancuso said. “Intelligence is the ability to solve problems.” (Emphases mine – VJT.)
Intelligence: more than just problem solving
Slime molds (Mycetozoa) from Ernst Haeckel’s 1904 Kunstformen der Natur (Artforms of Nature). Slime molds are able to compute and remember the shortest distance between multiple different food sources, and Professor Mancuso believes plants can do the same thing. Does this ability make them intelligent? Image courtesy of Wikipedia.
Professor Mancuso’s definition of intelligence as the ability to solve problems is simple and beguilingly practical, but it overlooks a great deal. While problem solving is certainly part of what most people would call intelligence, other capacities are arguably much more important. In our society, we value the capacity for creating new puzzles, or posing novel questions, far more than the ability to solve old problems; indeed, the latter capacity is generally viewed as rather mundane, which is why we delegate the task to computers whenever we can. In the field of philosophy, where problems are seldom, if ever, solved to everyone’s satisfaction, the greatest philosophers are deemed to be those who can pose conundrums (such as the Gettier problem, or the Chinese room thought experiment) that keep us arguing for decades, and sometimes centuries.
Another thing that Mancuso overlooks is that most problems don’t have just one solution; they have multiple solutions. But not all solutions are equal. In the first place, some solutions are much more efficient than others, so it obviously pays to find the optimal solution. On that score, plants are almost certainly competent: as Michael Pollan reports, even “slime molds, which are a kind of amoeba, grow in the direction of multiple food sources simultaneously, usually oat flakes, in the process computing and remembering the shortest distance between any two of them,” and because plants are already “analog electrical computers,” in the words of computer scientist Andrew Adamatzky, who has worked extensively with slime molds, he is optimistic that he and Mancuso will be able to harness plants for computational tasks in the future. So far, so good.
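The kind of computation being attributed to slime molds here – finding shortest routes through a network of food sources – is, on a conventional computer, a textbook graph problem. Here is a minimal sketch using Dijkstra’s algorithm; the network of “food sources” and the distances between them are invented for illustration:

```python
import heapq

def shortest_path_length(graph, start, goal):
    """Dijkstra's algorithm over a weighted graph given as
    {node: {neighbour: distance}}. Returns the length of the
    shortest path from start to goal."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter route was already found
        for neighbour, w in graph[node].items():
            nd = d + w
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return float("inf")

# A hypothetical network of food sources and the distances between them.
food_web = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}

print(shortest_path_length(food_web, "A", "D"))  # 4, via A -> B -> C -> D
```

The point of the comparison is that the slime mold arrives at the same sort of answer without any explicit algorithm at all, simply by growing and pruning.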
But there’s a second important difference between good and bad solutions: some solutions to problems are much more elegant than others. In the field of mathematics, an elegant proof of a theorem is especially admired, and mathematicians esteem the beauty of the short, simple proofs of Euclid’s theorem far more highly than Appel and Haken’s 1976 proof of the four-color theorem, which struck many referees as clumsy and bulky, and which relied heavily on the assistance of a computer.
The element of beauty is completely absent from the plant computations envisaged by Professor Mancuso. Even if plants can solve problems, after a fashion, they can’t solve them in a way that strikes us as beautiful. Neither, for that matter, can animals, although the flashes of insight that enable crows to manipulate three tools in succession to get at some food might be considered beautiful, from an intellectual standpoint.
Professor Mancuso also ignores the fact that when solving a problem, we are expected to “show workings,” or explain how we arrived at the solution – and even if we are accomplished problem solvers who can obtain the solution with little or no thought, we should still be able to explain how we did so to others, if we are asked to do so. Problem solvers who are unable to provide such an explanation are said to be performing their task mechanically, and without understanding. Non-human animals fail signally at meeting this requirement: despite impressive reports of New Caledonian crows using three tools in succession to get at some food and making tools with their beaks, no crow that successfully manipulated objects with its beak in order to get at some food has ever answered the question, “Why did you do it that way?” Nor do crows ever ask one another that question: for instance, we never observe crow parents instructing their offspring, “If you bend the hook this way, it can pick up a piece of meat, but if you bend it that way, it can’t.” It is for this reason that we say that while these animals possess great skill, as well as the capacity to mentally order their actions in a sequence, they lack understanding of what they are doing. And since any explanation of why a goal-directed action should be performed in a particular way has to be given in some language, it follows that language and intelligence are therefore inseparable.
Professor Mancuso may be inclined to shrug off the ability to “show workings” as icing on the cake of intelligence: a nice feature if you’ve got it, but not a necessary condition for the possession of intelligence. But I would urge him to reconsider. Two little stories will serve to illustrate why the ability to explain how you arrived at the solution matters. The first story is taken from my high school years. After a grueling mathematics exam, one of my classmates told me about a problem he’d guessed the answer to. The answer he put down on the exam paper was -2. You should have seen the shocked expression on his face when I told him that his answer was actually correct. Now tell me: did he solve the problem?
My second story dates from 2001, when I was training to be a high school teacher in Canberra, Australia. I remember a remark made by one mathematics teacher about a student who solved a math problem but was unable to show the steps he’d taken to arrive at the solution. She said, “If he can’t explain how he solved the problem, then he doesn’t really understand his solution.”
But, it will be objected, aren’t we all familiar with stories of lonely geniuses who were capable of great flashes of insight, but who were often poor at communicating the significance of their discoveries to their peers? And isn’t it true that some people mostly think in terms of pictures or images (Einstein being a notable example)? That may be so. But the lonely genius must have been able to communicate her ideas to someone, or we wouldn’t know about her. And while we stand in awe of Einstein’s Theory of Relativity, the fact remains that he still had to use language to explain it to his colleagues. He was a pretty good writer, too, as his 1905 paper on special relativity shows.
As we have seen, Professor Mancuso equates intelligence with problem-solving. However, a solution to a problem is the sort of thing that can be legitimately critiqued by one’s peers, and expert problem solvers are expected to be able to justify their procedure for arriving at the solution, against any critics who query whether their way is the right way, or whether it’s the best way. Problem solvers who are unable to respond to criticism of this kind are assumed to possess only a superficial knowledge of their field; while they may have some understanding of what they do, it is obviously not a very deep understanding.
I conclude that speculation about the existence of a totally non-verbal intelligence is absurd and nonsensical.
Where does all this leave plants? Certainly, they are capable of solving problems in their environment, but as we have seen, there are weighty reasons for denying that such an ability constitutes intelligence in the true sense of that word: they can’t create new problems, they can’t appreciate the difference between elegant and inelegant solutions to problems, they can’t explain how they arrived at the solution to the problems they solve, and they can’t defend their solution against critiques by outsiders.
Given these significant disanalogies between human and plant problem solving which pertain to the very nature of intelligence, I am forced to conclude that Professor Mancuso’s definition of “intelligence” is inadequate.
Problem solving: not only in plants, but everywhere in Nature
Gosper’s Glider Gun creating “gliders” in the cellular automaton Conway’s Game of Life. Image courtesy of Kieff and Wikipedia.
Professor Mancuso should also be aware that if he defines the term “intelligence” broadly as the capacity for problem-solving, then practically anything – including some very simple non-living systems – can be viewed as intelligent. In his ground-breaking book, A New Kind of Science (Wolfram Media, 2002), mathematician Stephen Wolfram argues for the ubiquity of intelligence, which he defines as “an ability to perform sophisticated computations” (2002, Chapter 12, Section 10, page 822). Wolfram contends that all of the general behavioral characteristics of living things, including their much-vaunted complexity, can be mimicked by a variety of computational systems – even systems with very simple rules. Indeed, he argues that practically any natural or man-made system can be programmed to solve the same range of computational problems – given enough time and memory – as a universal Turing machine:
With the development of computer technology it became clear that many features of intelligence could be achieved in systems that are not biological. Yet our experience has still been that to build a computer requires sophisticated engineering that in a sense exists only because of human biological and cultural development.
But one of the central discoveries of this book is that in fact nothing so elaborate is needed to get sophisticated computation. And indeed the Principle of Computational Equivalence implies that a vast range of systems – even ones with very simple underlying rules – should be equivalent in the sophistication of the computations they perform.
So in as much as intelligence is associated with the ability to do sophisticated computations it should in no way require billions of years of biological evolution to produce – and indeed we should see it all over the place, in all sorts of systems, whether biological or otherwise.
And certainly some everyday turns of phrase might suggest that we do. For when we say that the weather has a mind of its own we are in effect attributing something like intelligence to the motion of a fluid. (p. 822) (Emphases mine – VJT.)
Wolfram goes on to argue (p. 823) that there is no general feature by which we can distinguish human intelligence from other kinds of intelligence. Learning and memory can take place in any system that has “structures that form in response to input, and that can persist for a long time and affect the behavior of the system” – even “simple cellular automata” or “a fluid that carves out a long-term pattern in a solid surface.” Adaptation to complex situations occurs in a great many systems – for instance, “a fluid flowing around a complex object minimizes the energy it dissipates.” Nor is abstraction unique to human beings: “as soon as one thinks of a system as performing computations one can immediately view features of those computations as being like abstract representations of input to the system.”
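The “simple cellular automata” Wolfram has in mind really are simple: an elementary cellular automaton updates a row of 0/1 cells according to a rule number between 0 and 255, and Rule 110, used in the sketch below, has been proved capable of universal computation. The starting row here is arbitrary, chosen purely for illustration:

```python
def step(cells, rule):
    """Advance a 1-D elementary cellular automaton by one generation.
    `cells` is a list of 0s and 1s; `rule` is the Wolfram rule number
    (0-255). Cells beyond the boundary are treated as 0."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        centre = cells[i]
        right = cells[i + 1] if i < n - 1 else 0
        pattern = (left << 2) | (centre << 1) | right  # a value from 0 to 7
        out.append((rule >> pattern) & 1)  # look up the rule's bit for it
    return out

# Rule 110 -- one of the "very simple underlying rules" Wolfram discusses;
# it has been proved capable of universal computation (Cook, 2004).
cells = [0] * 20 + [1]
for _ in range(5):
    cells = step(cells, 110)
print(sum(cells))  # number of live cells after five generations
```

That an update rule expressible in a dozen lines can, in principle, compute anything a universal Turing machine can is precisely what gives Wolfram’s equivalence claim its bite.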
Wolfram concludes that there is nothing special about human beings, after all:
…[I]n Western thought there is still a strong belief that there must be something fundamentally special about us. And nowadays the most common assumption is that it must have to do with the level of intelligence or complexity that we exhibit. But building on what I have discovered in this book, the Principle of Computational Equivalence now makes the fairly dramatic statement that even in these ways there is nothing fundamentally special about us.
…[T]he Principle of Computational Equivalence asserts that almost any system whose behavior is not obviously simple will tend to be exactly equivalent in its computational sophistication.
So this means that there is in the end no difference between the level of computational sophistication that is achieved by humans and by all sorts of other systems in nature and elsewhere.
For my discoveries imply that whether the underlying system is a human brain, a turbulent fluid, or a cellular automaton, the behavior it exhibits will correspond to a computation of equivalent sophistication. (p. 844) (Emphases mine – VJT.)
Wolfram believes that all of these natural and artificial systems must be alive and intelligent in some way – a position he describes as “animism” (2002, p. 845).
Professor Mancuso evidently believes that we should respect the intelligence of plants, but his definition of intelligence implies that we should extend the same respect to crystals, or to dust whirling about in the wind. I respectfully put it to him that such a viewpoint is ethically paralyzing: in order for civilization to progress, our society needs to have the freedom to exploit things that it deems to be “resources.” Consequently, there has to be something that we can treat like dirt. And if that something is not dirt, then what is it?
A British-style crossword grid. Some highly intelligent people are able to solve the Times crossword in 30 minutes, and detective writer Martha Grimes can solve it in as little as 15 minutes. Image courtesy of Michael J. and Wikipedia.
Professor Mancuso is opposed to a form of ethical bigotry (as he sees it) which I’ll call “speed discrimination”: on his view, the mere fact that plants live and move and grow within a much slower time-frame than ours should not keep us from appreciating their intelligence. But when it comes to solving problems, speed matters. We admire the mentally sharp woman who can solve the Times crossword in 30 minutes, but we would never think of a laggard who took 30 years to do the crossword as being equally intelligent, and we would be singularly unimpressed if he were to argue that the time taken to solve the crossword was irrelevant when assessing his verbal intelligence, and that only the ability to eventually solve the crossword should count. (Think of a cocktail party, too: a person who gave a good comeback to a witty remark would not be admired for his intelligence if it took him an hour to do it, instead of a split-second.)
There are also many simple, everyday problems that every adult in our society is expected to solve, like paying for a cup of coffee, replying to an invitation, and filling out a form. We normally deem the inability to solve these problems within a certain period of time (the length of which is determined by social convention) as being equivalent to not being able to solve them at all. The same goes for motor skills: anyone would consider a child who can’t tie his shoelaces on the tenth attempt to be utterly incapable of tying them. The reason why we stipulate a cutoff point for these problem-solving tasks is simple: time is a scarce commodity, and we can’t wait around all day.
The insight that speed matters also enables us to see what’s wrong with Stephen Wolfram’s argument, that since even simple systems can (given enough time) solve the same computational tasks that humans can, they should be regarded as equally intelligent. What if the time taken for a very simple system to solve a given problem – say, a Traveling Salesman problem – exceeded the age of the universe? Should we be impressed then? Surely not.
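The point can be made concrete. A brute-force Traveling Salesman solver does eventually find the optimal tour, but the number of tours it must check grows factorially with the number of cities. The sketch below uses an invented four-city distance matrix to show both the method and the blow-up:

```python
import math
from itertools import permutations

def brute_force_tsp(dist):
    """Exact Traveling Salesman tour length by exhaustive search.
    `dist[i][j]` is the distance between cities i and j; the tour
    starts and ends at city 0. Checks all (n-1)! candidate tours."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best

# Four hypothetical cities: trivial for exhaustive search...
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(brute_force_tsp(dist))  # 80, for the tour 0 -> 1 -> 3 -> 2 -> 0

# ...but even at a billion tours per second, 30 cities would take
# tens of thousands of times the age of the universe (~4e17 seconds).
print(math.factorial(29) / 1e9 > 4e17)  # True
```

A solver that is guaranteed to finish “eventually,” on a timescale like that, is exactly the laggard of the crossword example: the eventual answer is worthless.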
Or take a more substantial problem: the creation of life. In a peer-reviewed article, The Cosmological Model of Eternal Inflation and the Transition from Chance to Biological Evolution in the History of Life (Biology Direct 2 (2007): 15, doi:10.1186/1745-6150-2-15), evolutionary biologist Dr. Eugene Koonin claims that the emergence of even a basic replication-translation system on the primordial Earth is such an astronomically unlikely event that we would need to postulate a vast number of universes, in which all possible scenarios are played out, in order to make its emergence likely. Using a toy model which makes deliberately generous assumptions, he calculates that the probability that a coupled translation-replication system emerges by chance in an observable universe like our own, within the time available, is 1 in 10^1,018 – that is, 1 followed by 1,018 zeroes.
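To get a feel for the size of that number, compare it with a generous upper bound on the number of chemical “trials” the observable universe could have run. The figures below (particle count, interaction rate, elapsed time) are rough conventional estimates of my own choosing, not Koonin’s:

```python
import math

# Generous upper bound on the number of trials the observable universe
# could have run: ~1e80 particles, each undergoing ~1e45 state changes
# per second (a near-Planck-scale rate), for ~4e17 seconds since the
# Big Bang. Work in base-10 logarithms to avoid overflow.
log10_trials = 80 + 45 + math.log10(4e17)  # roughly 142.6

# Koonin's toy-model odds of a coupled replication-translation system
# arising by chance: 1 in 10**1018.
log10_odds = 1018

# The odds outrun the available trials by more than 800 orders of magnitude.
print(log10_odds - log10_trials > 800)  # True
```

However one quibbles with the inputs, the gap is so vast that no plausible adjustment of the estimates closes it.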
Professor Mancuso is an admirer of Charles Darwin, as his talk makes very plain. So I’d like to ask him a question. Would he describe our universe as intelligent, because somehow it was able to generate life within the time available, or would he say it was lucky, because by rights, it should have taken much, much longer?
The only cases where we set aside the requirement for speed with regard to problem solving are for those problems that require a great deal of heavy-duty thinking, which may take years or even decades, where there is no particular urgency to arrive at a solution right away. In police work, cold cases may go unsolved for decades, but we know that it is vitally important that the true culprit be correctly identified, in a way that leaves a panel of jurors with no reasonable doubt as to his guilt. And mathematicians may take hundreds of years to solve certain problems, such as Fermat’s Last Theorem, but there is generally no pressing need for these problems to be solved any faster. (Cryptography during war-time is a notable exception to this rule.)
Speed, then, is an essential aspect of problem-solving proficiency. The attempt to divorce speed from the definition of intelligence fails.
But that still leaves us with a nagging question. If intelligence is not to be equated with mere “problem-solving ability,” then what is it, and how do we identify it?
Means and ends
Intelligence is often envisaged in terms of means and ends. For instance, the Intelligent Design text, The Design of Life (Foundation for Thought and Ethics, Dallas, 2008), by Professor William Dembski and Dr. Jonathan Wells, states in its Glossary (p. 315) that “intelligence is about matching means to ends,” and defines it in terms of the ability “to find, select, adapt and implement the means needed to effectively bring about ends.” That definition, I have to say, would probably be broad enough to encompass the behavior of non-biological systems, such as Wolfram’s simple cellular automata.
Intelligence is sometimes defined as the ability to pursue long-term goals – a capacity which has only been convincingly demonstrated in human beings, notwithstanding claims by some scientists that chimpanzees and orangutans are capable of both mental time-travel into the past and planning for the future (claims which I critically evaluate here). But a critic might reasonably ask: why should we define intelligence in such an arbitrary fashion? Shouldn’t the attainment of short-term goals count for something as well?
One possible response to this question is that while the attainment of a short-term goal might be explained away in non-rational terms (such as natural selection), only rational foresight can enable an organism to achieve a distant, long-term goal, as careful planning and prioritization are required when aiming at a distant objective. Nature cannot “look ahead” – which is why Darwinists make so much of the allegedly inept designs in Nature that no rational agent would produce, such as the laryngeal nerve. (See here for an excellent refutation of the claim that the laryngeal nerve is a poor design, written by Casey Luskin.) But Mancuso would doubtless point out that Nature does not need to consciously aim at long-term goals in order to attain them. Given enough time, a series of viable intermediates and a suitable winnowing process (such as natural selection), Nature can duplicate any result that a rational agent is capable of implementing – albeit less efficiently, on occasions.
A better definition of intelligence is therefore needed, to distinguish it more clearly from unguided natural processes.
A better definition
Fortunately, Dembski and Wells provide an alternative definition of intelligence in their book, The Design of Life, which is much more useful. On page 165, intelligence is defined as the ability to create an independently given pattern (a specification) which is not easily reproducible either by chance or necessity – that is, by stochastic processes or by deterministic processes such as physical laws. (The specification may be given in advance, but it need not be: if I claimed to have been dealt a Royal Flush while playing poker, I would rightly be suspected of cheating, because it is such a highly specific pattern and at the same time an extremely improbable one; and if I claimed to have been dealt 20 Royal Flushes in a row, nobody would believe me: that would exhaust the probabilistic resources of the observable universe.) As Charles Thaxton et al. summed it up in their 1984 technical work that helped launch the Intelligent Design movement, The Mystery of Life’s Origin:
“…“order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity.” [The Mystery of Life’s Origin, (Foundation for Thought and Ethics, 1984), Chapter 8, p. 130.]
The ability to create a code, or a program, or a game (something which Stephen Wolfram is fond of doing), or for that matter, a language (such as the one developed by Wolfram Research) can all be seen as special cases of intelligent behaviors which are both highly specified and beyond the reach of chance and/or necessity (i.e. probabilistically complex).
Professor Mancuso might object that a computer could one day duplicate these feats. George Orwell famously anticipated art-creating, novel-writing machines in his novel 1984:
“Here were produced rubbishy newspapers containing almost nothing except sport, crime and astrology, sensational five-cent novelettes, films oozing with sex, and sentimental songs which were composed entirely by mechanical means on a special kind of kaleidoscope known as a versificator.”
Orwell does not tell his readers how the versificator was supposed to work. But in a provocative article titled, Who Needs Humans? Someday Machines Will Create Content for You (October 15, 2013), Hubspot columnist Dan Lyons suggests that artificial intelligence could soon be used to write news stories. But a computer that could write a decent story from scratch, without any human input and using nothing more than a worldwide Web of ubiquitous videocameras, sound recorders, transmitters and satellites to pick up and relay news, would require a massive amount of information: built-in speech decoding programs to pick up what people were saying, a vast online encyclopedia of background information about who’s who and what’s what in the outside world, built-in grammar checkers to produce sentences with proper syntax, and simple story writing programs telling it how to write a news story with a beginning, a middle and an end – and a good punchline. What Intelligent Design proponents claim is that it is impossible for a natural system operating according to simple rules to produce a high degree of specified complexity (such as you would find in a news story), without vast amounts of information being input either at the start or subsequently, by language-using rational agents.
Neurons: not so great after all?
(a) The case for the possibility of intelligence without neurons
In the course of his interview with Michael Pollan, Professor Mancuso went on to declare: “Neurons perhaps are overrated. They’re really just excitable cells.” What he and his colleagues hypothesize is that plants have something analogous to a brain, in their system of roots. Interestingly, it was Charles Darwin who first proposed this idea in his work, The Power of Movement in Plants (London: John Murray, 1880). On the very last page, Darwin stated his conclusion: “It is hardly an exaggeration to say that the tip of the radicle … having the power of directing the movements of the adjoining parts, acts like the brain of one of the lower animals; the brain being seated within the anterior end of the body, receiving impressions from the sense organs and directing the several movements.” Subsequent research has shown that the tips of plant roots can sense not only gravity, moisture, light, pressure, and hardness, but also volume, nitrogen, phosphorus, salt, various toxins, microbes, and chemical signals from neighboring plants. In fact, some plants have a vocabulary of no less than 3,000 chemical signals, according to Professor Mancuso. He and his colleague František Baluška have identified unexpectedly high levels of electrical activity and oxygen consumption in a region just behind the tips of plant roots, and they believe that this so-called “transition zone” may be the site of the “root brain” that Darwin originally proposed. Mancuso explained to Pollan why he thinks a brain would not serve plants’ interests well, and why a distributed system of intelligence in their root systems would be better suited to their needs:
“If you are a plant, having a brain is not an advantage,” Stefano Mancuso points out. Mancuso is perhaps the field’s most impassioned spokesman for the plant point of view. A slight, bearded Calabrian in his late forties, he comes across more like a humanities professor than like a scientist…
In Mancuso’s view, our “fetishization” of neurons, as well as our tendency to equate behavior with mobility, keeps us from appreciating what plants can do. For instance, since plants can’t run away and frequently get eaten, it serves them well not to have any irreplaceable organs. “A plant has a modular design, so it can lose up to ninety per cent of its body without being killed,” he said. “There’s nothing like that in the animal world. It creates a resilience.”…
In place of a brain, “what I am looking for is a distributed sort of intelligence, as we see in the swarming of birds.” In a flock, each bird has only to follow a few simple rules, such as maintaining a prescribed distance from its neighbor, yet the collective effect of a great many birds executing a simple algorithm is a complex and supremely well-coördinated behavior. Mancuso’s hypothesis is that something similar is at work in plants, with their thousands of root tips playing the role of the individual birds — gathering and assessing data from the environment and responding in local but coördinated ways that benefit the entire organism. (Emphases mine – VJT.)
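The flocking principle Mancuso invokes is easy to illustrate. In the toy sketch below (my own illustration, not Mancuso’s model), each “bird” on a line follows a single local rule, drifting toward the midpoint of its two neighbors, yet the group as a whole settles into perfectly even spacing, a globally coordinated outcome no individual was aiming at:

```python
# Six "birds" on a line; the two at the ends hold their positions,
# the rest repeatedly drift halfway toward the midpoint of their neighbors.
positions = [0.0, 1.0, 1.5, 5.0, 9.0, 10.0]

for _ in range(200):
    new = positions[:]
    for i in range(1, len(positions) - 1):
        midpoint = (positions[i - 1] + positions[i + 1]) / 2
        new[i] = (positions[i] + midpoint) / 2  # local rule only
    positions = new

# Gaps between neighbors: the purely local rule yields global order.
gaps = [round(b - a, 2) for a, b in zip(positions, positions[1:])]
print(gaps)  # [2.0, 2.0, 2.0, 2.0, 2.0]
```

No bird knows the flock-wide spacing; it emerges from the repeated application of the local rule, which is the sense in which Mancuso hopes a plant’s thousands of root tips could behave “intelligently” without any central brain.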
In all fairness, I should point out that even within the field of plant neurobiology, no-one believes that plants actually feel emotions, although Professor Mancuso refuses to rule out the possibility that they may feel pain, even though they lack a brain. (On this point, he would do well to read Professor James Rose’s widely cited article, The Neurobehavioral Nature of Fishes and the Question of Awareness and Pain (Reviews in Fisheries Science, 10(1): 1-38, 2002), as well as a more recent article he co-authored, titled, Can fish really feel pain? in Fish and Fisheries, doi: 10.1111/faf.12010. Rose makes a very strong case that the possession of a neocortex, which is unique to mammals, is a necessary condition for consciousness. Recently, it was also discovered that birds possess an avian homologue to the neocortex. No other animals – not even the octopus – possess anything like one.)
Nevertheless, it seems that plants are cleverer than most of us think. At a conference of the Society for Plant Signaling and Behavior (formerly known as the Society for Plant Neurobiology) held in Vancouver last July, Monica Gagliano, a researcher who used to work in Professor Mancuso’s laboratory in Florence and who now works at the University of Western Australia, presented a controversial paper titled, “Animal-Like Learning in Mimosa pudica,” in which she discussed the results of experiments she had conducted, showing that the plant Mimosa pudica (pictured at the top of this post), a sensitive plant whose compound leaves fold inward and droop when touched or shaken, is capable of a rudimentary form of learning called habituation. After being repeatedly dropped from a height of 15 centimeters every five seconds, the plant eventually learned that the stimulus of being dropped was not dangerous and could be safely ignored, so it stopped closing its leaves when it was dropped. What’s more, the plant still remembered the lesson that it had learned, when tested again, four weeks later. It seems that not all plants of this species have the same capacities; some learn faster than others. A lively exchange of views followed when Gagliano presented her paper: alternative explanations (fatigue and adaptation) were proposed by scientists in the audience, and carefully rebutted by Gagliano.
For his part, Professor Mancuso speculates that calcium channels may play a role in laying down memories in plants, but the idea remains untested.
Not surprisingly, Professor Mancuso’s views have encountered stiff resistance. In a sharply worded response published in Trends in Plant Science (Vol. 12, No. 4, pp. 135-136) in 2007, Amadeo Alpi and 35 prominent plant scientists pointed out that “there is no evidence for structures such as neurons, synapses or a brain in plants,” and added that while plant cells share features in common with neurons at the molecular level, there was currently no evidence that plants contained structures for signal propagation “at the cellular, tissue and organ levels,” as occurs in animals. In the final paragraph of their rebuttal, the scientists rhetorically asked, “What long-term scientific benefits will the plant science research community gain from the concept of ‘plant neurobiology’?” When interviewed by Pollan, Alpi letter signatory Lincoln Taiz accused plant neurobiologists of being motivated by “brain envy.”
A Venus fly-trap closing. Image courtesy of Wikipedia.
Earlier, when discussing plants’ sensory capacities, I alluded to speed as a defining feature of animals’ nervous systems, which make use of electrochemical signaling, as opposed to the much slower chemical signaling found in most other organisms. Neurons are the basic units of the nervous systems of animals, and their speed of transmission varies from roughly 1 to 100 meters per second. Later on, when addressing the question of intelligence in plants, I argued that it was impossible to remove the element of speed from the definition of intelligence: someone who finds the solution to a problem must also do so in a timely fashion. However, I should point out that plants also make use of electrochemical signaling (as Michael Pollan mentioned in his article), and that the speed of propagation of this signaling varies from 0.05 cm/sec to 40 m/sec. On grounds of speed alone, then, it is impossible to draw a hard-and-fast distinction between animals and plants.
(b) Connectivity: the real reason why neurons are so important for intelligence
There is, however, one other conspicuous advantage enjoyed by the neurons in animal nervous systems: connectivity. The old Chinese saying that a picture is worth a thousand words was never more apposite than here:
Diagram of a typical myelinated vertebrate motoneuron. Image courtesy of LadyofHats and Wikipedia.
Pictures like this give the lie to claims by people who work in the field of artificial intelligence, that computers will soon catch up with the human brain in complexity. As a thoughtful blogger named Buckeye points out in an online article, such claims are simply nonsensical:
A transistor is not a neuron, not even close. A transistor is just a simple electronic switch…
Making out like 1 transistor = 1 neuron is beyond nonsense, it’s asinine…
The fact is our brains are not simply evolution’s version of electronic computers. Our brains are electro-chemical computing devices. Each neuron can have 1000 connections to other neurons, and the chemical soup of hormones sloshing around in our skulls can have drastic effects on how they process information. Every neuron receives a vast array of input signals from other neurons and turns that into its own complicated firing pattern that is not fully understood. They are not simple on/off switches. They are a hell of a lot more sophisticated than that. (Emphases mine – VJT.)
This is important, because if the complexity of our brains is related to our capacity for consciousness (as Professor James Rose argues in the articles I cited above) or to our intelligence – a point that Dr. Stephen Wolfram might dispute, but only because he overlooks the vital factor of speed – then it is important to know where plants belong, in the scheme of things. How do they compare with animals, in terms of their complexity?
(i) Levels of complexity and connectivity in the brains of animals
Professor David Deamer, of the Department of Biomolecular Engineering, University of California, addressed this question from the “animal” side, in his article, Consciousness and Intelligence in Mammals: Complexity thresholds, in the Journal of Cosmology, 2011, Vol. 14. Deamer calculated the complexity of an animal’s brain using the formula, C(complexity)=log(N)*log(Z), where N is the number of units and Z is the average number of synaptic inputs to a single neuron. The human brain contains 11,500,000,000 cortical neurons. That’s N in Deamer’s formula. For humans, log(N) is about 10.1, which is higher than for any other animal, including elephants, whales and dolphins. Z, or the number of synapses per neuron, is astonishingly high: “Each human cortical neuron has approximately 30,000 synapses per cell.” Thus log(Z) is about 4.5. According to Deamer’s complexity formula, the complexity of the human brain is 10.1 x 4.5, or 45.5. Deamer then proceeds to compare humans with other animals, using the same yardstick, and then making some adjustments for animals’ different body sizes and encephalization quotients. On either the raw or adjusted figures, humans come out on top. The raw figures are as follows: elephant 45, chimpanzee 44.1, dolphin 43.6, gorilla 43.2, horse 39.1, dog 37.8, rhesus (monkey) 39.1, cat 32.7, opossum 31.8, rat 31, mouse 23.4. The adjusted complexity figures are slightly different: humans 45.5, dolphins 43.2, chimpanzees 41.8, elephant 41.8, gorilla 40.0, rhesus (monkey) 36.5, horse 34.8, dog 34.4, cat 32.7, rat 25.4, opossum 24.9, mouse 23.2. Readers should remember that this is a (double) logarithmic scale: it doesn’t mean that dolphins and elephants are nearly as smart as we are.
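Deamer’s formula is simple enough for readers to reproduce. Here is a quick check (my own sketch, using base-10 logarithms as Deamer does); the nematode figure uses the standard C. elegans count of 302 neurons and Deamer’s assumption of 20 synapses per neuron:

```python
from math import log10

def deamer_complexity(n_units, z_inputs):
    """Deamer's index: C = log(N) * log(Z), with base-10 logs."""
    return log10(n_units) * log10(z_inputs)

# Human cortex: ~11.5 billion neurons, ~30,000 synapses per neuron.
c_human = deamer_complexity(11.5e9, 30_000)

# Nematode: 302 neurons, ~20 synapses per neuron.
c_worm = deamer_complexity(302, 20)

print(f"human: {c_human:.1f}")  # ~45.0 (Deamer rounds each log first: 10.1 x 4.5 = 45.5)
print(f"worm:  {c_worm:.1f}")   # 3.2
```

The small discrepancy for the human figure (45.0 versus 45.5) comes solely from whether you round each logarithm before or after multiplying; the nematode value of 3.2 matches Deamer’s own calculation exactly.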
I should mention that although I don’t personally believe that the reflective consciousness which distinguishes human beings can be boiled down to neuronal connections (see my recent post, Is meaning located in the brain?), I nevertheless think that the primary consciousness which is found in most (and perhaps all) mammals and birds (and just possibly also in cephalopods, such as octopuses) is the product of the interconnectivity of the neurons in their brains.
An ant carrying an aphid. According to Darwin, the difference in mental abilities between an ant and an aphid is much greater than the intellectual difference between a man and an ape. Image courtesy of Wikipedia.
What about insects? Darwin, who had a high opinion of the mental capacities of insects, wrote in his Descent of Man (London: John Murray, 1871; Vol. I, Chapter IV, p. 145):
It is certain that there may be extraordinary mental activity with an extremely small absolute mass of nervous matter: thus the wonderfully diversified instincts, mental powers, and affections of ants are generally known, yet their cerebral ganglia are not so large as the quarter of a small pin’s head. Under this latter point of view, the brain of an ant is one of the most marvellous atoms of matter in the world, perhaps more marvellous than the brain of man.
But Darwin was wrong. A normal human brain has 86,000,000,000 neurons and somewhere between 10^14 and 10^15 [one quadrillion] synapses, which makes us superior to every other animal, including whales, dolphins and elephants. Bees, with their one cubic millimeter, one-million neuron brains, have about 10^9 [one billion] synapses, which is up to 1,000,000 times fewer than we have. What’s more, it turns out that even if we make adjustments to compensate for our larger size, bees (which are generally considered to be the smartest insects) still lag behind the lowest mammals: Russell (1983) estimated that encephalization quotients for living insects vary between 0.008 and 0.045, whereas the encephalization quotients of mammals vary from 0.1 for rodents to 7.6 for human beings. (See Russell, D. A. 1983. “Exponential evolution: implications for intelligent extraterrestrial life.” Advances in Space Research 3, 95-103.)
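A quick check of the figures just cited (my own sketch, using the numbers from the paragraph above):

```python
# Synapse counts: upper human estimate vs. a bee's brain.
human_synapses = 10**15   # ~one quadrillion
bee_synapses = 10**9      # ~one billion
print(human_synapses // bee_synapses)  # 1000000 - a million-fold gap

# Russell's (1983) encephalization quotients: the ranges don't even overlap.
insect_eq_max = 0.045     # top of the insect range
mammal_eq_min = 0.1       # rodents, the bottom of the mammal range
print(insect_eq_max < mammal_eq_min)   # True
```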
(ii) The vastly inferior connectivity of computers
What about computers, you may be asking. Professor Deamer concludes in his essay that their intelligence has been greatly over-rated:
…[B]ecause of the limitations of computer electronics, it will be virtually impossible to construct a conscious computer in the foreseeable future. Even though the number of transistors (N) in a microprocessor chip now approaches the number of neurons in a mammalian brain, each chip has a Z of 2, that is, its input-output response is directly connected to just two other transistors. This is in contrast to a mammalian neuron, in which function is modulated by thousands of synaptic inputs and output relayed to hundreds of other neurons. According to the quantitative formula described above, the complexity of the human nervous system is log(N) * log (Z) = 45.5, while that of a microprocessor with 781 million transistors is 8.9 * .3 = 2.67, many orders of magnitude less. Of course, what the microprocessor lacks in connectivity can potentially be compensated in part by speed, which in the most powerful computers is measured in teraflops compared with the kilohertz activity of neurons. Interestingly, for the nematode [worm] the calculated complexity C = 3.2, assuming an average of 20 synapses per neuron, so the functioning nervous system of this simple organism could very well be computationally modeled. (Emphases mine – VJT.)
So there you have it. The world’s top computer is about as bright as a … worm.
“What about the Internet as a whole?” you might ask. The number of transistors (N) in the entire Internet is 10^18, so log(N) is 18. log(Z) is log(2) or about 0.3, so C=(18*0.3)=5.4. That’s right: on Deamer’s scale, the complexity of the entire Internet is a miserable 5.4, or 40 orders of magnitude less than that of the human brain, which stands at 45.5.
Given Professor Deamer’s remarks about the superior speed of microprocessors, we might make some special allowances for computers. Let’s look at the Internet as a whole. 10^12 operations per second (one teraflop) divided by 10^3 is 10^9, so let’s lop off nine zeroes from the figure for the human brain. 40 minus 9 is 31, so that still makes the human brain 31 orders of magnitude more complex than the entire Internet. Even a mouse’s brain (complexity 23.4) is still 9 orders of magnitude more complex – and that’s after we factor in speed.
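The speed adjustment can be spelled out in a few lines (my own back-of-the-envelope sketch):

```python
from math import log10

c_human = 45.5                 # Deamer's figure for the human brain
c_internet = 18 * log10(2)     # log(N) = 18, Z = 2, so C is about 5.4
gap = c_human - c_internet     # ~40 orders of magnitude

# Credit computers for raw speed: one teraflop (10^12 ops/s)
# versus roughly kilohertz (10^3) activity in neurons.
speed_bonus = log10(10**12 / 10**3)  # 9 orders of magnitude

print(round(gap - speed_bonus))      # 31
```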
Moore’s law tells us that over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. But Moore’s law will be dead by 2022, predicts the director of the microsystems group of the U.S. Defense Advanced Research Projects Agency (DARPA), who really should know. So much, then, for the fantasy of computers taking over the world.
(iii) So how do plants fare, in the connectivity stakes?
An ear of rye. Image courtesy of LSDSL and Wikipedia.
Now where do plants figure in all this? During his talk, Professor Mancuso mentioned the root system of a rye plant, so I’ll assume that’s his star case. Here’s what I found in an article titled, Roots and Mathematics by Stefan Anitei, in Softpedia News (April 29, 2008):
Who would believe that all the roots and absorbing hairs of grasses or cereals, be they rye, wheat or couch grass, put all together, could form a line surrounding the perimeter of an European country? In 1937, a German naturalist carried out this scrupulous work, measuring the surface and length of all roots of a rye during the earing period…
The rye roots counting found 143 roots of the first level, 30,000 roots of the second level, 2,300,000 of the third level and 11,500,000 of the fourth level, thus a number of 13,835,143 roots having a total length of about 600 km (375 mi).
Yet, the roots are covered by absorbing hairs. Their approximate number boosts the aforementioned total to 15 billion, summing a surface of 400 square meters and a length of 10,000 km (6,250 mi), one quarter of the Equator.
A rye plant daily forms 115 new roots and 119 million absorbing hairs. Thus, the total length of the roots grows daily by 5 km (3 mi) and that of the absorbing hairs by 80 km (50 mi), in the fight for conquering new soil areas. (Emphases mine – VJT.)
15 billion root hairs sounds pretty impressive; but the critical question is:
how many connections, on average, does each root hair have with neighboring hairs? Professor Mancuso does not say, and my search of the Internet also came up with no statistics on the subject. But we can set an upper and lower bound. There are 14 million roots, and 15 billion root hairs. The ratio of the latter to the former is about 1,000, or 10^3, so let’s assume – I’m being absurdly generous here – that each root hair is connected to 1,000 others. (That’s false, of course; although I’m no botanist, judging from these diagrams, it appears to me that each root hair is just connected to the root, as a central node.) Plugging the highest possible figure of 1,000 connections into Professor Deamer’s complexity formula, we can calculate the complexity index C=log(15,000,000,000)*log(10^3), or about 10.2 times 3, which is 30.6 (which would make a rye plant smarter than a mouse at 23.4, but still well below the human figure of 45.5). But if the number of connections is 100, C falls to 20.4. If it’s 10, C drops to a mere 10.2. And if it’s just 1, C drops to… zero.
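To see how sharply the index depends on the assumed connectivity, here is the same calculation run over the four values of Z just discussed (my own sketch; small differences from the figures above come only from rounding the logarithms):

```python
from math import log10

root_hairs = 15_000_000_000  # N: root hairs on a single rye plant

# C = log(N) * log(Z) for assumed connectivities Z, from the absurdly
# generous upper bound of 1,000 down to a single connection each.
c_for = {z: log10(root_hairs) * log10(z) for z in (1000, 100, 10, 1)}
for z, c in c_for.items():
    print(f"Z = {z:>4}: C = {c:.1f}")
```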
Now do you see why connectivity counts?
What about connections between trees, as opposed to root hairs? In his report for the New Yorker, Michael Pollan mentions some research indicating that in a forest of trees, “the oldest trees functioned as hubs, some with as many as forty-seven connections.” I’ll go with Pollan’s figure of 47. The world’s largest single organism, the Pando forest, contains 43,000 aspen trees. Using Deamer’s formula, we get C=log(43,000)*log(47), or about 7.7 – which is higher than the figure for a nematode worm – although we haven’t factored in speed of transmission, yet – but well below the figure of 23.4 for a mouse, let alone 45.5 for a human being.
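For what it’s worth, the Pando figure checks out (my own quick calculation):

```python
from math import log10

# Pando: ~43,000 aspen trees, with hub trees having up to 47 connections.
c_pando = log10(43_000) * log10(47)
print(f"{c_pando:.1f}")  # 7.7 - above a nematode's 3.2, far below a mouse's 23.4
```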
The upshot of all this is that when it comes to consciousness, plants are not likely to be conscious at all. The articles by Professor James Rose which I cited above make it clear that animals lacking a neocortex – or some homologue, such as is found in birds – almost certainly lack consciousness. Rose explains what makes the neocortex so special – in a word, it’s connectivity:
The reasons why neocortex is critical for consciousness have not been resolved fully, but the matter is under active investigation. It is becoming clear that the existence of consciousness requires widely distributed brain activity that is simultaneously diverse, temporally coordinated, and of high informational complexity (Edelman and Tononi, 1999; Iacoboni, 2000; Koch and Crick, 1999; 2000; Libet, 1999). Human neocortex satisfies these functional criteria because of its unique structural features: (1) exceptionally high interconnectivity within the neocortex and between the cortex and thalamus and (2) enough mass and local functional diversification to permit regionally specialized, differentiated activity patterns (Edelman and Tononi, 1999). These structural and functional features are not present in subcortical regions of the brain, which is probably the main reason that activity confined to subcortical brain systems can’t support consciousness. (Emphases mine – VJT.)
So, if Professor Mancuso wants to attack the “cortical chauvinism” of those who think there’s something special about the consciousness of mammals and birds (which are known to have an avian homologue of the neocortex), what he really needs to discredit is the notion that connectivity is important for intelligence.
What we can learn from plants
The most interesting part of Professor Mancuso’s research, in my opinion, relates to what he calls “bioinspiration,” or the science of using designs from the natural world in order to help engineers develop designs to solve human problems. At the Vancouver conference of the Society for Plant Signaling and Behavior, which was held last year, Mancuso gave a talk in which he suggested that we can learn a lot from studying the design of plants, and that they can help us develop new technologies. As Michael Pollan reports:
By focussing on the otherness of plants rather than on their likeness, Mancuso suggested, we stand to learn valuable things and develop important new technologies. This was to be the theme of his presentation to the conference, the following morning, on what he called “bioinspiration.” How might the example of plant intelligence help us design better computers, or robots, or networks?
Mancuso was about to begin a collaboration with a prominent computer scientist to design a plant-based computer, modelled on the distributed computing performed by thousands of roots processing a vast number of environmental variables. His collaborator, Andrew Adamatzky, the director of the International Center of Unconventional Computing, at the University of the West of England, has worked extensively with slime molds, harnessing their maze-navigating and computational abilities. (Adamatzky’s slime molds, which are a kind of amoeba, grow in the direction of multiple food sources simultaneously, usually oat flakes, in the process computing and remembering the shortest distance between any two of them; he has used these organisms to model transportation networks.) (Emphases mine – VJT.)
In an email, Adamatzky informed Pollan that although plants were slower-growing and less flexible than slime molds, they were also more robust and capable of maintaining their shape for a very long time. Adamatzky was optimistic that he and Mancuso would be able to harness the power of plants for computational tasks.
During his talk, Professor Mancuso also discussed his collaboration with Barbara Mazzolai, of the Italian Institute of Technology, in Genoa, to develop a “plantoid”: a robot designed to act like a plant, rather than an animal. “If you look at the history of robots, they are always based on animals — they are humanoids or insectoids,” he said. “If you want something swimming, you look at a fish. But what about imitating plants instead? What would that allow you to do? Explore the soil!” A team headed by Mancuso and Mazzolai is currently developing a “robotic root” capable of slowly penetrating the soil, sensing conditions, and altering its trajectory when the circumstances demand it. The team is funded by a grant from the European Union’s Future and Emerging Technologies program.
A photo of the world’s oldest organism, Pando, also known as The Trembling Giant, a clonal colony of a single male quaking aspen (Populus tremuloides), with 40,000 tree trunks sharing one massive underground root system. The plant is estimated to weigh 6,000,000 kg altogether, and its root system is 80,000 years old. Image courtesy of J. Zapell, Fish Lake National Forest website and Wikipedia.
But there’s more. Professor Mancuso believes that studying plants could help researchers to design the Internet in better ways. Michael Pollan continues his story:
The most bracing part of Mancuso’s talk on bioinspiration came when he discussed underground plant networks. Citing the research of Suzanne Simard, a forest ecologist at the University of British Columbia, and her colleagues, Mancuso showed a slide depicting how trees in a forest organize themselves into far-flung networks, using the underground web of mycorrhizal fungi which connects their roots to exchange information and even goods. This “wood-wide web,” as the title of one paper put it, allows scores of trees in a forest to convey warnings of insect attacks, and also to deliver carbon, nitrogen, and water to trees in need…
In his talk, Mancuso juxtaposed a slide of the nodes and links in one of these subterranean forest networks with a diagram of the Internet, and suggested that in some respects the former was superior. “Plants are able to create scalable networks of self-maintaining, self-operating, and self-repairing units,” he said. “Plants.” (Emphases mine – VJT.)
Now, Professor Mancuso is very right about one thing: there is real intelligence at work in the design of these networks. Mancuso locates it in plants; Intelligent Design proponents, realizing that unguided mechanisms cannot account for the origins of these designs, attribute them to a Higher Intelligence.
This point was beautifully illustrated a few months ago, with the publication of an article in The Conversation titled, Spider silk is a wonder of nature, but it’s not stronger than steel (5 June 2013), by Michelle Oyen, a lecturer in the Mechanics of Biological Materials at University of Cambridge. In the course of her article, in which she pointed out that spider’s silk and steel were of roughly equal strength even though silk is six times less dense, Oyen compared human engineering with natural synthesis:
Spider silk is a protein, and proteins are formed inside of living cells. A process that happens at body temperature, unlike the manufacturing of steel, which happens in a furnace. The magic of spider silk has everything to do with the transmission of information through DNA. Human engineering is adept at using more energy to solve problems. Nature does it through the use of better information. (Emphases mine – VJT.)
A recent post on Evolution News and Views, titled, Why Biomimicry Beats Engineering: The Case of Spider Silk (July 17, 2013) included a telling comment on Oyen’s observation:
Think of the implications of that statement. We know that “information” in human engineering always comes via intelligent design. That includes the cases where engineers employ “evolutionary algorithms” to discover solutions: the human mind designs the algorithm, chooses the goal, and verifies the solution against the goal. Materialism provides no such agent. It shouldn’t be surprising, then, that Oyen never mentions evolution.
So if “nature” solves problems “through the use of better information,” does it make any sense to assume that nature’s information arose via unguided, purposeless processes like natural selection? (Emphases mine – VJT.)
Corn plants emit a chemical distress call when attacked by caterpillars. The call alerts parasitic wasps, which can pick up the scent at some distance, follow it to the plants being attacked, and destroy the caterpillars. This is a striking example of inter-species co-operation.
When reporter Michael Pollan phoned Suzanne Simard, whose research on the “wood-wide web” had been cited by Mancuso in his talk, she described in detail how she and her colleagues had tracked the flow of nutrients and chemical signals through the network of roots in a forest of fir trees, by injecting them with radioactive carbon isotopes, and then tracking the isotopes with a Geiger counter over the next few days. It turned out that every tree in a plot thirty meters square was connected to the root network, with some trees having up to forty-seven connections. In his report for the New Yorker, Pollan likened the forest network to an airline route map. But there was more:
The pattern of nutrient traffic showed how “mother trees” were using the network to nourish shaded seedlings, including their offspring — which the trees can apparently recognize as kin — until they’re tall enough to reach the light. And, in a striking example of interspecies coöperation, Simard found that fir trees were using the fungal web to trade nutrients with paper-bark birch trees over the course of the season. The evergreen species will tide over the deciduous one when it has sugars to spare, and then call in the debt later in the season. For the forest community, the value of this coöperative underground economy appears to be better over-all health, more total photosynthesis, and greater resilience in the face of disturbance. (Emphases mine – VJT.)
And if you think that’s a sophisticated case of plant signaling, get this:
Perhaps the cleverest instance of plant signalling involves two insect species, the first in the role of pest and the second as its exterminator. Several species, including corn and lima beans, emit a chemical distress call when attacked by caterpillars. Parasitic wasps some distance away lock in on that scent, follow it to the afflicted plant, and proceed to slowly destroy the caterpillars. Scientists call these insects “plant bodyguards.” (Emphases mine – VJT.)
I must say I find it very odd that these ecosystems appear to promote “the greatest good of the greatest number.” It certainly makes one wonder about the aptness of Richard Dawkins’ “selfish gene” metaphor for evolution.
Now, I have no doubt that Darwinists can come up with an evolutionary “Just-so” story that can explain how these instances of inter-specific co-operation might have evolved. Rather than try to discredit these accounts, I would suggest that a more fruitful avenue of inquiry for Intelligent Design proponents would be to construct artificial models of ecosystems and predict what kinds of inter-specific co-operation would be expected to take place, and how often. It may well turn out that there are a whole host of holistic properties of ecosystems that scientists are not seeing because their favored theory of origins is a bottom-up, individualistic, gene-centric account. If Intelligent Design scientists modeling ecosystems were able to discover these holistic properties through a process of top-down modeling, it would be a real feather in the cap of the Intelligent Design movement.