Uncommon Descent Serving The Intelligent Design Community

Craig and his critics: Why the Cambridge Declaration on Consciousness is more propaganda than science


In my previous post, I wrote about the philosopher and Christian apologist, Professor William Lane Craig, who has been widely criticized for some remarks he made on animal suffering in a debate with the atheist philosopher Dr. Stephen Law, in October 2011. Although Craig made several scientific errors, his key claim that animals do not suffer in the same way that we do is a scientifically defensible one. Surprisingly, it turns out that science is not currently able to demonstrate with even a high degree of probability that animals suffer at all, and I cited various experts in the field of animal consciousness who admitted as much. Consequently, any atheist claiming to know that animals suffer will have to appeal to a non-scientific source of knowledge – direct intuition. (Most pet owners would say that they “just know” that their animals are capable of suffering.) But if direct intuition is admissible as a valid source of knowledge, then a religious person’s claim to “just know” that God exists may also be admissible. I concluded that in any case, the atheistic argument from animal suffering was too beset with uncertainties to work as a successful demonstration of God’s non-existence.

I also mentioned in my post that Craig’s views on animal suffering had been criticized in a video released on October 3, 2012, entitled, Can animals suffer? Debunking the philosophers who say no, from Descartes to William Lane Craig. The video was very skilfully produced by an online skeptic called Skydivephil (whose real name is Phil Harper) and his friend Monica, who presented the video. Several neuroscientists co-operated with Skydivephil and Monica in the making of their video. Additionally, the Cambridge Declaration on Consciousness (which was publicly proclaimed and signed on July 7, 2012) was cited in support of the claim that all mammals and birds are conscious, as well as octopuses and many other creatures, possibly including even insects. The Declaration also stated that “evidence of near human-like levels of consciousness” had been most dramatically observed in African grey parrots (pictured above, courtesy of Wikipedia).

In this post, I’d like to explain why I think the Cambridge Declaration on Consciousness, which was signed by no more than a dozen scientists anyway (see also here), is more propaganda than science, and why it proves absolutely nothing regarding animal consciousness. Consequently, it would be utter foolishness to use such a flawed document to refute any claims made by Professor William Lane Craig concerning animal suffering.

I would like to mention for the record that I personally believe that most, if not all, mammals and birds are capable of suffering and that some of these animals may have a rudimentary sense of self – although I would also maintain that there’s currently no good scientific evidence that any non-human animal has self-awareness. I’ll briefly discuss the theological implications of animals’ lack of self-awareness at the end of this post.

What’s wrong with the Cambridge Declaration on Consciousness, in a nutshell

In a nutshell, I contend that there are eleven sound reasons for rejecting the Cambridge Declaration on Consciousness: two logical reasons and nine scientific reasons.

My logical reasons for rejecting the Cambridge Declaration are as follows:

1. It makes the elementary logical mistake of arguing that because brain systems other than the cortex are also involved in supporting consciousness, animal consciousness must therefore be possible, even in the total absence of a cortex.

2. It makes the logical error of arguing from the existence of strong evidence for emotions in certain animals to the conclusion that these animals must possess some sort of consciousness. That doesn’t necessarily follow. If we grant that at least some of our emotions are unconscious, then we have to consider the possibility that for certain animals, all of their emotions might be unconscious. Emotions are not necessarily conscious feelings.

My scientific reasons for rejecting the Cambridge Declaration are as follows:

1. At least one of the signatories of the Cambridge Declaration is on the record as acknowledging that there’s no scientific proof that non-human animals – including primates – are conscious.

2. Key scientists who work in the field, such as Professor Marian Stamp Dawkins, have warned against the dangers of anthropomorphism, and have argued that scientists should maintain a “militant agnosticism” on the subject of animal consciousness, in the course of their research.

3. Professor Philip Low, who originally authored the Cambridge Declaration on Consciousness before it was subsequently edited, has already used the Declaration for propagandistic purposes, in an interview he gave in the Brazilian magazine Veja on 16 July 2012, where he irresponsibly claimed that scientists now know that mammals, birds and octopuses suffer.

4. As someone who corresponded with some of the signatories of the Cambridge Declaration while writing my Ph.D. thesis, I can categorically state that their views are not representative of what most neuroscientists think on the subject of animal consciousness.

5. I also know for a fact that the signatories of the Cambridge Declaration disagree widely even amongst themselves as to which animals are conscious.

6. Some of the signatories of the Cambridge Declaration have emailed me in the past, expressing views which are at variance with statements they made in the Declaration.

7. The Cambridge Declaration on Consciousness goes far beyond the available scientific evidence in assigning human-like levels of consciousness to parrots.

8. The majority of neuroscientists currently working in the field of animal consciousness would disagree with the Cambridge Declaration’s contentious claim that a primitive affective consciousness (or emotional awareness) can be found in a wide variety of animals that lack a cortex, including insects.

9. The notion that consciousness evolved in parallel in vertebrates and octopuses, as the Declaration suggests, is highly problematic on anatomical grounds.

Notwithstanding my grave reservations regarding the Cambridge Declaration on Consciousness, I would be the first to acknowledge that a very strong scientific case can be made for the existence of consciousness in mammals and birds, although for reasons I discussed in my previous post, I don’t think we can yet say that the existence of consciousness in non-human animals is scientifically probable. Be that as it may, the case for consciousness in animals other than mammals and birds is a much weaker one, and the signatories of the Cambridge Declaration have no good scientific grounds for imputing consciousness to these animals as well.

Who are the signatories of the Cambridge Declaration on Consciousness, and how many of them are there?

The Cambridge Declaration on Consciousness was signed at the Francis Crick Memorial Conference in Cambridge, U.K., on July 7, 2012, by the conference participants, who are listed here. If you look very carefully at the signed copy of the Declaration – copying the photo over into a temporary Word document and then blowing it up might help, and if you press Control-A you can see the image in negative as well – you will see that it was signed by no more than a dozen scientists, who then printed their names beside their signatures.

I have to say that I find this very odd. Just to mention two organizations: the International Brain Research Organization‘s membership currently includes more than 80 corporate and academic affiliated associations with a combined membership of 75,000 neuroscientists, while the Society for Neuroscience has more than 40,000 members. Why are we allowing 12 people to speak for tens of thousands? I might add too that while there are some very “big names” on this list, not all of them are world-famous. That’s why most press releases on the Cambridge Declaration only mention six or seven names.

What does the Cambridge Declaration on Consciousness actually say, and what claims does it make that are scientifically contentious?

An octopus moving between two pools in a low-tide zone. The Cambridge Declaration imputes consciousness to octopuses. Picture courtesy of Brocken Inaglory and Wikipedia.

(a) Key statements in the Cambridge Declaration on Consciousness

The Cambridge Declaration on Consciousness (which was publicly proclaimed at the Francis Crick Memorial Conference on Consciousness in Human and non-Human Animals, at Churchill College, University of Cambridge, UK, on July 7, 2012, and signed by the conference participants that evening at the Hotel du Vin in Cambridge) claims that all mammals and birds are conscious, as well as octopuses and many other creatures, possibly including even insects. The following are a few key excerpts from the Declaration:

* The neural substrates of emotions do not appear to be confined to cortical structures. In fact, subcortical neural networks aroused during affective states in humans are also critically important for generating emotional behaviors in animals. Artificial arousal of the same brain regions generates corresponding behavior and feeling states in both humans and non-human animals. Wherever in the brain one evokes instinctual emotional behaviors in non-human animals, many of the ensuing behaviors are consistent with experienced feeling states, including those internal states that are rewarding and punishing… Furthermore, neural circuits supporting behavioral/electrophysiological states of attentiveness, sleep and decision making appear to have arisen in evolution as early as the invertebrate radiation, being evident in insects and cephalopod mollusks (e.g., octopus).

* Birds appear to offer, in their behavior, neurophysiology, and neuroanatomy a striking case of parallel evolution of consciousness. Evidence of near human-like levels of consciousness has been most dramatically observed in African grey parrots. Mammalian and avian emotional networks and cognitive microcircuitries appear to be far more homologous than previously thought. Moreover, certain species of birds have been found to exhibit neural sleep patterns similar to those of mammals, including REM sleep and, as was demonstrated in zebra finches, neurophysiological patterns, previously thought to require a mammalian neocortex. Magpies in particular have been shown to exhibit striking similarities to humans, great apes, dolphins, and elephants in studies of mirror self-recognition.

* … Evidence that human and nonhuman animal emotional feelings arise from homologous subcortical brain networks provide compelling evidence for evolutionarily shared primal affective qualia.

We declare the following: “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

(b) Professor James D. Rose’s summary of current scientific findings on consciousness

In order to appreciate what’s scientifically contentious about the Declaration, it’s useful to contrast it with what Professor James D. Rose of the Department of Zoology and Physiology, University of Wyoming, wrote about the two kinds of consciousness generally recognized by neuroscientists – primary consciousness and higher-order consciousness – in his widely cited article, The Neurobehavioral Nature of Fishes and the Question of Awareness and Pain (Reviews in Fisheries Science, 10(1): 1-38, 2002), after an exhaustive survey of the current scientific literature on consciousness (the headings are mine – VJT):

What kinds of consciousness do neuroscientists recognize?

Although consciousness has multiple dimensions and diverse definitions, use of the term here refers to two principal manifestations of consciousness that exist in humans (Damasio, 1999; Edelman and Tononi, 2000; Macphail, 1998): (1) “primary consciousness” (also known as “core consciousness” or “feeling consciousness”) and (2) “higher-order consciousness” (also called “extended consciousness” or “self-awareness”). Primary consciousness refers to the moment-to-moment awareness of sensory experiences and some internal states, such as emotions. Higher-order consciousness includes awareness of one’s self as an entity that exists separately from other entities; it has an autobiographical dimension, including a memory of past life events; an awareness of facts, such as one’s language vocabulary; and a capacity for planning and anticipation of the future. Most discussions about the possible existence of conscious awareness in non-human mammals have been concerned with primary consciousness, although strongly divided opinions and debate exist regarding the presence of self-awareness in great apes (Macphail, 1998). The evidence that the neocortex is critical for conscious awareness applies to both types of consciousness. Evidence showing that neocortex is the foundation for consciousness also has led to an equally important conclusion: that we are unaware of the perpetual neural activity that is confined to subcortical regions of the central nervous system, including cerebral regions beneath the neocortex as well as the brainstem and spinal cord (Dolan, 2000; Guzeldere et al., 2000; Jouvet, 1969; Kihlstrom et al., 1999; Treede et al., 1999).
(2002, PDF, Section IV, p. 5)

Why can’t behavioral indicators be used to identify consciousness in animals?

In all vertebrates, including humans, innate responses to nociceptive stimuli, such as limb withdrawal, facial displays, and vocalizations are generated by neural systems in subcortical levels of the nervous system, mainly the spinal cord and brainstem. Understanding that the display of behavioral responses to nociceptive stimuli does not, by itself, imply conscious awareness of pain is vital for a valid conceptualization of the neural basis of pain. For example, humans that are completely unconscious due to massive damage of the cerebral cortex can still show facial, vocal, and limb responses to nociceptive stimuli even though the experience of pain is impossible.
(2002, PDF, Summary and Conclusions, pp. 27-28.)

What are the neural requirements for primary consciousness?

Primary consciousness appears to depend greatly on the functional integrity of several cortical regions of the cerebral hemispheres especially the “association areas” of the frontal, temporal, and parietal lobes (Laureys et al., 1999, 2000a-c). Primary consciousness also requires the operation of subcortical support systems such as the brainstem reticular formation and the thalamus that enable a working condition of the cortex. However, in the absence of cortical operations, activity limited to these subcortical systems cannot generate consciousness (Kandel et al., 2000; Laureys et al., 1999, 2000a; Young et al., 1998). Wakefulness is not evidence of consciousness because it can exist in situations where consciousness is absent (Laureys et al., 2000a-c). Dysfunction of the more lateral or posterior cortical regions does not eliminate primary consciousness unless this dysfunction is very anatomically extensive (Young et al., 1998).
(2002, PDF, Section IV, p. 5)

The anterior cingulate gyrus is thought to be especially important for processing the emotional unpleasantness of pain. It is a unique type of five-layered cortex, known as mesocortex, nearly identical in structure with neocortex and specific to mammals, but also having unique structural features in great apes and humans (Nimchinsky et al., 1997). For simplification of this discussion, cingulate cortex will be included in references to neocortex.
(2002, PDF, Section VIII, p. 17)

…[T]he cognitive-evaluative components of pain (attention to the pain, perceived threat to the individual, and conscious generation of strategies for dealing with the pain), are based on frontal lobe structures, especially the prefrontal cortex and anterior cingulate gyrus. There may be other cortical regions and processes that are important for the totality of the pain experience. The most important point here is that the absolute dependence of pain experience on neocortical functions is now well established (Price, 1999; Treede et al., 1999).
(2002, PDF, Section VIII, p. 19)

What are the neural requirements for higher-order consciousness?

Higher-order consciousness depends on the concurrent presence of primary consciousness and its cortical substrate, but the additional complexities of this consciousness require functioning of additional cortical regions. For example, long-term, insightful planning of behavior requires broad regions of the “prefrontal” cortex. Likewise, awareness of one’s own bodily integrity requires activity of extensive regions of parietal lobe cortex (Kolb and Whishaw, 1995). In general, higher-order consciousness appears to depend on fairly broad integrity of the neocortex. Widespread degenerative changes in neocortex such as those accompanying Alzheimer’s disease, or multiple infarcts due to repeated strokes, can cause a loss of higher-order consciousness and result in dementia, while the basic functions of primary consciousness remain (Kandel et al., 2000; Kolb and Whishaw, 1995).
(2002, PDF, Section IV, pp. 5-6)

The reasons why neocortex is critical for consciousness have not been resolved fully, but the matter is under active investigation. It is becoming clear that the existence of consciousness requires widely distributed brain activity that is simultaneously diverse, temporally coordinated, and of high informational complexity (Edelman and Tononi, 1999; Iacoboni, 2000; Koch and Crick, 1999; 2000; Libet, 1999). Human neocortex satisfies these functional criteria because of its unique structural features: (1) exceptionally high interconnectivity within the neocortex and between the cortex and thalamus and (2) enough mass and local functional diversification to permit regionally specialized, differentiated activity patterns (Edelman and Tononi, 1999). These structural and functional features are not present in subcortical regions of the brain, which is probably the main reason that activity confined to subcortical brain systems can’t support consciousness. Diverse, converging lines of evidence have shown that consciousness is a product of an activated state in a broad, distributed expanse of neocortex. Most critical are regions of “association” or homotypical cortex (Laureys et al., 1999, 2000a-c; Mountcastle, 1998), which are not specialized for sensory or motor function and which comprise the vast majority of human neocortex. In fact, activity confined to regions of sensory (heterotypical) cortex is inadequate for consciousness (Koch and Crick, 2000; Lamme and Roelfsema, 2000; Laureys et al., 2000a,b; Libet, 1997; Rees et al., 2000).
(2002, PDF, Section IV, p. 6)

Which animals are conscious?

Because it is known that neocortex is necessary for consciousness in humans, it might also be assumed that other animals with neocortex, that is all mammals, should have some form of consciousness as well. In practice, there is a wide range of beliefs or working assumptions about this matter among neuroscientists. Macphail (1998) has argued that evidence from behavior warrants the conclusion that nonhuman mammals cannot have consciousness of any type. In contrast, some neuroscientists routinely use primates to investigate the cortical neural mechanisms underlying primary consciousness (Edelman and Tononi, 2000; Koch and Crick, 1999, 2000). While many neuroscientists seem to assume the existence of primary consciousness in at least some mammals, particularly primates, extended consciousness is generally considered a uniquely human capacity (Donald, 1991; Edelman and Tononi, 2000).
(2002, PDF, Section VII, pp. 12, 13)

Whether the neocortex of non-human mammals can support a rudimentary type of consciousness is not entirely clear…
(2002, PDF, Summary and Conclusions, p. 27)

Even among mammals there is an enormous range of cerebral cortex complexity. It seems likely that the character of pain, when it exists, would differ between mammalian species, a point that has been made previously by pain investigators (Melzack and Dennis, 1980; Bermond, 1997). Bermond has critiqued claims that non-human species can experience pain and suffering and argued that because conscious awareness of pain depends on extensively developed frontal lobe neocortex, few (if any) mammals besides humans possess an adequate cortical substrate for pain experience.
(2002, PDF, Section IX, p. 22)

Which animals possess higher-order consciousness?

While many neuroscientists seem to assume the existence of primary consciousness in at least some mammals, particularly primates, extended consciousness is generally considered a uniquely human capacity (Donald, 1991; Edelman and Tononi, 2000).
(2002, PDF, Section VII, p. 13.)

As explained earlier, the neural processes mediating conscious awareness appear to be highly complex, requiring large, structurally differentiated neocortical regions with great numbers of exactly interconnected neurons (Tononi and Edelman, 1998). What is more, the type of neocortex most essential to consciousness, the nonsensory association cortex, comprises the vast majority of human cerebral cortex, but it is a very small proportion of the neocortex in most mammals (Mountcastle, 1998; Deacon, 1992a). Consequently, conscious experience resembling that of humans would be extremely improbable for the great majority of mammals. Even great apes, having substantially less nonsensory association neocortex than humans (Deacon, 1992a), would be unlikely candidates for human-like higher-order consciousness, as their behavioral characteristics, such as inability to acquire true language use, indicate (Donald, 1991; Macphail, 1998).
(2002, PDF, Section IX, p. 21)

Before I go on, I’d like to point out one very funny thing: some of the authorities quoted by Professor Rose in support of the view that the neocortex plays a vital role in supporting consciousness are also signatories of the recent Cambridge Declaration on Consciousness! In particular, neuroscientists Christof Koch (whose mentor was the late Francis Crick) and Steven Laureys are both quoted by Rose. What, you may wonder, prompted the turnaround? (By the way, the “Edelman” quoted by Rose is not Professor David Edelman, but his father Gerald, a famous neuroscientist who was awarded a Nobel Prize in 1972.)

(c) What’s scientifically contentious about the Cambridge Declaration on Consciousness?

The key differences between Dr. James Rose’s summary of the neurological literature on consciousness and the statements made in the Cambridge Declaration on Consciousness are as follows:

(i) the contemporary neurological literature on consciousness recognizes only two kinds of consciousness – primary and higher-order consciousness. What the authors of the Cambridge Declaration are really proposing, when they talk about “similar affective states” in humans and non-human animals and “shared primal affective qualia,” is the existence of a third and more basic kind of consciousness, which one of the signatories, Jaak Panksepp, commonly refers to as affective consciousness, and which the signatories of the Declaration claim is found in a wide variety of animals;

(ii) the authors of the Cambridge Declaration dispute the commonly accepted view that a functioning neocortex is a requirement for consciousness, and argue that subcortical regions of the brain are sufficient to support consciousness, even in the absence of a neocortex. This is very contentious, as the neocortex appears to have some unique properties – namely, (a) exceptionally high interconnectivity within the neocortex and between the cortex and thalamus, and (b) sufficient mass and local functional diversification to permit regionally specialized, differentiated activity patterns – which suggest that it plays a vital and indispensable role in supporting consciousness;

(iii) although they don’t use the term “pain” in their statement but refer instead to “rewards” and “punishments,” the authors of the Cambridge Declaration are clearly of the view that pain is experienced by a wide variety of animals, whereas according to Dr. James D. Rose, many neuroscientists think it unlikely that pain is even experienced by all mammals;

(iv) the authors of the Cambridge Declaration are willing to impute higher-order human-like consciousness even to some birds, whereas Dr. Rose thinks it is probably confined to human beings, although it may possibly exist in the great apes.

The two big logical fallacies in the Cambridge Declaration on Consciousness

I claimed above that the Cambridge Declaration on Consciousness contains two logical fallacies: one relating to the cortex of the brain, and the other relating to emotions. Let’s begin with the cortex.

The authors of the Cambridge Declaration are perfectly correct when they say that not only the neocortex, but also subcortical regions of the brain, play an important role in supporting consciousness. Hence they are quite justified in asserting that: “The neural substrates of emotions do not appear to be confined to cortical structures.”

However, it is quite another thing to claim that conscious emotions can occur even in the absence of a neocortex – which is what the authors of the Cambridge Declaration do when they claim that “human and nonhuman animal emotional feelings arise from homologous subcortical brain networks.” This is an elementary logical fallacy, and I’m not the only one who has noticed it. Peter Hankins, over at his blog, Consciousentities.com, made the same point in a post on the Cambridge Declaration (October 14, 2012):

To make matters worse the Declaration seems to come close to a clunking logical error, along the lines of: other areas than the neocortex are involved in having feelings; animals have those other areas, therefore animals have feelings. That wouldn’t work: you could as well argue that: other organs than the eye are involved in seeing; people whose eyes have been gouged out have those other organs; therefore people whose eyes have been gouged out can see. You can’t really dismiss the neocortex that easily.

This is an important point, as the neocortex is found only in mammals, and a homologue of the neocortex is only known to occur in birds. If the neocortex – or some homologue of it – is required for consciousness, then we have good grounds for saying that only these two groups of animals are conscious, as no other animals possess neural structures with anything like the same degree of interconnectivity. However, mammals and birds make up just 13,000 species, or 0.2%, out of a total of at least 7.7 million species of animals. So even if all mammals and birds are sentient, it would still follow that the vast majority of animals (99.8% of all species) are not.

Second, the document fails to explain why emotions in animals require consciousness. Human beings sometimes have unconscious emotions; why couldn’t it be like that all the time for some animals?

The Cambridge Declaration makes an enormous logical leap in its argument that a wide variety of animals have conscious feelings. It is one thing to argue, as the authors of the Declaration do, that animals have emotions, such as fear or anger. It is quite another thing to argue that animals must also have conscious feelings, as the same authors go on to assert. The former does not imply the latter.

Why is the neocortex thought to be so important for consciousness?


Anatomical subregions of the cerebral cortex. The neocortex is the outer layer of the cerebral hemispheres. It is made up of six layers, labelled I to VI (with VI being the innermost and I being the outermost). The neocortex is part of the brain of mammals. A homologous structure also exists in birds. Image (courtesy of Wikipedia) taken from Patric Hagmann et al. (2008) “Mapping the Structural Core of Human Cerebral Cortex,” PLoS Biology 6(7): e159. doi:10.1371/journal.pbio.0060159.

According to Dr. James Rose’s paper, The Neurobehavioral Nature of Fishes and the Question of Awareness and Pain (Reviews in Fisheries Science, 10(1): 1–38, 2002):

The reasons why neocortex is critical for consciousness have not been resolved fully, but the matter is under active investigation. It is becoming clear that the existence of consciousness requires widely distributed brain activity that is simultaneously diverse, temporally coordinated, and of high informational complexity (Edelman and Tononi, 1999; Iacoboni, 2000; Koch and Crick, 1999; 2000; Libet, 1999). Human neocortex satisfies these functional criteria because of its unique structural features: (1) exceptionally high interconnectivity within the neocortex and between the cortex and thalamus and (2) enough mass and local functional diversification to permit regionally specialized, differentiated activity patterns (Edelman and Tononi, 1999). These structural and functional features are not present in subcortical regions of the brain, which is probably the main reason that activity confined to subcortical brain systems can’t support consciousness. Diverse, converging lines of evidence have shown that consciousness is a product of an activated state in a broad, distributed expanse of neocortex. Most critical are regions of “association” or homotypical cortex (Laureys et al., 1999, 2000a-c; Mountcastle, 1998), which are not specialized for sensory or motor function and which comprise the vast majority of human neocortex. In fact, activity confined to regions of sensory (heterotypical) cortex is inadequate for consciousness (Koch and Crick, 2000; Lamme and Roelfsema, 2000; Laureys et al., 2000a,b; Libet, 1997; Rees et al., 2000).
(2002, PDF, Section IV, p. 6)

Nine scientific flaws in the Cambridge Declaration on Consciousness

1. At least one of the signatories of the Declaration is on the record as acknowledging that there’s no scientific proof that animals are conscious.

Professor David Edelman is one of the signatories of the Cambridge Declaration on Consciousness – indeed, he helped edit it. Yet even he acknowledges that there is currently no proof that any non-human animals are conscious. For instance, David B. Edelman, Bernard J. Baars and Anil K. Seth acknowledge this point in their paper, Identifying hallmarks of consciousness in non-mammalian species (Consciousness and Cognition 14 (2005), 169-187):

In the absence of explicit report from a first person point of view, doubt could be cast on the assumption that members of any non-human species are conscious. Such doubt may even exist in the case of primates, where part of the problem is that much of the relevant behavioral research was not initiated with any sort of generally agreed upon definition of consciousness. Over the past two decades, quite rigorous playback experiments were carefully crafted and deployed to tease out evidence of some kind of social awareness, intentionality, or even a “theory of mind” in monkeys in the wild (Bergman, Beehner, Cheney, & Seyfarth, 2003; Cheney & Seyfarth, 1990; Seyfarth & Cheney, 2003). At the same time, sophisticated laboratory-based methodologies were also deployed to test for a theory of mind in apes (Premack & Woodruff, 1978; Premack & Premack, 1984; Savage-Rumbaugh, Sevcik, Rumbaugh, & Rubert, 1985). Although these studies clearly demonstrated a highly sophisticated social intelligence (Seyfarth & Cheney, 2003) and, some might argue, even self-awareness (Gallup, 1970) among certain primates, they did not allow the conclusion that these animals were conscious.

A synthetic approach to assessing primate consciousness requires combining behavioral evidence with neuroanatomical and neurophysiological analyses. Thus, the main point supporting the case for consciousness in monkeys and apes is that they have rich discriminatory behavior along with thalamocortical systems that enable complex reentrant neuronal signaling (Seth et al., 2004). Moreover, even in the absence of the semantic capabilities shown by chimpanzees and bonobos, experiments of the kind performed by Logothetis (1998) on monkeys show that higher cortical processing results in neural responses to reported percepts, not just to sensory signals. The point of the present discussion is to suggest that this sort of synthetic methodology should provide a model for researchers pursuing consciousness studies in non-mammalian species. (pp. 180-181)

In a companion paper by A. K. Seth, B. J. Baars and D. B. Edelman, entitled, Criteria for consciousness in humans and other mammals (Consciousness and Cognition, 14 (2005), 119-139), the authors enumerate 17 distinctive properties of consciousness, and then proceed to discuss whether the presence of these properties can be tested in non-human animals, in section 3 of their paper (“Putting it all together”). The authors argue that while many of the 17 properties can be tested for in animals, others cannot at the present time. They also suggest that the list of hallmark properties of consciousness may need to be revised in the future, as scientific knowledge advances.

How, in practice, can these properties be used to test comparative predictions about consciousness? Considering this question raises the issue that the foregoing properties vary considerably in their testability. Those that have to do with structural homologies of neuroanatomy are relatively easy to test; it is not difficult to identify a thalamocortical complex in a monkey or in a dog (criterion #2). It is also relatively straightforward to test for neural dynamics generated within these structures; EEG signature (#1), widespread brain activity (#3), informativeness (#5), rapid adaptivity (#6), and neural synchrony underlying sensory binding (#9) all fall into this class. These properties can therefore be treated sensibly as testable criteria.

Empirical data that pass these criteria can establish a beachhead from which others can be evaluated. Since consciousness – whether in humans or in other animals – arises from interactions among brains, bodies, and environments, we might next consider properties that involve a behavioral component. Such properties include whether putative consciousness in an animal facilitates learning (#14), whether it can generate accurate behavioral report (#11), and whether it aids voluntary decision making (#17).

The testability of the remaining properties is less evident. Some may seem difficult to test, but with sufficient ingenuity can in fact be tested. For example, good evidence for conscious seriality (#8) comes from paradigms such as binocular rivalry, in which human subjects report perceptual alterations despite stable sensory input. Application of this paradigm to non-human animals requires a sufficiently reliable means of behavioral report (Cowey & Stoerig, 1995; Leopold, Maier, & Logothetis, 2003). Given such means, neural activity following sensory input can be separated from neural activity that follows a (putatively) conscious percept. Similar approaches can be applied to internal consistency (#7) and perhaps also to stability of conscious contents (#15).

Even so, there are some properties which do not seem currently testable. Most prominently, subjectivity (#12) is not something that seems testable in a given experiment. Rather, subjectivity is a defining property of consciousness to which empirical results may be related. In this case, the best to hope for is to indirectly infer subjectivity from a sufficiently well-validated report in conjunction with a battery of consistent brain evidence…

Along with subjectivity, the wide range of conscious contents (#4), self-attribution (#10), focus-fringe structure (#13), and allocentricity (#16) are most likely to remain as properties; they do not describe phenomena that are either present or not present in currently available empirical data…

Finally, we note that the present list should be treated as provisional. Neural theories of consciousness are young, and their further development may lead not only to migrations between properties and criteria, but also to a repopulation of the list itself.

According to Professor David Edelman, then, subjective consciousness is something we can only indirectly infer in animals.

Dr. David Edelman has also admitted that high-order consciousness may be confined to human beings. In an article entitled, Identifying hallmarks of consciousness in non-mammalian species (Consciousness and Cognition 14 (2005) 169-187), the authors, David B. Edelman, Bernard J. Baars and Anil K. Seth declared:

Higher order consciousness, which emerged as a concomitant of language, occurs in modern Homo sapiens and may or may not be unique to our species.

And yet the language of the recent Cambridge Declaration on Consciousness is much more strident:

While comparative research on this topic is naturally hampered by the inability of non-human animals, and often humans, to clearly and readily communicate about their internal states, the following observations can be stated unequivocally:

…Systems associated with affect are concentrated in subcortical regions where neural homologies abound. Young human and nonhuman animals without neocortices retain these brain-mind functions. Furthermore, neural circuits supporting behavioral/electrophysiological states of attentiveness, sleep and decision making appear to have arisen in evolution as early as the invertebrate radiation, being evident in insects and cephalopod mollusks (e.g., octopus).

…Evidence of near human-like levels of consciousness has been most dramatically observed in African grey parrots… Magpies in particular have been shown to exhibit striking similarities to humans, great apes, dolphins, and elephants in studies of mirror self-recognition.

… Evidence that human and nonhuman animal emotional feelings arise from homologous subcortical brain networks provide compelling evidence for evolutionarily shared primal affective qualia.

“Compelling evidence”? I would like to ask my readers: when was the last time you heard a bona fide scientist use hectoring language like that? Hmmm… on second thoughts, don’t ask. That’s the kind of hectoring language we are accustomed to hearing from militant Darwinists and global warming zealots. It’s a sad fact that some scientists use their position in academia as a bully pulpit. As a non-scientist, I can only say that in the long run, the tactic backfires: the audience becomes cynical.

2. Key scientists who work in the field, such as Professor Marian Stamp Dawkins, have warned against the dangers of anthropomorphism, and have argued that scientists should maintain a “militant agnosticism” on the subject of animal consciousness, in the course of their research.

Currently, there is no conclusive scientific evidence showing that any non-human animals are conscious – a point which is explicitly acknowledged by Marian Stamp Dawkins, Professor of Animal Behavior and Mary Snow Fellow in Biological Sciences, Somerville College, Oxford University. Marian Dawkins is herself sympathetic to the view that a large number of animals may be conscious. Nevertheless, she writes:

“[F]rom a scientific view, we understand so little about animal consciousness (and indeed our own consciousness) that to make the claim that we do understand it, and that we now know which animals experience emotions, may not be the best way to make the case for animal welfare. Anthropomorphism (seeing animals as just like humans) and anecdote were assuming a place in the study of animal consciousness that, it seemed to me, leaves the whole area very vulnerable to being completely demolished by logical argument…

It is, perhaps, not a comfortable conclusion to come to that the only scientific view of consciousness is that we don’t understand how it arises, nor do we know for certain which animals are conscious.
(Marian Stamp Dawkins, Professor of Animal Behavior and Mary Snow Fellow in Biological Sciences, Somerville College, Oxford University, writing in an online article entitled, Convincing the Unconvinced That Animal Welfare Matters, The Huffington Post, 8 June 2012.)

In her recently published book, Why Animals Matter: Animal consciousness, animal welfare, and human well-being (Oxford University Press, 2012), Professor Dawkins discusses the various issues relating to animal consciousness. Throughout the discussion, she maintains a skeptical outlook, because the scientific evidence is “indirect” (p. 111) and because “there is no proof either way about animal consciousness and … it does not serve animals well to claim that there is” (p. 112). Summarizing the data surveyed, she writes:

The mystery of consciousness remains. The explanatory gap is as wide as ever and all the wanting in the world will not take us across it. Animals and plants can ‘want’ very effectively with never a hint of consciousness, as we can see with a tree wanting to grow in a particular direction. Preference tests, particularly those that provide evidence that animals are prepared to pay ‘costs’ to get what they want, are perhaps the closest we can get to what animals are feeling, but they are not a magic entry into consciousness. They do not solve the hard problem for us because everything that animals do when they make choices or show preferences or even ‘work’ to get what they want could be done without conscious experience at all. We have seen (Chapters 4 and 5) just how much we humans do unconsciously and how powerful our unconscious minds are in making decisions and even in having emotions. What is good enough for us may well be good enough for other species.

In the case of other humans, we use words to ask them what they are feeling, and use what they say as a reasonable working substitute for direct knowledge of what they are experiencing. Preference tests and their variations could be seen as the animal equivalents of asking people in words and it is tempting to say that they are as good as words, if not better. So if we are happy enough to use words as a rickety bridge across the chasm, why not use preference tests, choice, and operant conditioning to do the same for animals? This argument seems particularly compelling when we look at the evidence that animals will choose to give themselves the same drugs that we know have pain-relieving or anxiety-relieving properties in ourselves. Isn’t this direct evidence for conscious experience of pain in animals? Doesn’t this show that their experience of pain is like ours, not just in the external symptoms that they show but also in what they feel?

… The similarity between the behavioral responses of animals and humans to such drugs make it tempting to assume that because the behavior is similar, the conscious experiences must be similar too. Of course they may be, but there is no more ‘must’ about it than in the claim that animals ‘must’ consciously experience thirst before they drink or ‘must’ consciously experience hunger while they are searching for food. They may well do so, as we saw in Chapter 8. But there is no must about it. Animal bodies have evolved by natural selection to restore imbalances of food and water and to repair wounds and other kinds of damage. Neither food deprivation nor water deprivation, nor the symptoms of inflamed joints, are necessarily accompanied by any conscious experiences at all, although they may be. Just as our wounds heal up without any conscious intention on our part and we like certain foods without knowing why, so other animals, too, have a variety of mechanisms, for repairing and restoring their bodies to proper working order. Preference and choice and ‘what animals want’ are part of those mechanisms. They may well be accompanied by conscious experiences. But then again, they may not be. Once again, our path to finding out the answer is blocked by the implacable, infuriating obstacle known as the hard problem.” (pp. 171-174)

Finally, Dawkins argues that since at the present time, scientists don’t know which (if any) animals are conscious, it is better for animal welfare advocates to refuse to commit themselves on the question of which animals are conscious: “… it is much, much better for animals if we remain skeptical and agnostic [about consciousness] … Militantly agnostic if necessary, because this keeps alive the possibility that a large number of species have some sort of conscious experiences … For all we know, many animals, not just the clever ones and not just the overtly emotional ones, also have conscious experiences.” (p. 177)

3. Philip Low, who originally authored the Cambridge Declaration on Consciousness before it was subsequently edited, has already used the Declaration for propagandistic purposes, in an interview he gave to the Brazilian magazine Veja on 16 July 2012, in which he irresponsibly claimed that scientists now know that mammals, birds and octopuses suffer.

Here’s what Dr. Low said in his interview. I’ll leave it to my readers to judge for themselves whether these are the measured words of a professional scientist:

We know that all mammals, all birds and many other creatures, like octopuses, have the nerve structures that produce consciousness. This means that these animals suffer. It’s an inconvenient truth: it was always easy to say that animals have no consciousness. Now we have a group of respected neuroscientists who study the phenomenon of consciousness, animal behavior, the neural network, anatomy and genetics of the brain. You can no longer say that we did not know.

That statement, I have to say, is an abuse of science: it creates the false impression among the public that the matter of animal consciousness is settled science. Dr. Low claims to know that animals, including octopuses, are conscious, and capable of suffering pain, even though neuroscientists of much greater eminence, such as Professor Marian Stamp Dawkins, have stated that we don’t know that any non-human animals are conscious, while others, such as Dr. James D. Rose, have stated that if animals do suffer, they suffer a lot less than we do.

4. As someone who corresponded with some of the signatories of the Cambridge Declaration on Consciousness while writing my Ph.D. thesis, I can categorically state that their views on animal consciousness are not representative of what most neuroscientists think.

I have taken a very keen interest in animal consciousness for the past fifteen years, and I can confidently state that, generally speaking, the view taken by the signatories of the Cambridge Declaration on Consciousness is considerably more liberal than that of most neuroscientists. I should add that I have corresponded with some of the authors of the Declaration, and they very kindly answered some of my queries while I was doing my thesis research. Nevertheless, it would be inaccurate to describe their views as typical of neuroscientists.

When I was putting the finishing touches to my thesis in 2007, the view that even birds (let alone reptiles or fish) are conscious in any way at all was highly contentious, and only beginning to gain respectability. It was only after the discovery of an avian homologue of the mammalian neocortex that it gained real scientific legitimacy. Meanwhile, a few researchers in the field were suggesting that octopuses, whose brains contain up to 300 million neurons, might be conscious. Finally, the view that insects might be conscious was virtually unheard of: Bruno van Swinderen and Christof Koch (who are both signatories of the Cambridge Declaration) were just about the only people propounding it. What does that tell you?

In order to convey the prevailing scientific mood just a decade-and-a-half ago, I can do no better than to cite the work of Stephen Budiansky (pictured above, courtesy of Wikipedia), a Yale and Harvard graduate who was the former Washington editor of the science journal Nature. Budiansky is the author of the best-selling book, If a Lion Could Talk: Animal Intelligence and the Evolution of Consciousness (The Free Press, 1998). Budiansky’s book was highly praised by Sir John Maddox, the Editor Emeritus of Nature, who described him as “the thinking person’s conservationist.” In the last chapter of his book, Budiansky proposes that while animals experience pain, they do not suffer. Only humans, he argues, are conscious:

Experimental evidence suggests that there is a great similarity between the unconscious thought processes of man and other animals… [W]e experience many emotions and sensations without the necessity to attach labels to them – pain, fear, hunger, thirst, surprise, pleasure, elation.

These are levels of sensations that it seems logical and justifiable to attribute to animals. Consciousness is quite another matter, though, for whether or not language causes consciousness, language is so intimately tied to consciousness that the two seem inseparable. The “monitor” that runs through our brains all the time is one that runs in language. The continual sense that we are aware of what is going on in a deliberate fashion is a sense that depends on words to give it shape and substance…

The premise of animal “rights” is that sentience is sentience, that an animal by virtue above all of its capacity to feel pain deserves equal consideration. But sentience is not sentience, and pain isn’t even pain. Or perhaps, following Daniel Dennett’s distinction, we should say that pain is not the same as suffering: “What is awful about losing your job, or your leg, or your reputation, or your loved one is not the suffering this event causes you, but the suffering this event is,” Dennett writes. Our ability to have thoughts about our experiences turns emotions into something far greater and sometimes far worse than mere pain. The multiple shades of many emotions that our language expresses reveal the crucial importance of social context – of the thoughts we have about our experiences and the thoughts we have about those thoughts – in our perception of those emotions. Sadness, pity, sympathy, condolence, self-pity, ennui, woe, heartbreak, distress, worry, apprehension, dejection, grief, wistfulness, pensiveness, mournfulness, brooding, rue, regret, misery, despair – all express shades of the pain of sadness whose full meaning comes only from our ability to reflect on their meaning, not just their feelings. The horror of breaking a limb that we experience is not merely the pain; the pain is but the beginning of the suffering we feel as we worry and anticipate the consequences. Pity and condolence and sympathy are all shades of feeling that are manifestly defined by the social context, by the mental-state attribution to another that we are capable of. Consciousness is a wonderful gift and a wonderful curse that, all the evidence suggests, is not in the realm of the sentient experience of other creatures. (1998, pp. 192-194)

I would invite the reader to contrast this language with that of the Cambridge Declaration, which ascribes “near human-like levels of consciousness” to African grey parrots, and declares that there is “compelling evidence for evolutionarily shared primal affective qualia” in human and non-human animals, before concluding that “all mammals and birds, and many other creatures, including octopuses,” possess the neurological substrates of consciousness. Ask yourselves: is there anything that has happened in the field of animal research since 1998 that would justify such an astonishing turn-around?

Let me finish with a personal observation. Several years ago, I emailed Professor Jaak Panksepp, one of the signatories of the Cambridge Declaration, asking him if there was any consensus among neuroscientists regarding the neural requirements for consciousness. On 16 June 2004, he very kindly responded, answering firmly in the negative. That was just nine years ago. If there was no consensus less than a decade ago as to what neurological criteria warrant the ascription of consciousness to animals, then why should we put our faith in the current consensus – especially when it is a consensus of no more than a dozen or so scientists?

5. I also know for a fact that the signatories of the Cambridge Declaration disagree widely even amongst themselves as to which animals are conscious.

At least one signatory is a panpsychist

One of the signatories, Dr. Christof Koch, appears to be a kind of panpsychist – someone who believes that everything is conscious to some degree or other. In an interview he gave to The Huffington Post, entitled “Consciousness is Everywhere” (15 August 2012), he stated that any integrated system possesses some level of consciousness, drawing on the ideas of Professor Giulio Tononi:

By consciousness I mean the ability to feel something, anything — whether it’s the sensation of an azure-blue sky, a tooth ache, being sad, or worrying about the deadline two weeks from now. Indeed, it may be possible that all animals share some minimal amount of sentience with people, that all animals have some feelings, however primitive…

No matter what the NCC [neural correlates of consciousness – VJT] will prove to be, a skeptic can always ask why does this particular NCC give rise to a conscious experience but not another one? The cause and effect between neuronal activity in the brain and conscious thought can seem as magical as rubbing a brass lamp and having a genie emerge. It is here that the ideas of Giulio Tononi, a psychiatrist and neuroscientist, prove crucial. He advocates for a sophisticated theory that links information to consciousness. His integrated information theory introduces a precise measure capturing the extent of consciousness called Φ (phi). Expressed in bits, phi quantifies the extent to which any system of interacting parts is both differentiated and integrated when that system enters a particular state. Any one conscious experience is both highly differentiated from any other one but also unitary, holistic. The larger the phi, the richer the conscious experience of that system. Furthermore, the theory assigns any state of any network of causally interacting parts (these neurons are firing, those are quiet) to a shape in a high-dimensional space.

Integrated information makes specific predictions about which brain circuits are involved in consciousness and which one are peripheral players, even though they might contain many more neurons. The theory should allow clinicians to build a consciousness-meter to assess, in a quantitative manner, the extent to which severely brain injured patients are truly in a vegetative state, versus those who are partially conscious, but simply unable to signal their pain or discomfort. Most of us will remember Terri Schiavo, the woman who came to be at the heart of such a debate.

I’ve been careful to stress that any network possesses integrated information. The theory is very explicit on this point: Any system whose functional connectivity and architecture yield a phi value greater than zero has at least a trifle of experience. This would certainly include the brains of bees. Just because bees are small and fuzzy does not mean that they cannot have subjective states. So, the next time a bee hovers above your breakfast, attracted by the golden nectar on your toast, gently shoo her away. She might be a fellow sentient being, experiencing her brief interlude in the light.

[Update:]
In an article in Scientific American (August 18, 2009), entitled, A “Complex” Theory of Consciousness, Koch is even more explicit about his belief that all systems are conscious to some degree:

Measured in bits, Φ denotes the size of the conscious repertoire associated with any network of causally interacting parts. Think of Φ as the synergy of the system. The more integrated the system is, the more synergy it has, the more conscious it is. If individual brain regions are too isolated from one another or are interconnected at random, Φ will be low. If the organism has many neurons and is richly endowed with specific connections, Φ will be high – capturing the quantity of consciousness but not the quality of any one conscious experience…

At least in principle, the incredibly complex molecular interactions within a single cell have nonzero Φ. In the limit, a single hydrogen ion, a proton made up of three quarks, will have a tiny amount of synergy, of Φ. In this sense, IIT is a scientific version of panpsychism, the ancient and widespread belief that all matter, all things, animate or not, are conscious to some extent. Of course, IIT does not downplay the vast gulf that separates the Φ of the common roundworm Caenorhabditis elegans with its 302 nerve cells and the Φ associated with the 20 billion cortical neurons in a human brain.

The theory does not discriminate between squishy brains inside skulls and silicon circuits encased in titanium. Provided that the causal relations among the transistors and memory elements are complex enough, computers or the billions of personal computers on the Internet will have nonzero Φ. The size of Φ could even end up being a yardstick for the intelligence of a machine.

What more need I say? Professor Koch openly acknowledges here that he embraces a form of panpsychism, and he is prepared to say that hydrogen ions and desktop computers are conscious to some degree. Suffice it to say that Koch’s opinions are not shared by the other signatories of the Cambridge Declaration, who were only prepared to impute consciousness to animals. And Koch’s views are certainly not typical of neuroscientists around the world.
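Tononi’s Φ itself is defined over all possible partitions of a system and is far too expensive to compute for real brains, but the basic idea of “measuring integration in bits” can be illustrated with a toy calculation. The Python sketch below is entirely my own illustration, not Tononi’s actual measure: it computes the multi-information (sum of marginal entropies minus joint entropy) of a hypothetical two-unit binary “network” from a handful of invented sample states. Multi-information is zero when the units behave independently, and grows as their states become more correlated:

```python
import math
from collections import Counter

def entropy(counts, n):
    """Shannon entropy, in bits, of an empirical distribution."""
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Invented observations of a two-unit binary "network" (illustration only).
samples = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 1), (1, 0), (1, 1), (0, 0)]
n = len(samples)

joint = Counter(samples)               # joint distribution of both units
marg0 = Counter(s[0] for s in samples) # marginal distribution of unit 0
marg1 = Counter(s[1] for s in samples) # marginal distribution of unit 1

# Multi-information: marginal entropies minus joint entropy (in bits).
# Zero iff the two units are statistically independent.
multi_info = entropy(marg0, n) + entropy(marg1, n) - entropy(joint, n)
print(round(multi_info, 3))
```

The point of the sketch is only that “integration” can be given a precise, bitwise definition; whether any such number measures consciousness is, of course, exactly what is in dispute.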

Disagreement among the signatories about consciousness in insects

A honey bee pollinating a flower. Picture courtesy of Louise Docker and Wikipedia.

On the subject of consciousness in insects, the signatories hold extremely divergent views. One of the signatories of the Cambridge Declaration on Consciousness, Christof Koch, defends the view that honeybees might well be conscious (“What is it like to be a bee?”, Scientific American Mind, December 2008/January 2009, pp. 18-19). Koch has also suggested in his book, The Quest for Consciousness (Roberts and Company, Colorado, 2004, p. 320), that even fruit flies might be conscious.

Writing from a similar perspective, another signatory, Bruno van Swinderen, has argued that even fruit flies are conscious, because their brains are capable of assigning salience to a stimulus on the basis of odor, heat or novelty. Specifically, salient stimuli evoked local field potentials in the 20-30 Hz range (van Swinderen, B. and Greenspan, R., “Salience modulates 20-30 Hz brain activity in Drosophila”, Nature Neuroscience, Vol. 6, No. 6, June 2003, pp. 579-586).

By contrast, Cambridge Declaration signatory Dr. David Edelman rejects the view that honeybees are conscious. In a personal email dated 19 July 2004, he informed me that in his opinion, honeybees, despite their impressive cognitive feats, are incapable of consciousness, as their brains are too small. In his view, it was not likely that the interaction of a mere one million neurons (as in, say, the brain of a honeybee) would yield something we would call consciousness. (A fruit fly has even fewer neurons in its brain.) He also added that insects did not appear to possess any of the three distinguishing neurological properties of consciousness identified in mammals and birds by Seth, Baars and Edelman (2005), which are quoted under point 6 below.

Disagreement about consciousness in cephalopods, too

Professor Jaak Panksepp has done excellent work on emotions in mammals, demonstrating that “There is good biological evidence for at least seven innate emotional systems ingrained within the mammalian brain” (1998, p. 47). Professor Panksepp has also argued for the view that a form of affective consciousness exists not only in mammals, but also in birds and reptiles. However, he has also pointed out to me that the amygdala, which is thought to be responsible for the emotional evaluation of stimuli (Moren and Balkenius, 2000), is confined to vertebrates (Panksepp, personal communication, 11 April 2004). His own work on affective consciousness carries over well to reptiles, and perhaps to other vertebrates, insofar as they possess basal ganglia – the deepest layer of the forebrain, where behavioral responses related to seeking, fear, anger and sexual lust originate. However, these structures are found in all vertebrates, and only in vertebrates.

Thus Panksepp’s own work on affective consciousness would imply that cephalopods are not conscious, whereas the Cambridge Declaration on Consciousness states that they are! Whom should we believe, then?

6. Some of the signatories of the Cambridge Declaration have emailed me in the past, expressing views which are at variance with statements made in the Declaration.

One of the signatories of the Cambridge Declaration, Professor David Edelman, was a co-author of a 2005 study which claimed to identify three neurological hallmarks of what its authors called primary consciousness in animals. According to the paper by A. K. Seth, B. J. Baars and D. B. Edelman, entitled, Criteria for consciousness in humans and other mammals (Consciousness and Cognition, 14 (2005), 119-139), primary consciousness has three distinguishing features at the neurological level, which highlight the important role played by the cortex in supporting consciousness:

Physiologically, three basic facts stand out about consciousness.

2.1. Irregular, low-amplitude brain activity

Hans Berger discovered in 1929 that waking consciousness is associated with low-level, irregular activity in the raw EEG, ranging from about 20-70 Hz (Berger, 1929). Conversely, a number of unconscious states – deep sleep, vegetative states after brain damage, anesthesia, and epileptic absence seizures – show a predominance of slow, high-amplitude, and more regular waves at less than 4 Hz (Baars, Ramsoy, & Laureys, 2003). Virtually all mammals studied thus far exhibit the range of neural activity patterns diagnostic of both conscious states…

2.2. Involvement of the thalamocortical system

In mammals, consciousness seems to be specifically associated with the thalamus and cortex (Baars, Banks, & Newman, 2003)… To a first approximation, the lower brainstem is involved in maintaining the state of consciousness, while the cortex (interacting with thalamus) sustains conscious contents. No other brain regions have been shown to possess these properties… Regions such as the hippocampal system and cerebellum can be damaged without a loss of consciousness per se.

2.3. Widespread brain activity

Recently, it has become apparent that conscious scenes are distinctively associated with widespread brain activation (Srinivasan, Russell, Edelman, & Tononi, 1999; Tononi, Srinivasan, Russell, & Edelman, 1998c). Perhaps two dozen experiments to date show that conscious sensory input evokes brain activity that spreads from sensory cortex to parietal, prefrontal, and medial-temporal regions; closely matched unconscious input activates mainly sensory areas locally (Dehaene et al., 2001). Similar findings show that novel tasks, which tend to be conscious and reportable, recruit widespread regions of cortex; these tasks become much more limited in cortical representation as they become routine, automatic and unconscious (Baars, 2002)…

Together, these first three properties indicate that consciousness involves widespread, relatively fast, low-amplitude interactions in the thalamocortical core of the brain, driven by current tasks and conditions. Unconscious states are markedly different and much less responsive to sensory input or endogenous activity.
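The first of these criteria – fast, low-amplitude activity in waking, versus slow, high-amplitude waves in unconscious states – is an objectively measurable property of a signal, and can be illustrated with a toy frequency analysis. In the Python sketch below, everything is my own invention for illustration: two pure sinusoids stand in for “awake” (40 Hz, low amplitude) and “deep sleep” (2 Hz, high amplitude) EEG traces, and a naive one-bin discrete Fourier transform asks whether the 20-70 Hz waking range or the sub-4 Hz slow-wave range dominates (the band edges are taken from the passage quoted above):

```python
import cmath
import math

FS = 256  # sampling rate in Hz; we analyze one second of synthetic "EEG"
N = FS

def power_at(signal, freq):
    """Power of `signal` at `freq` Hz, via a naive one-bin DFT."""
    coeff = sum(x * cmath.exp(-2j * math.pi * freq * n / FS)
                for n, x in enumerate(signal))
    return abs(coeff) ** 2 / N

# Invented stand-ins for real recordings (pure sinusoids, illustration only):
awake = [0.2 * math.sin(2 * math.pi * 40 * n / FS) for n in range(N)]   # fast, low-amplitude
asleep = [2.0 * math.sin(2 * math.pi * 2 * n / FS) for n in range(N)]   # slow, high-amplitude

def dominant_band(signal):
    slow = sum(power_at(signal, f) for f in range(1, 4))      # < 4 Hz
    waking = sum(power_at(signal, f) for f in range(20, 71))  # 20-70 Hz
    return "waking-like" if waking > slow else "sleep-like"

print(dominant_band(awake), dominant_band(asleep))
```

Real EEG classification is vastly messier than this, of course; the point is only that the diagnostic Seth, Baars and Edelman describe is a measurable feature of brain activity, not a subjective judgment.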

David Edelman and Giulio Tononi's top-down sensory model of consciousness (summarized in Butler, Manger, Lindahl and Arhem, 2005) does posit that the brain's limbic system structures – specifically, the septal regions, amygdala, and hippocampus, which are related to emotion and learning – are part of the brain's mechanism for generating consciousness. However, their model does not treat the limbic system as the site of an independent center of consciousness.

The Cambridge Declaration on Consciousness, on the other hand, states “unequivocally” that affective states of consciousness (emotions) can occur even in the complete absence of a neocortex, and that this affective consciousness arises from subcortical brain structures:

The neural substrates of emotions do not appear to be confined to cortical structures… Systems associated with affect are concentrated in subcortical regions where neural homologies abound. Young human and nonhuman animals without neocortices retain these brain-mind functions… Evidence that human and nonhuman animal emotional feelings arise from homologous subcortical brain networks provide compelling evidence for evolutionarily shared primal affective qualia.

In other words, Professor David Edelman seems to have reversed his position. A few years ago, he apparently believed that a neocortex was essential for consciousness; now he believes that consciousness can occur in its absence. One might reasonably ask: why the turn-around? Have there been any new findings in the field of consciousness research that would warrant such a change of position? I have to say that there have been none that I am aware of. The only really significant news in the past few years is the recent discovery (reported in Science Daily, October 1, 2012) that the avian dorsal ventricular ridge (DVR) is homologous to the mammalian neocortex. That’s good news for bird-lovers, but it tells us nothing regarding whether fish, or cephalopods, let alone insects, are conscious. One might also mention the growing evidence for not only behavioral sleep but also brain sleep in invertebrates. Nevertheless, it remains true that apart from mammals, birds are the only group of animals known to unequivocally exhibit both slow-wave sleep (SWS) and REM sleep. In short: there have been no new developments in the field of animal consciousness which would warrant the conclusion that there’s compelling evidence, or even probable evidence for consciousness in creatures apart from mammals and birds.

7. The Cambridge Declaration on Consciousness goes far beyond the available evidence in assigning human-like levels of consciousness to parrots.

The Cambridge Declaration on Consciousness contains the following sentence, which I consider objectionable on scientific grounds:

“Evidence of near human-like levels of consciousness has been most dramatically observed in African grey parrots.” (Bold emphasis mine – VJT.)

This statement is an exaggeration, which goes far beyond the available evidence.

Neuroscientists commonly distinguish between primary and higher-order forms of consciousness. In his essay, The Neurobehavioral Nature of Fishes (Reviews in Fisheries Science, 10(1): 1-38, 2002), Dr. James Rose defines higher-order consciousness as follows:

“Higher-order consciousness includes awareness of one’s self as an entity that exists separately from other entities; it has an autobiographical dimension, including a memory of past life events; an awareness of facts, such as one’s language vocabulary; and a capacity for planning and anticipation of the future.”

Primary consciousness, on the other hand, “refers to the moment-to-moment awareness of sensory experiences and some internal states, such as emotions.”

While there is an impressive array of evidence that non-human animals (especially mammals, but also probably birds and just possibly cephalopods) possess primary consciousness, there is no solid evidence that any non-human animal possesses higher-order consciousness.

That includes chimps. If you want to see how stupid chimps can be, just watch this video.

Anyone who thinks I am being unfair to chimps might like to have a look at Dr. Daniel Povinelli’s Web page. Povinelli has argued that there’s no good evidence to date indicating that nonhuman primates are capable of reasoning about unobservable mental states of others (e.g. perceptions, desires, and beliefs), or about unobservable aspects of physical interactions (e.g. gravity, force and mass). Nor do they appear to have a concept of self like ours.

When we come to birds, the evidence for higher-order consciousness is very weak. Indeed, just a few years ago, most neuroscientists (including Dr. James Rose, whose article I cited above) were inclined to deny that birds were even capable of primary consciousness, let alone higher-order consciousness. That was before the recent discovery that the avian dorsal ventricular ridge (DVR) is homologous to the mammalian neocortex.

Alex, the late African grey parrot, displayed comprehension of a number of simple concepts when spoken to, but he never demonstrated full facility with the English language, and researchers were never able to verify the presence of higher-order consciousness in him.

While African grey parrots can answer simple questions relating to how many objects are present nearby, and what kinds of objects they are, no parrot has ever been shown to be capable of planning for tomorrow, let alone thirty years from now (long-range planning).

A European magpie in Helsinki, Finland. Picture courtesy of Teemu Lehtinen and Wikipedia.

The European magpie (not the African grey parrot) is the only bird which has ever been shown to pass the mirror test, which is commonly used to assess self-awareness. Even this is generous: all the test really establishes, as the philosopher Michael P. T. Leahy has pointed out, is awareness of one's body, as distinct from other bodies. It does not establish self-awareness.

No parrot or crow has ever given an interview in which it told its life story. Autobiographical memory is, as far as we are aware, unique to human beings.

And while the tool-making abilities of New Caledonian crows like Betty are indeed impressive, no crow has ever been able to explain why the solution it chose was the best. Nor is there even a smidgin of evidence that birds instruct their young in the art of tool-making, by telling them why one way of making tools is better than another. When discussing how chicks learn from their parents, it would be better to speak of imitation rather than “instruction.”

What’s more, there is good experimental evidence suggesting that even clever animals like chimpanzees (see this video) and elephants (see this one) lack a theory of mind. A chimpanzee, for instance, is incapable of realizing that a man with a bucket over his head cannot see anything, while an elephant can be easily fooled by a scarecrow. Indeed, primate researchers Derek Penn and Daniel Povinelli have written a paper entitled, On the lack of evidence that non-human animals possess anything remotely resembling a ‘theory of mind’ (Philosophical Transactions of the Royal Society B, 362, 731-744, doi:10.1098/rstb.2006.2023) in which they not only discuss the abilities of chimpanzees but also those of corvids (crows and related birds), and carefully explain why there is no reason to suppose that these animals have the capacity to impute mental states to others. At first sight, the evidence for a theory of mind in these birds looks convincing:

Corvids are quite adept at pilfering the food caches of other birds and will adjust their own caching strategies in response to the potential risk of pilfering by others. Indeed, not only do they remember which food caches were observed by competitors, but also they appear to remember the specific individuals who were present when specific caches were made and modify their re-caching behaviour accordingly (Dally et al. 2006).

However, the experiments performed to date suffer from a crucial flaw, as Penn and Povinelli point out: “Unfortunately, none of the reported experiments with corvids require the subjects to infer or encode any information that is unique to the cognitive perspective of the competitor.” The authors argue that simple rules can explain the birds’ behavior:

In all of the experiments with corvids cited above, it suffices for the birds to associate specific competitors with specific cache sites and to reason in terms of the information they have observed from their own cognitive perspective: e.g. ‘Re-cache food if a competitor has oriented towards it in the past’, ‘Attempt to pilfer food if the competitor who cached it is not present’, ‘Try to re-cache food in a site different from the one where it was cached when the competitor was present’, etc. The additional claim that the birds adopt these strategies because they understand that ‘The competitor knows where the food is located’ does no additional explanatory or cognitive work. (Emphasis mine – VJT.)

Penn and Povinelli also propose two carefully controlled experiments which could provide evidence of a “theory of mind” in non-human animals. Even adult chimpanzees who were used to interacting with human beings failed the first experiment proposed by the authors, while 18-month-old human infants passed the same test.

I conclude that the claim that some birds possess “near human-like levels of consciousness” is completely overblown.

8. The majority of neuroscientists currently working in the field of animal consciousness would disagree with the Cambridge Declaration’s contentious claim that a primitive affective consciousness (or emotional awareness) can be found in a wide variety of animals that lack a cortex, including insects.

The very legitimacy of the term "affective consciousness" is hotly disputed. In their exhaustively researched 2012 paper, "Can fish really feel pain?" (Fish and Fisheries, doi: 10.1111/faf.12010), the authors J. D. Rose, R. Arlinghaus, S. J. Cooke, B. K. Diggles, W. Sawynok, E. D. Stevens and C. D. L. Wynne describe the terms currently used by neuroscientists to describe consciousness in humans and animals:

Although the exact terminology has varied from writer to writer, two principal manifestations of consciousness have long been recognized to exist in humans: (i) primary consciousness, the moment-to-moment awareness of sensory experiences and some internal states such as feelings and (ii) higher-order consciousness also called access consciousness or self-awareness (Macphail 1998; Damasio 1999; Edelman and Tononi 2000; Cohen and Dennett 2011; De Graaf et al. 2012; Vanhaudenhuyse et al. 2012). Higher-order consciousness includes awareness of one’s self as an entity that exists separately from other entities; an autobiographical dimension, including memory of past life events; an awareness of facts, such as one’s language vocabulary; and a capacity for planning and anticipation of the future. Differing components of neocortex and associated cingulate gyrus mesocortex have recently been shown to mediate these two forms of consciousness (Vanhaudenhuyse et al. 2012). Additional categories and subdivisions of consciousness have been proposed as well (e.g. medical awareness, De Graaf et al. 2012) but additional definitions and categorizations of consciousness remain a source of controversy (Baars and Laureys 2005; Overgaard et al. 2008). [Emphases mine – VJT]

In addition to primary consciousness and higher-order consciousness, a few neuroscientists believe there is evidence for a more primitive kind of consciousness in animals, which they call affective consciousness, as distinct from the cognitive consciousness (generated by the cerebral cortex) that processes sensory inputs. These neuroscientists propose that this affective consciousness relates to primitive emotional states, such as fear and rage. Some of the signatories of the Cambridge Declaration on Consciousness – notably Dr. Jaak Panksepp – espouse this view. Panksepp maintains that this affective consciousness can be found in reptiles, as well as mammals and birds. He also postulates the existence of "subcortical" consciousness in decorticate rats (rats whose neocortex has been removed).

However, most neuroscientists would disagree with Panksepp regarding his claim that decorticate rats are conscious. They point out that one of the hallmarks of consciousness is its unified nature, i.e. the fact that we experience everything as an integrated whole. The reason why this is possible is the presence of the neocortex, with its massive horizontal inter-connectedness, enabling association cortical regions such as the prefrontal cortex to receive broad, convergent information and have it all available simultaneously. Unfortunately, subcortical forebrain and brainstem systems do not possess this kind of connectivity.

In my thesis, I summarized Panksepp’s view, along with the reasons why many neuroscientists continue to reject it:

The majority of neurologists consider primary consciousness to be the most basic form of subjective awareness. However, a few authors such as Panksepp (1998, 2001, 2003f) and Liotti and Panksepp (2003) have proposed that we possess two distinct kinds of consciousness: (i) cognitive consciousness, which includes perceptions, thoughts and higher-level thoughts about thoughts and requires a neocortex (a six-layered structure in the brain which comprises the bulk of the brain’s outer shell or cerebral cortex – the neurological consensus (Nieuwenhuys, 1998; Rose, 2002a, p. 6) is that only mammals possess this laminated structure in its developed form), and (ii) affective consciousness which relates to our feelings and arises within the brain’s limbic system, with the anterior cingulate cortex playing a pivotal role. Panksepp considers affective consciousness to be the more primitive form of consciousness. It is certainly true that the neural processing for cognitive and emotional responses in humans and other animals is quite distinct, which refutes the view that emotions are simply (conscious or unconscious) cognitions (LeDoux, 1999, p. 69). The term “limbic system” has been criticised as outdated by some neurologists (e.g. LeDoux, 1999, pp. 98-103); however, Panksepp defends it as a useful heuristic concept (1998, pp. 57, 71, 353)… (Thesis, p. 91.)

The limbic system (and in particular, the anterior cingulate cortex) (see Figure 1.3) has been proposed by other authors (Panksepp, 1998, 2001, 2003f; Liotti and Panksepp, 2003) as the site of a primitive affective consciousness, as distinct from the cognitive consciousness (generated by the cerebral cortex) that processes sensory inputs. However, the notion that the brain has an autonomous centre of consciousness residing in the limbic system is a highly contentious one, as it appears to conflict with brain monitoring data cited above (Roth, 2003, p. 36). Additionally, the very term “limbic system” has been attacked as outdated by some scientists (LeDoux, 1998, pp. 98-103), although Panksepp (1998, pp. 57, 71, 353) defends it as a useful heuristic concept. Moreover, Panksepp’s assertion that the anterior cingulate cortex (ACC) forms part of a “limbic region” which is separate from the cerebral cortex has been contested by Allman, Hakeem, Erwin, Nimchimsky and Hof (2001), who argue on anatomical grounds that the ACC is actually part of the cerebral cortex, as it also has a complex layered structure. Finally, the anterior cingulate cortex, like the neocortex, is peculiar to mammals – a fact which creates difficulties for the hypothesis that the emergence of affective consciousness in evolutionary history predated the appearance of the mammalian neocortex. (Thesis, pages 103-104.)

9. The notion that consciousness evolved in parallel in vertebrates and octopuses, as the Declaration suggests, is highly problematic on anatomical grounds.

Octopus opening a container with a screw cap. Picture courtesy of Matthias Kabel and Wikipedia.

[This section has been updated – VJT.]

I am, of course, aware that octopuses are very clever creatures. But as the reader will be aware by now, cleverness does not necessarily imply consciousness. A fair-minded summary of the current state of research into cephalopod consciousness can be found in Lindsay Jordan's 2007 UC Davis term paper, What Lurks Beneath the Depths: Does Cephalopod Consciousness Exist?. Jordan acknowledges, though, that some of the more extravagant claims made for octopuses' mental feats (e.g. observational learning and play) are scientifically controversial.

Nevertheless, there are profound structural dissimilarities between the brains of vertebrates and octopuses. It would be very odd indeed if two such fundamentally different structures could both independently generate a specialized capacity like consciousness. As Dr. James Rose put it in his article, The Neurobehavioral Nature of Fishes and the Question of Awareness and Pain (Reviews in Fisheries Science, 10(1): 1–38, 2002):

It is a well-established principle in neuroscience that neural functions depend on specific neural structures. Furthermore, the form of those structures, to a great extent, dictates the properties of the functions they subserve. If the specific structures mediating human pain experience, or very similar structures, are not present in an organism’s brain, a reasonably close approximation of the pain experience can not be present. If some form of pain awareness were possible in the brain of a fish, which diverse evidence shows is highly improbable, its properties would necessarily be so different as to not be comparable to human-like experiences of pain and suffering.

If this is true even for fish, then how much more so for invertebrates such as cephalopods and insects, whose brains are even more radically different from ours?

To be fair, I should point out that in their paper, Identifying hallmarks of consciousness in non-mammalian species (Consciousness and Cognition 14 (2005), 169–187), Edelman, Baars and Seth have attempted to argue that the brains of cephalopods may be analogous to those of vertebrates. But even they acknowledge that there are profound underlying differences:

A peculiarity of the octopus nervous system is the density of neurons located in the tentacles, which taken together, exceeds the total number of neurons in the brain itself (Young, 1971). Consistent with this fact, a recent study showed that a detached octopus arm could be made to flail realistically when stimulated with short electrical pulses (Sumbre, Gutfreund, Fiorito, Flash, & Hochner, 2001)… [I]n a detached vertebrate limb it is simply not possible to produce the suite of coordinated movements that is characteristic of complex vertebrate locomotion. In contrast, what is striking about the octopus is the sophistication of the semi-autonomous neural networks in its tentacles and their local motor programs…

Identification of higher levels of neural organization in cephalopods poses even more profound challenges. Cell assemblies, modules, cortical columns, thalamocortical loops, blobs (well characterized in the cortex by cytochrome oxidase labeling; see Wong-Riley, 1989) and neuronal groups have been variously defined in mammals as groups of cells that share similar structure and/or function and that exist in large numbers within a particular defined region of the brain (i.e., cortex, nuclei, and ganglia) (Hebb, 1949; Izhekevich, Gally, & Edelman, 2004; Leise, 1990; Mountcastle, 1978; Szentagothai, 1975). In its most restricted definition, a column has been described as the smallest functional module of the mammalian neocortex (Mountcastle, 1978). A broader notion of the neural module, suggested by Leise (1990), would recognize large concentrations of neuropil in invertebrates as being functionally something like the so-called minicolumn (Buxhoeveden & Casanova, 2002). The validity of such an analogy remains untested, but it would perhaps be useful to search for concentrations of closely bundled elements in the cephalopod brain. Their discovery might indicate a region that has properties similar to isocortex in mammals (isocortex refers to the larger part of mammalian cortex that is relatively uniform in its histology). The layout of such modules might yield some insight into how functional neural maps are organized in the cephalopod brain…

Apart from their extraordinary behavioral repertoires, perhaps the most suggestive finding in favor of precursor states of consciousness in at least some members of Cephalopoda is the demonstration of EEG patterns, including event related potentials, that look quite similar to those in awake, conscious vertebrates (Bullock & Budelmann, 1991)… An obvious prerequisite to identifying cephalopod EEG patterns that reflect the signature of fast irregular activity, similar to that observed in human conscious states, will be to determine precisely where to record from. It is possible that the optic, vertical, and superior lobes of the octopus brain are relevant candidates and that they may function in a manner analogous to mammalian cortex…

Clearly, a strong case for even the necessary conditions of consciousness in the octopus has not been made. (pp. 178-180)

Given the present state of evidence, I think it would be decidedly premature to impute consciousness to cephalopods. The case for consciousness in birds is much stronger, as the avian dorsal ventricular ridge (DVR) is now known to be homologous (rather than merely analogous) to the mammalian neocortex.

Summary

To sum up: I contend that the Cambridge Declaration on Consciousness is a politically motivated stunt. The authors of the Declaration have good (but by no means compelling) grounds for affirming the existence of consciousness in mammals and birds. They are on much shakier ground when they impute consciousness to other animals, such as cephalopods (especially octopuses) and perhaps even insects. At the present time, it appears extremely unlikely that insects are conscious, and highly unlikely that even fish are conscious, for reasons I’ll discuss in a future post.

Can the Cambridge Declaration on Consciousness be used to discredit statements made by Professor William Lane Craig on animal suffering?

Regarding the controversy surrounding remarks made by Professor William Lane Craig on animal consciousness, I have already discussed Craig's scientific errors in my previous post. However, the video released on October 3, 2012, entitled Can animals suffer? Debunking the philosophers who say no, from Descartes to William Lane Craig, made by an online skeptic called Skydivephil (whose real name is Phil Harper) and his friend Monica, goes too far in the opposite direction when it cites the Cambridge Declaration on Consciousness in support of the view that consciousness is widespread among animals. To put it bluntly, the Cambridge Declaration on Consciousness is junk science – and that's inexcusable, coming from a group of internationally respected scientists. Craig at least has the excuse of being a philosopher, and not a scientist.

Would the lack of self-awareness in non-human animals resolve the theological problem of animal suffering, as Professor William Lane Craig contends?

Before I conclude this post, I’d just like to go back to Professor William Lane Craig’s claim that if animals are not self-aware, then the theological problem of animal suffering disappears. Craig was viciously attacked for making that statement. Personally, I think that some animals possess a rudimentary self-awareness. But let’s suppose Craig is right, and that there are no animal “selves.” What follows then? Suppose that animal suffering occurs. Whose suffering is it? Quite literally: nobody’s. There’s no “self” undergoing the suffering. But if there’s nobody suffering, then how can the existence of this suffering possibly undermine the goodness of God? And on that contentious note, I’ll finish.

Comments
My cat causes me more pain than I cause him.

Mung
February 18, 2013, 01:46 PM PDT
Hi Axel and tjguy,

Thanks for your comments. I completely agree with you that it's obvious to any normal person that animals do indeed feel pain. What I would maintain, however, is that using the methods of science, it's not at all obvious. The conclusion I draw is that not all of our knowledge is scientific.

I would also agree with your claim, Axel, that some animals are capable of feeling empathy. I've witnessed this myself. If that's true, then some animals must possess at least a rudimentary self-awareness. Once again, though, I can't prove that scientifically. In fact, I can't even show it to be a scientifically probable conclusion.

tjguy, I'd just like to point out that ID has no position on animal pain. ID proponents hold a variety of views on the subject. You also argue that the doctrine of common descent would imply that "the Creator created everything using the cruel process of natural selection that requires death, bloodshed, disease, and suffering." Two points in reply: first, natural selection cannot be called "cruel" unless it is cruel to someone. So if you claim that natural selection is cruel, you are not only saying that animals suffer; you are also ascribing "self-hood" to animals, which Professor William Lane Craig does not. In other words, you are asserting that animals are "morally significant others." I should also point out that many scientists still consider self-awareness to be a uniquely human trait, although others disagree.

Second, if it is indeed your view that some non-human animals are self-aware – and it's a view which I'm inclined to share, by the way – then the only adequate theodicy is one which includes some kind of animal immortality. Even if God is not responsible for animal suffering, He still lets it happen every day. If animals are "morally significant others" who matter in their own right, then they are (in Kantian terms) ends-in-themselves. Hence there is no "higher end" which would justify an omnipotent Being in allowing them to suffer, and then be annihilated. In other words, if animals are "others," then creationism won't get God off the hook, morally speaking. Regardless of whether animal suffering is ultimately caused by God, Satan, or Adam, any sentient animals who are also self-aware must be in some way recompensed in the hereafter, and the wrongs they have suffered righted. As I mentioned in a previous post, John Wesley and C. S. Lewis believed in a form of animal immortality, so the view has some theological legitimacy. Nevertheless, there are logistical problems with this view – for instance, would immortal animals still reproduce, and would God continue to make more room for their descendants? If they don't reproduce, what do they do instead? I discuss the problem of animal suffering (including Aquinas' and Dembski's theodicies) at further length here (sections 5 to 7). You might be interested in taking a look.

To conclude: regarding the theological problem of animal suffering, all we know is that we do not know. And that includes the skeptic, who tries to twist the problem into an argument against God.

vjtorley
February 18, 2013, 01:02 PM PDT
I think maybe some of the guys on here are just trying to 'get their foot in the door' of the consensus, tjguy; pandering to the 'scientismificists' by conceding or glossing over some of their scientism. I think that's the case anyway. 'It goes with the territory' for this blog. You know the saying: 'If you argue with a fool you just get two fools.' But it just has to be done, for the sake of exposing their folly – even when they are impervious to reason, and you know you'll be talking past each other. I believe some of the Christians are said to believe in Common Descent, though, which, in the light of this and associated blogs, seems very foolish.

Axel
February 17, 2013, 04:19 PM PDT
So if I understand the argument correctly, you are saying that it cannot be scientifically demonstrated that animals feel pain. When you talk about animal "suffering" you are including physical pain in the definition, right?

I'm with Axel here. If ID is going to try and deny this sort of seemingly obvious idea that animals feel pain, I think you are going to make yourselves look a bit foolish. Before defending that position, I think I would try and defend Dembski's idea of death, disease, suffering, etc. being retroactive from the Fall.

Certainly this is one of the biggest problems for Christian Bible-believing adherents to ID. It makes the Creator responsible for all the suffering in the world. It means the Creator created everything using the cruel process of natural selection that requires death, bloodshed, disease, and suffering. It is for this very reason that many Christians reject the ID view of creation that includes common descent. It also makes it hard for some non-believers to accept ID, because they don't think the ID God is worthy of faith, praise, honor, or glory.

For me, this is a big problem. As a creationist, I certainly believe in a Designer, but not the Designer of ID. I'm not willing to impugn the character of God in order to reconcile common descent with God's Word. In my view, this is one of the problems with ID. As a science, I have no problem with ID in that I agree that we can find evidence of design in living creatures, but ID normally also accepts unbiblical assertions of science as well. And this is why, in my view, it doesn't fit well with the Bible.

tjguy
February 15, 2013, 06:20 PM PDT
The highest and wisest part of our human nature is affective, not cerebral. For scientists, the crucial thing seems to be animals' inability to reflect. However, for all the animal sacrifices in the Old Testament, for Jews and Christians alike, to see animals as merely machines is definitely proscribed. In Proverbs 12:10, Solomon wrote: 'The upright has compassion on his animals, but the heart of the wicked is ruthless.' Some of the most beautiful passages in Isaiah describe a love that fills the whole earth and every living creature. Indeed, before the Fall, did not all the animals eat grass?

I don't think for a moment that William Craig sees animals as mere machines, but that, as I believe has been said, he was expounding the current state of our scientific and philosophical understanding – which in terms of the subject of this thread is clearly no great shakes on either score. And long may it remain so. The West sets far too much store by the busy-body left side of the brain.

Axel
February 14, 2013, 04:05 PM PDT
Unless scientists can come to understand the nature of mammalian empathy, never mind human empathy, a deeper understanding of their more than corporeal nature will always elude them. Compared to atheists, elephants are sublimely intelligent, in that they mourn their dead and humans they have loved; while atheists seem to want to proclaim how triumphant they feel on their own imminent departure, if Guardian articles are any guide. True, dogs have stayed by their dead masters and then eaten them, but the fact remains, it seems to me, that there is a subliminal intelligence, more like a wisdom, in man; and something akin to it might well be possessed by mammals in varying degrees, although evidently it would differ from ours in being non-reflective – except for the mourning, which really does seem sublime.

The last cat we had had been neutered, but when he developed cancer, and really needed assistance in getting down from an armchair, he adopted a very 'macho' attitude of wanting to do it on his own – even though it meant falling, losing his footing. So, in that sense, surely he had some notion of self, however non-reflective. He evidently cherished a sense of his own dignity, even machismo. God bless his wee soul.

Axel
February 14, 2013, 03:27 PM PDT
Thank you, both ciphertext and vjtorley, for explaining the issue to me so painstakingly. I did have a suspicion that the kinds of questions dealt with would, according to my necessarily general understanding, be along such lines. However, it concerns me greatly that it is now, in my view, well established empirically that the mind and feelings are not co-terminous with the brain, so that continued reference to the concept of brain death (and emotional death, whatever the medical term for that is) in scientific matters disturbs me greatly. You must remember the Terri Schiavo case in the US. Now that poor young woman was supposed to be as good as brain-dead, so they refused to allow her nuclear family to have her fed and given liquid, deliberately causing her death, against the wishes of her parents. But if you saw the photograph of her beatific smile when she saw her mother - even sitting up, with arms extended, I believe - you would know - even before this mind-body dualism was established - that she was able to respond in a manner that owed very little to autonomic intelligence. In brief, I am of the opinion that the nature of even mammals, not just human beings with human souls and free wills, is, as far as their cognition and feelings are concerned, beyond the competence of human beings to understand. It's kind of a bizarre situation. I do suspect that it is the materialists who drive this sort of enquiry, and that scientists in good faith, such as yourselves, come close to complicity in the establishment's perverse scientism. I mean, we have materialists who believe that mind, even the human mind, is an emanation from matter. And to me that explains the interest even in trying to establish weird scientific details about whether mammals experience pain and, if so, to what extent.
Empirical science is ultimately about matter - until it becomes about philosophy and theology - though I'm sure it would play a significant role in areas of psychiatry, which, of course, is itself of limited scope. Mathematics seems to have been playing a positively noxious role in economics, as Nassim Taleb depicts with great gusto. I do respect your courtesy and your explanations relating to the thread header, but am just adumbrating to you my own misgivings, such as they are.
Axel
February 14, 2013, 02:58 PM PDT
RE: vjtorley post #9 Thanks for the clarification. I have used the term nociception to mean the sensation of pain, though to be more technically correct I should have cited the proper definition. I don't think that alters the meaning of what I was attempting to convey, however.
ciphertext
February 14, 2013, 11:12 AM PDT
You may be interested in the work of Lynne Sneddon. Here's an article that seems appropriate.
Alan Fox
February 14, 2013, 08:52 AM PDT
ciphertext, Thank you for your thoughtful comments. Regarding pain, I'd just like to point out that in current scientific usage, pain is by definition something you are consciously aware of - unlike nociception, which is defined as the ability to react to noxious stimuli. As the International Association for the Study of Pain states in its widely used definition: "Pain is an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage."
vjtorley
February 14, 2013, 08:38 AM PDT
Axel, Thank you for your posts. Regarding your observation that a dog will yelp with pain, I would reply that although I have absolutely no doubt, personally speaking, that dogs feel pain, the sudden yelp that a dog makes if you step on its tail cannot be used to argue that it feels pain. As Dr. James Rose explains in his article, The Neurobehavioral Nature of Fishes and the Question of Awareness and Pain (Reviews in Fisheries Science, 10(1): 1–38, 2002):
...[T]he neural mechanisms generating nociceptive behaviors operate at lower levels of the nervous system and run their course regardless of whether there is a higher level where the conscious experience of pain is produced. (2002, PDF, Section XI, p. 23.) In all vertebrates, including humans, innate responses to nociceptive stimuli, such as limb withdrawal, facial displays, and vocalizations are generated by neural systems in subcortical levels of the nervous system, mainly the spinal cord and brainstem. Understanding that the display of behavioral responses to nociceptive stimuli does not, by itself, imply conscious awareness of pain is vital for a valid conceptualization of the neural basis of pain. For example, humans that are completely unconscious due to massive damage of the cerebral cortex can still show facial, vocal, and limb responses to nociceptive stimuli even though the experience of pain is impossible.(2002, PDF, Summary and Conclusions, pp. 27-28.)
Of course, a dog's pain repertoire is much more complex than mere yelping. The only point I wanted to make is that scientifically speaking, it's not that simple. Re your question in #6 as to whether science is an apt discipline to determine whether animals feel pain, I would answer that: (i) science can at least rule out the possibility that certain creatures feel pain; (ii) science can identify the varying degrees of similarity between different animals' neurological and behavioral responses to noxious stimuli and those of sentient human beings. We can thus say on scientific grounds that the case for sentience in mammals is much stronger than it is in fish or octopuses. I hope that helps.
vjtorley
February 14, 2013, 08:33 AM PDT
RE: Axel post #1 I don't think the issue is really whether dogs can or cannot experience pain. I think the position Dr. Torley is taking is that there isn't enough information available to argue against Dr. Craig's position using that line of reasoning. I'm sure dogs can feel pain, and likely even remember the experience as unpleasant. I would hypothesize that any trainable animal would be able to experience pain or other nociceptive stimuli. It is a mechanism used to great effect to train animals (e.g. the "invisible fence", training collars that send electric shocks, and "choke" chains). Beyond the ability to detect pain, the question then becomes "to what degree?" Obviously, they remember the pain as unpleasant and they remember the source of the stimulus. Otherwise, its use as a training aid would be worthless.
RE: Axel post #6 I think that Dr. Torley is "meeting the critics" on their own ground. They use natural science to study how animals receive, process, and react to nociceptive stimuli. What I believe Dr. Torley is saying is that the results of such studies don't provide much detail concerning the "quality" of the process. We can poke, prod, electrocute, expose to extreme temperatures, and stimulate nerve activation by other means (usually chemical); but the most we get from such studies can be summed up as follows:
1) animal X has the ability to detect the presence of nociceptive stimuli.
2) animal X has the ability to process the detection of nociceptive stimuli (think of the trainability of animals).
3) animal X has the ability to react to the presence of nociceptive stimuli.
This would hold to varying degrees across the animal kingdom as you substitute different members of the kingdom for "X". We could say that, based upon the discovery, through dissection, of nerve receptors in an animal, it has the ability to detect the presence of nociceptive stimuli.
We could even go so far as to determine which stimuli it could detect (cold, heat, electric field, etc.). We could say that a more complex animal like a dog, cat, or porpoise has, in addition to the ability to detect the stimulus, the ability to process the stimulus. After all, they can be trained based upon the presence or lack thereof of such stimuli. Additionally, we can observe how such animals react to the nociceptive stimuli. The more difficult issues arise when we consider a broader definition of "pain" than simply the condition of experiencing nociceptive stimuli to any degree. An example might be the cognitive processing of "suffering" due to what one might call "mental anguish/pain", such that the nociceptive stimulus is generated in situ by an animal's cognitive system, even though no external stimulus was presented to trigger such a response.
ciphertext
February 14, 2013, 07:54 AM PDT
It baffles me, Mr Torley, that you should imagine that science would be an apt discipline with which to verify whether animals feel pain. I mean, aside from the import of my post above.
Axel
February 14, 2013, 05:01 AM PDT
Neurobollocks!
Box
February 13, 2013, 06:05 PM PDT
You can imagine him in a debate: "NO! I won't be ordered to answer your question." He'd make Dawkins seem a star, wouldn't he?
Axel
February 13, 2013, 04:50 PM PDT
You leave Greg alone, Joe! I won't hear a word against him. He's been bullied on here for far too long. Repeatedly asking him the same questions, if you please. Just because he won't be bullied into answering them.
Axel
February 13, 2013, 04:47 PM PDT
So if we can find humans who are not self-aware, what does that mean?
Joe
February 13, 2013, 04:12 PM PDT
Has nobody heard a dog yelp with pain? If that's simplistic, philosophically, colour me simple.
Axel
February 13, 2013, 03:06 PM PDT
