
Minds, brains, computers and skunk butts


[This post will remain at the top of the page until 10:00 am EST tomorrow, May 22. For reader convenience, other coverage continues below. – UD News]

In a recent interview with The Guardian, Professor Stephen Hawking shared with us his thoughts on death:

I have lived with the prospect of an early death for the last 49 years. I’m not afraid of death, but I’m in no hurry to die. I have so much I want to do first. I regard the brain as a computer which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark.

Now, Stephen Hawking is a physicist, not a biologist, so I can understand why he would compare the brain to a computer. Nevertheless, I was rather surprised that Professor Jerry Coyne, in a recent post on Hawking’s remarks, let the comparison slide without comment. Coyne should know that there are no fewer than ten major differences between brains and computers, a fact which vitiates Hawking’s analogy. (I’ll say more about these differences below.)

But Professor Coyne goes further: not only does he equate the human mind with the human brain (as Hawking does), but he also regards the evolution of human intelligence as no more remarkable than the evolution of skunk butts, according to a recent report by Faye Flam in The Philadelphia Inquirer:

Many biologists are not religious, and few see any evidence that the human mind is any less a product of evolution than anything else, said Chicago’s Coyne. Other animals have traits that set them apart, he said. A skunk has a special ability to squirt a caustic-smelling chemical from its anal glands.

Our special thing, in contrast, is intelligence, he said, and it came about through the same mechanism as the skunk’s odoriferous defense.

In a recent post, Coyne defiantly reiterated his point, declaring: “I absolutely stand by my words.”

So today, I thought I’d write about three things: why the brain is not like a computer, why the evolution of the brain is not like the evolution of the skunk’s butt, and why the human mind cannot be equated with the human brain. Of course, proving that the mind and the brain are not the same doesn’t establish that there is an afterlife; still, it leaves the door open to that possibility, particularly if you happen to believe in God.

Why the brain is not like a computer

For readers wishing to understand why the human brain is not like a computer, I would highly recommend a 2007 blog article entitled 10 Important Differences Between Brains and Computers, by Chris Chatham, a second-year Ph.D. student in Cognitive Neuroscience at the University of Colorado, Boulder, on his science blog, Developing Intelligence. Let me say at the outset that Chatham is a materialist who believes that the human mind supervenes upon the human brain. Nevertheless, he regards the brain-computer metaphor as being of very limited value, insofar as it obscures the many ways in which the human brain exceeds a computer in flexibility, parallel processing and raw computational power, not to mention the fact that the human brain is part of a living human body.

Anyway, here is a short, non-technical summary of the ten differences between brains and computers which are discussed by Chatham:

1. Brains are analogue; computers are digital.
Digital 0’s and 1’s are binary (“on-off”). However, the brain’s neuronal processing is directly influenced by processes that are continuous and non-linear. Because early computer models of the human brain overlooked this simple point, they severely underestimated the information-processing power of the brain’s neural networks.
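
To make the contrast concrete, here is a minimal Python sketch (my own illustration, not Chatham’s): a digital gate can only snap between 0 and 1, whereas even a crude model neuron responds continuously and non-linearly to the strength of its inputs.

```python
import math

# A digital AND gate: inputs and output are strictly 0 or 1.
def and_gate(a, b):
    return 1 if (a == 1 and b == 1) else 0

# A toy "analogue" neuron: its output varies continuously and
# non-linearly with the weighted sum of its inputs, rather than
# snapping between two discrete states.
def analogue_neuron(inputs, weights):
    drive = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-drive))  # sigmoid output in (0, 1)

print(and_gate(1, 1))                           # -> 1
print(analogue_neuron([0.9, 0.4], [1.2, 0.7]))  # -> roughly 0.796
```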

2. The brain uses content-addressable memory.
Computers have byte-addressable memory, which relies on information having a precise address. With the brain’s content-addressable memory, on the other hand, information can be accessed by “spreading activation” from closely-related concepts. As Chatham explains, your brain has a built-in Google, allowing an entire memory to be retrieved from just a few cues (key words). Computers can only replicate this feat by using massive indices.
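
As a rough Python illustration of the difference (mine, not Chatham’s, with invented data): an address-based lookup needs the exact key, whereas a toy content-addressable store can retrieve an entire memory from a few overlapping cues.

```python
# Byte-addressable memory: retrieval requires the exact address (key).
ram = {0x2F04: "grandmother's kitchen, summer 1978"}
print(ram[0x2F04])  # works only if you know the precise address

# Content-addressable memory (toy version): a few cues that overlap a
# stored memory's content suffice to retrieve the whole memory.
memories = [
    {"smell of bread", "grandmother", "kitchen", "summer"},
    {"rain", "exam hall", "fountain pen"},
]

def recall(cues):
    """Return the stored memory sharing the most cues with the query
    (a crude stand-in for 'spreading activation')."""
    best = max(memories, key=lambda m: len(m & cues))
    return best if best & cues else None

print(recall({"grandmother", "summer"}))  # retrieves the full first memory
```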

3. The brain is a massively parallel machine; computers are modular and serial.
Instead of having different modules for different capacities or functions, as a computer does, the brain often uses one and the same area for a multitude of functions. Chatham provides an example: the hippocampus is used not only for short-term memory, but also for imagination, for the creation of novel goals and for spatial navigation.

4. Processing speed is not fixed in the brain; there is no system clock.
Unlike a computer, the human brain has no central clock. Time-keeping in the brain is more like ripples on a pond than a standard digital clock. (To be fair, I should add that some CPUs, known as asynchronous processors, don’t use system clocks.)
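
For readers curious what clockless computation looks like, here is a minimal sketch (all names are my own): the synchronous style advances every component on one global tick, while the event-driven style does work only when an event arrives, with each event free to schedule others at arbitrary future times, a little like ripples spreading on a pond.

```python
import heapq
from itertools import count

# Clocked (synchronous) style: every component updates on each tick
# of a single global clock, as in a conventional CPU.
def run_clocked(components, ticks):
    for t in range(ticks):
        for component in components:
            component(t)  # everything advances in lockstep

# Clockless (event-driven) style: work happens only when an event
# arrives, and each event may schedule further events at arbitrary
# future times.
def run_event_driven(initial_events):
    order = count()  # tie-breaker so the heap never compares functions
    queue = [(t, next(order), fn) for t, fn in initial_events]
    heapq.heapify(queue)
    while queue:
        t, _, fn = heapq.heappop(queue)
        for t_next, fn_next in fn(t):
            heapq.heappush(queue, (t_next, next(order), fn_next))

def echo(t):
    print("echo at t =", t)
    return []  # schedules nothing further

def ping(t):
    print("ping at t =", t)
    return [(t + 3, echo)]  # schedule an echo three time units later

run_event_driven([(0, ping)])  # -> ping at t = 0, echo at t = 3
```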

5. Short-term memory is not like RAM.
As Chatham writes: “Short-term memory seems to hold only ’pointers’ to long term memory whereas RAM holds data that is isomorphic to that being held on the hard disk.” One advantage of this flexibility of the brain’s short-term memory is that its capacity limit is not fixed: it fluctuates over time, depending on the speed of neural processing, and an individual’s expertise and familiarity with the subject.
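
Here is a toy Python rendering of Chatham’s “pointer” idea (the stored content is invented for the example): a RAM-style buffer copies the data itself, while the brain-style buffer holds only keys into long-term memory, so what it can effectively hold depends on what is already stored there.

```python
# Long-term memory: a large store of rich content.
long_term = {
    "tomato": "red, seeds inside, juicy, technically a fruit...",
    "skunk":  "black and white, nocturnal, defensive spray...",
}

# RAM-style working memory: holds a *copy* of the data itself.
ram_buffer = [long_term["tomato"]]

# Brain-style working memory (on Chatham's description): holds only
# "pointers" (here, keys) into long-term memory, so its effective
# capacity depends on expertise, i.e. on what is already stored.
stm_pointers = ["tomato", "skunk"]
recalled = [long_term[key] for key in stm_pointers]  # dereference
print(recalled)
```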

6. No hardware/software distinction can be made with respect to the brain or mind.
The tired old metaphor of the mind as the software for the brain’s hardware overlooks the important point that the brain’s cognition is not a purely symbolic process: it requires a physical implementation. Some scientists believe that the inadequacy of the software metaphor for the mind was responsible for the embarrassing failure of symbolic AI.

7. Synapses are far more complex than electrical logic gates.
Because the signals which are propagated along axons are actually electrochemical in nature, they can be modulated in countless different ways, enhancing the complexity of the brain’s processing at each synapse. No computer even comes close to matching this feat.

8. Unlike in computers, processing and memory are performed by the same components in the brain.
In Chatham’s words: “Computers process information from memory using CPUs, and then write the results of that processing back to memory. No such distinction exists in the brain.” We can make our memories stronger by the simple act of retrieving them.
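
A deliberately crude sketch of that last point (my own illustration): reading a value from a computer’s memory leaves it untouched, whereas in this toy model, as in the brain, the act of retrieval itself rewrites the store.

```python
# Toy model: each memory trace has a strength, and the very act of
# reading it back increases that strength, because the same components
# both store and process. (Illustrative only.)
traces = {"first day of school": 0.4, "colleague's phone number": 0.7}

def retrieve(cue):
    traces[cue] = min(1.0, traces[cue] + 0.1)  # retrieval consolidates
    return f"{cue} (strength now {traces[cue]:.1f})"

print(retrieve("first day of school"))  # each recall strengthens the trace
```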

9. The brain is a self-organizing system.
Chatham explains:

…[E]xperience profoundly and directly shapes the nature of neural information processing in a way that simply does not happen in traditional microprocessors. For example, the brain is a self-repairing circuit – something known as “trauma-induced plasticity” kicks in after injury. This can lead to a variety of interesting changes, including some that seem to unlock unused potential in the brain (known as acquired savantism), and others that can result in profound cognitive dysfunction…

Chatham argues that failure to take into account the brain’s “trauma-induced plasticity” is having an adverse impact on the emerging field of neuropsychology. A whole science is being stunted by a bad metaphor.

10. Brains have bodies.
Embodiment is a marvelous advantage for a brain. For instance, as Chatham points out, it allows the brain to “off-load” many of its memory requirements onto the body.

I would also add that since computers are physical but not embodied, they lack the built-in teleology of an organism.

As a bonus, Chatham adds an eleventh difference between brains and computers:

11. The brain is much, much bigger than any [current] computer.

Chatham writes:

Accurate biological models of the brain would have to include some 225,000,000,000,000,000 (225 million billion) interactions between cell types, neurotransmitters, neuromodulators, axonal branches and dendritic spines, and that doesn’t include the influences of dendritic geometry, or the approximately 1 trillion glial cells which may or may not be important for neural information processing. Because the brain is nonlinear, and because it is so much larger than all current computers, it seems likely that it functions in a completely different fashion.

Readers may ask why I am taking the trouble to point out the many differences between brains and computers, when both are, after all, physical systems with a finite lifespan. But the point I wish to make is that human beings are debased by Professor Stephen Hawking’s comparison of the human brain to a computer. The brain-computer metaphor is, as we have seen, a very poor one; using it as a rhetorical device to take pot shots at people who believe in immortality is a cheap trick. If Professor Hawking thinks that belief in immortality is scientifically or philosophically indefensible, then he should argue his case on its own merits, instead of resorting to vulgar characterizations.

Why the evolution of the brain is not like the evolution of the skunk’s butt

As we saw above, Professor Jerry Coyne maintains that human intelligence came about through the same mechanism as the skunk’s odoriferous defense. I presume he is talking about the human brain. However, there are solid biological grounds for believing that the brain is the outcome of a radically different kind of process from the one that led to the skunk’s defense system. I would argue that the brain is not the product of an undirected natural process, and that some Intelligence must have directed the evolution of the brain.

Skeptical? I’d like to refer readers to an online article by Steve Dorus et al., entitled, Accelerated Evolution of Nervous System Genes in the Origin of Homo sapiens. (Cell, Vol. 119, 1027–1040, December 29, 2004). Here’s an excerpt:

[T]he evolution of the brain in primates and particularly humans is likely contributed to by a large number of mutations in the coding regions of many underlying genes, especially genes with developmentally biased functions.

In summary, our study revealed the following broad themes that characterize the molecular evolution of the nervous system in primates and particularly in humans. First, genes underlying nervous system biology exhibit higher average rate of protein evolution as scaled to neutral divergence in primates than in rodents. Second, such a trend is contributed to by a large number of genes. Third, this trend is most prominent for genes implicated in the development of the nervous system. Fourth, within primates, the evolution of these genes is especially accelerated in the lineage leading to humans. Based on these themes, we argue that accelerated protein evolution in a large cohort of nervous system genes, which is particularly pronounced for genes involved in nervous system development, represents a salient genetic correlate to the profound changes in brain size and complexity during primate evolution, especially along the lineage leading to Homo sapiens. (Emphases mine – VJT.)

Here’s the link to a press release relating to the same article:

Human cognitive abilities resulted from intense evolutionary selection, says Lahn by Catherine Gianaro, in The University of Chicago Chronicle, January 6, 2005, Vol. 24, no. 7.

University researchers have reported new findings that show genes that regulate brain development and function evolved much more rapidly in humans than in nonhuman primates and other mammals because of natural selection processes unique to the human lineage.

The researchers, led by Bruce Lahn, Assistant Professor in Human Genetics and an investigator in the Howard Hughes Medical Institute, reported the findings in the cover article of the Dec. 29, 2004 issue of the journal Cell.

“Humans evolved their cognitive abilities not due to a few accidental mutations, but rather from an enormous number of mutations acquired through exceptionally intense selection favoring more complex cognitive abilities,” said Lahn. “We tend to think of our own species as categorically different – being on the top of the food chain,” Lahn said. “There is some justification for that.”

From a genetic point of view, some scientists thought human evolution might be a recapitulation of the typical molecular evolutionary process, he said. For example, the evolution of the larger brain might be due to the same processes that led to the evolution of a larger antler or a longer tusk.

“We’ve proven that there is a big distinction. Human evolution is, in fact, a privileged process because it involves a large number of mutations in a large number of genes,” Lahn said.
“To accomplish so much in so little evolutionary time – a few tens of millions of years – requires a selective process that is perhaps categorically different from the typical processes of acquiring new biological traits.” (Emphases mine – VJT.)

Professor Lahn’s remarks on elephants’ tusks apply equally to the evolution of skunk butts. Professor Jerry Coyne’s comparison of the evolution of human intelligence to the evolution of the skunk’s defense system therefore misses the mark. The two cases do not parallel one another.

Finally, here’s an excerpt from another recent science article: Gene Expression Differs in Human and Chimp Brains by Dennis Normile, in “Science” (6 April 2001, pp. 44-45):

“I’m not interested in what I share with the mouse; I’m interested in how I differ from our closest relatives, chimpanzees,” says Svante Paabo, a geneticist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Such comparisons, he argues, are the only way to understand “the genetic underpinnings of what makes humans human.” With the human genome virtually in hand, many researchers are now beginning to make those comparisons. At a meeting here last month, Paabo presented work by his team based on samples of three kinds of tissue, brain cortex, liver, and blood from humans, chimps, and rhesus macaques. Paabo and his colleagues pooled messenger RNA from individuals within each species to get rid of intraspecies variation and ran the samples through a microarray filter carrying 20,000 human cDNAs to determine the level of gene expression. The researchers identified 165 genes that showed significant differences between at least two of the three species, and in at least one type of tissue. The brain contained the greatest percentage of such genes, about 1.3%. It also produced the clearest evidence of what may separate humans from other primates. Gene expression in liver and blood tissue is very similar in chimps and humans, and markedly different from that in rhesus macaques. But the picture is quite different for the cerebral cortex. “In the brain, the expression profiles of the chimps and macaques are actually more similar to each other than to humans,” Paabo said at the workshop. The analysis shows that the human brain has undergone three to four times the amount of change in genes and expression levels than the chimpanzee brain… “Among these three tissues, it seems that the brain is really special in that humans have accelerated patterns of gene activity,” Paabo says. (Emphasis mine – VJT.)

I would argue that these changes in the human brain are unlikely to have arisen through an undirected natural process, given the deleterious effects of most mutations and the extensive complexity and integration of the biological systems that make up the human brain. If anything, evolution at such an accelerated pace should have been catastrophic rather than constructive.

We should remember that the human brain is easily the most complex machine known to exist in the universe. If the brain’s evolution did not require intelligent guidance, then nothing did.

As to when the intelligently directed manipulation of the brain’s evolution took place, my guess would be that it started around 30 million years ago when monkeys first appeared, but became much more pronounced after humans split off from apes around 6 million years ago.

Why the human mind cannot be equated with the human brain

The most serious defect of a materialist account of mind is that it fails to explain the most fundamental feature of mind itself: intentionality. Professor Edward Feser, who has written several books on the philosophy of mind, defines intentionality as “the mind’s capacity to represent, refer, or point beyond itself” (Aquinas, 2009, Oneworld, Oxford, p. 50). For example, when we entertain a concept, our mind points at a certain class of things; when we reason, it points at the conclusion of an argument; when we desire, at some state of affairs; and when we love, at some person (or animal).

Feser points out that our mental acts – especially our thoughts – typically possess an inherent meaning, which lies beyond themselves. However, brain processes cannot possess this kind of meaning, because physical states of affairs have no inherent meaning as such. Hence our thoughts cannot be the same as our brain processes. As Feser puts it in a blog post from September 2008:

Now the puzzle intentionality poses for materialism can be summarized this way: Brain processes, like ink marks, sound waves, the motion of water molecules, electrical current, and any other physical phenomenon you can think of, seem clearly devoid of any inherent meaning. By themselves they are simply meaningless patterns of electrochemical activity. Yet our thoughts do have inherent meaning – that’s how they are able to impart it to otherwise meaningless ink marks, sound waves, etc. In that case, though, it seems that our thoughts cannot possibly be identified with any physical processes in the brain. In short: Thoughts and the like possess inherent meaning or intentionality; brain processes, like ink marks, sound waves, and the like, are utterly devoid of any inherent meaning or intentionality; so thoughts and the like cannot possibly be identified with brain processes.

Four points need to be made here, about the foregoing argument. First, Professor Feser’s argument does not apply to all mental states as such, but to mental acts – specifically, those mental acts (such as thoughts) which possess inherent meaning. My seeing a red patch here now would qualify as a mental state, but since it is not inherently meaningful, it is not covered by Feser’s argument. However, if I think to myself, “That red thing is a tomato” while looking at a red patch, then I am thinking something meaningful. (The reader will probably be wondering, “What about an animal which recognizes a tomato but lacks the linguistic wherewithal to say to itself, ‘This is a tomato’?” Is recognition inherently meaningful? The answer, as I shall argue in part (b) below, depends on whether the animal has a concept of a tomato which is governed by a rule or rules, which it considers normative and tries to follow – e.g. “This red thing is juicy but has no seeds on the inside, so it can’t be a tomato but might be a strawberry; however, that green thing with seeds on the inside could be a tomato.”)

Second, Professor Feser’s formulation of the argument from the intentionality of mental acts is very carefully worded. Some philosophers have suggested that the characteristic feature of mental acts is their “aboutness”: thoughts, arguments, desires and passions in general are about something. But this is surely too vague, as DNA is “about” something too: the proteins it codes for. We can even say that DNA possesses functionality, which is certainly a form of “aboutness.” What it does not possess, however, is inherent meaning, which is a distinctive feature of mental acts. DNA is a molecule that does a job, but it does not and cannot “mean” anything, in and of itself. If (as I maintain) DNA was originally designed, then it was meant by its Designer to do something, but this meaning would be something extrinsic to it. Its functionality, on the other hand, would be something intrinsic to it.

Third, it is extremely difficult to disagree with Feser’s premise that thoughts possess inherent meaning. To do that, one would have to either deny that there are any such things as thoughts, or one would need to locate inherent meaning somewhere else, outside the domain of the mental.

There are a few materialists, known as eliminative materialists, who deny the very existence of mental processes such as thoughts, beliefs and desires. The reason why I cannot take eliminative materialism seriously is that any successful attempt to argue for the truth of eliminative materialism – or, indeed, for the truth of any theory – would defeat eliminative materialism, since argument is, by definition, an attempt to change the beliefs of one’s audience, and eliminative materialism says we have none. If eliminative materialism is true, then argumentation of any kind, about any subject, is a pointless pursuit, since neither attempts nor beliefs refer to anything, on an eliminative materialist account.

The other way of arguing against the premise that thoughts possess inherent meaning would be to claim that inherent meaning attaches primarily to something outside the domain of the mental, rather than to our innermost thoughts as we have supposed. But what might this “something” be? The best candidate would be public acts, such as wedding vows, the signing of contracts, initiation ceremonies and funerals. Because these acts are public, one might argue that they are meaningful in their own right. But this will not do. We can still ask: what is it about these acts and ceremonies that makes them meaningful? (A visiting alien might find them utterly meaningless.) And in the end, the only satisfactory answer we can give is: the cultural fact that within our community, we all agree that these acts are meaningful (which presupposes a mental act of assent on the part of each and every one of us), coupled with the psychological fact that the participants are capable of the requisite mental acts needed to perform these acts properly (for instance, someone who is getting married must be capable of understanding the nature of the marriage contract, and of publicly affirming that he/she is acting freely). Thus even an account of meaning which ascribes meaning primarily to public acts still presupposes the occurrence of mental acts which possess meaning in their own right.

Fourth, it should be noted that Professor Feser’s argument works against any materialist account of the mind which identifies mental acts with physical processes (no matter what sort of processes they may be) – regardless of whether this identification is made at the generic (“type-type”) level or the individual (“token-token”) level. The reason is that there is a fundamental difference between mental acts and physical processes: the former possess an inherent meaning, while the latter are incapable of doing so.

Of course, the mere fact that mental acts and physical processes possess mutually incompatible properties does not prove that they are fundamentally different. To use a well-worn example, the morning star has the property of appearing only in the east, while the evening star has the property of appearing only in the west, yet they are one and the same object (the planet Venus). Or again: Superman has the property of being loved by Lois Lane, but Clark Kent does not; yet in the comic book story, they are one and the same person.

However, neither of these examples is pertinent to the case we are considering here, since the meaning which attaches to mental acts is inherent. Hence it must be an intrinsic feature of mental acts, rather than an extrinsic one, like the difference between the morning star and the evening star. As for Superman’s property of being loved by Lois Lane: this is not a real property, but a mere Cambridge property, to use a term coined by the philosopher Peter Geach: in this case, the love inheres in Lois Lane, not Superman. (By contrast, if Superman loves Lois, then the same is also true of Clark Kent. This love is an example of a real property, since it inheres in Superman.)

The difference between mental acts and physical processes does not merely depend on one’s perspective or viewpoint; it is an intrinsic difference, not an extrinsic one. Moreover, it is a real difference, since the property of having an inherent meaning is a real property, and not a Cambridge property. Since mental acts possess a real, intrinsic property which physical processes lack, we may legitimately conclude that mental acts are distinct from physical processes. (Of course, “distinct from” does not mean “independent of”.)

A general refutation of materialism

Feser’s argument can be extended to refute all materialistic accounts of mental acts. Any genuinely materialistic account of mental acts must be capable of explaining them in terms of physical processes. There are only three plausible ways to do this: (a) identifying mental acts with physical processes, (b) showing how mental acts are caused by physical processes, and (c) showing how mental acts are logically entailed by physical processes. No other way of explaining mental acts in terms of physical processes seems conceivable.

The first option is ruled out: as we saw earlier, mental acts cannot be equated with physical processes, because the former possess inherent meaning as a real, intrinsic property, while the latter do not.

The second option is also impossible, for two reasons. Firstly, if a causal law is to count as a genuine explanation of mental acts, then it must account for their intentionality, or inherent meaningfulness. In other words, we would need a causal law that links physical processes not only to mental acts, but to meanings. However, meaningfulness is a semantic property, whereas the properties picked out by laws of nature are physical properties. To suppose that there are laws linking physical processes and mental acts, one would have to suppose the existence of a new class of laws of nature: physico-semantic laws.

Secondly, we know for a fact that there are some physical processes (e.g. precipitation) which are incapable of generating meaning: they are inadequate for the task at hand. If we are to suppose that certain other physical processes are capable of generating meaning, then we must believe that these processes are causally adequate for the task of generating meaning, while physical processes such as precipitation are not. But this only invites the further question: why? We might be told that causally inadequate processes lack some physical property (call it F) which causally adequate processes possess – but once again, we can ask: why is physical property F relevant to the task of generating meaning, while other physical properties are not?

So much for the first and second options, then. Mental acts which possess inherent meaning are neither identifiable with physical processes, nor caused by them. The third option is to postulate that mental acts are logically entailed by physical processes. This option is even less promising than the first two: for in order to show that physical processes logically entail mental acts, we would have to show that physical properties logically entail semantic properties. But if we cannot even show that they are causally related, then it will surely be impossible for us to show that they are logically connected. Certainly, the fact that an animal (e.g. a human being) has the property of having a large brain with complex inter-connections that can store a lot of information does not logically entail that this animal – or its brain, or its neural processes, or its bodily movements – has the property of having an inherent meaning.

Hence not only are mental acts distinct from brain processes, but they are incapable of being caused by or logically entailed by brain processes. Since these are the only modes of explanation open to us, it follows that mental acts are incapable of being explained in terms of physical processes.

Let us recapitulate. We have argued that eliminative materialism is false, as well as any version of materialism which identifies mental acts with physical processes, and also any version of materialism in which mental acts supervene upon brain processes (either by being caused by these processes or logically entailed by them). Are there any versions of materialism left for us to consider?

It may be objected that some version of monism, in which one and the same entity has both physical and mental properties, remains viable. Quite so; but monism is not materialism.

We may therefore state the case against materialism as follows:

1. Mental acts are real.
(Denial of this premise entails denying that there can be successful arguments, for an argument is an attempt to change the thoughts and beliefs of the listener, and if there are no mental acts then there are no thoughts and beliefs.)

2. At least some mental acts – e.g. thoughts – have the real, intrinsic property of being inherently meaningful.
(Justification: it is impossible to account for the meaningfulness of any act, event or process without presupposing the existence of inherently meaningful thoughts.)

3. Physical processes are not inherently meaningful.

4. If process X has a real, intrinsic property F which process Y lacks, then X cannot be identified with Y.

5. By 2, 3 and 4, physical processes cannot be identified with inherently meaningful mental acts.

6. Physical processes are only capable of causing other processes if there is some law of nature linking the former to the latter.

7. Laws of nature are only explanatory of the respective properties they invoke, for the processes they link.
(More precisely: if a law of nature links property F of process X with property G of process Y, then the law explains properties F and G, but not property H which also attaches to process Y. To explain that, one would need another law.)

8. The property of having an inherent meaning is a semantic property.

9. There are not, and there cannot be, laws of nature linking physical properties to semantic properties.
(Justification: No such “physico-semantic” laws have ever been observed; and in any case, semantic properties are not reducible to physical ones.)

10. By 6, 7, 8 and 9, physical processes are incapable of causing inherently meaningful mental acts.

11. Physical processes do not logically entail the occurrence of inherently meaningful mental acts.

12. If inherently meaningful mental acts exist, and if physical processes cannot be identified with them and are incapable of causing them or logically entailing them, then materialism is false.
(Justification: materialism is an attempt to account for mental states in physical terms. This means that physical processes must be explanatorily prior to, or identical with, the mental events they purport to explain. Unless physical processes are capable of logically or causally generating mental states, it is hard to see how they can be said to be capable of explaining them.)

13. Hence by 1, 5, 10, 11 and 12, materialism is false.

Why doesn’t the mind remain sober when the body is drunk?

The celebrated author Mark Twain (1835-1910) was an avowed materialist, as is shown by the following witty exchange he penned:

Old man (sarcastically): Being spiritual, the mind cannot be affected by physical influences?
Young man: No.
Old man: Does the mind remain sober when the body is drunk?

Drunkenness does indeed pose a genuine problem for substance dualism, or the view that mind and body are two distinct things. For even if the mind (which thinks) required sensory input from the body, this would only explain why a physical malady or ailment would shut off the flow of thought. What it would not explain is the peculiar, erratic thinking of the drunkard.

However, the view I am defending here is not Cartesian substance dualism, but a kind of “dual-operation monism”: each of us is one being (a human being), who is capable of a whole host of bodily operations (nutrition, growth, reproduction and movement, as well as sensing and feeling), as well as a few special operations (e.g. following rules and making rational choices) which we perform, but not with our bodies. That doesn’t mean that we perform these acts with some spooky non-material thing hovering 10 centimeters above our heads (a Cartesian soul, which is totally distinct from the body). It just means that not every act performed by a human animal is a bodily act. For rule-following acts, the question, “Which agent did that?” is meaningful; but the question, “Which body part performed the act of following the rule?” is not. Body parts don’t follow rules; people do.

Now, it might be objected that the act of following a rule must be a material act, because we are unable to follow rules when our neuronal firing is disrupted: as Twain pointed out, drunks can’t think straight. But this objection merely shows that certain physical processes in the brain are necessary, in order for rational thought to occur. What it does not show is that these neural processes are sufficient to generate rational thought. As the research of the late Wilder Penfield showed, neurologists’ attempts to produce thoughts or decisions by stimulating people’s brains were a total failure: while stimulation could induce flashbacks and vividly evoke old memories, it never generated thoughts or choices. On other occasions, Penfield was able to make a patient’s arm go up by stimulating a region of his/her brain, but the patient always denied responsibility for this movement, saying: “I didn’t do that. You did.” In other words, Penfield was able to induce bodily movements, but not the choices that accompany them when we act freely.

Nevertheless, the reader might reasonably ask: if the rational act of following a rule is not a bodily act, then why are certain bodily processes required in order for it to occur? For instance, why can’t drunks think straight? The reason, I would suggest, is that whenever we follow an abstract rule, a host of subsidiary physical processes need to take place in the brain, which enable us to recall the objects covered by that rule, and also to track our progress in following the rule, if it is a complicated one, involving a sequence of steps. Disruption of neuronal firing interferes with these subsidiary processes. However, while these neural processes are presupposed by the mental act of following a rule, they do not constitute the rule itself. In other words, all that the foregoing objection shows is that for humans, the act of rule-following is extrinsically dependent on physical events such as neuronal firing. What the objection does not show is that the human act of following or attending to a rule is intrinsically or essentially dependent on physical processes occurring in the brain. Indeed, if the arguments against materialism which I put forward above are correct, the mental act of following a rule cannot be intrinsically dependent on brain processes: for the mental act of following a rule is governed by its inherent meaning, which is something that physical processes necessarily lack.

I conclude, then, that attempts to explain rational choices made by human beings in terms of purely material processes taking place in their brains are doomed to failure, and that whenever we follow a rule (e.g. when we engage in rational thought) our mental act of doing so is an immaterial, non-bodily act.

Implications for immortality

The fact that rational choices cannot be identified with, caused by or otherwise explained by material processes does not imply that we will continue to be capable of making these choices after our bodies die. But what it does show is that the death of the body, per se, does not entail the death of the human person it belongs to. We should also remember that it is in God that we live and move and have our being (Acts 17:28). If the same God who made us wishes us to survive bodily death, and wishes to keep our minds functioning after our bodies have ceased to do so, then assuredly He can. And if this same God wishes us to partake of the fullness of bodily life once again by resurrecting our old bodies, in some manner which is at present incomprehensible to us, then He can do that too. This is God’s universe, not ours. He wrote the rules; our job as human beings is to discover them and to follow them, insofar as they apply to our own lives.

Comments
#104 nullasus
Let me put this another way: Let’s say a person claims that they have a dog who can play chess. I ask to see the evidence, and they show me a dog who, when faced with a chess board, will pick up some of the pieces in his mouth and gnaw at them. It does no good for that person to tell me, “See, I call what the dog is doing ‘playing chess’.”
I am sure Elizabeth will do an excellent job of responding to most of your comment – but I would like to pick up on the question of definitions. It is a common objection to materialism in the philosophy of mind that the materialist has changed the definition of “decide” or “intention” or “meaning” or whatever.

All I am aware of is: (1) the mutually observable external world; (2) reconstructions of that observable world in my own head (memories etc); (3) actions I take that are either external bodily actions (including audible speech) or internal reconstructions of such actions (imagining myself doing them, speaking to myself); (4) experts such as Elizabeth making me aware of possible brain organisations and structures that would account for (2) and (3).

To take one example: I think a decision to do X is a brain state which results in a propensity to do X (it is subtler than that – because that brain state arises through a conscious process – but I will leave the hard problem of consciousness for another day). Possibly you think I am lying or deluding myself about this. It doesn’t matter. It is sufficient to imagine that a creature exists who has the above characteristics (the perfect Turing machine, if you like). As far as I am concerned, mental constructs such as decisions are very useful ways of talking about 1, 2 and 3 – just as a wave is a very useful, indeed almost essential, way of talking about water molecules in a certain context.

Suppose you say that a mental activity such as a decision is an immaterial something else, and that materialists have escaped the issue by redefining “decision”. So we are talking about different things. So let’s invent different words for the different things. Let us call this immaterial something else a DECISION (in upper case) to differentiate it from a material decision (in lower case), which is a propensity to act. Now there are a couple of problems about DECISIONs:

(a) How do you know that what you are referring to when you refer to a “DECISION” is the same as what anyone else (including another dualist) is referring to? All that you and another person can mutually observe is the external world. You say there is something else going on in your head when you DECIDE, but how do you know it is the same thing as what is going on in other people’s heads?

(b) If a DECISION is different from a decision, then it is at least logically possible to have one without the other. So it would be logically possible to DECIDE to do something and not do it, although there is nothing stopping you. But someone who says they have decided to do X and then does not do X, when there is nothing to prevent or dissuade them, is either lying, has changed their mind, or does not understand what “decide” means. It is irrelevant what other activities went on in their head. That is what “decide” normally means. So DECIDE seems to be the word that does not fit in with normal usage.

markf
May 25, 2011 at 01:56 AM PDT
Elizabeth, I think you have explained very well how the brain works in all the more intelligent animals. But on both the metaphysical and the phenomenological level you seem to be dismissing everything that is uniquely human about how intentionality enters into human experience. Abstract / symbolic thought simply does not exist in the same type of physical relationship with the brain as sense experience does. Husserl makes a strong case for a motivational causality grounded in intentionality that is quite unlike any form of real or physical causality. I have posted a paper, linked to my name, that I think you and several others here might be interested in.

Lamont
May 25, 2011 at 12:55 AM PDT
@ nullasalus: Thanks for your detailed and thoughtful response. I'm a bit busy today, but I'll try to get to it this evening. Cheers, Lizzie

Elizabeth Liddle
May 25, 2011 at 12:40 AM PDT
Eric: Oh man . . . . I always get lost in discussions like this, way over my head really. I'd say I mostly agree with your scenario 2 ("Somehow at some level of complexity and organization, consciousness arises as an emergent property, which then is somehow (at least partly) decoupled from the underlying matter. yadda, yadda, yadda") except I'm not sure about the uncoupling. Hey, I don't want to admit that my mind is merely a product of my neurons firing but I have yet to see any evidence that convinces me otherwise. I don't want to die and disappear but . . . . A few years ago I had an operation and was under general anaesthetic which was one of the creepiest experiences of my life. It was a complete blank. Nothing. No passage of time, no sensations. Just a big discontinuity. And I keep thinking . . . if there is any part of me that exists outside of my body then why is that time (a few hours) just gone? It's like the clocks jumped. Really. And the best explanation I can find is that my brain was turned off, mostly.

ellazimm
May 24, 2011 at 11:12 PM PDT
ellazimm @ 89, Thanks. I'd be interested to know your thoughts about my comments in 87. Specifically, does my understanding of the options available to the materialist approach capture the essence, or is there something fundamental I've left out? Thanks,

Eric Anderson
May 24, 2011 at 10:20 PM PDT
Elizabeth Liddle,

No, I’m not saying “there are no thoughts or intentions” – just because I think they can be accounted for by observable mechanisms, doesn’t mean I think they don’t exist! Yes, the “material state” is “blind” but, as I tried to make clear, that doesn’t mean that the person is blind, because the person exists at a higher level of analysis than a given “material state”.

The only possible way to make sense of a "higher level of analysis" in this context would be either A) as a useful fiction (and if it's a fiction, it's not going to be explanatory), B) in terms of weak emergence (in which case intention and meaning is 'nothing but' operation by that which is devoid of meaning and intention, and thus ultimately eliminative), or C) strong emergence (in which case appeals to the material constituents will not be explanatory, even if they are in some sense required - there's something above and beyond those constituents in play that they themselves don't explain, or our understanding of said constituents is incomplete, and there's more to the physical than materialism and mechanism supposed.) If there's another option, you're going to have to outline it - "higher level of analysis" in and of itself isn't very helpful. :)

And the name I give to the kind of decision-making in which I weigh up various options and decide, with malice (or otherwise) aforethought, on a specific course of action, is “intending”. And I experience it as “deciding”. But what happens in my brain when I do that deciding, I would contend, is that a series of “blind mechanical material states” chunter through a series of operations the final output of which is “my” decision. And I call it mine because I consider myself incorporated (again I use the word absolutely literally) in that neural machinery.

As I said above, this either results in your explanation of meaning and/or intention as a useful fiction (and ultimately non-explanatory), weakly emergent (and thus eliminative), or strongly emergent (and thus the material isn't what we thought it was after all.) Let me put this another way: Let's say a person claims that they have a dog who can play chess. I ask to see the evidence, and they show me a dog who, when faced with a chess board, will pick up some of the pieces in his mouth and gnaw at them. It does no good for that person to tell me, "See, I call what the dog is doing 'playing chess'." Likewise, if you commit yourself to the view that all that exists is a material world, blindly and deterministically churning out results without thought or intention, it does little good to point at one or another particular bit of churning and say "I'm going to call this 'decision-making'!" The matter is making decisions the way a dog plays chess. :)

It’s our capacity to simulate, i.e. to re-enter simulated output from potential actions as input that endows us with the capacity to intend – to foresee the consequences of various courses of action and choose between them.

But what counts as a simulation only does so relative to a mind to begin with. If I arrange a few sticks and stones on the ground to represent or simulate the layout of (say) a camp, they 'simulate' or 'represent' only in virtue of what meaning I or another person assign to the sticks, stones and their arrangement. And explaining "the meaning in the sticks and stones" this way, what I've done is assert that the sticks and stones are devoid of meaning - the meaning is in my mind. If I then say that the mind is nothing but the brain... etc.

OK, let’s try to do this without the c word at all. If metaphors become a problem we need to drop them. ... So we can think of a neuron as a logic gate.

Maybe it's not possible to get rid of these metaphors, eh? Regardless, you gave a good material rundown of what goes on in a brain. But if that description is offered up as total - as in, total for the brain, and therefore total for the mind - then please note that there is no intention or meaning anywhere in your description. Yes, yes, I know - higher level of analysis. As I mentioned above, eventually just what the 'higher level of analysis' means has to be cashed out, and it can only be cashed out in constituents you're willing to let into your metaphysics. I mean, it's not as if eliminative materialists are a non-existent breed of thinker. They do exist, and one can come to conclusions or make assertions that place one in that camp. I don't think this can be evaded by simply changing metaphorical language ('Well, so long as I call this a decision and the EM doesn't, I'm not an EM even if we mean the same exact thing.')

What I am doing is trying to outline the process by which those options are considered – the manner in which the decision process is weighted by factors other than a simple imperative stimulus.

But there is no "decision process", nor is anything being "considered". At least not given your view of matter, and I think that's by admission. Take the example of a brain "simulating" this or that. Is the relevant portion of the brain or the process objectively, intrinsically 'about' what it is 'simulating'? Well, if so, then we're no longer dealing with a material world as traditionally conceived. Is it not? Then the brain is only 'about' something else, and is only 'simulating', in virtue of a third-person view.

You seem to be saying that because my explanation is materialistic that it must eliminate the phenomenon it seeks to explain, which is non-material. I disagree, and I think the problem, as I keep saying (apologies for repetition) is one of level-of-analysis.

And I've repeated myself here as well. More than that, it seems to me like your reply can basically be summed up as "just because X is really (this description) doesn't mean I'm not allowed to use a metaphor or a useful fiction!" To which I'll respond, sure you can - but I can also point out just what is 'really' meant, and must be 'really' meant, once we push away poetic language, metaphor, and fiction. The example of the wave doesn't work, because there's no need to dispute that some physical thing X is ultimately constituted by a number of smaller physical things Y. Put another way, just because a bowling ball really is just a conglomeration of smaller material things (though whether it's even right to call them 'material' anymore, given quantum physics, is an open question) poses no problem here, precisely because a "bowling ball" as a useful fiction, or only 'really' existing relative to a mind, isn't terribly controversial to most people. Just as the same knife can be 'a piece of cutlery', 'an antique', 'a weapon', etc relative to a mind, though most everyone would agree that the knife is just a collection of atoms, etc, in this or that arrangement, which we call various things in different contexts and as shorthand. Put another way: If someone tells me that they saw a ghost, and if my investigation indicates that what they saw was a white sheet attached to a string, did I just provide an explanation for ghosts ('Ghosts are sheets attached to strings!')? Or is it more apt to say 'You didn't see a ghost at all'? Eliminative positions do exist. And we should call them that when they are embraced. :)

Sure, because you know, and I know, that the mp3 player is not screaming because it needs help. So, as receivers, we know it doesn’t “mean” help, and as sender, we know that the mp3 player isn’t screaming “help” as its response to being doused in water. In the case of the robot, however, its scream does mean that “there is water here”, so as receivers we can regard the robot’s announcement as “meaning” precisely what the words say.

Sure we can - because we assign the meaning to those words, and to the robot itself. The robot is not trying to communicate anything to us, and the "meaning" of the scream only exists relative to us and our minds. No, the robot is not really 'asking for help'. No, the meaning is not "precisely what the words say". Really, you could eliminate the words "There's water here" from the robot's cry, and "Help" would "mean" "there's water here" - if you decided to assign it that meaning. Strip away the metaphors, the useful fictions and the poetic language when talking about intention and meaning (and even consciousness and experience) in a materialist world and there's just not much left.

nullasalus
May 24, 2011 at 05:33 PM PDT
@ vjtorley: yes, I like your exposition of the waving/drowning scenario, but it was exactly what I was trying to convey! You conveyed it better, however :)

Elizabeth Liddle
May 24, 2011 at 03:27 PM PDT
Nullasus @ #100 Thanks for your response! OK:
I certainly don’t think that a line of pebbles “means stop” except in the context of an interpreting brain.
And then the problem becomes: What series of actions in a brain “means interpreting”? Or is it, again, that a brain only “means interpreting” in the context of yet another interpreting brain?
Well, I thought I’d clarified that, but maybe not! I’ll have another go, but essentially, what I mean is that it is, IMO, only sensible to talk about meaning, where there is an interpreter, so, as you rightly note, that shifts the problem on to how interpretation occurs within a brain. However, I don’t think that’s an insoluble problem, as I tried to say.
You sketch out a series of physical actions that take place when a brain – given a certain context – is reacting to a stimulus. But as far as the question of meaning and thought itself goes, the explanation doesn’t show up. You say that a “memory” or a “thought” is “implemented as an increased probability of that pattern being repeated”. Now, if you mean that memories or thoughts are nothing but particular physical states in mundane operation, then you’re either taking an eliminative stance about these things (‘there are no thoughts or intentions, there’s nothing but blind mechanical material states’), or you’re making the “material” out to be more than it was (property dualism, or any other number of options).
No, I’m not saying “there are no thoughts or intentions” – just because I think they can be accounted for by observable mechanisms, doesn’t mean I think they don’t exist! Yes, the “material state” is “blind” but, as I tried to make clear, that doesn’t mean that the person is blind, because the person exists at a higher level of analysis than a given “material state”. “I” am not coterminous with my state at the instant I typed the letter “I”, any more than the light from the light bulb above my desk is coterminous with the specific photon that just arrived on my keyboard. What I call “I” is a whole decision-making shebang, and my material state at any given time point is simply a snapshot of that decision-maker in action. And the name I give to the kind of decision-making in which I weigh up various options and decide, with malice (or otherwise) aforethought, on a specific course of action, is “intending”. And I experience it as “deciding”. But what happens in my brain when I do that deciding, I would contend, is that a series of “blind mechanical material states” chunter through a series of operations the final output of which is “my” decision. And I call it mine because I consider myself incorporated (again I use the word absolutely literally) in that neural machinery.
Nothing more than physical laws are required to execute the “coding”, and nothing more than physical laws are required to explain why the water flows down the creeks, rather than over the top of the flats. What makes it what we call a “code” is the feedback between water-flow and creek topography. Same with brains, except we call it “long-term potentiation” (LTP).
And another way of putting what you’re apparently saying here is “humans have as much intention, thought and experience as we take creek beds in mudflats to have – none at all”.
No, I’m not saying that. I’m simply using a creekbed as an illustration of very simple natural feedback coding. I don’t think any intention is encoded in a creekbed for the very simple reason that for a creekbed, there is no simulation involved. It is what it is. Whereas a brain is able to simulate output from a potential course of action, and re-enter that output as input. Creeks can’t do this, so we don’t regard them as intentional agents (except poetically). It’s our capacity to simulate, i.e. to re-enter simulated output from potential actions as input, that endows us with the capacity to intend – to foresee the consequences of various courses of action and choose between them. But the feedback process, at the level of protein expression, isn’t, I’d argue, any less mechanical than what happens in a creek. The difference lies in the architecture of feedback networks themselves.
You say that we shouldn’t push the “code” metaphor too far, but it seems to me that what would be “too far” would also happen to be the only way the metaphor would really make sense of the view you’re advocating. Otherwise it’s like saying, “Our brains encode our thoughts. Also, there’s no such thing as encoding.”
OK, let’s try to do this without the c word at all. If metaphors become a problem we need to drop them. To go no lower than the neuron (though lots of extremely interesting things happen within the neuron, but we can work above that level for now): neurons essentially sum inputs over time to produce outputs. So if lots of positive inputs come in over a short time period, the neuron will fire. If lots of negative inputs come in, it will be inhibited, and become less likely to fire. So we can think of a neuron as a logic gate. But because it synapses on to other neurons which in turn contribute to its own inputs, we have potential feedback loops, and ongoing oscillations. We also have billions of neurons, and the number of synapses is orders of magnitude greater than that. And when a sensory stimulus arrives (light on the retina for instance) that triggers a whole cascade of neural firing patterns that resonate through the whole brain, potentiating that pathway so that any given pair of neurons is more likely in future to fire together. I used to be a musician in an earlier life, and I have a lovely old viola da gamba which has been played for over 300 years, and I like to think that everyone who ever played it left their mark on it in the form of folding patterns in the sound board that make that folding pattern – that sound – more likely to be triggered by subsequent players. That’s probably a better metaphor than my muddy creek, come to think of it. I could say that those sounds have been “coded” into the sound board, but I won’t. But back to the brain – if the light pattern on my retina turns out to “signify”, for example, a fork in the road, then that cascade of neural firing will also include the preparation of my muscles for turning right, and for turning left. The winning program (the one that reaches execution threshold by means of excitatory connections from the most other brain regions in the cascade) will be my decision, which I will refer to as my intended action.
I’d say that meaning arises in your mind by means of neural mechanisms that ensure that a stimulus (say that line of pebbles) triggers a range of options for action, and those action programs (this is the tricky part) include actions (highly attenuated actions, many well below execution threshold) implicated in your perception of yourself as a meaning-making agent. Yes it may sound as though there is an obvious problem there, but that’s because (I suggest) we have an innate tendency to regard feedback loops as some kind of intellectual cop-out (“which came first, the chicken or the egg?” “who or what was the prime mover?”).
But sometimes the reason something sounds like an intellectual cop-out is precisely because that’s what it is. You’re saying that meaning arises because sometimes a stimulus triggers a range of options (but given determinism there’s only a range in a poetic sense since only one response is actually possible) for action programs (but there aren’t really programs, that’s just metaphor) implicated in my perception of myself as a meaning-making agent (but my perception of myself is yet another meaning which has to be explained, and ultimately ‘means’ your explanation for meaning is yet more meaning).

No, there’s a range in an absolutely real and measurable sense, and by “programs” I also mean something perfectly real and measurable. For example, we can physically measure the degree to which a neuron responsible for triggering a movement (for example a neuron implicated in a saccadic eye movement) responds to a stimulus in a given location in the visual field, and the degree to which a second stimulus, in a different location, stimulates a competing neuron, and how that competition between the two is resolved. So yes, there are two real options, and the outcome depends on various factors, and is the result of what I am calling a “motor program” – an electrical signal that actually results in a physical eye movement. The fact that in a deterministic universe (not that the universe seems, at present, to be deterministic, but let us assume for now that it is) only one outcome is possible for the actual scenario we are considering is, I think, irrelevant – the whole concept of “choice” implies a decision is not a reflex, but is the result of considered options. What I am doing is trying to outline the process by which those options are considered – the manner in which the decision process is weighted by factors other than a simple imperative stimulus.
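[The saccade competition described here is commonly modelled as a race between noisy accumulators. The sketch below is only a toy version of that idea – the drift, inhibition, noise and threshold values are invented for illustration.]

```python
import random

def race_to_threshold(drift_a, drift_b, threshold=10.0,
                      inhibition=0.2, noise=0.5, seed=1):
    """Two units accumulate evidence for two competing targets; each
    is slowed by the other's activity, and the first to reach the
    threshold determines which eye movement is executed."""
    rng = random.Random(seed)
    a = b = 0.0
    for step in range(1, 10000):
        a = max(0.0, a + drift_a - inhibition * b + rng.gauss(0, noise))
        b = max(0.0, b + drift_b - inhibition * a + rng.gauss(0, noise))
        if a >= threshold or b >= threshold:
            return ("A" if a >= b else "B"), step
    return None, step

# A stronger stimulus at location A usually, though not always, wins.
print(race_to_threshold(drift_a=1.0, drift_b=0.6))
```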
Really, sometimes a problem is obvious because that’s what it is. I submit it’s the case here.
Well, I disagree :)
Now, you object that it’s not a cop-out and really is an explanation, and try to illustrate that with your sand metaphor. But that metaphor is woefully inadequate, because what’s going on in that example is nothing but physical causation producing phenomena that, under a mechanistic understanding of matter, are mundane – there’s no thought, mind, or meaning to speak of there. Unless that’s what you want to say in the case of brains and go down a full-fledged eliminative route. In which case, problems with that view aside, you’re not offering an explanation for meaning anyway – you’re saying there is no meaning to explain.
But it seems to me that your argument is somewhat circular. You seem to be saying that because my explanation is materialistic it must eliminate the phenomenon it seeks to explain, which is non-material. I disagree, and I think the problem, as I keep saying (apologies for repetition), is one of level-of-analysis. To take yet another marine metaphor: an ocean wave is a phenomenon that consists of a pattern of movement of air and water. But an ocean wave is not made of either air or water, and it can be travelling in a quite different direction to both. The wave is actually a property not of either the air or the water, but of the interface. But, if I attempt to give a “materialistic” account of the wave in terms of movement of air or water molecules, you might turn round and say I have “eliminated” the wave. No I haven’t – it’s that the wave exists at a different level of analysis from the water and air molecules. Same with thought and intentions. I can explain them (I would contend) in terms of neurons and ions and action potentials and networks and protein expression. That doesn’t mean I’ve eliminated thoughts and intentions, it’s just that I’ve accounted for them at a level below that at which it is normally most useful to consider them (just as it is far more useful to describe a wave in terms of frequency and amplitude and direction than in terms of the trajectories of air and water molecules). In everyday language we speak of thoughts and intentions, just as we talk of breakers and tsunamis. That doesn’t mean that we can’t account for them at a molecular level, nor does it mean that by doing so we are eliminating the phenomenon.
Our motor programs have no “intrinsic meaning”. What has meaning is their relationship to their inputs.
But then I ask, is the meaning of that relationship intrinsic? Let’s look at your example. A waving arm has no “intrinsic meaning”. The waving arm I raise in response to struggling in a rip tide, however, is my meaningful response to the sensation of water up my nose, i.e. a motor action intended to elicit help. But motor actions in and of themselves have no intentions or meaning – they have them only relative to a mind. If I create a robot that plays a recording which says “Help! There’s water here!” whenever it’s in the proximity of water, insofar as there’s nothing there but stimulus and response, there’s also no meaning or intention on the part of the robot. If I press a button on an mp3 player and it plays the sound of a scream, there’s no reason to apologize to the mp3 player.
Sure, because you know, and I know, that the mp3 player is not screaming because it needs help. So, as receivers, we know it doesn’t “mean” help, and as sender, we know that the mp3 player isn’t screaming “help” as its response to being doused in water. In the case of the robot, however, its scream does mean that “there is water here”, so as receivers we can regard the robot’s announcement as “meaning” precisely what the words say. However, we do not regard the robot, as a sender, as having an “intention”, because we have no reason to think it weighed up the options and decided that really, the best outcome would be if we mopped the floor. Although we might one day – and it is of interest that, apparently, robots that collectively develop their own language with which to communicate with each other have just been devised: http://www.bbc.co.uk/news/technology-13510988
Elizabeth Liddle
May 24, 2011 at 03:24 PM PDT
Hi Markf and Elizabeth Liddle, Thank you very much for your thoughtful and detailed responses. Both of you appealed to forward models, which makes sense. I'll be back in about 15 hours, but for the time being, a quick response re the drowning example: the real reason why the message gets across is because the spectators are able to perform a mental simulation and ask themselves, "If I were out there in the sea, why would I be waving so frantically like that? Aha! That person must be drowning!" On the waver's part, the intentionality, I submit, derives not from previewing the action of waving in one's head, but from a pre-existing need to communicate a proposition which one already knows ("I am drowning"). The drowning person then performs forward models of various actions which might convey this fact ("Jump out of the water like a dolphin? No, that won't get the message across. Wave vigorously? Yes, that'll do it!") Thus the intention is logically prior to the choice of motor sequence in this case. Still, I liked the example very much. Markf, I'm quite surprised to find an atheist who has no problem with the injustice of predestination. Interesting.
vjtorley
May 24, 2011 at 03:10 PM PDT
Elizabeth Liddle,

I certainly don’t think that a line of pebbles “means stop” except in the context of an interpreting brain.

And then the problem becomes: What series of actions in a brain "means interpreting"? Or is it, again, that a brain only "means interpreting" in the context of yet another interpreting brain? You sketch out a series of physical actions that take place when a brain – given a certain context – is reacting to a stimulus. But as far as the question of meaning and thought itself goes, the explanation doesn't show up. You say that a "memory" or a "thought" is "implemented as an increased probability of that pattern being repeated". Now, if you mean that memories or thoughts are nothing but particular physical states in mundane operation, then you're either taking an eliminative stance about these things ('there are no thoughts or intentions, there's nothing but blind mechanical material states'), or you're making the "material" out to be more than it was (property dualism, or any other number of options).

Nothing more than physical laws are required to execute the “coding”, and nothing more than physical laws are required to explain why the water flows down the creeks, rather than over the top of the flats. What makes it what we call a “code” is the feedback between water-flow and creek topography. Same with brains, except we call it “long-term potentiation” (LTP).

And another way of putting what you're apparently saying here is "humans have as much intention, thought and experience as we take creek beds in mudflats to have – none at all". You say that we shouldn't push the "code" metaphor too far, but it seems to me that what would be "too far" would also happen to be the only way the metaphor would really make sense of the view you're advocating. Otherwise it's like saying, "Our brains encode our thoughts. Also, there's no such thing as encoding."

I’d say that meaning arises in your mind by means of neural mechanisms that ensure that a stimulus (say that line of pebbles) triggers a range of options for action, and those action programs (this is the tricky part) include actions (highly attenuated actions, many well below execution threshold) implicated in your perception of yourself as a meaning-making agent. Yes it may sound as though there is an obvious problem there, but that’s because (I suggest) we have an innate tendency to regard feedback loops as some kind of intellectual cop-out (“which came first, the chicken or the egg?” “who or what was the prime mover?”).

But sometimes the reason something sounds like an intellectual cop-out is precisely because that's what it is. You're saying that meaning arises because sometimes a stimulus triggers a range of options (but given determinism there's only a range in a poetic sense, since only one response is actually possible) for action programs (but there aren't really programs, that's just metaphor) implicated in my perception of myself as a meaning-making agent (but my perception of myself is yet another meaning which has to be explained, and ultimately 'means' your explanation for meaning is yet more meaning). Really, sometimes a problem is obvious because that's what it is. I submit it's the case here.

Now, you object that it's not a cop-out and really is an explanation, and try to illustrate that with your sand metaphor. But that metaphor is woefully inadequate, because what's going on in that example is nothing but physical causation producing phenomena that, under a mechanistic understanding of matter, are mundane – there's no thought, mind, or meaning to speak of there. Unless that's what you want to say in the case of brains and go down a full-fledged eliminative route. In which case, problems with that view aside, you're not offering an explanation for meaning anyway – you're saying there is no meaning to explain.

Our motor programs have no “intrinsic meaning”. What has meaning is their relationship to their inputs.

But then I ask, is the meaning of that relationship intrinsic? Let's look at your example. A waving arm has no "intrinsic meaning". The waving arm I raise in response to struggling in a rip tide, however, is my meaningful response to the sensation of water up my nose, i.e. a motor action intended to elicit help. But motor actions in and of themselves have no intentions or meaning – they have them only relative to a mind. If I create a robot that plays a recording which says "Help! There's water here!" whenever it's in the proximity of water, insofar as there's nothing there but stimulus and response, there's also no meaning or intention on the part of the robot. If I press a button on an mp3 player and it plays the sound of a scream, there's no reason to apologize to the mp3 player.
nullasalus
May 24, 2011 at 01:45 PM PDT
vj #94
I must say I had no idea you were an amateur actor.
No reason why you should – I get up to all sorts of things which I don’t put on the internet :-).  I expect the same is true of you.
Getting back to intentions: it seems you are saying that Brutus intends to kill Caesar if he has a disposition to perform an action (e.g. stabbing) that would normally result in Caesar’s death. Hmm. Suppose Brutus first decided to kill Caesar and then asked himself: “How? (Dagger, spear or sword?) When? Where? With whom?” Until these questions are answered, it seems that we cannot speak of a disposition to perform a particular action pattern. Yet you would surely agree that Brutus had the intention when he first decided to kill Caesar – never mind how.
I don’t see the problem. Initially Brutus has a disposition to kill Caesar – by any means; after some planning this becomes a disposition to kill him in a specific manner.
Here’s another problem. If an intention to kill is a disposition to implement an action pattern which normally results in someone’s death (e.g. stabbing), then how does Brutus know that he has the intention of killing Caesar, in advance of the act? Is it only because he (physically or mentally) goes through the motion of stabbing (i.e. practices the act, with or without a dagger) while rehearsing the assassination of Julius Caesar? Would you say that until then, he has no intention of killing Caesar?
This is a more interesting point. How do you know that you intend to do something? Wittgenstein would ask whether this question meant anything – but I think it can be answered. Suppose I decide to run a marathon. I might state my intention to myself and others. I might have fantasies of crossing the finish line. But if I did not train when the opportunity was there and did not fill in the application form, or did both and simply did not run – then I would have to acknowledge that I never really intended to do it. I.e. we know our intentions in much the same way as others know them – through our behaviour (with the addition of being able to do things such as imagine and speak to ourselves). And like others we can be wrong and realise we never did have that disposition.
Regarding freedom: suppose it turns out that we are all living inside a Matrix-style simulation. Are Brutus’ actions still free?
I have never seen The Matrix but I guess we are talking about the world where none of our perceptions or actions are real – like living in a dream. I would say that in this context Brutus has free will, and an intention, to kill Caesar, but he is not free, because he was being fooled – in the same sense that a prisoner has free will but is not free. Both are being denied the opportunity to fulfil their dispositions, albeit in different ways, because in the Matrix presumably they think they have fulfilled them.
Or suppose it transpires that a Calvinist God predestined Brutus to act as he did. Are Brutus’ actions still free?
Yes. And most Calvinists would agree I think.
markf
May 24, 2011 at 05:31 AM PDT
allanius @ 96 Thanks for your post, and yes, I was a bit of a Thomist once (well, I used to attend a Dominican priory on Sundays, which was a hotbed of Thomist scholars), so I'm happy with the first part of your post. Then you say:
But here is the difference: According to Thomas, what you refer to as “morality” comes about only through grace, not through felicitous brain chemistry. Right choices—life-giving choices—which, in the Bible, are choices to love sincerely and unselfishly—are against our mortal nature, which is in “bondage to the grave,” and are only made when we have been changed, through grace, and brought over into the realm of life. Grace is the difference between what you are describing and the Christian view of the human predicament.
Well, you may be right. But as I see it, it's partly "a difference that makes no difference" (would it matter if "grace" was reflected in “felicitous brain chemistry”?) and partly an assertion I don’t find supported. I don’t think “loving sincerely and unselfishly” is particularly “against our ...nature” and I don’t find the adjective “mortal” terribly illuminating. Yes, we are mortal, but the fact that we are mortal doesn’t seem to make us particularly selfish. Indeed I’d argue that it is our very awareness of our own mortality (something shared with few, if any, other species) that allows us to entertain “wants” that go beyond our own immediate physical needs and embrace the needs of others (for example, because we “know we can’t take it with us” we are inclined to leave our possessions to cats’ homes and such). I’d agree that we have inherited (we might even choose to call it “original sin”) selfish desires, not surprisingly (or not surprising to one who accepts evolution), but I’d strongly suggest that we have also inherited (also through evolutionary processes) unselfish desires, as well as the capacity to present those desires as distal goals that may often trump our proximal wants and needs.

In my religious days, I called this “grace” – or at least the capacity for grace, the inherited capacity to reify unselfish, often distal goals and present them as desirable alternatives to the fulfilment of selfish, often proximal, desires, a capacity enlarged by what I then called prayer. And I can see the mythic power of presenting those unselfish desires as belonging to some other kind of “life” than the one we call “mortal”, or “earthly”, or “physical”. But I can also see terrible traps, not least being a dualistic view of the mind/soul and body which is not supported by evidence, nor necessary (IMO) for a perfectly viable account of human behaviour and experience, but which also include a denigration of the physical world and our physical selves which can be profoundly destructive.

Yes, I agree with you that there is a richness to lives in which the self is sublimated into a greater whole, and the emulation of such lives is a noble goal. But I see no reason to think that such lives are exclusively Christian, and much evidence that such “grace” is equally prevalent amongst non-Christians, and, indeed, non-theists. I still like the concept of “grace” – I still find it powerful. But I don’t think it’s necessary to think of it as magic-stuff. I think it is, precisely, “felicitous brain chemistry”, and it is an aspect of our freedom that it is available to all of us, whatever belief system we happen, or not, to hold.
Elizabeth Liddle
May 24, 2011 at 05:27 AM PDT
nullasalus @84
Well, I would say they are encoded as repertoire of weighted models of options (although that probably suggests something far simpler than I have in mind, which is a highly nested and contingent set of options).
But what supplies the meaning of the ‘code’? In the case of computer programs, while code takes place in a software/hardware situation, what does or does not count as code is determined by us in virtue of our minds. Just like a line of pebbles on the ground may “mean” ‘stop here’ – it’s not that a line of pebbles, even that specific line of pebbles, innately (again, under normal views) “means stop”. It’s because that’s the meaning assigned to it by a mind.
Right. I hope I have clarified this now, a little, in my response above to vjtorley (#95). I certainly don’t think that a line of pebbles “means stop” except in the context of an interpreting brain. What I am trying to say is that when a person infers a meaning from what s/he then calls a sign, what is happening, neurally, is that a diverging cascade of possible action programs is triggered, producing simulated output which is then fed back into the system as input until an output of some kind is executed, which may be no more than a series of eye movements, but which may also be a complex utterance, which may itself remain un-uttered but leave a trace as what we call a “memory” of a “thought” that is implemented as an increased probability of that pattern being repeated. But that doesn’t mean that the coding mechanism is the meaning – the medium is not the message! As for what supplies the “code” (scare quotes intentional) – that is probably beyond the scope of this OP, and strays on to ID territory :) I would say that what supplies the code is evolution, but you may think otherwise. But either way, I think, we have to be very careful not to press the “code” metaphor too far – I mean it only in the sense that the creek beds in mudflats tend to “code” for their own persistence – once a creekbed is established, water tends to run down it, and deepen the creek. In neuroscience we call that “Hebb’s rule”: “what fires together, wires together”. A new feature in the estuary – a landslide, for instance – will disrupt the existing creeks, and new creeks will form. Nothing more than physical laws are required to execute the “coding”, and nothing more than physical laws are required to explain why the water flows down the creeks, rather than over the top of the flats. What makes it what we call a “code” is the feedback between water-flow and creek topography. Same with brains, except we call it “long-term potentiation” (LTP).
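[Hebb’s rule as stated here – “what fires together, wires together” – has a very short computational form. The toy sketch below is an editorial illustration, not Dr Liddle’s; the learning rate and network size are arbitrary. The point is just that repeated co-activation strengthens a connection the way repeated flow deepens a creek.]

```python
def hebbian_step(weights, activity, rate=0.1):
    """Strengthen weights[i][j] whenever units i and j are active together."""
    n = len(activity)
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += rate * activity[i] * activity[j]
    return weights

w = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(5):                  # repeated co-activation...
    w = hebbian_step(w, [1.0, 1.0])
print(w)  # ...leaves a deepened "creek": [[0.0, 0.5], [0.5, 0.0]]
```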
When I hear the alarm go off in the morning I know it “means” I have to get up. That meaning is not inherent in a given neural state, it’s inherent in the programs of optional action the sound of the alarm triggers in my brain, which includes highly attenuated action programs that give rise to my sense of myself as an intender to whom the sound of the alarm has meaning.
And again, the same problem. You talk about programs in the brain, but in the case of computers what does and doesn’t count as a program (and what meaning those programs ‘encode’) is determined by us to begin with. That’s like saying, “of course a string of 1s and 0s has no inherent meaning, but by the time you get to a textfield in Actionscript, meaning ‘emerges’ and now, taken together, these hardware states and this software mean ‘duck’.” No doubt they do – in my mind. But now how does meaning arise in my mind? By virtue of myself assigning the meaning to my own actions? By virtue of someone else assigning meaning to my actions? I think the problem there is obvious.
And what is obvious ain’t necessarily so :) I’d say that meaning arises in your mind by means of neural mechanisms that ensure that a stimulus (say that line of pebbles) triggers a range of options for action, and those action programs (this is the tricky part) include actions (highly attenuated actions, many well below execution threshold) implicated in your perception of yourself as a meaning-making agent. Yes it may sound as though there is an obvious problem there, but that’s because (I suggest) we have an innate tendency to regard feedback loops as some kind of intellectual cop-out (“which came first, the chicken or the egg?” “who or what was the prime mover?”). But in this case, I suggest, it’s not a cop-out at all, any more than the sand ripples below the tideline on a shallow beach are a cop-out (I seem to be into shoreline metaphors today) – once a ripple gets started, it self-perpetuates, and the frequency and amplitude of the ripples will be highly stable over many years, even though the actual topography is constantly changing, and the actual sandgrains constantly being moved and replaced. Over our developmental history, I suggest, we gradually come to perceive ourselves as one of the kind of creatures we seem to share a world with, who also seem to be causal, meaning-making agents, but over whose decision-making and meaning-making we have uniquely elevated control. And bingo – problem solved.
On the other hand, if you want to turn around and say that no, there are programs in the human brain that have intrinsic meaning – “original intentionality” – alright. But then materialism is out of the question anyway.
No, I won’t say that, because it’s not what I’m trying to say. Our motor programs have no “intrinsic meaning”. What has meaning is their relationship to their inputs. A waving arm has no “intrinsic meaning”. The waving arm I raise in response to struggling in a rip tide, however, is my meaningful response to the sensation of water up my nose, i.e. a motor action intended to elicit help. I can only hope that my intended meaning is correctly interpreted by those on the shore, and that it triggers motor programs in their brains that involve throwing me a life-belt – that it “means” I’m drowning.
Elizabeth Liddle
May 24, 2011 at 05:02 AM PDT
Hmmm, yes, nice work, Elizabeth. A plenitude of wants = freedom. Of course you realize this is the position of Thomas Aquinas and in fact the basis of the Judeo-Christian religion. “Love the Lord your God with all your heart and soul and mind” and all that. First, “God is love.” Next, the fall of man is depicted as a false and destructive choice between vanity (the desire to be “like God”) and a God-like love. The law, which is wholly based on love of God and neighbor, is offered as an opportunity to correct this colossal death-bringing blunder (“do this and you shall live”). And the cross is described as a sacrifice of love that reconciles man to God in spite of his limitations and false choices.

But here is the difference: According to Thomas, what you refer to as “morality” comes about only through grace, not through felicitous brain chemistry. Right choices—life-giving choices—which, in the Bible, are choices to love sincerely and unselfishly—are against our mortal nature, which is in “bondage to the grave,” and are only made when we have been changed, through grace, and brought over into the realm of life. Grace is the difference between what you are describing and the Christian view of the human predicament.
allanius
May 24, 2011 at 04:54 AM PDT
vjtorley (#92) Thank you for your thoughtful response. I think there are two separable issues here: one is the meaning of meaning; the second is the nature of moral responsibility. So let me tackle your response in two parts:
It seems to me that you are claiming that meaning is inherent in causal action patterns that occur reliably, and that a brain state can embody inherent meaning to the extent that it is part of a program which reliably triggers an action pattern in its human bearer.
I wouldn’t say that meaning is “inherent in the causal action patterns” because I can’t actually parse that :). Again, I think we have a level-of-analysis problem. I would say that I make meaning when I interpret a signal as having implications for some future action. In common parlance something “means” something when it acts as a token for something else. So “money means power”; “dark clouds mean rain”; “the word cat means a four legged furry mammal with a tail”; “the time means I’m late”; “the fire alarm means I have to get out in a hurry if I don’t want to burn alive”. And all those things are easy to think of in neural terms, especially if you think of each sign (i.e. meaningful stimulus) as being a trigger for potential action. So meaning is simply inherent in the concept of a sign itself, which is why “to signify” is a synonym for “to mean”. So for my cat, the rattle of the tin-opener “means” food, and I observe that it “means” food for the cat, because I observe it making a dash for the kitchen and looking up expectantly. In reverse, the cat “means” that it wants food when it rubs against my legs and mews pathetically. So we have two-directional communication of meaning. But there is nothing mysterious about that. My laptop can tell me that its battery is low, and I understand its meaning, and I can tell my laptop to shut down, and it understands mine. So it’s not either mine, or the cat’s (or even the laptop’s) neural state that “embodies inherent meaning” – it’s the interaction between me and the cat with regard to the food, or me and the laptop with regard to the battery.
The problem I have with this view is that it would make our mental states only retrospectively meaningful: they would only possess meaning insofar as we could say that they were part of a program that successfully resulted in the action pattern desired and intended by the individual. Even desires and intentions would, on your view, only acquire meaning by virtue of their resulting in actions.
I don’t think so, or only in the trivial sense that to make sense of some word, or event, or sign, we have to perceive it before we can interpret it – consider what it means. The reason being that actions are programmed long before they are executed, and in many cases never are – the brain operates by means of re-entrant circuits in which simulated output, as it were (e.g. a program for action that is not executed, but rises to near-execution threshold), is re-entered as input. This is the essence, I suggest, of the mechanism of intention. As sophisticated brain possessors, human beings have the ability to test possible action options before execution, and evaluate their simulated consequences against both proximal and distal goals. For example, imagine I am trying to decide between having a coffee-break and carrying on working: my brain activates the motor programs involved in going for a coffee-break, which in turn activates the sensory programs that would result from such a break, and we call this “imagining going for a coffee-break”; it also activates the motor programs involved in staying at my desk and working, and, in turn, the consequences of doing so, including the sense of satisfaction I will have if I get my project finished by lunchtime. These options duke it out in my brain, some pathways being mutually excitatory, some being mutually inhibitory, until a winning action reaches execution threshold and I act. I call this “deciding, on the basis of alternative outcomes, whether to have a coffee-break”, and, if I do decide to have a coffee-break, that I “intended” to have a coffee-break. It’s not retrospective – nothing gets executed until the options have been considered. But nor is it very mysterious – it’s fairly easy to model a very simple version of that kind of decision making, and indeed lots of control programs work on just that sort of basis – it’s an implementation of fuzzy logic, if you like.
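[As the comment says, a very simple version of this kind of decision-making is easy to model. Here is one hedged sketch – the option names, payoffs and parameters are invented for illustration. Each option’s activation grows with the simulated value of its outcome and is suppressed by its rivals, until one crosses execution threshold.]

```python
def decide(options, threshold=1.0, rate=0.1, inhibition=0.05, steps=1000):
    """options maps an action name to the (simulated) value of its
    imagined outcome; the first activation to cross threshold 'wins'."""
    act = {name: 0.0 for name in options}
    for _ in range(steps):
        for name, payoff in options.items():
            rivals = sum(a for n, a in act.items() if n != name)
            act[name] = max(0.0, act[name] + rate * payoff - inhibition * rivals)
            if act[name] >= threshold:
                return name   # the "winning" program reaches execution
    return None

# A proximal want duking it out with a more distal one:
print(decide({"coffee break": 0.6, "finish project": 0.8}))  # finish project
```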
However, I think that introspection would overwhelmingly contradict you on this point. Just ask anyone who has ever planned a murder or a bank robbery whether their plans had meaning only after being communicated, written down or executed. I think any criminal would say that plans per se are meaningful, whether or not they’re revealed to anyone.
Well, no. I think the concept of the “forward model” is relevant here, and it’s well established in the motor control literature. We constantly, at quite a trivial level, make a “forward model” of the consequences of actions that have not yet taken place, and then revise that model in the light of the results of that action once it has. It’s fundamental to coordinated movement, including eye-movements, which means it’s also fundamental to our data-collection processes – we notice things that are unexpected, i.e. which violate our forward model, and we even know how this works at a very precise neural level (at the level of individual neurons, even). Brains are, par excellence, predicting machines, which is why they are so efficient – they are set up to reserve processing power for the unexpected, which is obviously advantageous to survival!
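[The forward-model idea – predict the sensory consequence of a command, then learn from the prediction error so that only the unexpected needs processing – can be sketched in a few lines. The linear command-to-outcome mapping below is an assumption made purely for brevity, not a claim about the motor-control literature.]

```python
class ForwardModel:
    """Predicts an outcome as gain * command, then revises the gain
    from the prediction error once the actual outcome is observed."""
    def __init__(self, gain=1.0, rate=0.3):
        self.gain = gain
        self.rate = rate

    def predict(self, command):
        return self.gain * command

    def update(self, command, actual):
        error = actual - self.predict(command)  # the "unexpected" part
        self.gain += self.rate * error * command
        return error

fm = ForwardModel()
for step in range(5):                 # the world's true gain is 1.5
    err = fm.update(command=1.0, actual=1.5)
    print(f"step {step}: prediction error = {err:.3f}")
# The errors shrink toward zero: expected events become cheap, and
# processing is reserved for surprises.
```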
Your proposal would also obviate the distinction between first- and second-degree murder, it seems. For in both cases, someone dies. If a death is called intentional only by virtue of its being properly executed, then it seems to me that only truly accidental deaths would get off the hook, legally speaking.
Well, I apologise for being unclear, but hope it is now clearer that this is not what I am saying! Forming the intention to kill someone involves a “forward model” of the consequences of the killing action. If it misfires, because your forward model was inaccurate, there’s no reason to think that lets you off the hook. Which lets us segue nicely into your second issue:
Finally, you argue that you are free even if your choices are fully determined:
I regard my freedom as the freedom that the thing I call “I” possesses in virtue of being a highly evolved decision-maker.
Any sentient animal could claim the same freedom. Yet we don’t jail chimps. While your account explains how my actions can still be said to be mine, even if they’re fully determined, it fails to explain how they can be said to be free. There’s more to freedom than just doing what you want.
Heh. OK, this is a big one, and I’m not going to be able to do it justice in a comment on a blog! But let me have a go: Yes, indeed, there is “more to freedom than doing what you want”, in the narrow sense of that phrase, but part of that more is the very freedom to want different things.

To take my coffee-break example, I have, in fact, two competing wants. I want my cup of coffee; I also want to finish my project. One is, if you like, a proximal want, the other a more distal want, although often the battle can be between two proximal or two distal wants, or any combination of a vast range of options with varying payback timescales, and indeed, complex combinations of rewards. For example, something that might weigh on my decision to have a coffee-break might be that I know that a particular colleague is feeling a bit lonely, and that it would be a good excuse to have a chat. Or something that might weigh on my decision to keep working is the thought that if I finish early, I would have time to visit my elderly aunt on the way home, or even slip into the pub for a quick pint. All these things are wants. I would say my freedom resides in the sheer number of options, and possible outcomes, that I am capable, by virtue of my sophisticated human, symbol-using brain, of putting into the melting pot before initiating an actual course of action.

What makes us so much freer than other animals, and even from other primates like chimps, is what is sometimes called our “freedom from immediacy”, conferred, I suggest, largely by our extraordinary capacity for language, and the tools it provides us with for simulating distal goals, and recalling outcomes from previous actions. So, is that why we don’t jail chimps? Partly, I suggest – we don’t hold a chimp responsible, just as we don’t always hold a child responsible, in part because we agree that the chimp/child was not in a position to consider the full import of his/her actions. Most importantly, we agree, I think, that neither a chimp nor a small child has much “Theory of Mind capacity” – cannot easily imagine – simulate – the consequences of their action from the point of view (literally) of another being.

But I also suggest, following Dennett, that moral responsibility is coterminous with the act of defining the self; as Dennett repeats throughout Freedom Evolves: “if you make yourself really small, you can externalise virtually anything”. By the same token, he argues, it is by accepting moral responsibility for our actions that we define ourselves. And this is relevant to the chimp question – we don’t jail chimps in part because we don’t accord them a full human self. With adult human beings we mostly do, which is why we sometimes jail them when they fail to accept their human moral responsibilities. Sometimes we don’t, in which case we say they are “not fully responsible”, and, by the same token, we make them a little smaller – we say they are damaged, ill, crazy, not fully in control of their own actions. In other words, we draw the boundaries of their selves rather tightly, and regard much of what their brains do as “not them”.

I regard myself as free, even if the universe proves after all to be deterministic, not because there could be an alternative universe in which I could have done something different, but because I identify the thing I call “I” with the decision-making machinery that is my brain (together with all the things that make it what it is, including my own past decisions).
In other words, I am free because I accept moral responsibility, not morally responsible because I’m free :)
Elizabeth Liddle
May 24, 2011 at 04:22 AM PDT
markf (#93) I must say I had no idea you were an amateur actor. Of course I very much enjoyed reading Shakespeare's plays at school, including Julius Caesar. Unfortunately, I never was much of an actor, although my wife did some stage acting for a while, and my brother-in-law acted in a Japanese rendition of Julius Caesar at a theater in Tokyo a few years ago.

Getting back to intentions: it seems you are saying that Brutus intends to kill Caesar if he has a disposition to perform an action (e.g. stabbing) that would normally result in Caesar's death. Hmm. Suppose Brutus first decided to kill Caesar and then asked himself: "How? (Dagger, spear or sword?) When? Where? With whom?" Until these questions are answered, it seems that we cannot speak of a disposition to perform a particular action pattern. Yet you would surely agree that Brutus had the intention when he first decided to kill Caesar – never mind how.

Here's another problem. If an intention to kill is a disposition to implement an action pattern which normally results in someone's death (e.g. stabbing), then how does Brutus know that he has the intention of killing Caesar, in advance of the act? Is it only because he (physically or mentally) goes through the motion of stabbing (i.e. practices the act, with or without a dagger) while rehearsing the assassination of Julius Caesar? Would you say that until then, he has no intention of killing Caesar?

Regarding freedom: suppose it turns out that we are all living inside a Matrix-style simulation. Are Brutus' actions still free? Or suppose it transpires that a Calvinist God predestined Brutus to act as he did. Are Brutus' actions still free? I'm curious to see how you would answer these questions.
vjtorley
May 24, 2011 at 04:19 AM PDT
vj #65 & 92
Thank you for your post. You seem to adopt a more robust account of intentions than markf.
I think Elizabeth’s and my views of intentions are compatible – although she is more knowledgeable and therefore more specific. She sees intentions as a “repertoire of weighted models of options”. I am saying that having such a model results in a disposition to act in certain ways.
Incidentally, one problem with markf’s behavioral characterization of intentions (as dispositions to act in certain ways) is that it fails to explain intentions relating to speech. A speech utterance has propositional content; consequently, it must have an inherent meaning.
Not at all – speech gets its meaning from what we intend when we speak. If I utter the words “Brutus is an honourable man” this can mean many different things depending on what I intend. It might even be irony and mean he has behaved badly. As a keen amateur actor I spend a lot of time trying to decide on what characters mean by their lines! To do that I analyse what they are trying to do when they speak. This in turn is largely determined by context, including a history of what led up to that moment.
The problem I have with this view is that it would make our mental states only retrospectively meaningful: they would only possess meaning insofar as we could say that they were part of a program that successfully resulted in the action pattern desired and intended by the individual. Even desires and intentions would, on your view, only acquire meaning by virtue of their resulting in actions.
However, I think that introspection would overwhelmingly contradict you on this point. Just ask anyone who has ever planned a murder or a bank robbery whether their plans had meaning only after being communicated, written down or executed. I think any criminal would say that plans per se are meaningful, whether or not they’re revealed to anyone.
This is just the problem that is avoided by recognising that intentions are dispositions.  A person or object can have a disposition to do something without having the opportunity to actually do it.  For example, a chess playing computer will have a disposition to put its opponent in checkmate (in this case we might even say it has an intention) but if it does not get the opportunity it will never actually get an opponent in checkmate.  
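[markf’s chess-computer example can be put in code form: the disposition is a standing conditional that exists whether or not the triggering situation ever arises. A toy sketch – the class and the board representation are invented purely for illustration.]

```python
class ChessBot:
    """Disposed to deliver checkmate whenever the position allows it."""
    def choose_move(self, position):
        mate = position.get("mate_in_one")
        if mate is not None:          # the standing disposition
            return mate
        return position["legal_moves"][0]

bot = ChessBot()
# The disposition exists even in a game where no mating move ever appears:
print(bot.choose_move({"mate_in_one": None, "legal_moves": ["e2e4"]}))   # e2e4
print(bot.choose_move({"mate_in_one": "Qh7#", "legal_moves": ["Qh7#"]})) # Qh7#
```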
Any sentient animal could claim the same freedom. Yet we don’t jail chimps.
That’s not because they lack free will. It is because they don’t understand the consequences of their actions and they have little or no sense of acting rightly or wrongly.
There’s more to freedom than just doing what you want.
Well, that is the issue under discussion. Acting according to free will includes:
* acting according to your desires (including such complications as a balance of long and short term desires and desires to be moral)
* acting consciously (as opposed to in your sleep or a reflex action)
What other ingredient is there? How do you know that you or anyone else has it? Why does it matter if you have it or not?
markf
May 24, 2011 at 02:56 AM PDT
Elizabeth Liddle (#82) Thank you for your comments. You write:
When I hear the alarm go off in the morning I know it “means” I have to get up. That meaning is not inherent in a given neural state, it’s inherent in the programs of optional action the sound of the alarm triggers in my brain, which includes highly attenuated action programs that give rise to my sense of myself as an intender to whom the sound of the alarm has meaning.
It seems to me that you are claiming that meaning is inherent in causal action patterns that occur reliably, and that a brain state can embody inherent meaning to the extent that it is part of a program which reliably triggers an action pattern in its human bearer. The problem I have with this view is that it would make our mental states only retrospectively meaningful: they would only possess meaning insofar as we could say that they were part of a program that successfully resulted in the action pattern desired and intended by the individual. Even desires and intentions would, on your view, only acquire meaning by virtue of their resulting in actions. However, I think that introspection would overwhelmingly contradict you on this point. Just ask anyone who has ever planned a murder or a bank robbery whether their plans had meaning only after being communicated, written down or executed. I think any criminal would say that plans per se are meaningful, whether or not they're revealed to anyone. Your proposal would also obviate the distinction between first- and second-degree murder, it seems. For in both cases, someone dies. If a death is called intentional only by virtue of its being properly executed, then it seems to me that only truly accidental deaths would get off the hook, legally speaking. Finally, you argue that you are free even if your choices are fully determined:
I regard my freedom as the freedom that the thing I call “I” possesses in virtue of being a highly evolved decision-maker.
Any sentient animal could claim the same freedom. Yet we don't jail chimps. While your account explains how my actions can still be said to be mine, even if they're fully determined, it fails to explain how they can be said to be free. There's more to freedom than just doing what you want.
vjtorley
May 24, 2011 at 01:21 AM PDT
#82 Elizabeth Liddle
Like markf I’m a compatibilist, but unlike markf, I don’t think the “free will” we possess depends on a bit of quantum randomness;
Actually I almost totally agree with you. I don't think free will depends on quantum randomness. I just think it might include an element of such randomness, in that sometimes when we make a decision the result might be truly random and not determined by our current brain state plus environment. I am really enjoying your comments. So nice to hear from someone with some real knowledge.
markf
May 23, 2011 at 10:40 PM PDT
tragic,
Too bad his persuasive arguments only apply to problems that exist within his ridiculous worldview.
Actually, the A-T arguments highlight problems in other worldviews – problems which don't exist in the A-T worldview.
The real problem is a Catholic church that insists on holding onto all its ancient and medieval non-Christian nonsense.
Materialism is as ancient as the Aristotelian view. That you don't understand something doesn't make it nonsense.
Just you wait null. Aristotle is going down. You heard it here first.
No, I didn't. This is a centuries-, even millennia-old line – you're not original at all, not even in what you misunderstand. But by all means, endorse materialism if you wish.
nullasalus
May 23, 2011 at 09:33 PM PDT
Eric: Ahhhhhh, I think (or do I?) I've got you now. Thanks for taking the time. It all seemed so easy when Descartes said: Cogito ergo sum. Now it all seems so complicated . . . .
ellazimm
May 23, 2011 at 09:27 PM PDT
@null: LOL! Too bad his persuasive arguments only apply to problems that exist within his ridiculous worldview. The real problem is a Catholic church that insists on holding onto all its ancient and medieval non-Christian nonsense. Just you wait null. Aristotle is going down. You heard it here first.
tragic mishap
May 23, 2011 at 09:25 PM PDT
ellazimm @ 81, Thanks. I guess I don't see how a strictly deterministic approach is of any use whatsoever. In approach #1 I outlined, we end up with a purely deterministic position that, while perhaps interesting for a few academic minutes as we discuss angels on the head of a pin, is really useless, both because it is self-refuting and because it doesn't give us any useful information about ourselves or anyone else we interact with. I don't think there is much good evidence that consciousness is illusory, and plenty of evidence that it is real. Certainly we all (even the alleged reductionists) conduct our lives as though it is real.

That said, I think approach #2, while being materialistic in origin, arguably provides a basis for consciousness/free will. Again, I don't necessarily hold to that view, but I'm just trying to make sure I understand the options available to someone who argues for a materialistic origin of consciousness. In other words, in discussing these issues, I think we have to be careful (and some have not been careful) to distinguish between: (i) the idea of an ongoing materialistic cause for all action, which negates free will, and (ii) the idea of a materialistic *origin* for consciousness, which argues for a materialistic basis or underpinning for consciousness, but also views consciousness as a real phenomenon that has somehow become partially disconnected from its underlying source and can act in its own right.
Eric Anderson
May 23, 2011 at 07:45 PM PDT
I long for the day when people stop taking Aristotle so seriously.
I think Aristotle isn't taken very seriously at all. It's those damn persuasive arguments he has and that others have developed, building on him.
nullasalus
May 23, 2011 at 06:35 PM PDT
I long for the day when people stop taking Aristotle so seriously. Somebody needs to take him out forever. The dude is seriously annoying the heck out of me.
tragic mishap
May 23, 2011 at 06:16 PM PDT
Well, I would say they are encoded as repertoire of weighted models of options (although that probably suggests something far simpler than I have in mind, which is a highly nested and contingent set of options).

But what supplies the meaning of the 'code'? In the case of computer programs, while code takes place in a software/hardware situation, what does or does not count as code is determined by us in virtue of our minds. Just like a line of pebbles on the ground may "mean" 'stop here' – it's not that a line of pebbles, even that specific line of pebbles, innately (again, under normal views) "means stop". It's because that's the meaning assigned to it by a mind.

When I hear the alarm go off in the morning I know it “means” I have to get up. That meaning is not inherent in a given neural state, it’s inherent in the programs of optional action the sound of the alarm triggers in my brain, which includes highly attenuated action programs that give rise to my sense of myself as an intender to whom the sound of the alarm has meaning.

And again, the same problem. You talk about programs in the brain, but in the case of computers what does and doesn't count as a program (and what meaning those programs 'encode') is determined by us to begin with. That's like saying, "of course a string of 1s and 0s has no inherent meaning, but by the time you get to a textfield in Actionscript, meaning 'emerges' and now, taken together, these hardware states and this software mean 'duck'." No doubt they do – in my mind. But now how does meaning arise in my mind? By virtue of myself assigning the meaning to my own actions? By virtue of someone else assigning meaning to my actions? I think the problem there is obvious.

On the other hand, if you want to turn around and say that no, there are programs in the human brain that have intrinsic meaning – "original intentionality" – alright. But then materialism is out of the question anyway.
nullasalus
May 23, 2011 at 04:52 PM PDT
Oh, and thanks for having me :)
Elizabeth Liddle
May 23, 2011 at 02:55 PM PDT
#65 vjtorley You write:
You write that “things-with-brains can intend, and we already know a lot about just how that intention is coded.” I would beg to differ here. The work of the late Wilder Penfield provides direct empirical evidence to the contrary: no matter how he stimulated his patients’ brains, he was unable to make them intend to do anything. He was able to make them raise their arms, but inevitably their response was: “I didn’t do that. You did.” Evidence of this sort caused Penfield to reject his earlier belief in materialism.
But an awful lot has happened since Penfield! Great man though he was. Not only are there many cases of illusions of alien agency, there are also many accounts of induced illusions of self-agency, not least being the results of simple priming experiments, but also including technologies like TMS. There's an excellent review of the current neuroscience of volition by Patrick Haggard here: http://www.nature.com/nrn/journal/v9/n12/abs/nrn2497.html
What we do know a lot about is how intentions are realized, as motor patterns. But of course, some intentions don’t relate to bodily movements at all, while others relate to bodily movements only generally, or in the distant future.
Absolutely. That's why I mentioned both distal and proximal goals.
For instance, I might formulate the intention to henceforth multiply numbers in my head from left to right instead of from right to left, when performing mental arithmetic (left to right is much better, by the way). Or I might formulate the intention to pray silently while meditating, instead of trying to achieve a Zen-like state of “empty mind”. (As it happens, I don’t meditate.) Or I might formulate the general intention to get up 15 minutes earlier on weekdays, or the long-term intention to complete a course of study. How are these intentions “coded” in the brain?
Well, I would say they are encoded as repertoire of weighted models of options (although that probably suggests something far simpler than I have in mind, which is a highly nested and contingent set of options).
I don’t think they are. What’s there to code? To be sure, all of these intentions have an inherent meaning – but as I argued in my post above, that’s one thing that a neural state cannot possess, in any case.
I think you are confusing levels here. Certainly a "neural state" cannot possess a "meaning". "Meaning" inheres at a higher level of analysis than the state. That doesn't mean that a temporal sequence of states doesn't embody what we (e.g. me, as an agent) call "meaning". When I hear the alarm go off in the morning I know it "means" I have to get up. That meaning is not inherent in a given neural state, it's inherent in the programs of optional action the sound of the alarm triggers in my brain, which includes highly attenuated action programs that give rise to my sense of myself as an intender to whom the sound of the alarm has meaning. Like markf I'm a compatibilist, but unlike markf, I don't think the "free will" we possess depends on a bit of quantum randomness; I regard my freedom as the freedom that the thing I call "I" possesses in virtue of being a highly evolved decision-maker. I am free to choose, not just randomly (which would be a funny kind of freedom, but freedom of a sort, and I do possess that too – I can decide not to decide, but to flip a coin instead), but after taking account of the pros and cons, short and long-term. The fact that we can account (at least I don't see why we can't) for that account-taking in physical terms doesn't make my freedom any less; it just incorporates (literally) me as the decision-making thing.
Elizabeth Liddle
May 23, 2011 at 02:54 PM PDT
KF: Okay, I'm not sure how your definition of phase space differs from sample space in this example but I hear you.

Eric: I suppose, but how EXACTLY do either of those two options . . . oh, I get it now. I was going to say: how do either of those two options allow for my clear ability to come up with many different sequences of zeroes and ones of a designated length? And you would say, I think: see, that proves your consciousness is not a strictly materialistic process, which would only give one output. Is that right? I'm not saying I agree with that, but am I getting your argument?
ellazimm
May 23, 2011 at 02:20 PM PDT
ellazimm: "I apologise if I’m still being dense but I don’t see how a materialist would expect/restrict an individual to only picking one sequence based on this argument." Isn't it the case that the materialist has one of the following two options? 1. Actions are the result of matter and energy interacting only, namely, whatever neural pathways, interactions, etc. exist cause an action to take place. Free will is therefore not real, but illusory. 2. Somehow at some level of complexity and organization, consciousness arises as an emergent property, which then is somehow (at least partly) decoupled from the underlying matter. In this scenario, choices would be real, free will would exist, but it would have arisen from materialistic processes (and in some viewpoints would still depend on the underlying matter, such that upon the death or destruction of the matter, the consciousness would cease to exist).Eric Anderson
May 23, 2011 at 02:03 PM PDT