Uncommon Descent Serving The Intelligent Design Community

How is libertarian free will possible?


In this post, I’m going to assume that the only freedom worth having is libertarian free will: the free will I possess if there are choices that I have made during my life where I could have chosen differently, under identical circumstances. That is, I believe that libertarian free will is incompatible with determinism. By contrast, indeterminism is compatible with the existence of libertarian freedom, but in no way implies it.

There are some people who think that even if your choices are fully determined by your circumstances, they are still free, provided you selected them for a reason and you are capable of being educated to act for better reasons. People who think like that are known as compatibilists. I’m not one of them; I’m an incompatibilist. Specifically, I’m an agent-causal incompatibilist: I believe that humans have a kind of agency (an ability to act) that cannot be explained in terms of physical events.

Some time ago, I came across the Cogito Model of human freedom on The Information Philosopher website of Dr. Bob Doyle. The website represents a bold philosophical attempt to reconcile the valid insights underlying both determinism and indeterminism. Doyle argues that his model accords well with the findings of quantum theory and guarantees humans libertarian freedom, while at the same time avoiding the pitfall of making chance the cause of our actions. Here’s an excerpt:

Our Cogito model of human freedom combines microscopic quantum randomness and unpredictability with macroscopic determinism and predictability, in a temporal sequence.

Why have philosophers been unable for millennia to see that the common sense view of human freedom is correct? Partly because their logic or language preoccupation makes them say that either determinism or indeterminism is true, and the other must be false. Our physical world includes both, although the determinism we have is only an adequate description for large objects. So any intelligible explanation for free will must include both indeterminism and adequate determinism.

At first glance, Dr. Doyle’s Cogito Model appears to harmonize well with the idea of libertarian free will. Doyle makes a point of disavowing determinism, upholding indeterminism, championing Aristotle, admiring Aquinas and defending libertarian free will. However, it turns out that he’s no Aristotelian, and certainly no Thomist. Indeed, he isn’t even a bona fide incompatibilist. Nevertheless, Doyle’s Cogito Model is a highly instructive one, for it points the way to how a science-friendly, authentically libertarian account of freedom might work.

There are passages on Dr. Doyle’s current Web site (see for instance paragraphs 3 and 4 of his page on Libertarianism) where he appears to suggest that our character and our values determine our actions. This is of course absurd: if I could never act out of character, then I could not be said to have a character. I would be a machine.

Misleadingly, in his Web page on Libertarianism, Dr. Doyle conflates the incoherent view that “an agent’s decisions are not connected in any way with character and other personal properties” (which is surely absurd) with the entirely distinct (and reasonable) view that “one’s actions are not determined by anything prior to a decision, including one’s character and values, and one’s feelings and desires” (emphases mine). Now, I have no problem with the idea that my bodily actions are determined by my will, which is guided by my reason. However, character, values, feelings and desires are not what makes an action free – especially as Doyle makes it quite clear in his Cogito Model that he envisages all these as being ultimately determined by non-rational, physicalistic causes:

Macro Mind is a macroscopic structure so large that quantum effects are negligible. It is the critical apparatus that makes decisions based on our character and values.

Information about our character and values is probably stored in the same noise-susceptible neural circuits of our brain…

The Macro Mind has very likely evolved to add enough redundancy, perhaps even error detection and correction, to reduce the noise to levels required for an adequate determinism.

The Macro Mind corresponds to natural selection by highly determined organisms.

There is a more radical problem with Doyle’s model: he acknowledges the reality of downward causation, but because he is a materialist, he fails to give a proper account of downward causation. He seems to construe it in terms of different levels of organization in the brain: Macro Mind (“a macroscopic structure so large that quantum effects are negligible… the critical apparatus that makes decisions based on our character and values”) and Micro Mind (“a random generator of frequently outlandish and absurd possibilities”) – the latter being susceptible to random quantum fluctuations, from which the former makes a rational selection.

Doyle goes on to say:

Our decisions are then in principle predictable, given knowledge of all our past actions and given the randomly generated possibilities in the instant before decision. However, only we know the contents of our minds, and they exist only within our minds. Thus we can feel fully responsible for our choices, morally and legally.

This passage leads me to conclude that Doyle is a sort of compatibilist, after all. As I’ve said, I’m not.

So how do I envisage freedom? I’d like to go back to a remark by Karl Popper, in his address entitled Natural Selection and the Emergence of Mind, delivered at Darwin College, Cambridge, on November 8, 1977. Let me say at the outset that I disagree with much of what Popper says. However, I think he articulated a profound insight when he said:

A choice process may be a selection process, and the selection may be from some repertoire of random events, without being random in its turn. This seems to me to offer a promising solution to one of our most vexing problems, and one by downward causation.

Let’s get back to the problem of downward causation. How does it take place? The eminent neurophysiologist and Nobel prize winner, Sir John Eccles, openly advocated a “ghost in the machine” model in his book Facing Reality, 1970 (pp. 118-129). He envisaged that the “ghost” operates on neurons that are momentarily poised close to a threshold level of excitability.

But that’s not how I picture it.

My model of libertarian free will

Reasoning and choosing are indeed immaterial processes: they are actions that involve abstract, formal concepts. (By the way, computers don’t perform formal operations; they are simply man-made material devices that are designed to mimic these operations. A computer is no more capable of addition than a cash register, an abacus or a Rube Goldberg machine.)

Reasoning is an immaterial activity. This means that reasoning doesn’t happen anywhere – certainly not in some spooky Cartesian soul hovering 10 centimeters above my head. It has no location. Ditto for choice. However, choices have to be somehow realized on a physical level, otherwise they would have no impact on the world. The soul doesn’t push neurons, as Eccles appears to think; instead, it selects from one of a large number of quantum possibilities thrown up at some micro level of the brain (Doyle’s micro mind). This doesn’t violate quantum randomness, because a selection can be non-random at the macro level, but random at the micro level. The following two rows of digits will serve to illustrate my point.

1 0 0 0 1 1 1 1 0 0 0 1 0 1 0 0 1 1
0 0 1 0 0 0 0 1 1 0 1 1 0 1 1 1 0 1

The above two rows of digits were created by a random number generator. Now suppose I impose the macro requirement: keep the columns whose sum equals 1, and discard the rest. I now have:

1 0 1 1 1 0 0 0 0 1
0 1 0 0 0 1 1 1 1 0

Each row is still random, but I have imposed a non-random macro-level constraint. That’s how my will works when I make a choice.
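This two-stage picture (random generation at the micro level, non-random selection at the macro level) is easy to simulate. The following is a minimal Python sketch; the function name macro_filter and all other details are my own illustration, not part of any published model:

```python
import random

def macro_filter(rows):
    """Keep only the columns of `rows` whose digits sum to 1 (the
    non-random, macro-level constraint); discard the rest.  The
    surviving digits themselves are untouched micro-level randomness."""
    kept = [col for col in zip(*rows) if sum(col) == 1]
    return [list(r) for r in zip(*kept)] if kept else [[] for _ in rows]

# Two rows of random binary digits: the micro-level "possibilities".
rng = random.Random(0)  # fixed seed so the sketch is reproducible
rows = [[rng.randint(0, 1) for _ in range(18)] for _ in range(2)]

filtered = macro_filter(rows)
# Every surviving column obeys the macro constraint, yet each row,
# read on its own, remains a patternless string of bits.
assert all(sum(col) == 1 for col in zip(*filtered))
```

Running the same filter on the two rows printed above reproduces the shorter rows that follow them: the constraint is imposed at the level of columns, while no individual digit is ever altered.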

For Aristotelian-Thomists, a human being is not two things – a soul and a body – but one being, capable of two radically different kinds of acts – material acts (which other animals are also capable of) and formal, immaterial actions, such as acts of choice and deliberation. In practical situations, immaterial acts of choice are realized as a selection from one of a large number of randomly generated possible pathways.

On a neural level, what probably happens when an agent decides to raise his/her arm is this: the arm goes through a large number of micro-level muscular movements (tiny twitches) which are randomly generated at the quantum level. The agent tries these out over a very short interval of time (a fraction of a second) before selecting the one which feels right – namely, the one which matches the agent’s desire to raise his/her arm. This selection continues during the time interval over which the agent raises his/her arm. The wrong (randomly generated quantum-level) micro-movements are continually filtered out by the agent.
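As a toy illustration of this continuous filtering (entirely my own sketch, with made-up numbers, and not a claim about actual neurophysiology), one can model the arm as an angle, the micro mind as a generator of random candidate twitches, and the agent as a selector keeping whichever twitch best matches the intention:

```python
import random

def raise_arm(target=90.0, steps=20, n_options=50, seed=1):
    """Toy model: at each instant the 'micro mind' throws up many
    random candidate micro-movements; the agent keeps only the one
    that best advances the arm toward the intended angle."""
    rng = random.Random(seed)
    angle = 0.0
    for _ in range(steps):
        # Randomly generated micro-movements (a stand-in for
        # quantum-level twitches), in degrees.
        candidates = [rng.uniform(-5.0, 10.0) for _ in range(n_options)]
        # Non-random selection: the twitch that "feels right", i.e.
        # the one leaving the arm closest to where the agent wants it.
        best = min(candidates, key=lambda d: abs(target - (angle + d)))
        angle += best
    return angle

final_angle = raise_arm()  # ends up close to 90 degrees
```

Although every individual candidate twitch is random, the arm reliably ends up near the target, because the selection among candidates is not random; this is the same logic as the column-filtering example with the rows of digits.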

The agent’s selection usually reflects his/her character, values and desires (as Doyle proposes) – but on occasion, it may not. We can and do act out of character, and we sometimes act irrationally. Our free will is not bound to act according to reason, and sometimes we act contrary to it (akrasia, or weakness of will, being a case in point).

So I agree with much of what Doyle has to say, but with this crucial difference: I do not see our minds as having been formed by the process of natural selection. Since thinking is an immaterial activity, any physicalistic account of its origin is impossible in principle.

Comments
Elizabeth Liddle:
Mung, I do like talking to you when you aren’t accusing me of lying.
In the States here we have a phenomenon known as "9/11 Truthers." http://www.911truth.org/
TO EXPOSE the official lies and cover-up surrounding the events of September 11th, 2001 in a way that inspires the people to overcome denial and understand the truth; namely, that elements within the US government and covert policy apparatus must have orchestrated or participated in the execution of the attacks for these to have happened in the way that they did.
No doubt these people firmly believe the "truth" of what they espouse. Do you believe it is possible for someone to perpetuate a lie without knowing it's a lie? I do. Does that make it less of a lie?

Mung
July 19, 2011 at 01:30 PM
Mung:
Elizabeth Liddle:
Because as conscious, planning organisms, we need to know we are in the grip of an aversive reaction in order to decide how best to proceed.
I don’t know what sort of statement that is, but it does not sound like an evolutionary account.
It isn't. But it's perfectly consistent with evolutionary mechanisms.
But I’m trying to get down to the fundamentals.
Cool.
So you’re not saying that pain itself is an aversion mechanism?
No, I'm saying it is the result of an aversion mechanism, operating within a conscious system.
But are you saying that underneath pain there is an aversion mechanism?
Yes. Or phylogenetically earlier. Possibly developmentally earlier as well. In that sense "underneath".
So how far down the chain of being do these aversion mechanisms extend? Bacteria? Can a bacterium sense danger or potential harm and seek to avert it? Is a brain required for the existence of aversion mechanisms?
Well, as you probably realise, I don't subscribe to the Chain of Being stuff anyway. But it extends at least as far as nematodes. Not sure about bacteria, but I wouldn't be surprised. Also some plants (mimosa being a famous example). Definitely sea anemones. It's a pretty useful trick. Mung, I do like talking to you when you aren't accusing me of lying. Can it stay this way?

Elizabeth Liddle
July 19, 2011 at 01:19 PM
@Doveton post 111 RE: Abstraction vs. Emergent Property The idea of "abstraction" I'm trying to convey is hopefully captured by a software-programming metaphor. The program (or assemblage of source code) that executes on a hardware platform is at a level of abstraction from that platform. It isn't a part of the platform, in that the two are mutually exclusive. I don't have to communicate with the hardware platform directly to make it perform some action; I interact with the platform indirectly. I write computer programs using one of many existing programming "languages". It beats having to write the machine code directly! A compiler/interpreter (itself another software application) sits between my program source code and the hardware. That compiler translates my source code (along with linking the code to necessary libraries of functions) into machine code understandable by my hardware platform. Now, the abstract nature of source code renders it useless without coexistence with some hardware, hence my acceptance of an "abstract mind" requiring collocation of the mind with the body. I don't think that it is necessary that the mind be an "emergent property" of the body (the underlying hardware), for the same reason that I don't believe source code is an emergent property of computing hardware.

ciphertext
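ciphertext's layering metaphor can be made concrete with Python's dis module, which exposes the bytecode layer sitting between abstract source code and the physical machine (a purely illustrative sketch):

```python
import dis

# A tiny "program": abstract source that the hardware never sees directly.
def add(a, b):
    return a + b

# The interpreter compiles the source into bytecode, an intermediate
# representation one level of abstraction above the hardware.
ops = [ins.opname for ins in dis.Bytecode(add)]
print(ops)  # the exact opcode names vary across Python versions
```

The same source text yields different machine-level activity on different platforms and interpreter versions, yet the abstract program, the thing the programmer reasons about, is unchanged; that is the sense in which the code is "at a level of abstraction" from the hardware.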
July 19, 2011 at 12:36 PM
Elizabeth Liddle:
Because as conscious, planning organisms, we need to know we are in the grip of an aversive reaction in order to decide how best to proceed.
I don't know what sort of statement that is, but it does not sound like an evolutionary account. But I'm trying to get down to the fundamentals. So you're not saying that pain itself is an aversion mechanism? But are you saying that underneath pain there is an aversion mechanism? So how far down the chain of being do these aversion mechanisms extend? Bacteria? Can a bacterium sense danger or potential harm and seek to avert it? Is a brain required for the existence of aversion mechanisms?

Mung
July 19, 2011 at 12:26 PM
Yikes! I need to be more careful with my mouse. I submitted prior to completing my post. Here is the remainder of my thought. ------------------------------------- cont. Here again, we need to examine whether we really mean a suitably "high" level of abstraction or true non-physical existence. Given your question assumes an "immateriality" of the mind, I would have to agree that "in principle" any person's mind could interact with any other person's body (or, expressed differently, other hardware). Though, for this to occur we must also constrain the concept of "mind" such that it (the mind) is "standard" and not merely common. Similarly, the body (hardware) would need to have a standard form, and not merely a common form. The distinction I am making can be thought of in this manner. Standard parts (or more accurately, standardized parts) can be swapped between hardware systems (computers, cars, appliances, etc.) quite easily. Indeed, that is one of the reasons for using standard parts, as it reduces maintenance and repair costs. Common parts aren't so easily swapped. Computer parts used to be this way, in that computers used to be specially built. A lot of supercomputer systems are still this way, meaning you cannot simply use "off the shelf" components. Software development can still be this way, though most software development efforts prefer to use standard software development paradigms and standard software "architectures" (e.g. OSGi-compliant). Prior to the advent of standard hardware communication protocols, your DEC Alpha wouldn't take some parts from your Cray YMP or your IBM System 360. Because of the way those systems were designed, they still don't use standard parts. Indeed, your IBM System 360 won't utilize parts from an IBM BladeCenter either. Though, the IBM BladeCenter will use parts from Intel, AMD, Nvidia, Fujitsu, Seagate, and Western Digital. Same with automobiles for some of their parts.

ciphertext
July 19, 2011 at 12:14 PM
Ciphertext,
If the mind is immaterial and interacts somehow with the material body, should it matter where the mind is in relation to the body in order to interact with it?
I don’t know. I think there are three ways to approach an answer to that question. The first method would depend upon what we mean when we say “immaterial”. I wonder sometime if what we really mean is “abstraction”. Similar to how I used “abstraction” in the thread going about DNA is code. Perhaps the “mind” is really just an abstraction some level(s) above the hardware (the brain structures). In that sense, the mind must be collocated with the hardware it is using as the basis for its execution. This is a very complex notion. That is the notion of abstraction, which has typically been exhibited by intelligent agents (humans only, in the most abstract thought). We should be able to determine if the mind is an abstraction, in much a similar way we determine the difference between a computing device’s “OS and allied applications” and the devices hardware systems.
Your use of the term "abstraction" strikes me as little different from "emergent property". Feel free to elaborate if you meant it differently. That said, I like the approach you take with the abstraction. If the mind is an abstraction, then I'd think that co-location is necessary. I also think that control of someone else's body would not be possible since the mind would be an abstraction of only one person's body. I'll go along with that.
As a second approach, if we truly do mean “non physical” by using the term immaterial, then I don’t believe that a mind would be required to collocate with the hardware. The converse isn’t required either. Certainly, one can make an assertion that a “simulated” environment could be an example of the “guiding mind” being removed from the hardware subjected to the experience. In the most rudimentary form, you have remote controlled vehicles (i.e. predator unmanned aerial vehicle, RC cars, and teleconferencing [in a way]). In all of these examples, we show that a “guiding mind”, physically separated from the hardware subjected to an experience, can control the operation of that hardware. However, while the examples are physically separated by “distance” they are still connected physically. Meaning that they are tethered to each other via “the stuff of existence” (atoms, electrons, forces, etc…) for at least as long as they are interacting. You could even say that they are still tethered by mere “existence” (gravity anyone?).
Wow, you hit some of the examples I thought of when I arrived at this question. Yes...radio controlled model aircraft and drones and the like certainly are "tethered" via control waves to some source "mind" and given the type of particles that mind is made of, then mental control can extend across distance, removing the need for co-location. If our minds are, as some seem to posit, non-physical, should this not be possible for our body and mind arrangements? But, this then made me think: if our minds truly are non-physical and require no actual co-location, why don't we see one person's mind controlling someone else's body. I've certainly never heard any stories of such events. There are times in radio model aircraft flying that the radio signals get crossed and someone finds they are controlling someone else's plane. But then in model radio aircraft, there are limited frequencies. Could it be that minds and bodies are tied together by some specific "frequency"? It's possible I suppose, though I suspect the answer is significantly more simple; that the implication of the evidence points to an arrangement more like scenario 1 above rather than a truly non-physical mind.
The third avenue of exploration may involve the current “incompleteness” in our knowledge of the nature of reality. Perhaps there are still as-yet-undiscovered forces which allow for the communication of immaterial objects/entities with physical objects/entities. Much like there isn’t a complete picture yet with respect to what provides a particle with “mass”. Perhaps the concepts of “immaterial” and “material” will be reviewed for completeness at a later date, as we amass more knowledge on the nature of reality.
Could be. Not a very satisfying answer, but sometimes you just have to shrug and accept such.
Related to the above, if it does not matter where the mind is in relation to the body for interaction to take place, is it possible for any person’s mind to interact with any other person’s body? If not, why do you suppose that is?
Here again, we need to examine whether we really mean a suitably “high” level of abstraction or true non-physical existence
True. Hence my comment above. :)

Doveton
July 19, 2011 at 11:46 AM
Lizzie,
By this I mean, if pain is mechanical, why should we even need to be conscious of it?
Because as conscious, planning organisms, we need to know we are in the grip of an aversive reaction in order to decide how best to proceed.
Another point to consider: being conscious of pain allows an organism to plan to avoid pain or plan on accepting pain in certain circumstances. For example, if one has the conscious capacity to see a wasp and associate the visual cue with a previous pain event, one does not have to be stung before trying to move away. On the flip side, being conscious of pain allows one to recognize that some pain - say the pain derived from a needle poked into the skin - should not be avoided if the pain is the result of trying to remove a sliver.

Doveton
July 19, 2011 at 11:23 AM
@Doveton Post 106 RE: Immateriality of mind
If the mind is immaterial and interacts somehow with the material body, should it matter where the mind is in relation to the body in order to interact with it?
I don't know. I think there are three ways to approach an answer to that question. The first method would depend upon what we mean when we say "immaterial". I wonder sometimes if what we really mean is "abstraction". Similar to how I used "abstraction" in the thread going about DNA is code. Perhaps the "mind" is really just an abstraction some level(s) above the hardware (the brain structures). In that sense, the mind must be collocated with the hardware it is using as the basis for its execution. This is a very complex notion. That is the notion of abstraction, which has typically been exhibited by intelligent agents (humans only, in the most abstract thought). We should be able to determine if the mind is an abstraction, in much a similar way we determine the difference between a computing device's "OS and allied applications" and the device's hardware systems. As a second approach, if we truly do mean "non physical" by using the term immaterial, then I don't believe that a mind would be required to collocate with the hardware. The converse isn't required either. Certainly, one can make an assertion that a "simulated" environment could be an example of the "guiding mind" being removed from the hardware subjected to the experience. In the most rudimentary form, you have remote controlled vehicles (i.e. predator unmanned aerial vehicle, RC cars, and teleconferencing [in a way]). In all of these examples, we show that a "guiding mind", physically separated from the hardware subjected to an experience, can control the operation of that hardware. However, while the examples are physically separated by "distance" they are still connected physically. Meaning that they are tethered to each other via "the stuff of existence" (atoms, electrons, forces, etc...) for at least as long as they are interacting. You could even say that they are still tethered by mere "existence" (gravity anyone?).
The third avenue of exploration may involve the current "incompleteness" in our knowledge of the nature of reality. Perhaps there are still as-yet-undiscovered forces which allow for the communication of immaterial objects/entities with physical objects/entities. Much like there isn't a complete picture yet with respect to what provides a particle with "mass". Perhaps the concepts of "immaterial" and "material" will be reviewed for completeness at a later date, as we amass more knowledge on the nature of reality.
Related to the above, if it does not matter where the mind is in relation to the body for interaction to take place, is it possible for any person’s mind to interact with any other person’s body? If not, why do you suppose that is?
Here again, we need to examine whether we really mean a suitably "high" level of abstraction or true non-physical existence.

ciphertext
July 19, 2011 at 11:16 AM
Mung - yes, indeed, interesting questions. Here's a shot at some answers:
So I have some painful questions I’d like to inject. Which came first, pain or consciousness? Pain or conscious awareness of pain?
My hypothesis is that aversive reactions are likely to have preceded consciousness of aversive reactions, but I would say that there is no point in talking about pain unless something is feeling it. But I suggest that it grew out of aversive reactions, which would have a selective advantage.
Is it possible to speak of pain if there is no brain? IOW, can organisms which have no brain experience pain?
No.
Can we dispense with minds and brains and consciousness and still have something we can call pain?
We can talk about aversive reactions, but not pain IMO.
If the evolutionary view presented by Elizabeth is correct, one would think so.
Only in the sense I gave above. I think consciousness is also selectively advantageous, though, as is consciousness of pain. But that's only possible with a brain. IMO.
What is the necessary connection, if there is any, between pain and conscious awareness of pain? By this I mean, if pain is mechanical, why should we even need to be conscious of it?
Because as conscious, planning organisms, we need to know we are in the grip of an aversive reaction in order to decide how best to proceed.
If we come into contact with a hot plate we could be aware of our hand making a jerking motion away from the plate, but feel no pain as a conscious experience, right?
Yes, that would be cool. I wish it worked like that. But let's put another scenario: We have no pain, just a jerking reaction; but we are trapped against the hot plate by a heavy object, preventing the hand from detaching from the hot plate. How do we know that we need to take some other action? Because we do - we need to do all we can to get our hand away from that source of heat. This is the kind of thing consciousness allows us to do, IMO - to react flexibly to complicated scenarios. Moths just fly into candle flames and die. We can say: that candle flame looks so beautiful but must resist, must resist..... Consciousness is thus intimately connected with volition IMO - with free will, no less, i.e. flexibility of response, with options to weigh instant gratification against future benefit. But the quid pro quo is that stuff hurts.
Upon the evolutionary view, one would need to think that pain came first, and only later did it get wired into the brain and consciousness of it.
Well, I think the aversive reaction came first, and only later, with the advent of the capacity to plan, reflect, choose, did pain manifest itself.
What really happens when we experience pain? Say I step on something sharp. Obviously, something must happen first at the cellular level, right? What happens, and how does that get translated into “pain”?
Well, you can probably look up the physiological reactions but I would say (and Nullasalus would tell me this is illogical, but I'll say it anyway, because I don't think it is) that pain is the knowledge that we are in the grip of an aversive reaction. Which requires a concept called "I" and a sense of time. I think integration of input over time is absolutely critical to conscious experience, and that although we perceive it as a flow, it operates as a series of discrete summations. There is fairly good evidence to support this.
Are cells conscious? Can they react to something in the environment that has the potential to harm them?
No, and yes, IMO (in that order).
That’s basically the definition of pain Elizabeth is using, right?
No - but it's the underpinnings of it. There's another necessary part IMO.

Elizabeth Liddle
July 19, 2011 at 10:39 AM
Mung,
Darwinian evolution only accounts for what will benefit our offspring. It’s forward looking then is it?
In truth, evolution doesn't "look" anywhere. It doesn't "try" to impart any effect on anything. It isn't planning anything either. Evolution is merely the term we give to the process of biological change and the adaptation (or difficulty) the change presents for a group of organisms in a given environment. Evolution doesn't know where it's going, but the options available for change are, to some extent, determined by where it's been.
So I have some painful questions I’d like to inject. Which came first, pain or consciousness? Pain or conscious awareness of pain? Is it possible to speak of pain if there is no brain? IOW, can organisms which have no brain experience pain? Can we dispense with minds and brains and consciousness and still have something we can call pain? If the evolutionary view presented by Elizabeth is correct, one would think so. What is the necessary connection, if there is any, between pain and conscious awareness of pain? By this I mean, if pain is mechanical, why should we even need to be conscious of it? If we come into contact with a hot plate we could be aware of our hand making a jerking motion away from the plate, but feel no pain as a conscious experience, right? Upon the evolutionary view, one would need to think that pain came first, and only later did it get wired into the brain and consciousness of it. What really happens when we experience pain? Say I step on something sharp. Obviously, something must happen first at the cellular level, right? What happens, and how does that get translated into “pain”? Are cells conscious? Can they react to something in the environment that has the potential to harm them? That’s basically the definition of pain Elizabeth is using, right?
Cool set of questions, Mung. Pain is very well understood at the mechanical level. When an object impacts an organism at some location of the body two main neural transmitters are stimulated - thin A-delta fibers and C fibers that carry a signal to the spinal column, which then transmits the signal to the brain. The A-delta fibers register intensity of the initial body contact (or initial damage pain) while the C fibers register the site damage sensation (or the dull continuous pain of any damaged area). These sensations are the result of chemical releases (such as Cox-1 and Cox-2 enzymes released at the damage site that initiate the process of swelling to help with healing) and electrical stimulation of the nerve fibers that are transmitted to the thalamus that then signals the various centers appropriate for the given pain/damage. The interesting question you raise, though, is whether "pain" exists apart from consciousness. Certainly some organisms without higher cerebral functions react (generally by moving away from the stimulus) to what we think would normally register as pain, but are they experiencing "pain"? I have no idea. I doubt it given what we do know about pain, but I simply don't know. The point is, though, that within the human pain system, the sensation of "pain" is arrived at through the interaction of a variety of brain areas and nerve systems and involves a number of chemical compounds. It's not clear to me that "pain" can be experienced without them.

Doveton
July 19, 2011 at 09:43 AM
Interesting discussion. A question to consider: If the mind is immaterial and interacts somehow with the material body, should it matter where the mind is in relation to the body in order to interact with it? Related to the above, if it does not matter where the mind is in relation to the body for interaction to take place, is it possible for any person's mind to interact with any other person's body? If not, why do you suppose that is? Entertaining such questions might be fruitful for conceptualizing how the mind interacts with the body and what, if any, limits exist in that arrangement.
Doveton
July 19, 2011 at 08:32 AM PST
Fascinating discussion. I'm not being sarcastic either. It is particularly fascinating because, from one view, you are trying to define human perception and the subsequent processing of that perception as if from outside the "apparatus" responsible for both. That introspection should be quite difficult, because you would be using the very tools you are attempting to measure as the measuring devices! I know of no "standards body" that has developed standard "weights and measures" to which you could "calibrate" your measuring devices. Thus, if you are operating without calibrated devices (as I believe to be the case), then all measurements are relative, are they not? It reminds me of the hospital procedure of having the patient specify their "pain" on a scale of 1-10. The chart that defines 1-10 is a set of "smiley" faces depicting various stages of discomfort, from smiling all the way to a contorted, crying, frazzled face. The real question is, what makes such measurements subjective in nature? Most likely because each human "system" is unique. The "wiring" (a euphemism for the nervous system) is common, but not really standard. The "motherboard" complete with CPU and associated microprocessors (the brain structures) is of common construction, but not standardized. The "OS and application programs" are also common in base functionality, though not necessarily so in terms of their extended functionality. As is the case with the other "systems", the "software system" is by no means standardized. All of these systems have the added differentiation of being highly specialized to the unit in which they operate. Hence the inability to make "standardized" parts. The best you can do is make "common" components.
ciphertext
July 19, 2011 at 07:52 AM PST
mike1962, Here's a splendid quote from Galen Strawson regarding talk of denying experience: "I think we should feel very sober, and a little afraid, at the power of human credulity, the capacity of human minds to be gripped by theory, by faith. For this particular denial is the strangest that has ever happened in the whole history of human thought, not just the whole history of philosophy. It falls, unfortunately, to philosophy, not religion, to reveal the greatest woo-woo of the human mind. I find this grievous, but, next to this denial, every known religious belief is only a little less sensible than the belief that grass is green."
nullasalus
July 18, 2011 at 06:36 PM PST
Pain, pleasure, color, sound, smell - all these qualia are primary. Thoughts you have about them are inferences. Inferences can be wrong, but conscious experience can never be "wrong." You might be wrong about what is causing pain, but you can't be "wrong" about the experience of pain.
mike1962
July 18, 2011 at 06:20 PM PST
Darwinian evolution only accounts for what will benefit our offspring.
It's forward-looking then, is it? So I have some painful questions I'd like to inject. Which came first, pain or consciousness? Pain or conscious awareness of pain? Is it possible to speak of pain if there is no brain? IOW, can organisms which have no brain experience pain? Can we dispense with minds and brains and consciousness and still have something we can call pain? If the evolutionary view presented by Elizabeth is correct, one would think so. What is the necessary connection, if there is any, between pain and conscious awareness of pain? By this I mean, if pain is mechanical, why should we even need to be conscious of it? If we come into contact with a hot plate we could be aware of our hand making a jerking motion away from the plate, but feel no pain as a conscious experience, right? Upon the evolutionary view, one would need to think that pain came first, and only later did it get wired into the brain and consciousness of it. What really happens when we experience pain? Say I step on something sharp. Obviously, something must happen first at the cellular level, right? What happens, and how does that get translated into "pain"? Are cells conscious? Can they react to something in the environment that has the potential to harm them? That's basically the definition of pain Elizabeth is using, right?
Mung
July 18, 2011 at 04:59 PM PST
Null: Likewise, I’d question whether memory is a ‘key component’.
Anyone who's ever witnessed the birth of a baby can see that a baby is in pain and is responsive to painful stimuli. Doubtful they have memory of anything. They have no idea what pain is, or why they have it, or what it is related to. They just feel it. Waaaaaaa!
mike1962
July 18, 2011 at 04:37 PM PST
Liz: I guess experience could be “fundamental” in our universe, but the close correlation between mental experience and neural activity suggests at the minimum that it has a very close relationship with brains.
No doubt. But when I am at the movie theater there is a close correlation between my consciousness and the Big Screen. That doesn't mean that screens cause consciousness, and it doesn't mean that brains cause consciousness either. All we know so far scientifically is that brain states correlate with conscious experience. There is not one whit of scientific evidence that consciousness is an epiphenomenon of brains.
mike1962
July 18, 2011 at 04:32 PM PST
No, we can’t be wrong about what we are aware of – what we are aware of is what we are aware of, no more no less. If I am aware of pain, I am aware of pain. ... But that’s exactly how far you are experiencing pain – to the limits of awareness! Of course you aren’t mistaken. You are aware of pain. That’s all you are aware of.
Then you, contra Dennett, support the existence of qualia. Or you're dealing in metaphors, and you don't actually agree with me after all. One or the other.
Evolutionarily speaking, the ability to feel pain is an advantage. The inability to feel pain is a disorder. Of course that doesn’t mean that anaesthetics are a bad thing – but it would be very bad if, say, your feet were continuously anaesthetised. You’d soon seriously damage your feet.
Says who? This is a shade away from saying 'things that don't experience pain could never survive because they'd never avoid things which harm them'. I'm playing a video game lately, Demon's Souls. Great game, difficult. There are characters who, if you keep hitting them with a sword, will run away from you. Are they experiencing pain? Do I need to bring in qualia to describe what's going on there? You earlier said "What pain is for in an animal is to eject the animal from a dangerous object." But pain doesn't 'eject the animal from a dangerous object'. Movement does. And no, 'evolutionarily speaking' that's not correct. If I'm trying to evolve something which does not experience pain, not experiencing pain is an advantage. If there's a situation where not experiencing pain is advantageous, there's the tautology - it's advantageous. Not a disorder.
I guess experience could be “fundamental” in our universe, but the close correlation between mental experience and neural activity suggests at the minimum that it has a very close relationship with brains.
Does the fact that all of our knowledge of the world is in the form of thought suggest at minimum that the universe is mental a la Berkeley?
Does the fact that only humans are able to use language to communicate 'I am in pain' suggest at minimum that only humans feel pain? You're tying experience to the ability to report. This is a 'looking for your keys under the streetlamp because that's where the light is' problem.
Moreover we know that pain is accompanied by physiological “fight or flight” responses, which is at least consistent with my hypothesis.
Consistency is cheap. All the data is consistent with the design hypothesis, the panpsychism hypothesis, the idealism hypothesis, and more. We also can have pain that we neither fight nor fly from, but simply endure. We have pain that we reflect on. Sure, you can bring in the evolutionary framework again - 'well, sometimes pain isn't connected to fight or flight - entirely consistent with Darwinism!' Entirely consistent with design too. Or even designed evolution.
Indeed, again in my own experience, the “rawest” pain I have ever experienced has been when I’ve been closest to unconsciousness – in the recovery room after surgery, for instance, or while delirious.
And a panpsychist could question whether this 'unconscious' state really exists, as opposed to a state of experience without memory, or without retained memory.
And actually, I think you hit a key point with “reportability”. I suggest that a second key component of the “raw” experience of pain is memory (self-reporting if you like, but at a scarcely symbolic level, perhaps not symbolic at all).
But I'm not agreeing that 'reportability' is a key component of experience - I'm questioning that, now and previously. Likewise, I'd question whether memory is a 'key component'.
I don’t know about you, but I’d be reluctant to withhold the notion that the poor thing was in pain and distress, and that I should tell it not to worry about the tea, just concentrate on sorting itself out, and I’d bring it a screwdriver and a nice dehumidifier pronto.
A pragmatic attitude is not an explanation, or even evidence of an explanation. Nor is an emotional attitude. You don't need to hypothesize about a futuristic technology, we know not what, which has you reflexively ascribing mental states and experience to that which you can't certainly verify is actually having experience. It's just "the problem of other minds" all over again.
So the sensation of “red” may be the result of all kinds of motor programs associated with that colour, as well as physiological autonomic responses to stimuli of that colour – excitement (an edible berry! meat!) fear (I’m bleeding!) warmth (fire!) fear again (fire!) and that the “raw feel” of redness is the direct re-input of the output from various sub-execution courses of action, modified by context (is it a flickering red? Red on green? Red on flesh), felt as a gestalt, but comprised of a smorgasbord of motor and autonomic outputs re-entered as inputs.
Lots of words, zero explanation. This basically boils down to 'Gosh, the brain is complex. If you get complex enough, maybe experience jumps out. I don't know how it could, but...' Or maybe we've misconceived matter, and experience is fundamental. Or maybe a mechanistic depiction of matter is flawed and should be discarded for another view. Or maybe substance dualism is right after all. And worse, you are - like it or not - back in the intentional muck with all this. Models are models of something, an intentional concept. Simulations are simulations of something, also intentional. But you take a position where the only way things 'model', or the only way things 'simulate', is in virtue of our assigning that meaning to them. Back to the example of how a brain 'models the future' the way a rock on the ground 'represents Dallas' - it does so in virtue of another mind's derivation only on your view. And the other mind's derivation only derives - really, is only a mind - in virtue of yet another derivation. And on we go.
That applies to all of your examples. "A berry!" "Meat!" All derivations, which are derivations, which are... Again: Yeah, I know. You think vicious regress is okay. I disagree.
Oh, it would fit with an imperfect Designer OK. Or several designers. Or an incompetent designer. Or a malign designer. Or even, I guess, a designer who wanted us to feel pain because we need to know that pain hurts in order to learn compassion, or learn how truly we are forgiven. Or something. I’m sure a theology could be made to fit.
And that eliminates the suggestion that Darwinism has some kind of advantage versus a design hypothesis here.
I’m hoping what I say here is not dependent on Darwinism, it just seems to make most sense to me that way. But then again, I grew up with Darwinism (and theism) and never saw a conflict. Still don’t.
You've already admitted that your hypothesis "is fairly firmly embedded in an evolutionary framework, and I would concede that it makes less sense outside it," so I don't see how you can say that what you're saying isn't dependent on Darwinism. As for theism's compatibility with Darwinism, that depends on how both are defined. Common definitions of Darwinism preclude traditional views of theism, and vice versa.
And we do see them. Darwinian evolution only accounts for what will benefit our offspring.
It accounts for pretty much anything in terms of whatever degree of functional quality, especially when Darwinism is expanded to mean 'evolution, period'.
No, I’m saying that we experience it as pure pain – as a gestalt. That doesn’t mean that the gestalt isn’t the net sum of conflicting drives. And indeed, if we do start to conceptualise pain,
If we start to conceptualize pain, then what we're dealing with is a concept, not pain itself. We're discussing pain at this very moment - we're using it as a concept. That is not identical - it is obviously not identical - to the experience of pain.
And talk of pain as 'the net sum of conflicting drives' is as useful as talk of pain as the result of a particular symphony of bug farts. There's still the question of what this could possibly mean, and appealing to complexity and emergence won't do the job.
Let’s leave Dennett out of here for now, shall we? Although I did say that Dennett’s intentionality was related to qualia. Which it is. But let’s not go there right now.
No can do. If you're bringing up intentions, then what intentions are is the next step then and there. That puts intentions-as-derived, along with Dennett's stated denial of qualia, as right on the menu. I'm not going to nod my head and say 'Sure, there are programs in the brain, these programs lead to certain outputs' and holster the point that for Dennett, the only 'programs' that exist in the brain are programs only in virtue of another mind deriving them to be such.
And I agree – to the extent that we are aware. But I suggest that if we could drill down to “what is it like to feel pain?” a la Nagel and his bat, the question isn’t quite as impossible as it sounds at first. We don’t do it when we are in the throes of it, perhaps (not without Shamanic training anyway), but I suggest that in repose/on reflection, we can perhaps parse “pure” pain into a desperate urge to leave the present place and time,
And again: The concept of pain is not pain, any more than looking at a definition of pain is 'seeing pain'. To "parse pain" is to deal with a concept. And if you claim that pain is nothing but a parsed concept, then you're disagreeing with me about pain being raw experience rather than 'being of or about something'. So all the "I agree"s were just for show. That sort of move is real tiring.
But at least we have reached the point where we have a simple disagreement about our premises.
It seems you knew this the moment I answered your question.
Why you decided to only get to this point after trying to rephrase my stance into its opposite multiple times, repeatedly saying you agree with me when you knew you had a (fundamental, no less) disagreement, is the stuff of wonder.
I think my view has support from independent evidence. But you could be right. Maybe experience is just ground zero. I think it's just the circulating of an infinite loop.
It has support from independent evidence so long as you start with a materialist framework and rule out anything outside of that framework. But hey, that's just another kind of loop, right? Unless what you mean by evidence is consistency. But consistency is cheap.
nullasalus
July 18, 2011 at 04:16 PM PST
Nullasalus:
Well, not in my sentence above. And I certainly don’t think you can be “mistaken” about pain. I totally agree that it isn’t a theory. That’s why I said: pain is just pain, as far as we are aware.
“As far as we are aware” implies that we can be mistaken. If I ask ‘What are these nails for?’ and X replies “For hanging pictures from, as far as I am aware”, the point of the qualification is that X could be wrong about it. Maybe the nails are for something else. Maybe they’re not for anything at all.
That's why I said "as far as we are aware". It was a double entendre if you like (but I hinted as much). No, we can't be wrong about what we are aware of - what we are aware of is what we are aware of, no more no less. If I am aware of pain, I am aware of pain. That's the sense in which it is raw, but it may be more than that below the awareness level (as I tried to explain).
There’s no possibility of being mistaken regarding experience itself. If I’m experiencing pain, I’m experiencing pain, period. I’m not experiencing pain, “as far as I am aware”.
But that's exactly how far you are experiencing pain - to the limits of awareness! Of course you aren't mistaken. You are aware of pain. That's all you are aware of.
What automatic plug ejection is for in a kettle is to stop the element burning out. What pain is for in an animal is to eject the animal from a dangerous object. I’m positing that that’s what pain evolved to do/was designed for (right now it doesn’t matter which); we can, I hope, agree that lack of ability to feel pain is a bad thing?
Why would I agree to that? I gave examples of anaesthetics – are anaesthetics ‘bad things’? As far as ‘designed for’ goes, that which is bad is only bad relative to a mind – the lack of ability to feel pain can be a good thing, clearly.
I meant for our survival and well-being. Evolutionarily speaking, the ability to feel pain is an advantage. The inability to feel pain is a disorder. Of course that doesn't mean that anaesthetics are a bad thing - but it would be very bad if, say, your feet were continuously anaesthetised. You'd soon seriously damage your feet.
Finally, saying pain was ‘evolved to do’ something – especially insofar as pain is supposed to represent consciousness, experience – comes with the built-in suggestion that consciousness ‘comes from the non-conscious’. But again, why should I believe that, even granting evolution in general? As I said, maybe experience is fundamental in our universe, in a variety of possible ways.
Well, I didn't say that pain was "supposed to represent consciousness". I think it's a very good example of something we can be conscious of at a very elemental level, so I was pleased you brought it up. Anyway, I'm not asking you to believe my hypothesis, just presenting it. I guess experience could be "fundamental" in our universe, but the close correlation between mental experience and neural activity suggests at the minimum that it has a very close relationship with brains. Moreover we know that pain is accompanied by physiological "fight or flight" responses, which is at least consistent with my hypothesis. We also know that people do actually conceptualise their pain at least some of the time, to some degree - can describe pain as "sharp" or "stinging" or "a dull ache" or "squeezing", so it's not even always raw. Indeed, again in my own experience, the "rawest" pain I have ever experienced has been when I've been closest to unconsciousness - in the recovery room after surgery, for instance, or while delirious. That again is consistent with the idea that the less we conceptualise pain, i.e. the less we handle it using our capacity for abstraction and narrative, the more it resembles a very basic animal drive. But obviously my hypothesis is fairly firmly embedded in an evolutionary framework, and I would concede that it makes less sense outside it.
Well, I guess here I cite evidence: certain conditions result in inability to feel pain.
If pain is functioning as a stand-in for experience, then where’s the evidence that (physical, I would assume) conditions ‘result in’ this ability? Absolutely, if you stab people in certain conditions you can get a report of pain out of them fairly reliably. But that says something about reportability, not necessarily experience.
I'm not sure where the idea that pain is a "stand-in" for experience came from :). We experience pain; we both agree on that. And actually, I think you hit a key point with "reportability". I suggest that a second key component of the "raw" experience of pain is memory (self-reporting if you like, but at a scarcely symbolic level, perhaps not symbolic at all). For example, it would be possible to design a robot that would do something a bit more complex than the old kettle when confronted with danger - perhaps we program it to go into reverse if it bumps into something. Then we pin it in a corner by a large heavy piece of furniture. It keeps reversing and hitting another object, reversing, hitting another; it starts over-heating, its fans start whirring, eventually it breaks down. Now, I'm sure you would agree that it was not "experiencing pain", even though it was in a state of frustrated aversive drive (yes, I've just undermined my own argument). Now, fast forward a few thousand years to quantum computers full of nanorobotic neurons capable of generating novel solutions to problems and acquiring habits of behaviour that best enable it to carry out the functions built into it by its Intelligent Designers. And one morning, trying to make your cup of tea, a jolt from a passing aircraft breaking the sound barrier causes it to spill the tea on its circuitry, triggering chaotic movements that send it crashing around the kitchen; however much of its circuitry remains functional, because it is equipped with lots of alternative routings for its circuitry so it searches for a screw-driver to unscrew the circuit plate, but its spasmodic movements cause it to drop the screw driver into its innards, causing more short-circuits and more chaotic movements, meanwhile its "make the tea for master" circuitry is whizzing away, relaying "tea's late, tea's late" at which point it activates the alarm system, and you get a message, saying "help!
I've spilled the tea on my circuitry, and I can't do a thing, I'm so sorry, I know you want your tea, but you are going to have to fix this circuitry or my motherboard is going to blow", then, as you drag yourself out of bed, it says "never mind the screwdriver, just for goodness' sake deactivate my tea-making circuit then at least I'll be able to concentrate on trying to get my circuitry fixed". I don't know about you, but I'd be reluctant to withhold the notion that the poor thing was in pain and distress, and that I should tell it not to worry about the tea, just concentrate on sorting itself out, and I'd bring it a screwdriver and a nice dehumidifier pronto.
that pain evolved/was designed as a protection against injury, just as the sympathetic “fight or flight” response evolved/was designed to facilitate survival in the face of predators or rivals.
Back to the problems I have with this claim as mentioned above. Further, under the typical view, physical behaviors can be selected for – ‘running away’, ‘staying and fighting’, etc. But unless you’re assuming that experiences (like pain) are physical behavior, they can’t be ‘selected for’ like that. You can hedge and say, ‘well, this physical behavior was selected for, and this physical behavior is correlated with this experience’. But then the experience is free riding.
Well, I think that's a key point (more key than my little fantasy above) but I have an answer: the thing about brains (and we have lots of evidence for this) is that when we think about an action we activate the brain areas involved in actually executing the action, but at a level (probably measured in neural population size, or, electrophysiologically, by oscillatory amplitude) that is insufficient to trigger outflow of signal to the muscles that would execute the action. We rev the engine, as it were, before letting out the clutch. Not only that, but the activation of the circuitry that will be implicated in the action in turn activates circuitry that would respond to the results of that action. This is best studied in the visual system - when we plan an eye movement, neurons into the receptive field of which that eye movement will bring some new stimulus start to fire even before the eye has moved. In other words our motor system constantly makes "forward models" of the consequences of alternative courses of action, and the simulated results of those actions are fed back as input - if the results correspond with what we want, that circuitry will be boosted; if the opposite, it will be inhibited. So I suggest (not originally) that what we experience as "raw feels" are generated by the forward modelling of courses of action triggered by the stimulus, and fed back as input in a continuing loop. So the sensation of "red" may be the result of all kinds of motor programs associated with that colour, as well as physiological autonomic responses to stimuli of that colour - excitement (an edible berry! meat!) fear (I'm bleeding!) warmth (fire!) fear again (fire!) and that the "raw feel" of redness is the direct re-input of the output from various sub-execution courses of action, modified by context (is it a flickering red? Red on green? Red on flesh), felt as a gestalt, but comprised of a smorgasbord of motor and autonomic outputs re-entered as inputs.
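[Ed.: the forward-model loop described above - simulate candidate actions, feed the predicted result back as input, boost or inhibit accordingly - can be caricatured in a few lines of code. This is only an illustrative toy under assumed names and numbers, not a claim about real neural circuitry.]

```python
# Toy sketch of a "forward model" action-selection loop, loosely following
# the description above. Every function name and number here is an
# illustrative assumption, not a model of actual neurons.

def forward_model(action, state):
    """Simulate the consequence of an action without executing it."""
    return state + action  # a trivially simple stand-in for a world model

def select_action(state, goal, weights, lr=0.5):
    """One pass of simulate / compare / boost-or-inhibit over the candidates."""
    for action in weights:
        predicted = forward_model(action, state)   # "rev the engine"
        error = abs(goal - predicted)              # compare simulated result with goal
        target = 1.0 / (1.0 + error)               # closer to goal -> stronger boost
        weights[action] += lr * (target - weights[action])
    # only the most strongly activated circuit "lets out the clutch"
    return max(weights, key=weights.get)

state, goal = 0, 3
weights = {a: 0.1 for a in (-1, 0, 1, 2)}  # candidate actions, weak initial activation
for _ in range(5):
    chosen = select_action(state, goal, weights)
print(chosen)  # → 2, the action whose simulated outcome lies closest to the goal
```

The only point of the sketch is the loop structure: simulated outcomes re-entering as inputs, and shaping activation, before any overt movement occurs.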
In that sense I’d argue that the pain response when pain is not useful is a kind of epiphenomenon arising from a selectable response (i.e. one that helped us survive), and is therefore better explained in a Darwinian framework than an ID one – an ID might have ensured that the pain response only occurred when aversive behaviour would be advantageous.
‘Might have’? Why – because designers are perfect under ID? Because ID claims to have access to all the intents of the designer? Because any designer would only introduce pain when it’s “advantageous” as you define it?
Oh, it would fit with an imperfect Designer OK. Or several designers. Or an incompetent designer. Or a malign designer. Or even, I guess, a designer who wanted us to feel pain because we need to know that pain hurts in order to learn compassion, or learn how truly we are forgiven. Or something. I'm sure a theology could be made to fit :) I'm hoping what I say here is not dependent on Darwinism, it just seems to make most sense to me that way. But then again, I grew up with Darwinism (and theism) and never saw a conflict. Still don't.
Under the Darwinian framework, it’s entirely conceivable that pain is only present “when it’s advantageous” as well – why, that’s just the power of natural selection at work, resulting in fit individuals. And if it’s present when it’s not advantageous, Darwinism can explain that too – Darwinism is not perfectly designed, you see, so we can expect kludges here and there.
And we do see them. Darwinian evolution only accounts for what will benefit our offspring. Just be glad you aren't a female hyena :)
So no, I can’t accept that “the Darwinian framework” functions better here, or that the ID framework functions worse.
OK. My point was not a Darwinian one.
What I’m saying is that the experience of pain is the frustrated violent urge to be somewhere else; to do the impossible, leave your body behind – pure “aversion”.
And I’d disagree, because you’re back to making pain ‘of’ or ‘about’, a thing of conceptualization. I’m saying that pain, and experience generally, is not a conceptualization. You can conceptualize pain, but to deal with that is to deal with a concept, not pain.
No, I'm saying that we experience it as pure pain - as a gestalt. That doesn't mean that the gestalt isn't the net sum of conflicting drives. And indeed, if we do start to conceptualise pain, the language people use tends to reflect the kind of underlying motor programs that I suggest give rise to the gestalt: "Just get me out of here! Take away the pain! Hold me-don't touch me!".
Worse, even under your scheme that comes with the heavy qualification – since you’re trying to make pain/experience a thing we apply meaning to, but all meaning is derived by Dennett’s view. So…
Let's leave Dennett out of here for now, shall we? Although I did say that Dennett's intentionality was related to qualia :) Which it is. But let's not go there right now. But in any case - we do apply meaning to pain. We apply meaning to most things. I'm sure you don't disagree. What you are saying is that there is a "ground floor" as it were, of "raw feels" where meaning is absent. And I agree - to the extent that we are aware. But I suggest that if we could drill down to "what is it like to feel pain?" a la Nagel and his bat, the question isn't quite as impossible as it sounds at first. We don't do it when we are in the throes of it, perhaps (not without Shamanic training anyway), but I suggest that in repose/on reflection, we can perhaps parse "pure" pain into a desperate urge to leave the present place and time, coupled, often, with an equally desperate contradictory urge to curl up and sleep. And our autonomic responses reflect those urges (increased heart rate; reduced vagal tone).
But what I’m suggesting is that even when we get to what we think of as “raw” experience (“raw feels” or “qualia”), they are proxies for something more specific – the frustrated urge to flee, in the case of pain for instance.
The 'frustrated urge to flee' would only be 'the frustrated urge to flee' by an assignment of meaning. You experience pain the way this rock on the ground represents Seattle - by virtue of someone deciding that that's what this or that 'means'. And what someone decides what this or that 'means' only does so by virtue of yet another derivation. And so on unto infinity or a brute stop.
Well I'm putting into words, inevitably, what we do not put into words. I don't think, when we experience pain, we always say "I feel like I want to flee". Though we sometimes do. I'm saying that the experience of pain is the urge to flee, but instead of flight relieving the pain, the whole thing gets into an iterative negative feedback loop and all we know is "I am in pain".
Yes, I know. You’re aware of this and accept it and think it’s peachy. I think it demonstrates the entire project has gone wrong. You say that no, your question wasn’t rhetorical. But you certainly seem to be treating it as such, because so far this entire exchange really seems to be working under the assumption that, while I maintain experience is exactly that – an experience, undeniable subjective sensation, not a concept that is interpreted – your response seems to be to just nod your head and continue the conversation as if that’s not what experience is and we all agree. But we don’t agree. Experience is a datum, not a posit to explain some other datum.
OK. But at least we have reached the point where we have a simple disagreement about our premises :) I think my view has support from independent evidence. But you could be right. Maybe experience is just ground zero. I think it's just the circulating of an infinite loop.
Elizabeth Liddle
July 18, 2011 at 02:48 PM PST
Well, not in my sentence above. And I certainly don’t think you can be “mistaken” about pain. I totally agree that it isn’t a theory. That’s why I said: pain is just pain, as far as we are aware. "As far as we are aware" implies that we can be mistaken. If I ask 'What are these nails for?' and X replies "For hanging pictures from, as far as I am aware", the point of the qualification is that X could be wrong about it. Maybe the nails are for something else. Maybe they're not for anything at all. There's no possibility of being mistaken regarding experience itself. If I'm experiencing pain, I'm experiencing pain, period. I'm not experiencing pain, "as far as I am aware". What automatic plug ejection is for in a kettle is to stop the element burning out. What pain is for in an animal is to eject the animal from a dangerous object. I’m positing that that’s what pain evolved to do/was designed for (right now it doesn’t matter which); we can, I hope, agree that lack of ability to feel pain is a bad thing? Why would I agree to that? I gave examples of anaesthetics - are anaesthetics 'bad things'? As far as 'designed for' goes, that which is bad is only bad relative to a mind - the lack of ability to feel pain can be a good thing, clearly. Finally, saying pain was 'evolved to do' something - especially insofar as pain is supposed to represent consciousness, experience - comes with the built-in suggestion that consciousness 'comes from the non-conscious'. But again, why should I believe that, even granting evolution in general? As I said, maybe experience is fundamental in our universe, in a variety of possible ways. Well, I guess here I cite evidence: certain conditions result in inability to feel pain. If pain is functioning as a stand-in for experience, then where's the evidence that (physical, I would assume) conditions 'result in' this ability? Absolutely, if you stab people in certain conditions you can get a report of pain out of them fairly reliably.
But that says something about reportability, not necessarily experience. that pain evolved/was designed as a protection against injury, just as the sympathetic “fight or flight” response evolved/was designed to facilitate survival in the face of predators or rivals. Back to the problems I have with this claim as mentioned above. Further, under the typical view, physical behaviors can be selected for - 'running away', 'staying and fighting', etc. But unless you're assuming that experiences (like pain) are physical behavior, they can't be 'selected for' like that. You can hedge and say, 'well, this physical behavior was selected for, and this physical behavior is correlated with this experience'. But then the experience is free riding. In that sense I’d argue that the pain response when pain is not useful is a kind of epiphenomenon arising from a selectable response (i.e. one that helped us survive), and is therefore better explained in a Darwinian framework than an ID one – an ID might have ensured that the pain response only occurred when aversive behaviour would be advantageous. 'Might have'? Why - because designers are perfect under ID? Because ID claims to have access to all the intents of the designer? Because any designer would only introduce pain when it's "advantageous" as you define it? Under the Darwinian framework, it's entirely conceivable that pain is only present "when it's advantageous" as well - why, that's just the power of natural selection at work, resulting in fit individuals. And if it's present when it's not advantageous, Darwinism can explain that too - Darwinism is not perfectly designed, you see, so we can expect kludges here and there. So no, I can't accept that "the Darwinian framework" functions better here, or that the ID framework functions worse. What I’m saying is that the experience of pain is the frustrated violent urge to be somewhere else; to do the impossible, leave your body behind – pure “aversion”. 
And I'd disagree, because you're back to making pain 'of' or 'about', a thing of conceptualization. I'm saying that pain, and experience generally, is not a conceptualization. You can conceptualize pain, but to deal with that is to deal with a concept, not pain. Worse, even under your scheme that comes with the heavy qualification - since you're trying to make pain/experience a thing we apply meaning to, but all meaning is derived by Dennett's view. So... But what I’m suggesting is that even when we get to what we think of as “raw” experience (“raw feels” or “qualia”), they are proxies for something more specific – the frustrated urge to flee, in the case of pain for instance. ..The 'frustrated urge to flee' would only be 'the frustrated urge to flee' by an assignment of meaning. You experience pain the way this rock on the ground represents Seattle - by virtue of someone deciding that that's what this or that 'means'. And what someone decides what this or that 'means' only does so by virtue of yet another derivation. And so on unto infinity or a brute stop. Yes, I know. You're aware of this and accept it and think it's peachy. I think it demonstrates the entire project has gone wrong. You say that no, your question wasn't rhetorical. But you certainly seem to be treating it as such, because so far this entire exchange really seems to be working under the assumption that, while I maintain experience is exactly that - an experience, undeniable subjective sensation, not a concept that is interpreted - your response seems to be to just nod your head and continue the conversation as if that's not what experience is and we all agree. But we don't agree. Experience is a datum, not a posit to explain some other datum.nullasalus
July 18, 2011, 01:22 PM PST
Nullasalus:
Right: well, what I suggest (as a hypothesis) is that as far as we are aware (I use that term advisedly) pain is just pain.
This really seems like you’re going right back to treating pain as an ‘of’ or ‘about’ or ‘object’ all over again. As if my having experience is a theory about something, some posit I can be mistaken about.
Well, not in my sentence above. And I certainly don't think you can be "mistaken" about pain. I totally agree that it isn't a theory. That's why I said: pain is just pain, as far as we are aware.
What justification is there for treating pain/experience like this? Again, I’m saying pain – experience, in this sense – is a raw datum. Not a concept I come up with, not a theory about something I’ve observed.
And I'm agreeing that is what we experience.
But I think we can drill down beneath that level (to the unconscious if you like, or what I might call a reflexive functional level) to understand what might, at the conscious level above, present as “pure” pain.
What is ‘the unconscious’ and how do I know it exists, much less that it’s “beneath” experience in a grounding way? What if the panpsychists are correct? What if the neutral monists are? What about the idealists? And insofar as you suggest pain/experience is grounded by ‘unconscious’ levels, that seems to beg the question against the various dualisms too.
Well, that's why I also provided what I think is a better description: "a reflex functional level". Unconscious as in Not Conscious, not as in Freud's Id. The sort of thing an old fashioned kettle used to do when it boiled dry - a bimetallic strip would bend and force out the plug. Something mechanistic. That's the level on which I was asking "what is pain for?" What automatic plug ejection is for in a kettle is to stop the element burning out. What pain is for in an animal is to eject the animal from a dangerous object. I'm positing that that's what pain evolved to do/was designed for (right now it doesn't matter which); we can, I hope, agree that lack of ability to feel pain is a bad thing?
And we know the purpose of pain – pain is a warning – it’s the signal that tells us: “danger: back off”.
“We know” this how? Why can’t pain be a punishment? Why can’t it be a cruel joke? Why can’t it be something that simply happens? And why is pain necessary for ‘backing off’ anyway?
That's interesting. Well, I guess here I cite evidence: certain conditions result in inability to feel pain. They are considered diseases because they lead to disability - burns and other injuries. But OK, I will walk back "we know" - let me rephrase it as a hypothesis - that pain evolved/was designed as a protection against injury, just as the sympathetic "fight or flight" response evolved/was designed to facilitate survival in the face of predators or rivals.
People who can’t feel pain are disadvantaged – people with leprosy for instance.
Except when it’s advantageous to not feel pain, like when defending loved ones from threats, being operated on, etc. We have entire industries devoted to eliminating pain. Should we be banning anaesthetics?
Not at all. Your response is intriguing. I do not assume that what is natural is always good. Nor that what is natural is morally right. I think pain is natural, and has a clear use. But that doesn't mean that it doesn't sometimes kick in when it is useless or even damaging (phantom limb pain, for instance). And to some extent we have evolved/were designed to block pain when feeling it would be dangerous - when in "fight or flight" mode for instance, when the sympathetic drive appears to block the pain response, enabling us to win the fight or flee to safety. In that sense I'd argue that the pain response when pain is not useful is a kind of epiphenomenon arising from a selectable response (i.e. one that helped us survive), and is therefore better explained in a Darwinian framework than an ID one - an intelligent designer might have ensured that the pain response only occurred when aversive behaviour would be advantageous. However, Darwinian theory would predict that, as long as it is sometimes advantageous, it will tend to be selected, and we just have to put up with it when it isn't. Hence our industries devoted to eliminating pain, and our use of anaesthetics.
People who feel pain can be disadvantaged as well.
Yes indeed.
So I’d say that although we do not (necessarily) conceptualise what pain is “about”, and although we may perceive it as something “raw” and contentless, we can, at least in repose, recognise it as the experience of extreme conflict between the urge to leave
Well, there it is again. “Experience of”.
Good catch. But let me point out: I said "in repose". I do not think we necessarily recognise it at the time as the conflict between the urge to flee and the urge to be still (or even as simply the frustrated urge to flee). But on reflection ("in repose") we can recognise it as that. I don't know about you, but I recall, muzzy from an anaesthetic, being in pain, but somehow locating it somewhere else - thinking it belonged to the person on the next gurney, and thinking it would be fine once I was wheeled out of recovery. On that occasion, at the time, I sort of conceptualised it, but wrongly - but the conceptualization itself revealed something, I suggest, of the essence of "pure" pain - the desire to escape - be somewhere where the pain isn't. Animals (cats for instance) often leave home when they are dying, I understand (from vets) - one explanation is that they are seeking a pain-free place, literally. What I'm saying is that the experience of pain is the frustrated violent urge to be somewhere else; to do the impossible, leave your body behind - pure "aversion".
I have to ask, was that initial question of ‘Is all consciousness consciousness about something?’ rhetorical? Because I’m starting to get the impression that you want to have a conversation with the stipulation that yes, all ‘experience’ is ‘experience of/about’, alternatives be damned.
No. I rarely ask rhetorical questions, and if I do, I usually say so. I wanted to know. If your answer had been "yes" we could have gone straight on from there. Your answer was, however, "no", which is probably the better answer. But what I'm suggesting is that even when we get to what we think of as "raw" experience ("raw feels" or "qualia"), they are proxies for something more specific - the frustrated urge to flee, in the case of pain for instance. In the case of things like texture, or even colour, I'd specify something else that I think underlies the "raw" surface - in other words, I suggest that "feels" aren't as raw as they seem, it's just that we don't generally have conscious access to what is still rawer, below, and that what is still rawer below can be expressed as a program of action and/or a program of physiological change.
If pain is not a theory or a concept – if experience is a datum, a “sensory quale”, not a thing that is ‘of’ or ‘about’ something else – it seems your hypothesis here doesn’t even begin to get off the ground.
Indeed. Which is why qualia lie at the bottom of this discussion :)
I’m off for a bit.
Hope to see you later. I appreciate the conversation. Truly. I understand the frustration. Cheers, Lizzie

Elizabeth Liddle
July 18, 2011, 08:17 AM PST
Another thread that reinforces my suspicion that some humans are zombies. The words that people like Elizabeth Liddle use when they speak of consciousness have little to do with my own experience, whereas the words that people like Nullasalus use clearly do. I find the people I encounter and discuss such things with are divided quite neatly into these two groups. I wonder if I can get a research grant.

mike1962
July 18, 2011, 07:53 AM PST
Concerning the "the unconscious thing names itself and becomes self-conscious" blather -- I'd bet that with not too much effort, you guys can get EL to say something really profound, like: "Sentient entities are how 'the universe' becomes self-conscious."

Ilion
July 18, 2011, 07:45 AM PST
Right: well, what I suggest (as a hypothesis) is that as far as we are aware (I use that term advisedly) pain is just pain.

This really seems like you're going right back to treating pain as an "of" or "about" or "object" all over again. As if my having experience is a theory about something, some posit I can be mistaken about. What justification is there for treating pain/experience like this? Again, I'm saying pain - experience, in this sense - is a raw datum. Not a concept I come up with, not a theory about something I've observed.

But I think we can drill down beneath that level (to the unconscious if you like, or what I might call a reflexive functional level) to understand what might, at the conscious level above, present as "pure" pain.

What is "the unconscious" and how do I know it exists, much less that it's "beneath" experience in a grounding way? What if the panpsychists are correct? What if the neutral monists are? What about the idealists? And insofar as you suggest pain/experience is grounded by "unconscious" levels, that seems to beg the question against the various dualisms too.

And we know the purpose of pain – pain is a warning – it's the signal that tells us: "danger: back off".

"We know" this how? Why can't pain be a punishment? Why can't it be a cruel joke? Why can't it be something that simply happens? And why is pain necessary for "backing off" anyway?

People who can't feel pain are disadvantaged – people with leprosy for instance.

Except when it's advantageous to not feel pain, like when defending loved ones from threats, being operated on, etc. We have entire industries devoted to eliminating pain. Should we be banning anaesthetics? People who feel pain can be disadvantaged as well.

So I'd say that although we do not (necessarily) conceptualise what pain is "about", and although we may perceive it as something "raw" and contentless, we can, at least in repose, recognise it as the experience of extreme conflict between the urge to leave

Well, there it is again. "Experience of". I have to ask, was that initial question of "Is all consciousness consciousness about something?" rhetorical? Because I'm starting to get the impression that you want to have a conversation with the stipulation that yes, all "experience" is "experience of/about", alternatives be damned. If pain is not a theory or a concept - if experience is a datum, a "sensory quale", not a thing that is "of" or "about" something else - it seems your hypothesis here doesn't even begin to get off the ground. I'm off for a bit.

nullasalus
July 18, 2011, 07:34 AM PST
Right: well, what I suggest (as a hypothesis) is that as far as we are aware (I use that term advisedly) pain is just pain. I agree it isn't "about" anything. To rephrase, I'd say that we are not conscious of anything other than pain (or may not be - certainly I have experienced the horribleness of knowing nothing but pain, as it were, fortunately not too often). Or, to use your phraseology (which I might also use), we have not "conceptualised" the pain - we are not thinking "I am in pain" or "this pain is terrible" or "my guts are screaming". Cognition is essentially absent - all we know is pain.

But I think we can drill down beneath that level (to the unconscious if you like, or what I might call a reflexive functional level) to understand what might, at the conscious level above, present as "pure" pain. In other words: what might pain be "for"? We have agreed that it is not "about" anything, but it certainly has a purpose (let's even assume ID if you like at this point, though I don't think we have to). And we know the purpose of pain - pain is a warning - it's the signal that tells us: "danger: back off". It causes us to draw back from a thorn before it does too much damage; to let go of a hot coal before it burns us too badly; even to curl up and hide, to help us heal. People who can't feel pain are disadvantaged - people with leprosy for instance.

Pain, in other words, is an aversive urge - the drive to avoid, either a stimulus or further damage, or both. And so I suggest that what, at a conscious level, is "pure" or "raw" continuous pain could also be described as the state of being driven to escape coupled with the drive to remain still - a particularly horrible combination, leaving us strung out between two opposing urges - no wonder we physically "writhe" in pain!

So I'd say that although we do not (necessarily) conceptualise what pain is "about", and although we may perceive it as something "raw" and contentless, we can, at least in repose, recognise it as the experience of extreme conflict between the urge to leave (frustrated because leaving does not remove the stimulus to leave) and the urge to stay still (also frustrated because staying still does not remove the stimulus to stay either). I think I can anticipate the response you will have to this, but let's see :)

Elizabeth Liddle
July 18, 2011, 06:56 AM PST
vjtorley: "Each row is still random, but I have imposed a non-random macro-level constraint. That’s how my will works when I make a choice... For Aristotelian-Thomists, a human being is not two things – a soul and a body... In practical situations, immaterial acts of choice are realized as a selection from one of a large number of randomly generated possible pathways."
What exactly is doing the choosing in your model?

mike1962
July 18, 2011, 06:33 AM PST
Nullasalus:
Still waiting for that quote of me saying I was infallible to begin with. I was certainly accused of thinking I was infallible, more than once.
Yes, I did say I thought you thought you were infallible, though I don't believe I said that you'd said that you were. Either way, I retract both claims (if I indeed made the second).

Elizabeth Liddle
July 18, 2011, 06:22 AM PST
So – we are ready to go on?

Go for it.

nullasalus
July 18, 2011, 06:21 AM PST
Nullasalus:
Right. So pain can sometimes have no content other than, as it were, itself? It is not “about” anything other than pain?
How is it ‘about’ pain? That’s back to making it sound like an object. It is, in this case, an experience.
OK, fair point. Yes it does rather. So let us say then: for you pain is "about" nothing - it is simply pain.
You asked if all consciousness is consciousness ‘of’ something, and for those who said no, you asked an explanation of being conscious ‘of’ nothing. I pointed out the difficulty there. Now you’re swapping out ‘of’ for ‘about’, but that seems like the same problem all over again.
Yes, I think the two are fairly interchangeable. But I am now clear, I think, that for you, pain is "raw" experience, by which you mean it is conscious experience (actually that's probably tautological, but at least it's clear) that is not "of" or "about" anything.
Pain in this case is an experience, period. Not an experience ‘of’ or experience ‘about’, but a subjective experience, period.
Heh, I just typed the above before scrolling down in the typebox. Looks like we are on the same page eh?
We can pull back and conceptualize experience, turn it into an object, but then we’re into concepts rather than experience.
Yes indeed. So – we are ready to go on?

Elizabeth Liddle
July 18, 2011, 06:19 AM PST
OK, Nullasalus, I am happy that we agree that neither of us are infallible.

Still waiting for that quote of me saying I was infallible to begin with. I was certainly accused of thinking I was infallible, more than once.

nullasalus
July 18, 2011, 06:05 AM PST