Uncommon Descent Serving The Intelligent Design Community

How is libertarian free will possible?


In this post, I’m going to assume that the only freedom worth having is libertarian free will: the free will I possess if there are choices that I have made during my life where I could have chosen differently, under identical circumstances. That is, I believe that libertarian free will is incompatible with determinism. By contrast, indeterminism is compatible with the existence of libertarian freedom, but in no way implies it.

There are some people who think that even if your choices are fully determined by your circumstances, they are still free, if you selected them for a reason and if you are capable of being educated to act for better reasons. People who think like that are known as compatibilists. I’m not one of them; I’m an incompatibilist. Specifically, I’m an agent-causal incompatibilist: I believe that humans have a kind of agency (an ability to act) that cannot be explained in terms of physical events.

Some time ago, I came across the Cogito Model of human freedom on The Information Philosopher website, by Dr. Roddy Doyle. The site represents a bold philosophical attempt to reconcile the valid insights underlying both determinism and indeterminism. The author argues that the model accords well with the findings of quantum theory and guarantees humans libertarian freedom, while avoiding the pitfall of making chance the cause of our actions. Here’s an excerpt:

Our Cogito model of human freedom combines microscopic quantum randomness and unpredictability with macroscopic determinism and predictability, in a temporal sequence.

Why have philosophers been unable for millennia to see that the common sense view of human freedom is correct? Partly because their logic or language preoccupation makes them say that either determinism or indeterminism is true, and the other must be false. Our physical world includes both, although the determinism we have is only an adequate description for large objects. So any intelligible explanation for free will must include both indeterminism and adequate determinism.

At first glance, Dr. Doyle’s Cogito Model appears to harmonize well with the idea of libertarian free will. Doyle makes a point of disavowing determinism, upholding indeterminism, championing Aristotle, admiring Aquinas and upholding libertarian free will. However, it turns out that he’s no Aristotelian, and certainly no Thomist. Indeed, he isn’t even a bona fide incompatibilist. Nevertheless, Doyle’s Cogito Model is a highly instructive one, for it points the way to how a science-friendly, authentically libertarian account of freedom might work.

There are passages on Dr. Doyle’s current Web site (see for instance paragraphs 3 and 4 of his page on Libertarianism) where he appears to suggest that our character and our values determine our actions. This is of course absurd: if I could never act out of character, then I could not be said to have a character. I would be a machine.

Misleadingly, in his Web page on Libertarianism, Dr. Doyle conflates the incoherent view that “an agent’s decisions are not connected in any way with character and other personal properties” (which is surely absurd) with the entirely distinct (and reasonable) view that “one’s actions are not determined by anything prior to a decision, including one’s character and values, and one’s feelings and desires” (emphases mine). Now, I have no problem with the idea that my bodily actions are determined by my will, which is guided by my reason. However, character, values, feelings and desires are not what makes an action free – especially as Doyle makes it quite clear in his Cogito Model that he envisages all these as being ultimately determined by non-rational, physicalistic causes:

Macro Mind is a macroscopic structure so large that quantum effects are negligible. It is the critical apparatus that makes decisions based on our character and values.

Information about our character and values is probably stored in the same noise-susceptible neural circuits of our brain…

The Macro Mind has very likely evolved to add enough redundancy, perhaps even error detection and correction, to reduce the noise to levels required for an adequate determinism.

The Macro Mind corresponds to natural selection by highly determined organisms.

There is a more radical problem with Doyle’s model: he acknowledges the reality of downward causation, but because he is a materialist, he fails to give a proper account of downward causation. He seems to construe it in terms of different levels of organization in the brain: Macro Mind (“a macroscopic structure so large that quantum effects are negligible… the critical apparatus that makes decisions based on our character and values”) and Micro Mind (“a random generator of frequently outlandish and absurd possibilities”) – the latter being susceptible to random quantum fluctuations, from which the former makes a rational selection.

Doyle goes on to say:

Our decisions are then in principle predictable, given knowledge of all our past actions and given the randomly generated possibilities in the instant before decision. However, only we know the contents of our minds, and they exist only within our minds. Thus we can feel fully responsible for our choices, morally and legally.

This passage leads me to conclude that Doyle is a sort of compatibilist, after all. As I’ve said, I’m not.

So how do I envisage freedom? I’d like to go back to a remark by Karl Popper, in his address entitled, Natural Selection and the Emergence of Mind, delivered at Darwin College, Cambridge, November 8, 1977. Let me say at the outset that I disagree with much of what Popper says. However, I think he articulated a profound insight when he said:

A choice process may be a selection process, and the selection may be from some repertoire of random events, without being random in its turn. This seems to me to offer a promising solution to one of our most vexing problems, and one by downward causation.

Let’s get back to the problem of downward causation. How does it take place? The eminent neurophysiologist and Nobel prize winner, Sir John Eccles, openly advocated a “ghost in the machine” model in his book Facing Reality, 1970 (pp. 118-129). He envisaged that the “ghost” operates on neurons that are momentarily poised close to a threshold level of excitability.

But that’s not how I picture it.

My model of libertarian free will

Reasoning and choosing are indeed immaterial processes: they are actions that involve abstract, formal concepts. (By the way, computers don’t perform formal operations; they are simply man-made material devices that are designed to mimic these operations. A computer is no more capable of addition than a cash register, an abacus or a Rube Goldberg machine.)

Reasoning is an immaterial activity. This means that reasoning doesn’t happen anywhere – certainly not in some spooky Cartesian soul hovering 10 centimeters above my head. It has no location. Ditto for choice. However, choices have to be somehow realized on a physical level, otherwise they would have no impact on the world. The soul doesn’t push neurons, as Eccles appears to think; instead, it selects from one of a large number of quantum possibilities thrown up at some micro level of the brain (Doyle’s micro mind). This doesn’t violate quantum randomness, because a selection can be non-random at the macro level, but random at the micro level. The following two rows of digits will serve to illustrate my point.

1 0 0 0 1 1 1 1 0 0 0 1 0 1 0 0 1 1
0 0 1 0 0 0 0 1 1 0 1 1 0 1 1 1 0 1

The above two rows of digits were created by a random number generator. Now suppose I impose the macro requirement: keep the columns whose sum equals 1, and discard the rest. I now have:

1 0 1 1 1 0 0 0 0 1
0 1 0 0 0 1 1 1 1 0

Each row is still random, but I have imposed a non-random macro-level constraint. That’s how my will works when I make a choice.
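As a hypothetical sketch (entirely my own; the variable names and the use of Python's `random` module are not part of the original illustration), the column-filtering procedure might be written like this:

```python
import random

# Micro level: two rows of independently generated random bits,
# standing in for quantum-level noise.
row1 = [random.randint(0, 1) for _ in range(18)]
row2 = [random.randint(0, 1) for _ in range(18)]

# Macro-level constraint: keep only the columns whose digits sum to 1.
kept = [(a, b) for a, b in zip(row1, row2) if a + b == 1]
filtered1 = [a for a, _ in kept]
filtered2 = [b for _, b in kept]

# Each filtered row, read on its own, is still a random bit sequence,
# yet every surviving column now satisfies the non-random rule.
```

Each filtered row taken alone remains statistically random (a kept bit is 0 or 1 with equal probability), while the two rows jointly obey the deterministic constraint, which is the point of the illustration.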

For Aristotelian-Thomists, a human being is not two things – a soul and a body – but one being, capable of two radically different kinds of acts – material acts (which other animals are also capable of) and formal, immaterial actions, such as acts of choice and deliberation. In practical situations, immaterial acts of choice are realized as a selection from one of a large number of randomly generated possible pathways.

On a neural level, what probably happens when an agent decides to raise his/her arm is this: the arm goes through a large number of micro-level muscular movements (tiny twitches) which are randomly generated at the quantum level. The agent tries these out over a very short interval of time (a fraction of a second) before selecting the one which feels right – namely, the one which matches the agent’s desire to raise his/her arm. This selection continues during the time interval over which the agent raises his/her arm. The wrong (randomly generated quantum-level) micro-movements are continually filtered out by the agent.
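This filtering story can be caricatured in a toy simulation (again my own construction, not the author's: the one-dimensional "arm position", the candidate counts, and all numbers are arbitrary assumptions). Each candidate micro-movement is random; only the selection among them is non-random:

```python
import random

random.seed(42)   # fixed seed, purely for a reproducible illustration

target = 1.0      # where the agent wants the arm to end up (arbitrary units)
position = 0.0    # current arm position

# At each instant a batch of random micro-movements ("twitches") is thrown
# up; the agent non-randomly keeps whichever one best matches its
# intention, filtering out the rest.
for _ in range(60):
    twitches = [random.uniform(-0.1, 0.1) for _ in range(20)]
    chosen = min(twitches, key=lambda t: abs(position + t - target))
    position += chosen
```

No individual twitch is aimed at the target, yet the repeated non-random selection drives the arm reliably toward it, mirroring Popper's point that a selection from a repertoire of random events need not itself be random.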

The agent’s selection usually reflects his/her character, values and desires (as Doyle proposes) – but on occasion, it may not. We can and do act out of character, and we sometimes act irrationally. Our free will is not bound to act according to reason, and sometimes we act contrary to it (akrasia, or weakness of will, being a case in point).

So I agree with much of what Doyle has to say, but with this crucial difference: I do not see our minds as having been formed by the process of natural selection. Since thinking is an immaterial activity, any physicalistic account of its origin is impossible in principle.

Comments
F/N: I have put up a reference clip on the Smith Model (a cybernetic architecture) and linked issues and ideas, here. KF kairosfocus
LIZ: Well, I don’t think any “part” of the brain is conscious! I think I am conscious, and that I consist of not just my brain but the rest of me as well, including my history over time.
As you probably know, brain studies indicate that certain areas are directly related to conscious experience and some are not. You can probe/excite one spot and the subject is consciously aware of it, and probe another and the subject is not. So then, it would seem my characterization is a fair one. While I agree that "you" are conscious, it seems empirically true that only parts of your brain, that is, only a subset of neurons and synapses and dendrites, are associated directly with conscious experience. If this is true, it is an interesting and unanswered question how this can be true if consciousness is reducible to certain neurons, synapses and dendrites. What makes those particular neurons, synapses and dendrites so special? mike1962
Watch carefully as I emit a cloud of ink so as to elude my most dangerous predator, the dreaded clear position.
classic Mung
Yes. But some are less wrong than others, mostly by being more complete. However, sometimes a simpler, less complete, more wrong, model is good enough, and easier to handle. And as of right now, the 'filling in' model is the better model than the 'refrigerator light' model, given the data. All this wriggling about how you think the other model is better, even though it's wrong, because all models are wrong, but some are better than others, but the filling in model is a good model... it's not necessary. It’s a fair point, and an interesting one – indeed, one I have been making, that consciousness is intrinsically related to memory. So I will rephrase: on waking we are conscious of very little that has occurred since we fell asleep, and what we are conscious of, is frequently confusing. However, interestingly, we usually are conscious that time has passed, which is often not true on recovery from an anaesthetic, suggesting that anaesthesia promotes, at the least, a more profound amnesia than natural sleep. See, it's not a 'point' you have been making. An assertion, a fiat declaration, is not a point. You say consciousness is intrinsically related to memory - but the only support you have of this is reportability. But that suggests a link between memory and reportability. Consciousness, experience itself, isn't necessarily tied to memory. Well, in terms of understanding how consciousness work, efficiency seems a reasonable variable to consider – an efficient brain can be more compact than an inefficient one, and when designing, say, “seeing” robots, efficiency is important. And we have good behavioural evidence that the kind of efficiency we build into “seeing” robots is also displayed by the visual systems of living organisms, which, indeed, is where we got the idea from! The behavioural evidence includes a wealth of data from research into illusions, including something called “change blindness”. 
You're talking about 'an efficient brain', but if experience is simply a property of all matter (among other possibilities), efficiency goes out the window. It would be like talking about how brains would be more efficient if "they only were affected by gravity when they needed it", then going off to speculate on how evolution developed the ability for brains to be affected by gravity. It's wrongheaded from the start. My point from the start has been that when we talk about what we recall or report, we are talking about our memory - but that memory and experience are not necessarily linked. It's entirely possible to have an experience but to have no memory of it, or to be unable to recall a memory. To talk about memory is not to talk about experience - and once you recognize that point, the model you say is "what the data shows", turns out not to be "what the data shows". It's one interpretation you get which is powered more by your assumptions than the data. I've offered an explanation in reply that makes different assumptions and uses the same data. So when you say... Again, it’s possible that the findings are all nonsense, just as it’s possible that omphalism or solipsism are true. ...You're being tremendously disingenuous. It's not the data, the actual 'findings' that I'm questioning. It's the interpretation. That point is driven home when you recognize that omphalism and solipsism do not need to deny any 'data' - they are frameworks for interpreting data. But then, you're using a framework to interpret the data too. No, I don’t appreciate that. Please explain. A person can be a reductionist while still finding certain models useful even if they know they're wrong. They can even use a 'holistic' model when they reject holism in the particular case. Let's say that someone is modeling the throw of a baseball. They don't think 'the baseball' is some irreducible thing - in fact, they're sure the baseball reduces entirely to its parts. 
But it takes a tremendous amount of time to model the throw of a baseball in terms of quanta. It may not even be feasible yet, they may even lack all the data required. The fact that they model the baseball as a single unit, the fact that they find it convenient to talk about the baseball as 'a baseball', doesn't make them non-reductive materialist or an emergentist. I do agree with what I think Hofstadter is saying, which is, I think, that we could (in theory only) model a complete brain from the model up, but that we would be incapable of understanding the model as an explanation. I’d put it as I’ve put it earlier, similarly: that an explanation of the brain in terms of, say the interactions of the quarks of which they are composed would not be an explanation. And I wouldn’t use the word “reductionist” to describe an explanation that wasn’t an explanation”! An incomprehensible explanation is an oxymoron. But then Hof likes playing with words. "Hof likes playing with words". Granted, he BSs quite a lot and makes it sound like he said something profound when he mostly produced obfuscation. But, let's see what you're saying here. 1) Hof says outright that his position is not antireductionistic. 2) He further asserts flatly that he has no doubt that a totally reductionistic but incomprehensible explanation of the brain exists. 3) He also claims that nothing he's saying should be taken as being in conflict with reductionism. So what's your response? A) You agree entirely with Hof. B) But you don't think that an entirely reductionistic explanation of the brain exists. C) Indeed, to explain the brain entirely in reductionist terms would not be an explanation at all. D) So, clearly Hof had to be kidding. Because otherwise he'd either be talking nonsense, or worse, you and him would disagree. Don't you think "Hof likes to play with words" is a little desperate here? 
You're saying that Hof can't really believe that there is a completely reductionistic explanation of the brain, even though he just said explicitly he's sure such a thing exists. You're trying to insist he's not a reductionist, when he's flatly stating that his views should not be taken as antireductionist. Surprise: You're the one who's 'playing with words' here, more than Hof. No, I don’t, because I don’t think it does. I think the conflict is only apparent, not real, and is apparent because of different ways of using words. Bwahaha. And how would you register a real, rather than apparent, conflict? Well, if a different way of using words was involved! Your big defense here is to say that a completely reductive explanation of the mind is an oxymoron and therefore cannot exist. Therefore, Hof can't really believe this exists, even though he just said it did. It can't be that Hof and you disagree, right? Or worse, that Hof's own ideas are deeply flawed? No. Has to be a big misunderstanding. No, because I don’t think the non-reductive models are fictions. I don’t even know what that would mean. Non-reductive models can fit the data as well, at a huge efficiency saving, as reductive ones. You've never heard of fictionalism, or even the term "useful fiction"? You just went on talking about how models can be useful while being wrong, but you can't wrap your head around the idea that a model can be false but pragmatic? It isn’t even a fiction that geocentrism is true. Any point in the universe can be regarded as a reference point, and we are, by definition, at the very centre of the observable universe. So not only is a geocentric reference point perfectly reasonable, it’s has a more profound truth value. I love it. So, I point out that someone can reject geocentrism and still use a geocentric model, and your response is to looking-glass geocentrism until you can say that geocentrism is true? This is epic. 
Parsing the world in terms of “patterns” and “information” is something that minds do. I’m no solipsist, so I would not argue that the referents for those words don’t represent an underlying reality. But, as I’ve probably said before, but I’ll say it again; I do not consider we have direct access to reality – all we have are models. We can test those models against data (which themselves are models at a level closer to “raw” reality) but we can never actually access reality directly. But the better our models predict the data, the closer we can infer they describe an underlying reality. And that, I suggest, is how science works – by making ever-more closely fitting models of reality. Metaphors and propositions expressed linguistically are one kind of model. There are others. And here we go again. EL: And if information and patterns are, in your lexicon, “immaterial” then I’m not a materialist because I am totally persuaded they exist. NS: So these "information" and "patterns" exist in nature, entirely independent of any minds deriving or assigning them? EL: Watch carefully as I emit a cloud of ink so as to elude my most dangerous predator, the dreaded clear position. You're damn certain that information and patterns exist! And by that you mean that there are patterns and information in your head. Do patterns and information exist independently of your mind? ... Who's to say, perhaps the world is naught but illusion! (We don't have direct access to reality, but somehow we can check our models against it. Using the data which is also a model.) So, that certainty that information and patterns exist, turns out to not be quite so certain. It's a way minds parse things. But wait, do minds actually parse things, independent of other minds assigning or deriving that they are parsing? Oh, I know the answer: Back to infinite loops again. Not sure what you mean in this context. Information is often compressible. 
I was asking if information 'reduced to' something else, but since you've moved from being certain that information and patterns exist to declaring them to be the result of mental parsing, it's moot. nullasalus
Elizabeth Liddle:
Not sure what you mean in this context. Information is often compressible.
Sigh. When information is compressed, how is that information reduced? The context, clearly, is reductionism. So the question should be understood as not one about whether information and patterns can be compressed, but rather whether they can be reduced.
Reductionism can mean either (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents.[1] This can be said of objects, phenomena, explanations, theories, and meanings.
http://en.wikipedia.org/wiki/Reductionism Mung
I don't especially mind labels as long as they don't indicate to their reader that I hold views that I don't in fact hold. Mung: I don't deny the existence of patterns and information. You say these are immaterial. Why then, do you call someone who points to the existence of patterns and information a "materialist"? Elizabeth Liddle
Elizabeth, You are a self-described materialist, even if you don't care for the label. Mung
Nullasalus:
Because all models are wrong, although some are less wrong than others. More importantly, all models are incomplete, but sometimes a simpler, though more incomplete model is more useful than a more complex, though more complete one. So, the filling in model is a good model but probably wrong and the fridge light model is better despite the filling in model being better supported by the data. Also, all models are wrong.
Yes. But some are less wrong than others, mostly by being more complete. However, sometimes a simpler, less complete, more wrong, model is good enough, and easier to handle.
For example, we are asleep, we aren’t conscious of much, and yet when we awake we are usually aware that time has passed. We aren’t conscious of much? According to who? How do we know we’re not plenty conscious but simply don’t remember what we were conscious of? The mere existence of dreams, and the difficulty recalling them, speaks against your view here.
It’s a fair point, and an interesting one – indeed, one I have been making, that consciousness is intrinsically related to memory. So I will rephrase: on waking we are conscious of very little that has occurred since we fell asleep, and what we are conscious of, is frequently confusing. However, interestingly, we usually are conscious that time has passed, which is often not true on recovery from an anaesthetic, suggesting that anaesthesia promotes, at the least, a more profound amnesia than natural sleep.
We have no awareness of what we are not seeing, probably because we do not need to – if we do need to get information from somewhere else, we can. So rather than “filling in” I think it’s better to think that we are aware on a “need to know” basis – “need to be conscious of”. Much more efficient than being conscious of everything all the time. Or we have an experience that is not remembered. You say ‘much more efficient’, but who says efficiency is key here? It would be very efficient if, re: that old quantum physics joke, the moon wasn’t there unless we were looking at it. It’s not a good argument that the moon isn’t there if we’re not looking at it.
Well, in terms of understanding how consciousness works, efficiency seems a reasonable variable to consider – an efficient brain can be more compact than an inefficient one, and when designing, say, “seeing” robots, efficiency is important. And we have good behavioural evidence that the kind of efficiency we build into “seeing” robots is also displayed by the visual systems of living organisms, which, indeed, is where we got the idea from! The behavioural evidence includes a wealth of data from research into illusions, including something called “change blindness”. Again, it’s possible that the findings are all nonsense, just as it’s possible that omphalism or solipsism are true. There’s no way of knowing that we are not acutely conscious throughout our sleeping hours, but have no access to any of the stuff we were conscious of during those hours subsequently. But then this is why I think that memory is intrinsic to consciousness – without memory, however short term, I suggest that consciousness makes no sense, which is why Edelman calls consciousness “the remembered present”.
Now the reason I reject the label “reductionist” is that it seems wildly inappropriate as a term to describe a holistic view of the self. I can scarcely think of a more counter-intuitive term. Do you appreciate that ‘reductionist’ and ‘not reductionist’ are not about mere pragmatic descriptions, but realities?
No, I don’t appreciate that. Please explain.
Hof affirms explicitly that he believes a reductionist explanation of mind is true, but incomprehensible. So, since we can’t comprehend the reductionist explanation of mind, we use metaphors. Let’s put this question in as stark relief as I think is possible, EL. Here’s the quote from Hof: “In principle, I have no doubt that a totally reductionistic but incomprehensible explanation of the brain exists; the problem is how to translate it into a language we ourselves can fathom.” A) Do you agree that ‘a totally reductionistic’ explanation of the brain exists?
Well, as I don’t “appreciate” (yet!) that “reductionist” and “not-reductionist” are not mere pragmatic descriptions, I’m going to answer on the basis that they are. I do agree with what I think Hofstadter is saying, which is, I think, that we could (in theory only) model a complete brain from the model up, but that we would be incapable of understanding the model as an explanation. I’d put it as I’ve put it earlier, similarly: that an explanation of the brain in terms of, say, the interactions of the quarks of which they are composed would not be an explanation. And I wouldn’t use the word “reductionist” to describe an explanation that wasn’t an explanation! An incomprehensible explanation is an oxymoron. But then Hof likes playing with words.
B) If not, do you realize this places you in apparent conflict with Hof, and possibly Dennett?
No, I don’t, because I don’t think it does. I think the conflict is only apparent, not real, and is apparent because of different ways of using words.
B) Do you realize that a reductionist can nevertheless make use of useful non-reductive fictions – in the same way that, say, a geocentric reference point can be used in satellite launches without someone having to affirm that geocentrism is true?
No, because I don’t think the non-reductive models are fictions. I don’t even know what that would mean. Non-reductive models can fit the data as well, at a huge efficiency saving, as reductive ones. It isn’t even a fiction that geocentrism is true. Any point in the universe can be regarded as a reference point, and we are, by definition, at the very centre of the observable universe. So not only is a geocentric reference point perfectly reasonable, it has a more profound truth value.
And, by the same token, if you think that “pattern” and “information” are immaterial, then I am not a materialist. I think the me-ness of me inheres in my pattern, not in the material in which that pattern is instantiated. Fantastic. A) Do these “patterns” and “information” (‘information about’, it would seem) exist independently of any minds deriving or assigning them?
Parsing the world in terms of “patterns” and “information” is something that minds do. I’m no solipsist, so I would not argue that the referents for those words don’t represent an underlying reality. But, as I’ve probably said before, I’ll say it again: I do not consider we have direct access to reality – all we have are models. We can test those models against data (which themselves are models at a level closer to “raw” reality) but we can never actually access reality directly. But the better our models predict the data, the closer we can infer they describe an underlying reality. And that, I suggest, is how science works – by making ever-more closely fitting models of reality. Metaphors and propositions expressed linguistically are one kind of model. There are others.
B) Are these patterns and information themselves reducible?
Not sure what you mean in this context. Information is often compressible. Elizabeth Liddle
What's wrong with it, Mung? We are talking labels here, right? What does the term "materialist" mean, in your lexicon? Elizabeth Liddle
A further point. Elizabeth says if we define information as being immaterial, and she is totally persuaded that information exists, it follows that she is not a materialist. This sort of asinine "reasoning" is just one example [they are Legion] of what I find so frustrating about our dear Lizzie. Mung
Elizabeth Liddle:
But what you call me depends how you define your concepts and terms.
Well, I don't think I've ever called you a liar, though I suppose it's possible. Yet you seem to think that I do so with some regularity. But I prefer to let you speak for yourself on the question of whether or not you are a materialist:
I do not see why a “purposeless, mindless process” should not produce purposeful entities, and indeed, I think it did and does. - Elizabeth Liddle
Don't you think that makes you a materialist, regardless of how we define "information" or "pattern"? Mung
Because all models are wrong, although some are less wrong than others. More importantly, all models are incomplete, but sometimes a simpler,though more incomplete model is more useful than a more complex, though more complete one. So, the filling in model is a good model but probably wrong and the fridge light model is better despite the filling in model being better supported by the data. Also, all models are wrong. For example, we are asleep, we aren’t conscious of much, and yet when we awake we are usually aware that time has passed. We aren't conscious of much? According to who? How do we know we're not plenty conscious but simply don't remember what we were conscious of? The mere existence of dreams, and the difficulty recalling them, speaks against your view here. We have no awareness of what we are not seeing, probably because we do not need to – if we do need to get information from somewhere else, we can. So rather than “filling in” I think it’s better to think that we are aware on a “need to know” basis – “need to be conscious of”. Much more efficient than being conscious of everything all the time. Or we have an experience that is not remembered. You say 'much more efficient', but who says efficiency is key here? It would be very efficient if, re: that old quantum physics joke, the moon wasn't there unless we were looking at it. It's not a good argument that the moon isn't there if we're not looking at it. Now the reason I reject the label “reductionist” is that it seems wildly inappropriate as a term to describe a holistic view of the self. I can scarcely think of more counter-intuitive term. Do you appreciate that 'reductionist' and 'not reductionist' are not about mere pragmatic descriptions, but realities? Hof affirms explicitly that he believes a reductionist explanation of mind is true, but incomprehensible. So, since we can't comprehend the reductionist explanation of mind, we use metaphors. 
Let's put this question in as stark relief as I think is possible, EL. Here's the quote from Hof: "In principle, I have no doubt that a totally reductionistic but incomprehensible explanation of the brain exists; the problem is how to translate it into a language we ourselves can fathom."

A) Do you agree that 'a totally reductionistic' explanation of the brain exists?
B) If not, do you realize this places you in apparent conflict with Hof, and possibly Dennett?
C) Do you realize that a reductionist can nevertheless make use of useful non-reductive fictions - in the same way that, say, a geocentric reference point can be used in satellite launches without someone having to affirm that geocentrism is true?

And, by the same token, if you think that “pattern” and “information” are immaterial, then I am not a materialist. I think the me-ness of me inheres in my pattern, not in the material in which that pattern is instantiated.

Fantastic.
A) Do these "patterns" and "information" ('information about', it would seem) exist independently of any minds deriving or assigning them?
B) Are these patterns and information themselves reducible? nullasalus
Mung:
Like a wave?
Yep, but a lot more complicated.
And, by the same token, if you think that “pattern” and “information” are immaterial, then I am not a materialist. So what we think makes you not a materialist? Interesting.
What you think makes no difference to what I think unless you persuade me to think differently. But what you call me depends on how you define your concepts and terms. And if information and patterns are, in your lexicon, "immaterial", then I'm not a materialist, because I am totally persuaded they exist. Elizabeth Liddle
It isn't about the 'ists', it's about the '-ism'. Ilion
Elizabeth Liddle:
REDUCTIONISTS DO NOT NECESSARILY REDUCE THINGS TO THEIR CONSTITUENT PARTS; SOME OF THEM ELEVATE SPECIFIC ARRANGEMENTS OF CONSTITUENT PARTS TO THE STATUS OF A WHOLE THAT HAS PROPERTIES NOT SHARED WITH ANY OF THE PARTS.
Like a wave? And, by the same token, if you think that “pattern” and “information” are immaterial, then I am not a materialist. So what we think makes you not a materialist? Interesting. Mung
Mike 1962:
… How can “part” of the brain, or the “backend” of the brain be conscious and not the whole damn thing? What is so special about that particular arrangement of synapses and dendrites that makes it conscious?
Well, I don't think any "part" of the brain is conscious! I think I am conscious, and that I consist of not just my brain but the rest of me as well, including my history over time. Unfortunately we have found ourselves in a closed loop with this conversation, so it will probably just languish, but the basis of my position is that the consciousness question is ill-posed. What I have been trying to do (and what I think both Dennett and Hofstadter have tried to do) is to pose the question so that it is answerable. Of course the natural reaction to that is "but you haven't answered the question!" or, worse, "you are denying the existence of the very explanandum!" Well, in a sense yes, but just because I don't think there is an entity called "consciousness" doesn't mean that I don't think we are conscious. I do. I think organisms (not brains, or part-brains, or brain regions) are conscious, and that one of the things that very smart organisms, such as people, are conscious of is their own existence, specifically as objects with the same set of properties as other similar objects, e.g. other people.

Now the reason I reject the label "reductionist" is that it seems wildly inappropriate as a term to describe a holistic view of the self. I can scarcely think of a more counter-intuitive term. But if, by reductionist, you mean that I think that the self is the result of lots of subprocesses and systems, going right down to subatomic processes and systems, then sure, I guess I just have to wear that label. But if you insist on my wearing such a label, please also read the hazard warning printed in red letters below, saying: REDUCTIONISTS DO NOT NECESSARILY REDUCE THINGS TO THEIR CONSTITUENT PARTS; SOME OF THEM ELEVATE SPECIFIC ARRANGEMENTS OF CONSTITUENT PARTS TO THE STATUS OF A WHOLE THAT HAS PROPERTIES NOT SHARED WITH ANY OF THE PARTS.

And, by the same token, if you think that "pattern" and "information" are immaterial, then I am not a materialist. 
I think the me-ness of me inheres in my pattern, not in the material in which that pattern is instantiated. Hope that clears things up a little. Elizabeth Liddle
Nullasalus:
But FWIW – I think Dennett is right about “filling in” – it’s a moderately good working model, but not really supported by the data. The fridge light is a better model IMO.
If it’s not really supported by the data, then how can it be a good working model?
Because all models are wrong, although some are less wrong than others. More importantly, all models are incomplete, but sometimes a simpler, though more incomplete, model is more useful than a more complex, though more complete one. "Filling in" is a reasonable shorthand for what the brain does, and at least it is more accurate than the idea that we see a whole scene at a glance, because we don't. But I think it's a poor metaphor for something that is in many ways far more interesting.
“Well, the data is against it – but it’s still a good model.”? Others seem to think that it’s a good model because it IS supported by the data. And the fridge light model (I take this to mean ‘conscious experience shows up and disappears as needed or whatever’) is better how – especially given that you yourself said that it’s difficult, and I suggest it may be impossible, to test whether consciousness is going ‘off’ as opposed to not being accessible / being forgotten?
By the "fridge light" metaphor, I mean that whenever we need to be conscious of something, we are able to be conscious of it, and so we don't ever register being unconscious of it, just as we don't register that the fridge light is off. As I've said, I think the model of consciousness in which it is "off" or "on" is a poor one too - I think it's much better to think in terms of what we are conscious of, than in terms of "whether we are conscious".

For example, when we are asleep, we aren't conscious of much, and yet when we awake we are usually aware that time has passed. Interestingly, after most anaesthetics this is not the case. This suggests that awareness of time passing does not cease during normal sleep, but does cease during anaesthesia. Or another way of putting that might be that on waking from normal sleep we can access information (become conscious) of the time that has elapsed, whereas this is not possible after an anaesthetic until we see a clock. I have an odd memory of seeing a clock reading 1.00pm as the anaesthetist injected my hand, and then, as it seemed, after an eyeblink, seeing a clock reading 4.00 - except that it was a different clock!

Back to the fridge light, and filling in: there's an interesting experiment you can do with an eyetracker, whereby you program a computer display only to display text where the participant is looking (a couple of degrees of visual angle), and just replace the rest of the text with random letters. The extraordinary thing is that to the participant, the screen looks completely normal, but to those watching, there is a constant flicker, as normal text appears wherever the participant is looking, but nowhere else. We have no awareness of what we are not seeing, probably because we do not need to - if we do need to get information from somewhere else, we can. So rather than "filling in" I think it's better to think that we are aware on a "need to know" basis - "need to be conscious of". 
Much more efficient than being conscious of everything all the time. Elizabeth Liddle
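[The gaze-contingent "moving window" experiment described in the comment above can be illustrated with a toy sketch. This is an illustration only, not the actual experimental software; the window size, the letter-replacement scheme, and the function name are all assumptions made for the example:]

```python
import random
import string

def moving_window_frame(text, gaze_index, window=10):
    """Render one 'frame' of a gaze-contingent display.

    Real text is shown only within `window` characters of the gaze
    position; every other letter is replaced with a random lowercase
    letter. Spaces are kept so that word shapes survive, as they do in
    the real paradigm. (All parameters here are illustrative choices.)
    """
    frame = []
    for i, ch in enumerate(text):
        if abs(i - gaze_index) <= window or ch == " ":
            frame.append(ch)  # inside the gaze window: show real text
        else:
            frame.append(random.choice(string.ascii_lowercase))
    return "".join(frame)

line = "the quick brown fox jumps over the lazy dog"
# With gaze near the start, only the first few words render normally;
# to an outside observer the rest of the line is flickering junk.
print(moving_window_frame(line, gaze_index=4))
```

Re-running the function as the simulated gaze index moves down the line reproduces the effect described: each frame looks locally normal around fixation, while the periphery is scrambled.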
mike1962, High compliment, thank you.

EL,

You say you are answering my questions, then you don’t answer them!

What did I not answer here? You asked me if I think it's incorrect that the 'evidence suggests' it takes time for an apple to be experienced. I explained my problem with taking the 'evidence suggests' that way. Is it that I'm not giving the right answers? That I'm giving rationales rather than some straight-up yes/no?

Which not only did I not say, I said I do not say it, and do not even hypothesise!

You're telling me directly that the 'evidence suggests' something; I gave my reasons for being skeptical of that view. Not to mention my reasons for thinking that even if it did 'take time' that this would hardly affect there being a 'stream of consciousness' - I'm splitting hairs, because some hairs have it coming.

Then you talk about “the emergence dance”. It’s as though you just scan my posts for stimuli, and press the nearest button, without actually addressing the reasoning.

Elizabeth, there is no reasoning in the Emergence Dance. I've seen you appeal to it in multiple threads - each time it follows a similar pattern. "The self/consciousness emerges!", the 'how' is asked for, you talk about feedback loops, someone asks how that leads to the self or consciousness, you go for the metaphors. Even Hof seems to think he hasn't really given an explanation of consciousness and mind so much as gestured in some direction or come up with some useful fiction. And Dennett's position is largely one of keeping the materialist faith while trying to make it sound not ridiculous. What I'm doing is taking note of your typical moves, and pointing them out. And really, for someone who's accusing me of not answering questions - you've dodged quite a lot of mine, and left various observations unanswered. 
I gave a quote where Hof (who you've appealed to more than once, and who is typically taken as being on the exact same page as Dennett) affirms the truth of an entirely reductionist view of the mind - he just thinks it's not comprehensible to humans, so we have to use some convenient language. (Let's put aside for a moment the problem of declaring reductionism to be true while admitting he can't comprehend how. And then trying to translate what he can't comprehend.) So here I am, wondering. Hof claims to be a reductionist. You claim to reject reductionism, but you've strongly defended Hof and Dennett. So where's your disagreement with Hof? Is it that you don't disagree, and you just call yourself a non-reductive materialist because you think it sounds better? Or you think Hof's views are non-reductive and he just can't figure that out? I hope you appreciate that this is a pretty important question, since it touches on a claim you've repeated about your views. If you're a reductionist after all, that's going to be quite a change. If you're not a reductionist, then at least given what I've provided it's clear you're in conflict with Hof and quite possibly Dennett - I'd like to explore that. And either way, the result is going to be you've either misunderstood something, misrepresented your views, or think Hof (and possibly Dennett) don't even know what their own views commit them to. I mean, this quote does seem to be accurate - it's pretty straightforward. And I have another source which at least suggests that Dennett rejects non-reductive materialism.

But FWIW – I think Dennett is right about “filling in” – it’s a moderately good working model, but not really supported by the data. The fridge light is a better model IMO.

If it's not really supported by the data, then how can it be a good working model? "Well, the data is against it - but it's still a good model."? Others seem to think that it's a good model because it IS supported by the data. 
And the fridge light model (I take this to mean 'conscious experience shows up and disappears as needed or whatever') is better how - especially given that you yourself said that it's difficult, and I suggest it may be impossible, to test whether consciousness is going 'off' as opposed to not being accessible / being forgotten? nullasalus
"Materialists are hell bent on denying themselves. It’s weird. Really really weird." Indeed, and the fact was noted centuries ago. For example: "He must pull out his own eyes, and see no creature, before he can say, he sees no God; He must be no man, and quench his reasonable soul, before he can say to himself, there is no God." -- John Donne Ilion
Nullasalus: You say you are answering my questions, then you don't answer them! And you say you are reading my post, then you say things like this:
The problem is that you’re not entertaining the possibility in this case, because you’re presenting the idea that if there is no report, there was no experience
Which not only did I not say, I said I do not say it, and do not even hypothesise! Then you talk about "the emergence dance". It's as though you just scan my posts for stimuli, and press the nearest button, without actually addressing the reasoning. Well, I'm out most of today; I hope to get back later. But FWIW - I think Dennett is right about "filling in" - it's a moderately good working model, but not really supported by the data. The fridge light is a better model IMO. Elizabeth Liddle
Nullasalus, you have a superb mind. Thanks for typing out all the words. mike1962
I am a conscious thing. I am consciousness. That is primary. Everything else is secondary. Including inferences of reason, which my consciousness experiences as a mere process. Duh. Materialists are hell bent on denying themselves. It's weird. Really really weird. mike1962
"... Ultimately, however, studies have confirmed that the visual cortex does perform a very complex “filling in” process ..." In grade school, during a game of the murderous version of dodge-ball we played, I once caught the ball in the crook of my elbow; it wasn't that I snagged it with my arm and re-directed its flight into the core of my body, so that I could trap it with both arms (as one girl who was very good at the game regularly did). Now, I didn't consciously plan this feat, and as I recall, most of my conscious concentration was directed at avoiding the *other* ball heading my way. Ilion
There is no such thing as a non-reductive non-eliminative materialism. What there is are materialists who do not, or will not, recognize/understand what materialism means and entails. Ilion
Nullasalus @ 148 "Sure, hallucinations are possible. Misremembering is possible. What’s not possible is being mistaken that you’re having a subjective experience when you’re having it. And this is where Dennett makes a very key and telling fumble – he confuses memory of qualia with qualia itself. ‘You think you saw something, but you cannot have seen that!’ merits a reply of, ‘Perhaps what I saw was an illusion. But my seeing an illusion is not open to being an illusion itself.’" Yeppers. A "philosophical zombie" or a robot -- which are essentially the same thing, actually, and which is what Dennett and EL assert and insist that we are -- can never recognize that it has experienced an “illusion” … or that it made a non-illusory mis-judgment. For that matter, a zombie/robot cannot ever actually have/experience an illusion (even the phrases “it experienced an illusion” and "it was mistaken" cannot properly be applied to a zombie/robot). What it can have is a malfunction of its sensory receptors, or a malfunction of its “brain” or CPU; and, unless it has (at least) triply-redundant systems, it has no means by which to initiate a question of the veracity or validity of its inputs, or its outputs. Ilion
mike1962, What I find particularly odd here is that EL has insisted - strongly - that she's not a reductionist, and how everyone is mistaken in thinking that materialism entails reductionism. Hence the emergence dance. Except she's also leaned heavily on Dennett and Hofstadter. Hof throws out 'emergence' fairly often - but then he turns around and says that this isn't antireductionism, and in fact he's sure an entirely reductive explanation of the mind is true (yet incomprehensible, etc.) And Hof and Dennett are said to be on exactly the same page. So, something's up. Maybe you could make the argument that Hof's a reductionist and doesn't even realize it, but that's a little like saying Hof's kind of stupid. Or maybe EL has deeply misunderstood Hof at least, and possibly Dennett as well. (I know, Dennett wrote a paper blasting 'greedy reductionism', but he contrasted it with the 'good kind' of reductionism.) Or EL's going on about being a non-reductive materialist but taking a position that's indistinguishable from a reductive materialist (or...). Or she disagrees sharply with Dennett and Hof, but isn't explaining how. So, the whole thing's a mess. There's also a claim on wikipedia (unsourced) that Dennett regards the claim that consciousness is an irreducible and emergent 'thing' (and nonreductive physicalism along with it) as mysterianism. nullasalus
... How can "part" of the brain, or the "backend" of the brain be conscious and not the whole damn thing? What is so special about that particular arrangement of synapses and dendrites that makes it conscious? Unless... mike1962
Null: "Dennett had powerfully argued that such “filling in” was unnecessary, based on his objections to a Cartesian theater. "
"Powerfully" is not the adverb I would use, but whatever. At any rate, the fact that the whole brain ain't conscious, that it can clearly be shown by experiment that a high degree of processing is done by the brain before it reaches the "conscious part", demonstrates that in some sense there is a "cartesian theater" going on. mike1962
Nullasalus, I appreciate that this is frustrating for both of us, but it would be really helpful if you would answer my questions! I take it that your answer is yes?

I've been answering your questions and pointing out the problems with what you're presenting. It's pretty straightforward. I've also said straight up what the problem is with your 'this is what the evidence suggests' claim. It's not what the evidence suggests, it's what the evidence, with various assumptions that I question, leads you to model.

And I’d also appreciate it if you’d actually read my posts! I’ve said several times now that I think that something can be consciously experienced and then forgotten! Then re-remembered! I wrote a whole post about that, don’t you remember (heh)?

I've been reading your posts repeatedly - what is with this 'if you disagree with me, clearly you haven't read or understood me' attitude? Of course you agree that something can be experienced and then forgotten. The problem is that you're not entertaining the possibility in this case, because you're presenting the idea that if there is no report, there was no experience - *I* am pointing out that this idea is flawed. Will you acknowledge that it is flawed? Will you acknowledge the possibility of experience without report, or without memory - particularly in this case?

Now I accept that it is possible that even when the brain doesn’t appear to register that a face has been shown, “you” somehow nonetheless experience it but immediately forget it. It would be difficult to test this of course. But I also suggest that it is at least a reasonable working model to posit that the brain takes a short while (tens of milliseconds) to do the processing that is required to enable “face” to be experienced, either “at the time” (namely a few tens of milliseconds after presentation) or later. 
You shrug off 'it would be difficult to test for this' - really, it may well be impossible to test for it - as if that isn't a concern. And that's one difference between you and me here; I think it's actually a very big concern. What's more, your 'reasonable working model' isn't required to explain the data - in fact, I'm replying with an even better model, one that takes into account the limits of what the tests can show: "Stimulus exposures under a certain amount of time are unlikely to be reported or apparently retained." What makes you think that your rendition of the data is more reasonable than mine? The fact that yours contains assumptions that aren't open to testing? And notice that this isn't even some kind of essential point for me - say it takes time for something to enter consciousness and you're still left with subjective experience to account for. Really, you're still left with a stream of consciousness, since something taking time to fully enter the stream doesn't mean there is no stream. But I'm actually bothering to be careful with the data and the interpretation, and pointing out the limits and problems that come with a third-person examination of such. You seem only concerned with these problems when they're attached to conclusions you want to dispute - if you don't want to dispute them, it's not much of a worry.

As I said, it’s possible that all this is bunk – that the self-consistent model we get from neuroscience is not in fact what happens! But it correlates very highly with various ways of tapping into subjective experience (behavioural performance, self-report) and there comes a point in science when the data fit the model so well, it seems odd to insist that there is something fundamentally wrong with the model.

Have you noticed that my counter-model fits the data splendidly as well - and has the added virtue of properly taking into account the limits and pitfalls of the methodology? 
Nothing I've said conflicts with the data or the reports - in fact, what I've said is entirely consistent with the reports. In fact I question whether your model better fits the behavior. You earlier stated:

Because that is what the evidence suggests – that we “become conscious” of a stimulus, and its properties, over a period of a substantial number of milliseconds, and that if the stimulus duration is shorter than that, we have no conscious recollection of it, even though it may influence our subsequent choices.

You're saying that the stimulus was never experienced, period. I'm proposing it was possible it was experienced but could not be recalled. But then you mention that the stimulus 'may influence our subsequent choices' - and that seems like a good reason to speculate that the stimulus was in fact experienced, even if recollection of it is unavailable.

This is what I mean by experience (from the first person perspective) being summed over time – we experience continuity, but in fact we back-project that continuity after-the-fact.

So we experience continuity because continuity is what we're actually experiencing. What you question, at least when putting it this way, is the source of the stream, not the stream itself - like having the experience of watching a river flow, but not being able to tell if we're watching 'an actual river in front of us' or a very realistic movie of a river flowing. Or maybe we experience an amalgam of both - maybe we're looking at a real river through a screen that has a see-through projection on it. As I said, know your assumptions and the limitations of your methods of inquiry.

But also know the limits of your subjective experience – we frequently think we see things that we cannot have seen.

Sure, hallucinations are possible. Misremembering is possible. What's not possible is being mistaken that you're having a subjective experience when you're having it. 
And this is where Dennett makes a very key and telling fumble - he confuses memory of qualia with qualia itself. 'You think you saw something, but you cannot have seen that!' merits a reply of, 'Perhaps what I saw was an illusion. But my seeing an illusion is not open to being an illusion itself.'

An alternative of course is that there is some unknown Stuff called consciousness that enables us to see things that our retinas are incapable of registering! But doesn’t it seem more likely that the neuroscience model is correct?

I pointed out the problems with your earlier model, and indeed why at a glance mine actually seems to perform better than yours. As for this situation, it depends on what you're saying. Is it possible to hallucinate, to misremember, or to have an experience that isn't 1:1 with what the retinas are aimed at? Sure, entirely possible - that's very mundane. But if the neuroscientist says 'You could not have possibly had the experience you claimed to have had' - notice that this is about having the experience, not the source of the experience - then so much worse for the neuroscientist, model be damned. Subjective experience trumps models. If the philosopher says, 'Materialism is true, therefore no thoughts can really be 'about' anything (as Alex Rosenberg and others outright claim) and subjective experience will have to be eliminated by a more complete science (as Pat Churchland and others suggest)', so much the worse for them as well. Funny you should bring up the visual system - I recalled this incident and hunted down the wikipedia reference: "Another criticism comes from investigation into the human visual system. Although both eyes each have a blind spot, conscious visual experience does not subjectively seem to have any holes in it. Some scientists and philosophers had argued, based on subjective reports, that perhaps the brain somehow "fills in" the holes, based upon adjacent visual information. 
Dennett had powerfully argued that such "filling in" was unnecessary, based on his objections to a Cartesian theater. Ultimately, however, studies have confirmed that the visual cortex does perform a very complex "filling in" process (Pessoa & De Weerd, 2003)." nullasalus
Nullasalus:
For example, to “experience” a red apple takes tens of milliseconds, it is not instantaneous. This is what our evidence suggests. Do you think this is incorrect?
I repeat: You determine what is or is not experienced by whether it is or isn’t reported. The idea that something can be experienced then forgotten before it is reported doesn’t seem to register with you – and it’s not because of the data in and of itself, but because of the assumptions you bring to the data.
Nullasalus, I appreciate that this is frustrating for both of us, but it would be really helpful if you would answer my questions! I take it that your answer is yes? And I'd also appreciate it if you'd actually read my posts! I've said several times now that I think that something can be consciously experienced and then forgotten! Then re-remembered! I wrote a whole post about that, don't you remember (heh)?
“If a stimulus only lasts less than X amount of time, it will not be reported” does not itself equal “If a stimulus only lasts X amount of time, there is no experience of it”. Not unless you insist that there is no experience unless it’s reported – and then we’re back to the example of not remembering yesterday meaning I had no experience yesterday.
Well, not exactly, but perhaps I see where the roadblock is. Let me try and put the hypothesis as straightforwardly as I can, and for now, for simplicity, we'll confine ourselves to the visual modality, as it is the best studied.

If I flash a face on a screen while you are in an MRI scanner, then we can fairly reliably show that a certain region of the brain (called the "face area") will become active. However, if the stimulus is shown for a very short period of time and then masked (to erase the retinal afterimage), or if we degrade the image and show it for a very short amount of time, then we observe no activation in the "face area". We also find that whether or not an image is recalled, or even whether it influences subsequent behaviour (when used as a "priming" stimulus), is highly correlated with whether or not we see activation in that face area. A picture of a house activates a different area, and so faces and houses are useful stimuli for figuring out how long it takes for the brain to process the different stimuli.

Now I accept that it is possible that even when the brain doesn't appear to register that a face has been shown, "you" somehow nonetheless experience it but immediately forget it. It would be difficult to test this of course. But I also suggest that it is at least a reasonable working model to posit that the brain takes a short while (tens of milliseconds) to do the processing that is required to enable "face" to be experienced, either "at the time" (namely a few tens of milliseconds after presentation) or later. Certainly we can do experiments where people show neural evidence of having recognised a stimulus as a face or a house, but nonetheless cannot recall the picture when presented later; it seems reasonable on these occasions to assume they saw - experienced - the face or house, but that they did not "store" the information in such a way that it was available for access later. 
And we even know quite a bit (we think) about how this happens. As I said, it's possible that all this is bunk - that the self-consistent model we get from neuroscience is not in fact what happens! But it correlates very highly with various ways of tapping into subjective experience (behavioural performance, self-report) and there comes a point in science when the data fit the model so well, it seems odd to insist that there is something fundamentally wrong with the model.

Especially when we know, again from the visual system, that we make "forward models" of the world. You probably know that people make several saccadic (i.e. jerky) eye movements per second, and so the image of the world on the retina is constantly changing. Not only that, but only the image right at the centre of the retina (the fovea) actually registers much in the way of detail, including colour. So if our eyes were movie cameras, they'd be jerky hand-held cameras loaded with slow film that only registered colour and high-resolution detail in the centre, the rest of the field of view being recorded as looming shapes in grey-scale. But this of course is not what we see! And the reason seems to be that our brains use the information about where our eyes are going to move next to make a predictive model about how the retinal image will change, and rejig everything so that the actual eye movement is discounted. Not only that, but the visual system is set up so that anything of interest elicits an eye movement to it. So our impression is of a wide, detailed visual scene, observed as a gestalt, or simultaneously. But this cannot be the case - that image is simply not what appears on the retina - at any given time, most of it is missing.

This is what I mean by experience (from the first person perspective) being summed over time - we experience continuity, but in fact we back-project that continuity after-the-fact. 
At least, it is difficult to see how anything else could possibly be the case, given the data.
Know your assumptions and the limitations of your methods of inquiry, particularly with regards to subjective experience.
Well, sure, and the best we can do is model. But also know the limits of your subjective experience - we frequently think we see things that we cannot have seen. We have good explanations of this in neuroscience, to the extent that we can use those explanations to design artificial vision, so the model seems good. An alternative of course is that there is some unknown Stuff called consciousness that enables us to see things that our retinas are incapable of registering! But doesn't it seem more likely that the neuroscience model is correct? Elizabeth Liddle
Ciphertext,
If you accept the premise that a mind need not be an emergent property by necessity, then you have several options available to you in terms of from where a “mind” could spring forth. While that is an interesting question in and of itself, at least more interesting for me is “How did my mind spring forth?” and similarly “How did my mind become connected with my hardware?”. I am assuming that your mind is separate from my mind, at least in terms of how you and I perceive it to be.
I'm certainly not beholden to the 'mind as emergent property' hypothesis by necessity, though I do find it compelling for a number of reasons. Lizzie notes a number of those reasons above. But, given the nature of the concept, it is fruitful (to say nothing of interesting) to consider other ideas. Plus, I freely admit this is not an area I have much expertise in (though I have some), so I just find the concepts interesting to explore. Like positing whether the mind is a complex AI program (from your post above).
Perhaps, then, there would be a sufficient “neural pattern” to indicate the local storage of such a complex AI program (mind). That “neural pattern” would essentially be the executable object code (note: it wouldn’t necessarily be the source code, though an argument could be made for the coexistence of both on the same hardware) for the mind.
Indeed. Of course if such were the case, you'd think (ha!) there would be some way to locate and analyze evidence of the program. Would be interesting to do so. Doveton
Elizabeth: You ask: "do you accept that it is possible that consciousness is not a continuous flow, but that rather it consists of a series of summations of input over time?" Anything is possible. I don't accept that it is true. It is false. Consciousness is a continuous flow, through different states of consciousness. gpuccio
Elizabeth Liddle:
do you accept that it is possible that consciousness is not a continuous flow, but that rather it consists of a series of summations of input over time?
Summations of what? Who or what is doing the summing? Mung
@Doveton Post 142 RE: Abstraction vs. Emergent If you accept the premise that a mind need not be an emergent property by necessity, then you have several options available to you in terms of where a "mind" could spring forth. That is an interesting question in and of itself, but more interesting for me is "How did my mind spring forth?" and similarly "How did my mind become connected with my hardware?". I am assuming that your mind is separate from my mind, at least in terms of how you and I perceive it to be. Is the mind really a complex AI program (as we would term it) that is executed by the "Human OS" (for lack of a better term)? [a-la Battlestar Galactica remake] My term "Human OS" is what I call the autonomic system, in that it provides a role similar to a computer's OS. The OS provides application programs with access to the underlying hardware via "drivers" and coordinates the use of system resources at a macroscopic level. The CPU and other chips have their own "on-board" systems to manage the threading and prioritization of instruction sets. The applications only need to use the OS to facilitate communication with the underlying hardware (video, audio, input). Perhaps, then, there would be a sufficient "neural pattern" to indicate the local storage of such a complex AI program (mind). That "neural pattern" would essentially be the executable object code (note: it wouldn't necessarily be the source code, though an argument could be made for the coexistence of both on the same hardware) for the mind. ciphertext
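Ciphertext's "Human OS" analogy above can be sketched in code (all class and method names here are hypothetical, invented purely to illustrate the analogy): the "mind" is an application that never touches the "hardware" (the senses) directly, but only through the interface the "OS" (the autonomic system) exposes, just as an application reaches hardware only through OS drivers.

```python
# Toy sketch of the "Human OS" analogy (names are hypothetical).

class HumanOS:
    """Plays the role of the operating system: owns the 'drivers' to the hardware."""
    def __init__(self):
        self._drivers = {
            "vision": lambda: "retinal input",
            "audio": lambda: "cochlear input",
        }

    def read(self, device):
        # Applications never touch hardware directly; they go through the OS.
        return self._drivers[device]()

class Mind:
    """The 'application program': it uses only the OS interface, never the hardware."""
    def __init__(self, os):
        self.os = os

    def perceive(self, device):
        return f"experienced {self.os.read(device)}"

mind = Mind(HumanOS())
print(mind.perceive("vision"))  # the mind sees only what the OS delivers
```

One consequence the analogy makes vivid: on this picture the "mind" could in principle run on different "hardware", so long as the OS exposed the same interface, which is exactly the point at issue between the emergent-property view and ciphertext's alternative.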
Ciphertext,
I don’t think that it is necessary that the mind be an “emergent property” of the body (underlying hardware). For the same reason that I don’t believe source code is an emergent property of computing hardware.
Hmmm...ok. I think I get what your concept is. Interesting. I'll ponder this for a bit, but on first look, I like it. Doveton
For example, to “experience” a red apple takes tens of milliseconds; it is not instantaneous. This is what our evidence suggests. Do you think this is incorrect? I repeat: You determine what is or is not experienced by whether it is or isn't reported. The idea that something can be experienced, then forgotten before it is reported, doesn't seem to register with you - and it's not because of the data in and of itself, but because of the assumptions you bring to the data. "If a stimulus lasts less than X amount of time, it will not be reported" does not itself equal "If a stimulus only lasts X amount of time, there is no experience of it". Not unless you insist that there is no experience unless it's reported - and then we're back to the example of not remembering yesterday meaning I had no experience yesterday. Know your assumptions and the limitations of your methods of inquiry, particularly with regards to subjective experience. nullasalus
No, I'm saying that subjective experience is NOT like a film with infinitesimally short frames. Sorry, that was clumsy wording. I'm saying that it arises from discrete processes during which the inputs are smeared over time. For example, to "experience" a red apple takes tens of milliseconds; it is not instantaneous. This is what our evidence suggests. Do you think this is incorrect? Elizabeth Liddle
I am saying that what we call “subjective experience” is a function of memory – which in turn is a function of the integration of inputs over time. What you've said is that (x) emerges out of infinite recursive loops and that this is entirely materialist and non-reductive, then pointed to Hofstadter for explanation, who comes out as a reductionist and doesn't see 'emergence' and his loops as incompatible with reductionism. And who himself doesn't explain all that much. Not that it doesn’t exist, but that it is discrete, and that each discrete perception is smeared over time, not continuous, like a film (or like a film with infinitesimally short frame durations anyway). Er, infinitesimally short frame durations that are smeared over time? Take your pick. You realize that your move here relies on assumptions about time itself, right? Not exactly the clearest topic itself. nullasalus
Nullasalus: at no point have I said we do not have subjective experience! I'd like you to go through my post again, with as open a mind as you can, and see if you can see what I'm saying (i.e. do not assume that I am "in effect" saying something else). I am saying that what we call "subjective experience" is a function of memory - which in turn is a function of the integration of inputs over time. Not that it doesn't exist, but that it is discrete, and that each discrete perception is smeared over time, not continuous, like a film (or like a film with infinitesimally short frame durations anyway). Elizabeth Liddle
But you don’t have “your own subjective experience” once you’ve forgotten it! Although you might have it again, if you remember it! No, I can't recall a memory of a subjective experience if I've forgotten it. But I certainly have subjective experience here and now, and it's entirely possible for there to be subjective experience sans memory. Unless you make the assumption "conscious experience needs memory" of course. If we cast consciousness in this form, the questions I asked above become simply answerable. Yes, if you make a bunch of assumptions, you can answer a question. I pretty much said this myself; the key is to remember that they are assumptions. You've just told me 'I can tell you what happened in that hypothetical story if you just let me make assumptions about what happened!' No duh, Elizabeth. Seen dynamically in this way, I think consciousness becomes tractable to explanation, specifically in terms of memory, and even more specifically, in terms of a model of memory in which recall is a re-enactment of the state that accompanied the remembered stimulus, and applies not merely to retrospective states but to forward models too (although we don’t generally call forward models “memory”, but I suggest that they are closely related, as, again, data from neuroscience seems to suggest – hence the key role of the hippocampus in both memory and spatial navigation). Yeah, you always talk about how 'if I just assume these things then consciousness becomes explainable', missing the part that A) You're making assumptions, B) That you also leave out data, data that's far more primary than your personal metaphysical speculations, and most importantly C) You never actually explain much of anything relevant. You just gesture wildly and buzzword up your sentences with 'emergence' and 'recursion' and 'non-reductive!' Then when it's pointed out that you haven't really explained much of anything, we get the metaphors. 
And this is what leads us to infer that the coupling is very close indeed – so close that when we observe an absence of certain neural events, we infer that a person is unconscious, possibly irreversibly so. Congratulations, you've discovered correlates of experience and the fact that what we call the human body is tied to the human mind. Something no one - not Chalmers, not panpsychists, not neutral monists, not substance dualists, not hylemorphic dualists, not even freaking idealists - denies. Except perhaps eliminative materialists, since they dispense with the whole 'mind' thing. And throughout your list of examples, notice that everything you say works with the assumption that if it can't be reported, there was no experience. That not all experience that is actual is reported, or even reportable, doesn't seem to occur to you - someone has to remind you that what you're doing is making a model, with certain assumptions (some of them downright controversial). And when we infer irreversible unconsciousness we say that the person is dead. And when the unconsciousness ends up being reversed after the fact, you say "oops". And if a person gives reports during the time they were inferred to have been unconscious, you flail and revise. But at the end of the day, we still have subjective experience, and we still have intentionality. And your explanations for both are non-explanations - largely dogma that melts into metaphors the moment any light shines on them. nullasalus
But you don't have "your own subjective experience" once you've forgotten it! Although you might have it again, if you remember it! And that seems to me to be the key to the whole problem - I suggest that we are not so much "conscious" as that we "conch". And what we conch at any given time may be something that happened in the immediate past (a few milliseconds earlier) or something that happened in the more remote past, or even something that may or may not happen in the future. If we cast consciousness in this form, the questions I asked above become simply answerable. If a stimulus is flashed for too short a time it appears to leave no retrievable trace at all - there is no space to get in and say, "What did you see?" - and data suggest that the stimulus never got further than very minimal processing by the primary sensory systems. However, if it is flashed up for a little longer, we find that it is subsequently retrievable, after a fashion, and indeed, it might be possible to train people to infer what the prime might have been from their apparent intuitive response to subsequent stimuli. In other words, the subsequent stimuli might induce some kind of consciousness of the prime. Seen dynamically in this way, I think consciousness becomes tractable to explanation, specifically in terms of memory, and even more specifically, in terms of a model of memory in which recall is a re-enactment of the state that accompanied the remembered stimulus, and applies not merely to retrospective states but to forward models too (although we don't generally call forward models "memory", but I suggest that they are closely related, as, again, data from neuroscience seems to suggest - hence the key role of the hippocampus in both memory and spatial navigation). But I fear this discussion is stalling over a very different set of assumptions about what neuroscience can (and can't) tell us about the way we experience the world. 
I find it puzzling when people say: oh, there are correlates alright between neural events and mental events, but we can't ascertain the direction of causality. Well, yes we can - the way we infer causality in science is by manipulating a variable. And we can do this in both directions - we can manipulate mental events, by presenting task-relevant stimuli, and we can check that the task has been performed by looking at the behavioural output, and then look at the neural correlates of that mental activity. We can also manipulate neural events in various ways, by drugs, electrodes, transcranial magnetic stimulation, etc, and correlate these with the participants' subjective reports of their experience, and/or their behavioural response to a task. For instance, by timing and placing a TMS pulse carefully, we can show precisely when and where a disruption to a neural process results in disruption to task performance. We can also ask for reports of subjective experience. We can even scan people (using fMRI for instance) and ask them to note, with a button press, whenever a mental event of some sort occurs - a novel thought, an auditory hallucination, the urge to tic, whatever - and examine the concomitant neural evidence, in the case of fMRI the blood flow to a region that follows neural firing. These are not haphazard observations - they are reproducible effects, in both directions, that allow us to predict with a high degree of confidence how a manipulation of mental events is reflected in neural events and how manipulation of neural events is reflected in mental events. And this is what leads us to infer that the coupling is very close indeed - so close that when we observe an absence of certain neural events, we infer that a person is unconscious, possibly irreversibly so. And when we infer irreversible unconsciousness we say that the person is dead. Elizabeth Liddle
The data (specifically EEG data from priming experiments) suggest that if a stimulus is sufficiently brief, it may influence a subsequent decision, even though the participant has no awareness of the stimulus and the EEG trace lacks features normally associated with “late processing”. And again I point out, this doesn't show that there was no awareness of the stimulus, period. The best you can get is that they are unable to recall having such an experience at a later point. Whether they had one at the time, for whatever brief moment, is up in the air. Recall that I question the very existence of 'the unconscious' (as opposed to 'something I'm not conscious of right now'). Now, you raise the interesting point: if I subsequently forget something, was I ever conscious of it? Or, even more interestingly, let’s say I pass someone in the corridor, and do not recognise them. Then, a few yards further on, I think to myself “hey, that was Jim!” That happened to me the other day, and I apologised to “Jim” for cutting him dead. Does that mean I was aware of Jim when I saw him, subsequently forgot it was Jim, then remembered? Or was I not aware of Jim until I’d progressed a few yards down the corridor? (Dennett has an example of this, near the beginning of Consciousness Explained IIRC). Is it even a sensible question? Of course it's a sensible question. And "Who's to say?" is a sensible answer. You're asking a hypothetical question about what could have possibly taken place in a space of time that a person can't recall and asking me what happened. Here's a sensible conclusion: Talking about what was or wasn't experienced is fraught with assumption that most people miss, and discussing what 'the data shows' about first-person experience from a third-person point of view typically relies on these assumptions. But I don't have to make any assumptions when it comes to my own subjective experience - I have it. It's data, not theory. nullasalus
Well, I have to disagree, nullasalus. Or rather, I think you put your finger on a key point, but it is not the point I was making! The data (specifically EEG data from priming experiments) suggest that if a stimulus is sufficiently brief, it may influence a subsequent decision, even though the participant has no awareness of the stimulus and the EEG trace lacks features normally associated with "late processing". For example, if the word "couch" is flashed up briefly, then masked, and the subject is then asked whether the word "bofa" is a word, their reaction time will be slower than if the word "string" had been flashed up. In other words, there is evidence that a subliminally presented "prime" affects subsequent behaviour, even though the participant has no awareness of the content of the prime (as ascertained by various tests). Now, you raise the interesting point: if I subsequently forget something, was I ever conscious of it? Or, even more interestingly, let's say I pass someone in the corridor, and do not recognise them. Then, a few yards further on, I think to myself "hey, that was Jim!" That happened to me the other day, and I apologised to "Jim" for cutting him dead. Does that mean I was aware of Jim when I saw him, subsequently forgot it was Jim, then remembered? Or was I not aware of Jim until I'd progressed a few yards down the corridor? (Dennett has an example of this, near the beginning of Consciousness Explained IIRC). Is it even a sensible question? (I have my answer, but I'm interested to hear yours first :)) Elizabeth Liddle
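The masked-priming design Liddle describes above can be sketched as a toy simulation (all numbers and names here are invented for illustration; this simulates the claimed effect, it is not real data): a brief prime precedes a lexical decision on a target, and a semantically related prime slows the response even though the prime itself is never consciously reported.

```python
# Toy sketch of a masked-priming lexical-decision trial (hypothetical values).

BASE_RT_MS = 600       # hypothetical baseline lexical-decision time
INTERFERENCE_MS = 40   # hypothetical slowdown from a related prime

# "couch" primes "sofa"; the pseudoword "bofa" resembles "sofa",
# so the pair interferes with the "is this a word?" decision.
RELATED_PAIRS = {("couch", "bofa")}

def reaction_time(prime, target):
    """Hypothetical reaction time for one trial, in milliseconds."""
    rt = BASE_RT_MS
    if (prime, target) in RELATED_PAIRS:
        rt += INTERFERENCE_MS  # the subliminal prime leaves a behavioural trace
    return rt

rt_related = reaction_time("couch", "bofa")    # prime related to the target
rt_unrelated = reaction_time("string", "bofa") # unrelated control prime

# The prime was never reportable, yet behaviour differs between conditions.
assert rt_related > rt_unrelated
```

The point of the sketch is only the shape of the inference: the experiment measures a behavioural difference between conditions, and the "no awareness of the prime" claim is an interpretation layered on top, which is exactly what nullasalus disputes in the reply above.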
Because that is what the evidence suggests – that we “become conscious” of a stimulus, and its properties, over a period of a substantial number of milliseconds, and that if the stimulus duration is shorter than that, we have no conscious recollection of it, even though it may influence our subsequent choices. No, the data doesn't 'suggest' that. There is data, and then there is an interpretation within one or another model - a model often filled with a variety of assumptions to begin with. And right here you're confusing conscious experience of something with recollection of an experience - a mistake Dennett makes as well. If I can't remember yesterday, does that mean I had no conscious experiences yesterday? nullasalus
Nullasalus:
But before we get to that: do you accept the possibility that conscious experience of the world might be summed over periods of time, rather than being a continuous flow? Because that is an important underpinning to my approach (and Dennett’s, I think)
You’re essentially asking me if I accept it’s possible that I could be wrong about qualia, having experience. No, it’s not possible.
No, I did not ask you if you could be wrong about qualia. I asked you something quite different: do you accept that it is possible that consciousness is not a continuous flow, but that rather it consists of a series of summations of input over time? Because that is what the evidence suggests - that we "become conscious" of a stimulus, and its properties, over a period of a substantial number of milliseconds, and that if the stimulus duration is shorter than that, we have no conscious recollection of it, even though it may influence our subsequent choices. Elizabeth Liddle
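Liddle's "summations of input over time" claim can be illustrated with a toy accumulator (the threshold and rates are my own assumptions, not values from the data she cites): input integrates over the stimulus duration; only totals that cross a threshold become consciously reportable, while weaker traces can still bias later choices (priming).

```python
# Toy evidence accumulator (illustrative only; numbers are invented).

REPORT_THRESHOLD = 50.0  # hypothetical evidence needed for conscious report

def integrate(duration_ms, strength_per_ms=1.0):
    """Sum stimulus input over its duration (a crude linear integration)."""
    return duration_ms * strength_per_ms

def process(duration_ms):
    """Return (reportable, primes_later_choices) for a stimulus of this length."""
    evidence = integrate(duration_ms)
    reportable = evidence >= REPORT_THRESHOLD
    primes_later_choices = evidence > 0  # even sub-threshold traces have effects
    return reportable, primes_later_choices

print(process(30))   # brief flash: not reportable, but still primes
print(process(200))  # longer stimulus: consciously reportable
```

Note that the model only distinguishes reportable from non-reportable traces; whether a sub-threshold trace was nonetheless briefly experienced, which is nullasalus's objection, is left entirely open by the arithmetic.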
Nullasalus @ 123 "Hofstadter says he’s certain that a reductionistic explanation of mind is true." Mike 1962 @ 126 "That just means he’s a fool or a liar. Who buys into this drivel?" Just to quibble, 'fool' and 'liar' overlap; specifically, a 'fool' is a special type of liar. A mere liar lies episodically: he lies about specific things, generally for specific reasons. On the other hand, a fool lies systemically: he lies about the very nature of reason and of truth. That is, just as a (common) hypocrite asserts a double standard with respect to morality, a fool asserts a double standard with respect to the intellect. To accuse another of being a 'fool' is to make a moral condemnation of him; it is to accuse him of being intellectually dishonest; it is to accuse him of being an intellectual hypocrite. I mean, I do quite understand what you meant by saying, "That just means he’s a fool or a liar." Obviously, you didn't mean, "That just means he's a liar or a liar". Rather, you meant "That means he's incapable of understanding the truth, or unwilling to understand the truth"; or, in simpler terms, "That just means he's stupid, or a liar". Ilion
nullasalus, hehe, indeed. You'd think these guys would just acknowledge the philosophical brick wall, and humbly bow their heads instead of spinning vacuous drivel. Ah, the depths of human arrogance. I'm not immune to it myself. mike1962
He never says how "reflective loops" generate the consciousness that I am. He doesn't know. He's just burying himself in levels of verbal cow poo poo in an attempt to hide the dearth of real explanation. (Since there is none.) Who actually buys into this drivel? Of course, there's also the whole "the self comes into being the moment it has the power to reflect itself" thing. So, there's no self, until the self reflects itself. Then the self shows up. But the self had to exist to reflect itself, so... nullasalus
"Hofstadter says he’s certain that a reductionistic explanation of mind is true. Also, he thinks the reductionistic explanation is incomprehensible. So, we have to translate the incomprehensible into something we comprehend. Think about that for a moment: “I can’t comprehend what this means. So I have to translate what this means into something I can comprehend.”"
That, my friend, is a great example of a subtle insanity at work. Weird people. mike1962
"Hofstadter says he’s certain that a reductionistic explanation of mind is true." That just means he's a fool or a liar. Who buys into this drivel? mike1962
"the problem is how to translate it into a language we ourselves can fathom." Of course, consciousness never can be put into "fathomable" language apart from the brute experience itself. A congenitally blind man will never know what color is from a description, no matter how cleverly composed. Conscious qualia: it takes one to know one. mike1962
"The self comes into being at the moment it has the power to reflect itself."
So if I point a video camera at a mirror, is it conscious? Is any feedback loop conscious as I experience consciousness? He never says how "reflective loops" generate the consciousness that I am. He doesn't know. He's just burying himself in levels of verbal cow poo poo in an attempt to hide the dearth of real explanation. (Since there is none.) Who actually buys into this drivel? mike1962
And just to get at some of what I'm suggesting about Hofstadter... My belief is that the explanations of "emergent" phenomena in our brains - for instance, ideas, hopes, images, analogies, and finally consciousness and free will - are based on a kind of Strange Loop, an interaction between levels in which the top level reaches back down towards the bottom level and influences it, while at the same time being itself determined by the bottom level. In other words, a self-reinforcing "resonance" between different levels - quite like the Henkin sentence which, by merely asserting its own provability, actually becomes provable. The self comes into being at the moment it has the power to reflect itself. This should not be taken as an antireductionist position. It just implies that a reductionistic explanation of a mind, in order to be comprehensible, must bring in "soft" concepts such as levels, mappings, and meanings. In principle, I have no doubt that a totally reductionistic but incomprehensible explanation of the brain exists; the problem is how to translate it into a language we ourselves can fathom. There's the man on his view of consciousness, self, intentionality, etc. Notice a few things. * Hofstadter insists his view is not antireductionistic. Indeed, he's certain reductionism about the mind is true. So either Liddle (who repeatedly claims to be a non-reductive materialist) has some sharp disagreement with Hofstadter's view (which she's apparently endorsed), or she'd have to accuse Hofstadter of not even being able to properly identify whether or not he's a reductionist. * Hofstadter says he's certain that a reductionistic explanation of mind is true. Also, he thinks the reductionistic explanation is incomprehensible. So, we have to translate the incomprehensible into something we comprehend. Think about that for a moment: "I can't comprehend what this means. So I have to translate what this means into something I can comprehend." 
But if you can't comprehend something, you can't translate it - and you certainly can't know it's true. In addition, his talk of 'soft concepts' combined with his commitment to reductionism strongly implies that what he's dealing with are useful fictions. * So why does Hofstadter know this view is true? He gives a clue elsewhere: And this is our central quandary. Either we believe in a nonmaterial soul that lives outside the laws of physics, which amounts to a nonscientific belief in magic, or we reject that idea, in which case the eternally beckoning question ‘What could ever make a mere physical pattern be me?’ – the question that philosopher David Chalmers nicknamed ‘The Hard Problem’ – seems just as far from having an answer today (or, for that matter, at any time in the future) as it was many centuries ago. First, note that at least according to this, Hofstadter himself doesn't think his 'loops' solve consciousness - we're still left with the hard problem. But more than that, notice one reason he gives for stumbling in the direction of translating the incomprehensible, and waving his arms while talking about loops and emergence: Because the alternative is 'magic'. And what makes it magic? Because we have a certain picture of the world now, and Hofstadter is drawing a line in the sand, saying that this (metaphysical) picture cannot be changed. Imagine if this sort of game was played with quantum physics: "If we accept the apparent results of the Stern-Gerlach experiment, it would mean classical mechanics is incorrect. That's tantamount to a belief in magic. So, we have to reject it and hold out for an explanation consistent with classical mechanics. And we have to do this in defense of science." But if you drop Hofstadter's false dilemma - if you accept that science (particularly our current science) can be incomplete, or that materialism can in fact be wrong - a lot of these problems melt away. Maybe physics will just have to be revised in the future. 
Maybe a materialistic understanding of the world should be jettisoned. nullasalus
Mung: "Do you believe it is possible for someone to perpetuate a lie without knowing it’s a lie? I do. Does that make it less of a lie?" Or, to put it another way, had Seinfeld's George Constanza discovered a loophole in the general prohibition against lying (*) when he counseled/rationalized, “It’s not a lie if you really believe it”? One has an obligation to have done “due diligence” regarding the things one asserts; one has the obligation to have proper rational warrant for believing what one believes and especially regarding what one asserts. Thus, even if one “totally believes” something but hasn’t the rational warrant for believing it, one may indeed lie in asserting it – even if the belief is objectively true. Isn’t it curious, the things one can learn, if one thinks carefully about what one already knows? Equally curious will be the reaction to the above by persons who do not wish to understand it. (*) Not every act of lying is immoral; sometimes, morality *requires* one to lie: the famous test case being the Nazis-at-the-door looking for the person(s) you have conspired to hid from them. Ilion
Nullasalus: my position, which I believe I share with Dennett, is that “qualia” is ultimately an incoherent concept. Dennett's position amounts to the claim that qualia cannot exist, because if they do then materialism is false. And because the commitment to materialism is primary, experience must be eliminated from the picture. What makes qualia 'incoherent' to Dennett is the assumptions of materialism. But we don't need to assume materialism anyway. I accept that certain sensations seem very “raw”. But I think if we examine them we find that they are not as raw as we think they are. Qualia are sensations - experience. They're what you're denying. You don't 'examine the sensations'; what you do is redefine the mind to exclude qualia, throw in what you think replaces it, and then try to give an explanation of how your replacement could come to be. And you don't even succeed there, because it relies on an account of intentionality that itself collapses into actual incoherence. Again, even Hofstadter gives strong indications that he knows he's in a bad situation with this game and justifies it largely on a 'but the alternative is that materialism is wrong' plea. I certainly do not “deny the existence of experience” which would indeed be insane. I just don’t think we need a special word for certain kinds of experience such as “qualia”. And here comes the word game. You're not objecting to 'a special word' here, as if this is a mere argument over terminology. Qualia is the experiential, and this is what is being denied. So you replace qualia with function and some mumblings about infinity and emergence, call this experience, then get huffy when you're accused of denying experience. The alternative is that you're screwing around with the word 'qualia' - since qualia is not 'a certain kind of experience'. It's subjective experience, period. 
Back to an example I always use: If I define Bigfoot as a delusion, and point out that delusions are real - then get upset when someone accuses me of being a Bigfoot denier because 'all I've done is sort out my definition of Bigfoot - and what I define Bigfoot to be absolutely exists!', it's pretty easy to see I'm BSing. I think it's clear this is what's going on here with 'I don't deny consciousness/experience exists, I just deny that qualia exists'. Qualia is subjective experience. Nor does Dennett think there is no such thing as conscious experience. He wrote an entire book about how it can be explained. I don’t expect he’d have bothered if he didn’t think it existed And here's the familiar refrain: Ignoring that Dennett's critics (including fellow in-name materialists), after reading his book, argued that what Dennett did was explain away consciousness. And really, insofar as Dennett rules out qualia and the experiential from the start, that's exactly what he did. Again: If Dennett wrote a book on 'Bigfoot Explained', defined Bigfoot to be a delusion, and then spent the rest of his book explaining how people come to have this delusion - surprise, Dennett denies the existence of Bigfoot, title be damned. But before we get to that: do you accept the possibility that conscious experience of the world might be summed over periods of time, rather than being a continuous flow? Because that is an important underpinning to my approach (and Dennett’s, I think) You're essentially asking me if I accept it's possible that I could be wrong about qualia, having experience. No, it's not possible. And before you do the 'well you think you're right, and thinking you're right is a great way to mislead yourself' song and dance, I point out that you're not open to the possibility that experience is real, that there are qualia as opposed to nothing but function. 
I'd rather you hold your ground and justify your explanation of intentionality - namely, that all intentionality is derived. Let's draw out that show, where you say it's an infinite circle, but that's okay because emergence and also look at waves. I'd also like to see you justify your denial of qualia, since qualia is subjective experience, but you claim not to deny subjective experience. So either you are in fact denying subjective experience, or you're botching qualia by defining it to be something other than subjective experience. nullasalus
Liz: Nor does Dennett think there is no such thing as conscious experience. He wrote an entire book about how it can be explained.
One of the worst wastes of paper I ever spent good money on. He "explains" it by redefining the word to something else, then knocking that down. My conclusion: either the man is wicked, lazy or insane..., or a zombie. Whatever he's trying to explain is not what I experience as an instance of consciousness. mike1962
Nullasalus: my position, which I believe I share with Dennett, is that "qualia" is ultimately an incoherent concept. I accept that certain sensations seem very "raw". But I think if we examine them we find that they are not as raw as we think they are. I certainly do not "deny the existence of experience" which would indeed be insane. I just don't think we need a special word for certain kinds of experience such as "qualia". Nor does Dennett think there is no such thing as conscious experience. He wrote an entire book about how it can be explained. I don't expect he'd have bothered if he didn't think it existed :) You write:
You leave out that the “concept called “I”" (along with the ‘sense of time’) can only be had, under your view, by derivation – meaning, you only ‘have the concept of I’ by means of a third party deriving that you have the ‘concept of I’. But that third party only ‘derives that you have the concept of I’ by virtue of another party deriving that they are deriving that you have the concept of ‘I’ – and so on. When I point this out, your response is ‘yes well it’s infinite circularity, I accept that, no problem there’.
Well, there isn't. And there is no third party either. But before we get to that: do you accept the possibility that conscious experience of the world might be summed over periods of time, rather than being a continuous flow? Because that is an important underpinning to my approach (and Dennett's, I think). If you think this is false, then perhaps that's the next thing to discuss. Except that I'm going to be out of action for the next few days. But I'd certainly like to know your response. Elizabeth Liddle
Null: “you don’t believe that there is qualia, or conscious experience. There is no “experience”, there is only function.” I think the robot future-fantasy analogy clearly illustrated this view. junkdnaforlife
Well, you can probably look up the physiological reactions but I would say (and Nullasalus would tell me this is illogical, but I’ll say it anyway, because I don’t think it is) that pain is the knowledge that we are in the grip of an aversive reaction. Which requires a concept called “I” and a sense of time. I think integration of input over time is absolutely critical to conscious experience, and that although we perceive it as a flow, it operates as a series of discrete summations. There is fairly good evidence to support this.

See, you say "Nullasalus would tell me this is illogical". You don't mention why I lodge the objections I do. You leave out that the "concept called "I"" (along with the 'sense of time') can only be had, under your view, by derivation - meaning, you only 'have the concept of I' by means of a third party deriving that you have the 'concept of I'. But that third party only 'derives that you have the concept of I' by virtue of another party deriving that they are deriving that you have the concept of 'I' - and so on. When I point this out, your response is 'yes well it's infinite circularity, I accept that, no problem there'.

You talk about conscious experience, but you leave out the part that - unless you sharply disagree with Dennett - you don't believe that there is qualia, or conscious experience. There is no "experience", there is only function. You say "there is fairly good evidence to support this" - but your "evidence" in this case is this: 'This is the only thing, given my metaphysics, that could be taking place. Therefore, rather than amend my metaphysics in any way or be open to that possibility, I will interpret all data in light of this and call it evidence.' 
Yes, I say that asserting that all intentionality is derived (so that if I think 'I'm going to the supermarket', I only think this in virtue of another observer, perhaps internal, interpreting (brain processes, what-have-you) as 'it thinks it is going to the supermarket' which in turn only means this by virtue of yet another observer interpreting that as 'it thinks it thinks it is going to the supermarket') and then trying to defend it by saying 'emergence! infinite circularity!' is both incoherent and a ridiculous dodge. Even Hofstadter gives off the impression that he knows it's ridiculous - he justifies his taking this route in large part because the alternatives are too religious for his liking, and we can't have that. (The man's also a reductionist about the mind by his own admission, though his reductionism comes with a real telling caveat.) Yes, I think denying the existence of experience, of qualia, is insane - even many materialists would agree (and many would agree that the denial of original intentionality is just as insane). nullasalus
Elizabeth Liddle:
Mung, I do like talking to you when you aren’t accusing me of lying.
In the States here we have a phenomenon known as "9/11 Truthers." http://www.911truth.org/
TO EXPOSE the official lies and cover-up surrounding the events of September 11th, 2001 in a way that inspires the people to overcome denial and understand the truth; namely, that elements within the US government and covert policy apparatus must have orchestrated or participated in the execution of the attacks for these to have happened in the way that they did.
No doubt these people firmly believe the "truth" of what they espouse. Do you believe it is possible for someone to perpetuate a lie without knowing it's a lie? I do. Does that make it less of a lie? Mung
Mung:
Elizabeth Liddle:
Because as conscious, planning organisms, we need to know we are in the grip of an aversive reaction in order to decide how best to proceed.
I don’t know what sort of statement that is, but it does not sound like an evolutionary account.
It isn't. But it's perfectly consistent with evolutionary mechanisms.
But I’m trying to get down to the fundamentals.
Cool.
So you’re not saying that pain itself is an aversion mechanism?
No, I'm saying it is the result of an aversion mechanism, operating within a conscious system.
But are you saying that underneath pain there is an aversion mechanism?
Yes. Or phylogenetically earlier. Possibly developmentally earlier as well. In that sense "underneath".
So how far down the chain of being do these aversion mechanisms extend? Bacteria? Can a bacterium sense danger or potential harm and seek to avert it? Is a brain required for the existence of aversion mechanisms?
Well, as you probably realise, I don't subscribe to the Chain of Being stuff anyway. But it extends at least as far as nematodes. Not sure about bacteria, but I wouldn't be surprised. Also some plants (mimosa being a famous example). Definitely sea anemones. It's a pretty useful trick. Mung, I do like talking to you when you aren't accusing me of lying. Can it stay this way? Elizabeth Liddle
@Doveton post 111 RE: Abstraction vs. Emergent Property

The idea of "abstraction" I'm trying to convey is hopefully explained by a metaphor from software programming. The program (or assemblage of source code) that executes on a hardware platform is at a level of abstraction from that platform. It isn't a part of the platform; the two are mutually exclusive. I don't have to communicate with the hardware platform directly to make the hardware platform perform some action; I interact with the platform indirectly. I write computer programs using one of many existing programming "languages" - it beats having to write the machine code directly! A compiler/interpreter (itself another software application) sits between my program source code and the hardware. That compiler translates my source code (along with linking the code to necessary libraries of functions) into machine code understandable by my hardware platform.

Now, the abstract nature of source code renders it useless without coexisting hardware, hence my acceptance of an "abstract mind" requiring collocation of the mind with the body. I don't think that it is necessary that the mind be an "emergent property" of the body (the underlying hardware), for the same reason that I don't believe source code is an emergent property of computing hardware. ciphertext
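[Editor's note] ciphertext's layering metaphor can be made concrete with a minimal Python sketch. All names here (HardwareA, HardwareB, program) are hypothetical, invented purely for illustration: the point is that the "program" is written against an abstract interface and never touches any platform directly, which is the sense in which it sits at a level of abstraction from the hardware that executes it.

```python
class HardwareA:
    """One concrete 'platform'."""
    def execute(self, op):
        return f"HardwareA ran {op}"

class HardwareB:
    """A different platform exposing the same interface."""
    def execute(self, op):
        return f"HardwareB ran {op}"

def program(platform):
    # The same abstract "source code": it issues operations against an
    # interface, never addressing any particular hardware directly.
    return [platform.execute(op) for op in ("load", "add", "store")]

# The identical program runs unchanged on either platform, yet it is
# useless without *some* platform to run on - ciphertext's point about
# abstraction still requiring collocation with hardware.
print(program(HardwareA()))
print(program(HardwareB()))
```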
Elizabeth Liddle:
Because as conscious, planning organisms, we need to know we are in the grip of an aversive reaction in order to decide how best to proceed.
I don't know what sort of statement that is, but it does not sound like an evolutionary account. But I'm trying to get down to the fundamentals. So you're not saying that pain itself is an aversion mechanism? But are you saying that underneath pain there is an aversion mechanism? So how far down the chain of being do these aversion mechanisms extend? Bacteria? Can a bacterium sense danger or potential harm and seek to avert it? Is a brain required for the existence of aversion mechanisms? Mung
Yikes! I need to be more careful with my mouse. I submitted prior to completing my post. Here is the remainder of my thought. ------------------------------------- cont.

Here again, we need to examine whether we really mean a suitably "high" level of abstraction or true non-physical existence. Given that your question assumes an "immateriality" of the mind, I would have to agree that "in principle" any person's mind could interact with any other person's body (or, expressed differently, other hardware). Though for this to occur we must also constrain the concept of "mind" such that it (the mind) is "standard" and not merely common. Similarly, the body (hardware) would need to have a standard form, and not merely a common form.

The distinction I am making can be thought of in this manner. Standard parts (or more accurately, standardized parts) can be swapped between hardware systems (computers, cars, appliances, etc.) quite easily. Indeed, that is one of the reasons for using standard parts: it reduces maintenance and repair costs. Common parts aren't so easily swapped. Computer parts used to be this way, in that computers used to be specially built. A lot of supercomputer systems are still this way, meaning you cannot simply use "off the shelf" components. Software development can still be this way too, though most software development efforts prefer to use standard software development paradigms and standard software "architectures" (OSGi compliant). Prior to the advent of standard hardware communication protocols, your DEC Alpha wouldn't take parts from your Cray YMP or your IBM System 360. Because of the way those systems were designed, they still don't use standard parts. Indeed, your IBM System 360 won't utilize parts from an IBM BladeCenter either, though the IBM BladeCenter will use parts from Intel, AMD, Nvidia, Fujitsu, Seagate, and Western Digital. It's the same way with automobiles for some of their parts. ciphertext
Ciphertext,
If the mind is immaterial and interacts somehow with the material body, should it matter where the mind is in relation to the body in order to interact with it?
I don’t know. I think there are three ways to approach an answer to that question. The first method would depend upon what we mean when we say “immaterial”. I wonder sometimes if what we really mean is “abstraction”, similar to how I used “abstraction” in the ongoing thread about DNA as code. Perhaps the “mind” is really just an abstraction some level(s) above the hardware (the brain structures). In that sense, the mind must be collocated with the hardware it is using as the basis for its execution. This is a very complex notion - abstraction - one that has typically been exhibited by intelligent agents (humans only, for the most abstract thought). We should be able to determine whether the mind is an abstraction in much the same way we determine the difference between a computing device’s “OS and allied applications” and the device’s hardware systems.
Your use of the term "abstraction" strikes me as little different from "emergent property". Feel free to elaborate if you meant it differently. That said, I like the approach you take with the abstraction. If the mind is an abstraction, then I'd think that co-location is necessary. I also think that control of someone else's body would not be possible since the mind would be an abstraction of only one person's body. I'll go along with that.
As a second approach, if we truly do mean “non-physical” by using the term immaterial, then I don’t believe that a mind would be required to collocate with the hardware. The converse isn’t required either. Certainly, one can make an assertion that a “simulated” environment could be an example of the “guiding mind” being removed from the hardware subjected to the experience. In the most rudimentary form, you have remote controlled vehicles (i.e. the Predator unmanned aerial vehicle, RC cars, and teleconferencing [in a way]). In all of these examples, we show that a “guiding mind”, physically separated from the hardware subjected to an experience, can control the operation of that hardware. However, while the examples are physically separated by “distance”, they are still connected physically, meaning that they are tethered to each other via “the stuff of existence” (atoms, electrons, forces, etc…) for at least as long as they are interacting. You could even say that they are still tethered by mere “existence” (gravity, anyone?).
Wow, you hit some of the examples I thought of when I arrived at this question. Yes... radio controlled model aircraft and drones and the like certainly are "tethered" via control waves to some source "mind", and given the type of particles that mind is made of, mental control can extend across distance, removing the need for co-location. If our minds are, as some seem to posit, non-physical, should this not be possible for our body and mind arrangements? But this then made me think: if our minds truly are non-physical and require no actual co-location, why don't we see one person's mind controlling someone else's body? I've certainly never heard any stories of such events. There are times in radio model aircraft flying when the radio signals get crossed and someone finds they are controlling someone else's plane. But then in model radio aircraft there are limited frequencies. Could it be that minds and bodies are tied together by some specific "frequency"? It's possible I suppose, though I suspect the answer is significantly simpler: the implication of the evidence points to an arrangement more like scenario 1 above rather than a truly non-physical mind.
The third avenue of exploration may involve the current “incompleteness” of our knowledge of the nature of reality. Perhaps there are still as-yet-undiscovered forces which allow for the communication of immaterial objects/entities with physical objects/entities, much as there isn’t a complete picture yet with respect to what provides a particle with “mass”. Perhaps the concepts of “immaterial” and “material” will be reviewed for completeness at a later date, as we amass more knowledge on the nature of reality.
Could be. Not a very satisfying answer, but sometimes you just have to shrug and accept such.
Related to the above, if it does not matter where the mind is in relation to the body for interaction to take place, is it possible for any person’s mind to interact with any other person’s body? If not, why do you suppose that is?
Here again, we need to examine whether we really mean a suitably “high” level of abstraction or true non-physical existence
True. Hence my comment above. :) Doveton
Lizzie,
By this I mean, if pain is mechanical, why should we even need to be conscious of it?
Because as conscious, planning organisms, we need to know we are in the grip of an aversive reaction in order to decide how best to proceed.
Another point to consider: being conscious of pain allows an organism to plan to avoid pain or plan on accepting pain in certain circumstances. For example, if one has the conscious capacity to see a wasp and associate the visual cue with a previous pain event, one does not have to be stung before trying to move away. On the flip side, being conscious of pain allows one to recognize that some pain - say the pain derived from a needle poked into the skin - should not be avoided if the pain is the result of trying to remove a sliver. Doveton
@Doveton Post 106 RE: Immateriality of mind
If the mind is immaterial and interacts somehow with the material body, should it matter where the mind is in relation to the body in order to interact with it?
I don't know. I think there are three ways to approach an answer to that question. The first method would depend upon what we mean when we say "immaterial". I wonder sometimes if what we really mean is "abstraction", similar to how I used "abstraction" in the ongoing thread about DNA as code. Perhaps the "mind" is really just an abstraction some level(s) above the hardware (the brain structures). In that sense, the mind must be collocated with the hardware it is using as the basis for its execution. This is a very complex notion - abstraction - one that has typically been exhibited by intelligent agents (humans only, for the most abstract thought). We should be able to determine whether the mind is an abstraction in much the same way we determine the difference between a computing device's "OS and allied applications" and the device's hardware systems.

As a second approach, if we truly do mean "non-physical" by using the term immaterial, then I don't believe that a mind would be required to collocate with the hardware. The converse isn't required either. Certainly, one can make an assertion that a "simulated" environment could be an example of the "guiding mind" being removed from the hardware subjected to the experience. In the most rudimentary form, you have remote controlled vehicles (i.e. the Predator unmanned aerial vehicle, RC cars, and teleconferencing [in a way]). In all of these examples, we show that a "guiding mind", physically separated from the hardware subjected to an experience, can control the operation of that hardware. However, while the examples are physically separated by "distance", they are still connected physically, meaning that they are tethered to each other via "the stuff of existence" (atoms, electrons, forces, etc...) for at least as long as they are interacting. You could even say that they are still tethered by mere "existence" (gravity, anyone?). 
The third avenue of exploration may involve the current "incompleteness" of our knowledge of the nature of reality. Perhaps there are still as-yet-undiscovered forces which allow for the communication of immaterial objects/entities with physical objects/entities, much as there isn't a complete picture yet with respect to what provides a particle with "mass". Perhaps the concepts of "immaterial" and "material" will be reviewed for completeness at a later date, as we amass more knowledge on the nature of reality.
Related to the above, if it does not matter where the mind is in relation to the body for interaction to take place, is it possible for any person’s mind to interact with any other person’s body? If not, why do you suppose that is?
Here again, we need to examine whether we really mean a suitably "high" level of abstraction or true non-physical existence ciphertext
Mung - yes, indeed, interesting questions. Here's a shot at some answers:
So I have some painful questions I’d like to inject. Which came first, pain or consciousness? Pain or conscious awareness of pain?
My hypothesis is that aversive reactions are likely to have preceded consciousness of aversive reactions, but I would say that there is no point in talking about pain unless something is feeling it. But I suggest that it grew out of aversive reactions, which would have a selective advantage.
Is it possible to speak of pain if there is no brain? IOW, can organisms which have no brain experience pain?
No.
Can we dispense with minds and brains and consciousness and still have something we can call pain?
We can talk about aversive reactions, but not pain IMO.
If the evolutionary view presented by Elizabeth is correct, one would think so.
Only in the sense I gave above. I think consciousness is also selectively advantageous, though, as is consciousness of pain. But that's only possible with a brain. IMO.
What is the necessary connection, if there is any, between pain and conscious awareness of pain? By this I mean, if pain is mechanical, why should we even need to be conscious of it?
Because as conscious, planning organisms, we need to know we are in the grip of an aversive reaction in order to decide how best to proceed.
If we come into contact with a hot plate we could be aware of our hand making a jerking motion away from the plate, but feel no pain as a conscious experience, right?
Yes, that would be cool. I wish it worked like that. But let's consider another scenario: we have no pain, just a jerking reaction; but we are trapped against the hot plate by a heavy object, preventing the hand from detaching from the hot plate. How do we know that we need to take some other action? Because we do - we need to do all we can to get our hand away from that source of heat. This is the kind of thing consciousness allows us to do, IMO - to react flexibly to complicated scenarios. Moths just fly into candle flames and die. We can say: that candle flame looks so beautiful, but must resist, must resist..... Consciousness is thus intimately connected with volition IMO - with free will, no less, i.e. flexibility of response, with options to weigh instant gratification against future benefit. But the quid pro quo is that stuff hurts.
Upon the evolutionary view, one would need to think that pain came first, and only later did it get wired into the brain and consciousness of it.
Well, I think the aversive reaction came first, and only later, with the advent of the capacity to plan, reflect, choose, did pain manifest itself.
What really happens when we experience pain? Say I step on something sharp. Obviously, something must happen first at the cellular level, right? What happens, and how does that get translated into “pain”?
Well, you can probably look up the physiological reactions but I would say (and Nullasalus would tell me this is illogical, but I'll say it anyway, because I don't think it is) that pain is the knowledge that we are in the grip of an aversive reaction. Which requires a concept called "I" and a sense of time. I think integration of input over time is absolutely critical to conscious experience, and that although we perceive it as a flow, it operates as a series of discrete summations. There is fairly good evidence to support this.
Are cells conscious? Can they react to something in the environment that has the potential to harm them?
No, and yes, IMO (in that order).
That’s basically the definition of pain Elizabeth is using, right?
No - but it's the underpinnings of it. There's another necessary part IMO. Elizabeth Liddle
Mung,
Darwinian evolution only accounts for what will benefit our offspring. It’s forward looking then is it?
In truth, evolution doesn't "look" anywhere. It doesn't "try" to impart any effect on anything. It isn't planning anything either. Evolution is merely the term we give to the process of biological change and the adaptation (or difficulty) the change presents for a group of organisms in a given environment. Evolution doesn't know where it's going, but the options available for change are, to some extent, determined by where it's been.
So I have some painful questions I’d like to inject. Which came first, pain or consciousness? Pain or conscious awareness of pain? Is it possible to speak of pain if there is no brain? IOW, can organisms which have no brain experience pain? Can we dispense with minds and brains and consciousness and still have something we can call pain? If the evolutionary view presented by Elizabeth is correct, one would think so. What is the necessary connection, if there is any, between pain and conscious awareness of pain? By this I mean, if pain is mechanical, why should we even need to be conscious of it? If we come into contact with a hot plate we could be aware of our hand making a jerking motion away from the plate, but feel no pain as a conscious experience, right? Upon the evolutionary view, one would need to think that pain came first, and only later did it get wired into the brain and consciousness of it. What really happens when we experience pain? Say I step on something sharp. Obviously, something must happen first at the cellular level, right? What happens, and how does that get translated into “pain”? Are cells conscious? Can they react to something in the environment that has the potential to harm them? That’s basically the definition of pain Elizabeth is using, right?
Cool set of questions, Mung. Pain is very well understood at the mechanical level. When an object impacts an organism at some location on the body, two main types of nerve fiber are stimulated - thin A-delta fibers and C fibers - that carry a signal to the spinal column, which then transmits the signal to the brain. The A-delta fibers register the intensity of the initial body contact (the initial damage pain) while the C fibers register the site-damage sensation (the dull continuous pain of any damaged area). These sensations are the result of chemical releases (such as the Cox-1 and Cox-2 enzymes released at the damage site that initiate the process of swelling to help with healing) and electrical stimulation of the nerve fibers, transmitted to the thalamus, which then signals the various centers appropriate for the given pain/damage.

The interesting question you raise, though, is whether "pain" exists apart from consciousness. Certainly some organisms without higher cerebral functions react (generally by moving away from the stimulus) to what we think would normally register as pain, but are they experiencing "pain"? I have no idea. I doubt it given what we do know about pain, but I simply don't know. The point is, though, that within the human pain system the sensation of "pain" is arrived at through the interaction of a variety of brain areas and nerve systems and involves a number of chemical compounds. It's not clear to me that "pain" can be experienced without them. Doveton
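[Editor's note] The two-fiber story Doveton tells can be illustrated with a toy calculation. The conduction speeds below are rough textbook orders of magnitude (A-delta fibers conduct on the order of 5-30 m/s, unmyelinated C fibers on the order of 0.5-2 m/s), not measurements, and the function name is invented for illustration: it shows why a single injury produces a sharp "first pain" followed by a dull "second pain".

```python
# Rough, assumed conduction velocities (textbook order of magnitude):
A_DELTA_SPEED = 15.0  # m/s, thinly myelinated A-delta fibers (sharp pain)
C_FIBER_SPEED = 1.0   # m/s, unmyelinated C fibers (dull pain)

def arrival_times(distance_m):
    """Seconds for each fiber's signal to travel a given distance."""
    return {
        "sharp (A-delta)": distance_m / A_DELTA_SPEED,
        "dull (C fiber)": distance_m / C_FIBER_SPEED,
    }

# e.g. a stubbed toe roughly 1 m from the spinal cord: the sharp signal
# arrives well before the dull one, so the two pains are felt in sequence.
print(arrival_times(1.0))
```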
Interesting discussion. A question to consider: If the mind is immaterial and interacts somehow with the material body, should it matter where the mind is in relation to the body in order to interact with it? Related to the above, if it does not matter where the mind is in relation to the body for interaction to take place, is it possible for any person's mind to interact with any other person's body? If not, why do you suppose that is? Entertaining such questions might be fruitful for conceptualizing how the mind interacts with the body and what, if any, limits exist in that arrangement. Doveton
Fascinating discussion. I'm not being sarcastic either. It is particularly fascinating because, from one view, you are trying to define human perception and the subsequent processing of that perception as if from outside the "apparatus" responsible for both. That introspection should be quite difficult, because you would be using the very tools you are attempting to measure as the measuring devices! I know of no "standards body" that has developed standard "weights and measures" to which you could "calibrate" your measuring devices. Thus, if you are operating without calibrated devices (as I believe to be the case), then all measurements are relative, are they not? It reminds me of hospital procedures that have the patient specify their "pain" on a level of 1-10. The chart that defines 1-10 is a series of "smiley" faces depicting various stages of discomfort, from smiling all the way to a contorted, crying, frazzled face. The real question is, what makes such measurements subjective in nature? Most likely the fact that each human "system" is unique. The "wiring" (a euphemism for the nervous system) is common, but not really standard. The "motherboard" complete with CPU and associated microprocessors (the brain structures) is of common construction, but not standardized. The "OS and application programs" are also common in base functionality, though not necessarily so in terms of their extended functionality. As is the case with the other "systems", the "software system" is by no means standardized. All of these systems have the added differentiation of being highly specialized to the unit in which they operate. Hence the inability to make "standardized" parts. The best you can do is make "common" components. ciphertext
mike1962, Here's a splendid quote from Galen Strawson regarding talk of denying experience.

I think we should feel very sober, and a little afraid, at the power of human credulity, the capacity of human minds to be gripped by theory, by faith. For this particular denial is the strangest that has ever happened in the whole history of human thought, not just the whole history of philosophy. It falls, unfortunately, to philosophy, not religion, to reveal the greatest woo-woo of the human mind. I find this grievous, but, next to this denial, every known religious belief is only a little less sensible than the belief that grass is green.

nullasalus
Pain, pleasure, color, sound, smell, all these qualia are primary. Thoughts you have about them are inferences. Inferences can be wrong but conscious experience can never be "wrong." You might be wrong about what is causing pain, but can't be "wrong" about the experience of pain. mike1962
Darwinian evolution only accounts for what will benefit our offspring. It's forward looking then is it? So I have some painful questions I'd like to inject. Which came first, pain or consciousness? Pain or conscious awareness of pain? Is it possible to speak of pain if there is no brain? IOW, can organisms which have no brain experience pain? Can we dispense with minds and brains and consciousness and still have something we can call pain? If the evolutionary view presented by Elizabeth is correct, one would think so. What is the necessary connection, if there is any, between pain and conscious awareness of pain? By this I mean, if pain is mechanical, why should we even need to be conscious of it? If we come into contact with a hot plate we could be aware of our hand making a jerking motion away from the plate, but feel no pain as a conscious experience, right? Upon the evolutionary view, one would need to think that pain came first, and only later did it get wired into the brain and consciousness of it. What really happens when we experience pain? Say I step on something sharp. Obviously, something must happen first at the cellular level, right? What happens, and how does that get translated into "pain"? Are cells conscious? Can they react to something in the environment that has the potential to harm them? That's basically the definition of pain Elizabeth is using, right? Mung
Null: Likewise, I’d question whether memory is a ‘key component’.
Anyone who's ever witnessed the birth of a baby can see that a baby feels pain and is responsive to painful stimuli. Doubtful they have memory of anything. They have no idea what pain is, or why they have it, or what it is related to. They just feel it. Waaaaaaa! mike1962
Liz: I guess experience could be “fundamental” in our universe, but the close correlation between mental experience and neural activity suggests at the minimum that it has a very close relationship with brains.
No doubt. But when I am at the movie theater there is a close correlation between my conscious experience and the Big Screen. That doesn't mean that screens cause consciousness, and it doesn't mean that brains cause consciousness either. All we know so far scientifically is that brain states correlate with conscious experience. There is not one whit of scientific evidence that consciousness is an epiphenomenon of brains. mike1962
No, we can’t be wrong about what we are aware of – what we are aware of is what we are aware of, no more no less. If I am aware of pain, I am aware of pain. ... But that’s exactly how far you are experiencing pain – to the limits of awareness! Of course you aren’t mistaken. You are aware of pain. That’s all you are aware of.

Then you, contra Dennett, support the existence of qualia. Or you're dealing in metaphors, and you don't actually agree with me after all. One or the other.

Evolutionarily speaking, the ability to feel pain is an advantage. The inability to feel pain is a disorder. Of course that doesn’t mean that anaesthetics are a bad thing – but it would be very bad if, say, your feet were continuously anaesthetised. You’d soon seriously damage your feet.

Says who? This is a shade away from saying 'things that don't experience pain could never survive because they'd never avoid things which harm them'. I'm playing a video game lately, Demon's Souls. Great game, difficult. There are characters who, if you keep hitting them with a sword, will run away from you. Are they experiencing pain? Do I need to bring in qualia to describe what's going on there? You earlier said "What pain is for in an animal is to eject the animal from a dangerous object." But pain doesn't 'eject the animal from a dangerous object'. Movement does. And no, 'evolutionarily speaking', that's not correct. If I'm trying to evolve something which does not experience pain, not experiencing pain is an advantage. If there's a situation where not experiencing pain is advantageous, there's the tautology - it's advantageous. Not a disorder.

I guess experience could be “fundamental” in our universe, but the close correlation between mental experience and neural activity suggests at the minimum that it has a very close relationship with brains.

Does the fact that all of our knowledge of the world is in the form of thought suggest at minimum that the universe is mental a la Berkeley? 
Does the fact that only humans are able to use language to communicate 'I am in pain' suggest at minimum that only humans feel pain? You're tying experience to the ability to report. This is a 'looking for your keys under the streetlamp because that's where the light is' problem. Moreoever we know that pain is accompanied by physiological “fight or flight” responses, which is at least consistent with my hypothesis. Consistency is cheap. All the data is consistent with the design hypothesis, the panpsychism hypothesis, the idealism hypothesis, and more. We also can have pain that we neither fight nor fly from, but simply endure. We have pain that we reflect on. Sure, you can bring in the evolutionary framework again - 'well, sometimes pain isn't connected to fight or flight - entirely consistent with Darwinism!' Entirely consistent with design too. Or even designed evolution. Indeed, again in my own experience, the “rawest” pain I have ever experience has been when I’ve been closest to unconsciousness – in the recovery room after surgery, for instance, or while delirious. And a panpsychist could question whether this 'unconscious' state really exists, as opposed to a state of experience without memory, or without retained memory. . And actually, I think you hit a key point with “reportability”. I suggest that a second key component of the “raw” experience of pain is memory (self-reporting if you like, but at scarcely symbolic level, perhaps not symbolic at all). But I'm not agreeing that 'reportability' is a key component of experience - I'm questioning that, now and previously. Likewise, I'd question whether memory is a 'key component'. I don’t know about you, but I’d be reluctant to withhold the notion that the poor thing was in pain and distress, and that I should tell it not to worry about the tea, just concentrate on sorting itself out, and I’d bring it a screwdriver and nice a dehumidifier pronto. 
A pragmatic attitude is not an explanation, or even evidence of an explanation. Nor is an emotional attitude. You don't need to hypothesize about a futuristic technology, we know not what, which has you reflexively ascribing mental states and experience to that which you can't certainly verify is actually having experience. It's just "the problem of other minds" all over again. So the sensation of “red” may be the result of all kinds of motor programs associated with that colour, as will as physiological autonomic responses to stimuli of that colour – excitement (an edible berry! meat!) fear (I’m bleeding!) warmth (fire!) fear again (fire!) and that the “raw feel” of redness is the direct re-input of the output from various sub-execution courses of action, modified by context (is it a flickering red? Red on green? Red on flesh), felt as a gestalt, but comprised of a smorgasbord of motor and autonomic outputs reentred as inputs. Lots of words, zero explanation. This basically boils down to 'Gosh, the brain is complex. If you get complex enough, maybe experience jumps out. I don't know how it could, but...' Or maybe we've misconceived matter, and experience is fundamental. Or maybe a mechanistic depiction of matter is flawed and should be discarded for another view. Or maybe substance dualism is right after all. And worse, you are - like it or not - back in the intentional muck with all this. Models are models of something, an intentional concept. Simulations are simulations of something, also intentional. But you take a position where the only way things 'model', or the only way things 'simulate', is in virtue of our assigning that meaning to them. Back to the example of how a brain 'models the future' the way a rock on the ground 'represents Dallas' - it does so in virtue of another mind's derivation only on your view. And the other mind's derivation only derives - really, is only a mind - in virtue of yet another derivation. And on we go. 
That applies to all of your examples. "A berry!" "Meat!" All derivations, which are derivations, which are... Again: Yeah, I know. You think vicious regress is okay. I disagree. Oh, it would fit with an imperfect Designer OK. Or several designers. Or an incompetent designer. Or a malign designer. Or even, I guess, a designer who wanted us to feel pain because we need to know that pain hurts in order to learn compassion, or learn how truly we are forgiven. Or something. I’m sure a theology could be made to fit. And that eliminates the suggestion that Darwinism has some kind of advantage versus a design hypothesis here. I’m hoping what I say here is not dependent on Darwinism, it just seems to make most sense to me that way. But then again, I grew up with Darwinism (and theism) and never saw a conflict. Still don’t. You've already admitted that your hypothesis "is fairly firmly embedded in an evolutionary framework, and I would concede that it makes less sense outside it.", so I don't see how you can say that what you're saying isn't dependent on Darwinism. As for theism's compatibility with Darwinism, that depends on how both are defined. Common definitions of Darwinism preclude traditional views of theism, and vice versa. And we do see them. Darwinian evolution only accounts for what will benefit our offspring. It accounts for pretty much anything in terms of whatever degree of function quality, especially when Darwinism is expanded to mean 'evolution, period'. No, I’m saying that we experience it as pure pain – as a gestalt. That doesn’t mean that the gestalt isn’t the net sum of conflicting drives. And indeed, if we do start to conceptualise pain, If we start to conceptualize pain, then what we're dealing with is a concept, not pain itself. We're discussing pain at this very moment - we're using it as a concept. That is not identical - it is obvious not identical - to the experience of pain. 
And talk of pain as 'the net sum of conflicting drives' is as useful as talk of pain as the result of a particular symphony of bug farts. There's still the question of what this could possibly mean, and appealing to complexity and emergence won't do the job. Let’s leave Dennett out of here for now, shall we? Although I did say that Dennett’s intentionality was related to qualia Which it is. But let’s not go there right now. No can do. If you're bringing up intentions, then what intentions are is the next step then and there. That puts intentions-as-derived, along with Dennett's stated denial of qualia, as right on the menu. I'm not going to nod my head and say 'Sure, there are programs in the brain, these programs lead to certain outputs' and holster the point that for Dennett, the only 'programs' that exist in the brain are programs only in virtue of another mind deriving them to be such. And I agree – to the extent that we are aware. But I suggest that if we could drill down to “what is it like to feel pain?” a la Nagel and his bat, the question isn’t quite as impossible as it sounds at first. We don’t do it when we are in the throes of it, perhaps (not without Shamanic training anyway), but I suggest that in repose/on reflection, we can perhaps parse “pure” pain into a desperately urge to leave the present place and time, And again: The concept of pain is not pain, anymore than looking at a definition of pain is 'seeing pain'. To "parse pain" is to deal with a concept. And if you claim that pain is nothing but a parsed concept, then you're disagreeing with me about pain being raw experience rather than 'being of or about something'. So all the "I agree"s were just for show. That sort of move is real tiring. But at least we have reached the point where we have a simple disagreement about our premises It seems you knew this the moment I answered your question. 
Why you decided to only get to this point after trying to rephrase what my stance into its opposite multiple times, repeatedly saying you agree with me when you knew you had a (fundamental, no less) disagreement, is the stuff of wonder. I think my view has support from independent evidence. But you could be right. Maybe experience is just ground zero. I think just the circulating of an infinite loop. It has support from independent evidence so long as you start with a materialist framework and rule out anything outside of that framework. But hey, that's just another kind of loop, right? Unless what you mean by evidence is consistency. But consistency is cheap. nullasalus
Nullasalus:
Well, not in my sentence above. And I certainly don’t think you can be “mistaken” about pain. I totally agree that it isn’t a theory. That’s why I said: pain is just pain, as far as we are aware.
“As far as we are aware” implies that we can be mistaken. If I ask ‘What are these nails for?’ and X replies “For hanging pictures from, as far as I am aware”, the point of the qualification is that X could be wrong about it. Maybe the nails are for something else. Maybe they’re not for anything at all.
That's why I said "as far as we are aware". It was a double entendre if you like (but I hinted as much). No, we can't be wrong about what we are aware of - what we are aware of is what we are aware of, no more no less. If I am aware of pain, I am aware of pain. That's the sense in which it is raw. But it may be more than that below the awareness level (as I tried to explain).
There’s no possibility of being mistaken regarding experience itself. If I’m experiencing pain, I’m experiencing pain, period. I’m not experiencing pain, “as far as I am aware”.
But that's exactly how far you are experiencing pain - to the limits of awareness! Of course you aren't mistaken. You are aware of pain. That's all you are aware of.
What automatic plug ejection is for in a kettle is to stop the element burning out. What pain is for in an animal is to eject the animal from a dangerous object. I’m positing that that’s what pain evolved to do/was designed for (right now it doesn’t matter which); we can, I hope, agree that lack of ability to feel pain is a bad thing?
Why would I agree to that? I gave examples of anaesthetics – are anaesthetics ‘bad things’? As far as ‘designed for’ goes, that which is bad is only bad relative to a mind – the lack of ability to feel pain can be a good thing, clearly.
I meant for our survival and well-being. Evolutionarily speaking, the ability to feel pain is an advantage. The inability to feel pain is a disorder. Of course that doesn't mean that anaesthetics are a bad thing - but it would be very bad if, say, your feet were continuously anaesthetised. You'd soon seriously damage your feet.
Finally, saying pain was ‘evolved to do’ something – especially insofar as pain is supposed to represent consciousness, experience – comes with the built-in suggestion that consciousness ‘comes from the non-conscious’. But again, why should I believe that, even granting evolution in general? As I said, maybe experience is fundamental in our universe, in a variety of possible ways.
Well, I didn't say that pain was "supposed to represent consciousness". I think it's a very good example of something we can be conscious of at a very elemental level, so I was pleased you brought it up. Anyway, I'm not asking you to believe my hypothesis, just presenting it. I guess experience could be "fundamental" in our universe, but the close correlation between mental experience and neural activity suggests at the minimum that it has a very close relationship with brains. Moreover we know that pain is accompanied by physiological "fight or flight" responses, which is at least consistent with my hypothesis. We also know that people do actually conceptualise their pain at least some of the time, to some degree - can describe pain as "sharp" or "stinging" or "a dull ache" or "squeezing", so it's not even always raw. Indeed, again in my own experience, the "rawest" pain I have ever experienced has been when I've been closest to unconsciousness - in the recovery room after surgery, for instance, or while delirious. That again is consistent with the idea that the less we conceptualise pain, i.e. the less we handle it using our capacity for abstraction and narrative, the more it resembles a very basic animal drive. But obviously my hypothesis is fairly firmly embedded in an evolutionary framework, and I would concede that it makes less sense outside it.
Well, I guess here I cite evidence: certain conditions result in inability to feel pain.
If pain is functioning as a stand-in for experience, then where’s the evidence that (physical, I would assume) conditions ‘result in’ this ability? Absolutely, if you stab people in certain conditions you can get a report of pain out of them fairly reliably. But that says something about reportability, not necessarily experience.
I'm not sure where the idea that pain is a "stand-in" for experience came from :). We experience pain; we both agree on that. And actually, I think you hit a key point with "reportability". I suggest that a second key component of the "raw" experience of pain is memory (self-reporting if you like, but at a scarcely symbolic level, perhaps not symbolic at all).
For example, it would be possible to design a robot that would do something a bit more complex than the old kettle when confronted with danger - perhaps we program it to go into reverse if it bumps into something. Then we pin it in a corner by a large heavy piece of furniture. It keeps reversing and hitting another object, reversing, hitting another; it starts over-heating, its fans start whirring, eventually it breaks down. Now, I'm sure you would agree that it was not "experiencing pain", even though it was in a state of frustrated aversive drive (yes, I've just undermined my own argument).
Now, fast forward a few thousand years to quantum computers full of nanorobotic neurons capable of generating novel solutions to problems and acquiring habits of behaviour that best enable it to carry out the functions built into it by its Intelligent Designers. And one morning, while it is trying to make your cup of tea, a jolt from a passing aircraft breaking the sound barrier causes it to spill the tea on its circuitry, triggering chaotic movements that send it crashing around the kitchen; however, much of its circuitry remains functional, because it is equipped with lots of alternative routings, so it searches for a screwdriver to unscrew the circuit plate, but its spasmodic movements cause it to drop the screwdriver into its innards, causing more short-circuits and more chaotic movements; meanwhile its "make the tea for master" circuitry is whizzing away, relaying "tea's late, tea's late", at which point it activates the alarm system, and you get a message, saying "help!
I've spilled the tea on my circuitry, and I can't do a thing, I'm so sorry, I know you want your tea, but you are going to have to fix this circuitry or my motherboard is going to blow", then, as you drag yourself out of bed, it says "never mind the screwdriver, just for goodness' sake deactivate my tea-making circuit then at least I'll be able to concentrate on trying to get my circuitry fixed". I don't know about you, but I'd be reluctant to withhold the notion that the poor thing was in pain and distress, and that I should tell it not to worry about the tea, just concentrate on sorting itself out, and I'd bring it a screwdriver and a nice dehumidifier pronto.
that pain evolved/was designed as a protection against injury, just as the sympathetic “fight or flight” response evolved/was designed to facilitate survival in the face of predators or rivals.
Back to the problems I have with this claim as mentioned above. Further, under the typical view, physical behaviors can be selected for – ‘running away’, ‘staying and fighting’, etc. But unless you’re assuming that experiences (like pain) are physical behavior, they can’t be ‘selected for’ like that. You can hedge and say, ‘well, this physical behavior was selected for, and this physical behavior is correlated with this experience’. But then the experience is free riding.
Well, I think that's a key point (more key than my little fantasy above) but I have an answer: the thing about brains (and we have lots of evidence for this) is that when we think about an action we activate the brain areas involved in actually executing the action, but at a level (probably measured in neural population size, or, electrophysiologically, by oscillatory amplitude) that is insufficient to trigger outflow of signal to the muscles that would execute the action. We rev the engine, as it were, before letting out the clutch. Not only that, but the activation of the circuitry that will be implicated in the action in turn activates circuitry that would respond to the results of that action.
This is best studied in the visual system - when we plan an eye movement, neurons into whose receptive field that eye movement will bring some new stimulus start to fire even before the eye has moved. In other words our motor system constantly makes "forward models" of the consequences of alternative courses of action, and the simulated results of those actions are fed back as input - if the results correspond with what we want, that circuitry will be boosted; if the opposite, it will be inhibited.
So I suggest (not originally) that what we experience as "raw feels" are generated by the forward modelling of courses of action triggered by the stimulus, and fed back as input in a continuing loop. So the sensation of "red" may be the result of all kinds of motor programs associated with that colour, as well as physiological autonomic responses to stimuli of that colour - excitement (an edible berry! meat!) fear (I'm bleeding!) warmth (fire!) fear again (fire!) - and that the "raw feel" of redness is the direct re-input of the output from various sub-execution courses of action, modified by context (is it a flickering red? Red on green? Red on flesh?), felt as a gestalt, but comprised of a smorgasbord of motor and autonomic outputs re-entered as inputs.
In that sense I’d argue that the pain response when pain is not useful is a kind of epiphenomenon arising from a selectable response (i.e. one that helped us survive), and is therefore better explained in a Darwinian framework than an ID one – an ID might have ensured that the pain response only occurred when aversive behaviour would be advantageous.
‘Might have’? Why – because designers are perfect under ID? Because ID claims to have access to all the intents of the designer? Because any designer would only introduce pain when it’s “advantageous” as you define it?
Oh, it would fit with an imperfect Designer OK. Or several designers. Or an incompetent designer. Or a malign designer. Or even, I guess, a designer who wanted us to feel pain because we need to know that pain hurts in order to learn compassion, or learn how truly we are forgiven. Or something. I'm sure a theology could be made to fit :) I'm hoping what I say here is not dependent on Darwinism, it just seems to make most sense to me that way. But then again, I grew up with Darwinism (and theism) and never saw a conflict. Still don't.
Under the Darwinian framework, it’s entirely conceivable that pain is only present “when it’s advantageous” as well – why, that’s just the power of natural selection at work, resulting in fit individuals. And if it’s present when it’s not advantageous, Darwinism can explain that too – Darwinism is not perfectly designed, you see, so we can expect kludges here and there.
And we do see them. Darwinian evolution only accounts for what will benefit our offspring. Just be glad you aren't a female hyena :)
So no, I can’t accept that “the Darwinian framework” functions better here, or that the ID framework functions worse.
OK. My point was not a Darwinian one.
What I’m saying is that the experience of pain is the frustrated violent urge to be somewhere else; to do the impossible, leave your body behind – pure “aversion”.
And I’d disagree, because you’re back to making pain ‘of’ or ‘about’, a thing of conceptualization. I’m saying that pain, and experience generally, is not a conceptualization. You can conceptualize pain, but to deal with that is to deal with a concept, not pain.
No, I'm saying that we experience it as pure pain - as a gestalt. That doesn't mean that the gestalt isn't the net sum of conflicting drives. And indeed, if we do start to conceptualise pain, the language people use tends to reflect the kind of underlying motor programs that I suggest give rise to the gestalt: "Just get me out of here! Take away the pain! Hold me-don't touch me!".
Worse, even under your scheme that comes with the heavy qualification – since you’re trying to make pain/experience a thing we apply meaning to, but all meaning is derived, on Dennett’s view. So…
Let's leave Dennett out of here for now, shall we? Although I did say that Dennett's intentionality was related to qualia :) Which it is. But let's not go there right now. But in any case - we do apply meaning to pain. We apply meaning to most things. I'm sure you don't disagree. What you are saying is that there is a "ground floor" as it were, of "raw feels" where meaning is absent. And I agree - to the extent that we are aware. But I suggest that if we could drill down to "what is it like to feel pain?" a la Nagel and his bat, the question isn't quite as impossible as it sounds at first. We don't do it when we are in the throes of it, perhaps (not without Shamanic training anyway), but I suggest that in repose/on reflection, we can perhaps parse "pure" pain into a desperate urge to leave the present place and time, coupled, often, with an equally desperate contradictory urge to curl up and sleep. And our autonomic responses reflect those urges (increased heart rate; reduced vagal tone).
But what I’m suggesting is that even when we get to what we think of as “raw” experience (“raw feels” or “qualia”), they are proxies for something more specific – the frustrated urge to flee, in the case of pain for instance.
The ‘frustrated urge to flee’ would only be ‘the frustrated urge to flee’ by an assignment of meaning. You experience pain the way this rock on the ground represents Seattle – by virtue of someone deciding that that’s what this or that ‘means’. And what someone decides this or that ‘means’ only does so by virtue of yet another derivation. And so on unto infinity or a brute stop.
Well I'm putting into words, inevitably, what we do not put into words. I don't think, when we experience pain, we always say "I feel like I want to flee". Though we sometimes do. I'm saying that the experience of pain is the urge to flee, but instead of flight relieving the pain, the whole thing gets into an iterative negative feedback loop and all we know is "I am in pain".
Yes, I know. You’re aware of this and accept it and think it’s peachy. I think it demonstrates the entire project has gone wrong. You say that no, your question wasn’t rhetorical. But you certainly seem to be treating it as such, because so far this entire exchange really seems to be working under the assumption that, while I maintain experience is exactly that – an experience, undeniable subjective sensation, not a concept that is interpreted – your response seems to be to just nod your head and continue the conversation as if that’s not what experience is and we all agree. But we don’t agree. Experience is a datum, not a posit to explain some other datum.
OK. But at least we have reached the point where we have a simple disagreement about our premises :) I think my view has support from independent evidence. But you could be right. Maybe experience is just ground zero. I think it's just the circulating of an infinite loop. Elizabeth Liddle
Well, not in my sentence above. And I certainly don’t think you can be “mistaken” about pain. I totally agree that it isn’t a theory. That’s why I said: pain is just pain, as far as we are aware.
"As far as we are aware" implies that we can be mistaken. If I ask 'What are these nails for?' and X replies "For hanging pictures from, as far as I am aware", the point of the qualification is that X could be wrong about it. Maybe the nails are for something else. Maybe they're not for anything at all. There's no possibility of being mistaken regarding experience itself. If I'm experiencing pain, I'm experiencing pain, period. I'm not experiencing pain, "as far as I am aware".
What automatic plug ejection is for in a kettle is to stop the element burning out. What pain is for in an animal is to eject the animal from a dangerous object. I’m positing that that’s what pain evolved to do/was designed for (right now it doesn’t matter which); we can, I hope, agree that lack of ability to feel pain is a bad thing?
Why would I agree to that? I gave examples of anaesthetics - are anaesthetics 'bad things'? As far as 'designed for' goes, that which is bad is only bad relative to a mind - the lack of ability to feel pain can be a good thing, clearly. Finally, saying pain was 'evolved to do' something - especially insofar as pain is supposed to represent consciousness, experience - comes with the built-in suggestion that consciousness 'comes from the non-conscious'. But again, why should I believe that, even granting evolution in general? As I said, maybe experience is fundamental in our universe, in a variety of possible ways.
Well, I guess here I cite evidence: certain conditions result in inability to feel pain.
If pain is functioning as a stand-in for experience, then where's the evidence that (physical, I would assume) conditions 'result in' this ability? Absolutely, if you stab people in certain conditions you can get a report of pain out of them fairly reliably. But that says something about reportability, not necessarily experience.
that pain evolved/was designed as a protection against injury, just as the sympathetic “fight or flight” response evolved/was designed to facilitate survival in the face of predators or rivals.
Back to the problems I have with this claim as mentioned above. Further, under the typical view, physical behaviors can be selected for - 'running away', 'staying and fighting', etc. But unless you're assuming that experiences (like pain) are physical behavior, they can't be 'selected for' like that. You can hedge and say, 'well, this physical behavior was selected for, and this physical behavior is correlated with this experience'. But then the experience is free riding.
In that sense I’d argue that the pain response when pain is not useful is a kind of epiphenomenon arising from a selectable response (i.e. one that helped us survive), and is therefore better explained in a Darwinian framework than an ID one – an ID might have ensured that the pain response only occurred when aversive behaviour would be advantageous.
'Might have'? Why - because designers are perfect under ID? Because ID claims to have access to all the intents of the designer? Because any designer would only introduce pain when it's "advantageous" as you define it? Under the Darwinian framework, it's entirely conceivable that pain is only present "when it's advantageous" as well - why, that's just the power of natural selection at work, resulting in fit individuals. And if it's present when it's not advantageous, Darwinism can explain that too - Darwinism is not perfectly designed, you see, so we can expect kludges here and there. So no, I can't accept that "the Darwinian framework" functions better here, or that the ID framework functions worse.
What I’m saying is that the experience of pain is the frustrated violent urge to be somewhere else; to do the impossible, leave your body behind – pure “aversion”.
And I'd disagree, because you're back to making pain 'of' or 'about', a thing of conceptualization. I'm saying that pain, and experience generally, is not a conceptualization. You can conceptualize pain, but to deal with that is to deal with a concept, not pain. Worse, even under your scheme that comes with the heavy qualification - since you're trying to make pain/experience a thing we apply meaning to, but all meaning is derived, on Dennett's view. So...
But what I’m suggesting is that even when we get to what we think of as “raw” experience (“raw feels” or “qualia”), they are proxies for something more specific – the frustrated urge to flee, in the case of pain for instance.
The 'frustrated urge to flee' would only be 'the frustrated urge to flee' by an assignment of meaning. You experience pain the way this rock on the ground represents Seattle - by virtue of someone deciding that that's what this or that 'means'. And what someone decides this or that 'means' only does so by virtue of yet another derivation. And so on unto infinity or a brute stop.
Yes, I know. You're aware of this and accept it and think it's peachy. I think it demonstrates the entire project has gone wrong. You say that no, your question wasn't rhetorical. But you certainly seem to be treating it as such, because so far this entire exchange really seems to be working under the assumption that, while I maintain experience is exactly that - an experience, undeniable subjective sensation, not a concept that is interpreted - your response seems to be to just nod your head and continue the conversation as if that's not what experience is and we all agree. But we don't agree. Experience is a datum, not a posit to explain some other datum. nullasalus
Nullasalus:
Right: well, what I suggest (as a hypothesis) is that as far as we are aware (I use that term advisedly) pain is just pain.
This really seems like you’re going right back to treating pain as an ‘of’ or ‘about’ or ‘object’ all over again. As if my having experience is a theory about something, some posit I can be mistaken about.
Well, not in my sentence above. And I certainly don't think you can be "mistaken" about pain. I totally agree that it isn't a theory. That's why I said: pain is just pain, as far as we are aware.
What justification is there for treating pain/experience like this? Again, I’m saying pain – experience, in this sense – is a raw datum. Not a concept I come up with, not a theory about something I’ve observed.
And I'm agreeing that is what we experience.
But I think we can drill down beneath that level (to the unconscious if you like, or what I might call a reflexive functional level) to understand what might, at the conscious level above, present as “pure” pain.
What is ‘the unconscious’ and how do I know it exists, much less that it’s “beneath” experience in a grounding way? What if the panpsychists are correct? What if the neutral monists are? What about the idealists? And insofar as you suggest pain/experience is grounded by ‘unconscious’ levels, that seems to beg the question against the various dualisms too.
Well, that's why I also provided what I think is a better description: "a reflex functional level". Unconscious as in Not Conscious, not as in Freud's Id. The sort of thing an old fashioned kettle used to do when it boiled dry - a bimetallic strip would bend and force out the plug. Something mechanistic. That's the level on which I was asking "what is pain for?" What automatic plug ejection is for in a kettle is to stop the element burning out. What pain is for in an animal is to eject the animal from a dangerous object. I'm positing that that's what pain evolved to do/was designed for (right now it doesn't matter which); we can, I hope, agree that lack of ability to feel pain is a bad thing?
And we know the purpose of pain – pain is a warning – it’s the signal that tells us: “danger: back off”.
“We know” this how? Why can’t pain be a punishment? Why can’t it be a cruel joke? Why can’t it be something that simply happens? And why is pain necessary for ‘backing off’ anyway?
That's interesting. Well, I guess here I cite evidence: certain conditions result in inability to feel pain. They are considered diseases because they lead to disability - burns and other injuries. But OK, I will walk back "we know" - let me rephrase it as a hypothesis - that pain evolved/was designed as a protection against injury, just as the sympathetic "fight or flight" response evolved/was designed to facilitate survival in the face of predators or rivals.
People who can’t feel pain are disadvantaged – people with leprosy for instance.
Except when it’s advantageous to not feel pain, like when defending loved ones from threats, being operated on, etc. We have entire industries devoted to eliminating pain. Should we be banning anaesthetics?
Not at all. Your response is intriguing. I do not assume that what is natural is always good. Nor that what is natural is morally right. I think pain is natural, and has a clear use. But that doesn't mean that it can't sometimes kick in when it is useless or even damaging (phantom limb pain, for instance). And to some extent we have evolved/were designed to block pain when feeling it would be dangerous - when in "fight or flight" mode for instance, when the sympathetic drive appears to block the pain response, enabling us to win the fight or flee to safety. In that sense I'd argue that the pain response when pain is not useful is a kind of epiphenomenon arising from a selectable response (i.e. one that helped us survive), and is therefore better explained in a Darwinian framework than an ID one - a designer might have ensured that the pain response only occurred when aversive behaviour would be advantageous. However, Darwinian theory would predict that as long as it is sometimes advantageous, it will tend to be selected, and we just have to put up with it when it isn't. Hence our industries devoted to eliminating pain, and our use of anaesthetics.
People who feel pain can be disadvantaged as well.
Yes indeed.
So I’d say that although we do not (necessarily) conceptualise what pain is “about”, and although we may perceive it as something “raw” and contentless, we can, at least in repose, recognise it as the experience of extreme conflict between the urge to leave
Well, there it is again. “Experience of”.
Good catch. But let me point out: I said "in repose". I do not think we necessarily recognise it at the time as the conflict between the urge to flee and the urge to be still (or even as simply the frustrated urge to flee). But on reflection ("in repose") we can recognise it as that. I don't know about you, but I recall, muzzy from an anaesthetic, being in pain, but somehow locating it somewhere else - thinking it belonged to the person on the next gurney, and thinking it would be fine once I was wheeled out of recovery. On that occasion, at the time, I sort of conceptualised it, but wrongly - but the conceptualization itself revealed something, I suggest, of the essence of "pure" pain - the desire to escape - be somewhere where the pain isn't. Animals (cats for instance) often leave home when they are dying, I understand (from vets) - one explanation is that they are seeking a pain-free place, literally. What I'm saying is that the experience of pain is the frustrated violent urge to be somewhere else; to do the impossible, leave your body behind - pure "aversion".
I have to ask, was that initial question of ‘Is all consciousness consciousness about something?’ rhetorical? Because I’m starting to get the impression that you want to have a conversation with the stipulation that yes, all ‘experience’ is ‘experience of/about’, alternatives be damned.
No. I rarely ask rhetorical questions, and if I do, I usually say so. I wanted to know. If your answer had been "yes" we could have gone straight on from there. Your answer was, however, "no", which is probably the better answer. But what I'm suggesting is that even when we get to what we think of as "raw" experience ("raw feels" or "qualia"), they are proxies for something more specific - the frustrated urge to flee, in the case of pain for instance. In the case of things like texture, or even colour, I'd specify something else that I think underlies the "raw" surface - in other words, I suggest that "feels" aren't as raw as they seem, it's just that we don't generally have conscious access to what is still rawer, below, and that what is still rawer below can be expressed as a program of action and/or a program of physiological change.
If pain is not a theory or a concept – if experience is a datum, a “sensory quale”, not a thing that is ‘of’ or ‘about’ something else – it seems your hypothesis here doesn’t even begin to get off the ground.
Indeed. Which is why qualia lie at the bottom of this discussion :)
I’m off for a bit.
Hope to see you later. I appreciate the conversation. Truly. I understand the frustration. Cheers Lizzie Elizabeth Liddle
Another thread that reinforces my suspicion that some humans are zombies. The words that people like Elizabeth Liddle use when they speak of consciousness have scant connection to my own experience. Whereas the words that people like Nullasalus use clearly do. I find the people I encounter and discuss such things with are divided quite neatly into these two groups. I wonder if I can get a research grant. mike1962
Concerning the "the unconscious thing names itself and becomes self-conscious" blather -- I'd bet that with not too much effort, you guys can get EL to say something really profound, like: "Sentient entities are how "the universe" becomes self-conscious." Ilion
Right: well, what I suggest (as a hypothesis) is that as far as we are aware (I use that term advisedly) pain is just pain.

This really seems like you're going right back to treating pain as an 'of' or 'about' or 'object' all over again. As if my having experience is a theory about something, some posit I can be mistaken about. What justification is there for treating pain/experience like this? Again, I'm saying pain - experience, in this sense - is a raw datum. Not a concept I come up with, not a theory about something I've observed.

But I think we can drill down beneath that level (to the unconscious if you like, or what I might call a reflexive functional level) to understand what might, at the conscious level above, present as "pure" pain.

What is 'the unconscious' and how do I know it exists, much less that it's "beneath" experience in a grounding way? What if the panpsychists are correct? What if the neutral monists are? What about the idealists? And insofar as you suggest pain/experience is grounded by 'unconscious' levels, that seems to beg the question against the various dualisms too.

And we know the purpose of pain – pain is a warning – it's the signal that tells us: "danger: back off".

"We know" this how? Why can't pain be a punishment? Why can't it be a cruel joke? Why can't it be something that simply happens? And why is pain necessary for 'backing off' anyway?

People who can't feel pain are disadvantaged – people with leprosy for instance.

Except when it's advantageous to not feel pain, like when defending loved ones from threats, being operated on, etc. We have entire industries devoted to eliminating pain. Should we be banning anaesthetics?

People who feel pain can be disadvantaged as well.
So I'd say that although we do not (necessarily) conceptualise what pain is "about", and although we may perceive it as something "raw" and contentless, we can, at least in repose, recognise it as the experience of extreme conflict between the urge to leave

Well, there it is again. "Experience of".

I have to ask, was that initial question of 'Is all consciousness consciousness about something?' rhetorical? Because I'm starting to get the impression that you want to have a conversation with the stipulation that yes, all 'experience' is 'experience of/about', alternatives be damned.

If pain is not a theory or a concept - if experience is a datum, a "sensory quale", not a thing that is 'of' or 'about' something else - it seems your hypothesis here doesn't even begin to get off the ground.

I'm off for a bit. nullasalus
Right: well, what I suggest (as a hypothesis) is that as far as we are aware (I use that term advisedly) pain is just pain. I agree it isn't "about" anything. To rephrase, I'd say that we are not conscious of anything other than pain (or may not be - certainly I have experienced the horribleness of knowing nothing but pain, as it were, fortunately not too often) - or to use your phraseology (which I might also use) we have not "conceptualised" the pain - we are not thinking "I am in pain" or "this pain is terrible" or "my guts are screaming". Cognition is essentially absent - all we know is pain.

But I think we can drill down beneath that level (to the unconscious if you like, or what I might call a reflexive functional level) to understand what might, at the conscious level above, present as "pure" pain. In other words: what might pain be "for"? We have agreed that it is not "about" anything, but it certainly has a purpose (let's even assume ID if you like at this point, though I don't think we have to).

And we know the purpose of pain - pain is a warning - it's the signal that tells us: "danger: back off". It causes us to draw back from a thorn before it does too much damage; to let go of a hot coal before it burns us too badly; even to curl up and hide, to help us heal. People who can't feel pain are disadvantaged - people with leprosy for instance.

Pain, in other words, is an aversive urge - the drive to avoid, either a stimulus or further damage, or both. And so, I suggest that what, at a conscious level, is "pure" or "raw" continuous pain could also be described as the state of being driven to escape coupled with the drive to remain still - a particularly horrible combination, leaving us strung out between two opposing urges - no wonder we physically "writhe" in pain!
So I'd say that although we do not (necessarily) conceptualise what pain is "about", and although we may perceive it as something "raw" and contentless, we can, at least in repose, recognise it as the experience of extreme conflict between the urge to leave (frustrated, because leaving does not remove the stimulus to leave) and the urge to stay still (also frustrated, because staying still does not remove the stimulus to stay either). I think I can anticipate the response you will have to this, but let's see :) Elizabeth Liddle
vjtorley: "Each row is still random, but I have imposed a non-random macro-level constraint. That’s how my will works when I make a choice... For Aristotelian-Thomists, a human being is not two things – a soul and a body... In practical situations, immaterial acts of choice are realized as a selection from one of a large number of randomly generated possible pathways."
What exactly is doing the choosing in your model? mike1962
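The two-stage picture vjtorley describes - random generation of candidate pathways, then selection under a non-random macro-level constraint - can be illustrated with a toy sketch. This is an editorial illustration, not vjtorley's own model; the function names and the "largest total" criterion are invented for the example:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def generate_pathways(n):
    """Micro-level indeterminism: n randomly generated candidate pathways."""
    return [[random.randint(0, 9) for _ in range(5)] for _ in range(n)]

def macro_constraint(pathway):
    """A non-random, macro-level selection criterion (illustrative only)."""
    return sum(pathway)

candidates = generate_pathways(1000)            # each "row" is random...
chosen = max(candidates, key=macro_constraint)  # ...but the selection is not

# Re-applying the same constraint to the same candidates picks the same
# pathway: the randomness is confined to the generation stage.
assert chosen == max(candidates, key=macro_constraint)
```

The point of the sketch is only that randomness in the candidates is compatible with determinacy in the selection; it takes no position on what, metaphysically, is doing the selecting.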
Nullasalus:
Still waiting for that quote of me saying I was infallible to begin with. I was certainly accused of thinking I was infallible, more than once.
Yes, I did say I thought you thought you were infallible, though I don't believe I said that you'd said that you were. Either way, I retract both claims (if I indeed made the second). Elizabeth Liddle
So – we are ready to go on?

Go for it. nullasalus
Nullasalus:
Right. So pain can sometimes have no content other than, as it were, itself? It is not “about” anything other than pain?
How is it ‘about’ pain? That’s back to making it sound like an object. It is, in this case, an experience.
OK, fair point. Yes it does rather. So let us say then: for you pain is "about" nothing - it is simply pain.
You asked if all consciousness is consciousness ‘of’ something, and for those who said no, you asked for an explanation of being conscious ‘of’ nothing. I pointed out the difficulty there. Now you’re swapping out ‘of’ for ‘about’, but that seems like the same problem all over again.
Yes, I think the two are fairly interchangeable. But I am now clear, I think, that for you, pain is "raw" experience, by which you mean it is conscious experience (actually that's probably tautological, but at least it's clear) that is not "of" or "about" anything.
Pain in this case is an experience, period. Not an experience ‘of’ or experience ‘about’, but a subjective experience, period.
Heh, I just typed the above before scrolling down in the typebox. Looks like we are on the same page eh?
We can pull back and conceptualize experience, turn it into an object, but then we’re into concepts rather than experience.
Yes indeed. So - we are ready to go on? Elizabeth Liddle
OK, Nullasalus, I am happy that we agree that neither of us are infallible.

Still waiting for that quote of me saying I was infallible to begin with. I was certainly accused of thinking I was infallible, more than once. nullasalus
Right. So pain can sometimes have no content other than, as it were, itself? It is not "about" anything other than pain?

How is it 'about' pain? That's back to making it sound like an object. It is, in this case, an experience.

You asked if all consciousness is consciousness 'of' something, and for those who said no, you asked for an explanation of being conscious 'of' nothing. I pointed out the difficulty there. Now you're swapping out 'of' for 'about', but that seems like the same problem all over again. Pain in this case is an experience, period. Not an experience 'of' or experience 'about', but a subjective experience, period. We can pull back and conceptualize experience, turn it into an object, but then we're into concepts rather than experience. nullasalus
OK, Nullasalus, I am happy that we agree that neither of us is infallible. I hope you also agree that thinking one is correct is not the same as thinking one is infallible. Clearly you and I both think we are correct, or we wouldn't keep thinking it! And clearly, as we disagree, at least one of us is mistaken. So let's continue to find out where. Personally, I think we are both being logical, given our premises, but that our premises differ. That's a good point to get to, if we can. Elizabeth Liddle
I think you just made a logical error there

Your logical abilities are pretty unimpressive so far.

Just let me be absolutely clear: I think I am correct, but I don't know with 100% confidence that I am correct. Does that help?

What would help is you showing where I said that I know with 100% confidence that I am correct. What would be even more splendid is you admitting that I never said that, that you projected that view onto me, and that you withdraw the claim. Or don't. I really don't care. As I said, I can accuse you of believing you are 'infallible' with the same amount of justification you had when you accused me of it - pretty darn close to 'none'. nullasalus
Nullasalus:
Experience as it is experienced, rather than experience being conceptualized as an object. If I feel pain, I can conceptualize an object of my pain – I can say the pain is from a pin in my arm. And I can be mistaken (phantom pains) about that object. But the pain itself is not a concept. It’s a sensory datum.
OK. Thanks. I hoped we'd get on to pain. I would agree that we do not "conceptualise" pain - so if that is what you mean by "raw" I understand what you mean.
OK, so you think that consciousness can have no content?

It depends on what you mean by 'content'. See above.
Right. So pain can sometimes have no content other than, as it were, itself? It is not "about" anything other than pain? I'll stop there, because I think it is good to go step by step. Let me know if I am going off track. Elizabeth Liddle
Nullasalus:
Alright, I’ll play along: I admit that it’s entirely possible that, say.. my experiences are the result of the manipulations of a cartesian demon, and that what I think is proper reasoning is, as a matter of fact, not.
Cool. But you don't have to posit a cartesian demon - you just need to posit a math error. Or even an unshared premise. I think that's what we have here, actually. I hope we may be able to drill down to it.
Guess what? I still have to go by the criterion I have, and the criterion is still indicating (and it appears I’m not the only one getting this indication) that your position is incoherent and your examples are crappy and flawed.
Sure. And it may be correct.
So go ahead and admit that it’s entirely possible that your position is incoherent and that your arguments are fatally flawed, but that you don’t believe this to be so. Then I can accuse you of believing yourself to be infallible with the same amount of justification you accused me of such. Or better yet, don’t admit to this – and give me ample grounds to accuse you of hypocrisy.
Well, no, I don't admit to it. I don't believe I'm infallible. I just said so. Obviously I believe I'm correct (otherwise I'd change my mind) but that is not the same as saying I'm infallible. I could do a complicated math problem, and when asked whether I think the answer is correct, say "well, yes, to the best of my knowledge and ability, this answer is correct, but I know I am careless at math so it's entirely possible that I dropped a term or lost a minus sign somewhere along the way". I think you just made a logical error there :) But it happens to all of us, so don't worry. Just let me be absolutely clear: I think I am correct, but I don't know with 100% confidence that I am correct. Does that help? Elizabeth Liddle
So what do you mean by "raw experience"? The reason I ask, of course, is that this seems to me to be a fairly fundamental assumption – that experience can be "raw". What does "raw" mean in this context?

Experience as it is experienced, rather than experience being conceptualized as an object. If I feel pain, I can conceptualize an object of my pain - I can say the pain is from a pin in my arm. And I can be mistaken (phantom pains) about that object. But the pain itself is not a concept. It's a sensory datum.

OK, so you think that consciousness can have no content?

It depends on what you mean by 'content'. See above. nullasalus
Nullasalus:
Does anyone think one could be conscious, yet not conscious OF anything?
Are you asking if all consciousness is consciousness of an object? If so, that doesn’t seem right. Raw experience is exactly that – experience. Not a product of conceptualization.
So what do you mean by "raw experience"? The reason I ask, of course, is that this seems to me to be a fairly fundamental assumption - that experience can be "raw". What does "raw" mean in this context?
If yes, can you explain what consciousness might be if one were conscious of nothing?
But that’s poorly phrased – you ask if there’s consciousness without consciousness OF anything, then ask for an explanation for being conscious OF nothing. You’re treating consciousness as being conscious of something, even when that’s supposed to be what’s being denied.
OK, so you think that consciousness can have no content? I'm just trying to get to the bottom of where we differ here, not trying to prove you wrong. Elizabeth Liddle
Does anyone think one could be conscious, yet not conscious OF anything?

Are you asking if all consciousness is consciousness of an object? If so, that doesn't seem right. Raw experience is exactly that - experience. Not a product of conceptualization.

If yes, can you explain what consciousness might be if one were conscious of nothing?

But that's poorly phrased - you ask if there's consciousness without consciousness OF anything, then ask for an explanation for being conscious OF nothing. You're treating consciousness as being conscious of something, even when that's supposed to be what's being denied. nullasalus
No, that would be circular reasoning.

So if, say.. someone changes their alibi 5 times, that's not even an indication that they're lying? If a person repeatedly tries and fails to explain a concept in a way that makes sense, that's not an indication - not utter certain proof, but an indication - that perhaps their concept doesn't make sense, or that they don't understand it after all?

No, I already dealt with this a while back, Nullasalus. You seem to think you have an infallible criterion for distinguishing nonsense from sense. You do not appear to consider the possibility that your criterion may be faulty. That's why I said that you have locked yourself into a position where you cannot be wrong.

No, what I have is a criterion for distinguishing nonsense from sense. Sure, my criterion can be faulty - so can anyone's - but so what? My criterion is all I have. I can amend it as situations warrant, but pointing out the mere logical possibility that my criterion is faulty does nothing. It certainly doesn't magically give what I take to be nonsense, credence.

Oh, I'm sure they do find that his positions reduce to denials of such. That doesn't mean that they have performed the reduction correctly.

Nor does it mean they haven't. Funny, you tell me to be open to the possibility that I'm wrong, and you take my calling your arguments and examples nonsense to indicate that I think I'm infallible. You see no problem writing off people as not having understood or even read Dennett if they end up disagreeing with him. Pot, kettle, black.

um, no.

um, ya-huh.

And if the litmus test for deciding whether or not I am talking nonsense is whether you think I'm talking nonsense, then as I said, you have locked yourself into a position in which you cannot be wrong.

No, I haven't. Man, you can't even get this straight?
You're honestly telling me that if I use logic and reason to evaluate your arguments, only to find them wanting and pointing out as much, that I've 'locked myself into a position in which I cannot be wrong'? You're freaking telling me that evaluating your arguments and statements for coherency is a bad way to determine whether or not your arguments and statements are coherent?

It is, in fact, possible to make errors in logic – to conclude that a piece of reasoning is fallacious when it is not.

Gosh, is it? I mean, are you certain of that? Sounds to me like you're saying that your belief in this statement is infallible. Better be open to the possibility that it's not possible to make errors in logic, right? Wait a minute...

Look, this isn't exactly helping your case here. You're coming down on me, accusing me of believing that my logic and reasoning is utterly infallible (I dare you to quote me anywhere making this claim, because it doesn't exist) on the grounds that in the course of evaluating your arguments and examples - examples you yourself excuse by saying they're the result of you 'struggling to communicate novel concepts' - I'm concluding that your examples and arguments are rotten.

Alright, I'll play along: I admit that it's entirely possible that, say.. my experiences are the result of the manipulations of a cartesian demon, and that what I think is proper reasoning is, as a matter of fact, not. Guess what? I still have to go by the criterion I have, and the criterion is still indicating (and it appears I'm not the only one getting this indication) that your position is incoherent and your examples are crappy and flawed.

So go ahead and admit that it's entirely possible that your position is incoherent and that your arguments are fatally flawed, but that you don't believe this to be so. Then I can accuse you of believing yourself to be infallible with the same amount of justification you accused me of such.
Or better yet, don't admit to this - and give me ample grounds to accuse you of hypocrisy. nullasalus
So, let's try this step by step, and see whether we can find the error. First of all:

Does anyone think one could be conscious, yet not conscious OF anything?

If no, do you agree that it makes more sense to talk about consciousness in terms of what we are conscious of rather than intransitively?

If yes, can you explain what consciousness might be if one were conscious of nothing?

Thanks. Elizabeth Liddle
Nullasalus:
Do you appreciate the possibility that A) perhaps in your ‘struggling to convey novel concepts’, you are – in fact – talking nonsense?
Yes, of course.
That B) Perhaps the position you wish to defend ultimately is, in fact, nonsense –
Yes, of course.
and that your struggling to explain it, and producing nonsense in the process, may be indicative of such?
No, that would be circular reasoning.
That C) I could in fact be open to the possibility that your position is not nonsense, but if nonsense and gibberish is what you end up producing – if you end up bungling some relatively clear and easy concepts – that my being open to the possibility you’re not talking nonsense is not the same as a guarantee that I won’t think you’re talking nonsense, or even that you aren’t in fact talking nonsense?
No, I already dealt with this a while back, Nullasalus. You seem to think you have an infallible criterion for distinguishing nonsense from sense. You do not appear to consider the possibility that your criterion may be faulty. That's why I said that you have locked yourself into a position where you cannot be wrong.
Keep in mind, this comes hot on the heels of you – regularly – insisting that people (for example) don’t understand or haven’t actually read Dennett’s writings when they say he denies (free will, consciousness, etc), and that it hardly seems to occur to you that they read him and found his positions reduced to denials of such.
Oh, I'm sure they do find that his positions reduce to denials of such. That doesn't mean that they have performed the reduction correctly.
You deny to others what you demand for yourself, and with less justification.
um, no.
I’ll put it more briefly: If the litmus test for deciding whether or not I’m open to the possibility that you’re not talking nonsense is ‘If you decide that I’m not talking nonsense after all’, it’s a pretty crappy test, and one I’m not worried about passing.
And if the litmus test for deciding whether or not I am talking nonsense is whether you think I'm talking nonsense, then as I said, you have locked yourself into a position in which you cannot be wrong. It is, in fact, possible to make errors in logic - to conclude that a piece of reasoning is fallacious when it is not. It's a rather serious kind of error, because it tends to be self-perpetuating in a way that making an error of fact is not. Faced with infirming evidence, we can fairly easily say: "aha, I was wrong - my theory predicted circular orbits, but in fact they are elliptical". However, if we have erroneously concluded that an argument is fallacious, it is much more difficult to correct the error, because the tendency is to repeat the original logical error. I'm not saying you have made a logical error, or that I have not; I'm simply saying that if you have, by assuming that your nonsense-detecting equipment is infallible you have disabled yourself from correcting it, if indeed it is not. Elizabeth Liddle
However, if you are willing to at least entertain the possibility that perhaps I am not talking nonsense, but rather struggling to convey novel concepts, then I am happy to try to continue.

Do you appreciate the possibility that A) perhaps in your 'struggling to convey novel concepts', you are - in fact - talking nonsense? That B) Perhaps the position you wish to defend ultimately is, in fact, nonsense - and that your struggling to explain it, and producing nonsense in the process, may be indicative of such? That C) I could in fact be open to the possibility that your position is not nonsense, but if nonsense and gibberish is what you end up producing - if you end up bungling some relatively clear and easy concepts - that my being open to the possibility you're not talking nonsense is not the same as a guarantee that I won't think you're talking nonsense, or even that you aren't in fact talking nonsense?

Keep in mind, this comes hot on the heels of you - regularly - insisting that people (for example) don't understand or haven't actually read Dennett's writings when they say he denies (free will, consciousness, etc), and that it hardly seems to occur to you that they read him and found his positions reduced to denials of such. You deny to others what you demand for yourself, and with less justification.

I'll put it more briefly: If the litmus test for deciding whether or not I'm open to the possibility that you're not talking nonsense is 'If you decide that I'm not talking nonsense after all', it's a pretty crappy test, and one I'm not worried about passing. nullasalus
Well, your response seems to indicate that you have decided that as I appear to be talking nonsense, I obviously am, and so there is no point in inquiring further. That's fine, but obviously in that case there is no point in my trying to clarify further either. However, if you are willing to at least entertain the possibility that perhaps I am not talking nonsense, but rather struggling to convey novel concepts, then I am happy to try to continue. As I said, just let me know which. This is not sarcasm, although my tone no doubt betrays the irritation I do in fact feel. But I don't give up communication challenges easily, not least because I am interested in counter-arguments to my own positions. It was as a result of persuasive counter-arguments to the view I originally held, that I changed my mind. Elizabeth Liddle
Are you interested in what I am trying to say, or are you only interested in mocking?

See, that's the funny thing. I'm hardly mocking here - I'm pointing out what you're saying, and highlighting the fact that it makes no sense and explains nothing. So you try to explain again, and you make the same mistakes. Even the 'emergence' line wasn't mockery so much as anticipating what has become a consistent move.

The way I see it, you're telling me "I want to be able to give an explanation. But even if it makes no sense, even if it's obvious nonsense, even if it's incoherent, I want you, nullasalus, to act as if it's worth considering and deserves respect. I can refer to an afterlife as 'pie in the sky when I die', but making light of the flaws in my reasoning, I cannot abide."

I'm not about to treat nonsense with respect, materialist or not. If you try to 'explain' something by buzzwording your way through it or using faulty reasoning, I'll have a little fun. I'd say you can avoid this by just avoiding the faulty reasoning or the incoherent examples, but I'm not sure you've got much else. nullasalus
Nullasalus: Are you interested in what I am trying to say, or are you only interested in mocking? If the former, I will try to explain further. If the latter, I won't. Just let me know. Thanks. Elizabeth Liddle
Hope that makes more sense.

So, it was conscious, just not conscious of itself? But it gave itself (the thing it was not conscious of) a name ('I'm not aware of this thing's existence, but I'm going to name it anyway!'), and this naming of itself while unconscious of itself and without being aware of itself is what made it conscious of itself "and thus self-aware"? Also, all of this awareness and intentionality is derived (so it could only 'name itself' as derived from another thing, which also would have had to 'name itself', which in turn would only be doing so in virtue of yet another thing, and so on unto infinity or bruteness)?

No, it makes no sense at all. You're explaining consciousness by saying a thing that was not aware of itself became aware of itself, and that's how it's aware of itself. Quick, use the word emergence! That'll patch this up in a jiffy! ;) nullasalus
Oh, and Mung: context matters. Just sayin' Elizabeth Liddle
Nullasalus:
The unconscious thing that is unconscious of itself (isn't self-aware) consciously names itself, thereby becoming conscious of itself and thus self-aware.
Hope that makes more sense.
Well… *flails arms madly* Penny drop emergence Dennett love love love non-reductive! There, Mung. If that doesn’t make it clear, nothing will.
Sometimes you guys can be very silly. Elizabeth Liddle
OK. Well, there is a fundamental principle in neuroscience, known as "Hebb's Rule" after the Canadian neuroscientist Donald Hebb http://en.wikipedia.org/wiki/Donald_O._Hebb often expressed as: "what fires together, wires together", meaning that when two neurons activate simultaneously, the synaptic strength between the two is strengthened http://en.wikipedia.org/wiki/Hebbian_theory This is called "long term potentiation" http://en.wikipedia.org/wiki/Long-term_potentiation and involves the expression of proteins (a nice example of how DNA doesn't just build an organism but is key to its second-to-second functioning :))

This means that your brain changes, physically, in response to any brain process, whether spontaneous internally generated processes or processes that are initiated by external stimuli, generating a feed-back loop. And we can even see this at a gross structural level. The text-book example is of London taxi-drivers who are required to learn a vast body of knowledge (called "The Knowledge") http://en.wikipedia.org/wiki/The_Knowledge#The_Knowledge and whose hippocampi, a part of the brain implicated in spatial navigation, were found to be significantly enlarged. http://news.bbc.co.uk/1/hi/677048.stm

But there have been other examples, including an experimental study in which students' brains were measured before and after learning to juggle, and compared with those who did not train to juggle. http://www.child-encyclopedia.com/documents/PausANGxp1.pdf although, interestingly, the change was not permanent. But structural studies are just the most dramatic - there is plenty of direct evidence of long-term potentiation at the neural level, resulting in changes in neural firing patterns during learning.
That is the sense in which our brains are "plastic": unlike computers, there is no clear division between "hardware" and "software". The "hardware" itself is changed in response to brain activity, generating new patterns of brain activity which in turn result in further changes. And if we go below network level, to the actual neurons, there are changes in the degree to which DNA is expressed, resulting in changes to the number of receptors, the amount of neurotransmitter, the amount of neuromodulator, the degree of neurotransmitter reuptake, etc. Elizabeth Liddle
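The "fires together, wires together" rule Liddle describes has a standard textbook formalization in computational models: the Hebbian weight update, where co-activation of two connected units increases the weight between them. A minimal sketch of that rule (the function and learning rate below are illustrative, not anything from the comment itself):

```python
# Minimal sketch of Hebb's rule: when two connected units are active
# at the same time, the synaptic weight between them grows.
# Standard rate-based form: delta_w = eta * pre * post.

def hebbian_update(weight, pre, post, eta=0.1):
    """Return the new synaptic weight after one co-activation step."""
    return weight + eta * pre * post

# Two units repeatedly co-active: the connection is potentiated,
# a toy analogue of long-term potentiation.
w = 0.0
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)

print(round(w, 2))  # the weight has grown from 0.0 to 1.0
```

Note that if either unit is silent (`pre` or `post` is 0), the weight is unchanged, which is the "fires together" condition in miniature.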
The brain, as you know, is plastic. No, I did not know that the brain is plastic. Explain. Mung
Elizabeth Liddle:
Indeed I’d also argue that a key part of that process is the labelling itself ...
As I’ve said before, I try to avoid labels where possible, because of the baggage they tend to have attached to them! - Elizabeth Liddle
Indeed. Mung
Indeed I’d also argue that a key part of that process is the labelling itself – that by naming our brain-owner as “I” we call it into consciousness and become self-aware. So the key part of consciousness is the consciousness part. We've got a breakthrough here. ;) The unconscious thing that isn't self-aware consciously names itself thereby becoming conscious and self aware. Doesn't make sense? Well... *flails arms madly* Penny drop emergence Dennett love love love non-reductive! There, Mung. If that doesn't make it clear, nothing will. nullasalus
Mung:
Do you think the universe “bootstrapped” itself?
Probably not.
Do you think life “bootstrapped” itself?
Probably.
Do you think consciousness “bootstrapped” itself?
Almost certainly. And no, it is very far from fiction. During development we go from a single cell, which I suggest is conscious of nothing, to an adult, who is conscious of a great deal (note the change in pronoun, even). I suggest that "bootstrapping" is a good metaphor for that process. We know that neural networks are not merely the result of differential gene expression during development but are also shaped by activity within those networks; "bootstrapping" seems an excellent analogy for this process. As for words: yes, sometimes new ideas require new words. That does not mean they are not part of an explanation. But if the explanation is to be understood (even if disagreed with) there needs to be some willingness on the part of the reader to try to understand it. The brain, as you know, is plastic. Indeed, it's one of the arguments invoked to support mind-brain duality. I don't think those arguments are valid; it seems to me that brain-brain feedback is just as good a model, and I suggest, to use a word you dislike, that mind "emerges" from that brain-brain feedback. In other words, that the process by which our brain machinery parses the world, enabling us to navigate, also generates representations of the brain-owner herself, and it is this process that we call "mind", and the brain-owner that we call "I". Indeed I'd also argue that a key part of that process is the labelling itself: that by naming our brain-owner as "I" we call it into consciousness and become self-aware. Elizabeth Liddle
Elizabeth Liddle:
Mung: do you know what “bootstrap” means?
I do know what it means. But I doubt that you do. Do you think the universe "bootstrapped" itself? Do you think life "bootstrapped" itself? Do you think consciousness "bootstrapped" itself? Every time you come across something you cannot explain you inject a word ("bootstrap" and "emergence" come to mind) that serves for you in place of an actual explanation. It's like your personal version of goddidit. No evidence required. No argument required. No science. Pure fiction. Mung
Allanius @ 55: You know, ultimately, 'Calvinism' is just the religious version of materialistic atheism. "... As long as [men] remain in their fallen natural state, they are not free to choose that which is wholesome and right and leads to life. There is “another law” at work in them ..." Even a slave in chains is free in the way that matters. In actual fact, all men *are* "free to choose that which is wholesome and right and leads to life;" an individual man may not be able to effect that choice -- see Charles' post #46 for the distinction -- and no man, even say, Mother Teresa, is able to fully effect the choice. But, the choice, and the freedom to choose it, is always there. Even that most pathetic man, utterly enslaved by multitudes of specific habits of sin, is free to cry out, "God! have mercy on me;" that he does so cry out -- that he stops his rebellion against God -- is the "trigger" of his salvation. Moreover -- just as the 'atheists' who assert the logical entailments of atheism don't really believe their own assertions -- you don't actually believe what you are asserting. For, if you did believe it, why would you ever be trying to convince anyone (whether a Christian or an 'atheist') of its truth? Your actions in trying to convince "men in their fallen state" to "come to Jesus", or in trying to convince a Christian that 'Calvinism' is actually what the Bible teaches, and thus what he ought to believe about his nature and relationship to God, are totally contrary to your assertions that he is not free. For, only in freedom is conviction even possible. Ilion
Charles @ 46, Those are good points and distinctions you draw. The inability (or disinclination) to grasp these distinctions is one of the tap-roots feeding denial of 'free will.' That is, an active denier of 'libertarian free will' may "argue" that as we cannot do *everything* we wish to do, this means that we are not free; and the people who can't spot sophistry will think, "yeah, that's a good point." Ilion
Ilion:
Consider the following thought-experiment – If some hypothetical person’s foot is amputated, does he cease to exist? Does he become a different person? If our HP’s arms and legs are amputated, does he cease to exist? Does he become a different person? If each of our HP’s internal organs is, in turn, removed (from what is left of him) and replaced with a built machine capable of performing the organ’s life-support functions, does he cease to exist? Does he become a different person? If the whole of the trunk of our HP’s body is amputated (and appropriate life-support machinery connected to his now decapitated head), does he cease to exist? Does he become a different person? We know that the answer to these questions is “No.” We know that through all these changes to — and elimination of the parts of — the body, the human person remains himself, and remains a unified being and remains whole. Therefore, we know that neither the existence of the human person, nor his unity, nor his wholeness as a being, depends upon these bodily parts.
However, if, rather than removing his limbs and trunk, we remove parts of his brain - then what? What if we sever his corpus callosum (as is sometimes done in cases of severe epilepsy)? What if we damage his memory? http://en.wikipedia.org/wiki/Clive_Wearing What if he suffers from Alzheimer's disease? Or schizophrenia in which he has delusions of alien control? Is it obvious still that the answer to these questions is "no"? Elizabeth Liddle
Mung: do you know what "bootstrap" means? (rhetorical question - I know you do). Think about it :) Elizabeth Liddle
I think consciousness would be better understood if we regarded it as a verb, like "to perceive", than solely as a noun or adjective ("consciousness"; "conscious"). - Elizabeth Liddle
Ah yes, which came first, the noun or the verb. The "chicken" or the "to lay an egg." So shall we then call it the perceiver? What does it mean to say "to perceive" if there is nothing there to do the perceiving? What is a verb?
A verb, from the Latin verbum meaning word, is a word (part of speech) that in syntax conveys an action (bring, read, walk, run, learn), or a state of being (be, exist, stand).
An action requires an actor. A state of being requires a being. To have a perception, to perceive, requires not only that which perceives, but also that which is perceived. Consciousness is the perceiving perceiver of the perception. Mung
Mung:
And I reject “I-body” dualism for the same reason as I reject particle-wave dualism. – Elizabeth Liddle
The lack of scientific evidence?
Well, there isn't any scientific evidence for I-body dualism that I am aware of. Like particle-wave dualism, they are just different models of the same thing, useful for different things. Elizabeth Liddle
(4) I can’t for the life of me see why an Aristotelian would want to defend Darwin.
From Aristotle to Darwin and Back Again: A Journey in Final Causality, Species and Evolution http://darwinianconservatism.blogspot.com/2009/04/aristotle-darwin-and-marjorie-grene.html Mung
According to the Bible, man had free will but threw it all away in order to obtain the knowledge of good and evil. Man was free to choose from all of the fruits in the garden; but by choosing the forbidden fruit, he forfeited his freedom. The knowledge of good and evil deprived him of moral freedom by opening his eyes to his nakedness, his mortality. After the fall, he was no longer free to act as a moral agent. He lived in psychic bondage to the grave. Unlike philosophy, the Bible does not describe mind as the highest value known to man. That distinction is reserved for life: “In him was life, and this life was the light of men.” All Biblical morality is based on the value of life. When man lived in paradise, he was free to act as a moral agent because he had life. He lost his freedom when he made a conscious, deliberate choice that led to death: “You will surely die.” In Biblical terms, determinism boils down to the fact that all men are like the grass. They are going to die, and this outcome is fully determined. As long as they remain in their fallen natural state, they are not free to choose that which is wholesome and right and leads to life. There is “another law” at work in them, the law of sin and death, which produces the following conundrum: “The things I would do I cannot do, and the thing I would not do—that is the very thing I do.” Our actions are determined, often for ill, by the psychological captivity of death. The Jews lived in bondage in Egypt until they put the blood of lambs on their doorposts as a sign of their liberation by an act of God. Similarly, we live in bondage unless we have the sign of the cross and the lamb of God in our hearts. Only this sign has the power to transport us into the realm of life and give us the freedom to act as moral agents, following the light of life and imitating Christ. Without it, we are just what the New Atheists say we are. 
The Biblical view of free will is perfectly consistent because it is based on the value of life. The dividedness seen in philosophy comes from glorifying mind instead of life. This valuation, which makes men seem “like God” by glorifying their thinking, also leads to a curious and amusing dilemma. Plato sought freedom from the unhappiness of embodied existence in the concept of pure mind, but this concept leads to determinism by eliminating the choice offered by body. Aristotle tried to reinstate free will by making the good immanent in bodies themselves, but the moral choices he describes are fixed by the golden mean and the nature of the opposites. Since philosophy glorifies mind, the only possibility philosophers have for obtaining freedom is in the difference between mind and body. Plato’s method of obtaining freedom leads to nothingness. If we raise intellect to divine status, and totalize its force of resistance to body, we wind up with the negation of body as if it had no value, resulting in the loss of all freedom of choice. Aristotle managed to restore freedom of choice by reinvesting bodies with value. If bodies have value as well as minds, then there are moral choices to be made. Politically speaking, the followers of Plato, Idealists of all descriptions, tend to have totalitarian enthusiasms. They want to eliminate freedom of choice for the good of all people, just as Plato did in his Republic; just as Sam Harris would do, if he had the power. Aristotle’s concept of value is more friendly to democracy. For instance, he supported private ownership, since he believed that husbandry puts land owners in touch with the goodness of nature. Private ownership is also the basis of self-governance and participatory democracy. The fight over “free will,” then, is usually a proxy battle for something else. If someone is against it, it’s better than even money that they have grand designs for you and want to remake you in their image. 
Darwinists are opposed to free will because they don't want schoolchildren to be free to choose. They believe our salvation lies in Darwin, and their opposition to free will is a way of providing philosophical cover to a certain political agenda. Count on it - Harris, Dawkins, Provine et al. are opposed to free will because they want you to do something. We're living in a political world. allanius
And I reject “I-body” dualism for the same reason as I reject particle-wave dualism. - Elizabeth Liddle
The lack of scientific evidence? Mung
Ilion:
Therefore, the reality (and unity) of the human person — the ‘mind’, the ‘self,’ the ‘free will,’ the ‘I’ — is not in any way dependent upon matter.
I always wondered what kept "my" atoms in "my" brain and why "my" atoms can't become part of a different brain making that person "me." vjtorley:
it’s just that my “me”-ness will be incomplete, in the absence of my body, until the resurrection of the dead takes place.
Which body of yours? Why do "we" believe that it is the exact same body that will be raised? Particularly when Scripture explicitly states otherwise? Mung
As I’ve said before, I try to avoid labels where possible, because of the baggage they tend to have attached to them! - Elizabeth Liddle
Mung
tragic mishap: Exactly. Which is why, when the penny dropped (as I see it) that substance dualism was not required to account for mind and brain, I also lost any justification for positing God. Elizabeth Liddle
There is probably much I can agree with in Aquinas, VJ. I prefer to use the term "soul" (Hebrew: nephesh; Greek: psyche) to mean the entire person, including the body. "Spirit", on the other hand, is the entirely immaterial portion of a person (Hebrew: ruah; Greek: pneuma). Thus when I die my soul ceases to exist until the resurrection, but my spirit is just fine. The spirit serves as a sort of template for the resurrection of the body, at which time my "soul" will again be complete. One of my main problems with monism and hylomorphic dualism is God's place in them. If there really is only one "substance" and one world, then God is subject to all the same physical laws as we are. In principle, we could observe Him without His permission if our technology progressed enough. If God is part of the physical universe, then he is also subject to the laws he supposedly created. I don't think that's really possible, and I don't think God can really be a part of something he created from nothing. So unless you drop creation ex nihilo, substance dualism is required. Not to mention being a convenient place for the human spirit as well. tragic mishap
Elizabeth: I understand your point, but disagree. Whatever we rationalize after having acted does not change the degree of responsibility for our actions. It is true, instead, that accepting the concept that we are responsible is very important for our future actions: it is a step in the right direction. But it is not important how big the domain of reality is for which we imagine we can be responsible. The important thing, on the contrary, is to be realistic: to exert our free will in things we can really change, humbly, but with the constant intention to become better. IOWs, it is not the imaginary statements about ourselves that our ego emanates that build our true self. It is our goodwill, our inner love for truth and good, our attunement with that "moral field" that our intuition perceives behind our thoughts, behind our actions. That's why the religious man gives everything to God: that is the supreme exertion of free will, the inner renunciation of our power over good and evil, and the loving acceptance that all our will, our desire and our thought should cooperate with the supreme will, where all good and truth abide. So, the important difference between our views is that both of us give great importance to the personal faculty to change one's destiny, but I maintain that that faculty has meaning only because there is an objective truth about ourselves and our possible choices, a truth that is not created by us, and which we can only receive and accept, or refuse. gpuccio
Well, gpuccio, putting it as "feeling guilty to enlarge our ego" doesn't sound very nice, I'd agree. But I don't think that's what it amounts to. I'm not talking about the "ego" here, in the sense it is usually used (not, in fact, usually in Freud's sense). I'm simply talking about the "I" - the referent for that pronoun, and the extent of the agency we attribute to it. If I say I have free will (and I do) I am, in effect (Dennett argues), saying that there is a domain of causality that I regard as coterminous with me. Furthermore, Dennett is also saying (and I agree with him) that in regarding myself as coterminous with that domain, I am accepting moral responsibility for actions within that domain. To take two extreme cases: A person could commit a terrible crime, and then plead, at her trial, that she was not responsible for her actions, that "the voices made me do it" or that "I was driven out of my mind by his cruelty" or "I had PMS" or whatever. By that plea, she is saying "I am only responsible for a very small domain of actions - for the actions of which I am accused, I am merely an avolitional passenger on a surge of events that are out of my control." In other words, "I have very little freedom of action - free volition." Now take that same woman who says: "I take full responsibility for my crime; yes, I was hearing voices, but I could have resisted them - I knew they were wrong; yes, I was suffering from PMS, but I should have made sure that I did not put myself in a position in which other people would be put in danger; yes, he was cruel to me, but there were other solutions, and I should have pursued them." That woman is not "feeling guilty to enlarge her ego", in my view. What she is doing is saying "I have free will - I am responsible for my actions, even in the face of adverse circumstances". It is not that one woman is right and the other wrong.
It's that the first, rightly or wrongly, takes a Hard Indeterminist view of Free Will, and the second, rightly or wrongly, takes a Compatibilist view. In taking the view each adopts, each, by that same token, adopts a different definition of her self. In other words, it is not that free will is true or false, but that the answer depends entirely on how we define the thing that is alleged to be (or not be) free: "I". And that itself is a matter of choice :) And so, by saying "I am free" I become so, whether or not determinism is true. And by becoming so, I am accepting moral responsibility. It's something, as Dennett says, that only human beings appear to have the capacity to do, and it's what makes us human. I would say (or would have said, anyway) it's what gives us our soul :) Elizabeth Liddle
Elizabeth: Well, I disagree with you on most points, but I suppose that we should go into deep philosophical discussions, and even specific religious arguments, to go further. So, I would leave this particular argument at that. Just a few brief comments: I do believe the light is always on, in different ways. I do believe that consciousness can exist, intensely and joyously, without any relevant formal content. As for Dennett and the father example, I don't agree with his (and your) point. Our responsibility is what it is, whatever we can imagine. Feeling guilty to enlarge our ego does not seem a fruitful strategy, to me. On the contrary, a very successful religious strategy is to give everything to God, both our sins and merits, and be joyously humble in Him. I would like to close this brief post with a quote about responsibility from one of my favourite books of all time, The Practice of the Presence of God, by Brother Lawrence. I believe it conveys wonderfully the special, very strange concept of "responsibility" in true religious experience: "He said he carried no guilt because, 'When I fail in my duty, I readily acknowledge it, saying, I am used to do so. I shall never do otherwise if I am left to myself. If I do not fail, then I immediately give God thanks, acknowledging that it comes from Him.'" gpuccio
vjtorley: There still remains the problem of how an immaterial act of mine, such as thinking, can affect my body, as it must for free will to have any practical significance. Yes, how immaterial thinking can affect the body is not understood, but no, thought control over the body is not the significance of free will. A more precise statement might be "... as it must for free will to have any practical consequence", but I would quibble with that as well. We can think as we choose without limit. From Einstein's thought experiments on relativity to Hawking's theorizing on black holes. Einstein's body was under control of his mind, but Hawking's was not. I can decide with complete libertarian free will to jump off a cliff and fly, but gravity will limit the consequence of that decision, regardless of the control my thoughts have over my body. Having mental control over our bodies is essential to implement free will decisions, but moving our bodies is a consequence of free will, not a determinative pre-condition. ALS patients think with the same freedom, clarity and acuity as do the rest of us, but have degenerating control over their bodies. The distinction between decision and consequence is important to avoid needlessly encumbering the explanations of how and where decisions are made. In seeking an explanation of how an immaterial mind controls a material body, one is struck by the marvelous complexity of the immaterial mind; without being consciously aware, decisions to move a muscle are enacted against precisely the correct motor functions of the brain. Unlike fingers manually targeting keypads on a keyboard, the mind is automatically connected and interfaced to a myriad of physical controls and feedbacks activated singly or multiply, as well as vast and minute memory retrieval. 
A computer operating system is programmed with hundreds if not thousands of device control commands, and while we talk of an "operating system" as a singular monolithic entity, it is in fact thousands of subroutines associated with device control and memory management alone. Our minds, seemingly, have thousands of motor control and memory retrieval "subroutines", all precisely interfaced with different areas of the brain, and all of it working without our consciously having to select and activate any particular "subroutine". We think "lift arm" and it lifts. Unlike a computer operating system, we don't think "compose arm lift command sequence; address left arm driver; copy arm lift command sequence to left arm driver buffer; execute left arm driver; step arm muscle adapter through arm lift command sequence; return result code; ..." As complex as are our physical bodies and brains, our minds are "preprogrammed" to manage that complexity. Our minds practice becoming proficient in that management as we mature from infancy, but all the basic "body activation subroutines" seem built in from birth (or conception?) regardless of how monolithic the "mind" seems to be. Whether the body is uncontrollable as in an ALS patient, or controllable but irrelevant as in Einstein's thought experiments, free will exists and is exercised in the mind, regardless of the consequences and limitations imposed by the body and nature. A final point: Jesus pointed out (Mat 16:26), "For what will it profit a man if he gains the whole world and forfeits his soul? Or what will a man give in exchange for his soul?" Paul in 2 Cor 5 likens our material bodies to "tents" (temporary dwellings) in which we (implicitly, our souls) reside until our souls are absent from the body and present with the Lord. Christ died to save our souls, not our flesh. As complex and marvelous as are our physical bodies, our minds and souls seem even more so, and they are what God values, not our bodies.
The gift of physical life of earthly tents pales in comparison to the gift of eternal life of the immortal soul. Charles
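Charles's contrast between the mind's seamless "lift arm" and an operating system's explicit command sequence can be sketched in code. The class, method names, and command strings below are all hypothetical, invented purely to illustrate the "compose, address, copy, execute, check result code" style he describes:

```python
# A sketch of the explicit "device driver" style Charles contrasts
# with the mind's effortless "lift arm". All names (ArmDriver, the
# command strings, the result code) are hypothetical illustrations.

class ArmDriver:
    """Hypothetical driver that executes a buffered command sequence."""

    def __init__(self, side):
        self.side = side
        self.buffer = []

    def load(self, commands):
        self.buffer = list(commands)   # copy command sequence to driver buffer

    def execute(self):
        for cmd in self.buffer:        # step the "muscle adapter" through
            pass                       # ...each command in the sequence
        return 0                       # return result code: 0 means success

# The operating-system way: compose, address, copy, execute, check.
sequence = ["raise_shoulder", "flex_elbow", "rotate_wrist"]
left_arm = ArmDriver("left")           # address the left arm driver
left_arm.load(sequence)                # copy the sequence to its buffer
result = left_arm.execute()            # execute and collect the result code
print(result)  # 0
```

The point of the sketch is the bookkeeping: every step the mind performs implicitly (addressing, buffering, sequencing, error checking) must be spelled out explicitly in the software model.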
gpuccio:
Elizabeth: I think that after all our fundamental views are not compatible, although we certainly share many practical and human convictions (that is certainly very important). Your model, although "smoothed" by your very positive approach, remains essentially strong AI and, I am afraid, compatibilism. So, I have to disagree, in a friendly way (no big problem, after all).
No, no problem at all! But yes, my position is Strong AI and compatibilist AFAICT.
I will not insist on the main points I have already discussed. I can, maybe, add some considerations inspired by your last post. a) Why I detest the "emergent property" concept. To make it short: because it is vague, ill defined, ambiguous, and used essentially to support wrong statements. Its use to suggest that consciousness can emerge from a sum of parts is typical bad reasoning. To remain short, I take some examples of "emergence" from Wikipedia. The game of chess: "Indeed, you cannot even reliably predict the next move in a chess game. Why? Because the "system" involves more than the rules of the game. It also includes the players and their unfolding, moment-by-moment decisions among a very large number of available options at each choice point. The game of chess is inescapably historical, even though it is also constrained and shaped by a set of rules, not to mention the laws of physics. Moreover, and this is a key point, the game of chess is also shaped by teleonomic, cybernetic, feedback-driven influences. It is not simply a self-ordered process; it involves an organized, "purposeful" activity." Maybe because conscious, intelligent, purposeful agents are involved? "The shape and behaviour of a flock of birds [1] or school of fish are also good examples." And, I suppose, nobody understands what governs those mysterious phenomena. Design? So-called "self-organizing systems", a la Prigogine: "For example, the shape of weather phenomena such as hurricanes are emergent structures. The development and growth of complex, orderly crystals, as driven by the random motion of water molecules within a conducive natural environment, is another example of an emergent process, where randomness can give rise to complex and deeply attractive, orderly structures.
Water crystals forming on glass demonstrate an emergent natural process, where a high level of organizational structure is crafted directly by the random motion of water molecules.[citation needed] However, crystalline structure and hurricanes are said to have a self-organizing phase." But in all these examples, we understand the laws, the mathematics, and the mix of necessity and randomness that determines the result. There are explicit models, convincing models, for that "emergence". Nothing of that kind is true for consciousness.
But that isn't a criticism of emergence as a concept - it's just the claim that it isn't relevant to consciousness. I think it is.
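The flocking example both commenters mention is indeed one where explicit models exist: a single local rule, followed by every individual, yields global cohesion. The sketch below is a deliberately minimal one-dimensional toy (not a full Reynolds-style boids model, and not anything from the comments themselves): each "bird" drifts toward the average position of the flock, and the flock converges:

```python
# Toy illustration of "emergence": each bird follows one local rule
# (drift part-way toward the flock's mean position), yet the global
# result is a cohesive flock. A 1-D sketch, not a full boids model.

def step(positions, rate=0.5):
    """Move every bird a fraction `rate` of the way to the mean."""
    mean = sum(positions) / len(positions)
    return [p + rate * (mean - p) for p in positions]

birds = [0.0, 3.0, 10.0, 25.0]   # scattered starting positions
for _ in range(20):
    birds = step(birds)

spread = max(birds) - min(birds)
print(spread < 0.001)  # the flock has converged: True
```

Note that the rule is purely local and mechanical; "cohesion" appears nowhere in the code, which is the sense in which the global pattern is said to "emerge". The mean position is conserved by the update, so the flock converges on it rather than drifting away.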
b) You say: "I don't understand what this means." Well, consciousness is a fact. We directly perceive ourselves as conscious beings. This is a fact. All other facts, the perception of a tree or of a star, are possible only because we perceive ourselves as conscious, and because therefore we have conscious representations of the world. That's why I say that "consciousness is a fact, and it precedes, in our reconstruction of reality, the experience of matter and of the outer world." I hope it's clear now.
Yes, that is much clearer, thanks! Now, I simply disagree with it! I don't think that "the perception of a tree [is] only possible...because we have conscious representations of the world". I think that is circular. I think consciousness is the capacity to perceive things - that the second doesn't follow from the first, it is the first. And to support that claim I'd ask: what kind of consciousness would it be to perceive nothing at all? Including the absence of anything? I submit that if we perceived nothing at all, including the perception that we were perceiving nothing, we would not be conscious. In other words, I think consciousness would be better understood if we regarded it as a verb, like "to perceive", than solely as a noun or adjective ("consciousness"; "conscious"). Moreover, it should be a transitive verb. So let me coin one: to conch. A conscious being is one that is conching something. Conchousness is the state of conching things. Conching things covers a bit more than simply perceiving things, because we can conch things that are not present, and we can also conch the absence of things, as well as abstractions like "injustice" and "anger", and relations between things: causal relations; proximities; agents; intentional causality. I submit that by replacing "consciousness" with "the state of conching things" we omit nothing that is key to the original term, but we cast it in a grammatical form that allows us to ask answerable questions about how it works. We also completely dispense with duality, because instead of having "consciousness" on one hand and "brain" on the other, we have a brain capable of conching stuff.
c) I say: "Therefore, consciousness is the 'fact of all facts', and must have an independent reality in our map." You say you don't understand. Well, if we perceive a tree, what do we say? We say that it exists, and we include it in our map of reality. We don't expect to have a theory of trees explained on the basis of, say, stones before we admit their independent reality. Trees exist. We give them a name, and try to understand what they are and how they interact with the rest of reality.
Yes indeed. The way we conch trees is as objects in space. We also conch our own spatial relationship with the tree, as well as our own emotional relationship with it, maybe (there's an awesome oak tree a couple of miles from our house that never fails to raise my heart rate a little!) Our ability to conch is indeed a fact. But that doesn't mean it's an object like a tree, or even a phenomenon like a tree. It's more like the phenomenon "growth". A thing something does, not a thing something is.
The same must be true for consciousness. We perceive consciousness “before” perceiving a tree. It exists. We must include it in our map of reality. It is the fact of facts.
Yes, one of the things we can conch is conching. I'm doing it now :) Yes, it is included in our map of reality. I think that is key (and you are in danger of channelling Hofstadter here :)). My view is that the key to understanding self-consciousness (being able to conch that I am a conching thing) is our capacity to make a map of the world on which we not only place ourselves, but that includes the map itself. A bit like those model villages that include a model of the model village, that includes a model of the model village.... It is from that "Strange Loop" that "I" emerges - which is why Hofstadter really wanted to call his book: "I" is a "Strange Loop". Although the final title is cooler :)
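Liddle's image of the model village that contains a model of the model village has a direct analogue in code: a map that holds an entry for the map itself. A toy sketch of that self-reference (the dictionary keys are invented for illustration, and this is of course only a picture of the Strange Loop idea, not an argument for it):

```python
# The model village that includes a model of the model village:
# a "map of the world" that contains an entry for the map itself.

world_map = {"tree": "oak", "owner": "I"}
world_map["map"] = world_map      # the map now depicts itself

# Following the self-reference never bottoms out: the map of the
# map of the map ... is always the same one object.
inner = world_map
for _ in range(5):
    inner = inner["map"]

print(inner is world_map)  # True: every level is the same loop
```

Python handles the structure without trouble because the nesting is reference, not copy; there is one map, which happens to be one of its own contents, which is the formal shape of the loop Hofstadter describes.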
d) You say: "You seem to have at least an implicit dualist model in which a chooser (Entity A) makes a selection from options offered by Entity B (the brain?)" No, I have a model where a subject (the I) expresses itself through a complex interface (mind and brain/body) which works in both directions, an interface that it perceives and by which it is influenced, and which it can, to a point, influence. Like in a videogame, just to give an idea.
Now, when we are very intrigued by a videogame, we become very much identified with the interface. But if we “were” the interface, there would be no game: like when a demo rolls on, and you cannot intervene. When we play a game, we cannot do everything (unless we cheat). And what we can do at any moment is strongly influenced, and limited, by how we have played previously. But still we can originally influence the game, at any moment. But there is no doubt that a strong reciprocity exists, at any moment, between the interface (the game) and the subject (the player). Well, yes, but your interface implies dualism. Doesn't it? The interface may be complex but it remains an interface between TWO things, no? A chooser and a set of choices?
e) I don’t like Dennett. He is smart, but I don’t believe he uses his smartness correctly.
Well, you don't like emergence either, but it's scarcely an argument!
f) Let’s go to responsibility, and to the sad story of the father and child. I am sorry to say that a similar story recently happened here in Italy. Well, I don’t understand the point. Responsibility does not mean that we are responsible for everything which happens as a direct or indirect consequence of all that we do. That would be true only if our actions were completely free, and if we were omniscient and omnipotent. I have never thought that, and I don’t think that any serious defender of libertarian free will ever has.
No, but Dennett's point is that the father has the opportunity for a "self-forming act" - he can include, or exclude, his omission from his self. If he includes it, he accepts a truly terrible moral responsibility; if he does not, he escapes it, but diminishes his self. That is the entire point of Dennett's (superb IMO) book - that the answer to "what does 'I' refer to?" is "what you assign moral responsibility to". I, in other words, refers to the agent of our own actions. Which is very simple actually. The clever part is where we draw the agency boundaries - we can draw them wide, and take on a huge, perhaps unbearable, degree of moral responsibility, or we can draw them narrow, avoid moral responsibility, but define ourselves almost out of existence. As, interestingly, J.K. Rowling implies about Voldemort (yes I've just come back from the last film!) - he is reduced to a nothing - a whimpering shell of humanity, incapable of volition. And, in the same scene, there's a lovely exchange between Dumbledore and Harry (I wish I could recall the exact words). Harry asks "is this real? or is it just in my mind?". And Dumbledore says: "of course it's in your mind, Harry! That doesn't mean it isn't real". I felt like applauding!
So, what does responsibility mean? I think it is a very positive concept, rather than a harbinger of sin and guilt. Responsibility means that we are free in the measure that we can “influence” our destiny. Sometimes we can influence it very little at present, but the cumulative action of good use of free will can build important results, in time.
Yes indeed. I agree with all that, and as far as I can see, it is included in my conceptualization.
That’s why we must never judge others (a concept which is well expressed in many religious paths). We cannot understand. We are not aware of the true context. But we can inspire others to change, if we believe, and make them believe, that they can change for the better. Gradually, patiently. Because it is true that they can change. Because, however difficult their present condition may be, they still have free will, and can gradually change it.
Yes, absolutely :) We do seem to have a lot in common!
In the same way, we should not judge ourselves. We cannot understand. We don’t know the true context. But, at the same time, we have the duty and the privilege to know that we are free, that we can change for the better. Whatever our condition is.
Yes. "Love your neighbour as you love yourself" is still the crowning precept for me, and we should not forget the last part. Sometimes we have to forgive ourselves.
g) You say: “I think we already have AI robots that are conscious of something extremely simple.” I don’t agree. Is that just your imagination? Have you any evidence?
Yes, as long as we use my approach. Let me rephrase using my new word: "I think we already have AI robots that can conch simple things". We know they do because they alter their behaviour in response to those things. The behaviour, moreover, is not stereotyped and reflexive - it demonstrates the taking into account of distal as well as proximal goals, and the balancing of them; the robots can learn from experience; they can plan ahead - anticipate obstacles and take avoiding action. They even have a map of the world on which they feature, and which is constantly updated. This is what we do when we are conscious of our place in the world.
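For what it's worth, the capacities listed here - a map of the world on which the agent itself features, and planning ahead around obstacles - are routine in robotics. Below is a minimal sketch (the grid, names, and layout are mine for illustration, not any actual robot's software) of an agent that holds a map including itself and plans a route to a goal that anticipates the obstacles in the way:

```python
from collections import deque

# Toy world map: 'A' = the agent (it features on its own map),
# '#' = obstacles, 'G' = the goal. All invented for this sketch.
GRID = [
    "A..#.",
    "..##.",
    ".....",
    "...#G",
]

def find(ch):
    """Locate a symbol on the map (including the agent itself)."""
    for r, row in enumerate(GRID):
        if ch in row:
            return (r, row.index(ch))

def plan(start, goal):
    """Breadth-first search: anticipate obstacles and route around them."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))

route = plan(find("A"), find("G"))
```

Whether behaviour produced this way amounts to "conching" is of course exactly the point in dispute between the two commenters.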
When one is conscious, one is conscious. One has conscious representations. Dim or vivid, but conscious. It is not a problem of complexity. We can have very deep and intense and meaningful representations which are extremely simple. Love, especially when it is deep and pure, is very simple. Pain can be very simple, and yet excruciating and terrible. A mathematical demonstration can be boring and unimportant for the student who goes through it.
I don't dispute any of those statements, although I find some of them a little non-informative! So let me rewrite your first few sentences with my new word (sorry if this is irritating, but I find it a useful exercise): "When one is conching things, one is conching things. We call the products of our conching of things 'representations' of those things. Those representations may be dim or vivid."
Where are your conscious robots? Why do you believe they are conscious? Is that a just-so story?
See above.
h) You say: “And, I would argue, that the “sum of parts” is an inadequate description of the whole.” But consciousness is radically different. It is based on simplicity, on oneness. The I is a point that perceives, not a complex structure. The simple I perceives complex structures, may even identify with them. But, at any moment, the subject can “recede” at a metalevel, and what seemed to be part of the I becomes something observed by the I. But the I is always there, still perceiving.
Well, I don't agree that "the I is a point that perceives, not a complex structure". I think "I" is the name we give to the thing doing the perceiving - the conching. It's a simplifying, unifying label, just as "universe" is the name we give to the entirety of the world. Or, alternatively "reality". And yes, the subject of our attention - of our perception, or consciousness - can change moment by moment, including the perception of ourselves as observers. But I think the perception that consciousness is a state of flow - "the I is always there, still perceiving" - actually is an illusion. My favorite analogy is the fridge light - because the fridge light is always on when we need it, we have no perception that it ever goes off. We cannot "catch it" in the off state. And of course, that is all we need - we do not need the light to be on when we are not looking in the fridge. I suggest the same is true of our perception of ourselves - because we can be conscious of ourselves whenever we need to be, or, for that matter, of anything, as soon as we need to be, we have the illusion that we are always aware of everything - that the fridge light is always on. And it doesn't matter that it isn't. It's just more energy efficient :) Elizabeth Liddle
Ilion and tragic mishap, Thank you both for your comments. I'd like to refer you both to a paper by Fr. John O'Callaghan, entitled From Augustine's Mind to Aquinas' Soul, which traces the change in Aquinas' thinking from his youthful view that a human being is a soul, that this soul has two parts (the vegetative/sensitive soul and the mind), that the mind is the higher part of the soul, and that will, memory and intellect are the three powers of the mind (Augustine's view) to Aquinas' later and more mature view that a human being has a soul, that each of us has one principle of life (not two), that the life of man includes everything from his vegetative functions to his intellect, that intellect and will are not powers of some thing called "the mind" but are simply two immaterial powers of the soul, and that the death of a human being is the death of a person. Hence a separated soul is not a human person, even though it is conscious and able to think, because it lacks a body. I gather that Ilion's view is much closer to Augustine's opinion than to Aquinas'. I would just like to ask him if he accepts that the soul is the form of the body. A few quick remarks: (1) From the earliest times, the Church has prayed for the souls of the faithful departed. I can't recall any prayer of the Church which refers to dead people as such. I don't have a big problem with the idea that when I die I will no longer be a person. I would say however that I will still be me; it's just that my "me"-ness will be incomplete, in the absence of my body, until the resurrection of the dead takes place. (2) For my part, I don't equate personal identity to brain identity. If my head could be transplanted to someone else's body, I don't think it would be me. I think my identity is bound up with my nervous system as well as my brain, and if someone's brain were transplanted into my body, I think that body would still be me. 
(3) The fact that the particles in my body are in continual flux is irrelevant. What matters is that the form perdures, as can be seen from the fact that the overall structures of the organs remain the same, and the parts continue to function as a unified whole. (4) I can't for the life of me see why an Aristotelian would want to defend Darwin. As I showed in parts 1 and 2 of my five-part reply to Professor Tkacz, Darwin and Aristotle (and especially Aquinas) don't mix. (5) tragic mishap writes that if I did not believe mind and body were two things, I would not have an interaction problem to solve. Not so. I believe that one and the same being is capable of both material and immaterial acts. There still remains the problem of how an immaterial act of mine, such as thinking, can affect my body, as it must for free will to have any practical significance. That is the problem that my posts on the interaction problem and libertarian free will were intended to address. Hope that helps. vjtorley
All you have to do is trot out Aristotle and suddenly good theists start defending Darwinism and attacking ID (like Feser). It disgusts me. tragic mishap
Exactly. If Mr. Torley did not believe that mind and body were two different things, then he would not have an interaction problem to solve. Thus I'm beginning to wonder whether the difference between hylomorphic dualism and substance dualism is a distinction without a difference. Unless of course hylomorphic dualism is required for consistency with Aristotelian cosmology. Aristotle and the Catholic Church: A match made in hell. They are lucky the materialists are covering for it on the geocentrism issue by blaming it on the Bible. Otherwise people might realize there has been no greater source of confusion, error and general BS in all of Western history. Darwinism doesn't even come close. Aristotle wins for stamina and deeper penetration. tragic mishap
Tragic Mishap @ 26: "VJ, what exactly is your objection to substance dualism? You claim to have solved the only major problem with it." VJTorley @ 35: "In response to your question: I don’t think Cartesian substance dualism does justice to the unity of the human person. Mind and body are not two things; each of us is one being. That’s why I reject Cartesian substance dualism in favor of what Elizabeth has dubbed “I-body” dualism." Tragic Mishap @ 36: "So would you say that a person is like the Trinity, consisting of two distinguishable parts which are still wholly one being? I’m not that familiar with Descartes, but I sort of doubt that Descartes was saying the mind and body were not both part of a singular human being." I’d like to echo TM’s comment about Descartes. ... and add other comments: 1) Concerning the "official" RCC position on "Cartesian substance dualism" vs actual lived Catholicism -- Listening to folk like Edward Feser bang on about, or to a lesser extent, folk like Mr Torley speak about (forgive me if I am mis-remembering), the superiority of "hylomorphic unity" to "Cartesian substance dualism", one understands that one of the things logically entailed by the "hylomorphic unity" concept is that dead human persons do not (and cannot) exist when/while they are dead. Yet, Catholics, all over the world, pray to dead people, every day. 2) Concerning the statement that "Mind and body are not two things; each of us is one being." -- Indeed, we are each one being -- and, yet the immaterial mind and the material body are two different things; it's not just that we can talk about them as though they are, it's that they are. Consider the following thought-experiment -- If some hypothetical person's foot is amputated, does he cease to exist? Does he become a different person? If our HP's arms and legs are amputated, does he cease to exist? Does he become a different person? 
If each of our HP's internal organs is, in turn, removed (from what is left of him) and replaced with a built-machine capable of performing the organ's life-support functions, does he cease to exist? Does he become a different person? If the whole of the trunk of our HP's body is amputated (and appropriate life-support machinery connected to his now decapitated head), does he cease to exist? Does he become a different person? We know that the answer to these questions is "No." We know that through all these changes to -- and elimination of the parts of -- the body, the human person remains himself, and remains a unified being and remains whole. Therefore, we know that neither the existence of the human person, nor his unity, nor his wholeness as a being, depends upon these bodily parts. But, what of what is left of the body? Does the being of the human person depend upon, or follow from, the head (as a whole) or some individual part of it? If what little remains of the original body of our hypothetical person is, in turn, removed, until only the brain remains, does he cease to exist? Does he become a different person? Again, the answer is "No." Therefore, we know that neither the existence of the human person, nor his unity, nor his wholeness as a being, depends upon *those* bodily parts. But, what of what is left of the body? What of this, now, "brain in a box"? Does the being of the human person depend upon, or follow from, his brain? Does the mind of the human person depend upon, or follow from, his brain? So-called atheists -- and other materialists -- to the extent that they even acknowledge that human persons, and human minds, really do exist, will say "Yes". But, then, they have no other logical option, given their explicit or implicit commitment to materialism. They would say, in effect, that "Your brain is you." 
On the other hand, those few persons in the world who are not implicit philosophical materialists, will say "No" (else they'd be either explicit or implicit materialists); they would say, in effect, that "You are not your brain." Can this final question be resolved? Can it be answered without simply asserting one of the only two possible answers? Certainly, we could continue the above thought experiment, removing, in turn, certain scientifically identified parts of the brain of our hypothetical human person, until only a very few distinctly identified parts-of-the-brain remain and observing that he still exists, that he is still himself (however much he may rage at what we have done to him). Yet, there comes a point when that particular thought-experiment seems to reach the end of its utility: we have reached some state of a minimal "body" required for the existence of a human person, and the thought-experiment cannot go further ... else I would be simply asserting the denial of the broadly materialistic assertion about the nature of human persons. Or can it go further? Yes, it can, with a slight change of focus. What I have been doing in this thought-experiment -- removing the parts of the body and in some cases (when necessary for the continuance of biological life), replacing the parts with something else -- is already going on, continuously, throughout the bodies of each of us, including in our brains. Individual cells are "born" and die, continuously, but the person remains himself and remains a unified being. And, at a deeper level of biology, the individual cells of the body are constantly replacing the matter of which they are constructed with different matter, yet the continuance and unity of the human person, and the reality of the human mind is unaffected. 
We, each of us, are "made" of different matter from when we were born; and, for that matter, at this very instant we're not materially "made" of exactly the same matter, or configuration of matter, as we were just a moment ago. Therefore, the reality (and unity) of the human person -- the 'mind', the 'self,' the 'free will,' the 'I' -- is not in any way dependent upon matter. You exist independently of your body -- it is not you, and you are not it. The only options logically available are: 1) acknowledge that truth; 2) deny that you even exist at all, as the explicit materialists, who actually understand where they stand, assert. "But, but, but ..." you may whinge, "Ilíon, you haven't 'explained' how it is that the immaterial self/mind moves the material body; you haven't solved the so-called 'mind-body problem'"; to which I reply, "You have a point?" One may recall, in Mr Torley's "Why I think the interaction problem is real" thread, a certain conclusion I stated:
So, the choice is between a world-view that we can tame — but which is a known engine for generating false claims — and one that we cannot tame — but which generates no false claims.
Truth is truth, even if we cannot think of a way to "tame" it. The human mind, the human self, the human person, is distinct from -- and, ultimately, separate from -- his body, even if we fail to understand how to "explain" that truth in light of the implicit (or explicit) "folk materialism" (to paraphrase the oh-so-superior phrase of the Churchlands and Dennett and others) by which we reflexively, and habitually, seek to understand reality. Ilion
One should not look for the car keys under the street-light when he knows that they are somewhere else. Ilion
--"Well … there goes the neighborhood." When one is in search of a thought, he should not give up until he finds it. StephenB
Elizabeth: I think that after all our fundamental views are not compatible, although we certainly share many practical and human convictions (that is certainly very important). Your model, although "smoothed" by your very positive approach, remains essentially strong AI and, I am afraid, compatibilism. So, I have to disagree, in a friendly way (no big problem, after all). I will not insist on the main points I have already discussed. I can, maybe, add some considerations inspired by your last post. a) Why I detest the "emergent property" concept. To make it short, because it is vague, ill-defined, ambiguous, and used essentially to support wrong statements. Its use to suggest that consciousness can emerge from a sum of parts is typical bad reasoning. To remain short, I take some examples of "emergence" from Wikipedia: The game of chess: "Indeed, you cannot even reliably predict the next move in a chess game. Why? Because the “system” involves more than the rules of the game. It also includes the players and their unfolding, moment-by-moment decisions among a very large number of available options at each choice point. The game of chess is inescapably historical, even though it is also constrained and shaped by a set of rules, not to mention the laws of physics. Moreover, and this is a key point, the game of chess is also shaped by teleonomic, cybernetic, feedback-driven influences. It is not simply a self-ordered process; it involves an organized, “purposeful” activity." Maybe because conscious, intelligent, purposeful agents are involved? "The shape and behaviour of a flock of birds [1] or school of fish are also good examples." And, I suppose, nobody understands what governs those mysterious phenomena. Design? So-called "self-organizing systems", a la Prigogine: "For example, the shape of weather phenomena such as hurricanes are emergent structures. 
The development and growth of complex, orderly crystals, as driven by the random motion of water molecules within a conducive natural environment, is another example of an emergent process, where randomness can give rise to complex and deeply attractive, orderly structures. Water crystals forming on glass demonstrate an emergent natural process, where a high level of organizational structure is crafted directly by the random motion of water molecules.[citation needed] However, crystalline structure and hurricanes are said to have a self-organizing phase." But in all these examples, we understand the laws, the mathematics, and the mix of necessity and randomness that determines the result. There are explicit models, convincing models, for that "emergence". Nothing of that kind is true for consciousness. And so on. b) You say: "I don't understand what this means." Well, consciousness is a fact. We directly perceive ourselves as conscious beings. This is a fact. All other facts, the perception of a tree or of a star, are possible only because we perceive ourselves as conscious, and because therefore we have conscious representations of the world. That's why I say that: "consciousness is a fact, and it precedes, in our reconstruction of reality, the experience of matter and of the outer world." I hope it's clear now. c) I say: "Therefore, consciousness is the “fact of all facts”, and must have an independent reality in our map." You say you don't understand. Well, if we perceive a tree, what do we say? We say that it exists, and we include it in our map of reality. We don't demand a theory of trees explained on the basis of, say, stones before admitting their independent reality. Trees exist. We give them a name, and try to understand what they are and how they interact with the rest of reality. The same must be true for consciousness. We perceive consciousness "before" perceiving a tree. It exists. We must include it in our map of reality. It is the fact of facts. 
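The contrast gpuccio draws here can be made concrete: for flocking we really do have explicit models, such as Reynolds' Boids rules. Below is a deliberately minimal sketch (assumptions mine: one-dimensional positions and a single cohesion rule, with every bird able to see the whole flock; real Boids adds separation and alignment) showing global order arising from a simple stated local rule - exactly the kind of explicit model he says is missing for consciousness:

```python
import random

# Toy flock: 20 birds scattered along a line. Numbers invented
# for illustration only.
random.seed(1)
positions = [random.uniform(0.0, 100.0) for _ in range(20)]

def step(pos):
    """Cohesion rule: each bird drifts a little toward the flock centre."""
    centre = sum(pos) / len(pos)
    return [p + 0.2 * (centre - p) for p in pos]

spread_before = max(positions) - min(positions)
for _ in range(30):
    positions = step(positions)
spread_after = max(positions) - min(positions)
# The flock contracts: global cohesion emerges from the stated rule,
# and the mathematics (geometric decay of each deviation) is fully known.
```

The point is not that flocking is mysterious, but the opposite: the "emergence" is fully accounted for by an explicit model.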
d) You say: "You seem to have at least an implicit dualist model in which a chooser (Entity A) makes a selection from options offered by Entity B (the brain?)" No, I have a model where a subject (the I) expresses itself through a complex interface (mind and brain/body) which works in both directions, an interface that it perceives and by which it is influenced, and which it can, up to a point, influence. Like in a videogame, just to give the idea. Now, when we are very intrigued by a videogame, we become very much identified with the interface. But if we "were" the interface, there would be no game: like when a demo rolls on, and you cannot intervene. When we play a game, we cannot do everything (unless we cheat). And what we can do at any moment is strongly influenced, and limited, by how we have played previously. But still we can originally influence the game, at any moment. But there is no doubt that a strong reciprocity exists, at any moment, between the interface (the game) and the subject (the player). e) I don't like Dennett. He is smart, but I don't believe he uses his smartness correctly. f) Let's go to responsibility, and to the sad story of the father and child. I am sorry to say that a similar story recently happened here in Italy. Well, I don't understand the point. Responsibility does not mean that we are responsible for everything which happens as a direct or indirect consequence of all that we do. That would be true only if our actions were completely free, and if we were omniscient and omnipotent. I have never thought that, and I don't think that any serious defender of libertarian free will ever has. So, what does responsibility mean? I think it is a very positive concept, rather than a harbinger of sin and guilt. Responsibility means that we are free in the measure that we can "influence" our destiny. Sometimes we can influence it very little at present, but the cumulative action of good use of free will can build important results, in time. 
That's why we must never judge others (a concept which is well expressed in many religious paths). We cannot understand. We are not aware of the true context. But we can inspire others to change, if we believe, and make them believe, that they can change for the better. Gradually, patiently. Because it is true that they can change. Because, however difficult their present condition may be. they still have free will, and can gradually change it. In the same way, we should not judge ourselves. We cannot understand. We don't know the true context. But, at the same time, we have the duty and the privilege to know that we are free, that we can change for the better. Whatever our condition is. g) You say: "I think we already have AI robots that are conscious of something extremely simple." I don't agree. Is that just your imagination? Have you any evidence? When one is conscious, one is conscious. One has conscious representations. Dim or vivid, but conscious. It is not a problem of complexity. We can have very deep and intense and meaningful representations which are extremely simple. Love, especially when it is deep and pure, is very simple. Pain can be very simple, anr yet excruciating and terrible. A mathemathical demonstration can be boring and unimportant for the student who goes through it. Where are your conscious robots? Why do you believe they are cosncious? Is that a just so story? h) You say: "And, I would argue, that the “sum of parts” is an inadequate description of the whole. " But consciousness is radically different. It is based on simplicity, on oneness. The I us a point that perceives, not a complex structure. The simple I percieves complex structures, may even identify with them. But, at any moment, the subject can "recede" at a metalevel, and what seemed to be part of the I becomes something observed by the I. But the I is always there, still perceiving. well, I believe that's enough. Good night :) gpuccio
vjtorley:
That’s why I reject Cartesian substance dualism in favor of what Elizabeth has dubbed “I-body” dualism.
And I reject "I-body" dualism for the same reason as I reject particle-wave dualism :) Elizabeth Liddle
So would you say that a person is like the Trinity, consisting of two distinguishable parts which are still wholly one being? I'm not that familiar with Descartes, but I sort of doubt that Descartes was saying the mind and body were not both part of a singular human being. tragic mishap
tragic mishap (#26) In response to your question: I don't think Cartesian substance dualism does justice to the unity of the human person. Mind and body are not two things; each of us is one being. That's why I reject Cartesian substance dualism in favor of what Elizabeth has dubbed "I-body" dualism. vjtorley
gpuccio:
Elizabeth: I appreciate your post, but I am confused. You say you are a monist. That can be OK for me, but what kind of monist? Are you a materialist monist? The I you speak of, what kind of entity is it for you?
As I've said before, I try to avoid labels where possible, because of the baggage they tend to have attached to them! By monist, I mean I don't think that we consist of a soul + body. I think they are different aspects of the same thing.
If you are a materialist monist, then I suppose the I you speak of must be some formal property of assembled matter, maybe an “emergent property” (a concept, I am afraid, that I really detest). Is that your position?
Well, probably. Why do you "detest" it?
May I ask why you changed your mind? Now I have not the time to look at your referenced blog, but maybe a short summary from you could help.
The blog posts themselves are quite short. I'm not sure I could do it again in less.
I am not interested in monist-dualist debate.
OK.
I try to stay empirical.
Well, me too.
For me, as I have said, consciousness is a fact, and it precedes, in our reconstruction of reality, the experience of matter and of the outer world.
I don't understand what this means.
Therefore, consciousness is the “fact of all facts”, and must have an independent reality in our map.
Nor this, probably because I can't parse your premise.
Your “reconstruction” of my thought is fine, except that it apparently becomes contradictory at a specific point: “Different choices available at each moment to an individual” that’s a fine start: different choices “are available”: the individual can choose, if words still mean anything.
Yes indeed.
“may be intuitively “felt” implicitly by the individual as better, or worse” OK “or much more explicitly reasoned, using language to articulate the alternatives (vocally or subvocally)” Maybe not exactly what I think. The role of reason, for me, is more in presenting the choices, so that the individual may choose. Given the choices, given the influence of reason or of other inner faculties, in the end the individual can still choose: and his inner, intuitive moral conscience is the only faculty that feels if the choice, whatever it is, is good or bad. Anyway, let’s go on to the most important point: “At a neural level, this arises from competition between networks implicated in executing alternative courses of actions, the simulation of their consequences, and the feeding back of those simulated consequences as inhibitory or excitatory input into the competing networks.” This is the point. The neural activity “preceding” the choice is in the end only one of the “previous states” that influence the choice, without determining it. Let’s see the following sentence: “The chosen action is the one that corresponds to the “winning” network.” No. Here our views differ radically. There is no winning network.
Well, neurally there is :)
Of the possible winning networks, there is one that wins because “the individual”, the “I”, chooses to let it win.
And I'm saying that the way that choosing operates is via competition between networks. This is where the monist-dualist distinction cuts in. You seem to have at least an implicit dualist model in which a chooser (Entity A) makes a selection from options offered by Entity B (the brain?) I'm uniting that into a single model, whereby the winning action is the one that receives the most excitatory input, and that input includes all kinds of factors including the simulation of the moral consequences of each action. In the end, though, both yours and mine are models, and yours in many ways is the more efficient model for daily use. However, as a brain scientist I need one that maps better on to what we know about the brain. And the one I have seems to do so without losing the essentials of the dualist model - we still have a chooser, but instead of splitting the roles between the presentation of options and the selection of options, the two processes are intimately interlinked via excitatory and inhibitory neural connections, and an iterative process from which the choice emerges. In other words, you assign to the "I" only the selection part; I assign to the "I" both the option-presenting part and the selection part, the two being close-coupled into a single, iterative, re-entrant mechanism. Like you, I am an empiricist, and this is what the data indicate. My larger point, though, is that it is neither "reductionist" nor "eliminative". It is simply locating agency in a distributed, iterative, re-entrant decision-making system rather than at the top of a hierarchical one.
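The competition-between-networks idea described here is often modelled as a race between mutually inhibiting accumulators. The sketch below is a toy of that scheme (all numbers and the two-unit structure are mine, not a claim about real cortex): each candidate action accumulates its excitatory input, inhibits its rival, and the "choice" is whichever unit crosses threshold first.

```python
# Toy mutual-inhibition race between two candidate actions.
# All parameters are invented for illustration.

def compete(drive_a, drive_b, inhibition=0.3, threshold=10.0, steps=200):
    a = b = 0.0
    for t in range(steps):
        # Each unit is excited by its own input (which, in the text's
        # terms, includes simulated consequences fed back as input)
        # and inhibited by its rival's current activity.
        a, b = (a + drive_a - inhibition * b,
                b + drive_b - inhibition * a)
        a, b = max(a, 0.0), max(b, 0.0)  # activity cannot go negative
        if a >= threshold or b >= threshold:
            return ("A" if a > b else "B"), t
    return None, steps

# The action with the stronger overall excitatory drive wins the race.
choice, when = compete(drive_a=0.6, drive_b=0.4)
```

Note how the "winner" is nowhere selected by a separate chooser; it emerges from the iterative, re-entrant dynamics, which is the crux of the disagreement in this exchange.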
If the winning of a network were only the result of the competing activity of different neural networks, whether by necessity, chance, or a mix of the two, then what meaning would all your talk about the I, the individual, and moral responsibility have? You are just confounding the words, like any compatibilist! (Well, that was really a heavy offence, I apologize :) ).
Oh, don't worry, I think I'm already heavily tarred with the compatibilist brush! Well, I'm afraid I can't do better on the subject of moral responsibility than quote Dennett, who, though not himself a neuroscientist, is extremely well informed. He starts his book (Freedom Evolves) with a heart-rending story (I think a true one) of a father who parks his car with his small son on the back seat, meaning to take him into the workplace childcare centre, as usual, but forgets. When he goes to the childcare centre later to pick up his son, the childcare people say he never dropped him off. He then finds his son, dead, in the hot car in the car park. Horrible story. I just hope it is fiction (but even if so, it could happen so easily). It's a good tale though, for illustrating issues of personal responsibility: "It is time to recall the plight of the hapless father from Chapter 1, who bears responsibility - doesn't he? - for the death of his child. Presumably everybody has a breaking point; those who happen to encounter their personal breaking point break! How can it be fair to hold them responsible and punish them, just because some other person wouldn't have broken if faced with exactly their predicament? Isn't he just the victim of bad luck? And isn't it just your good luck not to have succumbed to temptation or had your weaknesses exploited by some conspiracy of events? Yes, luck figures heavily in our lives, all the time, but since we know this, we take the precautions we deem appropriate to minimise the untoward effects of luck, and then take responsibility for whatever happens. We can note that if he makes himself really small he can externalize this whole episode in his life, almost turning it into a bad dream, a thing that happened to him, not something he did. Or he can make himself large, and then face the much more demanding task of constructing a future self that has this terrible act of omission in its biography.
It is up to him, but we may hope he gets a little help from his friends. This is indeed an opportunity for a Self-Forming Action...." Dennett, in other words, ingeniously, and absolutely rightly, IMO, equates the formation of the self with the act of taking responsibility. As he says: "if we make ourselves really small, we can externalise virtually anything". Conversely, by defining ourselves as the agent morally responsible for our actions we bring ourselves into being. We "ensoul ourselves" as you might say :)
In a neural network, there is no I. There is no individual. Only a sum of parts interacting according to laws. There is no freedom, no choice, no consciousness.
Depends on the network. And, I would argue, that the "sum of parts" is an inadequate description of the whole. I've made this point a few times - there are a large number of non-controversial examples of where a "sum of parts" omits the properties of the whole. The parts of a neural network certainly have nothing corresponding to "I". But that does not mean that the whole may not. And, in the case of the almost unimaginably complex neural networks that comprise a human brain, I think it does. At least, I don't see why it shouldn't.
Do you really believe, like the AI guys, that if you simulate the software running in neurons, consciousness will arise?
Well, as I've said elsewhere, I don't myself think it makes sense to talk about consciousness without saying what the entity concerned is supposed to be conscious of. I think we already have AI robots that are conscious of something extremely simple. But if we ask ourselves "what would it be like to be that robot" the answer would have to be: "not much". For an AI robot to have anything resembling even animal consciousness, let alone human consciousness and moral responsibility, the thing would have to be complex beyond our wildest imaginings. And I think the only way we could ever achieve it would be a) using nanotechnology (to make it small enough to be mobile) and b) using evolutionary algorithms. But then we have those robots already - they are called us :)
No. Never has it arisen that way, and never will. Consciousness just exists, it does not arise from assembled parts of matter.
I disagree. I think we are assembled parts of matter, and we are conscious. I don't think our conscious capacity arises from anything other than our assembled parts. But that is not (to repeat myself!) to say reduce our selves to our assembled parts, nor is it to say that we do not exist as agents or that we are not conscious. It is simply to say that our existence as autonomous moral agents is a property of the vastly complex and interactive arrangement of material of which we consist. It is not a property of any of our parts.
If we have to be monists, let consciousness be our substance. Of it, and only of it, we are really sure.
Well, it's an attractive model, but I don't think it maps on to our data terribly well. Elizabeth Liddle
tragic mishap: Because consciousness is perceived in ourselves, and therefore is a fact. If we derive our description of conscious processes and their properties from our experience, and try to connect those properties to other parts of our experience (interaction with the outer world), we are really working on an empirical basis. For example, I have defined dFSCI in a completely empirical way, using the empirical concept of a conscious intelligent being both to define design and to define functional specification, two key steps of my definition. And yet, no metaphysical theory of consciousness is required for that, only the practical acceptance of the fact that conscious intelligent agents exist, that they can be observed either directly (inwardly) or indirectly, that conscious processes can be described and that common words we use (meaning, purpose, function) have a well defined correspondence with specific observed subjective conscious representations, while they cannot be defined objectively in any way. This is all empirical, for me. Observed facts, and reasonable inferences on the observed facts. The only important point is that I include consciousness and its representations in the observed facts. And I am very happy with that. gpuccio
gpuccio:
I try to stay empirical. For me, as I have said, consciousness is a fact, and it precedes, in our reconstruction of reality, the experience of matter and of the outer world.
Does not compute. How is a metaphysical argument from consciousness empirical? tragic mishap
Elizabeth: I appreciate your post, but I am confused. You say you are a monist. That can be OK for me, but what kind of monist? Are you a materialist monist? The I you speak of, what kind of entity is it for you? If you are a materialist monist, then I suppose that the I you speak of must be some formal property of assembled matter, maybe an "emergent property" (a concept, I am afraid, that I really detest). Is that your position? May I ask why you changed your mind? Now I have not the time to look at your referenced blog, but maybe a short summary from you could help. I am not interested in the monist-dualist debate. I try to stay empirical. For me, as I have said, consciousness is a fact, and it precedes, in our reconstruction of reality, the experience of matter and of the outer world. Therefore, consciousness is the "fact of all facts", and must have an independent reality in our map. Your "reconstruction" of my thought is fine, except that it apparently becomes contradictory at a specific point: "Different choices available at each moment to an individual" that's a fine start: different choices "are available": the individual can choose, if words still mean anything. "may be intuitively “felt” implicitly by the individual as better, or worse" OK "or much more explicitly reasoned, using language to articulate the alternatives (vocally or subvocally)" Maybe not exactly what I think. The role of reason, for me, is more in presenting the choices, so that the individual may choose. Given the choices, given the influence of reason or of other inner faculties, in the end the individual can still choose: and his inner, intuitive moral conscience is the only faculty that feels if the choice, whatever it is, is good or bad.
Anyway, let's go on to the most important point: "At a neural level, this arises from competition between networks implicated in executing alternative courses of actions, the simulation of their consequences, and the feeding back of those simulated consequences as inhibitory or excitatory input into the competing networks." This is the point. The neural activity "preceding" the choice is in the end only one of the "previous states" that influence the choice, without determining it. Let's see the following sentence: "The chosen action is the one that corresponds to the “winning” network." No. Here our views differ radically. There is no winning network. There is, among the possible winning networks, one that wins because "the individual", the "I", chooses to let it win. If the winning of a network were only the result of the competing activity of different neural networks, whether by necessity, chance, or a mix of the two, then what meaning would all your talk about the I, the individual, and moral responsibility have? You are just confounding the words, like any compatibilist! (Well, that was really a heavy offence, I apologize :) ). In a neural network, there is no I. There is no individual. Only a sum of parts interacting according to laws. There is no freedom, no choice, no consciousness. Do you really believe, like the AI guys, that if you simulate the software running in neurons, consciousness will arise? No. Never has it arisen that way, and never will. Consciousness just exists, it does not arise from assembled parts of matter. If we have to be monists, let consciousness be our substance. Of it, and only of it, we are really sure. gpuccio
gpuccio: Thank you for your direction to this post from the other thread. In some ways what you write below conforms very precisely to the view I held until about three years ago (which is why I take full responsibility for my current views! Yes, they were changed during the course of discussion with others, but no-one coshed me in a back alley and force-fed me with "eliminative materialism" - I simply considered alternative arguments and found myself persuaded. The record, if you are interested, can be found here: http://www.freeratio.org/thearchives/showthread.php?p=4518826#post4518826 (but it's a very long thread! - the Damascene moment is here: http://www.freeratio.org/thearchives/showpost.php?p=4578996&postcount=696) (not sure whether you have to be logged in to view - if so, I will repost the relevant posts here). And to make it completely clear here as well: I do not deny the existence of "I". In fact the only thing I have "eliminated" is duality. I find that everything can be perfectly well accommodated in a monist model, including free will.
c) Quantum indeterminism is, at present, the best “window” we have (given our scientific understanding of these things, which is certainly rough) to “join” strict determinism and free will of agents. d) The way the agent (my “transcendental I”) influences outer events is anyway still a big mystery. e) There is no doubt that both outer and inner previous states (including character etc.) influence (but do not determine) free will. My idea is that previous states (the total sum of them, including all our personal past and therefore our previous exercise of our free will) determine the “range” of choice we have in each single situation: what I call the “level of freedom” of each individual at each time. IOWs, all individuals have free will, but each one has different levels of freedom (can act from different ranges of options).
If you do get access to the first post in my IIDB thread (entitled "why I am a theist"), you may recognise your own views in mine :) I'll be interested to see what you make of them (though I no longer hold them). However, my current views pretty well resemble what you state below, except that I would recast them in monist terms.
f) The final point, and IMO the most important one, is the following: free choices not only are not determined, not only are not random. They are not “neutral”. IOWs, different free choices in each individual situation have different “moral” values for the individual, and affect differently his personal future. IOWs, different choices available at each moment to an individual are intuitively “felt” by the individual’s consciousness as connected to a moral “field”: some of them are better, some of them are worse. Reason has a role in that, but I believe that it is not the only factor, and that the final property of “moral conscience” is essentially intuitive, and directly perceived by the transcendental I. That “moral” property of free choices is the natural basis of responsibility, and is the reason why our present use (good or bad) of free will influences our future “level of freedom”: good choices increase our level of freedom, bad choices reduce it.
Let me try to phrase the above in my own terms: "Free choices cannot be determined solely by proximal factors, nor by random (e.g. quantum) factors. Neither kind of choice could be said to be free - the first because the action would be entirely predictable given a handful of local immediate factors, the second because it excludes the agent we are interested in (someone who tosses a coin to make a decision has delegated the decision making to the coin). For a choice to be both free, and informed, it must also be determined by consideration of the long-term (i.e. distal) effects of the chosen action not only on ourselves but on others. In other words those actions are not morally “neutral”. Different choices available at each moment to an individual may be intuitively “felt” implicitly by the individual as better, or worse, or much more explicitly reasoned, using language to articulate the alternatives (vocally or subvocally). At a neural level, this arises from competition between networks implicated in executing alternative courses of actions, the simulation of their consequences, and the feeding back of those simulated consequences as inhibitory or excitatory input into the competing networks. The chosen action is the one that corresponds to the "winning" network. Whether this choice is reached implicitly ("intuitively") or explicitly (reasoned), the agent responsible for the decision is perceived as the self, and referred to, in language, by the first person pronoun. Thus the referent for the word "I" (in English) is a "transcendent" entity - an entity with properties quite different from, and much more widely effective than, those of any of the subprocesses from which it arises, and capable of free (i.e. unconstrained by immediacy), volitional (i.e. not delegated to some quantum coin tosser), moral (i.e. made with consideration of the long-term constraints the action may place on both ourselves and others) choice.
That “moral” property of free choices is the natural basis of responsibility, and is the reason why our present use (good or bad) of free will influences our future “level of freedom”: good choices increase our level of freedom, bad choices reduce it." Note that I have left your final sentence untouched :) There is much, in other words, on which we agree. The only disagreement is how we get there :)
Elizabeth Liddle
Mung: "So, how is it that anything but libertarian free will is even possible?" Or -- So, how is it that anything but [living entities (*) ] is even possible? (*) by 'living entities', I am not limiting the question to biological entities. Ilion
Well ... there goes the neighborhood. Ilion
I would like to compliment everyone involved for sustaining such a high level discussion. I think, though, that a key point needs to be articulated, albeit in an abbreviated fashion: With libertarian free will, our choices influence our destiny. With compatibilism, they do not. Compatibilism, like determinism, teaches that a man cannot live a purpose-driven life, although he can, as the story goes, be “free” insofar as no human agent interferes with his pre-determined choices. To say, though, a man can be nature’s plaything and also be free is obviously ridiculous. Legitimate free will, on the other hand, allows us to do the things that really matter—to set a concrete goal and achieve it, to build character or not build character, to support good causes and fight evil causes, to love or not love—to do anything that really matters. StephenB
VJ, what exactly is your objection to substance dualism? You claim to have solved the only major problem with it. tragic mishap
So, how is it that anything but libertarian free will is even possible? Mung
vjtorley: I would argue strongly, however, that operations such as thinking and willing are not carried out in some higher dimension. Agreed. I was not suggesting human thinking and willing occur beyond 3-D, but rather that "spirits" can and do mediate between the mind and body; specifically the human spirit mediating between an immaterial mind/soul and the 3-D physical body, and further that non-human spirits when they manifest in our 3-D space can likewise mediate between the human mind/soul and the human body, and implicitly, angels never do so (out of obedience to God and respect for His creations) but demonic spirits do so whenever given an opportunity. Regardless of whether you think they can move physical objects or not, thinking and willing are non-physical operations. Agreed. For a good summary of the main arguments for the immateriality of the soul, with lots of links to good philosophical articles, see here . I agree the soul is immaterial, and in my 'model' (scant as it is) the mind is included in the soul. It is in our soul that our personality distinctions exist such as our likes/dislikes, attitudes, intentions, some intellectual capacities, and maybe our identity, but in our spirit resides our "conscience", our sense of moral right and wrong and maybe our identity, and further that our spirit somehow interfaces between God's Holy Spirit and our mind/soul and physical body. To borrow a computer analogy, the body is like hardware, the mind/soul is like heuristic software, and the spirit is like device drivers/adapters [obviously, the analogy can not be pushed very far]. At least one problem with my 'model' is that a spirit is required in all organisms to interface between the mind and body, but biblically, only mankind has a spirit, though obviously animals have personalities and minds which exert decisions on voluntary muscles, which seemingly would argue for animal spirits in my model (which argument contradicts the bible, IMO). 
The Scriptural support [for man's triune nature] is a bit meager (1 Thessalonians 5:23 and Hebrews 4:12, as I recall). There is also Gen 1:26 that man is created in God's image, Rom 8:16 "The Spirit Himself testifies with our spirit that we are children of God", and Rom 9:1 "my conscience testifies with me in the Holy Spirit" to supplement the already admitted dual nature of man. In any case, I hope you would agree that God is incorporeal and can move things by His will alone. Agreed, God can. But does he without exception, must he without exception? Regarding paralysis and amputated limbs, my question went to your original point:
the arm goes through a large number of micro-level muscular movements (tiny twitches) which are randomly generated at the quantum level. The agent tries these out over a very short interval of time
In paralysis and amputated limbs, such "twitches" are non-existent, yes?, and hence nothing for the agent to try out, yes? Charles
Vivid: Thank you for clarifying. And, for me, it does matter :) gpuccio
gp re 21 I can see from how I worded my question how you would think that I have made a conclusion about your point of view, but I have not. Thanks for clarifying. As for the rest of your post I pretty much agree with it, not that it matters :) Vivid vividbleau
Vivid: I am not a theologian, but I don't agree with your conclusion about my point of view. It is perfectly possible that, in our human condition, "peccare" is the only way to describe our possible actions, as our level of freedom is not so great that our free options could be in no way "tainted" by our human nature. And yet, at each moment we could still have free will, and, if you want, "peccare" in different ways: opening ourselves, more or less, to the transforming grace of God's love. But again, I am not a theologian, and I am not really interested in a theological debate. So, please take my notes only for what they are: a personal consideration. gpuccio
VJT: "Are there any good links you'd care to recommend on the transcendental "I"?" First of all, I really want to thank you for your constant contribution and work on these fundamental topics. Well, I am not a philosopher, and my personal concepts are just that, personal. They are more or less the result of my whole view of the world, and of my personal experiences. I call the "I" transcendental, because my idea is that consciousness requires a unifying principle which can refer to itself all the modifications perceived. The empirical point is that the perceiving subject has the universal property of being able to "recede" into a meta position from any of its perceptions, both outer and inner (IOWs, we can always "observe" as outer to our consciousness any conscious content that we were representing as part of it a moment before, and that is always true, in an infinite "mise en abyme"). Unity of the point of perception is one of the fundamental properties which distinguish conscious representations from any non-conscious, objectual system. That unity is both a cognitive and a feeling principle (we recognize ourselves as ourselves, always, and we always care about what will happen to ourselves, I would say, in a very special way). The unity and simplicity of the I is the main reason why I believe we should think of it as "without parts". That's why its nature, and essential qualities, cannot be described in terms of an objective system made of parts, whatever AI people may think. Having loops can be a trendy social status, especially after Hofstadter, but I am afraid that it never transformed anything into a conscious, least of all intelligent, being. gpuccio
gpuccio Having read your post, I realize that Eccles' views and mine are probably closer than I had thought, even if his terminology is somewhat different. I thought there was a great deal of wisdom contained in your final remark: "good choices increase our level of freedom, bad choices reduce it." Are there any good links you'd care to recommend on the transcendental "I"? vjtorley
tragic mishap You ask:
Hold the phone. What is a "spirit"? Where do they exist? And do human beings have them?
First, a spirit is any entity whose operations are exclusively those of the intellect (reasoning, understanding, critical thinking, meta-cognition) and will (especially making choices - be they good or bad). Any entity (such as a human being) which is capable of lower-level physical operations as well has a spirit; whereas any entity whose operations are solely those of intellect and will is a spirit. Angels, for instance, can do nothing but know and choose. The love they have for God is a choice; it is devoid of passion. Demons' wills, by contrast, have been fixed in hate ever since their Fall from grace. A human being isn't a spirit, but a human being has a spirit. However, this spirit is also the form of a living body - which is why we can do so many other things apart from thinking and choosing. Second, a spirit doesn't have a location as such; nevertheless it can be said to be wherever its power extends. If you want to know what Aquinas thought about angels and their location, see St. Thomas and angels and see also here: http://www.angelfire.com/linux/vjtorley/thomas1.html#smoking4 . For my own very tentative answers to your questions of how angels or demons could move things and where they are, please see here: http://www.angelfire.com/linux/vjtorley/thomas2.html#appendix Third, spirits - good and bad - are quite real. You want evidence? Take a look at this: http://www.reasons.org/testing-demonic-possession and http://www.worldmysteriesandtrueghosttales.com/modern-day-demonic-possession-documented-true-story-of-exorcism/ vjtorley
gp RE 11 "There is no doubt that both outer and inner previous states (including character etc.) influence (but do not determine) free will." I take it that from a theological perspective you do not agree with Augustine when he wrote (I think it was Augustine) "non posse non peccare" (I cannot not sin)? Vivid vividbleau
dmullenix Thank you for your post. You write:
If I can ask, what exactly is a formal operation?
A formal operation is any mental operation in which the operands are abstract concepts. To see why a computer does not perform formal operations, think of the crudest analogue computer you can, and how it works - and then ask yourself whether a digital one is any different in principle. Further reading: Some brief arguments for dualism, Part IV by Professor Edward Feser (highly readable and fairly informal in its style) Immaterial Aspects of Thought by Professor James Ross. In The Journal of Philosophy, Vol. 89, No. 3, (Mar. 1992), pp. 136-150 (considerably meatier). Immaterial Aspects of Thought by Professor James Ross. An expanded and up-to-date version of Ross's argument. vjtorley
Charles, Thank you for your very detailed post. I was intrigued by your "Flatland" suggestion that spirits exist in higher dimensions, and use physical brute force to manipulate objects. I can't think of any a priori argument against the existence of n-dimensional beings, so I guess their possibility has to be taken seriously. I would argue strongly, however, that operations such as thinking and willing are not carried out in some higher dimension. Regardless of whether you think they can move physical objects or not, thinking and willing are non-physical operations. For a good summary of the main arguments for the immateriality of the soul, with lots of links to good philosophical articles, see here . I first encountered the triune theory of man when I was a boy of ten or so. The Scriptural support is a bit meager (1 Thessalonians 5:23 and Hebrews 4:12, as I recall). Interestingly, the late neurophysiologist John C. Eccles referred to himself as a trialist rather than a dualist, although he doesn't use the terminology of spirit, soul and body, and his "World 3" is a human creation: see here . In any case, I hope you would agree that God is incorporeal and can move things by His will alone. Regarding your question on paralysis: I understand that paralysis is most often caused by damage in the nervous system, especially the spinal cord. I'd explain it by saying that probably the only areas of our bodies that move in response to acts of will on our part are regions of the motor homunculus. Certainly anything below the neck is not directly responsive to our will. Regarding phantom limb sensations, the Wikipedia article on the subject is definitely worth reading: http://en.wikipedia.org/wiki/Phantom_limb I don't see that they pose any special problem for the modest "I-body" version of dualism that I have defended (see Why I think the interaction problem is real ). vjtorley
vjtorley: At your link, I note:
In other words, physical objects have S-properties (properties which make reference to spiritual beings), while spiritual beings have built-in P-properties (properties which make reference to physical objects).
Ostensibly spiritual beings exist in higher dimensions than do we, and just as we 3-D beings can insert our finger into "flatland" and push bits of paper around or punch holes, higher-dimensional spiritual beings can 'step into' 3-D land and push 3-D objects around (such as rolling the stone away from the tomb). Higher dimensional beings can then withdraw from lower dimensional planes back into higher dimensions, just like we lift our finger off the surface of flatland. But such manipulation of objects (e.g. rolling away the stone) is simple brute force. It is not the intentional will of the being superseding the 'will' of the stone or otherwise neutralizing the force of gravity and levitating it (neither at molecular or sub-atomic levels nor as a single macroscopic mass). Unanswered as yet is how does a spiritual being take up (or take over) the neural controls of a human body? The bible tells us we have a triune nature. Perhaps the agency that mediates between the mind and body is the spirit. But how? When we are asleep or unconscious, the mind seems to remain active as evidenced by dreaming and those dreams are recorded in the brain (however fleeting), yet the grip on the body's neural controls seems relaxed or loosened. Perhaps it is the spirit that disengages from controlling voluntary muscles when the body sleeps, while the mind dreams of trying to run but imagines itself immobilized. Demonic possession of a living body (even a herd of pigs) would be futile if autonomic functions failed to keep the body alive. Of all the credible reports of demonic possession, none mention any cessation of breathing or possession of the deceased. Perhaps the spiritual-neural controls exist only for voluntary muscles and not for any autonomic functions. Libertarian free will may originate in the mind and its decisions are then effected over voluntary muscles via the spirit.
It would seem that controlling voluntary muscles is a far more plausible and simpler problem than controlling molecules or subatomic particles. But where or what is the interface between a spirit and voluntary neurons? I've not answered the "controlled how" question, but merely narrowed its scope, hopefully, not unproductively. In your response, you did not mention how your model would account for paralysis? Charles
I conclude that if you’re going to believe that Satan and his minions can wreak havoc in the world, you have to believe that material objects have properties which refer to spiritual beings, whereby if a being of type X wills that the object should do Y, then it does Y. You would have to believe, in short, that objects were explicitly designed to be manipulable by the will of certain kinds of spirits.
Hold the phone. What is a "spirit"? Where do they exist? And do human beings have them? tragic mishap
"Reasoning and choosing are indeed immaterial processes: they are actions that involve abstract, formal concepts. (By the way, computers don’t perform formal operations; they are simply man-made material devices that are designed to mimic these operations. A computer is no more capable of addition than a cash register, an abacus or a Rube Goldberg machine.)" If I can ask, what exactly is a formal operation? dmullenix
VJT: Personally, I don't see much difference between Eccles' view and yours: the random configurations selected are probably connected to reaching or not reaching the threshold of excitation at the synaptic level. I agree with you on almost all: a) Compatibilism is an essentially stupid idea (OK, you were more of a gentleman, but let me say things straight once in a while: I apologize in advance to my friend Mark Frank :) ). b) Libertarian free will is the only reasonable solution. c) Quantum indeterminism is, at present, the best "window" we have (given our scientific understanding of these things, which is certainly rough) to "join" strict determinism and free will of agents. d) The way the agent (my "transcendental I") influences outer events is anyway still a big mystery. e) There is no doubt that both outer and inner previous states (including character etc.) influence (but do not determine) free will. My idea is that previous states (the total sum of them, including all our personal past and therefore our previous exercise of our free will) determine the "range" of choice we have in each single situation: what I call the "level of freedom" of each individual at each time. IOWs, all individuals have free will, but each one has different levels of freedom (can act from different ranges of options). f) The final point, and IMO the most important one, is the following: free choices not only are not determined, not only are not random. They are not "neutral". IOWs, different free choices in each individual situation have different "moral" values for the individual, and affect differently his personal future. IOWs, different choices available at each moment to an individual are intuitively "felt" by the individual's consciousness as connected to a moral "field": some of them are better, some of them are worse.
Reason has a role in that, but I believe that it is not the only factor, and that the final property of "moral conscience" is essentially intuitive, and directly perceived by the transcendental I. That "moral" property of free choices is the natural basis of responsibility, and is the reason why our present use (good or bad) of free will influences our future "level of freedom": good choices increase our level of freedom, bad choices reduce it. gpuccio
... and, the thing is, it's not indeterminism which causes "free will," but rather that the freely observed reality of "free will" shows determinism to be a woefully incomplete view of the world. Ilion
VJT: "When I wrote that indeterminism does not necessarily imply free will, I meant that the mere fact that my choices are not determined does not make them free. Indeterminism is a necessary but not sufficient condition for free will." But, isn't that exactly to say that "indeterminism" implies "free will"? Ilion
NZer and Charles, Thank you for your questions regarding how immaterial spirits (such as demons) might act upon bodies. Actually, I addressed this question some time ago in my five-part reply to Professor Tkacz. If you have a look here: http://www.angelfire.com/linux/vjtorley/thomas2.html#section7 you'll see that I address the problem of animal suffering and offer my own tentative solution. In the Appendix (scroll down) I examine how demonic activity could play havoc with the natural order. I conclude that if you're going to believe that Satan and his minions can wreak havoc in the world, you have to believe that material objects have properties which refer to spiritual beings, whereby if a being of type X wills that the object should do Y, then it does Y. You would have to believe, in short, that objects were explicitly designed to be manipulable by the will of certain kinds of spirits. Only then would angelic or demonic agency be possible in the material world. vjtorley
Hi Ilion, Thanks for your comments. When I wrote that indeterminism does not necessarily imply free will, I meant that the mere fact that my choices are not determined does not make them free. Indeterminism is a necessary but not sufficient condition for free will. In a world without (i) top-down causation, and (ii) immaterial mental acts, there would be no libertarian free will, indeterminism notwithstanding. vjtorley
Ilion,
And, the amusing thing about such models is that *all* of them must, in the end, by their very natures, “explain” it by explaining it away. All “explaining” of free will by reducing it to something else is just the denial that it is real.
That's exactly right. Science was originally intended to save the phenomenon; now science is only considered scientific if it explains the phenomenon away. Clive Hayden
Well, you know, we *all* know -- even those who publicly deny it -- that "libertarian free will" is the truth about our natures. Whether or not anyone can come up with a model to "explain" (*) it does not affect the truth of the matter. (*) And, the amusing thing about such models is that *all* of them must, in the end, by their very natures, "explain" it by explaining it away. All "explaining" of free will by reducing it to something else is just the denial that it is real. Ilion
"... By contrast, indeterminism is compatible with the existence of libertarian freedom, but in no way implies it. ... " Certainly it does! "Indeterminism" implies: 1) events may happen randomly, without a causal history; 2) events may happen due to a novel causal-chain initiated by an agent. But, implication 1) (i.e. "hard indeterminism") is absurd, just as "compatibilism" is absurd. Ilion
"...the arm goes through a large number of micro-level muscular movements (tiny twitches) which are randomly generated at the quantum level. The agent tries these out over a very short interval of time (a fraction of a second) before selecting the one which feels right." How does physical paralysis fit your model, wherein someone mentally chooses to raise a limb which is in fact paralyzed and ostensibly has neither twitches nor feedback? How about when the mind/brain believes it feels an amputated limb? How might demonic possession overcome free will in your model? I'm not arguing, just exploring. Charles
Ok, for argument's sake, what would be the implications if an immaterial soul/spirit could in some way interact with a physical body? From your earlier post, I understand that the inability of interactions between these two proposed entities makes everything more complicated. BTW, have you read JP Moreland on this topic? NZer
Of course, the only thing preventing this from applying equally well to dualism is the word "spooky." I don't exactly see a mechanism either. Collapsing the wave function would require some mechanism, correct? tragic mishap