
How is libertarian free will possible?

In this post, I’m going to assume that the only freedom worth having is libertarian free will: the free will I possess if there are choices that I have made during my life where I could have chosen differently, under identical circumstances. That is, I believe that libertarian free will is incompatible with determinism. By contrast, indeterminism is compatible with the existence of libertarian freedom, but in no way implies it.

There are some people who think that even if your choices are fully determined by your circumstances, they are still free, provided that you selected them for a reason and that you are capable of being educated to act for better reasons. People who think like that are known as compatibilists. I’m not one of them; I’m an incompatibilist. Specifically, I’m an agent-causal incompatibilist: I believe that humans have a kind of agency (an ability to act) that cannot be explained in terms of physical events.

Some time ago, I came across the Cogito Model of human freedom on The Information Philosopher Web site, by Dr. Bob Doyle. The Web site represents a bold philosophical attempt to reconcile the valid insights underlying both determinism and indeterminism. The authors of the model argue that it accords well with the findings of quantum theory and guarantees humans libertarian freedom, while at the same time avoiding the pitfall of making chance the cause of our actions. Here’s an excerpt:

Our Cogito model of human freedom combines microscopic quantum randomness and unpredictability with macroscopic determinism and predictability, in a temporal sequence.

Why have philosophers been unable for millennia to see that the common sense view of human freedom is correct? Partly because their logic or language preoccupation makes them say that either determinism or indeterminism is true, and the other must be false. Our physical world includes both, although the determinism we have is only an adequate description for large objects. So any intelligible explanation for free will must include both indeterminism and adequate determinism.

At first glance, Dr. Doyle’s Cogito Model appears to harmonize well with the idea of libertarian free will. Doyle makes a point of disavowing determinism, upholding indeterminism, championing Aristotle, admiring Aquinas and defending libertarian free will. However, it turns out that he’s no Aristotelian, and certainly no Thomist. Indeed, he isn’t even a bona fide incompatibilist. Nevertheless, Doyle’s Cogito Model is a highly instructive one, for it points the way to how a science-friendly, authentically libertarian account of freedom might work.

There are passages on Dr. Doyle’s current Web site (see for instance paragraphs 3 and 4 of his page on Libertarianism) where he appears to suggest that our character and our values determine our actions. This is of course absurd: if I could never act out of character, then I could not be said to have a character. I would be a machine.

Misleadingly, in his Web page on Libertarianism, Dr. Doyle conflates the incoherent view that “an agent’s decisions are not connected in any way with character and other personal properties” (which is surely absurd) with the entirely distinct (and reasonable) view that “one’s actions are not determined by anything prior to a decision, including one’s character and values, and one’s feelings and desires” (emphases mine). Now, I have no problem with the idea that my bodily actions are determined by my will, which is guided by my reason. However, character, values, feelings and desires are not what makes an action free – especially as Doyle makes it quite clear in his Cogito Model that he envisages all these as being ultimately determined by non-rational, physicalistic causes:

Macro Mind is a macroscopic structure so large that quantum effects are negligible. It is the critical apparatus that makes decisions based on our character and values.

Information about our character and values is probably stored in the same noise-susceptible neural circuits of our brain…

The Macro Mind has very likely evolved to add enough redundancy, perhaps even error detection and correction, to reduce the noise to levels required for an adequate determinism.

The Macro Mind corresponds to natural selection by highly determined organisms.

There is a more radical problem with Doyle’s model: he acknowledges the reality of downward causation, but because he is a materialist, he fails to give a proper account of downward causation. He seems to construe it in terms of different levels of organization in the brain: Macro Mind (“a macroscopic structure so large that quantum effects are negligible… the critical apparatus that makes decisions based on our character and values”) and Micro Mind (“a random generator of frequently outlandish and absurd possibilities”) – the latter being susceptible to random quantum fluctuations, from which the former makes a rational selection.

Doyle goes on to say:

Our decisions are then in principle predictable, given knowledge of all our past actions and given the randomly generated possibilities in the instant before decision. However, only we know the contents of our minds, and they exist only within our minds. Thus we can feel fully responsible for our choices, morally and legally.

This passage leads me to conclude that Doyle is a sort of compatibilist, after all. As I’ve said, I’m not.

So how do I envisage freedom? I’d like to go back to a remark by Karl Popper, in his address entitled Natural Selection and the Emergence of Mind, delivered at Darwin College, Cambridge, on November 8, 1977. Let me say at the outset that I disagree with much of what Popper says. However, I think he articulated a profound insight when he said:

A choice process may be a selection process, and the selection may be from some repertoire of random events, without being random in its turn. This seems to me to offer a promising solution to one of our most vexing problems, and one by downward causation.

Let’s get back to the problem of downward causation. How does it take place? The eminent neurophysiologist and Nobel laureate Sir John Eccles openly advocated a “ghost in the machine” model in his book Facing Reality (1970, pp. 118-129). He envisaged that the “ghost” operates on neurons that are momentarily poised close to a threshold level of excitability.

But that’s not how I picture it.

My model of libertarian free will

Reasoning and choosing are indeed immaterial processes: they are actions that involve abstract, formal concepts. (By the way, computers don’t perform formal operations; they are simply man-made material devices that are designed to mimic these operations. A computer is no more capable of addition than a cash register, an abacus or a Rube Goldberg machine.)

Reasoning is an immaterial activity. This means that reasoning doesn’t happen anywhere – certainly not in some spooky Cartesian soul hovering 10 centimeters above my head. It has no location. Ditto for choice. However, choices have to be somehow realized on a physical level, otherwise they would have no impact on the world. The soul doesn’t push neurons, as Eccles appears to think; instead, it selects from one of a large number of quantum possibilities thrown up at some micro level of the brain (Doyle’s micro mind). This doesn’t violate quantum randomness, because a selection can be non-random at the macro level, but random at the micro level. The following two rows of digits will serve to illustrate my point.

1 0 0 0 1 1 1 1 0 0 0 1 0 1 0 0 1 1
0 0 1 0 0 0 0 1 1 0 1 1 0 1 1 1 0 1

The above two rows of digits were created by a random number generator. Now suppose I impose the macro requirement: keep the columns whose sum equals 1, and discard the rest. I now have:

1 0 1 1 1 0 0 0 0 1
0 1 0 0 0 1 1 1 1 0

Each row is still random, but I have imposed a non-random macro-level constraint. That’s how my will works when I make a choice.
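
To make the illustration concrete, here is a minimal sketch in Python (my own illustration, not part of the original post; the function name filter_columns and the seed are arbitrary choices). Both rows are generated randomly, yet the macro-level rule “keep only the columns whose sum equals 1” is imposed deterministically on the result.

    import random

    def filter_columns(row1, row2):
        # Keep only the columns whose two entries sum to exactly 1.
        kept = [(a, b) for a, b in zip(row1, row2) if a + b == 1]
        return [a for a, _ in kept], [b for _, b in kept]

    random.seed(0)  # arbitrary seed, for reproducibility of the illustration only
    row1 = [random.randint(0, 1) for _ in range(18)]
    row2 = [random.randint(0, 1) for _ in range(18)]

    new1, new2 = filter_columns(row1, row2)
    print(row1)
    print(row2)
    print(new1)
    print(new2)
    # Each surviving row is still a random-looking string of 0s and 1s, but every
    # column of the output satisfies the non-random constraint new1[i] + new2[i] == 1.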

For Aristotelian-Thomists, a human being is not two things – a soul and a body – but one being, capable of two radically different kinds of acts – material acts (which other animals are also capable of) and formal, immaterial actions, such as acts of choice and deliberation. In practical situations, immaterial acts of choice are realized as a selection from one of a large number of randomly generated possible pathways.

On a neural level, what probably happens when an agent decides to raise his/her arm is this: the arm goes through a large number of micro-level muscular movements (tiny twitches) which are randomly generated at the quantum level. The agent tries these out over a very short interval of time (a fraction of a second) before selecting the one which feels right – namely, the one which matches the agent’s desire to raise his/her arm. This selection continues during the time interval over which the agent raises his/her arm. The wrong (randomly generated quantum-level) micro-movements are continually filtered out by the agent.
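
As a purely illustrative toy model (my own sketch, not the author’s or Doyle’s; the function raise_arm and its parameters intended_step, steps and candidates_per_step are hypothetical), the selection process described above can be pictured as follows: at each instant a batch of random candidate micro-movements is generated, and a macro-level intention filters out all but the one that best matches the desired movement.

    import random

    def raise_arm(intended_step=1.0, steps=20, candidates_per_step=50):
        # Hypothetical parameters chosen only for illustration.
        position = 0.0
        for _ in range(steps):
            # Randomly generated candidate micro-movements (a stand-in for quantum noise).
            twitches = [random.gauss(0.0, 1.0) for _ in range(candidates_per_step)]
            # Non-random macro-level selection: keep the twitch closest to the intention;
            # the "wrong" micro-movements are filtered out, as in the account above.
            chosen = min(twitches, key=lambda t: abs(t - intended_step))
            position += chosen
        return position

    random.seed(1)  # arbitrary seed
    print(raise_arm())  # the arm ends up close to the intended displacement of 20.0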

The agent’s selection usually reflects his/her character, values and desires (as Doyle proposes) – but on occasion, it may not. We can and do act out of character, and we sometimes act irrationally. Our free will is not bound to act according to reason, and sometimes we act contrary to it (akrasia, or weakness of will, being a case in point).

So I agree with much of what Doyle has to say, but with this crucial difference: I do not see our minds as having been formed by the process of natural selection. Since thinking is an immaterial activity, any physicalistic account of its origin is impossible in principle.

Comments
Ciphertext,
If you accept the premise that a mind need not be an emergent property by necessity. Then you have several options available to you in terms of from where a “mind” could spring forth. While an interesting question in and of itself. At least more interesting for me is “How did my mind spring forth?” and similarly “How did my mind become connected with my hardware?”. I am assuming that your mind is separate from my mind. At least in terms of how you and I perceive it to be.
I'm certainly not beholden to the 'mind as emergent property' hypothesis by necessity, though I do find it compelling for a number of reasons. Lizzie notes a number of those reasons above. But, given the nature of the concept, it is fruitful (to say nothing of interesting) to consider other ideas. Plus, I freely admit this is not my area I have much expertise in (though I have some) so I just find the concepts interesting to explore. Like positing whether the mind is a complex AI program (from your post above).
Perhaps, then, there would be a sufficient “neural pattern” to indicate the local storage of such a complex AI program (mind). That “neural pattern” would essentially be the executable, object code, (note, it wouldn’t necessarily be the source code, though an argument could be made for the coexistence of both on the same hardware) for the mind.
Indeed. Of course if such were the case, you'd think (ha!) there would be some way to locate and analyze evidence of the program. Would be interesting to do so.Doveton
July 20, 2011 10:24 AM PDT
Elizabeth: You ask: "do you accept that it is possible that consciousness is not a continuous flow, but that rather it consists of a series of summations of input over time?" Anything is possible. I don't accept that it is true. It is false. Consciousness is a continuous flow. Through different states of consciousness.gpuccio
July 20, 2011 09:01 AM PDT
Elizabeth Liddle:
do you accept that it is possible that consciousness is not a continuous flow, but that rather it consists of a series of summations of input over time?
Summations of what? Who or what is doing the summing?Mung
July 20, 2011 08:58 AM PDT
@Doveton Post 142 RE: Abstraction vs. Emergent If you accept the premise that a mind need not be an emergent property by necessity. Then you have several options available to you in terms of from where a "mind" could spring forth. While an interesting question in and of itself. At least more interesting for me is "How did my mind spring forth?" and similarly "How did my mind become connected with my hardware?". I am assuming that your mind is separate from my mind. At least in terms of how you and I perceive it to be. Is the mind really a complex AI program (as we would term it) that is executed by the "Human OS" (for lack of a better term)? [a-la Battlestar Galactica remake] My term "Human OS" is what I call the autonomic system, in that it provides a similar role as to a computer's OS. The OS provides application programs with access to the underlying hardware via "drivers" and coordinates the use of system resources at a macroscopic level. The CPU and other chips have their own "on-board" systems to manage the threading and prioritization of instruction sets. The applications only need to use the OS to facilitate communication with the underlying hardware (video, audio, receive input). Perhaps, then, there would be a sufficient "neural pattern" to indicate the local storage of such a complex AI program (mind). That "neural pattern" would essentially be the executable, object code, (note, it wouldn't necessarily be the source code, though an argument could be made for the coexistence of both on the same hardware) for the mind.ciphertext
July 20, 2011 08:04 AM PDT
Ciphertext,
I don’t think that it is necessary that the mind be an “emergent property” of the body (underlying hardware). For the same reason that I don’t believe source code is an emergent property of computing hardware.
Hmmm...ok. I think I get what your concept is. Interesting. I'll ponder this for a bit, but on first look, I like it.Doveton
July 20, 2011 06:41 AM PDT
For example to “experience” a red apple takes tens of milliseconds, it is not instantaneous. This is what our evidence suggests. Do you think this is incorrect? I repeat: You determine what is or is not experienced by whether it is or isn't reported. The idea that something can be experienced then forgotten before it is reported doesn't seem to register with you - and it's not because of the data in and of itself, but because of the assumptions you bring to the data. "If a stimulus only lasts less than X amount of time, it will not be reported" does not itself equal "If a stimulus only lasts X amount of time, there is no experience of it". Not unless you insist that there is no experience unless it's reported - and then we're back to the example of not remembering yesterday meaning I had no experience yesterday. Know your assumptions and the limitations of your methods of inquiry, particularly with regards to subjective experience.nullasalus
July 20, 2011 06:29 AM PDT
working=wording. Gotta run.Elizabeth Liddle
July 20, 2011 06:18 AM PDT
No, I'm saying that subjective experience is NOT like a film with infinitessimally short frames. Sorry, that was clumsy working. I'm saying that it arises from discrete processes during which the inputs are smeared over time. For example to "experience" a red apple takes tens of milliseconds, it is not instantaneous. This is what our evidence suggests. Do you think this is incorrect?Elizabeth Liddle
July 20, 2011 06:17 AM PDT
I am saying that what we call “subjective experience” is a function of memory – which in turn is a function of the integration of inputs over time. What you've said is that (x) emerges out of infinite recursive loops and how this is entirely materialist and non-reductive, then pointed to Hofstadter for explanation, who comes out as a reductionist and doesn't see 'emergence' and his loops as incompatible with reductionism. And who himself doesn't explain all that much. Not that it doesn’t exist, but that it is discrete, and that each discrete perception is smeared over time, not continuous, like a film (or like a film with infinitesimally short frame durations anyway). Er, infinitesimally short frame durations that are smeared over time? Take your pick. You realize that your move here relies on assumptions about time itself, right? Not exactly the most clear topic itself.nullasalus
July 20, 2011 05:58 AM PDT
Nullasalus: at no point have I said we do not have subjective experience! I'd like you to go through my post again, with as open a mind as you can, and see if you can see what I'm saying (i.e. do not assume that I am "in effect" saying something else). I am saying that what we call "subjective experience" is a function of memory - which in turn is a function of the integration of inputs over time. Not that it doesn't exist, but that it is discrete, and that each discrete perception is smeared over time, not continuous, like a film (or like a film with infinitesimally short frame durations anyway).Elizabeth Liddle
July 20, 2011 05:35 AM PDT
But you don’t have “your own subjective experience” once you’ve forgotten it! Although you might have it again, if you remember it! No, I can't recall a memory of a subjective experience if I've forgotten it. But I certainly have subjective experience here and now, and it's entirely possible for there to be subjective experience sans memory. Unless you make the assumption "conscious experience needs memory" of course. If we cast consciousness in this form, the questions I asked above become simply answerable. Yes, if you make a bunch of assumptions, you can answer a question. I pretty much said this myself; the key is to remember that they are assumptions. You've just told me 'I can tell you what happened in that hypothetical story if you just let me make assumptions about what happened!' No duh, Elizabeth. Seen dynamically in this way, I think consciousness becomes tractable to explanation, specifically in terms of memory, and even more specifically, in terms of a model of memory in which recall is a renactment of the state that accompanied the remembered stimulus, and applies not merely to retrospective states but to forward models too (although we don’t generally call forward models “memory”, but I suggest that they are closely related, as, again, data from neuroscience seems to suggest – hence the key role of the hippocampus in both memory and spatial navigation). Yeah, you always talk about how 'if I just assume these things then consciousness becomes explainable', missing the part that A) You're making assumptions, B) That you also leave out data, data that's far more primary than your personal metaphysical speculations, and most importantly C) You never actually explain much of anything relevant. You just gesture wildly and buzzword up your sentences with 'emergence' and 'recursion' and 'non-reductive!' Then when it's pointed out that you haven't really explained much of anything, we get the metaphors. And this is what leads us to infer that the coupling is very close indeed – so close that when we observe an absence of certain neural events, we infer that a person is unconscious, possibly irreversibly so. Congratulations, you've discovered correlates of experience and the fact that what we call the human body is tied to the human mind. Something no one - not Chalmers, not panpsychists, not neutral monists, not substance dualists, not hylemorphic dualists, not even freaking idealists - denies. Except perhaps eliminative materialists, since they dispense with the whole 'mind' thing. And throughout your list of examples, notice that everything you say works with the assumption that if it can't report, there was no experience. That not all experience that is actual is reported, or even reportable, doesn't seem to occur to you - someone has to remind you that what you're doing is making a model, with certain assumptions (some of them downright controversial). And when we infer irreversible uncsonsciousness we say that the person is dead. And when the unconsciousness ends up being reversed after the fact, you say "oops". And if a person gives reports during the time they were inferred to have been unconscious, you flail and revise. But at the end of the day, we still have subjective experience, and we still have intentionality. And your explanations for both are non-explanations - largely dogma that melts into metaphors the moment any light shines on them.nullasalus
July 20, 2011 05:12 AM PDT
But you don't have "your own subjective experience" once you've forgotten it! Although you might have it again, if you remember it! And that seems to me to be the key to the whole problem - I suggest that we are not so much "conscious" as that we "conch". And what we conch at any given time may be something that happened in the immediate past (a few milliseconds earlier) or something that happened in the more remote past, or even something that may or may not happen in the future. If we cast consciousness in this form, the questions I asked above become simply answerable. If a stimulus is flashed for too short a time it appears to leave no retrievable trace at all - there is no space to get in and say: "what did you see"? and data suggests that the stimulus never got further than very minimal processing by the primary sensory systems. However, if it is flashed up for a little longer, we find that it is subsequently retrievable, after a fashion, and indeed, it might be possible to train people to infer what the prime might have been from their apparent intuitive response to subsequent stimuli. In other words, the subsequent stimuli might induce some kind of consciousness of the prime. Seen dynamically in this way, I think consciousness becomes tractable to explanation, specifically in terms of memory, and even more specifically, in terms of a model of memory in which recall is a re-enactment of the state that accompanied the remembered stimulus, and applies not merely to retrospective states but to forward models too (although we don't generally call forward models "memory", but I suggest that they are closely related, as, again, data from neuroscience seems to suggest - hence the key role of the hippocampus in both memory and spatial navigation). But I fear this discussion is stalling over a very different set of assumptions about what neuroscience can (and can't) tell us about the way we experience the world. I find it puzzling when people say: oh, there are correlates alright between neural events and mental events, but we can't ascertain the direction of causality. Well, yes we can - the way we infer causality in science is by manipulating a variable. And we can do this in both directions - we can manipulate mental events, by presenting task relevant stimuli, and we can check that the task has been performed by looking at the behavioural output, and then look at the neural correlates of that mental activity. We can also manipulate neural events in various ways, by drugs, electrodes, trans-cranial magnetic stimulation, etc, and correlate these with the participants' subjective reports of their experience, and/or their behavioural response to a task. For instance, by timing and placing a TMS pulse carefully, we can show precisely when and where a disruption to a neural process results in disruption to task performance. We can also ask for reports of subjective experience. We can even scan people (using fMRI for instance) and ask them to note, with a button press, whenever a mental event of some sort occurs - a novel thought, an auditory hallucination, the urge to tic, whatever, and examine the concomitant neural evidence - in the case of fMRI the blood flow to a region that follows neural firing. These are not haphazard observations - they are reproducible effects, in both directions, that allow us to predict with a high degree of confidence how a manipulation of mental events is reflected in neural events and how manipulation of neural events is reflected in mental events.
And this is what leads us to infer that the coupling is very close indeed - so close that when we observe an absence of certain neural events, we infer that a person is unconscious, possibly irreversibly so. And when we infer irreversible unconsciousness we say that the person is dead.Elizabeth Liddle
July 20, 2011 04:48 AM PDT
The data (specifically EEG data from priming experiments) suggest that if a stimulus is sufficiently brief, it may influence a subsequent decision, even though the participant has no awareness of the stimulus and the EEG trace lacks features normally associated with “late processing”. And again I point out, this doesn't show that there was no awareness of the stimulus, period. The best you can get is that they are unable to recall having such an experience at a later point. Whether they had one at the time, for whatever brief moment, is up in the air. Recall that I question the very existence of 'the unconscious' (as opposed to 'something I'm not conscious of right now'). Now, you raise the interesting point: if I subsequently forget something, was I ever conscious of it? Or, even more interestingly, let’s say I pass someone in the corridor, and do not recognise them. Then, a few yards further on, I think to myself “hey, that was Jim!” That happened to me the other day, and I apologised to “Jim” for cutting him dead. Does that mean I was aware of Jim when I saw him, subsequently forgot it was Jim, then remembered? Or was I not aware of Jim until I’d progressed a few yards down the corridor? (Dennett has an example of this, near the beginning of Consciousness Explained IIRC). Is it even a sensible question? Of course it's a sensible question. And "Who's to say?" is a sensible answer. You're asking a hypothetical question about what could have possibly taken place in a space of time that a person can't recall and asking me what happened. Here's a sensible conclusion: Talking about what was or wasn't experienced is fraught with assumption that most people miss, and discussing what 'the data shows' about first-person experience from a third-person point of view typically relies on these assumptions. But I don't have to make any assumptions when it comes to my own subjective experience - I have it. It's data, not theory.nullasalus
July 20, 2011 04:06 AM PDT
Well, I have to disagree, nullasalus. Or rather, I think you put your finger on a key point, but it is not the point I was making! The data (specifically EEG data from priming experiments) suggest that if a stimulus is sufficiently brief, it may influence a subsequent decision, even though the participant has no awareness of the stimulus and the EEG trace lacks features normally associated with "late processing". For example, if the word "couch" is flashed up briefly, then masked, and the subject is then asked whether the word "bofa" is a word, their reaction time will be slower than if the word "string" had been flashed up. In other words, there is evidence that a subliminally presented "prime" affects subsequent behaviour, even though the participant has no awareness of the content of the prime (as ascertained by various tests). Now, you raise the interesting point: if I subsequently forget something, was I ever conscious of it? Or, even more interestingly, let's say I pass someone in the corridor, and do not recognise them. Then, a few yards further on, I think to myself "hey, that was Jim!" That happened to me the other day, and I apologised to "Jim" for cutting him dead. Does that mean I was aware of Jim when I saw him, subsequently forgot it was Jim, then remembered? Or was I not aware of Jim until I'd progressed a few yards down the corridor? (Dennett has an example of this, near the beginning of Consciousness Explained IIRC). Is it even a sensible question? (I have my answer, but I'm interested to hear yours first :))Elizabeth Liddle
July 20, 2011 03:53 AM PDT
Because that is what the evidence suggests – that we “become conscious” of a stimulus, and its properties, over a period of a substantial number of milliseconds, and that if the stimulus duration is shorter than that, we have no conscious recollection of it, even though it may influence our subsequent choices. No, the data doesn't 'suggest' that. There is data, and then there is an interpretation within one or another model - a model often filled with a variety of assumptions to begin with. And right here you're confusing conscious experience of something with recollection of an experience - a mistake Dennett makes as well. If I can't remember yesterday, does that mean I had no conscious experiences yesterday?nullasalus
July 20, 2011 03:33 AM PDT
Nullasalus:
But before we get to that: do you accept the possibility that conscious experience of the world might be summed over periods of time, rather than being a continuous flow? Because that is an important underpinning to my approach (and Dennett’s, I think)
You’re essentially asking me if I accept it’s possible that I could be wrong about qualia, having experience. No, it’s not possible.
No, I did not ask you if you could be wrong about qualia. I asked you something quite different: do you accept that it is possible that consciousness is not a continuous flow, but that rather it consists of a series of summations of input over time? Because that is what the evidence suggests - that we "become conscious" of a stimulus, and its properties, over a period of a substantial number of milliseconds, and that if the stimulus duration is shorter than that, we have no conscious recollection of it, even though it may influence our subsequent choices.Elizabeth Liddle
July 20, 2011 03:22 AM PDT
Nullasalus @ 123 "Hofstadter says he’s certain that a reductionistic explanation of mind is true." Mike 1962 @ 126 "That just means he’s a fool or a liar. Who buys into this drivel?" Just to quibble, 'fool' and 'liar' overlap; specifically, a 'fool' is a special type of liar. A mere liar lies episodically: he lies about specific things, generally for specific reasons. On the other hand, a fool lies systemically: he lies about the very nature of reason and of truth. That is, just as a (common) hypocrite asserts a double standard with respect to morality, a fool asserts a double standard with respect to the intellect. To accuse another of being a 'fool' is to make a moral condemnation of him; it is to accuse him of being intellectually dishonest; it is to accuse him of being an intellectual hypocrite. I mean, I do quite understand what you meant by saying, "That just means he’s a fool or a liar." Obviously, you didn't mean, "That just means he's a liar or a liar". Rather, you meant "That means he's incapable of understanding the truth, or unwilling to understand the truth"; or, in simpler terms, "That just means he's stupid, or a liar".Ilion
July 19, 2011 10:40 PM PDT
nullasalus, hehe, indeed. You'd think these guys would just acknowledge the philosophical brick wall, and humbly bow their heads instead of spinning vacuous drivel. Ah, the depths of human arrogance. I'm not immune to it myself.mike1962
July 19, 2011 09:54 PM PDT
He never says how “reflective loops” generate the consciousness that I am. He doesn’t know. He’s just burying himself in levels of verbal cow poo poo in an attempt to hide the dearth of real explanation. (Since there is none.) Who actually buys into this drivel? Of course, there's also the whole "the self comes into being the moment it has the power to reflect itself" thing. So, there's no self, until the self reflects itself. Then the self shows up. But the self had to exist to reflect itself, so...nullasalus
July 19, 2011 09:09 PM PDT
"Hofstadter says he’s certain that a reductionistic explanation of mind is true. Also, he thinks the reductionistic explanation is incomprehensible. So, we have to translate the incomprehensible into something we comprehend. Think about that for a moment: “I can’t comprehend what this means. So I have to translate what this means into something I can comprehend.”"
That, my friend, is a great example of a subtle insanity at work. Weird people.mike1962
July 19, 2011 08:53 PM PDT
"Hofstadter says he’s certain that a reductionistic explanation of mind is true." That just means he's a fool or a liar. Who buys into this drivel?mike1962
July 19, 2011 08:51 PM PDT
"the problem is how to translate it into a language we ourselves can fathom." Of course, consciousness never can be put into "fathomable" language apart from the brute experience itself. A congenitally blind man will never know what color is from a description, no matter how cleverly composed. Conscious qualia: it takes one to know one.mike1962
July 19, 2011 08:49 PM PDT
"The self comes into being at the moment it has the power to reflect itself."
So if I point a video camera at a mirror, is it conscious? Is any feedback loop conscious as I experience consciousness? He never says how "reflective loops" generate the consciousness that I am. He doesn't know. He's just burying himself in levels of verbal cow poo poo in an attempt to hide the dearth of real explanation. (Since there is none.) Who actually buys into this drivel?mike1962
July 19, 2011 08:47 PM PDT
And just to get at some of what I'm suggesting about Hofstadter... My belief is that the explanations of "emergent" phenomena in our brains-for instance, ideas, hopes, images, analogies, and finally consciousness and free will-are based on a kind of Strange Loop, an interaction between levels in which the top level reaches back down towards the bottom level and influences it, while at the same time being itself determined by the bottom level. In other words, a self-reinforcing "resonance" between different levels--quite like the Henkin sentence which, by merely asserting its own provability, actually becomes provable. The self comes into being at the moment it has the power to reflect itself. This should not be taken as an antireductionist position. It just implies that a reductionistic explanation of a mind, in order to be comprehensible, must bring in "soft" concepts such as levels, mappings, and meanings. In principle, I have no doubt that a totally reductionistic but incomprehensible explanation of the brain exists; the problem is how to translate it into a language we ourselves can fathom. There's the man on his view of consciousness, self, intentionality, etc. Notice a few things. * Hofstadter insists his view is not antireductionistic. Indeed, he's certain reductionism about the mind is true. So either Liddle (who repeatedly claims to be a non-reductive materialist) has some sharp disagreement with Hofstadter's view (which she's apparently endorsed), or she'd have to accuse Hofstadter of not even being able to properly identify whether or not he's a reductionist. * Hofstadter says he's certain that a reductionistic explanation of mind is true. Also, he thinks the reductionistic explanation is incomprehensible. So, we have to translate the incomprehensible into something we comprehend. Think about that for a moment: "I can't comprehend what this means. So I have to translate what this means into something I can comprehend." But if you can't comprehend something, you can't translate it - and you certainly can't know it's true. In addition, his talk of 'soft concepts' combined with his commitment to reductionism stongly implies that what he's dealing with are useful fictions. * So why does Hofstadter know this view is true? He gives a clue elsewhere: And this is our central quandary. Either we believe in a nonmaterial soul that lives outside the laws of physics, which amounts to a nonscientific belief in magic, or we reject that idea, in which case the eternally beckoning question ‘What could ever make a mere physical pattern be me?’ – the question that philosopher David Chalmers nicknamed ‘The Hard Problem’ – seems just as far from having an answer today (or, for that matter, at any time in the future) as it was many centuries ago. First, note that at least according to this, Hofstadter himself doesn't think his 'loops' solve consciousness - we're still left with the hard problem. But more than that, notice one reason he gives for stumbling in the direction of translating the incomprehensible, and waving his arms while talking about loops and emergence: Because the alternative is 'magic'. And what makes it magic? Because we have a certain picture of the world now, and Hofstadter is drawing a line in the sand, saying that this (metaphysical) picture cannot be changed. Imagine if this sort of game was played with quantum physics: "If we accept the apparent results of the Stern-Gerlach experiment, it would mean classical mechanics is incorrect. That's tantamount to a belief in magic. 
So, we have to reject it and hold out for an explanation consistent with classical mechanics. And we have to do this in defense of science." But if you drop Hofstadter's false dilemma - if you accept that science (particularly our current science) can be incomplete, or that materialism can in fact be wrong - a lot of these problems melt away. Maybe physics will just have to be revised in the future. Maybe a materialistic understanding of the world should be jettisoned.nullasalus
July 19, 2011 07:12 PM PDT
Mung: "Do you believe it is possible for someone to perpetuate a lie without knowing it’s a lie? I do. Does that make it less of a lie?" Or, to put it another way, had Seinfeld's George Costanza discovered a loophole in the general prohibition against lying (*) when he counseled/rationalized, “It’s not a lie if you really believe it”? One has an obligation to have done “due diligence” regarding the things one asserts; one has the obligation to have proper rational warrant for believing what one believes and especially regarding what one asserts. Thus, even if one “totally believes” something but hasn’t the rational warrant for believing it, one may indeed lie in asserting it – even if the belief is objectively true. Isn’t it curious, the things one can learn, if one thinks carefully about what one already knows? Equally curious will be the reaction to the above by persons who do not wish to understand it. (*) Not every act of lying is immoral; sometimes, morality *requires* one to lie: the famous test case being the Nazis-at-the-door looking for the person(s) you have conspired to hide from them.Ilion
July 19, 2011 05:47 PM PDT
Nullasalus: my position, which I believe I share with Dennett, is that “qualia” is ultimately an incoherent concept. Dennett's position amounts to that claim that qualia cannot exist, because if they do then materialism is false. And because the commitment to materialism is primary, experience must be eliminated from the picture. What makes qualia 'incoherent' to Dennett is the assumptions of materialism. But we don't need to assume materialism anyway. I accept that certain sensations seem very “raw”. But I think if we examine them we find that they are not as raw as we think they are. Qualia are sensations - experience. They're what you're denying. You don't 'examine the sensations'; what you do is redefine the mind to exclude qualia, throw in what you think replaces it, and then try to give an explanation of how your replacement could come to be. And you don't even succeed there, because it relies on an account of intentionality that itself collapses into actual incoherence. Again, even Hostadter gives strong indications that he knows he's in a bad situation with this game and justifies it largely on a 'but the alternative is materialism is wrong' plea. I certainly do not “deny the existence of experience” which would indeed be insane. I just don’t think we need a special word for certain kinds of experience such as “qualia”. And here comes the word game. You're not objecting to 'a special word' here, as if this is a mere argument over terminology. Qualia is the experiential, and this is what is being denied. So you replace qualia with function and some mumblings about infinity and emergence, call this experience, then get huffy when you're accused of denying experience. The alternative is that you're screwing around with the word 'qualia' - since qualia is not 'a certain kind of experience'. It's subjective experience, period. Back to an example I always use: If I define Bigfoot as a delusion, and point out that delusions are real - then get upset when someone accuses me of being a Bigfoot denier because 'all I've done is sort out my definition of Bigfoot - and what I define Bigfoot to be absolutely exists!', it's pretty easy to see I'm BSing. I think it's clear this is what's going on here with 'I don't deny consciousness/experience exists, I just deny that qualia exists'. Qualia is subjective experience. Nor does Dennett think there is no such thing as conscious experience. He wrote an entire book about how it can be explained. I don’t expect he’d have bothered if he didn’t think it existed And here's the familiar refrain: Ignoring that Dennett's critics (including fellow in-name materialists), after reading his book, argued that what Dennett did was explain away consciousness. And really, insofar as Dennett rules out qualia and the experiential from the start, that's exactly what he did. Again: If Dennett wrote a book on 'Bigfoot Explained', defined Bigfoot to be a delusion, and then spent the rest of his book explaining how people come to have this delusion - surprise, Dennett denies the existence of Bigfoot, title be damned. But before we get to that: do you accept the possibility that conscious experience of the world might be summed over periods of time, rather than being a continuous flow? Because that is an important underpinning to my approach (and Dennett’s, I think) You're essentially asking me if I accept it's possible that I could be wrong about qualia, having experience. No, it's not possible. 
And before you do the 'well you think you're right, and thinking you're right is a great way to mislead yourself' song and dance, I point out that you're not open to the possibility that experience is real, that there are qualia as opposed to nothing but function. I'd rather you hold your ground and justify your explanation of intentionality - namely, that all intentionality is derived. Let's draw out that show, where you say it's an infinite circle, but that's okay because emergence and also look at waves. I'd also like to see you justify your denial of qualia, since qualia is subjective experience, but you claim not to deny subjective experience. So either you are in fact denying subjective experience, or you're botching qualia by defining it to be something other than subjective experience.nullasalus
July 19, 2011 04:43 PM PDT
Liz: Nor does Dennett think there is no such thing as conscious experience. He wrote an entire book about how it can be explained.
One of the worst wastes of paper I ever spent good money on. He "explains" it by redefining the word to something else, then knocking that down. My conclusion: either the man is wicked, lazy or insane..., or a zombie. Whatever he's trying to explain is not what I experience as an instance of consciousness.mike1962
July 19, 2011 04:39 PM PDT
Nullasalus: my position, which I believe I share with Dennett, is that "qualia" is ultimately an incoherent concept. I accept that certain sensations seem very "raw". But I think if we examine them we find that they are not as raw as we think they are. I certainly do not "deny the existence of experience" which would indeed be insane. I just don't think we need a special word for certain kinds of experience such as "qualia". Nor does Dennett think there is no such thing as conscious experience. He wrote an entire book about how it can be explained. I don't expect he'd have bothered if he didn't think it existed :) You write:
You leave out that the “concept called “I”" (along with the ‘sense of time’) can only be had, under your view, by derivation – meaning, you only ‘have the concept of I’ by means of a third party deriving that you have the ‘concept of I’. But that third party only ‘derives that you have the concept of I’ by virtue of another party deriving that that are deriving that you have the concept of ‘I’ – and so on. When I point this out, your response is ‘yes well it’s infinite circularity, I accept that, no problem there’.
Well, there isn't. And there is no third party either. But before we get to that: do you accept the possibility that conscious experience of the world might be summed over periods of time, rather than being a continuous flow? Because that is an important underpinning to my approach (and Dennett's, I think). If you think this is false, then perhaps that's the next thing to discuss. Except that I'm going to be out of action for the next few days. But I'd certainly like to know your response.Elizabeth Liddle
July 19, 2011 03:42 PM PDT
Null: “you don’t believe that there is qualia, or conscious experience. There is no “experience”, there is only function.” I think the robot future-fantasy analogy clearly illustrated this view.junkdnaforlife
July 19, 2011 02:56 PM PDT
Well, you can probably look up the physiological reactions but I would say (and Nullasalus would tell me this is illogical, but I’ll say it anyway, because I don’t think it is) that pain is the knowledge that we are in the grip of an aversive reaction. Which requires a concept called “I” and a sense of time. I think integration of input over time is absolutely critical to conscious experience, and that although we perceive it as a flow, it operates as a series of discrete summations. There is fairly good evidence to support this. See, you say "Nullasalus would tell me this is illogical". You don't mention why I lodge the objections I do. You leave out that the "concept called "I"" (along with the 'sense of time') can only be had, under your view, by derivation - meaning, you only 'have the concept of I' by means of a third party deriving that you have the 'concept of I'. But that third party only 'derives that you have the concept of I' by virtue of another party deriving that that are deriving that you have the concept of 'I' - and so on. When I point this out, your response is 'yes well it's infinite circularity, I accept that, no problem there'. You talk about conscious experience, but you leave out the part that - unless you sharply disagree with Dennett - you don't believe that there is qualia, or conscious experience. There is no "experience", there is only function. You say "there is fairly good evidence to support this" - but your "evidence" in this case is this: 'This is the only thing, given my metaphysics, that could be taking place. Therefore, rather than amend my metaphysics in any way or be open to that possibility, I will interpret all data in light of this and call it evidence.' Yes, I say that asserting that all intentionality is derived (so that if I think 'I'm going to the supermarket', I only think this in virtue of another observer, perhaps internal, interpreting (brain processes, what-have-you) as 'it thinks it is going to the supermarket' which in turn only means this by virtue of yet another observer interpreting that as 'it thinks it thinks it is going to the supermarket') and then trying to defend it by saying 'emergence! infinite circularity!' is both incoherent and a ridiculous dodge. Even Hofstadter gives off the impression that he knows it's ridiculous - he justifies his taking this route in large part because the alternatives are too religious for his liking, and we can't have that. (The man's also a reductionist about the mind by his own admission, though his reductionism comes with a real telling caveat.) Yes, I think denying the existence of experience, of qualia, is insane - even many materialists would agree (and many would agree that the denial of original intentionality is just as insane).nullasalus
July 19, 2011 01:51 PM PDT