Uncommon Descent Serving The Intelligent Design Community

How is libertarian free will possible?


In this post, I’m going to assume that the only freedom worth having is libertarian free will: the free will I possess if there are choices that I have made during my life where I could have chosen differently, under identical circumstances. That is, I believe that libertarian free will is incompatible with determinism. By contrast, indeterminism is compatible with the existence of libertarian freedom, but in no way implies it.

There are some people who think that even if your choices are fully determined by your circumstances, they are still free, if you selected them for a reason and if you are capable of being educated to act for better reasons. People who think like that are known as compatibilists. I’m not one of them; I’m an incompatibilist. Specifically, I’m what is known as an agent-causal incompatibilist: I believe that humans have a kind of agency (an ability to act) that cannot be explained in terms of physical events.

Some time ago, I came across The Cogito Model of human freedom, on The Information Philosopher Web site, by Dr. Bob Doyle. The site represents a bold philosophical attempt to reconcile the valid insights underlying both determinism and indeterminism. The authors of the model argue that it accords well with the findings of quantum theory, and guarantees humans libertarian freedom, but at the same time avoids the pitfall of making chance the cause of our actions. Here’s an excerpt:

Our Cogito model of human freedom combines microscopic quantum randomness and unpredictability with macroscopic determinism and predictability, in a temporal sequence.

Why have philosophers been unable for millennia to see that the common sense view of human freedom is correct? Partly because their logic or language preoccupation makes them say that either determinism or indeterminism is true, and the other must be false. Our physical world includes both, although the determinism we have is only an adequate description for large objects. So any intelligible explanation for free will must include both indeterminism and adequate determinism.

At first glance, Dr. Doyle’s Cogito Model appears to harmonize well with the idea of libertarian free will. Doyle makes a point of disavowing determinism, upholding indeterminism, championing Aristotle, admiring Aquinas and upholding libertarian free will. However, it turns out that he’s no Aristotelian, and certainly no Thomist. Indeed, he isn’t even a bona fide incompatibilist. Nevertheless, Doyle’s Cogito Model is a highly instructive one, for it points the way to how a science-friendly, authentically libertarian account of freedom might work.

There are passages on Dr. Doyle’s current Web site (see for instance paragraphs 3 and 4 of his page on Libertarianism) where he appears to suggest that our character and our values determine our actions. This is of course absurd: if I could never act out of character, then I could not be said to have a character. I would be a machine.

Misleadingly, in his Web page on Libertarianism, Dr. Doyle conflates the incoherent view that “an agent’s decisions are not connected in any way with character and other personal properties” (which is surely absurd) with the entirely distinct (and reasonable) view that “one’s actions are not determined by anything prior to a decision, including one’s character and values, and one’s feelings and desires” (emphases mine). Now, I have no problem with the idea that my bodily actions are determined by my will, which is guided by my reason. However, character, values, feelings and desires are not what makes an action free – especially as Doyle makes it quite clear in his Cogito Model that he envisages all these as being ultimately determined by non-rational, physicalistic causes:

Macro Mind is a macroscopic structure so large that quantum effects are negligible. It is the critical apparatus that makes decisions based on our character and values.

Information about our character and values is probably stored in the same noise-susceptible neural circuits of our brain…

The Macro Mind has very likely evolved to add enough redundancy, perhaps even error detection and correction, to reduce the noise to levels required for an adequate determinism.

The Macro Mind corresponds to natural selection by highly determined organisms.

There is a more radical problem with Doyle’s model: he acknowledges the reality of downward causation, but because he is a materialist, he fails to give a proper account of downward causation. He seems to construe it in terms of different levels of organization in the brain: Macro Mind (“a macroscopic structure so large that quantum effects are negligible… the critical apparatus that makes decisions based on our character and values”) and Micro Mind (“a random generator of frequently outlandish and absurd possibilities”) – the latter being susceptible to random quantum fluctuations, from which the former makes a rational selection.

Doyle goes on to say:

Our decisions are then in principle predictable, given knowledge of all our past actions and given the randomly generated possibilities in the instant before decision. However, only we know the contents of our minds, and they exist only within our minds. Thus we can feel fully responsible for our choices, morally and legally.

This passage leads me to conclude that Doyle is a sort of compatibilist, after all. As I’ve said, I’m not.

So how do I envisage freedom? I’d like to go back to a remark by Karl Popper, in his address entitled, Natural Selection and the Emergence of Mind, delivered at Darwin College, Cambridge, November 8, 1977. Let me say at the outset that I disagree with much of what Popper says. However, I think he articulated a profound insight when he said:

A choice process may be a selection process, and the selection may be from some repertoire of random events, without being random in its turn. This seems to me to offer a promising solution to one of our most vexing problems, and one by downward causation.

Let’s get back to the problem of downward causation. How does it take place? The eminent neurophysiologist and Nobel prize winner, Sir John Eccles, openly advocated a “ghost in the machine” model in his book Facing Reality, 1970 (pp. 118-129). He envisaged that the “ghost” operates on neurons that are momentarily poised close to a threshold level of excitability.

But that’s not how I picture it.

My model of libertarian free will

Reasoning and choosing are indeed immaterial processes: they are actions that involve abstract, formal concepts. (By the way, computers don’t perform formal operations; they are simply man-made material devices that are designed to mimic these operations. A computer is no more capable of addition than a cash register, an abacus or a Rube Goldberg machine.)

Reasoning is an immaterial activity. This means that reasoning doesn’t happen anywhere – certainly not in some spooky Cartesian soul hovering 10 centimeters above my head. It has no location. Ditto for choice. However, choices have to be somehow realized on a physical level, otherwise they would have no impact on the world. The soul doesn’t push neurons, as Eccles appears to think; instead, it selects from one of a large number of quantum possibilities thrown up at some micro level of the brain (Doyle’s micro mind). This doesn’t violate quantum randomness, because a selection can be non-random at the macro level, but random at the micro level. The following two rows of digits will serve to illustrate my point.

1 0 0 0 1 1 1 1 0 0 0 1 0 1 0 0 1 1
0 0 1 0 0 0 0 1 1 0 1 1 0 1 1 1 0 1

The above two rows of digits were created by a random number generator. Now suppose I impose the macro requirement: keep the columns whose sum equals 1, and discard the rest. I now have:

1 0 1 1 1 0 0 0 0 1
0 1 0 0 0 1 1 1 1 0

Each row is still random, but I have imposed a non-random macro-level constraint. That’s how my will works when I make a choice.
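The filtering step can be made concrete in a few lines of Python. This is purely an illustrative sketch of the point above, using the two randomly generated rows from the example:

```python
# The two randomly generated rows from the example above.
row1 = [1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1]
row2 = [0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1]

# Macro-level, non-random constraint: keep only the columns whose sum is 1.
kept = [(a, b) for a, b in zip(row1, row2) if a + b == 1]
filtered1 = [a for a, b in kept]
filtered2 = [b for a, b in kept]

print(filtered1)
print(filtered2)
```

Each surviving row remains a random sequence at the micro level; only the column-wise constraint imposed on the pair of rows is non-random.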

For Aristotelian-Thomists, a human being is not two things – a soul and a body – but one being, capable of two radically different kinds of acts – material acts (which other animals are also capable of) and formal, immaterial actions, such as acts of choice and deliberation. In practical situations, immaterial acts of choice are realized as a selection from one of a large number of randomly generated possible pathways.

On a neural level, what probably happens when an agent decides to raise his/her arm is this: the arm goes through a large number of micro-level muscular movements (tiny twitches) which are randomly generated at the quantum level. The agent tries these out over a very short interval of time (a fraction of a second) before selecting the one which feels right – namely, the one which matches the agent’s desire to raise his/her arm. This selection continues during the time interval over which the agent raises his/her arm. The wrong (randomly generated quantum-level) micro-movements are continually filtered out by the agent.
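The generate-and-filter loop described above can be caricatured in code. To be clear, this is only an illustration of the selection logic, not a model of real neurophysiology; the target angle, twitch range, candidate count and step count are all arbitrary stand-ins:

```python
import random

def raise_arm(target=90.0, steps=30, seed=1):
    """Caricature of macro-level selection over randomly generated micro-movements."""
    rng = random.Random(seed)
    angle = 0.0
    for _ in range(steps):
        # Micro level: a burst of randomly generated candidate twitches.
        twitches = [rng.uniform(-5.0, 5.0) for _ in range(20)]
        # Macro level: the agent non-randomly selects the twitch that best
        # matches the desire to move the arm toward the target; the other
        # randomly generated candidates are filtered out.
        best = min(twitches, key=lambda t: abs(target - (angle + t)))
        angle += best
    return angle

print(round(raise_arm(), 1))
```

Each individual twitch is drawn at random, yet the trajectory as a whole homes in on the goal, which is exactly the "random at the micro level, non-random at the macro level" structure the model requires.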

The agent’s selection usually reflects his/her character, values and desires (as Doyle proposes) – but on occasion, it may not. We can and do act out of character, and we sometimes act irrationally. Our free will is not bound to act according to reason, and sometimes we act contrary to it (akrasia, or weakness of will, being a case in point).

So I agree with much of what Doyle has to say, but with this crucial difference: I do not see our minds as having been formed by the process of natural selection. Since thinking is an immaterial activity, any physicalistic account of its origin is impossible in principle.

Comments
F/N: I have put up a reference clip on the Smith Model (a cybernetic architecture) and linked issues and ideas, here. KF
kairosfocus
June 4, 2013, 06:20 AM PST
LIZ: Well, I don’t think any “part” of the brain is conscious! I think I am conscious, and that I consist of not just my brain but the rest of me as well, including my history over time.
As you probably know, brain studies indicate that certain areas are directly related to conscious experience and some are not. You can probe/excite one spot and the subject is consciously aware of it, and probe another and the subject is not. So then, it would seem my characterization is a fair one. While I agree that "you" are conscious, it seems empirically true that only parts of your brain, that is, only a subset of neurons and synapses and dendrites, are associated directly with conscious experience. If this is true, it is an interesting and unanswered question how this can be true if consciousness is reducible to certain neurons, synapses and dendrites. What makes those particular neurons, synapses and dendrites so special?
mike1962
July 25, 2011, 08:26 AM PST
Watch carefully as I emit a cloud of ink so as to elude my most dangerous predator, the dreaded clear position.
classic
Mung
July 24, 2011, 06:39 PM PST
Yes. But some are less wrong than others, mostly by being more complete. However, sometimes a simpler, less complete, more wrong, model is good enough, and easier to handle. And as of right now, the 'filling in' model is the better model than the 'refrigerator light' model, given the data. All this wriggling about how you think the other model is better, even though it's wrong, because all models are wrong, but some are better than others, but the filling in model is a good model... it's not necessary. It’s a fair point, and an interesting one – indeed, one I have been making, that consciousness is intrinsically related to memory. So I will rephrase: on waking we are conscious of very little that has occurred since we fell asleep, and what we are conscious of, is frequently confusing. However, interestingly, we usually are conscious that time has passed, which is often not true on recovery from an anaesthetic, suggesting that anaesthesia promotes, at the least, a more profound amnesia than natural sleep. See, it's not a 'point' you have been making. An assertion, a fiat declaration, is not a point. You say consciousness is intrinsically related to memory - but the only support you have of this is reportability. But that suggests a link between memory and reportability. Consciousness, experience itself, isn't necessarily tied to memory. Well, in terms of understanding how consciousness work, efficiency seems a reasonable variable to consider – an efficient brain can be more compact than an inefficient one, and when designing, say, “seeing” robots, efficiency is important. And we have good behavioural evidence that the kind of efficiency we build into “seeing” robots is also displayed by the visual systems of living organisms, which, indeed, is where we got the idea from! The behavioural evidence includes a wealth of data from research into illusions, including something called “change blindness”. 
You're talking about 'an efficient brain', but if experience is simply a property of all matter (among other possibilities), efficiency goes out the window. It would be like talking about how brains would be more efficient if "they only were affected by gravity when they needed it", then going off to speculate on how evolution developed the ability for brains to be affected by gravity. It's wrongheaded from the start. My point from the start has been that when we talk about what we recall or report, we are talking about our memory - but that memory and experience are not necessarily linked. It's entirely possible to have an experience but to have no memory of it, or to be unable to recall a memory. To talk about memory is not to talk about experience - and once you recognize that point, the model you say is "what the data shows", turns out not to be "what the data shows". It's one interpretation you get which is powered more by your assumptions than the data. I've offered an explanation in reply that makes different assumptions and uses the same data. So when you say... Again, it’s possible that the findings are all nonsense, just as it’s possible that omphalism or solipsism are true. ...You're being tremendously disingenuous. It's not the data, the actual 'findings' that I'm questioning. It's the interpretation. That point is driven home when you recognize that omphalism and solipsism do not need to deny any 'data' - they are frameworks for interpreting data. But then, you're using a framework to interpret the data too. No, I don’t appreciate that. Please explain. A person can be a reductionist while still finding certain models useful even if they know they're wrong. They can even use a 'holistic' model when they reject holism in the particular case. Let's say that someone is modeling the throw of a baseball. They don't think 'the baseball' is some irreducible thing - in fact, they're sure the baseball reduces entirely to its parts. 
But it takes a tremendous amount of time to model the throw of a baseball in terms of quanta. It may not even be feasible yet, they may even lack all the data required. The fact that they model the baseball as a single unit, the fact that they find it convenient to talk about the baseball as 'a baseball', doesn't make them non-reductive materialist or an emergentist. I do agree with what I think Hofstadter is saying, which is, I think, that we could (in theory only) model a complete brain from the model up, but that we would be incapable of understanding the model as an explanation. I’d put it as I’ve put it earlier, similarly: that an explanation of the brain in terms of, say the interactions of the quarks of which they are composed would not be an explanation. And I wouldn’t use the word “reductionist” to describe an explanation that wasn’t an explanation”! An incomprehensible explanation is an oxymoron. But then Hof likes playing with words. "Hof likes playing with words". Granted, he BSs quite a lot and makes it sound like he said something profound when he mostly produced obfuscation. But, let's see what you're saying here. 1) Hof says outright that his position is not antireductionistic. 2) He further asserts flatly that he has no doubt that a totally reductionistic but incomprehensible explanation of the brain exists. 3) He also claims that nothing he's saying should be taken as being in conflict with reductionism. So what's your response? A) You agree entirely with Hof. B) But you don't think that an entirely reductionistic explanation of the brain exists. C) Indeed, to explain the brain entirely in reductionist terms would not be an explanation at all. D) So, clearly Hof had to be kidding. Because otherwise he'd either be talking nonsense, or worse, you and him would disagree. Don't you think "Hof likes to play with words" is a little desperate here? 
You're saying that Hof can't really believe that there is a completely reductionistic explanation of the brain, even though he just said explicitly he's sure such a thing exists. You're trying to insist he's not a reductionist, when he's flatly stating that his views should not be taken as antireductionist. Surprise: You're the one who's 'playing with words' here, more than Hof. No, I don’t, because I don’t think it does. I think the conflict is only apparent, not real, and is apparent because of different ways of using words. Bwahaha. And how would you register a real, rather than apparent, conflict? Well, if a different way of using words was involved! Your big defense here is to say that a completely reductive explanation of the mind is an oxymoron and therefore cannot exist. Therefore, Hof can't really believe this exists, even though he just said it did. It can't be that Hof and you disagree, right? Or worse, that Hof's own ideas are deeply flawed? No. Has to be a big misunderstanding. No, because I don’t think the non-reductive models are fictions. I don’t even know what that would mean. Non-reductive models can fit the data as well, at a huge efficiency saving, as reductive ones. You've never heard of fictionalism, or even the term "useful fiction"? You just went on talking about how models can be useful while being wrong, but you can't wrap your head around the idea that a model can be false but pragmatic? It isn’t even a fiction that geocentrism is true. Any point in the universe can be regarded as a reference point, and we are, by definition, at the very centre of the observable universe. So not only is a geocentric reference point perfectly reasonable, it’s has a more profound truth value. I love it. So, I point out that someone can reject geocentrism and still use a geocentric model, and your response is to looking-glass geocentrism until you can say that geocentrism is true? This is epic. 
Parsing the world in terms of “patterns” and “information” is something that minds do. I’m no solipsist, so I would not argue that the referents for those words don’t represent an underlying reality. But, as I’ve probably said before, but I’ll say it again; I do not consider we have direct access to reality – all we have are models. We can test those models against data (which themselves are models at a level closer to “raw” reality) but we can never actually access reality directly. But the better our models predict the data, the closer we can infer they describe an underlying reality. And that, I suggest, is how science works – by making ever-more closely fitting models of reality. Metaphors and propositions expressed linguistically are one kind of model. There are others. And here we go again. EL: And if information and patterns are, in your lexicon, “immaterial” then I’m not a materialist because I am totally persuaded they exist. NS: So these "information" and "patterns" exist in nature, entirely independent of any minds deriving or assigning them? EL: Watch carefully as I emit a cloud of ink so as to elude my most dangerous predator, the dreaded clear position. You're damn certain that information and patterns exist! And by that you mean that there are patterns and information in your head. Do patterns and information exist independently of your mind? ... Who's to say, perhaps the world is naught but illusion! (We don't have direct access to reality, but somehow we can check our models against it. Using the data which is also a model.) So, that certainty that information and patterns exist, turns out to not be quite so certain. It's a way minds parse things. But wait, do minds actually parse things, independent of other minds assigning or deriving that they are parsing? Oh, I know the answer: Back to infinite loops again. Not sure what you mean in this context. Information is often compressible. 
I was asking if information 'reduced to' something else, but since you've moved from being certain that information and patterns exist to declaring them to be the result of mental parsing, it's moot.
nullasalus
July 24, 2011, 02:45 PM PST
Elizabeth Liddle:
Not sure what you mean in this context. Information is often compressible.
Sigh. When information is compressed, how is that information reduced? The context, clearly, is reductionism. So the question should be understood as not one about whether information and patterns can be compressed, but rather whether they can be reduced.
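The compression/reduction distinction being drawn here is easy to make concrete: lossless compression changes only the representation, not the information itself, since the original is recoverable bit for bit. A minimal sketch using Python's standard zlib module:

```python
import zlib

message = b"the quick brown fox jumps over the lazy dog " * 20

compressed = zlib.compress(message)

# The representation shrinks...
print(len(compressed) < len(message))          # True

# ...but nothing is lost: decompression restores the original exactly.
print(zlib.decompress(compressed) == message)  # True
```

So a compressed message is a smaller encoding of the same information, which is a different claim from saying the information has been reduced to something more fundamental.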
Reductionism can mean either (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents.[1] This can be said of objects, phenomena, explanations, theories, and meanings.
http://en.wikipedia.org/wiki/Reductionism
Mung
July 24, 2011, 11:01 AM PST
I don't especially mind labels as long as they don't indicate to their reader that I hold views that I don't in fact hold. Mung: I don't deny the existence of patterns and information. You say these are immaterial. Why then, do you call someone who points to the existence of patterns and information a "materialist"?
Elizabeth Liddle
July 24, 2011, 10:57 AM PST
Elizabeth, You are a self-described materialist, even if you don't care for the label.
Mung
July 24, 2011, 10:52 AM PST
Nullasalus:
Because all models are wrong, although some are less wrong than others. More importantly, all models are incomplete, but sometimes a simpler,though more incomplete model is more useful than a more complex, though more complete one. So, the filling in model is a good model but probably wrong and the fridge light model is better despite the filling in model being better supported by the data. Also, all models are wrong.
Yes. But some are less wrong than others, mostly by being more complete. However, sometimes a simpler, less complete, more wrong, model is good enough, and easier to handle.
For example, we are asleep, we aren’t conscious of much, and yet when we awake we are usually aware that time has passed. We aren’t conscious of much? According to who? How do we know we’re not plenty conscious but simply don’t remember what we were conscious of? The mere existence of dreams, and the difficulty recalling them, speaks against your view here.
It’s a fair point, and an interesting one – indeed, one I have been making, that consciousness is intrinsically related to memory. So I will rephrase: on waking we are conscious of very little that has occurred since we fell asleep, and what we are conscious of, is frequently confusing. However, interestingly, we usually are conscious that time has passed, which is often not true on recovery from an anaesthetic, suggesting that anaesthesia promotes, at the least, a more profound amnesia than natural sleep.
We have no awareness of what we are not seeing, probably because we do not need to – if we do need to get information from somewhere else, we can. So rather than “filling in” I think it’s better to think that we are aware on a “need to know” basis – “need to be conscious of”. Much more efficient than being conscious of everything all the time. Or we have an experience that is not remembered. You say ‘much more efficient’, but who says efficiency is key here? It would be very efficient if, re: that old quantum physics joke, the moon wasn’t there unless we were looking at it. It’s not a good argument that the moon isn’t there if we’re not looking at it.
Well, in terms of understanding how consciousness works, efficiency seems a reasonable variable to consider – an efficient brain can be more compact than an inefficient one, and when designing, say, “seeing” robots, efficiency is important. And we have good behavioural evidence that the kind of efficiency we build into “seeing” robots is also displayed by the visual systems of living organisms, which, indeed, is where we got the idea from! The behavioural evidence includes a wealth of data from research into illusions, including something called “change blindness”. Again, it’s possible that the findings are all nonsense, just as it’s possible that omphalism or solipsism are true. There’s no way of knowing that we are not acutely conscious throughout our sleeping hours, but have no access to any of the stuff we were conscious of during those hours subsequently. But then this is why I think that memory is intrinsic to consciousness – without memory, however short term, I suggest that consciousness makes no sense, which is why Edelman calls consciousness “the remembered present”.
Now the reason I reject the label “reductionist” is that it seems wildly inappropriate as a term to describe a holistic view of the self. I can scarcely think of more counter-intuitive term. Do you appreciate that ‘reductionist’ and ‘not reductionist’ are not about mere pragmatic descriptions, but realities?
No, I don’t appreciate that. Please explain.
Hof affirms explicitly that he believes a reductionist explanation of mind is true, but incomprehensible. So, since we can’t comprehend the reductionist explanation of mind, we use metaphors. Let’s put this question in as stark relief as I think is possible, EL. Here’s the quote from Hof: “In principle, I have no doubt that a totally reductionistic but incomprehensible explanation of the brain exists; the problem is how to translate it into a language we ourselves can fathom.” A) Do you agree that ‘a totally reductionistic’ explanation of the brain exists?
Well, as I don’t “appreciate” (yet!) that “reductionist” and “not-reductionist” are not mere pragmatic descriptions, I’m going to answer on the basis that they are. I do agree with what I think Hofstadter is saying, which is, I think, that we could (in theory only) model a complete brain from the model up, but that we would be incapable of understanding the model as an explanation. I’d put it as I’ve put it earlier, similarly: that an explanation of the brain in terms of, say, the interactions of the quarks of which they are composed would not be an explanation. And I wouldn’t use the word “reductionist” to describe an explanation that wasn’t an explanation! An incomprehensible explanation is an oxymoron. But then Hof likes playing with words.
B) If not, do you realize this places you in apparent conflict with Hof, and possibly Dennett?
No, I don’t, because I don’t think it does. I think the conflict is only apparent, not real, and is apparent because of different ways of using words.
C) Do you realize that a reductionist can nevertheless make use of useful non-reductive fictions – in the same way that, say, a geocentric reference point can be used in satellite launches without someone having to affirm that geocentrism is true?
No, because I don’t think the non-reductive models are fictions. I don’t even know what that would mean. Non-reductive models can fit the data as well, at a huge efficiency saving, as reductive ones. It isn’t even a fiction that geocentrism is true. Any point in the universe can be regarded as a reference point, and we are, by definition, at the very centre of the observable universe. So not only is a geocentric reference point perfectly reasonable, it has a more profound truth value.
And, by the same token, if you think that “pattern” and “information” are immaterial, then I am not a materialist. I think the me-ness of me inheres in my pattern, not in the material in which that pattern is instantiated. Fantastic. A) Do these “patterns” and “information” (‘information about’, it would seem) exist independently of any minds deriving or assigning them?
Parsing the world in terms of “patterns” and “information” is something that minds do. I’m no solipsist, so I would not argue that the referents for those words don’t represent an underlying reality. But, as I’ve probably said before, but I’ll say it again; I do not consider we have direct access to reality – all we have are models. We can test those models against data (which themselves are models at a level closer to “raw” reality) but we can never actually access reality directly. But the better our models predict the data, the closer we can infer they describe an underlying reality. And that, I suggest, is how science works – by making ever-more closely fitting models of reality. Metaphors and propositions expressed linguistically are one kind of model. There are others.
B) Are these patterns and information themselves reducible?
Not sure what you mean in this context. Information is often compressible.
Elizabeth Liddle
July 24, 2011, 03:27 AM PST
What's wrong with it, Mung? We are talking labels here, right? What does the term "materialist" mean, in your lexicon?
Elizabeth Liddle
July 24, 2011, 02:52 AM PST
A further point. Elizabeth says if we define information as being immaterial, and she is totally persuaded that information exists, it follows that she is not a materialist. This sort of asinine "reasoning" is just one example [they are Legion] of what I find so frustrating about our dear Lizzie.
Mung
July 23, 2011, 06:21 PM PST
Elizabeth Liddle:
But what you call me depends how you define your concepts and terms.
Well, I don't think I've ever called you a liar, though I suppose it's possible. Yet you seem to think that I do so with some regularity. But I prefer to let you speak for yourself on the question of whether or not you are a materialist:
I do not see why a “purposeless, mindless process” should not produce purposeful entities, and indeed, I think it did and does. - Elizabeth Liddle
Don't you think that makes you a materialist, regardless of how we define "information" or "pattern"?
Mung
July 23, 2011, 05:28 PM PST
Because all models are wrong, although some are less wrong than others. More importantly, all models are incomplete, but sometimes a simpler, though more incomplete model is more useful than a more complex, though more complete one.
So the filling in model is a good model but probably wrong, and the fridge light model is better despite the filling in model being better supported by the data. Also, all models are wrong.
For example, we are asleep, we aren't conscious of much, and yet when we awake we are usually aware that time has passed.
We aren't conscious of much? According to who? How do we know we're not plenty conscious but simply don't remember what we were conscious of? The mere existence of dreams, and the difficulty recalling them, speaks against your view here.
We have no awareness of what we are not seeing, probably because we do not need to – if we do need to get information from somewhere else, we can. So rather than "filling in" I think it's better to think that we are aware on a "need to know" basis – "need to be conscious of". Much more efficient than being conscious of everything all the time.
Or we have an experience that is not remembered. You say "much more efficient", but who says efficiency is key here? It would be very efficient if, re: that old quantum physics joke, the moon wasn't there unless we were looking at it. It's not a good argument that the moon isn't there if we're not looking at it.
Now the reason I reject the label "reductionist" is that it seems wildly inappropriate as a term to describe a holistic view of the self. I can scarcely think of a more counter-intuitive term.
Do you appreciate that "reductionist" and "not reductionist" are not about mere pragmatic descriptions, but realities? Hof affirms explicitly that he believes a reductionist explanation of mind is true, but incomprehensible. So, since we can't comprehend the reductionist explanation of mind, we use metaphors.
Let's put this question in as stark relief as I think is possible, EL. Here's the quote from Hof: "In principle, I have no doubt that a totally reductionistic but incomprehensible explanation of the brain exists; the problem is how to translate it into a language we ourselves can fathom." A) Do you agree that "a totally reductionistic" explanation of the brain exists? B) If not, do you realize this places you in apparent conflict with Hof, and possibly Dennett? C) Do you realize that a reductionist can nevertheless make use of useful non-reductive fictions – in the same way that, say, a geocentric reference point can be used in satellite launches without someone having to affirm that geocentrism is true?
And, by the same token, if you think that "pattern" and "information" are immaterial, then I am not a materialist. I think the me-ness of me inheres in my pattern, not in the material in which that pattern is instantiated.
Fantastic. A) Do these "patterns" and "information" ("information about", it would seem) exist independently of any minds deriving or assigning them? B) Are these patterns and information themselves reducible?nullasalus
July 23, 2011 at 02:19 PM PST
Mung:
Like a wave?
Yep, but a lot more complicated.
And, by the same token, if you think that “pattern” and “information” are immaterial, then I am not a materialist. So what we think makes you not a materialist? Interesting.
What you think makes no difference to what I think unless you persuade me to think differently. But what you call me depends on how you define your concepts and terms. And if information and patterns are, in your lexicon, "immaterial", then I'm not a materialist, because I am totally persuaded they exist.Elizabeth Liddle
July 23, 2011 at 11:41 AM PST
It isn't about the 'ists', it's about the '-ism'.Ilion
July 23, 2011 at 11:27 AM PST
Elizabeth Liddle:
REDUCTIONISTS DO NOT NECESSARILY REDUCE THINGS TO THEIR CONSTITUENT PARTS; SOME OF THEM ELEVATE SPECIFIC ARRANGEMENTS OF CONSTITUENT PARTS TO THE STATUS OF A WHOLE THAT HAS PROPERTIES NOT SHARED WITH ANY OF THE PARTS.
Like a wave? And, by the same token, if you think that “pattern” and “information” are immaterial, then I am not a materialist. So what we think makes you not a materialist? Interesting.Mung
July 23, 2011 at 10:48 AM PST
Mike 1962:
… How can “part” of the brain, or the “backend” of the brain be conscious and not the whole damn thing? What is so special about that particular arrangement of synapses and dendrites that makes it conscious?
Well, I don't think any "part" of the brain is conscious! I think I am conscious, and that I consist of not just my brain but the rest of me as well, including my history over time. Unfortunately we have found ourselves in a closed loop with this conversation, so it will probably just languish, but the basis of my position is that the consciousness question is ill-posed. What I have been trying to do (and what I think both Dennett and Hofstadter have tried to do) is to pose the question so that it is answerable. Of course the natural reaction to that is "but you haven't answered the question!" or, worse, "you are denying the existence of the very explanandum!" Well, in a sense yes, but just because I don't think there is an entity called "consciousness" doesn't mean that I don't think we are conscious. I do. I think organisms (not brains, or part-brains, or brain regions) are conscious, and that one of the things that very smart organisms, such as people, are conscious of is their own existence, specifically as objects with the same set of properties as other similar objects, e.g. other people. Now the reason I reject the label "reductionist" is that it seems wildly inappropriate as a term to describe a holistic view of the self. I can scarcely think of a more counter-intuitive term. But if, by reductionist, you mean that I think that the self is the result of lots of subprocesses and systems, going right down to subatomic processes and systems, then sure, I guess I just have to wear that label. But if you insist on my wearing such a label, please also read the hazard warning printed in red letters below, saying: REDUCTIONISTS DO NOT NECESSARILY REDUCE THINGS TO THEIR CONSTITUENT PARTS; SOME OF THEM ELEVATE SPECIFIC ARRANGEMENTS OF CONSTITUENT PARTS TO THE STATUS OF A WHOLE THAT HAS PROPERTIES NOT SHARED WITH ANY OF THE PARTS. And, by the same token, if you think that "pattern" and "information" are immaterial, then I am not a materialist.
I think the me-ness of me inheres in my pattern, not in the material in which that pattern is instantiated. Hope that clears things up a little.Elizabeth Liddle
July 23, 2011 at 05:43 AM PST
Nullasalus:
But FWIW – I think Dennett is right about “filling in” – it’s a moderately good working model, but not really supported by the data. The fridge light is a better model IMO.
If it’s not really supported by the data, then how can it be a good working model?
Because all models are wrong, although some are less wrong than others. More importantly, all models are incomplete, but sometimes a simpler, though more incomplete model is more useful than a more complex, though more complete one. "Filling in" is a reasonable shorthand for what the brain does, and at least it is more accurate than the idea that we see a whole scene at a glance, because we don't. But I think it's a poor metaphor for something that is in many ways far more interesting.
“Well, the data is against it – but it’s still a good model.”? Others seem to think that it’s a good model because it IS supported by the data. And the fridge light model (I take this to mean ‘conscious experience shows up and disappears as needed or whatever’) is better how – especially given that you yourself said that it’s difficult, and I suggest it may be impossible, to test whether consciousness is going ‘off’ as opposed to not being accessible / being forgotten?
By the "fridge light" metaphor, I mean that whenever we need to be conscious of something, we are able to be conscious of it, and so we don't ever register being unconscious of it, just as we don't register that the fridge light is off. As I've said, I think the model of consciousness in which it is "off" or "on" is a poor one too - I think it's much better to think in terms of what we are conscious of, than in terms of "whether we are conscious". For example, we are asleep, we aren't conscious of much, and yet when we awake we are usually aware that time has passed. Interestingly, after most anaesthetics this is not the case. This suggests that awareness of time passing does not cease during normal sleep, but does cease during anaesthesia. Or another way of putting that might be that on waking from normal sleep we can access information (become conscious) of the time that has elapsed, whereas this is not possible after an anaesthetic until we see a clock. I have an odd memory of seeing a clock reading 1.00pm as the anaesthetist injected my hand, and then, as it seemed, after an eyeblink, seeing a clock reading 4.00 - except that it was a different clock! Back to the fridge light, and filling in: there's an interesting experiment you can do with an eyetracker, whereby you program a computer display only to display text where the participant is looking (a couple of degrees of visual angle), and just replace the rest of the text with random letters. The extraordinary thing is that to the participant, the screen looks completely normal, but to those watching, there is a constant flicker, as normal text appears wherever the participant is looking, but nowhere else. We have no awareness of what we are not seeing, probably because we do not need to - if we do need to get information from somewhere else, we can. So rather than "filling in" I think it's better to think that we are aware on a "need to know" basis - "need to be conscious of". 
Much more efficient than being conscious of everything all the time.Elizabeth Liddle
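The gaze-contingent "moving window" experiment Elizabeth describes can be caricatured in a few lines of code. This is a purely illustrative toy, not code for any real eyetracker: the window size, the character-based geometry, and the function name are all assumptions made for the sketch.

```python
import random
import string

def render_frame(text, gaze_index, window=8):
    """One display frame of the moving-window paradigm: veridical text
    within `window` characters of the current gaze position, random
    letters everywhere else (spaces and punctuation left intact)."""
    out = []
    for i, ch in enumerate(text):
        if abs(i - gaze_index) <= window or not ch.isalpha():
            out.append(ch)  # inside the window: the real text
        else:
            out.append(random.choice(string.ascii_lowercase))  # outside: noise
    return "".join(out)

line = "the quick brown fox jumps over the lazy dog"
frame = render_frame(line, gaze_index=10)
# Characters near index 10 survive; the rest are scrambled each frame,
# yet to a reader whose gaze tracks that position the line looks normal.
print(frame)
```

Re-rendering the frame at each saccade, with `gaze_index` fed from the eyetracker, reproduces the effect she reports: the participant sees normal text, while an onlooker sees constant flicker everywhere except the gaze position.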
July 22, 2011 at 04:00 PM PST
mike1962, High compliment, thank you. EL:
You say you are answering my questions, then you don't answer them!
What did I not answer here? You asked me if I think it's incorrect that the 'evidence suggests' it takes time for an apple to be experienced. I explained my problem with taking the 'evidence suggests' that way. Is it that I'm not giving the right answers? That I'm giving rationales rather than some straight-up yes/no?
Which not only did I not say, I said I do not say it, and do not even hypothesise!
You're telling me directly that the 'evidence suggests' something; I gave my reasons for being skeptical of that view. Not to mention my reasons for thinking that even if it did 'take time', this would hardly affect there being a 'stream of consciousness' - I'm splitting hairs, because some hairs have it coming.
Then you talk about "the emergence dance". It's as though you just scan my posts for stimuli, and press the nearest button, without actually addressing the reasoning.
Elizabeth, there is no reasoning in the Emergence Dance. I've seen you appeal to it in multiple threads - each time it follows a similar pattern. "The self/consciousness emerges!", the 'how' is asked for, you talk about feedback loops, someone asks how that leads to the self or consciousness, you go for the metaphors. Even Hof seems to think he hasn't really given an explanation of consciousness and mind so much as gestured in some direction or come up with some useful fiction. And Dennett's position is largely one of keeping the materialist faith while trying to make it sound not ridiculous. What I'm doing is taking note of your typical moves, and pointing them out. And really, for someone who's accusing me of not answering questions - you've dodged quite a lot of mine, and left various observations unanswered.
I gave a quote where Hof (who you've appealed to more than once, and who is typically taken as being on the exact same page as Dennett) affirms the truth of an entirely reductionist view of the mind - he just thinks it's not comprehensible to humans, so we have to use some convenient language. (Let's put aside for a moment the problem of declaring reductionism to be true while admitting he can't comprehend how. And then trying to translate what he can't comprehend.) So here I am, wondering. Hof claims to be a reductionist. You claim to reject reductionism, but you've strongly defended Hof and Dennett. So where's your disagreement with Hof? Is it that you don't disagree, and you just call yourself a non-reductive materialist because you think it sounds better? Or you think Hof's views are non-reductive and he just can't figure that out? I hope you appreciate that this is a pretty important question, since it touches on a claim you've repeated about your views. If you're a reductionist after all, that's going to be quite a change. If you're not a reductionist, then at least given what I've provided it's clear you're in conflict with Hof and quite possibly Dennett - I'd like to explore that. And either way, the result is going to be you've either misunderstood something, misrepresented your views, or think Hof (and possibly Dennett) don't even know what their own views commit them to. I mean, this quote does seem to be accurate - it's pretty straightforward. And I have another source which at least suggests that Dennett rejects non-reductive materialism. But FWIW – I think Dennett is right about “filling in” – it’s a moderately good working model, but not really supported by the data. The fridge light is a better model IMO. If it's not really supported by the data, then how can it be a good working model? "Well, the data is against it - but it's still a good model."? Others seem to think that it's a good model because it IS supported by the data. 
And the fridge light model (I take this to mean 'conscious experience shows up and disappears as needed or whatever') is better how - especially given that you yourself said that it's difficult, and I suggest it may be impossible, to test whether consciousness is going 'off' as opposed to not being accessible / being forgotten?nullasalus
July 21, 2011 at 01:13 AM PST
"Materialists are hell bent on denying themselves. It’s weird. Really really weird." Indeed, and the fact was noted centuries ago. For example: "He must pull out his own eyes, and see no creature, before he can say, he sees no God; He must be no man, and quench his reasonable soul, before he can say to himself, there is no God." -- John DonneIlion
July 21, 2011 at 12:19 AM PST
Nullasalus: You say you are answering my questions, then you don't answer them! And you say you are reading my post, then you say things like this:
The problem is that you’re not entertaining the possibility in this case, because you’re presenting the idea that if there is no report, there was no experience
Which not only did I not say, I said I do not say it, and do not even hypothesise! Then you talk about "the emergence dance". It's as though you just scan my posts for stimuli, and press the nearest button, without actually addressing the reasoning. Well, I'm out most of today; I hope to get back later. But FWIW - I think Dennett is right about "filling in" - it's a moderately good working model, but not really supported by the data. The fridge light is a better model IMO.Elizabeth Liddle
July 21, 2011 at 12:13 AM PST
Nullasalus, you have a superb mind. Thanks for typing out all the words.mike1962
July 20, 2011 at 11:55 PM PST
I am a conscious thing. I am consciousness. That is primary. Everything else is secondary. Including inferences of reason, which my consciousness experiences as a mere process. Duh. Materialists are hell bent on denying themselves. It's weird. Really really weird.mike1962
July 20, 2011 at 11:53 PM PST
"... Ultimately, however, studies have confirmed that the visual cortex does perform a very complex "filling in" process ..." In grade school, during a game of the murderous version of dodge-ball we played, I once caught the ball in the crook of my elbow; it wasn't that I snagged it with my arm and re-directed its flight into the core of my body, so that I could trap it with both arms (as one girl who was very good at the game regularly did). Now, I didn't consciously plan this feat, and as I recall, most of my conscious concentration was directed at avoiding the *other* ball heading my way.Ilion
July 20, 2011 at 10:50 PM PST
There is no such thing as a non-reductive non-eliminative materialism. What there is are materialists who do not, or will not, recognize/understand what materialism means and entails.Ilion
July 20, 2011 at 10:41 PM PST
Nullasalus @ 148 "Sure, hallucinations are possible. Misremembering is possible. What's not possible is being mistaken that you're having a subjective experience when you're having it. And this is where Dennett makes a very key and telling fumble – he confuses memory of qualia with qualia itself. 'You think you saw something, but you cannot have seen that!' merits a reply of, 'Perhaps what I saw was an illusion. But my seeing an illusion is not open to being an illusion itself.'" Yeppers. A "philosophical zombie" or a robot – which are essentially the same thing, actually, and which is what Dennett and EL assert and insist that we are – can never recognize that it has experienced an "illusion" … or that it made a non-illusory mis-judgment. For that matter, a zombie/robot cannot ever actually have/experience an illusion (even the phrases "it experienced an illusion" and "it was mistaken" cannot properly be applied to a zombie/robot). What it can have is a malfunction of its sensory receptors, or a malfunction of its "brain" or CPU; and, unless it has (at least) triply-redundant systems, it has no means by which to initiate a question of the veracity or validity of its inputs, or its outputs.Ilion
July 20, 2011 at 10:38 PM PST
mike1962, What I find particularly odd here is that EL has insisted - strongly - that she's not a reductionist, and how everyone is mistaken in thinking that materialism entails reductionism. Hence the emergence dance. Except she's also leaned heavily on Dennett and Hofstadter. Hof throws out 'emergence' fairly often - but then he turns around and says that this isn't antireductionism, and in fact he's sure an entirely reductive explanation of the mind is true (yet incomprehensible, etc.) And Hof and Dennett are said to be on exactly the same page. So, something's up. Maybe you could make the argument that Hof's a reductionist and doesn't even realize it, but that's a little like saying Hof's kind of stupid. Or maybe EL has deeply misunderstood Hof at least, and possibly Dennett as well. (I know, Dennett wrote a paper blasting 'greedy reductionism', but he contrasted it with the 'good kind' of reductionism.) Or EL's going on about being a non-reductive materialist but taking a position that's indistinguishable from a reductive materialist (or...). Or she disagrees sharply with Dennett and Hof, but isn't explaining how. So, the whole thing's a mess. There's also a claim on wikipedia (unsourced) that Dennett regards the claim that consciousness is an irreducible and emergent 'thing' (and nonreductive physicalism along with it) as mysterianism.nullasalus
July 20, 2011 at 08:13 PM PST
... How can "part" of the brain, or the "backend" of the brain be conscious and not the whole damn thing? What is so special about that particular arrangement of synapses and dendrites that makes it conscious? Unless...mike1962
July 20, 2011 at 07:34 PM PST
Null: "Dennett had powerfully argued that such “filling in” was unnecessary, based on his objections to a Cartesian theater. "
"Powerfully" is not the adverb I would use, but whatever. At any rate, the fact that the whole brain ain't conscious, that it can clearly be shown by experiment that a high degree of processing is done by the brain before it reaches the "conscious part", demonstrates that in some sense there is a "cartesian theater" going on.mike1962
July 20, 2011 at 07:30 PM PST
Nullasalus, I appreciate that this is frustrating for both of us, but it would be really helpful if you would answer my questions! I take it that your answer is yes?
I've been answering your questions and pointing out the problems with what you're presenting. It's pretty straightforward. I've also said straight up what the problem is with your 'this is what the evidence suggests' claim. It's not what the evidence suggests; it's what the evidence, with various assumptions that I question, leads you to model.
And I'd also appreciate it if you'd actually read my posts! I've said several times now that I think that something can be consciously experienced and then forgotten! Then re-remembered! I wrote a whole post about that, don't you remember (heh)?
I've been reading your posts repeatedly - what is with this 'if you disagree with me, clearly you haven't read or understood me' attitude? Of course you agree that something can be experienced and then forgotten. The problem is that you're not entertaining the possibility in this case, because you're presenting the idea that if there is no report, there was no experience - *I* am pointing out that this idea is flawed. Will you acknowledge that it is flawed? Will you acknowledge the possibility of experience without report, or without memory - particularly in this case?
Now I accept that it is possible that even when the brain doesn't appear to register that a face has been shown, "you" somehow nonetheless experience it but immediately forget it. It would be difficult to test this of course. But I also suggest that it is at least a reasonable working model to posit that the brain takes a short while (tens of milliseconds) to do the processing that is required to enable "face" to be experienced, either "at the time" (namely a few tens of milliseconds after presentation) or later.
You shrug off 'it would be difficult to test for this' - really, it may well be impossible to test for it - as if that isn't a concern. And that's one difference between you and me here; I think it's actually a very big concern. What's more, your 'reasonable working model' isn't required to explain the data - in fact, I'm replying with an even better model, one that takes into account the limits of what the tests can show. "Stimulus exposure under certain amounts of time are unlikely to be reported or apparently retained". What makes you think that your rendition of the data becomes more reasonable than mine? The fact that yours contains assumptions that aren't open to testing? And notice that this isn't even some kind of essential point for me - say it takes time for something to enter consciousness and you're still left with subjective experience to account for. Really, you're still left with a stream of consciousness since something taking time to fully enter the stream doesn't mean there is no stream. But I'm actually bothering to be careful with the data and the interpretation, and pointing out the limits and problems that come with a third-person examination of such. You seem only concerned with these problems when they're attached to conclusions you want to dispute - if you don't want to dispute them, it's not much of a worry. As I said, it’s possible that all this is bunk – that the self-consistent model we get from neuroscience is not in fact what happens! But it correlates very highly with various ways of tapping into subjective experience (behavioural performance, self-report) and there comes a point in science when the data fit the model so well, it seems odd to insist that there is something fundamentally wrong with the model. Have you noticed that my counter-model fits the data splendidly as well - and has the added virtue of properly taking into account the limits and pitfalls of the methodology? 
Nothing I've said conflicts with the data or the reports - in fact, what I've said is entirely consistent with the reports. In fact, I question whether your model better fits the behavior. You earlier stated:
Because that is what the evidence suggests – that we "become conscious" of a stimulus, and its properties, over a period of a substantial number of milliseconds, and that if the stimulus duration is shorter than that, we have no conscious recollection of it, even though it may influence our subsequent choices.
You're saying that the stimulus was never experienced, period. I'm proposing it was possible it was experienced but could not be recalled. But then you mention that the stimulus 'may influence our subsequent choices' - and that seems like a good reason to speculate that the stimulus was in fact experienced, even if recollection of it is unavailable.
This is what I mean by experience (from the first person perspective) being summed over time – we experience continuity, but in fact we back-project that continuity after-the-fact.
So we experience continuity because continuity is what we're actually experiencing. What you question, at least when putting it this way, is the source of the stream, not the stream itself - like having the experience of watching a river flow, but not being able to tell if we're watching 'an actual river in front of us' or a very realistic movie of a river flowing. Or maybe we experience an amalgam of both - maybe we're looking at a real river through a screen that has a see-through projection on it. As I said, know your assumptions and the limitations of your methods of inquiry.
But also know the limits of your subjective experience – we frequently think we see things that we cannot have seen.
Sure, hallucinations are possible. Misremembering is possible. What's not possible is being mistaken that you're having a subjective experience when you're having it.
And this is where Dennett makes a very key and telling fumble - he confuses memory of qualia with qualia itself. 'You think you saw something, but you cannot have seen that!' merits a reply of, 'Perhaps what I saw was an illusion. But my seeing an illusion is not open to being an illusion itself.'
An alternative of course is that there is some unknown Stuff called consciousness that enables us to see things that our retinas are incapable of registering! But doesn't it seem more likely that the neuroscience model is correct?
I pointed out the problems with your earlier model, and indeed why at a glance mine actually seems to perform better than yours. As for this situation, it depends on what you're saying. Is it possible to hallucinate, to misremember, or to have an experience that isn't 1:1 with what the retinas are aimed at? Sure, entirely possible - that's very mundane. But if the neuroscientist says 'You could not possibly have had the experience you claimed to have had' - notice that this is about having the experience, not the source of the experience - then so much the worse for the neuroscientist, model be damned. Subjective experience trumps models. If the philosopher says, 'Materialism is true, therefore no thoughts can really be 'about' anything (as Alex Rosenberg and others outright claim) and subjective experience will have to be eliminated by a more complete science (as Pat Churchland and others suggest)', so much the worse for them as well. Funny you should bring up the visual system - I recalled this incident and hunted down the wikipedia reference: "Another criticism comes from investigation into the human visual system. Although both eyes each have a blind spot, conscious visual experience does not subjectively seem to have any holes in it. Some scientists and philosophers had argued, based on subjective reports, that perhaps the brain somehow "fills in" the holes, based upon adjacent visual information.
Dennett had powerfully argued that such "filling in" was unnecessary, based on his objections to a Cartesian theater. Ultimately, however, studies have confirmed that the visual cortex does perform a very complex "filling in" process (Pessoa & De Weerd, 2003)."nullasalus
July 20, 2011 at 02:31 PM PST
Nullasalus:
For example to “experience” a red apple takes tens of milliseconds, it is not instantaneous. This is what our evidence suggests. Do you think this is incorrect?
I repeat: You determine what is or is not experienced by whether it is or isn’t reported. The idea that something can be experienced then forgotten before it is reported doesn’t seem to register with you – and it’s not because of the data in and of itself, but because of the assumptions you bring to the data.
Nullasalus, I appreciate that this is frustrating for both of us, but it would be really helpful if you would answer my questions! I take it that your answer is yes? And I'd also appreciate it if you'd actually read my posts! I've said several times now that I think that something can be consciously experienced and then forgotten! Then re-remembered! I wrote a whole post about that, don't you remember (heh)?
“If a stimulus only lasts less than X amount of time, it will not be reported” does not itself equal “If a stimulus only lasts X amount of time, there is no experience of it”. Not unless you insist that there is no experience unless it’s reported – and then we’re back to the example of not remembering yesterday meaning I had no experience yesterday.
Well, not exactly, but perhaps I see where the roadblock is. Let me try and put the hypothesis as straightforwardly as I can, and for now, for simplicity, we'll confine ourselves to the visual modality, as it is the best studied. If I flash a face on a screen while you are in an MRI scanner, then we can fairly reliably show that a certain region of the brain (called the "face area") will become active. However, if the stimulus is shown for a very short period of time and then masked (to erase the retinal afterimage), or if we degrade the image and show it for a very short amount of time, then we observe no activation in the "face area". We also find that whether or not an image is recalled, or even whether it influences subsequent behaviour (when it is used as a "priming" stimulus), is highly correlated with whether or not we see activation in that face area. A picture of a house activates a different area, and so faces and houses are useful stimuli for figuring out how long it takes for the brain to process the different stimuli. Now I accept that it is possible that even when the brain doesn't appear to register that a face has been shown, "you" somehow nonetheless experience it but immediately forget it. It would be difficult to test this of course. But I also suggest that it is at least a reasonable working model to posit that the brain takes a short while (tens of milliseconds) to do the processing that is required to enable "face" to be experienced, either "at the time" (namely a few tens of milliseconds after presentation) or later. Certainly we can do experiments where people show neural evidence of having recognised a stimulus as a face or a house, but nonetheless cannot recall the picture when presented later; it seems reasonable on these occasions to assume they saw - experienced - the face or house, but that they did not "store" the information in such a way that it was available for access later.
And we even know quite a bit (we think) about how this happens. As I said, it's possible that all this is bunk - that the self-consistent model we get from neuroscience is not in fact what happens! But it correlates very highly with various ways of tapping into subjective experience (behavioural performance, self-report), and there comes a point in science when the data fit the model so well, it seems odd to insist that there is something fundamentally wrong with the model. Especially when we know, again from the visual system, that we make "forward models" of the world. You probably know that people make several saccadic (i.e. jerky) eye movements per second, and so the image of the world on the retina is constantly changing. Not only that, but only the image right at the centre of the retina (the fovea) actually registers much in the way of detail, including colour. So if our eyes were movie cameras, they'd be jerky hand-held cameras loaded with slow film that only registered colour and high-resolution detail in the centre, the rest of the field of view being recorded as looming shapes in grey-scale. But this of course is not what we see! And the reason seems to be that our brains use the information about where our eyes are going to move next to make a predictive model of how the retinal image will change, and rejig everything so that the actual eye movement is discounted. Not only that, but the visual system is set up so that anything of interest elicits an eye movement to it. So our impression is of a wide, detailed visual scene, observed as a gestalt, or simultaneously. But this cannot be the case - that image is simply not what appears on the retina - at any given time, most of it is missing. This is what I mean by experience (from the first person perspective) being summed over time - we experience continuity, but in fact we back-project that continuity after-the-fact.
At least, it is difficult to see how anything else could possibly be the case, given the data.
Know your assumptions and the limitations of your methods of inquiry, particularly with regards to subjective experience.
Well, sure, and the best we can do is model. But also know the limits of your subjective experience - we frequently think we see things that we cannot have seen. We have good explanations of this in neuroscience, to the extent that we can use those explanations to design artificial vision, so the model seems good. An alternative of course is that there is some unknown Stuff called consciousness that enables us to see things that our retinas are incapable of registering! But doesn't it seem more likely that the neuroscience model is correct?Elizabeth Liddle
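The timing logic of the masked-presentation experiments described above can be caricatured in a few lines. The thresholds, names, and the all-or-nothing behaviour are illustrative assumptions for the sketch (the thread only says "tens of milliseconds"); they are not measured values or anyone's actual model.

```python
from dataclasses import dataclass

# Assumed, illustrative thresholds.
REPORT_THRESHOLD_MS = 50  # processing time before a stimulus is reportable
PRIME_THRESHOLD_MS = 15   # minimum exposure for it to prime behaviour at all

@dataclass
class Outcome:
    reportable: bool  # would the participant report/recall seeing it?
    primes: bool      # could it still influence subsequent choices?

def present(duration_ms, masked=True):
    """Toy model: a mask cuts processing off at stimulus offset, while
    without a mask the retinal afterimage lets processing run on."""
    effective_ms = duration_ms if masked else max(duration_ms, REPORT_THRESHOLD_MS)
    return Outcome(
        reportable=effective_ms >= REPORT_THRESHOLD_MS,
        primes=effective_ms >= PRIME_THRESHOLD_MS,
    )

# A 30 ms masked face: not reportable, but can still prime later choices -
# the case the two sides of this thread interpret differently.
subliminal = present(30)
```

Note that the toy only encodes the neuroscientist's operational reading (no report implies no experience at threshold); nullasalus's point is precisely that `reportable=False` need not mean "not experienced".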
July 20, 2011 at 12:16 PM PST