Uncommon Descent Serving The Intelligent Design Community

Why we shall have to wait for a real biography of Stephen Hawking


In a review of Kitty Ferguson’s Stephen Hawking, Ed Lake “examines how Stephen Hawking gets the world to sit up and take notice” (Telegraph, January 12, 2012):

When he speaks, as he has this week on his 70th birthday, the world takes notice. That’s partly down to his distinguished career but, let’s not be squeamish, partly because his motor neurone disease and voice synthesiser have made him a convenient symbol for the life of the mind. That aura of mystical detachment doesn’t quite stand up to examination, however. “Was it just an accident that he always seemed to come up with attention-getting statements whenever public and media attention appeared to require a boost?” asks Kitty Ferguson in her starry-eyed biography. As one of Hawking’s assistants told her: “He isn’t stupid, you know.”

One starts to suspect that his real genius may be for judging the appetites of the public.

Well, how about the appetites of self-conscious urban elites – people who like to feel knowing about "imaginary time" and space wormholes, but who could not point to and name a single star visible in their own region?

Indeed, there’s so little that’s dark or sad about her Hawking, the effect is almost sinister. Perhaps he really is just a permanently upbeat and sunny chap. On the other hand, …

On the other hand, that’s highly unlikely. Such people exist, to be sure, but they don’t think, say, or do the things Hawking has. Which is why we shall have to wait for a real biography of Stephen Hawking.

Stephen Hawking at 70: What would revolutionize our understanding of the universe


Comments
For instance, can your robot show behaviours suggesting an emotional reaction to pain or pleasure, behaviours that were not pre-programmed in it? Can your robot output new, original dFSCI?
ASIMO shows appetitive and aversive behaviours in that video. And yes, those behaviours (if I understand the system correctly) were not "pre-programmed" but learned. Certainly that's the easiest way to do it. And if not ASIMO, there are others that do exhibit non-preprogrammed behaviour. And this gets us back to GAs, and the reason why GAs are so closely related to learning/brain models. The most effective learning models are GAs. I'm fairly sure ASIMO learns by means of GAs.
Elizabeth Liddle
January 16, 2012 at 08:00 AM PDT
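[Editor's note: the exchange above leans on genetic algorithms as a learning mechanism. For readers unfamiliar with the term, here is a minimal, hedged sketch of a GA in Python. It is an illustration only: the toy fitness function, the target vector, and every parameter below are invented for this example, and nothing here describes ASIMO's actual learning system.]

```python
import random

# Hypothetical "ideal" controller weights the GA is trying to discover.
# In a real robot-learning setting, fitness would come from task performance,
# not distance to a known target; this stand-in just keeps the demo short.
TARGET = [0.5, -0.2, 0.8, 0.0, -0.6]

def fitness(genome):
    # Higher is better: negative squared distance from the hypothetical target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1, scale=0.2):
    # Each gene has a small chance of receiving Gaussian noise.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=50, genome_len=5, generations=200):
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 5]          # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best genome:", [round(g, 3) for g in best])
    print("fitness:", round(fitness(best), 4))
```

[Truncation selection plus single-point crossover and Gaussian mutation is only one of many possible designs; the relevant point for the discussion is the select-vary-repeat loop, which is what makes GAs attractive as models of learning.]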
But I agree that "inference of conscious experience in animals, for instance, is sometimes made very strong by their observed behaviour". So if you saw that behaviour in a robot, would you make the same inference? In which case, can you give me an operational definition of experience?
Elizabeth Liddle
January 16, 2012 at 07:37 AM PDT
Elizabeth: I don't agree. The inference of conscious experience in animals, for instance, is sometimes made very strong by their observed behaviour. Even if some objective computing can simulate some behaviour of conscious beings, I definitely do not believe that it can simulate all their behaviours. Feeling, for instance, generates a lot of behaviours that cannot simply be simulated in advance.

One of the most outstanding behaviours of conscious intelligent beings that cannot be simulated by computation is the generation of new dFSCI. I suppose that has something to do with what is being debated in the thread about the mathematical modeling of the mind. I strongly believe that Penrose's argument about Gödel's theorem is strong evidence that conscious cognition is not purely algorithmic. I strongly believe that the failure of computers in generating truly creative language output is an empirical demonstration that they cannot generate new dFSCI. So, there are many ways to approach the problem empirically and scientifically.

And still you have offered nothing to suggest that your robot is essentially different from a motor car, as far as subjective experience is concerned. For instance, can your robot show behaviours suggesting an emotional reaction to pain or pleasure, behaviours that were not pre-programmed in it? Can your robot output new, original dFSCI?
gpuccio
January 16, 2012 at 07:27 AM PDT
gpuccio,

As far as I can tell, the only explanatory functions you have assigned to the immaterial soul/mind are 1) to explain consciousness and 2) to explain free will.

But as I pointed out earlier, postulating an immaterial mind doesn't solve the consciousness problem at all. Why must an immaterial mind give rise to consciousness? If you can't answer that question -- and I haven't seen any indication so far that anyone can -- then an immaterial mind doesn't explain consciousness.

As for free will, I have the impression from earlier comments of yours that you accept the idea of libertarian free will. How do you know that libertarian free will exists? If it doesn't exist, it doesn't require an explanation. And if you could show that it does exist, you would also need to show that it depends on an immaterial mind. How would you do that?

If an immaterial mind isn't needed to explain consciousness or free will, then what explanatory purpose does it serve?
champignon
January 16, 2012 at 07:23 AM PDT
If that is how you are defining it, then we have no way of knowing whether any other entity has it. So we cannot ask science to explain it. Your assertion that a robot does not experience consciousness must remain an assertion, and my assertion (should I make it) that it does, must also remain an assertion. It is untestable.

If we are to turn it into a question with an informative answer, we must define "experience" in a way that allows us to tell whether it has occurred. What we cannot do, however, is to say: only organisms can have experience, therefore X. Unless we can define "experience" in a way that the statement can be falsified, the conclusion is invalid.
Elizabeth Liddle
January 16, 2012 at 07:16 AM PDT
Elizabeth: I have defined it. My term describes the kind of experiences we have in consciousness. Pain. Pleasure. Visualization. The intuition of meaning. The intuition of purpose. Attachment. Love. Fear. The meaning of "true" and "false". And so on. It "describes" that kind of phenomena. Those phenomena have been called "subjective experiences" for millennia. Do you have problems with the concept?

My computer has sequences of bits corresponding to this message in its memory, but it has no awareness of that, least of all of their meaning. A motor car moves, but has no consciousness of moving. I am conscious of my moving. And of my message. You are, too. A cat who is hungry feels hunger, I believe. A motor car whose petrol is low feels nothing.
gpuccio
January 16, 2012 at 07:11 AM PDT
I can't tell you whether I think my robot (well, not mine) has subjective experiences unless you define that term. That's why I asked! And I agree that my use of quotation marks is confusing - for "knows" they were actually scare quotes (heere bee dragones); for the other terms they were meant to indicate quotations from you. Apologies, I will try to avoid using them.
Elizabeth Liddle
January 16, 2012 at 06:54 AM PDT
Elizabeth: You use "subjectively aware" and "knows" in quotation marks for a reason: whether you are aware of it or not, you are simply changing, forcing and twisting their meaning. Do you really believe that the robot has subjective experiences? Even if different from ours? I am not asking you if the robot "has the same subjective experience as we do", but if the robot has subjective experiences, as we do. The point is that the experiences can be different, but they must be subjective, like ours. IOWs, is the robot feeling pleasure and pain? Is the robot attributing meaning to what it does? Or is the robot only an objective system programmed to make some computations and behave accordingly, without any conscious awareness?

The computer computes. And gives outputs, according to computations. The robot does the same. Some of the outputs include movement, while the computer remains in the same place. Is that the difference? The robot "knows" nothing, not any more than my computer "knows" what I am writing here. The robot is not "subjectively aware" of anything, because it has no reason in the world to have subjective experiences, any more than my motor car has.

All objective systems are basically the same: matter interacting according to the rules of physics. Some are arranged according to some designing intelligence: the robot and my computer and my car engine are among them. They are different in type of matter and in structure, but still they are the same thing: matter interacting according to the laws of matter. The laws of matter tell us nothing about subjective experience. The intelligent structure in designed material objects tells us nothing about subjective experiences. If the designer is smart enough, that structure can simulate the outer behaviour of beings who have subjective experiences, to some degree. Simulating an outer behaviour does not mean that an inner awareness is there. How can materialists continuously make such a silly assumption?

I move. My motor car moves. But I am aware that I am moving, and my motor car is not. Even if the robot records its movement, and makes computations about it, that does not make it aware of anything. A motor car with a GPS does the same. Objective computations have nothing to do with having subjective experiences. That's why Chalmers calls the origin of subjective experiences "the hard problem", and the simulation of computations "the easy problem". It's not because the simulation of computations and behaviour is easy; sometimes it's not easy at all. But it is, in principle, possible. The objective explanation of the origin of subjective experience, instead, is another thing entirely.
gpuccio
January 16, 2012 at 05:28 AM PDT
No, I'm certainly not inferring that the robot has the same subjective experience as we do. It must necessarily be vastly different. But I am inferring that it is subjectively aware of its environment and of the moving obstructions within it. Were it not, it would be unable to navigate its environment, and yet it can.

And by "subjectively aware" I mean that it represents those objects in relation to its own current location, and, moreover, in relation to alternative future locations that it might move to, given what it knows (computes, if you prefer) about the current trajectory of those objects. In other words, what it "knows" about the moving objects in its environment is not the objective properties of those objects, but the properties those objects have in relation to it, itself, the robot. I.e., its knowledge is subjective. It is, as it were, representing a given object as "the thing that is moving to my left and which I must avoid in order to reach the other side", not as "there is a thing moving eastwards".

And to tackle your point head on: the robot itself is not "a point in space" - it is a set of possible future points in space relative to the object in question, those future points themselves being determined by the robot's forward model, given the trajectory of the obstruction. A video might help: http://www.youtube.com/watch?v=YPoANTKo5kA
Elizabeth Liddle
January 16, 2012 at 04:20 AM PDT
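[Editor's note: to make the forward-model idea in the comment above concrete, here is a small, hedged sketch. It is a toy construction, not the code behind the robot in the linked video, and every function name and parameter in it is assumed for illustration: the robot extrapolates an obstacle's trajectory, compares those predicted positions against its own predicted positions under each candidate move (an egocentric, "relative to me" representation), and picks the move that avoids a predicted collision while making progress toward a goal.]

```python
import math

def predict(pos, vel, steps, dt=0.1):
    # Forward model: constant-velocity extrapolation of a position.
    return [(pos[0] + vel[0] * dt * k, pos[1] + vel[1] * dt * k)
            for k in range(1, steps + 1)]

def choose_move(robot_pos, goal, obstacle_pos, obstacle_vel,
                speed=1.0, steps=10, safe=0.5):
    # Candidate moves: 16 headings at a fixed speed.
    candidates = [(speed * math.cos(a), speed * math.sin(a))
                  for a in [i * math.pi / 8 for i in range(16)]]
    obstacle_future = predict(obstacle_pos, obstacle_vel, steps)
    best, best_score = None, -float("inf")
    for v in candidates:
        robot_future = predict(robot_pos, v, steps)
        # Egocentric check: distance of the obstacle from the robot's own
        # predicted positions, not the obstacle's absolute coordinates.
        min_dist = min(math.dist(r, o) for r, o in zip(robot_future, obstacle_future))
        if min_dist < safe:
            continue                      # predicted collision: discard this move
        progress = -math.dist(robot_future[-1], goal)   # closer to goal is better
        if progress > best_score:
            best, best_score = v, progress
    return best

if __name__ == "__main__":
    move = choose_move(robot_pos=(0, 0), goal=(5, 0),
                       obstacle_pos=(2, 1), obstacle_vel=(0, -1))
    print("chosen velocity:", move)
```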
Elizabeth: I have been rather explicit: "Just to avoid verbal misunderstandings, 'aware' here means 'having subjective experiences and representations'." And: "The problem is: how do simple objective events, such as changes in neurons, become subjective representations?"

You say: "Clearly the robot has representations." But not subjective representations.

You say: "Clearly its view is subjective (because objects are represented relative to the subject, namely, the robot)." This is simply a change of meaning. Subjective, in my context, obviously does not mean that a representation is related to a point in space. In that sense, a good painting with the correct perspective could be said to have "subjective representations". That's obviously not what I mean.

You say: "Which leaves 'experience' to define. How are you defining 'experience'?" No. Really, my concept is "subjective representations and experiences". That's a simple way of describing something we know very well: what we experience in ourselves every moment. It is a description, not a definition. A description of something that exists: our personal subjective experiences. We perceive them directly in ourselves. We infer them in others. So I ask you: are you inferring that the robot has the same subjective experience? If not, then the robot is not aware.
gpuccio
January 16, 2012 at 03:08 AM PDT
Champignon and Petrushka: I will try to explain. You will not be convinced, but your objections are reasonable, and they deserve an answer.

First of all, I have never used the concept of "soul" in this discussion. You are bringing it in, I don't know why. I have discussed consciousness, because it is an empirical fact. Whether consciousness is a property of an immaterial soul is a philosophical question, and I have not discussed it.

Second: the "I" represents things. That's what consciousness is: an I representing things in itself. There is no doubt that, in our human condition and especially in our waking state, consciousness represents mainly brain states. I have explicitly admitted that, and I do it again here. So, there is no surprise that, if brain states are affected, consciousness represents affected brain states. You seem to believe that there is a difference between simply representing functional brain states and representing dysfunctional brain states. But that's not true. They are representations, just the same.

You say: "That's right. If what gpuccio calls the 'interface model' were correct, then the symptoms of brain damage and brain disorders would look entirely different." No. That's not true. If our consciousness is tied, in our condition, to representing brain states, brain damage will cause damaged representations. That is perfectly compatible with the interface model. The point is: it is not that our consciousness can use the brain as an interface and then, of its own free will, just drop the interface and do things in another way. That's not how it works. Our consciousness is tied to the brain interface. That's what being human implies. NDEs are an example of consciousness partially disconnected from the brain interface. But that is not our usual condition. So, if the brain suggests suffering, we suffer. If the brain suggests a functional representation, we represent functionally. If the brain is dysfunctional, we represent dysfunctionally.

You will say that such a scenario is too passive for a supposed independent principle of consciousness. But that's not true, because we have not included free will in the scenario. Free will is about how consciousness reacts to its representations. That reaction expresses itself in free modifications, sometimes very small, of the future representations, and in time it can very much change things. So, consciousness is not passive: it has free will. But it is certainly very passive in representing things. Its activity manifests in how it reacts to the things represented.
gpuccio
January 16, 2012 at 02:56 AM PDT
Because "having subjective experiences and representations" also requires definition. Clearly the robot has representations. Clearly its view is subjective (because objects are represented relative to the subject, namely, the robot). Which leaves "experience" to define. How are you defining "experience"?
Elizabeth Liddle
January 16, 2012 at 02:51 AM PDT
If the robot is not aware, how are you defining aware? Robots can detect and identify objects, orient themselves towards them, avoid them, even if they are moving (involving making forward models), to achieve a goal. I'd say the robot is aware of those things, although not aware of itself as an agent. How would you define aware?
Elizabeth Liddle
January 16, 2012 at 02:49 AM PDT
Elizabeth: "Oddly, at this point, the objection I usually receive is: aha! The robot may be aware, but it is not self-aware! That is the Hard Problem!" No. That's simply wrong. I would never say that! The problem is that the robot is not aware. Just to avoid verbal misunderstandings, "aware" here means "having subjective experiences and representations".

I will paste again here the definition according to Chalmers that I quoted above to Champignon: "Chalmers is best known for his formulation of the notion of a hard problem of consciousness in both his book and in the paper 'Facing Up to the Problem of Consciousness' (originally published in The Journal of Consciousness Studies, 1995). He makes the distinction between 'easy' problems of consciousness, such as explaining object discrimination or verbal reports, and the single hard problem, which could be stated 'why does the feeling which accompanies awareness of sensory information exist at all?'"

The problem is: how do simple objective events, such as changes in neurons, become subjective representations? I say it again: you have done absolutely nothing to explain how a system becomes aware, that is, starts to have subjective experiences. Again, subjective experiences do not require "self-awareness". They need not be complex. If I feel the taste of an apple, that is a subjective experience. If a system analyzes the chemical composition of an apple, that is not a subjective experience. The system is not experiencing the taste of the apple. What have you done to explain that simple point?
gpuccio
January 16, 2012 at 02:40 AM PDT
Conching other things is more trivial than conching yourself. All you need is a basic body-centred attentional mechanism. I don't think either problem is Hard, although both are hard.

gpuccio, I disagree that I have done "absolutely nothing" to explain how a system becomes aware. Clearly you'll have to look up the details, but I've given you the essentials. If you think I haven't, can you be specific? What parts have I missed out?

Leaving aside the evolutionary question (we can regard this as a developmental question, if you like - how does a single cell become an aware person), it is fairly straightforward to explain how that developing embryo becomes aware, in terms of the development of sensory organs and systems that result in the embryo being oriented towards perceived objects. We can even make robots that do this. Oddly, at this point, the objection I usually receive is: aha! The robot may be aware, but it is not self-aware! That is the Hard Problem!

Gotta run, see you later.
Elizabeth Liddle
January 16, 2012 at 02:01 AM PDT
Elizabeth: "Yes, I know. That's why I started where I did. Self-awareness must follow awareness." I would like to clarify that the hard problem of consciousness regards awareness, not self-awareness. It's awareness that cannot be explained materialistically, not self-awareness. Once you have awareness, explaining self-awareness becomes part of the "easy" problems of consciousness.

"The system in question." But our problem is exactly to explain how and why a "system" can become aware. You have done absolutely nothing in that sense, with your "conching" argument.

"And if that system is capable of conching itself as an agent within its world, then it self-conches, and not only that, may self-conch itself as the conching agent, and conch others as comparable self-conching conchers." Here you are simply explaining how a conscious perceiver can perceive itself and its functions. That is easy and trivial. The hard problem of consciousness is how and why a conscious perceiver exists, not how it perceives specific things.
gpuccio
January 16, 2012 at 01:43 AM PDT
Mark: Welcome to the game! "It is raining" is an impersonal, non-transitive expression. The subject is technically "it", but the phrase just describes an objective event.

Elizabeth says: "I think 'conching something' - being conscious of it in traditional-speak - consists, primarily, of having a potential program of action - or several - with regard to it. This is why the literature on attention and action is so important." "To conch" is obviously proposed as a transitive verb, with an explicit subject and an explicit object. If "I conch an apple", there is a subject (I), a process (conching), and an object (the apple). Elizabeth is trying to argue that only the object and the process exist, and that the subject is not necessary. That "conching the apple" creates in some way a subject that does not really exist independently.

But the truth is, no process of "conching" exists without a subject. Otherwise, "conching" becomes another type of verb, one of the many objective verbs that do not require awareness, and that we can use with a non-conscious subject, such as: a neuron receives a photon of light emitted by an apple, and transmits it to another neuron. In that kind of description awareness is not present. In that kind of description, awareness does not emerge.
gpuccio
January 16, 2012 at 01:37 AM PDT
Really interesting discussion you two - way above the usual for UD. Gpuccio - what's the subject in "It is raining"?
markf
January 15, 2012 at 11:17 PM PDT
Well, it's a consistent worldview. Anything not fully understood must be magic or the work of magicians. The science of ID consists entirely in finding the blank spots on the map and stamping them with "here be dragons." I'm trying to remember the last time that approach bore fruit.

But back to the injured brain. If the disembodied mind is affected by drugs, then it is parallel with the brain and serves no explanatory purpose. There is no reason to posit such a functionless entity other than to stave off fear of death.
Petrushka
January 15, 2012 at 09:02 PM PDT
Petrushka,
The separate, disembodied mind makes no sense at all when confronted with the problems of brain injury. A disembodied model of mind should recognize brain injury as a lack of data or a distortion of data, but not as a distortion in the interpretation of data. It is quite easy to illustrate what I mean: try wearing colored glasses, or listening through a distorting audio system. You will recognize the distortion. But that is not what happens when the brain is injured or drugged.
That's right. If what gpuccio calls the "interface model" were correct, then the symptoms of brain damage and brain disorders would look entirely different. The fact is that perception, consciousness, emotions, moral judgments, and even the will itself can all be disrupted by damage to the brain. If so, then what is left for the "soul" to do? Why posit a soul at all if it has no function?

As far as I can see, the only function that gpuccio suggests for the soul is as the seat of consciousness. He thinks that if materialists haven't succeeded yet in explaining consciousness, then the solution is to imagine an immaterial soul that is magically conscious. But how is it that an immaterial soul has consciousness? Gpuccio, and dualists in general, have no explanation. And as I pointed out to gpuccio earlier, there is a difference between knowing that something is a physical phenomenon and being able to explain the mechanism behind it:
That’s like saying to someone in the 12th century:
Unless you can explain why certain material systems are iridescent and others aren’t, you can’t claim that iridescence is a physical phenomenon.
Why some material systems are conscious (or iridescent) and others aren’t is an interesting question, but we don’t have to answer it in order to show that consciousness (or iridescence) is a physical phenomenon.
champignon
January 15, 2012 at 08:41 PM PDT
OK, but brain physiology does not add anything to what we already knew, as far as the hard problem of consciousness is concerned.
OK, other than explaining the physical implementation of memory and why drugs and brain injuries affect memory, the physical model explains nothing. Other than how learning occurs, how illusions and misperceptions occur.

The separate, disembodied mind makes no sense at all when confronted with the problems of brain injury. A disembodied model of mind should recognize brain injury as a lack of data or a distortion of data, but not as a distortion in the interpretation of data. It is quite easy to illustrate what I mean: try wearing colored glasses, or listening through a distorting audio system. You will recognize the distortion. But that is not what happens when the brain is injured or drugged.
Petrushka
January 15, 2012 at 04:47 PM PDT
Yes, I know. That's why I started where I did. Self-awareness must follow awareness.
Elizabeth Liddle
January 15, 2012 at 04:09 PM PDT
The system in question. And if that system is capable of conching itself as an agent within its world, then it self-conches, and not only that, may self-conch itself as the conching agent, and conch others as comparable self-conching conchers.
Elizabeth Liddle
January 15, 2012 at 04:08 PM PDT
Elizabeth: You see, the devil is in the simple questions... :)
gpuccio
January 15, 2012 at 03:50 PM PDT
Elizabeth: Self-awareness presupposes awareness.
gpuccio
January 15, 2012 at 03:49 PM PDT
Elizabeth: A verb has a subject. Who conches?
gpuccio
January 15, 2012 at 03:48 PM PDT
No, not easily, gpuccio. But I'll give you some idea as to why I think those are pertinent domains:

My own view is that "consciousness" is best considered as a verb ("to be conscious of something") rather than as a noun. So let's, for now, call the verb "to conch" and the noun "conching", just to emphasise its verb-al nature. I think "conching something" - being conscious of it in traditional-speak - consists, primarily, of having a potential program of action - or several - with regard to it. This is why the literature on attention and action is so important. I also think that it involves the re-input (this is key to the Edelman thesis) of output from that motor program, when still below the execution threshold, back as input into the selection process that determines which motor action, if any, is executed.

It also involves the identification of the thing one is conscious of as an object, which is where perception comes in, and I think the binding of sensory information to something we call an object is part of the same process by which we form the program of action with regard to it. For example, when we "conch" an apple, that process involves the activation of many possible motor programs with regard to that apple, together with the projected output, and resulting anticipated input - how far one would have to reach for it, what it would feel like in the hand, how it would taste to bite it, swallow it, what other actions are also triggered by those actions, etc.

And with the capacity to "conch" objects in the world, which also involves locating them, and predicting not only how we might act with regard to them, but also how they might behave, and therefore how we might respond, we also "conch" (become conscious of) agents in the world. And, to cut a hugely complicated, but still barely-scratching-the-surface story short, that then leads to the conching of both hypothetical objects and agents, and, with them, the conching of abstract ideas - the reification of concepts like "justice" or "love" - and also the idea of oneself as an agent of the same category as other agents in the world: the capacity to have a world map in which we ourselves (together with our map) appear. In other words, the capacity to step outside ourselves and see ourselves from other vantage points, including our own projected vantage point in the future. That is what I would call self-awareness, and, moreover, awareness of ourselves as moral entities.

One day I'll write a book :)
Elizabeth Liddle
January 15, 2012 at 03:31 PM PDT
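[Editor's note: the re-entrant action-selection process sketched in the comment above can be caricatured in a few lines of code. The sketch below is an illustrative construction only, not Edelman's model or any published implementation; the candidate actions, activation values, gain, and threshold are all invented. Each sub-threshold motor program's predicted outcome is fed back as input to the selector, and the first program to cross the execution threshold is the one acted on.]

```python
import random

# Hypothetical candidate motor programs for a perceived apple, with assumed
# base activations and predicted-outcome values (how attractive the simulated
# outcome looks when it is re-input to the selector).
CANDIDATES = {
    "reach-and-grasp": {"activation": 0.3, "predicted_outcome": 0.6},
    "bite":            {"activation": 0.2, "predicted_outcome": 0.8},
    "ignore":          {"activation": 0.1, "predicted_outcome": 0.0},
}
THRESHOLD = 1.0

def select_action(candidates, threshold, max_cycles=20, gain=0.3):
    # On each cycle, every sub-threshold program's predicted outcome is fed
    # back, nudging its activation up (plus a little noise); the first program
    # to cross the threshold is "executed".
    acts = {name: c["activation"] for name, c in candidates.items()}
    for cycle in range(max_cycles):
        for name, c in candidates.items():
            acts[name] += gain * c["predicted_outcome"] + random.uniform(0, 0.05)
            if acts[name] >= threshold:
                return name, cycle
    return None, max_cycles   # no program reached the execution threshold

if __name__ == "__main__":
    action, cycle = select_action(CANDIDATES, THRESHOLD)
    print(f"executed {action!r} on cycle {cycle}")
```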
Petrushka: Memory is a function of consciousness, and like most functions of consciousness in the human state, it is expressed through the brain, at least in part. You must acknowledge that I am not denying in any way the power of physical reality, including the brain, but not only the brain, to deeply affect consciousness.

You say: "I do believe that nearly all the philosophers who have ever considered the problem of consciousness have been ignorant of brain physiology." OK, but brain physiology does not add anything to what we already knew, as far as the hard problem of consciousness is concerned. Brain physiology has much to say about the "easy" problems of consciousness. As consciousness certainly uses the brain's algorithms to express itself, understanding the brain's algorithms can help us understand how consciousness works. But not what consciousness is. You forget that the interface model explains perfectly everything that neurophysiology has discovered. And everything that we have always known, including the effects of alcohol.

You ask: "What is the point of an external, presumably superior agent of consciousness if it is susceptible to drugs and brain injuries?" That is a philosophical question, with deep religious implications. "The point" can only be investigated in the context of a general view of reality, and it is very likely that our personal general views of reality differ deeply. No philosophy or religion, even while believing in the independent existence of consciousness, has ever affirmed that human consciousness is absolute and free. Indeed, most philosophies and religions affirm the exact opposite.

It is really strange that on one side champignon and yourself rely so much on modern neurophysiology to support your position, and on the other use alcohol (or drugs, which is the same) to make exactly the same point. What is the contribution of modern neurophysiology, beyond the well-known effects of alcohol, to deciding the hard problem of consciousness?
gpuccio
January 15, 2012 at 03:23 PM PDT
Do you really believe that all the philosophers who have believed in an independent existence of consciousness were stupid, and you are intelligent?
I do believe that nearly all the philosophers who have ever considered the problem of consciousness have been ignorant of brain physiology. The problems of drug effects are not trivial. For example, why do certain drugs leave consciousness intact, but prevent formation of memories? Why do certain brain injuries leave consciousness intact but prevent formation of memories? What is the point of an external, presumably superior agent of consciousness if it is susceptible to drugs and brain injuries?
Petrushka
January 15, 2012 at 02:42 PM PDT
Elizabeth: "Well, I could refer you to a very large literature, but probably the best book-length account is Edelman and Tononi's book 'A Universe Of Consciousness: How Matter Becomes Imagination'. Relevant other literature is the vast literature on attention, perception, action, decision-making, social cognition, and Theory of Mind." Could you, more simply, sum up the arguments that you consider pertinent? Thank you.
gpuccio
January 15, 2012 at 02:35 PM PDT
