Uncommon Descent Serving The Intelligent Design Community

Human Consciousness


(From In the Beginning … ):

For the layman, it is the last step in evolution that is the most difficult to explain. You may be able to convince him that natural selection can explain the appearance of complicated robots, who walk the Earth and write books and build computers, but you will have a harder time convincing him that a mechanical process such as natural selection could cause those robots to become conscious. Human consciousness is in fact the biggest problem of all for Darwinism, but it is hard to say anything “scientific” about consciousness, since we don’t really know what it is, so it is also perhaps the least discussed.

Nevertheless, one way to appreciate the problem it poses for Darwinism or any other mechanical theory of evolution is to ask the question: is it possible that computers will someday experience consciousness? If you believe that a mechanical process such as natural selection could have produced consciousness once, it seems you can’t say it could never happen again, and it might happen faster now, with intelligent designers helping this time. In fact, most Darwinists probably do believe it could and will happen—not because they have a higher opinion of computers than I do: everyone knows that in their most impressive displays of “intelligence,” computers are just doing exactly what they are told to do, nothing more or less. They believe it will happen because they have a lower opinion of humans: they simply dumb down the definition of consciousness, and say that if a computer can pass a “Turing test,” and fool a human at the keyboard in the next room into thinking he is chatting with another human, then the computer has to be considered to be intelligent, or conscious. With the right software, my laptop may already be able to pass a Turing test, and convince me that I am Instant Messaging another human. If I type in “My cat died last week” and the computer responds “I am saddened by the death of your cat,” I’m pretty gullible; that might convince me that I’m talking to another human. But if I look at the software, I might find something like this:

% canned response: pure pattern matching, no understanding
if strcmp(verb, 'died')
    fprintf(1, 'I am saddened by the death of your %s\n', noun)
end

I’m pretty sure there is more to human consciousness than this, and even if my laptop answers all my questions intelligently, I will still doubt there is “someone” inside my Intel processor who experiences the same consciousness that I do, and who is really saddened by the death of my cat, though I admit I can’t prove that there isn’t.
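
For comparison, here is a slightly fuller sketch of the same kind of program, written in Python for illustration (the patterns and replies are invented for this example, not taken from any real chatbot). Even with many such rules, the program is still only matching strings and filling in templates:

import re

# A tiny ELIZA-style responder: canned patterns, canned replies.
# Everything it "says" is looked up, not understood.
RULES = [
    (re.compile(r"my (\w+) died", re.I), "I am saddened by the death of your {0}."),
    (re.compile(r"i feel (\w+)", re.I), "Why do you feel {0}?"),
]

def respond(message):
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback when nothing matches

print(respond("My cat died last week"))  # I am saddened by the death of your cat.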

I really don’t know how to argue with people who believe computers could be conscious. About all I can say is: what about typewriters? Typewriters also do exactly what they are told to do, and have produced some magnificent works of literature. Do you believe that typewriters can also be conscious?

And if you don’t believe that intelligent engineers could ever cause machines to attain consciousness, how can you believe that random mutations could accomplish this?

Comments
KF: Think, maybe you have mistreated them for years, not suspecting their inner sensibility!
gpuccio
September 7, 2010, 02:31 PM PDT

GP: BC109s of the world, unite! Arise and throw off your chains! You have nothing to lose but -- the "smoke" within?* GEM of TKI

* There is a lame joke about how smoke is a key ingredient in electronics items and how one must be careful not to let the smoke out.
kairosfocus
September 7, 2010, 02:04 PM PDT

BarryR: Frankly, I don't know if it makes sense to go on with the discussion on these terms. A brief summary:

1) You define consciousness as follows: "a third order model of experience (a model of experience, and a model of that model, in addition to the experience simpliciter). This is obtained by sufficiently complex feedback or feedforward loops in the brain." This is a masterpiece of ambiguity and is completely inappropriate. First of all, it contains at least two words, "model" and "experience", which are, at best, ambiguous. What do you mean by "experience"? Do you mean "conscious experience"? From dictionary.com (yes, a dictionary): experience: 5. Philosophy. the totality of the cognitions given by perception; all that is perceived, understood, and remembered. Obviously, a word related to consciousness. So, are you defining consciousness as a third order model of a conscious process? That's philosophical value, indeed.

Then you say that "This is obtained by sufficiently complex feedback or feedforward loops". I ask you what you mean by complex, and you don't answer. I ask you if you mean Shannon's entropy, and you answer no. I ask you if you mean CSI, and you answer no. I ask you if you have another definition of complexity, and you don't answer. But you state that: "Since the word 'information' is not used in my definition, and since I don't recall using the word elsewhere in this thread, I'm pretty certain I'm not speaking about information at all." Maybe avoiding the fact that complexity and information are two strictly, although ambiguously, related concepts. And yet you state, rather surprisingly, that: "minimal perception takes bits, minimal models of perception takes bytes, and minimal models of models of perception takes kilobytes." Bits? In a discourse which "is not speaking about information at all"? This time from Wikipedia, so that your judgement of my arguments may become even lower: "A bit or binary digit is the basic unit of information."

And so on, as in your completely gratuitous statement that my definition of consciousness, "the observed fact that we have subjective experiences, referred to a single perceiver 'I'", implies that: "A single transistor satisfies this definition: it can perceive a change in input power and react accordingly." So, according to what you say, a single transistor has subjective experiences (that was my definition, I think), or at least can be reasonably inferred to have them? And yet you say: "Just to be clear: based on the definition of consciousness that I use, a single transistor cannot be conscious." I agree, but then why do you say that it "satisfies my definition"? Are you saying that a single transistor has subjective experiences, but is not conscious because it is not a "third order model of experience"?

And please, if you want to discuss, take personal responsibility for what you say and for how you use your terms, and if necessary define them clearly and consistently, instead of just referring to vague philosophical literature and despising dictionaries.
gpuccio
September 7, 2010, 01:32 PM PDT

Scruffy, I will be glad to elaborate. As I have stated above, I don't consider myself to be a philosopher, so I would be interested in what others have to say.

In my thinking, the difference between a dog and a human is HOW they process the information about their most recent behavior. In the case of a dog, the information just seems to become more input into behavior. There is no morality or self-image needed to process this. The dog does not think in abstract terms; the nervous system just records behavior and result and factors it into the next behavioral decisions, e.g., "owner rings bell" resulted in "tasty treat being served to me." The nervous system records this and uses it as input to decisions. There is no introspection about whether it is a good thing to accept treats from owners, or whether by accepting such a treat the dog is entering into a contractual agreement to serve the owner. The dog is not self-aware, because the fact that he is an entity with a purpose does not really enter into the decision process at all. I think this line of reasoning has empirical evidence behind it, because many, many dog training books say that dog training failures come when owners ascribe such faculties as "am I doing well?" or "I won't perform for that person because I don't like him" to a dog. The dog is just responding to immediate stimuli, not thinking.

The human cannot help but evaluate his behavior according to some morality. He is self-aware, so he self-evaluates. Consciousness rests on this extra amount of processing: the processing that looks at the way my last behavior defines who I am. I define self-aware not just as having experienced something, but as that information of my experience impacting my opinion of self. In other words, the dog does not care how the last behavior made him look. He just responds according to the various biological systems that make up his body. The human, being conscious, or self-aware, makes decisions based on what his behavior said about him. This self-evaluation step is necessary to have what we recognize as human consciousness.

To answer your questions: A. It is good or bad according to the personal morality of the individual and how this affects his perception of himself. B. This process of self-evaluation is the essence of consciousness. If the self is not evaluated, if the behavior is not labeled good or bad, there is no real self-awareness.
JDH
September 7, 2010, 12:21 PM PDT

"I don't think my dog ever observes his behavior and decides whether he made a good choice."

I'm slightly confused by this statement. It seems as though your entire post above that led you to the decision that dogs are not conscious hinges on the dog's ability to decide if its behavior is good or bad.

A. What do you mean by "good or bad"?
B. Why would consciousness depend on the ability to examine and choose whether said behavior is good or bad?

I have more to say, but I would really need you to clarify the above points (especially A) before I can respond correctly.
Scruffy
September 6, 2010, 10:55 PM PDT

Barry R, first let me admit that the ideas below are not from any study of philosophy. They come from simple application of what I think is common sense. If I am only stating things which much more learned people have already discussed extensively in the literature, I would be interested to see it. But I was very intrigued by your attempt to define consciousness on a continuum.

Instead, it seems to me that what you did is put the appearance of consciousness on the continuum. Many things, dogs, mosquitoes, even computer threads, appear to us to be making conscious decisions. But as far as we can tell, they are only making an evaluation based on experience, environment and biology. I don't think my dog ever observes his behavior and decides whether he made a good choice. His incredibly complex nervous system just makes the appropriate choice based on the competition between various internal systems. Some of these systems involve memory in brain cells. This is why a dog can be "trained."

So to an external observer, there can be a continuum of what appears to be conscious behavior. The external observer makes a list of what conscious behavior looks like and guesses whether said entity behaved consciously or not. But the internal observer knows whether the behavior is conscious or not. In fact, in my understanding, consciousness means there is an internal observer. Whether or not there is an internal observer to comment on the behavior does not appear to me to be anything that can be defined in a way that puts it on a continuum. It is a one-or-zero question. There is either one internal observer or none.

Humans know they are conscious. There is a possibility that this consciousness is an illusion, but that seems a very unlikely proposition. They state that they have one internal observer. Dogs do not appear to have any opinion about whether they are doing well or not. I think (but do not know) that they have no internal observer. Their internal-observer state is zero.

An internal observer is the only thing that can throw the switch from "my biology says to do this, but instead I am going to do that." Only the internal observer can disobey. Only conscious objects can disobey.
JDH
September 6, 2010, 09:36 PM PDT

gpuccio@57

[These kinds of conversations are much easier to have when there's proper quoting a la email or usenet. Not your problem --- I'm just being annoyed.]

1. "This is really strange. I thought we were talking of consciousness."

I was discussing why I didn't find your definition of consciousness persuasive:

"the observed fact that we have subjective experiences, referred to a single perceiver 'I'"

A model-based definition avoids the problems I've pointed out, specifically, the problem that your definition can allow individual transistors to be thought of as conscious. Just to be clear: based on the definition of consciousness that I use, a single transistor cannot be conscious.

2. "Then, I believe that you are speaking of functional information, our CSI. Or have you another kind?"

Since the word "information" is not used in my definition, and since I don't recall using the word elsewhere in this thread, I'm pretty certain I'm not speaking about information at all.

3. "Is Hamlet (the drama) a conscious being?"

Here is the definition I gave:

"a third order model of experience (a model of experience, and a model of that model, in addition to the experience simpliciter). This is obtained by sufficiently complex feedback or feedforward loops in the brain."

I have quite a bit of experience with identifying third-order models, and I have quite a bit of experience with Hamlet-the-drama (BA Theater). I know of no third-order models in Hamlet-the-drama. Granted, it represents characters with their own third-order models, and it can be represented by actors with their own third-order models, but as to the drama itself? Of course not.

4. "Wrong. A model can very well be an object."

I'm making the assumption here that you're bringing forward your best arguments. Stating I'm wrong because the word "model" can have other meanings when used outside of discussions of philosophy of mind tells me much more about the quality of your arguments than it does about mine. Look, perhaps I'm not being fair to you. I have a passing familiarity with this literature and you obviously do not. I can scare up a reading list if you would like to remedy this, and then we might rejoin the conversation when we have more of a shared vocabulary. Moving on...

5. "What kind of intelligent disposition of transistors contributes to the extraordinary emergence of a conscious I? Loops? The sheer number of them? Or the special arrangement of them?"

The disposition is unimportant. What the disposition represents is. (That's another benefit of my definition --- it doesn't require consciousness to depend on a particular substrate.) I would consider any disposition --- of transistors or cells --- that instantiates a third-order model of experience to be conscious. (Note that this also frees the concept of consciousness from any intelligent design.)
BarryR
September 6, 2010, 03:01 PM PDT

gpuccio, sorry, I am in a different time zone. The conversation has moved on and I will go back to lurking (which I am really enjoying with this thread).
zeroseven
September 6, 2010, 02:15 PM PDT

BarryR:

"Certainly transistors possess the capacity for experience — for several definitions of experience. You may want to limit this discussion to conscious experiences, but in that case you're precluded from using them to describe consciousness."

This is really strange. I thought we were talking of consciousness. So you want to define experience independently from consciousness, so that you can attribute that "non-conscious experience" to transistors, and then use them and that definition of experience (totally independent from the concept of consciousness) to "describe consciousness"? I think you can do better than that. So, just to be clear: is a transistor conscious? Does it have conscious experiences?

"Are you speaking of traditional Shannon information? Not at all."

Then, I believe that you are speaking of functional information, our CSI. Or have you another kind? That's interesting, because while we in ID certainly believe that CSI is the product of consciousness, none of us has ever thought that CSI can generate consciousness. Therefore, I ask again: is Hamlet (the drama) a conscious being? Because, as you certainly know, Hamlet is a perfect example of CSI (and a lot of it). But if I have misunderstood, and you have a different definition of the complexity you were speaking of, please let us know what it is.

"Next, I don't see that great a difference between a 'representation' (model) and a 'conception' (idea). Both are abstractions; neither are objects."

Wrong. A model can very well be an object. An idea is a conscious representation. You can certainly represent something in your mind, be it abstract or not; then it is an idea. If you build an object which represents that idea, be it abstract or not, you have an objective model. A model represented in a consciousness is an idea. A model implemented in software is an object, even if abstract (indeed, it needs an objective support to exist outside a mind). You are constantly trying to confound the difference between subjective, conscious experiences and objects, which should be extremely easy to understand. That you do that is in itself not useful, but that you do that in a discussion about consciousness is completely confounding.

Let's go to properties.

"Gosh, that's a puzzler. If half a bird can't fly, why should we expect a full bird to be able to fly?"

A full bird can fly because its structure (its CSI) is functional, and is obviously and tangibly related to the task of flying. That's why I was asking if you were speaking of CSI when saying that a pool of transistors can become conscious because of its "complexity". Because CSI is functional, and can accomplish things. But again, the relationship between structure and function is always reasonably evident, or anyway reasonably detectable. Now, in the case of the bird, it is obvious that the wings are there for a certain purpose, and so are the other parts of the body which contribute to flying. So I ask: in the structure of a computer, where transistors are arranged intelligently (CSI), what is the relationship between CSI (intelligent structure) and the emergence of consciousness? What kind of intelligent disposition of transistors contributes to the extraordinary emergence of a conscious I? Loops? The sheer number of them? Or the special arrangement of them? Well, you have to say what. I believe that none of those things has any relationship with consciousness. You believe differently. So, please show me why a loop has any relationship with being conscious, or maybe a series of if... then... constructs, or anything else. But there has to be a rationale in your suggestion. Otherwise, we can just believe that gym machines will become conscious as their technology improves. And then we can test that. We take your suggestion and apply it as long and as explicitly as necessary, and see what happens.

I think reductionists should stop stating that properties come out of magic. They don't. A bird can fly because its structure is appropriate to fly, according to the laws of physics. There is nothing magic there, just CSI at work. But according to what laws of physics (or, for that matter, to what laws of any kind) should a series of transistor loops become conscious? That's magic, and of the worst kind. I definitely prefer Harry Potter.
gpuccio
September 6, 2010, 02:15 PM PDT

JDH@50

You're making disobedience a property of intentionality. Good, I like precise definitions. So let's talk about intentionality.

If I decline to define intentionality as an intrinsic property and instead define it as a classification of behavior, then there's no reason I'm aware of that sufficiently complex mechanistic behavior cannot be seen as intentional. We do this --- constantly --- when we anthropomorphize. When programmers talk about threads "wanting" to acquire a lock, for example, we're ascribing a (limited) intentionality to a machine. With intentionality thus on a continuum, I have no difficulty stating that threads, my cats, or the people around me are acting in an intentional fashion. It's a model and it's useful for navigating the world.

But we were talking about consciousness, so let's narrow the focus to conscious intention. Using the definition I've given elsethread, decisions based on a third-order model qualify as conscious decisions, and I may choose to model them as intentional if that turns out to be useful. This approach has the advantages of being both simple and workable.

A different approach would force intentionality to be an intrinsic property. This immediately runs into all kinds of difficulty: do dogs have intention? Do mosquitoes? Do trees? How do I go about unearthing this intrinsic property to make this determination? The problems are compounded if intentionality is restricted to humans by fiat. At that point, you've effectively removed physical explanations of intentionality and set up an unverifiable spirit world, all to prevent an outcome you don't like. That's not only bad philosophy; that's bad theology.

(I'm not an atheist, btw. If you find it simpler to imagine that I am one, that's fine.)
BarryR
September 6, 2010, 01:47 PM PDT

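To make the anthropomorphism concrete: programmers describe the mutex below in intentional terms, but the mechanism is purely mechanical. A minimal sketch in Python (the shared counter is an invented example):

import threading

lock = threading.Lock()
counter = 0

def worker():
    global counter
    # The thread "wants" the lock here; it waits until it "gets" it.
    with lock:
        counter += 1  # only one thread at a time touches the shared counter

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 4: the lock serializes the increments
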
gpuccio@53

Certainly transistors possess the capacity for experience --- for several definitions of experience. You may want to limit this discussion to conscious experiences, but in that case you're precluded from using them to describe consciousness.

"Are you speaking of traditional Shannon information?"

Not at all.

Next, I don't see that great a difference between a "representation" (model) and a "conception" (idea). Both are abstractions; neither are objects. (Dictionary definitions usually aren't helpful when discussing specialized domains, btw.)

Finally,

"Why, if a single transistor is not conscious, should a million of them variously assembled, become conscious?"

Gosh, that's a puzzler. If half a bird can't fly, why should we expect a full bird to be able to fly? (I took several undergraduate philosophy classes and I can't remember anyone having to point out that collections of objects have properties that differ from those of singleton objects. I think we took that one pretty much as a given.)
BarryR
September 6, 2010, 01:19 PM PDT

BA: thank you!
gpuccio
September 6, 2010, 12:57 PM PDT

BarryR: Please, clarify:

"A single transistor satisfies this definition: it can perceive a change in input power and react accordingly."

What do you mean? That a transistor has subjective experiences?

"The perception is subjective and private."

And so? It exists. It is a fact. As I have said many times, we directly know that we are conscious, and then we infer it in others by analogy. But the direct, subjective, private perception of ourselves as conscious is the basis for any other knowledge. Therefore, if it is "private", then all our knowledge is "private". Again, consciousness in ourselves is a fact, more than any other fact. Consciousness in other humans is a very strong inference by analogy. Consciousness in higher animals is a weaker, but reasonable inference. Consciousness in a transistor? I think it's an inference almost nobody would agree with, not even in minimal part. Consciousness in assembled transistors, no matter how many, no matter in what order? Why, if a single transistor is not conscious, should a million of them variously assembled become conscious? That's absolute nonsense. That's why I asked you if you had any notion of some form of formal complexity which would explain consciousness. You answer:

"But, if you want a rough guide: minimal perception takes bits, minimal models of perception takes bytes, and minimal models of models of perception takes kilobytes."

Are you speaking of traditional Shannon information? So, again, is a very long random string conscious? Or are you speaking of CSI? And then, is Hamlet (the play) a conscious being?

You equivocate on the meaning of words. Please, define what you mean by "perception". You use words which have been created, and used, for millennia (yes, for millennia) to describe subjective experiences in humans, and without any thought or justification you apply them to purely objective realities, like a transistor. Why should a transistor "perceive", and not stones? Why should bits cause consciousness, and not grains of sand?

"Substitute 'model' for 'idea' and you've arrived at a modern definition of consciousness."

That only means that modern definitions of consciousness are senseless. You cannot substitute "model" for "idea". From dictionary.com:

model: a representation, generally in miniature, to show the construction or appearance of something; a simplified representation of a system or phenomenon

idea: any conception existing in the mind as a result of mental understanding, awareness, or activity.

Is it so difficult to understand? A "model" is an object. An "idea" is a representation in the conscious mind. How can you substitute one for the other? Is it really so difficult?
gpuccio
September 6, 2010, 12:56 PM PDT

This link should work: http://www.premier.org.uk/unbelievable
bornagain77
September 6, 2010, 11:25 AM PDT

Coincidentally, the debate on Premier Christian Radio Saturday, the same day this blog went up, was about whether we have a soul or not. Here is the link:

Unbelievable - 04 Sep 2010 - Christian Physicalism: Do we have a soul?
http://www.premierradio.org.uk/listen/ondemand.aspx?mediaid={7A2179A8-B2C2-4F32-BE24-2AFF6628FF9F}
bornagain77
September 6, 2010, 11:24 AM PDT

Barry R, why is it that atheists mistake randomness or chaotic behavior for intention? Your pseudo-non-determinism does not coincide with disobedience. The fact that the answers converge shows that you really have just added a little uncertainty to the process. Uncertainty is not rebellion. But not one of your little programs is going to say, "well, this is the answer I was converging to, but I felt like doing this instead." Your oversimplification of the problem shows you are grasping at straws.

You may claim it is a false dichotomy, but I see only two alternatives:
1. You don't really understand the full extent of the concept of disobedience, in which case, why should I trust your claims about your program?
2. You are aware of how oversimplified your model is, and you are projecting onto your results conclusions which are not warranted.

In either case, you have not answered the challenge.
JDH
September 6, 2010, 10:57 AM PDT

gpuccio@45

As to the definition I gave, I judge it in terms of utility: I've given three necessary and testable components that allow me to distinguish (among other things) humans from typewriters. Your definition is not so picky.

"the observed fact that we have subjective experiences, referred to a single perceiver 'I'"

A single transistor satisfies this definition: it can perceive a change in input power and react accordingly. The perception is subjective and private. I don't find it useful to think of single transistors as having consciousness, so I'm going to reject your definition.

BTW, this follows the Cartesian model (and no, it has not been used for millennia). Descartes avoids the problem by limiting discussion to humans:

"Thought. I use this term to include everything that is within us in such a way that we are immediately aware [conscii] of it. Thus all the operations of the will, the intellect, the imagination and the senses are thoughts. I say 'immediately' so as to exclude the consequences of thoughts; a voluntary movement, for example, originates in a thought. (CSM II 113 / AT VII 160; cf. Principles of Philosophy Part I, §9 / AT VIIIA 7–8)"

[Unless noted otherwise, quotations are taken from the Stanford Encyclopedia of Philosophy's article on Seventeenth Century Theories of Consciousness.]

Contrast this with Hobbes:

"[I]f the appearances be the principles by which we know all other things, we must needs acknowledge sense to be the principle by which we know those principles, and that all the knowledge we have is derived from it. And as for the causes of sense, we cannot begin our search of them from any other phenomenon than that of sense itself. But you will say, by what sense shall we take notice of sense? I answer, by sense itself, namely, by the memory which for some time remains in us of things sensible, though they themselves pass away. For he that perceives that he hath perceived, remembers. (De Corpore 25.1, p. 389; for some discussion of this text, see Frost 2005)"

Now we have both sensation and memory of sensation. Finally, Spinoza (where I will quote the author's summary):

"1. the mind has an idea that represents its body being affected by P; and 2. the mind has a second idea representing the first to itself."

Substitute "model" for "idea" and you've arrived at a modern definition of consciousness.

As to your complexity question, I'm not sure why it's relevant. Complexity is a useful measure of computability and information, but since neither your definition nor mine relies on computability or information, I think you're making a category error. But, if you want a rough guide: minimal perception takes bits, minimal models of perception takes bytes, and minimal models of models of perception takes kilobytes.

If you're interested in this problem in general, I found Jaegwon Kim's Physicalism, or Something Near Enough to be unusually clearly written. The first several chapters give a complete overview of the recent history of philosophy of mind. If you want to get a sense of his writing style, there's a sample chapter here.
BarryR
September 6, 2010, 09:07 AM PDT

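For what a "model of a model" might look like in the crudest possible terms, here is a minimal sketch in Python (the names and the three levels are invented for illustration; this is not BarryR's robotic software): level 0 is the raw input, level 1 a record of it, level 2 a description of that record.

# level 0: the raw "experience" (a sensor reading, a few bits)
reading = 1

# level 1: a model of the experience (a record of readings, a few bytes)
history = [reading]

# level 2: a model of the model (a description of the record itself)
summary = {"samples": len(history), "last": history[-1]}

print(summary)  # {'samples': 1, 'last': 1}
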
gpuccio, Very nice!
HouseStreetRoom
September 6, 2010, 07:36 AM PDT

gpuccio, this is a better look at mega-savant Kim Peek:

Kim Peek - The Real Rain Man [2/5]
http://www.youtube.com/watch?v=NJjAbs-3kc8&p=CB2BCFF0D34CE915&playnext=1&index=1
bornagain77
September 6, 2010, 06:43 AM PDT

gpuccio, I guess you are right, Derek does fit here. I have a few other videos of autistic savants I've collected:

The Musical Genius - Derek Paravicini - Part 1/5
http://www.youtube.com/watch?v=1kwjDLHX92w
Derek Paravicini on 60 MINUTES - Autistic Savant - video
http://www.metacafe.com/watch/4303465
Kim Peek - The Real Rain Man [1/5]
http://www.youtube.com/watch?v=dhcQG_KItZM
The Boy with the Incredible Brain - Daniel Tammet
http://video.google.com/videoplay?docid=2351172331453380070
Autistic Savant Stephen Wiltshire Draws the City Of Rome From Memory
http://www.metacafe.com/watch/4200256
Savant syndrome, Beautiful minds - Elonzo Clemmens
http://www.youtube.com/watch?v=lkDMaJ-wZmQ
The Human Calculator - Ruediger Gamm - video
http://www.metacafe.com/watch/4200252

As well, I find it interesting that whenever anybody does something blatantly selfish, everybody will ask him or her, "Doesn't your conscience bother you?", reflecting the fact that everybody is expected to intuitively know the transcendent moral law of the golden rule. This following video clearly reflects how some academics have completely deluded themselves into thinking that this transcendent moral law, which is comprehended by our transcendent minds, does not exist, when clearly it does exist.

Cruel Logic
http://www.youtube.com/watch?v=4qd1LPRJLnI
bornagain77
September 6, 2010, 06:29 AM PDT

BarryR: Your definition is only a gratuitous re-definition. Strangely, it is very similar to the compatibilist re-definition of free will. This seems to be the faith of reductionists: if you don't like a fact, just redefine it as though it did not exist.

I will be more clear. The only universal, empirical definition of consciousness is the following: the observed fact that we have subjective experiences, referred to a single perceiver "I". That is not only my definition, but the true meaning of the word, as it has been used for millennia. To be compatible with the true meaning of the word, your definition should be "re-re-defined" as follows: "a third order experience (an experience of experience, and an experience of that experience, in addition to the experience simpliciter). This is never obtained by sufficiently complex feedback or feedforward loops in any software system which is not capable of experiences." And even in this way, it would be a definition of "self-consciousness", and not of consciousness.

Indeed, the "mise en abyme" and infinite regress of which the perceiving self is certainly capable (always able to detach itself from any of its representations, to adopt a meta-perception of the perception), while important and revealing of the transcendental nature of the self, is not necessary for consciousness to happen. If I perceive a red blot, I am conscious, even if I am not at that moment consciously perceiving that I am perceiving. But certainly, the perception of the perception (at whatever order you like) allows me to become "self-aware", and not only "aware", and to build "models" of my perception. But the model is not the perception. It never was, and never will be.

Given that, can you affirm that you have written "robotic control software that was (very) minimally" aware? That had perceptions? That had a perceiving I? Subjective experiences do exist. They are a fact. You cannot rule them out. You can never rule out facts. If you re-define them as loops, you have to show loops with subjective experiences, not only loops which can give outputs vaguely similar to those of beings with subjective experiences. And if you really believe, like Hofstadter, that loops are the cause of subjective experience and of the I, can you please explain on what you found that conviction? And why simple loops are not aware, while complex loops should be?

IOW, I repeat the question I made at posts 35, 41 and 42, which nobody has yet addressed: what kind of formal "complexity" should be capable of causing consciousness to "emerge"?
gpuccio
September 6, 2010, 02:58 AM PDT

gpuccio@42

This is the best concise definition of consciousness I've run across: "a third order model of experience (a model of experience, and a model of that model, in addition to the experience simpliciter). This is obtained by sufficiently complex feedback or feedforward loops in the brain." Using this definition, I've written robotic control software that was (very) minimally conscious. Don't get hung up on the word complexity here. It just means there are enough bits to handle the input, the model of the input, and the model of the model. There are other definitions out there; you may find some of them more palatable. The article relied on no formal definition being provided.

JDH@10

Having a Ph.D. in Computer Science, I find it trivial to write code that disobeys me. One particular dislocation simulation I work with is effectively chaotic --- the final answer (and the execution time) depends on the order of operations, which in turn depends on the order of message arrival. Even on a small number of nodes, this is effectively nondeterministic. The answers eventually start converging, so the physicists don't mind. Debugging it is another problem entirely.

GS@9

What about typewriters? There's a mechanical mechanism for experience --- striking a key --- which fits the first requirement of the definition I gave. But at least for manual typewriters there's no way to model that experience, much less a way of modeling the model. So no, manual typewriters will not exhibit consciousness. (Note that I don't have to rely on arguments from incredulity once I have a definition in place. Looking forward to reading your definition.)
BarryR
September 6, 2010, 02:15 AM PDT

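BarryR's order-of-operations point can be demonstrated without any parallel machinery: floating-point addition is not associative, so a reduction whose operand order varies (as it does when it depends on message arrival) can return different answers run to run. A minimal sketch in Python, with invented values:

import random

# Floating-point addition is not associative, so summation order matters.
values = [1e16, 1.0, -1e16, 1.0] * 1000  # true sum is 2000.0

def sum_in_order(vals):
    total = 0.0
    for v in vals:
        total += v
    return total

shuffled = values[:]
random.shuffle(shuffled)  # stand-in for operations arriving in a different order

print(sum_in_order(values))    # 1.0: most of the small terms are absorbed and lost
print(sum_in_order(shuffled))  # usually a different answer on each run
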
jurassicmac @19

You said, "JDH, I must respectfully point out to you that you have made 2 logical fallacies at the same time." You sound like you consider yourself pretty wise. I am going to try to explain to you why I did not commit 2 logical fallacies. Then I am going to try to explain to you why your position is indefensible logically. You, unfortunately, will arrogantly not believe either.

First, I am not arguing by analogy. Maybe you did not read what I did. I did physics by computer simulation. This is not analogy. This is modeling. There is a distinct difference. When I do computer simulation, I make certain assumptions which allow me to take a complex system and represent it in a series of calculations. I do not construct analogies. I construct working models. Hopefully, in reducing the parameters in the model to something I can actually code in finite time, I have not missed something that affects the states of the real system. Thus when I perform the actual simulation, there is a one-to-one correspondence between the states of the model and the distinct states of the real system I am modeling. This is how we can get real scientific results from modeling. The assumptions may be wrong, such that I no longer get my one-to-one correspondence. Then the problem is with incorrect modeling, not with the idea of computer simulation. If you are right and we are nothing but chemicals in action, then the brain is nothing more than a sophisticated computer. Modeling the brain by a computer is not drawing an analogy. It is trying to eliminate from the real system those parameters which are not necessary for the current problem and modeling the behavior.

Second, I am not making an argument from incredulity. I know how to make computer programs. I know one thing I cannot do: I cannot program a computer to disobey the instructions I give it. This is an argument from overwhelming evidence. In all the years of programming, computers always obey the instructions they are given. Sometimes, because of buggy programs, the fact that the computer simply obeys the instructions given it has disastrous results. But it is not logical that one can design a computer to disobey. It is a logical contradiction. So considering consciousness to be an "emergent" property is just a way of hiding the fact that you believe in a logical contradiction.

I have tried to state this as clearly as possible. I know you will reject it. To see it would be too painful to you.
JDH
September 6, 2010, 01:01 AM PDT

BarryR: Well, I extend my questions in the previous post to you too. Maybe you can find the answer in "Kandel's Principles of Neural Science and its 1,400+ pages about how the brain works (and fails to work)".
gpuccio
September 5, 2010, 10:15 PM PDT

zeroseven (and jurassicmac):

"The link between consciousness and complexity seems pretty clear to me."

I was not suggesting that there is no link between the manifestations of consciousness and complexity. My point is different. My point is that, as we in ID are often asked by darwinists and materialists to define functional information, or to be specific about the kind of information and complexity we are speaking of (and we answer in detail to those requests), I am now asking jurassicmac, and you, who have introduced complexity in your argument about consciousness, to define better what you mean. It should not be difficult, especially for you, as you stated that "the link between consciousness and complexity seems pretty clear", at least to you.

So, do you mean complexity in the sense of Kolmogorov complexity, or Shannon's entropy? Is a long enough random string going to become conscious, in your opinion? Or do you mean functional complexity? IOW, CSI or some equivalent? That would really be interesting, and would establish a strong epistemological connection between you and us IDists! So, please, specify what your model is, at least in principle. That's the least we can ask of a scientific model.

Or do you just agree with jurassicmac that the point is that we don't really know what computer technology will be like in 10, 100, or 1,000 years? Ah, that's certainly true... And I think we don't really know what airplane technology will be "in 10, 100, or 1,000 years". And, say, what ebook reader technology will be. So, in your model, what is more likely to become conscious in the near future: the last sequence of digits of pi, a computer, an airplane, or just Kindle 4? Maybe, if you specify your concept of "consciousness-generating complexity", we can at least make some predictions about that. You know, some people think that that is what science is about...
gpuccio
September 5, 2010, 10:09 PM PDT

"Once you remove all the aspects of thought and behavior where neural correlates are known, there's very little remaining for a non-physical consciousness to do."

Yes. And if that is the case, then why, on a materialistic understanding, does consciousness bother to exist at all? Perhaps if materialistic science can do a really good job of explaining things (in 2800 pages, say, instead of just 1400), we will then know that consciousness shouldn't exist (because there is nothing remaining for it to do), and therefore materialism will have been decisively proven!
Matteo
September 5, 2010, 06:37 PM PDT

--BarryR: "Once you remove all the aspects of thought and behavior where neural correlates are known, there's very little remaining for a non-physical consciousness to do."

I gather that you also believe a computer could develop an Oedipus complex, or that we may one day have to worry about the suicide rate among aging processing units.
StephenB
September 5, 2010, 05:50 PM PDT

JDH (#10) wrote: "Despite years of experience writing many complex codes, I can not write a computer program that disobeys me. I don't even know how to do it."

I can give you a clue. In his paper Programs with Common Sense, John McCarthy gave five requirements for a human-level artificial intelligence:

1. All behaviors must be representable in the system. Therefore, the system should either be able to construct arbitrary automata or to program in some general-purpose programming language.
2. Interesting changes in behavior must be expressible in a simple way.
3. All aspects of behavior except the most routine should be improvable. In particular, the improving mechanism should be improvable.
4. The machine must have or evolve concepts of partial success, because on difficult problems decisive successes or failures come too infrequently.
5. The system must be able to create subroutines which can be included in procedures as units...

Point #3 is the clue to how to make this happen. Humans have goal-seeking behavior without a fixed goal. Furthermore, we evaluate every goal as if it could be improvable. That means that we are "built" to observe that few, if any, things are what they ought to be.
wrf3
September 5, 2010, 04:00 PM PDT

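One hedged reading of point #3 in code: a program whose goal is revised by the program itself whenever the goal is met, so that no state is ever final. A minimal sketch in Python (the progress step and the update rule are invented for illustration, not McCarthy's):

# A goal-seeker whose goal is itself revisable: each time the goal is met,
# the program raises it, so no result is ever "good enough".
goal = 10.0
value = 0.0

for step in range(1, 8):
    value += 3.0      # crude progress toward the current goal
    if value >= goal:
        goal *= 2     # partial success: the goal itself is "improved"
        print(f"step {step}: reached {value}, goal raised to {goal}")
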
BarryR,

"What a curious laptop you have. If I were to participate in a Turing test, I'd be asking questions like 'Of the two most recent questions you were asked, which was the more difficult to answer and why?' Your dictionary approach doesn't work too well there, does it?"

I really thought my very simplistic example was sufficient to illustrate this very simple point. But apparently you are less gullible than I, and it would take a reasonable answer to the above question before you would believe the machine was conscious.

"Kandel's Principles of Neural Science is 1,400+ pages of how the brain works (and fails to work), starting at the electrochemical and working up through the cellular to larger brain structures. Once you remove all the aspects of thought and behavior where neural correlates are known, there's very little remaining for a non-physical consciousness to do."

Wow, 1400+ pages on how the brain works. I'm convinced now that I'm just a complicated machine... sigh.
Granville Sewell
September 5, 2010, 03:30 PM PDT

The link between consciousness and complexity seems pretty clear to me. As MarkF said, a slug is conscious to a degree. That's a pretty low degree. A dog is conscious to a greater degree, and a human to a greater degree yet. The brains, or neurological functions, of these animals are each more complex than the last. Of course this could be a coincidence, but the link between consciousness and brain size seems pretty universal (as far as we can tell).

Consciousness seems a completely fuzzy term to me anyway. What is a workable definition of it? I am quite prepared to believe it is nothing special and simply arises naturally in a sufficiently complex system.
zeroseven
September 5, 2010, 02:00 PM PDT