Uncommon Descent: Serving The Intelligent Design Community

Machine 1 and Machine 2: A Challenge to the Ethics of the New Atheists



(Photo of a gnu or wildebeest in the Ngorongoro Crater, Tanzania. Courtesy of Muhammad Mahdi Karim and Wikipedia.)

Do sapient beings deserve respect, simply because they are sapient? An affirmative answer to this question seems reasonable, but it also imperils the Gnu Atheist project of basing morality on our shared capacity for empathy. My short parable about two machines illustrates why. Let’s call them Machine 1 and Machine 2. Since this post is a parable written for atheists, I shall assume for argument’s sake that machines are in principle capable of thinking and feeling.

Machine 1 is like HAL 9000 in the movie 2001: A Space Odyssey. It has a fully human psyche, which is capable of the entire gamut of human emotions. It can even appreciate art. It also thinks: it is capable of speech, speech recognition, facial recognition, natural language processing and reasoning. Machine 1 is also capable of genuine empathy.

Machine 2 is different. It’s more like an advanced version of Watson, an artificial intelligence computer system developed by IBM which is capable of answering questions posed in natural language. IBM has described Watson as “an application of advanced Natural Language Processing, Information Retrieval, Knowledge Representation and Reasoning, and Machine Learning technologies to the field of open domain question answering,” which is “built on IBM’s DeepQA technology for hypothesis generation, massive evidence gathering, analysis, and scoring.” Building on Watson’s successes in retrieving and interpreting useful information, Machine 2 uses its massively parallel, probabilistic, evidence-based architecture to advise human experts in fields as diverse as healthcare, technical support, enterprise and government. Since its advanced problem-solving capacities easily surpass those of any human being in breadth and depth, AI experts unanimously agree that Machine 2 can think. However, nobody has ever suggested that Machine 2 can feel. It was never designed to have feelings, or to interpret other people’s emotions for that matter. Also, it has no autobiographical sense of self.
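
For readers curious about what “hypothesis generation, massive evidence gathering, analysis, and scoring” amounts to in practice, here is a toy sketch in Python of that pipeline shape. Every name and heuristic below is my own illustrative assumption, not IBM’s actual DeepQA code, which is incomparably more sophisticated.

```python
# A toy, hypothetical sketch of a DeepQA-style pipeline: generate candidate
# answers cheaply, gather evidence for each candidate, score it, and rank.
# All function names and heuristics are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    answer: str
    evidence_scores: list = field(default_factory=list)

def generate_hypotheses(question, corpus):
    """Propose candidate answers; a real system would use deep NLP parsing."""
    q_words = set(question.lower().split())
    return [Hypothesis(doc) for doc in corpus
            if q_words & set(doc.lower().split())]

def gather_and_score(hyp, question, corpus):
    """Score each corpus passage as evidence for this candidate: how strongly
    it supports the question terms the candidate actually addresses."""
    relevant = set(question.lower().split()) & set(hyp.answer.lower().split())
    for doc in corpus:
        d_words = set(doc.lower().split())
        hyp.evidence_scores.append(len(relevant & d_words) / len(d_words))
    return hyp

def best_answer(question, corpus):
    """Rank all candidates by aggregate evidence score; return the top one."""
    candidates = [gather_and_score(h, question, corpus)
                  for h in generate_hypotheses(question, corpus)]
    if not candidates:
        return None
    return max(candidates, key=lambda h: sum(h.evidence_scores)).answer

if __name__ == "__main__":
    facts = ["aspirin thins the blood",
             "penicillin treats bacterial infection",
             "rest treats a common cold"]
    print(best_answer("what treats a bacterial infection", facts))
    # -> "penicillin treats bacterial infection"
```

The design point is the one IBM emphasizes: rather than deducing a single answer, the system floats many candidates in parallel and lets accumulated evidence scores decide among them.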

Here’s my question for the Gnu Atheists. I take it you’re all agreed that it would be wrong to destroy Machine 1. But what about Machine 2? Would it be wrong to destroy Machine 2?

Machine 2 is extraordinarily intelligent – no human being comes close to matching its problem-solving abilities in scope or depth. Machine 2 is therefore sapient. So it seems perversely anthropocentric to say that it would be perfectly all right for a human being, who is much less intelligent than Machine 2, to dismantle it and then use it for spare parts.

But once we allow that it would be wrong to destroy Machine 2, we are acknowledging that an entity can matter ethically simply because it is sapient, and not because it is sentient. Remember: Machine 2 has no feelings, and is unable to interpret feelings in others.

Why is this a problem for the Gnu Atheists? Because empathy constitutes the very foundation of their secular system of morality. For instance, an online article entitled “Where do Atheists Get Their Morality From?” tells readers that “[m]orality is a built-in condition of humanity” and that empathy is “the foundational principle of morality.” But where does that leave intelligent beings that lack empathy, such as Machine 2? If it is correct to say that sapient beings are ethically significant in their own right, then morality cannot be based on empathy alone. It has to be based on empathy plus something else, in order to ensure that sapient beings matter too, and not just sentient beings.

But if we want to define morality in terms of respecting both sentient beings and sapient beings, then we have to ask: why these two kinds of beings, and only these two? What do they have in common? Why not define morality in terms of respecting sentient beings and sapient beings and silicon-based beings – or for that matter, square beings or sharp beings?

One might be tempted to appeal to the cover-all term “interests”, in order to bring both sentience and sapience under a common ethical umbrella. But Machine 2 doesn’t have any conscious interests. It’s just very, very good at solving all kinds of problems, which makes it intelligent. And if we are going to allow non-conscious interests to count as ethically significant, then why don’t plants matter in their own right, according to the Gnu atheists? Or do they? And why shouldn’t rocks or crystals matter? In his book, A New Kind of Science (2002), Stephen Wolfram argues that a vast range of systems, even “ones with very simple underlying rules … can generate at least as much complexity as we see in the components of typical living systems” (2002, pp. 824-825). This claim is elaborated in Wolfram’s Principle of Computational Equivalence, which says that “there is essentially just one highest level of computational sophistication, and this is achieved by almost all processes that do not seem obviously simple” (2002, p. 717). More precisely: (i) almost all systems, except those whose behaviour is obviously simple, can be used to perform computations of equivalent sophistication to those of a universal Turing machine, and (ii) it is impossible to construct a system that can carry out more sophisticated computations than a universal Turing machine (2002, pp. 720-721; the latter part of the Principle is also known as Church’s Thesis).
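
To make Wolfram’s claim concrete: an elementary cellular automaton updates a row of black-and-white cells using a rule table with only eight entries, yet one such rule, Rule 110, has been proved capable of universal computation (Matthew Cook, “Universality in Elementary Cellular Automata”, 2004). Here is a minimal sketch in Python; the rule encoding is the standard one, while the grid width, number of steps and starting pattern are arbitrary choices of mine.

```python
# Rule 110: an elementary cellular automaton whose entire "program" is an
# eight-entry lookup table, yet which is provably capable of universal
# computation. Each cell's next state depends only on itself and its
# two neighbours.

RULE = 110  # the rule number doubles as the lookup table, one bit per entry

def step(cells):
    """Apply the rule to every cell, wrapping around at the edges."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=32):
    """Start from a single live cell and print each generation."""
    cells = [0] * width
    cells[-1] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)

if __name__ == "__main__":
    run()
```

Run it and watch structured, aperiodic patterns emerge from a single live cell: exactly the sort of behaviour Wolfram has in mind when he claims that very simple rules can generate the complexity we see in living systems.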

If Wolfram is right, then it seems that a consistent Gnu atheist would have to acknowledge that since nearly every system is capable (given enough time) of performing the same kind of computations that human beings perform, it follows that nearly every natural system has the same kind of intelligence that humans do; and if we allow that intelligence (or sapience) is morally insignificant in its own right, it follows that there is no fundamental ethical difference between human beings and crystals.

Before I throw the discussion open to readers, I’d like to clarify two points. First, I deliberately chose machines to illustrate my point instead of people, in order to present the issues as clearly as possible. I am well aware that there are certain human beings who lack the qualities deemed ethically significant by the Gnu atheists, but I realized that if I attempted to point that out in an argument, all I’d get in response would be a load of obfuscation, as virtually no-one wants to appear cold and uncaring in their attitudes towards their fellow human beings.

Second, I anticipate that some Gnu atheists will retort: “If theists can’t provide a sensible answer to these vexing ethical questions, then why should we have to?” But I’m afraid that won’t do. After all, Gnu atheists are convinced that theism is fundamentally irrational, and even insane. Comparing your belief system with an insane system and saying that your system answers the big moral questions just as well as the insane one doesn’t give honest inquirers any reason to trust your system. In any case, the ethical dilemma I have presented here, relating to Machine 1 and Machine 2, presupposes the truth of materialism, as well as a computational theory of mind – both of which most theists would totally reject.

I’d like to hear what readers think about the issues I’ve raised. Thoughts, anyone?

Comments
Hi everyone, thank you all very much for your comments on my machine post. I'll address them in chronological order.

1. DrREC: Congratulations, you're the first cab off the rank. You wrote:

Oh, and you've got a huge internal inconstancy - you reject materialism, as well as a computational theory of mind, and then posit a scenario of computers with minds.

Please re-read the first paragraph of my post, where I wrote:

Since this post is a parable written for atheists, I shall assume for argument's sake that machines are in principle capable of thinking and feeling. (Emphasis mine.)

By the way, I take it you meant "internal inconsistency" rather than "internal inconstancy". And yes, as a theist and an anti-materialist, I am not troubled by the scenario I depict. I would never call Machine 2 intelligent in the first place. I believe there's more to intelligence than doing computations, and I suggest you read J.R. Lucas's essay, Minds, Machines and Gödel, to see where I'm coming from, as well as the follow-up paper which Dr. Lucas read to the Turing Conference at Brighton on April 6th, 1990. You also wrote:

I don't think salt crystals or my TI-85 calculator will ever perform the same kind of calculations the human mind can. Ever.

Then you disagree with Dr. Stephen Wolfram's Principle of Computational Equivalence, which says that "there is essentially just one highest level of computational sophistication, and this is achieved by almost all processes that do not seem obviously simple" (A New Kind of Science, 2002, p. 717). I take it that you, as a thorough-going materialist, have a well-thought-out reason for your disagreement, and that you can back it up with mathematics. Please do. In criticizing my point that since [according to Wolfram] nearly every system is capable (given enough time) of performing the same kind of computations that human beings perform, it follows [if we accept a computational theory of intelligence] that nearly every natural system has the same kind of intelligence that humans do, you offered a purported counter-example:

[A]ll life on Earth has evolved as long as any other life, and given a chance, could evolve into an intelligent self-aware being. But I don't think atheists treat all life equally on this basis.

In reply: I wasn't arguing for treating all life equally, and I never claimed that all life-forms were equal. What I claimed was that on a consistent materialist view, life-forms are not fundamentally (i.e. qualitatively) distinct from one another. Nor are they qualitatively distinct from crystals. Of course, I realize perfectly well that an atheist could consistently argue that natural computers with faster internal processors, such as human beings, should be accorded a lot more respect than bacteria - or inorganic crystals, for that matter - which are orders of magnitude slower in their problem-solving computations. My point was that it is surprising (on a naive sentientist view, according to which only beings with feelings matter) that bacteria or crystals should be accorded any significance at all. I then concluded that a consistent Gnu atheist would have to accept that "there is no fundamental ethical difference between human beings and crystals." In other words, the only ethical difference between a human being and a crystal is quantitative, not qualitative. Finally, you wrote:

I'm also uncertain of the substitution of empathy for sentience or self-awareness, which I might consider a more important criteria.

I wasn't substituting empathy for sentience, although I may have inadvertently given that impression with my question: "But where does that leave intelligent beings that lack empathy, such as Machine 2?" If so, I apologize for my imprecise wording. Rather, my point was that if a capacity for empathy is the sole basis for moral behavior on our part, then it follows that any being (such as Machine 2) which has no feelings (as it has not been hard-wired for emotions) and no capacity for empathy of its own will be a being with whom we cannot empathize, and we will therefore dismiss it as morally insignificant. The whole point of my post was to argue that it is ethically blinkered to claim that only sentient beings matter; surely intelligent beings do too. My example of Machine 2 was meant to challenge people's intuitions on that score. I am genuinely surprised to see the atheists digging in their heels and insisting that only sentient beings matter.

2. markf: Thank you for your post. Re your remarks on empathy, please see my comments in the last paragraph of my response to DrREC above. In your response, you maintained that "empathy ... is the most important cause of our behaving ethically towards other beings." My question for you is: is empathy the sole legitimate cause, on your account? Do you think it could ever be appropriate for us to try to behave ethically towards beings with whom we cannot empathize in principle, because they have no feelings of any sort? If not, why not? You also wrote:

What makes any being the object of empathy is its ability to suffer and be happy.

I agree. But I would ask you: why should we only value beings that can suffer? Why shouldn't we value beings that can think, even if they can't suffer? Isn't the ability to think equally precious? Isn't it the height of absurdity to claim that it's wrong to kill a sparrow for the fun of it, but that it's perfectly OK to destroy a being with an intelligence that would dwarf Einstein's, just for the fun of it?

3. thud: You wrote:

Well what do you have to say if I say no, there's nothing wrong with destroying Machine 2? It seems like you're assuming that atheists are going to empathize with Machine 2. Why do you think that? Because I don't. It's a computer.

If you're a materialistic atheist (and I'm not sure whether you are), then you will probably accept that you're a computer too - unless, like Searle, you're one of those rare materialists who believes that the brain is not a computer. In any case, can you think of any reason that a materialistic atheist might have for saying that a computer could never be sentient, as opposed to intelligent? If not, then your original argument for not empathizing with a computer - "Because I don't. It's a computer" - is rendered invalid. If a computer is capable of having genuine feelings, then empathizing with a computer seems perfectly appropriate.

4. rhampton7: Thank you for your post. Regarding what makes a being matter in its own right, you wrote:

I don't believe Machine 2's ability to feel is relevant (e.g. people who are psychopaths are still human beings). What is important is the "autobiographical sense of self," for this is what is needed, presumably, to pass the Chinese Room test within a hypothetical Turing test.

I take it you agree that animals are ethically significant. Virtually everyone accepts this: for instance, the Catechism of the Catholic Church says of animals that "men owe them kindness" (paragraph 2416). A sentient non-human animal could not pass the Turing test, yet we regard it as being important in its own right. So why not a highly intelligent non-human computer, which is far smarter than the animal, but which (like the animal) lacks an autobiographical sense of self?

5. mike1962: Thank you for your post. You wrote:

Sam Harris points to consciousness as the proper object of empathy, not sapience or sentience, from what I've seen.

I haven't read Sam Harris's recent ethical writings, but I would still argue that whether or not we can empathize with a non-conscious intelligence, it would be narrow-minded to dismiss such an intelligence as ethically insignificant - particularly if it can solve every problem that we can solve. Of course, as an anti-materialist, I don't think there could ever be such an intelligence, but if it did turn out to exist, then I'd be prepared to bite the bullet and say that if we matter in our own right, then the non-conscious intelligence must matter in its own right, too.

6. Neil Rickert: Thank you for your post. You wrote:

Personally, I do not believe that Machine 2 is sapient or intelligent.

OK. Why not? (I agree with you, of course, but I'm not a materialist.) You also wrote:

The harder question is about Machine 1. However, I doubt that Machine 1 will ever exist.

Why? I'm just curious, that's all. mike1962, following up on your comment, asks:

I, as Neil Rickert, doubt such a machine will ever exist. But if it did, how could we tell? We suffer because we are conscious. How could we determine if a machine is conscious or not? Hell, I can't even tell if anyone besides myself is conscious or not, let alone a machine, and neither can neuro-scientists.

Two quick points. First, neuroscientists have a fairly reliable set of neural indicators for consciousness, which work in the vast majority of human cases (PVS is a bit of a gray zone, however). Second, Wittgenstein's private language argument suffices to refute any notion that you and only you might be conscious. If that were the case, then you couldn't meaningfully be said to follow any rules (e.g. rules of discourse, or rules of a game), as there would be no standpoint from which you could ascertain whether you'd followed them correctly or not. A community of other minds provides such a standpoint - that's why we have soccer referees. Rules can only be followed and checked within a community. The problem of ascertaining whether a machine is conscious is formidable. But I submit that if a machine could churn out Proust-like volumes describing its inner experiences in great depth as it performed mundane tasks, and if it could do a better job of introspecting than I could, then I would seriously start wondering. Another way of ascertaining whether a machine is conscious would be to set it problems that only a being with a capacity for stepping into other people's shoes could solve. For example, this old conundrum:

You're in a room with two doors and two identical men whom you cannot tell apart. One of the men lies all the time, and the other always tells the truth. Behind one door, there is a lion who will eat you no matter what, and the other door leads to a way out. You can ask the men one question to get you out. What question do you ask to keep yourself from getting killed?

(Answer: "Which door would the other man say is the safe one?" Whichever man you ask will point to the lion's door - the truth-teller truthfully reports the liar's false answer, while the liar misreports the truth-teller's true answer - so you take the other door.) Incidentally, this invites an interesting philosophical question: are there problems which only a being with a capacity for empathy can solve? If the answer is "Yes", then Machine 2 is not smarter than human beings in all respects, after all. At best, it's smarter than human beings about "third-person" states of affairs. Problems requiring empathy it would flub.

7. Scott Andrews (and thud): I'm a great Douglas Adams fan. The guy's hilarious. I also don't eat meat, although I now eat fish after going nearly 20 years without it. This post wasn't meant to be about vegetarianism as such, but briefly, I would say that any cow that wanted to be eaten would be envisaging a future state in which the eaters feel sated after having done so. In other words, the desire to be eaten presupposes a capacity for abstract thinking, which cows lack. Thanks for the hypotheticals.

8. bornagain77: Thanks as always for the vote of support. Much appreciated.
vjtorley
September 21, 2011, 12:05 AM PDT
As to "What is reductive materialism?": it is simply classical materialism, as has been postulated since before the Ancient Greeks. From the Wikipedia article on Materialism:

In philosophy, the theory of materialism holds that the only thing that exists is matter; that all things are composed of material and all phenomena (including consciousness) are the result of material interactions. In other words, matter is the only substance. ... The professor of Philosophy at the University of Notre Dame Alvin Plantinga criticises it, and the Emeritus Regius Professor of Divinity Keith Ward suggests that materialism is rare amongst contemporary UK philosophers: "Looking around my philosopher colleagues in Britain, virtually all of whom I know at least from their published work, I would say that very few of them are materialists."[24] Some critics object to materialism as part of an overly skeptical, narrow or reductivist approach to theorizing, rather than to the ontological claim that matter is the only substance. Particle physicist and Anglican theologian John Polkinghorne objects to what he calls promissory materialism - claims that materialistic science will eventually succeed in explaining phenomena it has not so far been able to explain.[36] (Polkinghorne prefers dual-aspect monism to materialism.[37]) The psychologist Imants Barušs suggests that "materialists tend to indiscriminately apply a 'pebbles in a box' schema to explanations of reality even though such a schema is known to be incorrect in general for physical phenomena. Thus, materialism cannot explain matter, let alone anomalous phenomena or subjective experience,[38] but remains entrenched in academia largely for political reasons."[39]

http://en.wikipedia.org/wiki/Materialism

Etc., etc., etc.
bornagain77
September 20, 2011, 6:01 PM PDT
How about this: take a normal cow - which does not want to be eaten - and conduct a hypothetical procedure on it which causes it to want to be eaten. How do we feel about that?
thud
September 20, 2011, 5:36 PM PDT
This reminds me of a fictional character in a Douglas Adams book, The Restaurant at the End of the Universe. (To put it in context, it was humor.) It was a sentient creature, bovine I think, bred with the desire to be killed and served as food. It would come to the table, recommend parts of itself, and show up later on the plate. I wonder if killing that would be wrong, or perhaps engineering it.
ScottAndrews
September 20, 2011, 5:24 PM PDT
What is reductive materialism?
dmullenix
September 20, 2011, 5:10 PM PDT
"Machine 1 is like HAL9000, in the movie 2001. It has a fully human psyche, which is capable of the entire gamut of human emotions. It can even appreciate art. It also thinks: it is capable of speech, speech recognition, facial recognition, natural language processing and reasoning. Machine 1 is also capable of genuine empathy."
I, as Neil Rickert, doubt such a machine will ever exist. But if it did, how could we tell? We suffer because we are conscious. How could we determine if a machine is conscious or not? Hell, I can't even tell if anyone besides myself is conscious or not, let alone a machine, and neither can neuro-scientists. Maybe one day we shall be able to. But not yet. I know this is beside the point. You're assuming that a machine can be conscious. (A tall order.) If a machine were conscious, then I would be inclined to treat it with empathy.
mike1962
September 20, 2011, 4:33 PM PDT
Bottom line, we value consciousness and give a damn about its state of suffering because we are conscious and we are programmed to think it worthy. Some people ("sociopaths") are born with a lack of empathy, i.e., they lack the empathetic neural programming most of us have. Nothing anyone can say or do can make them have empathy if they lack it. (All they can do is learn to fake it for their own interests. Faking empathy is not empathy.)
mike1962
September 20, 2011, 4:26 PM PDT
Personally, I do not believe that Machine 2 is sapient or intelligent. If I hesitate to destroy it, that would only be because I value its computational abilities. The harder question is about Machine 1. However, I doubt that Machine 1 will ever exist. After reading the first two sentences, I wondered whether you were going to ask about vegetarianism. But it turned out that you did not go there.
Neil Rickert
September 20, 2011, 4:15 PM PDT
DrREC, as to:
I don't have to answer your query to discuss this question...

But alas, you can't build castles in midair!!!!
bornagain77
September 20, 2011, 4:12 PM PDT
Sam Harris points to consciousness as the proper object of empathy, not sapience or sentience, from what I've seen.
mike1962
September 20, 2011, 4:05 PM PDT
Calm down, there. No need for triple punctuation and assaults on my worldview - whatever it is you think you know it is. I'm just broadly saying I don't have to answer your query to discuss this question. In speaking of minds, in addition to reductive materialism (which in this case has a very precise meaning), philosophers consider non-reductive materialism (Functionalism), and Eliminative Materialism. Churchland, in Matter and Consciousness, has a discussion of this.
DrREC
September 20, 2011, 4:02 PM PDT
DrREC, bear with me just a bit more here. You state:
A material theory of mind also doesn’t require reductive materialism, per se.
But alas, reductive materialism, upon which neo-Darwinism is based, requires exactly that premise, i.e. neo-Darwinism holds that mind 'emerged' from a reductive materialistic framework!!! Do you deny this staple of neo-Darwinian theory??? And since you are such an ardent supporter of neo-Darwinism, why does it not deeply concern you that the reductive materialistic foundation of neo-Darwinism is demonstrably false according to modern science??? If you were truly concerned with building a coherent worldview, should you not humbly admit, at least to yourself, that you have no foundation, and try to find a worldview that has a coherent foundation???
bornagain77
September 20, 2011, 3:42 PM PDT
I don't believe Machine 2's ability to feel is relevant (e.g. people who are psychopaths are still human beings). What is important is the "autobiographical sense of self," for this is what is needed, presumably, to pass the Chinese Room test within a hypothetical Turing test. So I think what you are really asking is a form of the question famously debated between Gödel and Turing - whether or not our minds can be replicated by a computer. If so, then the brain is a (Turing) machine made of flesh and blood instead of chips and wires, but a machine nonetheless. If not, then the brain may be said to have a "creative spark" that will forever be beyond the scope of machines. Of course this argument has developed many interesting nuances over the decades, but the core question remains.
rhampton7
September 20, 2011, 3:28 PM PDT
BA77, pointing out an internal inconstancy in an argument doesn't require that argument to be true or false. A material theory of mind also doesn't require reductive materialism, per se. For now, let's not change the topic. I'm sure it will just result in a dump of your quantum mechanics links for the umpteenth time.
DrREC
September 20, 2011, 3:22 PM PDT
DrREC, pardon a bit if I digress, but could you please scientifically prove, with empirical evidence, that reductive materialism is true??? I've always been fascinated that the starting foundational presumption of atheists, reductive materialism itself, has never been rigidly defended as true by atheists on UD, even though their entire worldview rests on their primary assumption (faith) that reductive materialism is true. In fact, it seems that each time 'the problem' is brought up, atheists avoid it altogether. DrREC, why should this be so, since establishing the certainty of the foundational premise of your worldview is the most important thing a person could do in their quest to build a solidly coherent worldview???
bornagain77
September 20, 2011, 3:00 PM PDT
I would like to point out that a significant percentage of humans appear to be incapable of feeling empathy. The condition is considered pathological, but it is probably just the extreme end of a spectrum. There are people who are empathetic to an extreme.
Petrushka
September 20, 2011, 2:11 PM PDT
Well what do you have to say if I say no, there's nothing wrong with destroying Machine 2? It seems like you're assuming that atheists are going to empathize with Machine 2. Why do you think that? Because I don't. It's a computer.
thud
September 20, 2011, 1:55 PM PDT
vj - you will not be surprised to read that I disagree with almost every word of the above. I agree that it is empathy that is the most important cause of our behaving ethically towards other beings. But that does not mean the object of our ethical behaviour has to be capable of empathy! Like most people I have a degree of empathy towards cats and will on occasion be ethical towards them. That doesn't mean cats have empathy towards anything! What makes any being the object of empathy is its ability to suffer and be happy. Machine 1 would appear to be capable of suffering and being happy - therefore I would find it wrong to destroy it without due cause. Machine 2 appears to be incapable of suffering and being happy, and therefore I would have no problem dismantling it, provided that action increased the happiness/decreased the suffering of creatures that were capable of these things. So I don't agree that:
So it seems perversely anthropocentric to say that it would be perfectly all right for a human being, who is much less intelligent than Machine 2, to dismantle it and then use it for spare parts.
But the reason is nothing to do with empathy. It is to do with the ability to suffer or be happy.
markf
September 20, 2011, 1:53 PM PDT
In judging the "rights" of a non-human being, I think an ordinary Turing-testing human would look for evidence that the machine is both self-aware and has emotions. It is, of course, possible to have a program pass a Turing test in a superficial way. It's already happened in demonstrations where ordinary people were asked to choose between human "experts" in particular fields and computer programs similar to Watson. At the other end of the spectrum, we allow people on respirators to die when there is no evidence of brain activity.

Empathy requires us to believe the other entity is "like us," at least to the extent of having emotions. People are empathetic toward dogs and cats. We even have laws against cruelty to animals. In one sense, the "rational" behavior of a computer is its least human-like quality. Formal logic seems to be a rather recent invention. I suspect that many people, faced with the choice between saving the Watson computer from a fire and saving a kitten, would save the kitten, even though the kitten is not sapient.

So if we manage to make artificial intelligences, our relationship with them, from an ethical standpoint, will depend on how they behave and whether they convince us they have emotional inner lives. The "convincing" will take time and might require some knowledge on the part of the builders as to whether the behavior is emergent or programmed. All this has been the subject of vast numbers of science fiction stories.
Petrushka
September 20, 2011, 1:45 PM PDT
Oh, and you've got a huge internal inconstancy - you reject materialism, as well as a computational theory of mind, and then posit a scenario of computers with minds. Is your solution that the scenario that presents a problem for atheists would never be a problem for theists, because if you're right, it can't happen?
DrREC
September 20, 2011, 1:29 PM PDT
"a consistent Gnu atheist would have to acknowledge that since nearly every system is capable (given enough time) of performing the same kind of computations that human beings perform, it follows that nearly every natural system has the same kind of intelligence that humans do, and if we allow that intelligence (or sapience) is morally insignificant in its own right, it follows that that there is no fundamental ethical difference betwen human beings and crystals." What? You can't be serious-what a gallop of unbacked assertions, and false equivalences! "a consistent Gnu atheist would have to acknowledge that since nearly every system is capable (given enough time) of performing the same kind of computations that human beings perform" I don't think salt crystals or my TI-85 calculator will ever perform the same kind of calculations the human mind can. Ever. "(given enough time)" deserves an analogy-all life on Earth has evolved as long as any other life, and given a chance, could evolve into a intelligent self-aware being. But I don't think atheists treat all life equally on this basis. "if we allow that intelligence (or sapience) is morally insignificant in its own right" Why would we allow that? How many non-theist sci-fi authors and shows judge sentience to be the key feature that endows moral rights? I'm also uncertain of the substitution of empathy for sentience or self-awareness, which I might consider a more important criteria. This is odd, desperate stuff right here.DrREC
September 20, 2011, 1:26 PM PDT
