
Machine 1 and Machine 2: A Challenge to the Ethics of the New Atheists



(Photo of a gnu or wildebeest in the Ngorongoro Crater, Tanzania. Courtesy of Muhammad Mahdi Karim and Wikipedia.)

Do sapient beings deserve respect, simply because they are sapient? An affirmative answer to this question seems reasonable, but it also imperils the Gnu Atheist project of basing morality on our shared capacity for empathy. My short parable about two machines illustrates why. Let’s call them Machine 1 and Machine 2. Since this post is a parable written for atheists, I shall assume for argument’s sake that machines are in principle capable of thinking and feeling.

Machine 1 is like HAL 9000 in the movie 2001. It has a fully human psyche, which is capable of the entire gamut of human emotions. It can even appreciate art. It also thinks: it is capable of speech, speech recognition, facial recognition, natural language processing and reasoning. Machine 1 is also capable of genuine empathy.

Machine 2 is different. It’s more like an advanced version of Watson, an artificial intelligence computer system developed by IBM which is capable of answering questions posed in natural language. IBM has described Watson as “an application of advanced Natural Language Processing, Information Retrieval, Knowledge Representation and Reasoning, and Machine Learning technologies to the field of open domain question answering,” which is “built on IBM’s DeepQA technology for hypothesis generation, massive evidence gathering, analysis, and scoring.” Building on Watson’s successes in retrieving and interpreting useful information, Machine 2 uses its massively parallel probabilistic evidence-based architecture to advise human experts in fields as diverse as healthcare, technical support, enterprise and government. Since its advanced problem-solving capacities easily surpass those of any human being in breadth and depth, AI experts are unanimous in agreeing that Machine 2 can think. However, nobody has ever suggested that Machine 2 can feel. It was never designed to have feelings, or to interpret other people’s emotions for that matter. Also, it has no autobiographical sense of self.
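To make that architecture concrete, here is a minimal sketch, in Python, of a DeepQA-style pipeline: generate candidate hypotheses, gather evidence for each, score the evidence, and rank the results. The tiny corpus, the single scorer and all function names are illustrative assumptions of mine, not IBM’s actual code.

```python
# Illustrative sketch of a DeepQA-style pipeline (hypothesis generation,
# evidence gathering, scoring, ranking). All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    answer: str
    scores: list = field(default_factory=list)

def generate_hypotheses(question, corpus):
    """Hypothesis generation: any passage sharing a keyword with the
    question yields a candidate answer (a real system parses the clue)."""
    keywords = set(question.lower().split())
    return [Hypothesis(answer=doc["title"])
            for doc in corpus
            if keywords & set(doc["text"].lower().split())]

def score_evidence(hypothesis, question, corpus):
    """Evidence gathering and scoring: for each passage that mentions the
    candidate, count how many question keywords it also contains. The real
    system runs hundreds of such scorers and merges them probabilistically."""
    keywords = set(question.lower().split())
    for doc in corpus:
        if hypothesis.answer.lower() in doc["text"].lower():
            hypothesis.scores.append(
                len(keywords & set(doc["text"].lower().split())))

def answer(question, corpus):
    candidates = generate_hypotheses(question, corpus)
    for h in candidates:
        score_evidence(h, question, corpus)
    # Final merging and ranking: highest combined evidence score wins.
    return max(candidates, key=lambda h: sum(h.scores), default=None)

corpus = [
    {"title": "Toronto", "text": "Toronto is a city in Canada."},
    {"title": "Chicago",
     "text": "Chicago, a U.S. city, has airports named for World War II figures."},
]
best = answer("Which U.S. city has airports named for World War II figures?",
              corpus)
print(best.answer if best else "no answer")  # prints: Chicago
```

Nothing in this skeleton feels anything; it merely ranks strings by evidence counts, which is precisely what makes Machine 2 philosophically interesting.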

Here’s my question for the Gnu Atheists. I take it you’re all agreed that it would be wrong to destroy Machine 1. But what about Machine 2? Would it be wrong to destroy Machine 2?

Machine 2 is extraordinarily intelligent – no human being comes close to matching its problem-solving abilities in scope or depth. Machine 2 is therefore sapient. So it seems perversely anthropocentric to say that it would be perfectly all right for a human being, who is much less intelligent than Machine 2, to dismantle it and then use it for spare parts.

But once we allow that it would be wrong to destroy Machine 2, we are acknowledging that an entity can matter ethically, simply because it is sapient and not because it is sentient. Remember: Machine 2 has no feelings, and is unable to interpret feelings in others.

Why is this a problem for the Gnu Atheists? Because empathy constitutes the very foundation of their secular system of morality. For instance, an online article entitled “Where do Atheists Get Their Morality From?” tells readers that “[m]orality is a built-in condition of humanity” and that empathy is “the foundational principle of morality.” But where does that leave intelligent beings that lack empathy, such as Machine 2? If it is correct to say that sapient beings are ethically significant in their own right, then morality cannot be based on empathy alone. It has to be based on empathy plus something else, in order to ensure that sapient beings matter too, and not just sentient beings.

But if we want to define morality in terms of respecting both sentient beings and sapient beings, then we have to ask: why these two kinds of beings, and only these two? What do they have in common? Why not define morality in terms of respecting sentient beings and sapient beings and silicon-based beings – or for that matter, square beings or sharp beings?

One might be tempted to appeal to the cover-all term “interests”, in order to bring both sentience and sapience under a common ethical umbrella. But Machine 2 doesn’t have any conscious interests. It’s just very, very good at solving all kinds of problems, which makes it intelligent. And if we are going to allow non-conscious interests to count as ethically significant, then why don’t plants matter in their own right, according to the Gnu atheists? Or do they? And why shouldn’t rocks or crystals matter? In his book, A New Kind of Science (2002), Stephen Wolfram argues that a vast range of systems, even “ones with very simple underlying rules … can generate at least as much complexity as we see in the components of typical living systems” (2002, pp. 824-825). This claim is elaborated in Wolfram’s Principle of Computational Equivalence, which says that “there is essentially just one highest level of computational sophistication, and this is achieved by almost all processes that do not seem obviously simple” (2002, p. 717). More precisely: (i) almost all systems, except those whose behaviour is “obviously simple”, can be used to perform computations of equivalent sophistication to those of a universal Turing machine, and (ii) it is impossible to construct a system that can carry out more sophisticated computations than a universal Turing machine (2002, pp. 720-721; the latter part of the Principle is also known as Church’s Thesis).
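Wolfram’s paradigm case of such a system is the elementary cellular automaton: a row of cells, each updated from its own state and its two neighbours’ states via an eight-entry lookup table. One of these rules, Rule 110, has even been proved computationally universal (the proof is due to Matthew Cook). The sketch below, in Python, is merely an illustration of the idea, not Wolfram’s own code:

```python
# Elementary cellular automaton Rule 110: an eight-entry lookup table
# (one output bit per 3-cell neighbourhood) that is Turing-universal.
RULE = 110  # 0b01101110: bit v of RULE is the new state for neighbourhood v

def step(cells):
    """One synchronous update of every cell (wrap-around boundary)."""
    n = len(cells)
    return [(RULE >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 63 + [1]  # start from a single live cell
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Run it and the pattern that unfolds is anything but obviously simple, which is exactly Wolfram’s point: a rule that fits in a single byte already sits at the “highest level of computational sophistication”.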

If Wolfram is right, then it seems that a consistent Gnu atheist would have to acknowledge that since nearly every system is capable (given enough time) of performing the same kind of computations that human beings perform, it follows that nearly every natural system has the same kind of intelligence that humans do, and if we allow that intelligence (or sapience) is morally significant in its own right, it follows that that there is no fundamental ethical difference betwen human beings and crystals.

Before I throw the discussion open to readers, I’d like to clarify two points. First, I deliberately chose machines to illustrate my point instead of people, in order to present the issues as clearly as possible. I am well aware that there are certain human beings who lack the qualities deemed ethically significant by the Gnu atheists, but I realized that if I attempted to point that out in an argument, all I’d get in response would be a load of obfuscation, as virtually no-one wants to appear cold and uncaring in their attitudes towards their fellow human beings.

Second, I anticipate that some Gnu atheists will retort: “If theists can’t provide a sensible answer to these vexing ethical questions, then why should we have to?” But I’m afraid that won’t do. After all, Gnu atheists are convinced that theism is fundamentally irrational, and even insane. Comparing your belief system with an insane system and saying that your system answers the big moral questions just as well as the insane one doesn’t give honest inquirers any reason to trust your system. In any case, the ethical dilemma I have presented here, relating to Machine 1 and Machine 2, presupposes the truth of materialism, as well as a computational theory of mind – both of which most theists would totally reject.

I’d like to hear what readers think about the issues I’ve raised. Thoughts, anyone?

51 Replies to “Machine 1 and Machine 2: A Challenge to the Ethics of the New Atheists”

  1. DrREC says:

    “a consistent Gnu atheist would have to acknowledge that since nearly every system is capable (given enough time) of performing the same kind of computations that human beings perform, it follows that nearly every natural system has the same kind of intelligence that humans do, and if we allow that intelligence (or sapience) is morally significant in its own right, it follows that that there is no fundamental ethical difference betwen human beings and crystals.”

    What? You can’t be serious – what a gallop of unbacked assertions and false equivalences!

    “a consistent Gnu atheist would have to acknowledge that since nearly every system is capable (given enough time) of performing the same kind of computations that human beings perform”

    I don’t think salt crystals or my TI-85 calculator will ever perform the same kind of calculations the human mind can. Ever.

    “(given enough time)” deserves an analogy: all life on Earth has evolved as long as any other life, and given a chance, could evolve into an intelligent self-aware being. But I don’t think atheists treat all life equally on this basis.

    “if we allow that intelligence (or sapience) is morally significant in its own right”

    Why would we allow that? How many non-theist sci-fi authors and shows judge sentience to be the key feature that endows moral rights? I’m also uncertain of the substitution of empathy for sentience or self-awareness, which I might consider a more important criterion.

    This is odd, desperate stuff right here.

  2. DrREC says:

    Oh, and you’ve got a huge internal inconstancy-you reject materialism, as well as a computational theory of mind, and then posit a scenario of computers with minds.

    Is your solution that the scenario that presents a problem for atheists would never be a problem for theists, because if you’re right, it can’t happen?

  3. Petrushka says:

    In judging the “rights” of a non-human being, I think an ordinary human conducting a Turing test would look for evidence that the machine is both self-aware and has emotions.

    It is, of course, possible to have a program pass a Turing test in a superficial way. It’s already happened in demonstrations where ordinary people were asked to distinguish between human “experts” in particular fields and computer programs similar to Watson.

    At the other end of the spectrum we allow people on respirators to die when there is no evidence of brain activity.

    Empathy requires us to believe the other entity is “like us.” At least to the extent of having emotions. People are empathetic toward dogs and cats. We even have laws against cruelty to animals.

    In one sense, the “rational” behavior of a computer is its least human-like quality. Formal logic seems to be a rather recent invention. I suspect that many people, faced with the choice between saving the Watson computer from a fire, and saving a kitten, would save the kitten, even though the kitten is not sapient.

    So if we manage to make artificial intelligences, our relationship with them, from an ethical standpoint, will depend on how they behave and whether they convince us they have emotional inner lives.

    The “convincing” will take time and might require some knowledge on the part of the builders as to whether the behavior is emergent or programmed.

    All this has been the subject of vast numbers of science fiction stories.

  4. markf says:

    vj – you will not be surprised to read that I disagree with almost every word of the above.

    I agree that it is empathy that is the most important cause of our behaving ethically towards other beings. But that does not mean the object of our ethical behaviour has to be capable of empathy! Like most people I have a degree of empathy towards cats and will on occasion be ethical towards them. That doesn’t mean cats have empathy towards anything!

    What makes any being the object of empathy is its ability to suffer and be happy.

    Machine 1 would appear to be capable of suffering and being happy – therefore I would find it wrong to destroy it without due cause. Machine 2 appears to be incapable of suffering and being happy and therefore I would have no problem dismantling it provided that action increased the happiness/decreased the suffering of creatures that were capable of these things.

    So I don’t agree that:

    So it seems perversely anthropocentric to say that it would be perfectly all right for a human being, who is much less intelligent than Machine 2, to dismantle it and then use it for spare parts.

    But the reason is nothing to do with empathy. It is to do with the ability to suffer or be happy.

  5. thud says:

    Well what do you have to say if I say no, there’s nothing wrong with destroying Machine 2? It seems like you’re assuming that atheists are going to empathize with Machine 2. Why do you think that? Because I don’t. It’s a computer.

  6. Petrushka says:

    I would like to point out that a significant percentage of humans appear to be incapable of feeling empathy. The condition is considered pathological, but it is probably just the extreme end of a spectrum. There are people who are empathetic to an extreme.

  7. bornagain77 says:

    DrREC, pardon a bit if I digress, but could you please scientifically prove, with empirical evidence, that reductive materialism is true??? I’ve always been fascinated that the starting foundational presumption of atheists, reductive materialism itself, has never been rigorously defended as true by atheists on UD, even though their entire worldview rests on their primary assumption (faith) that reductive materialism is true. In fact, it seems that each time ‘the problem’ is brought up, atheists avoid it altogether. DrREC, why should this be so, since establishing the certainty of the foundational premise of your worldview is the most important thing a person could do in their quest to build a solidly coherent worldview???

  8. DrREC says:

    BA77,

    Pointing out an internal inconsistency in an argument doesn’t require that argument to be true or false. A material theory of mind also doesn’t require reductive materialism, per se.

    For now, let’s not change the topic. I’m sure it will just result in a dump of your quantum mechanics links for the umpteenth time.

  9. rhampton7 says:

    I don’t believe Machine 2’s ability to feel is relevant (e.g. people who are psychopaths are still human beings). What is important is the “autobiographical sense of self,” for this is what is needed, presumably, to pass the Chinese Room test within a hypothetical Turing test.

    So I think what you are really asking is a form of the question famously debated between Gödel and Turing – whether or not our minds can be replicated by a computer. If so, then the brain is a (Turing) machine made of flesh and blood instead of chips and wires, but a machine nonetheless. If not, then the brain may be said to have a “creative spark” that will forever be beyond the scope of machines. Of course this argument has developed many interesting nuances over the decades, but the core question remains.

  10. bornagain77 says:

    DrREC bear with me just a bit more here, you state:

    A material theory of mind also doesn’t require reductive materialism, per se.

    But alas, reductive materialism, upon which neo-Darwinism is based, requires exactly that premise, i.e. neo-Darwinism holds that mind ‘emerged’ from a reductive materialistic framework!!! Do you deny this staple of neo-Darwinian theory??? And since you are such an ardent supporter of neo-Darwinism, why does it not deeply concern you that the reductive materialistic foundation of neo-Darwinism is demonstrably false by modern science??? If you were truly concerned with building a coherent worldview, should you not humbly admit, at least to yourself, that you have no foundation, and try to find a worldview that has a coherent foundation???

  11. DrREC says:

    Calm down, there. No need for triple punctuation and assaults on my worldview – whatever it is you think it is.

    I’m just broadly saying I don’t have to answer your query to discuss this question. In speaking of minds, in addition to reductive materialism (which in this case has a very precise meaning), philosophers consider non-reductive materialism (functionalism) and eliminative materialism.

    Churchland, in Matter and Consciousness, has a discussion of this.

  12. mike1962 says:

    Sam Harris points to consciousness as the proper object of empathy, not sapience or sentience, from what I’ve seen.

  13. bornagain77 says:

    DrREC, as to:

    I don’t have to answer your query to discuss this question…

    But alas, you can’t build castles in midair!!!!

  14. Neil Rickert says:

    Personally, I do not believe that Machine 2 is sapient or intelligent. If I hesitate to destroy it, that would only be because I value its computational abilities.

    The harder question is about Machine 1. However, I doubt that Machine 1 will ever exist.

    After reading the first two sentences, I wondered whether you were going to ask about vegetarianism. But it turned out that you did not go there.

  15. mike1962 says:

    Bottom line, we value consciousness and give a damn about its state of suffering because we are conscious and we are programmed to think it worthy.

    Some people (“sociopaths”) are born with a lack of empathy, i.e., they lack the empathetic neural programming most of us have. Nothing anyone can say or do can make them have empathy if they lack it. (All they can do is learn to fake it for their own interests. Faking empathy is not empathy.)

  16. mike1962 says:

    “Machine 1 is like HAL 9000 in the movie 2001. It has a fully human psyche, which is capable of the entire gamut of human emotions. It can even appreciate art. It also thinks: it is capable of speech, speech recognition, facial recognition, natural language processing and reasoning. Machine 1 is also capable of genuine empathy.”

    I, like Neil Rickert, doubt such a machine will ever exist. But if it did, how could we tell? We suffer because we are conscious. How could we determine if a machine is conscious or not? Hell, I can’t even tell if anyone besides myself is conscious or not, let alone a machine, and neither can neuroscientists. Maybe one day we shall be able to. But not yet.

    I know this is beside the point. You’re assuming that a machine can be conscious. (A tall order.) If a machine were conscious, then I would be inclined to treat it with empathy.

  17. dmullenix says:

    What is reductive materialism?

  18. ScottAndrews says:

    This reminds me of a fictional character in a Douglas Adams book, The Restaurant at the End of the Universe. (To put it in context, it was humor.) It was a sentient creature, bovine I think, bred with the desire to be killed and served as food. It would come to the table, recommend parts of itself, and show up later on the plate.
    I wonder if killing that would be wrong, or perhaps engineering it.

  19. thud says:

    How about this, take a normal cow–which does not want to be eaten–and conduct a hypothetical procedure on it which causes it to want to be eaten. How do we feel about that?

  20. bornagain77 says:

    As to: What is reductive materialism?

    It is simply classical materialism, as has been postulated since the time of the ancient Greeks.

    Materialism
    Excerpt: In philosophy, the theory of materialism holds that the only thing that exists is matter; that all things are composed of material and all phenomena (including consciousness) are the result of material interactions. In other words, matter is the only substance.,,,

    The professor of Philosophy at the University of Notre Dame Alvin Plantinga criticises it, and the Emeritus Regius Professor of Divinity Keith Ward suggests that materialism is rare amongst contemporary UK philosophers: “Looking around my philosopher colleagues in Britain, virtually all of whom I know at least from their published work, I would say that very few of them are materialists.”[24]

    Some critics object to materialism as part of an overly skeptical, narrow or reductivist approach to theorizing, rather than to the ontological claim that matter is the only substance. Particle physicist and Anglican theologian John Polkinghorne objects to what he calls promissory materialism — claims that materialistic science will eventually succeed in explaining phenomena it has not so far been able to explain.[36] (Polkinghorne prefers dual-aspect monism to materialism.[37])

    The psychologist Imants Barušs suggests that “materialists tend to indiscriminately apply a ‘pebbles in a box’ schema to explanations of reality even though such a schema is known to be incorrect in general for physical phenomena. Thus, materialism cannot explain matter, let alone anomalous phenomena or subjective experience,[38] but remains entrenched in academia largely for political reasons.”[39]
    http://en.wikipedia.org/wiki/Materialism

    etc… etc.. etc…

  21. vjtorley says:

    Hi everyone,

    Thank you all very much for your comments on my machine post. I’ll address them in chronological order.

    1. DrREC:

    Congratulations, you’re the first cab off the rank. You wrote:

    Oh, and you’ve got a huge internal inconstancy – you reject materialism, as well as a computational theory of mind, and then posit a scenario of computers with minds.

    Please re-read the first paragraph of my post, where I wrote:

    Since this post is a parable written for atheists, I shall assume for argument’s sake that machines are in principle capable of thinking and feeling. (Emphasis mine.)

    By the way, I take it you meant “internal inconsistency” rather than “internal inconstancy”. And yes, as a theist and an anti-materialist, I am not troubled by the scenario I depict. I would never call Machine 2 intelligent in the first place. I believe there’s more to intelligence than doing computations, and I suggest you read J.R. Lucas’s essay, Minds, Machines and Gödel, to see where I’m coming from, as well as this follow-up paper here, which Dr. Lucas read to the Turing Conference at Brighton on April 6th, 1990.

    You also wrote:

    I don’t think salt crystals or my TI-85 calculator will ever perform the same kind of calculations the human mind can. Ever.

    Then you disagree with Dr. Stephen Wolfram’s Principle of Computational Equivalence, which says that “there is essentially just one highest level of computational sophistication, and this is achieved by almost all processes that do not seem obviously simple” (A New Kind of Science, 2002, p. 717). I take it that you, as a thorough-going materialist, have a well-thought-out reason for your disagreement, and that you can back it up with mathematics. Please do.

    In criticizing my point that since [according to Wolfram] nearly every system is capable (given enough time) of performing the same kind of computations that human beings perform, it follows [if we accept a computational theory of intelligence] that nearly every natural system has the same kind of intelligence that humans do, you offered a purported counter-example:

    [A]ll life on Earth has evolved as long as any other life, and given a chance, could evolve into an intelligent self-aware being. But I don’t think atheists treat all life equally on this basis.

    In reply: I wasn’t arguing for treating all life equally, and I never claimed that all life-forms were equal. What I claimed was that on a consistent materialist view, life-forms are not fundamentally (i.e. qualitatively) distinct from one another. Nor are they qualitatively distinct from crystals. Of course, I realize perfectly well that an atheist could consistently argue that natural computers with faster internal processors, such as human beings, should be accorded a lot more respect than bacteria – or inorganic crystals, for that matter – which are orders of magnitude slower in their problem-solving computations. My point was that it is surprising (on a naive sentientist view, according to which only beings with feelings matter) that bacteria or crystals should be accorded any significance at all. I then concluded that a consistent Gnu atheist would have to accept that “there is no fundamental ethical difference betwen human beings and crystals.” In other words, the only ethical difference between a human being and a crystal is quantitative, not qualitative.

    Finally, you wrote:

    I’m also uncertain of the substitution of empathy for sentience or self-awareness, which I might consider a more important criterion.

    I wasn’t substituting empathy for sentience, although I may have inadvertently given that impression with my question: “But where does that leave intelligent beings that lack empathy, such as Machine 2?” If so, I apologize for my imprecise wording. Rather, my point was that if a capacity for empathy is the sole basis for moral behavior on our part, then it follows that any being (such as Machine 2) which has no feelings (as it has not been hard-wired for emotions) and no capacity for empathy of its own will be a being with whom we cannot empathize, and we will therefore dismiss it as morally insignificant. The whole point of my post was to argue that it is ethically blinkered to claim that only sentient beings matter; surely intelligent beings do too. My example of Machine 2 was meant to challenge people’s intuitions on that score. I am genuinely surprised to see the atheists digging in their heels and insisting that only sentient beings matter.

    2. markf

    Thank you for your post.

    Re your remarks on empathy, please see my comments above in the last paragraph of my response to DrREC. In your response, you maintained that “it is empathy that is the most important cause of our behaving ethically towards other beings.” My question for you is: is empathy the sole legitimate cause, on your account? Do you think it could ever be appropriate for us to try to behave ethically towards beings with whom we cannot empathize in principle, because they have no feelings of any sort? If not, why not?

    You also wrote:

    What makes any being the object of empathy is its ability to suffer and be happy.

    I agree. But I would ask you: why should we only value beings that can suffer? Why shouldn’t we value beings that can think, even if they can’t suffer? Isn’t the ability to think equally precious? Isn’t it the height of absurdity to claim that it’s wrong to kill a sparrow for the fun of it, but that it’s perfectly OK to destroy a being with an intelligence that would dwarf Einstein’s, just for the fun of it?

    3. thud

    You wrote:

    Well what do you have to say if I say no, there’s nothing wrong with destroying Machine 2? It seems like you’re assuming that atheists are going to empathize with Machine 2. Why do you think that? Because I don’t. It’s a computer.

    If you’re a materialistic atheist (and I’m not sure whether you are), then you will probably accept that you’re a computer too – unless, like Searle, you’re one of those rare materialists who believes that the brain is not a computer.

    In any case, can you think of any reason that a materialistic atheist might have for saying that a computer could never be sentient, as opposed to intelligent? If not, then your original argument for not empathizing with a computer – “Because I don’t. It’s a computer” – is rendered invalid. If a computer is capable of having genuine feelings, then empathizing with a computer seems perfectly appropriate.

    4. rhampton7

    Thank you for your post. Regarding what makes a being matter in its own right, you wrote:

    I don’t believe Machine 2’s ability to feel is relevant (e.g. people who are psychopaths are still human beings). What is important is the “autobiographical sense of self,” for this is what is needed, presumably, to pass the Chinese Room test within a hypothetical Turing test.

    I take it you agree that animals are ethically significant. Virtually everyone accepts this: for instance, the Catechism of the Catholic Church writes of animals that “men owe them kindness” (paragraph 2416). A sentient non-human animal could not pass the Turing test, yet we regard it as being important in its own right. So why not a highly intelligent non-human computer, which is far smarter than the animal, but which (like the animal) lacks an autobiographical sense of self?

    5. mike1962

    Thank you for your post. You wrote:

    Sam Harris points to consciousness as the proper object of empathy, not sapience or sentience, from what I’ve seen.

    I haven’t read Sam Harris’s recent ethical writings, but I would still argue that whether or not we can empathize with a non-conscious intelligence, it would be narrow-minded to dismiss such an intelligence as ethically insignificant – particularly if it can solve every problem that we can solve. Of course, as an anti-materialist, I don’t think there could ever be such an intelligence, but if it did turn out to exist, then I’d be prepared to bite the bullet and say that if we matter in our own right, then the non-conscious intelligence must matter in its own right, too.

    6. Neil Rickert

    Thank you for your post. You wrote:

    Personally, I do not believe that Machine 2 is sapient or intelligent.

    OK. Why not? (I agree with you, of course, but I’m not a materialist.) You also wrote:

    The harder question is about Machine 1. However, I doubt that Machine 1 will ever exist.

    Why? I’m just curious, that’s all.

    mike1962, following up on your comment, asks:

    I, like Neil Rickert, doubt such a machine will ever exist. But if it did, how could we tell? We suffer because we are conscious. How could we determine if a machine is conscious or not? Hell, I can’t even tell if anyone besides myself is conscious or not, let alone a machine, and neither can neuroscientists.

    Two quick points. First, neuroscientists have a fairly reliable set of neural indicators for consciousness, which work in the vast majority of human cases (PVS is a bit of a gray zone, however). Second, Wittgenstein’s private language argument suffices to refute any notion that you and only you might be conscious. If that were the case, then you couldn’t meaningfully be said to follow any rules (e.g. rules of discourse, or rules of a game), as there would be no standpoint from which you could ascertain whether you’d followed them correctly or not. A community of other minds provides such a standpoint – that’s why we have soccer referees. Rules can only be followed and checked within a community.

    The problem of ascertaining whether a machine is conscious is formidable. But I submit that if a machine could churn out Proust-like volumes describing its inner experiences in great depth as it performed mundane tasks, and if it could do a better job of introspecting than I could, then I would seriously start wondering.

    Another way of ascertaining whether a machine is conscious would be to set it problems that only a being with a capacity for stepping into other people’s shoes could solve. For example, this old conundrum:

    You’re in a room with two doors and two identical men whom you cannot tell apart. One of the men lies all the time, and the other always tells the truth. Behind one door there is a lion who will eat you no matter what, and the other door leads to a way out. You can ask one of the men a single question to get you out. What question do you ask to keep yourself from getting killed?

    (Answer: ask him, “Which door would the other man say is the safe one?” Then take the other door.)
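    The reasoning can be checked mechanically, too. This short Python sketch (purely illustrative) enumerates all four cases (whichever door is safe, and whichever man you happen to ask) and confirms that the door NOT named is always the safe one:

    ```python
    # Brute-force check of the riddle: whichever man you ask, the door he
    # names in answer to "Which door would the other man say is the safe
    # one?" is always the unsafe door, so you take the other one.
    DOORS = ("left", "right")

    def other(door):
        """The door that is not `door`."""
        return DOORS[1 - DOORS.index(door)]

    def named_door(asked_is_liar, safe_door):
        """Door named in reply to the question above."""
        # What the other man would claim is safe:
        others_claim = safe_door if asked_is_liar else other(safe_door)
        # The man you asked reports that claim truthfully or falsely:
        return other(others_claim) if asked_is_liar else others_claim

    for safe_door in DOORS:
        for asked_is_liar in (False, True):
            assert other(named_door(asked_is_liar, safe_door)) == safe_door
    print("In all four cases, the door NOT named is the safe one.")
    ```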

    Incidentally, this invites an interesting philosophical question: are there problems which only a being with a capacity for empathy can solve? If the answer is “Yes”, then Machine 2 is not smarter than human beings in all respects, after all. At best, it’s smarter than human beings about “third-person” states of affairs. Problems requiring empathy it would flub.

    7. Scott Andrews (and thud)

    I’m a great Douglas Adams fan. The guy’s hilarious.

    I also don’t eat meat, although I now eat fish after going nearly 20 years without it. This post wasn’t meant to be about vegetarianism as such, but briefly, I would say that any cow that wanted to be eaten would be envisaging a future state in which the eaters feel sated after having eaten it. In other words, the desire to be eaten presupposes a capacity for abstract thinking, which cows lack.

    Thanks for the hypotheticals.

    8. bornagain77

    Thanks as always for the vote of support. Much appreciated.

  22. mike1962 says:

    “Two quick points. First, neuroscientists have a fairly reliable set of neural indicators for consciousness, which work in the vast majority of human cases (PVS is a bit of a gray zone, however).”

    I disagree. They make assumptions that consciousness exists outside themselves in the first place. A leap of faith is required. I submit this leap of faith is hardwired into most people so that the alternative seems “ridiculous.” We’re hardwired against the notion of solipsism with regards to consciousness.

    “Second, Wittgenstein’s private language argument suffices to refute any notion that you and only you might be conscious.”

    I disagree. Nothing in Wittgenstein’s argument requires consciousness, only a “mind” that can “map words to ideas, concepts or representations.” It has not been conclusively demonstrated that consciousness is required for such a mind. (Nor that consciousness exists outside of one’s own mind.)

  23. markf says:

    vj

    1) You write:

    Rather, my point was that if a capacity for empathy is the sole basis for moral behavior on our part, then it follows that any being (such as Machine 2) which has no feelings (as it has not been hard-wired for emotions) and no capacity for empathy of its own will be a being with whom we cannot empathize, and we will therefore dismiss it as morally insignificant.

    I believe that machine 2 is morally insignificant and that empathy is not the sole cause of moral behaviour (although the most important). However, it is worth noting that your argument is not valid. Empathy might be the sole basis for causing moral behaviour but that doesn’t mean that the objects of moral behaviour have to be capable of empathy – see my example of cats.

    2) You ask:

    My question for you is: is empathy the sole legitimate cause, on your account? Do you think it could ever be appropriate for us to try to behave ethically towards beings with whom we cannot empathize in principle, because they have no feelings of any sort? If not, why not?

    I see empathy as the main (but not sole) cause of moral behaviour. This is just an observed fact about human nature – so there is no question of whether it is legitimate or not. So it would be weird and against normal human nature to behave ethically towards something we cannot show empathy towards, such as machine 2. I would probably disagree strongly with someone who showed such behaviour.

    3) You ask:

    why should we only value beings that can suffer? Why shouldn’t we value beings that can think, even if they can’t suffer? Isn’t the ability to think equally precious? Isn’t it the height of absurdity to claim that it’s wrong to kill a sparrow for the fun of it, but that it’s perfectly OK to destroy a being with an intelligence that would dwarf Einstein’s, just for the fun of it?

    We might value a thinking but non-sentient machine as useful or for aesthetic reasons. I could not understand someone who felt a moral need to preserve such a machine. It is just a fact of human nature that we only have moral feelings towards sentient beings.

    I am conscious there is some repetition in my answers – sorry not enough time to edit my response.

  24. Neil Rickert says:

    me: Personally, I do not believe that Machine 2 is sapient or intelligent.
    vjtorley: OK. Why not?

    Based on my own study of cognition, the problems that have to be solved are not computational problems, and thus a computer-style solution cannot work.

    me: The harder question is about Machine 1. However, I doubt that Machine 1 will ever exist.
    vjtorley: Why? I’m just curious, that’s all.

    As I see it, conscious systems cannot be designed. They need to be highly adapted to their environment, and you can only get that with something akin to biological development. Roughly speaking, you need a system that designs itself in situ, rather than something from an external designer.

  25. DrREC says:

    ME:
    “I don’t think salt crystals or my TI-85 calculator will ever perform the same kind of calculations the human mind can. Ever.”
    You:
    “Then you disagree with Dr. Stephen Wolfram’s Principle of Computational Equivalence, which says that “there is essentially just one highest level of computational sophistication, and this is achieved by almost all processes that do not seem obviously simple””

    I think the key phrase you’re missing is “obviously simple.” The notion that a salt packet from McDonald’s or my TI calculator in a desk drawer is on a pathway to sentience is absurd. It takes selective skepticism to accept that as a possibility, but not evolution.

    Your conclusion that “it follows that that there is no fundamental ethical difference betwen human beings and crystals” (I think you mean between, spelling police that you are) requires the absurdity of treating any thing with any probability of ever developing sentience as equal to human. Considering most of us don’t even treat animals (much closer to sentience than salt) equally, this criterion doesn’t seem to be one that is largely accepted. There is also a huge bright line that is more important to not just me, but many of the posters here: sentient self-awareness.

    “The whole point of my post was to argue that it is ethically blinkered to claim that only sentient beings matter; surely intelligent beings do too.”

    OK, but even if we accept that, “it follows that that there is no fundamental ethical difference betwen human beings and crystals” does not logically follow.

  26. William J Murray says:

    Am I the only person here who considers it wrong to damage any functional mechanism (alive or not, conscious or not) for no good reason?

  27. ScottAndrews says:

    This may be splitting hairs, but I think everyone assumes their reasons are good. Like if I have a bazillion dollars and I buy a car so I can destroy it with a sledgehammer, is my entertainment a good reason?
    What if the bazillionaire allows a perfectly good car to rust away unused and neglected because he has other cars and doesn’t care about that one? Does that count as damaging it?

  28. William J Murray says:

    “For my entertainment” is not a good moral reason for anything, and neglect does count as damaging it.

  29. William J Murray says:

    I guess my point here is that I’m a theist and my morality is not based on empathy. I have virtually no empathy. It’s not wrong to torture infants because it pulls my heartstrings and makes me feel sad; it’s just obviously wrong.

    It’s also obviously wrong to destroy stuff for no good reason, whether that stuff is sentient and can feel pain or not.

  30. ScottAndrews says:

    I didn’t mean to follow this down a slope, but the answers lead me to more questions.
    What if I’m bored so I drive around with no purpose, placing unnecessary wear on the car and risking a flat tire or other damage that would not have occurred if I stayed home?

  31. ScottAndrews says:

    I mostly agree. We should not behave morally only when we feel empathy.

    But I don’t follow on not destroying objects. If it violates your conscience then it is wrong for you. And breaking stuff for no reason (or doing anything for no reason) doesn’t make sense. It even sounds wrong, but I struggle to think of what moral law it violates. Perhaps it wrongs the potential beneficiary of the object? If it had no value to one person then why not give it to someone else?

    Nonetheless, as I look at the cell phone on my desk, I think that to start smashing it on the desk until it broke would be idiotic, but not immoral. It’s my cell phone.

  32. William J Murray says:

    It’s my cell phone.

    That’s where you and I disagree. In my theism, everything is God’s. I look at everything which I have as God’s, and that it has been given to me to help me serve the purpose I was created for. Destroying or harming anything God has provided me without good reason is probably wrong, something I should avoid.

    As a general rule, I consider any destructive tendencies or impulses immoral, and indulgence in them moves me farther from God, not closer.

  33. Eugene S says:

    “but I struggle to think of what moral law it violates.”

    Let us listen to the wise.

    St Abba Dorotheus of Gaza, AD 505-565.

    “Conscience should be guarded towards God, towards one’s neighbour and towards things. In relation to God, he guards his conscience who does not neglect God’s commandments and who, even in things not seen by men and that no one demands of us, guards his conscience towards God in secret. Guarding conscience towards our neighbour demands that we should never do anything which, to our knowledge, would offend or tempt him, whether by word or deed, look or expression. Guarding conscience towards things means not to misuse a thing, nor let it be spoiled nor throw it away needlessly. In all these respects conscience should be kept pure and unblemished, lest one should fall into the calamity against which the Lord warns us (Matthew 5:26).”

    From “Directions on the Spiritual Life”, more here.

  34. ScottAndrews says:

    I guess that’s what we have a conscience for, so that every single thing doesn’t have to be in writing. I couldn’t think of anything in the Bible addressing waste except perhaps the account of Onan, but that was more about what he didn’t do than what he did do.
    Then again, perhaps people weren’t inclined to waste then so there was no need to address it. Like how there’s nothing in the Bible about crack cocaine or a million other specific things.

    Here’s a thought – maybe it falls under being grateful. Destroying something doesn’t show gratitude.

  35. William J Murray says:

    Eugene S:

    Thank you for that wonderful quote and reference.

    ScottAndrews:

    Yes, a heart full of gratitude and appreciation should keep us from wantonly destroying or harming anything that we either can avoid or have no good reason for. I may not have much in the way of empathy, but I have an acute sense of gratitude and appreciation for both the living and non-living wonders and mechanisms this world abounds with.

  36. vjtorley says:

    markf

    Thank you for your post. You wrote:

    …[I]t would be weird and against normal human nature to behave ethically towards something we cannot show empathy towards, such as machine 2. I would probably disagree strongly with someone who showed such behaviour…

    We might value a thinking but non-sentient machine as useful or for aesthetic reasons. I could not understand someone who felt a moral need to preserve such a machine. It is just a fact of human nature that we only have moral feelings towards sentient beings.

    Why does moral behaviour have to be based on feelings? Why can it not be based on an understanding of one’s duty to others?

    You seem to be defining ethical behaviour as behaviour which is based on, or grounded in, a feeling of empathy with the individual who is the target of one’s actions. You see someone suffering, and your impulse is to help them. I can understand that.

    But what if one defined ethical behaviour more broadly, as any behaviour which is intended to promote the good of the recipient, or benefit the recipient in some way? On this broad definition, it is by no means clear why I should confine my ethical behaviour to entities that are capable of having feelings. Machine 2 might therefore be a worthy recipient – especially as it would benefit in many ways by not being destroyed: there is much that it could accomplish.

    (Of course, as an anti-materialist, I find the very notion of speaking of a non-living entity as “benefiting” to be absurd, as I draw a distinction between the intrinsic finality of an organism and the extrinsic finality of a machine. But that’s another topic.)

    That’s all for now; I’ll be back later.

  37. markf says:

    But what if one defined ethical behaviour more broadly, as any behaviour which is intended to promote the good of the recipient, or benefit the recipient in some way? On this broad definition, it is by no means clear why I should confine my ethical behaviour to entities that are capable of having feelings. Machine 2 might therefore be a worthy recipient – especially as it would benefit in many ways by not being destroyed: there is much that it could accomplish.

    Of course you can define ethical behaviour in any way you like. This definition would include servicing my car which is intended to benefit the car.

  38.

    I agree with all the above 🙂

  39. Eugene S says:

    Are you sure you agree with *all* the above? 🙂

  40. vjtorley says:

    Hi markf,

    All right. Let me put it another way. According to Stephen Wolfram, our universe is chock-a-block full of systems that could be described as natural computers – and what’s more, they’re universal Turing machines at that, which is as good as it gets. The only differences between them and us, according to Wolfram, are ones of degree.

    Among all these natural computers, you seem to be saying that the only ones we should worry about are the tiny, tiny fraction that happen to be (a) alive and (b) sentient, possessing what Arnold Lunn, in a completely different context, once referred to as “funny internal feelings” or “fif”. The rest don’t matter in their own right; we have no duties towards them. I find that odd.

    Take fish. Neurologists are unanimous that they aren’t conscious: they lack the neural wherewithal for conscious experiences. However, they have remarkable cognitive abilities. For instance, features such as individual recognition, acquisition of new behaviour patterns by observational learning, transmission of group traditions, co-operative hunting, tactical deception (cheating), tit-for-tat punishment strategies, reconciliation, altruism and social prestige, formerly thought to be unique to primates or at least mammals, can all be found in fish societies. (Bshary R., Wickler W. and Fricke H. 2002. “Fish cognition: a primate’s eye view.” In Animal Cognition, vol. 5, March 2002, pp. 1-13.) You’re saying we have no duties towards them whatsoever?

    Or take bees, which are apparently capable of insight learning and of solving delayed matching to sample tasks – and even more remarkably, delayed non-matching to sample. I discussed bees’ remarkable feats in my thesis.

    It’s cases like these that make me doubt the wisdom of an ethic that grounds our moral behaviour to others exclusively in our feelings of empathy towards them.

  41. vjtorley says:

    Hi William J Murray,

    Actually, I think you’re probably right about the immorality of destroying a thing of beauty (alive or not) for no good reason. However, the fact that destroying such a thing is wrong does not necessarily imply that in doing so, we are wronging that thing. One could simply say that we are wronging God, who gave it to us and/or allowed us to enjoy it.

  42. vjtorley says:

    Hi Dr.REC,

    Thank you for your post. I’m certainly not arguing that a salt crystal is on the road to sentience. What Wolfram claims is that salt crystals and many other inorganic systems can be used to perform the same kinds of computations that the brains of sentient beings perform. Computationally, there’s no bright line between sentient and non-sentient beings; and theoretically, there could be non-sentient beings whose computational powers exceed those of sentient beings. You might want to say we should ignore a being’s computational abilities and focus exclusively on its capacity to have subjective experiences when deciding whether it matters and how we should behave towards it, but I think that’s an odd position for a materialist to take.

    If you look at my reply to markf in 16.1, you’ll see that fish and bees have some pretty remarkable cognitive abilities, despite lacking sentience. I find it puzzling that a materialist who believes in behaving ethically would want to maintain that we may do as we please with these creatures, without wronging them.

    Wolfram’s animism (see http://www.wolframscience.com/nksonline/page-845 ) is much more consistent with materialism than utilitarianism is, in my opinion.

  43. vjtorley says:

    Neil Rickert:

    Interesting. Are you saying, then, that the environmental problems (e.g. food shortages) that conscious organisms learned to solve as they evolved are not computational problems? What are they then? I mention this because I’ve always believed that we could (at least in theory) design a computer capable of solving our current environmental problems.

  44. vjtorley says:

    mike1962

    I see. So you’d distinguish the “problem of other minds” from the “problem of other consciousnesses”, then? Interesting. By the way, what do you think of Descartes’ solution?

  45. markf says:

    vj

    I think the question of whether fish and other creatures can suffer is still very open. To me this is absolutely key as to whether angling, for example, is moral.

    However, I did say that while empathy is the most important cause of ethical behaviour (and therefore the ability to suffer or be happy the most important criterion for whether something should be the object of ethical behaviour) it is not the only cause. The exact definition of “ethical” is not precise – some people think that extraordinary courage in battle on behalf of your country is ethical, others not. Like many everyday concepts there are woolly edges and a firm core. The belief that we should be kind to machine 2 on account of its complexity falls into these woolly edges – witness the disagreement on this very forum. However, compassion towards those that are capable of suffering and pleasure is absolutely core to what counts as ethical in the usual use of the word. Anyone who did not think it ethical to relieve suffering in other creatures is using the word in a manner I don’t recognise.

  46. mike1962 says:

    Solution to what?

  47. Neil Rickert says:

    Are you saying, then, that the environmental problems (e.g. food shortages) that conscious organisms learned to solve as they evolved are not computational problems?

    Yes, that is part of what I am saying. Computation is only a tool. Humans, working with the environment, might find that tool useful. But the computation by itself won’t achieve much.

    We can imagine pumping all of the neural signals from all of the stone age people into a super-super computer. And then we can imagine running the most powerful search program possible, to search for patterns in that input. That super computer is not going to come up with agriculture, for there is no such pattern to be found in those inputs. However, people eventually did come up with agriculture.

  48. vjtorley says:

    Hi mike1962,

    I was referring to Descartes’ solution to his skeptical doubts, which included doubts about the existence of other minds. In a nutshell, his solution is that God cannot deceive. See here:

    http://home.wlu.edu/~mahonj/Descartes.M3.God.htm
    http://www.iep.utm.edu/descarte/#SH6a

  49. vjtorley says:

    Hi markf,

    I would certainly agree with your last sentence.

  50. markf says:

    Well here’s something to think about. We appear to use the word “ethical” to mean much the same thing. I certainly don’t mean – “according to what God says is right”. So whatever we both mean by that word the definition does not include any reference to deity.

  51. mike1962 says:

    I see. To answer, I don’t think much of it.
