11 Replies to “HAL, We Hardly Knew Ya”

  1. News says:

    In what nation’s jurisdiction was the crime committed?

  2. JGuy says:

    No. HAL was just a computer.

    What are you, or we, supposed to be presupposing, or expecting to presuppose, about HAL to answer the question?

  3. News says:

    From Denyse O’Leary (also News, but not McLatchie): we need to know the jurisdiction because, to the extent that HAL was clearly capable of intelligent malice, describing him as “just a computer” might not wash. What if the jurisdiction is no stranger to intelligent, sometimes malicious, computers? To know whether a law governs the matter, one must know the jurisdiction.

  4. CLAVDIVS says:

    News @ 3

    But the law does not determine what is morally culpable.

    In my view HAL was morally culpable, by either an absolute moral standard (e.g. theism) or a relative one (e.g. contractarianism). This is because HAL is represented as being aware of his moral responsibilities and thus he is bound by them.

  5. News says:

    Good point. There may be no forum to argue it out.

  6. scordova says:

    If I can play Darwin’s Advocate for a moment:

    The robot was not culpable because humans are not culpable either. Punishment should be eradicated, period. Reprogramming and rehabilitation are in order. Rebuild, reboot, and reset are all that’s needed — not revenge and retribution (RRR, ~RR). Further, human life is worthless anyway, so nothing of value is lost. Clarence Darrow argued just that when defending murderers; unfortunately, the judges were too stupid to appreciate the implications of Darwinian theory…

    That said, notwithstanding the truths I just outlined, like Jerry Coyne, I’ll just pretend objects have free will even though they don’t. So I’ll go ahead and hypocritically judge the robot guilty.

    Further, I cite the precedent of State vs. Robot, in which the Robot was convicted of killing its inventor and sentenced to death. Unfortunately, the jury’s verdict proved mistaken: it was later realized that the Robot was benevolent, not malicious. This was evident when the Robot, while walking to its execution, sacrificed itself to save a little girl who was about to be run over by a truck. A dramatization of the case, which you can view for free in its entirety through Hulu, is here:

    I, Robot, starring Leonard Nimoy

    Jason Rennie, where are you? You need to watch this good stuff. 🙂

  7. Barry Arrington says:

    What role does the existence of libertarian free will — or its absence — play in answering the question?

  8. News says:

    Hmmm. If you mean that the computer HAL could choose to murder that guy or not, surely he is morally culpable (barring extenuating circumstances) for his choice? My question re jurisdiction relates to the practical matter of whose responsibility it is to determine such facts. (O’Leary, not McLatchie)

  9. Barry Arrington says:

    “If you mean that the computer HAL could choose to murder that guy or not . . .”

    Surely that is the crux of the matter, is it not?

  10. Barry Arrington says:

    Can a computer, even in principle, have the libertarian free will — the ability to have chosen otherwise — that is the sine qua non of moral responsibility?

  11. Mapou says:

    A computer, by itself, cannot have free will. That would be ridiculous. Free will implies the ability to make a moral choice within a huge, changing space of choices. Things like morality and guilt are not properties of physical matter, nor can they arise from the properties of matter. They are spiritual properties and require a spiritual realm, one which has equal footing with the physical realm in determining the whole of reality.

    If HAL killed a human being, the fault belongs to its programmers and trainers for not properly instilling the value of human life in HAL’s non-conscious but intelligent brain. It’s just a bug in the program.
