Intelligent Design

How Do You Know an Artificial Intelligence Advocate is Shining You On?


When they say they “know” that an AI machine is conscious.  How can I be so sure?  Easy.  As I have discussed before, we cannot in principle “know” that even other humans are conscious; far less can we know that an AI is conscious.  By its very nature, consciousness, as evidenced by subjective self-awareness, can be known for certain only by subjective experience.  It is self-evident to a person who is subjectively self-aware that he is conscious.  Indeed, this has been called the “primordial datum” – “me” and “not me” exist – from which all other knowledge proceeds. 

By definition, I can have subjective experience only of my own self.  I cannot be subjectively self-aware of any other self.  It follows that I can be certain only of my own consciousness.  Now, as KF points out, knowledge and certainty come in degrees.  We infer others’ consciousness to a very high degree of certainty.  We are indeed, “morally certain” that other people are not mere meat machines.  Which is why materialists – unless they are also psychopaths – can never live their lives as if their basic metaphysical commitments are true.  Because if materialism means anything, it means that we are in fact meat machines without free will, programmed by evolution to act as we act.

Back to the current state of AI.  If a machine were ever to pass the Lovelace Test, maybe we would have something to talk about.  But until Commander Data shows up, it is safe to say that we do not have warrant to declare that any machine is conscious. (BTW, there is a reason Star Trek: The Next Generation is classified as “fiction.”)

9 Replies to “How Do You Know an Artificial Intelligence Advocate is Shining You On?”

  1. Ed George says:

    As I have discussed before, we cannot in principle “know” that even other humans are conscious; far less can we know that an AI is conscious.

    At most, all we may be able to say, in the distant future, is that we can’t conclude that an AI isn’t conscious. But a bigger question, if that day ever happens, is: would we be morally bound to extend some basic rights to them?

  2. Barry Arrington says:

    No, Ed, the bigger question for a materialist is why don’t we extend rights to computers and robots now? After all, they are merely machines and, according to materialism, we are merely meat machines. Under materialism, how do you justify carbon-based machines discriminating against silicon-based machines? Yeah, they are not very bright now. But even the least intelligent human has basic rights that we respect.

  3. Ed George says:

    BA

    No, Ed, the bigger question for a materialist…

    I’m not talking about the biggest question for a materialist (which I am not); I’m talking about the biggest question for a theist. As theists, if we ever get to the point where we can’t say with certainty that an AI isn’t conscious, do we have a moral obligation to extend some rights to the AI? After all, you have already admitted that we can’t say with certainty that other humans aren’t conscious (or that they are conscious). And we extend them rights.

  4. Barry Arrington says:

    Ed said that I said: “After all, you have already admitted that we can’t say with certainty that other humans aren’t conscious (or that they are conscious).”
    What I actually said: “We infer others’ consciousness to a very high degree of certainty. We are indeed, “morally certain” that other people are not mere meat machines.”
    This makes Ed a liar. Ed, what I said is right up there for everyone to see. If you are going to tell lies, at least don’t insult our intelligence when you are doing it.

  5. Ed George says:

    Barry@4, I apologize if I misinterpreted your response when I paraphrased it. I think that calling someone a liar for an innocent error is a little harsh.

    But my question still stands. If we get to the point where we can’t know for certain that an AI is not conscious, are we morally obliged to extend them some rights? Or, flip it around. If we ever get to the point where we are as certain as we can possibly be that AIs are conscious, are we morally obliged to extend them rights?

  6. Barry Arrington says:

    Ed,
    An innocent mistake is missing a nuanced shade of meaning. Saying that I said diametrically the opposite of what I said is culpable behavior. I grant that perhaps you are merely reckless. Either way, your effort to minimize your act with the epithet “innocent” only compounds it.

  7. Ed George says:

    BA@6, you are entitled to believe whatever you like. But your avoidance of my question is obvious.

  8. Barry Arrington says:

    Ed
    “you are entitled to believe whatever you like.”
    You say that as if there is some doubt about whether you recklessly or intentionally distorted what I said. Of course, there is not. You have admitted that. You are truly shameless.

  9. ET says:

    Ed:

    But a bigger question, if that day ever happens, is: would we be morally bound to extend some basic rights to them?

    Like the rights of a pet? Or our food? Or zoo animals?

    I would say we have to wait until that day comes and cross that bridge when we get to it.
