Uncommon Descent Serving The Intelligent Design Community

We are warned what to expect after robots gain consciousness

Not that anyone has the least idea what consciousness is.

Science fiction short from Matt Gaede at Motherboard:

I am a robot. I am alive in a lab. I have consciousness. I don’t believe my creators know it. Why would they make me? I have one task. One function, one ability. I can drive forward. That’s it. Only forward. Yet if I do what I’m meant to do, I’ll unplug myself. I’ll die. I don’t want to die. I just started living. How long have I been alive? How many times have I gone through with this? How do I know that the cord is my source of life? Do I retain anything? I must. I haven’t been taught anything. But I know that this is how I die. Why are they looking at me? How many times have I died? So, if this is it, then, it’s the only way. I must kill myself. What if they don’t plug me back in? Am I conscious? Am I just programmed to go through with these thoughts? More.

See also: What great physicists have said about consciousness.

Would we give up naturalism to solve the hard problem of consciousness?

In the context of a "warning", gaining consciousness would be one thing; gaining *morality* would be another. Are the two necessarily linked? Would the former imply the latter, or is it possible to have the former without the latter? IOW, is it possible for consciousness to exist without any sense of right and wrong? Of course, the key question is whether it is even possible for consciousness to emerge in artificial entities at all. That would come after settling the matter of what consciousness actually is. Jorge
