Uncommon Descent Serving The Intelligent Design Community

Consciousness in radically different non-human minds?


From cognitive roboticist Murray Shanahan at Aeon:

In 1984, the philosopher Aaron Sloman invited scholars to describe ‘the space of possible minds’. Sloman’s phrase alludes to the fact that human minds, in all their variety, are not the only sorts of minds. There are, for example, the minds of other animals, such as chimpanzees, crows and octopuses. But the space of possibilities must also include the minds of life-forms that have evolved elsewhere in the Universe, minds that could be very different from any product of terrestrial biology. The map of possibilities includes such theoretical creatures even if we are alone in the Cosmos, just as it also includes life-forms that could have evolved on Earth under different conditions.

We must also consider the possibility of artificial intelligence (AI). Let’s say that intelligence ‘measures an agent’s general ability to achieve goals in a wide range of environments’, following the definition adopted by the computer scientists Shane Legg and Marcus Hutter. By this definition, no artefact exists today that has anything approaching human-level intelligence. While there are computer programs that can out-perform humans in highly demanding yet specialised intellectual domains, such as playing the game of Go, no computer or robot today can match the generality of human intelligence.

His musings go well beyond attempts to understand the minds of animals:

The likelihood of humans directly encountering extraterrestrial intelligence is small. The chances of discovering a space-borne signal from another intelligent species, though perhaps greater, are still slight. But artificial intelligence is another matter. We might well create autonomous, human-level artificial intelligence in the next few decades. If this happens, the question of whether, and in what sense, our creations are conscious will become morally significant. But even if none of these science-fiction scenarios comes about, to situate human consciousness within a larger space of possibilities strikes me as one of the most profound philosophical projects we can undertake. It is also a neglected one. With no giants upon whose shoulders to stand, the best we can do is cast a few flares into the darkness. More.

We haven’t established whether AI entities would have internally generated purposes, which makes it difficult to say what consciousness would even mean in their case.

See also: Neuroscientist: Philosophers have made the problem of consciousness unnecessarily difficult

What can we hope to learn about animal minds?

Dawkins: Maybe the hard problem of consciousness can never be solved


Would we give up naturalism to solve the hard problem of consciousness?

"AI entities with internally generated purposes..." Starting down that trail immediately leads to the basic question. Life is purpose. Each gene and protein is a purpose. Where would the genes of an AI entity GET their purpose? There's no way they could independently generate a purpose, because there is no underlying purpose for the generating. The purpose of purpose is purpose. Life must have gained its initial purpose from outside life, just as AI must gain its initial purpose from outside itself. And there we reach the uncaused cause, or more precisely the uncaused causer of cause. Full stop. Search cannot proceed beyond this point. polistra
@Mung "...consciousness is radically different even in human minds..." You are not getting consciousness mixed up with conscience, are you? Some would have to have the latter first... ;-) J-Mac
Based on my observations, consciousness is radically different even in human minds. Mung
