From cognitive roboticist Murray Shanahan at Aeon:
In 1984, the philosopher Aaron Sloman invited scholars to describe ‘the space of possible minds’. Sloman’s phrase alludes to the fact that human minds, in all their variety, are not the only sorts of minds. There are, for example, the minds of other animals, such as chimpanzees, crows and octopuses. But the space of possibilities must also include the minds of life-forms that have evolved elsewhere in the Universe, minds that could be very different from any product of terrestrial biology. The map of possibilities includes such theoretical creatures even if we are alone in the Cosmos, just as it also includes life-forms that could have evolved on Earth under different conditions.
We must also consider the possibility of artificial intelligence (AI). Let’s say that intelligence ‘measures an agent’s general ability to achieve goals in a wide range of environments’, following the definition adopted by the computer scientists Shane Legg and Marcus Hutter. By this definition, no artefact exists today that has anything approaching human-level intelligence. While there are computer programs that can out-perform humans in highly demanding yet specialised intellectual domains, such as playing the game of Go, no computer or robot today can match the generality of human intelligence.
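For readers who want the formal statement behind that quotation: Legg and Hutter's 2007 paper "Universal Intelligence: A Definition of Machine Intelligence" makes the informal definition precise as a weighted sum of an agent's performance across all computable environments. A sketch of their measure (the notation below follows their paper, not Shanahan's essay):

```latex
% Legg-Hutter universal intelligence of an agent \pi:
% a sum over all computable environments \mu, weighted so that
% simpler environments (low Kolmogorov complexity K(\mu)) count more.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Here E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_{\mu}^{\pi} is the expected total reward agent \pi earns in \mu. Because K is uncomputable, the measure is a conceptual yardstick rather than a practical test, which is consistent with Shanahan's point that generality, not narrow skill, is what today's systems lack.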
His musings go well beyond attempts to understand the minds of animals:
The likelihood of humans directly encountering extraterrestrial intelligence is small. The chances of discovering a space-borne signal from another intelligent species, though perhaps greater, are still slight. But artificial intelligence is another matter. We might well create autonomous, human-level artificial intelligence in the next few decades. If this happens, the question of whether, and in what sense, our creations are conscious will become morally significant. But even if none of these science-fiction scenarios comes about, to situate human consciousness within a larger space of possibilities strikes me as one of the most profound philosophical projects we can undertake. It is also a neglected one. With no giants upon whose shoulders to stand, the best we can do is cast a few flares into the darkness.
We haven’t established whether AI entities would have internally generated purposes, which makes it difficult to say what consciousness would even mean in their case.
See also: Neuroscientist: Philosophers have made the problem of consciousness unnecessarily difficult
What can we hope to learn about animal minds?
Dawkins: Maybe the hard problem of consciousness can never be solved
and
Would we give up naturalism to solve the hard problem of consciousness?