News

Sentient robots not possible


Mathematicians say:

Crucially, this type of integration requires loss of information, says Maguire: “You have put in two bits, and you get one out. If the brain integrated information in this fashion, it would have to be continuously haemorrhaging information.”

Maguire and his colleagues say the brain is unlikely to do this, because repeated retrieval of memories would eventually destroy them. Instead, they define integration in terms of how difficult information is to edit.
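
The “two bits in, one bit out” point can be illustrated with a toy example (mine, not from Maguire’s paper): XOR integrates two input bits into a single output bit, and the mapping cannot be inverted.

```python
# Toy illustration (not from Maguire's paper): XOR "integrates"
# two input bits into a single output bit.
def integrate(a: int, b: int) -> int:
    return a ^ b

# Distinct input pairs collapse onto the same output, so the inputs
# cannot be recovered from the result: one bit of information is lost.
assert integrate(0, 1) == integrate(1, 0) == 1
assert integrate(0, 0) == integrate(1, 1) == 0
```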

41 Replies to “Sentient robots not possible”

  1. 1
    Neil Rickert says:

    Your title says too much.

    I am quite skeptical that there will be sentient robots. However, Maguire only demonstrates a problem with Integrated Information Theory (IIT). As best I can tell from the abstract, he does not prove that there could not be alternative approaches.

  2. 2
    fifthmonarchyman says:

    How is “non-lossy integrated information” different from Irreducible Complexity?

    If I’m right and they are equivalent then Maguire’s discovery is more profound than even he realizes.

    He may have just proved that IC is non-computable and therefore cannot be produced by an algorithmic process like RS/NM.

    I hope some big brain will explore those implications

    Peace

  3. 3
    fifthmonarchyman says:

    Of course I meant RM/NS.

    Chalk it up to excitement and lack of sleep.

    Please don’t judge the value of the thought by my poor articulation of it.

    peace

  4. 4
    Robert Byers says:

    I think our minds are just priority memory machines. The bible hints at this. Our soul is what does the thinking etc.
    No matter how much memory is in a machine it never becomes alive.
    Our memories are just facts. They mean nothing without a thinking being putting them in context.

  5. 5
    Dionisio says:

    Is this OP somehow related to the hopes of achieving the ultimate goal of the so-called ‘strong AI’ or AGI?

  6. 6
    Tim says:

    Really, people! Is this not ground already covered? When Turing theorized UTMs and we encountered the Halting Problem, ALL STRONG AI died. End of story. It is silly to talk about machines that can fool people, fuzzy logic, quanta and the like. Speed and power of the computing machine is not the issue.

    Easy example: Deep Blue can sink most grandmasters, but Deep Blue can’t beat my 6-year-old nephew who has just grabbed a salt shaker and a spool of thread for two extra queens. Deep Blue can’t tell when it’s been cheated, not until, like everything else, it has been programmed to do so.

    The crux of the matter is that for non-trivial programs the question is undecidable. Simply put: not only does the machine not “know” if it should stop, it doesn’t know, indeed it can’t know, if it has stopped.

    The fact that people are intuitively aware of the answers to such questions points to the inevitable: when considering our brains, whatever they are, they are more than physical embodiments of the UTM. There is a ghost in the machine.

    The only other option is that our awareness of the reality around us is only and merely an illusion. In other words, it is the awareness that is illusory. This may work for some, truly uninitiated, materialists — so wedded are they to the view that there are no metaphysics. Of course they stumble a bit when they are confronted with Turing.

    The Halting Problem tells us that UTM cannot even have the illusion of awareness.

    It would be nice if an advocate of strong AI who believes computers modeled on our brains will someday be sentient would simply address the implications of UTM; everything else in this endeavor rides on it. You know, “Hi, I am an AI advocate and Turing was just plain wrong.”

    First the drum roll . . .

    Then,

  7. 7
    Tim says:

    [crickets chirping]

  8. 8
    gpuccio says:

    Tim:

    Very well said!

  9. 9
    Joe says:

    Sentient robots are not possible now, but don’t confuse that for never.

  10. 10
    phoodoo says:

    I would put my bets on never.

    We will never get consciousness from a machine. It’s one of the best pieces of evidence for a designed world: consciousness simply wouldn’t exist if all of matter were just dust in chaos.

    The fact that things live and think is unexplainable under Darwin. Why isn’t a sticky goo, which poisons all that attempts to breed, the best surviving chemical on the planet?

  11. 11
    Joe says:

    So only God or the designer can create/ design sentient beings? And we can never figure out how to do such a thing?

  12. 12
    phoodoo says:

    Joe,

    We may be able to do it someday, but it will only be by making life, it won’t happen with a machine.

    Can we ever create a living cell from non-life? Probably not, but if we do, we will simply copy what cells already do, by just using existing cells to make new ones.

  13. 13
    Acartia_bogart says:

    We will never be able to create sentient robots because if we ever get close, we will simply change the definition of what sentience is.

    There are plenty of examples of this. Humans are different because we can use tools. Oops, some other animals can use tools. Humans are different because we can modify tools. Oops, some animals can do this as well. Humans are different because we can teach learned behaviours to others. Oops, other animals can do this. Humans are different because we can use language. Oops, other animals can do this as well.

    Let’s face it, we are no different (better or worse) than other animals.

  14. 14
    Upright BiPed says:

    bogart,

    Humans are the only organisms that create thermodynamically-inert patterns and arrange them in iterative dimensional representations. This act has incredibly steep organizational requirements in order to function. Among other material conditions, such representations require two sets of independent protocols to produce physical effects from them.

    The only demonstrated instances of this phenomenon are in language, mathematics, and in the encoding/translation of the genetic code.

  15. 15
    Upright BiPed says:

    Also quickly, there is nothing in ID theory that requires humans to be anything other than the animals they are. So whatever point you are trying to make will have no effect on the validity of ID observations.

  16. 16
    EDTA says:

    @13

    Yes, we’ve predicted a few too many things as being human-only, and had to retract them later. But I’d wager a few bucks on the following ones: 1) Only humans improve tools using other tools in a chain that has remained largely continuous for 2K years. 2) Only humans record their thoughts in writing/printing so that each generation can build on the knowledge of the previous–and has done so for over 2K years. 3) No other species has as much knowledge of its own species’ physiology. OK, now go out to the jungle and find another species that proves me wrong.

    But in the original article, the researchers are wrong that memories do not degrade. They most certainly do, although many don’t want to admit it: our memories clearly decay over time, losing detail and, in many cases, substituting, crossing, or confusing related memories. Just wait until you’re 20 years older than you are now. 😎

  17. 17
    Mapou says:

    Tim:

    When Turing theorized UTMs and we encountered the Halting Problem, ALL STRONG AI died.

    Hi, I’m a computer programmer and I do research in AI. The halting problem is only a problem with the Turing Machine. Modern computers are not Turing machines (and neither are brains) because they can be interrupted at any time and they can have multiple sets of input and output arguments, even while the program is running. Worse, the program may consist of multiple communicating threads running on multiple processors or cores in parallel.

    Having said that, I see no connection between the Turing Machine/Halting Problem and intelligence. My prediction is that, soon, we will have machines that are just as clever as we are or more. I mean machines that can learn to speak and understand any language and perform any kind of task just as intelligently as humans. Or better.

    The only difference I see is that machines will never be able to appreciate beauty and the arts. For that, you need a spirit. Problem is, nobody knows how to cause a spirit to inhabit a machine. But no matter. Non-conscious, intelligent servants are what we will get. AI will change the world as we know it. Paradise or hell. Take your pick.

  18. 18
    Mapou says:

    Joe:

    So only God or the designer can create/ design sentient beings? And we can never figure out how to do such a thing?

    IMO, God (or anybody else) can only create physical stuff. Spirits are eternal and unchanging.

  19. 19
    Eric Anderson says:

    Joe:

    So only God or the designer can create/ design sentient beings? And we can never figure out how to do such a thing?

    Excellent question. For those who believe that God did in fact create humans from “the dust of the Earth,” so to speak, it is obviously possible to create a sentient being. We just need more knowledge. The alternative is that there might be something else about us that is more eternal (and need not have been created, per se) — a soul, an intelligence, or whatever we want to call it.

    A very good question.

    —–

    A_B @13:

    Let’s face it, we are no different (better or worse) than other animals.

    Nonsense.

    Get back to me when another animal writes a book, or composes a symphony, or contemplates higher math, or builds a rocket ship.

    The idea that man is “just another animal” is very seductive to the materialist mind. But it doesn’t stack up to the evidence.

  20. 20
    Tim says:

    Ab@ 13
    Nice attempt; by any chance, are you in advertising? It is not a matter of moving the goalposts. Although activities like making tools “may be” a necessary cause of sentience, they have never been deemed a sufficient cause.

    Mapou@ 17
    All artificial computers are reducible to UTMs. The problem with the Halting Problem for modern physical computers remains. The fact that computers can be stopped does nothing to help them with the fact that they cannot decide to “stop”. It is not enough to say “well, after a certain number of iterations, the computer shuts itself down.” No it doesn’t! (Well, technically it does, haha!) The point is obvious. The computer never decides to stop. This is analogous to saying they cannot decide to change. My cheating nephew (see 6) still beats Deep Blue.

    I repeat: doing tasks well or cleverly (which is really nothing more than saying faster than we can) has nothing to do with being aware. When you say that computers will “learn to speak . . . other languages”, I believe what you really mean is they will be programmed according to all available rules of linguistics to make connections and interpretations as fast as we can. As in the case of Deep Blue and chess, we do all the heavy lifting. This is not artificial intelligence (from inter- between or among and legere- to choose); it is advanced technology (from tekhne- skill in work).

    Mapou,
    Don’t get me wrong, I applaud your work in computer science. However, we should speak plainly about what is really going on. Seeming to be human is not being human. Saying such dehumanizes everybody and is a lie.

  21. 21
    Mapou says:

    Tim:

    All artificial computers are reducible to UTMs. The problem with the Halting Problem for modern physical computers remains.

    This is actually a myth perpetuated by the Turing cult. A UTM is, by definition, uninterruptible: it cannot be stopped until it is finished with its algorithmic computation. Modern computers do not have that problem.

    The fact that computers can be stopped does nothing to help them with the fact that they cannot decide to “stop”. It is not enough to say “well, after a certain number of iterations, the computer shuts itself down.” No it doesn’t! (Well, technically it does, haha!) The point is obvious. The computer never decides to stop. This is analogous to saying they cannot decide to change. My cheating nephew (see 6) still beats Deep Blue.

    I have no idea what you’re talking about. My computer has no problem deciding to stop. This is not what the halting problem is about. Turing simply proved that a general algorithm to solve the halting problem for all possible programs and inputs cannot exist.
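
    Turing’s argument can be sketched symbolically (a toy model of my own, with “loops forever” represented by a return value rather than an actual infinite loop): given any claimed halting decider, one can build a program that does the opposite of whatever the decider predicts about it.

```python
def make_paradox(halts):
    """Given a claimed halting decider halts(f) -> bool, build a
    program that does the opposite of the decider's prediction
    about itself ("loops" stands in for looping forever)."""
    def paradox():
        return "loops" if halts(paradox) else "halts"
    return paradox

# Whatever a decider predicts about its paradox program, the actual
# behavior contradicts it, so no decider is right on all inputs.
optimist = make_paradox(lambda f: True)    # predicts everything halts
pessimist = make_paradox(lambda f: False)  # predicts everything loops
assert optimist() == "loops"   # predicted "halts", actually loops
assert pessimist() == "halts"  # predicted "loops", actually halts
```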

    I repeat: doing tasks well or cleverly (which is really nothing more than saying faster than we can) has nothing to do with being aware. When you say that computers will “learn to speak . . . other languages”, I believe what you really mean is they will be programmed according to all available rules of linguistics to make connections and interpretations as fast as we can. As in the case of Deep Blue and chess, we do all the heavy lifting. This is not artificial intelligence (from inter- between or among and legere- to choose); it is advanced technology (from tekhne- skill in work).

    No. This is not what I meant. I meant there is a way to make a machine learn anything (speech, images, walking, driving, etc.) just like humans, from scratch. The machine will have a full understanding of its surroundings and will act accordingly to achieve its goals. None of the machine’s knowledge will be programmed in advance. No grammar, no linguistics, no vocabulary, nothing. There is a universal learning mechanism that can do it all. This is how babies learn.

    Mapou,
    Don’t get me wrong, I applaud your work in computer science. However, we should speak plainly about what is really going on. Seeming to be human is not being human. Saying such dehumanizes everybody and is a lie.

    I think you got me wrong. I don’t expect intelligent machines to be human at all. I’m not a brain-dead materialist :-D. However, I do expect them to eventually learn to do your job and mine better than we can. There are many people who do expect intelligent machines to be conscious like humans but they are sorely mistaken.

  22. 22
    Mapou says:

    Let me add that the Halting Problem is a useless distraction, not unlike Gödel’s incompleteness theorem. A lot of people make a lot of noise about them, but after all is said and done, nobody uses them for anything useful. Ask any programmer if they think about the Halting Problem when writing code and the answer will be no. It’s a purely academic problem for eggheads.

  23. 23
    kairosfocus says:

    Mapou:

    We are back at the crux of the matter: rocks have no dreams, so if you are dreaming you have access to an internal state that self-evidently demonstrates the difference between a self and a rock.

    Or, as I have repeatedly summed it up: computing is not contemplation.

    So, let us notice the gap-jumping in:

    My prediction is that, soon, we will have machines that are just as clever as we are or more. I mean machines that can learn to speak and understand any language and perform any kind of task just as intelligently as humans. Or better.

    Machines compute, they don’t contemplate. Rocks have no dreams — or beliefs. Which inter alia means they do not have KNOWLEDGE (well-warranted, true belief). So, they will happily compute on rubbish, leading to Garbage In, Garbage Out.

    That is where big issues happen.

    KF

    PS: This clip from Wiki on halting shows one of the concerns with the discussion of halting you made:

    >> The halting problem is theoretically decidable for linear bounded automata (LBAs) or deterministic machines with finite memory. A machine with finite memory has a finite number of states, and thus any deterministic program on it must eventually either halt or repeat a previous state:

    …any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the machine… (italics in original, Minsky 1967, p. 24)

    Minsky warns us, however, that machines such as computers with e.g., a million small parts, each with two states, will have at least 2^1,000,000 possible states:

    This is a 1 followed by about three hundred thousand zeroes … Even if such a machine were to operate at the frequencies of cosmic rays, the aeons of galactic evolution would be as nothing compared to the time of a journey through such a cycle (Minsky 1967 p. 25):

    Minsky exhorts the reader to be suspicious—although a machine may be finite, and finite automata “have a number of theoretical limitations”:

    …the magnitudes involved should lead one to suspect that theorems and arguments based chiefly on the mere finiteness [of] the state diagram may not carry a great deal of significance. (Minsky p. 25)

    It can also be decided automatically whether a nondeterministic machine with finite memory halts on none of, some of, or all of the possible sequences of nondeterministic decisions, by enumerating states after each possible decision. >>

    In practice, a halt can be externally imposed, perhaps arbitrarily. But the real root of the problem is whether a solution state is arrived at. An arbitrary halt does not lead to a solution state; it leads to a forced decision. (E.g. do an interrupt every millisecond and impose a sanity check; if insane, execute a prudent recovery algorithm per steps XYZ . . . often a vote between three, or in extreme cases five, parallel machines, going with the majority.) A non-arbitrary halt would be one internal to an algorithm, and it would require coming to a logically valid solution state. Making an insightful decision, even one that we impose on a microcontroller as a safety precaution, is an example of contemplation vs computing in action.
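
    The decidability claim quoted above (halting is decidable for a deterministic finite-memory machine by watching for a repeated state) can be sketched directly; this is a toy model of mine, not Minsky’s own code, and it also checks his magnitude estimate.

```python
import math

def halts(step, start, is_halt_state):
    """Decide halting for a deterministic machine with finitely many
    states: run until it halts or revisits a state (a revisited state
    means it is locked in a cycle and will never halt)."""
    seen, state = set(), start
    while not is_halt_state(state):
        if state in seen:
            return False              # repeated state: periodic forever
        seen.add(state)
        state = step(state)
    return True

# A counter over the five states 0..4 halts iff the target is reachable.
assert halts(lambda s: (s + 1) % 5, 0, lambda s: s == 3) is True
assert halts(lambda s: (s + 1) % 5, 0, lambda s: s == 7) is False

# Minsky's magnitude: 2**1_000_000 has exactly 301,030 decimal digits,
# i.e. about three hundred thousand, as the quote says.
assert math.floor(1_000_000 * math.log10(2)) + 1 == 301030
```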

    Bottom line: evolutionary materialism cannot get beyond computation; indeed, arguably its search algorithm of blind chance variation [by whatever means] feeding into mechanical necessity, and especially culling on differential reproductive success, is maximally unlikely ever to get to relevantly complex islands of function in config spaces. Which is the point where ever so many objectors to the design inference on FSCO/I run into the bankruptcy of their view and react rather than respond based on reason. As we routinely see.

  24. 24
    kairosfocus says:

    PPS: I should have added another word, in practical terms, knowledge is well-warranted, credibly true belief.

  25. 25
    fifthmonarchyman says:

    KF said,

    Bottom line: evolutionary materialism cannot get beyond computation; indeed, arguably its search algorithm of blind chance variation [by whatever means] feeding into mechanical necessity, and especially culling on differential reproductive success, is maximally unlikely ever to get to relevantly complex islands of function in config spaces.

    I say,

    Exactly. This is not creationist bluster.

    It is now a demonstrated mathematical result. That is the unspoken implication of Maguire’s paper.

    If non-lossy integrated information exists anywhere in the universe (and we know it does), evolutionary materialism is false.

    Full stop end of story 😉
    peace

  26. 26
    Joe says:

    Mapou:

    The only difference I see is that machines will never be able to appreciate beauty and the arts.

    Looks like I am a machine: both beauty and the arts are overrated.

  27. 27
    Barb says:

    http://www.theatlantic.com/tec.....ng/370855/

    Interesting: the US Navy wants to teach robots right from wrong.

  28. 28
    Acartia_bogart says:

    Eric Anderson said: “Get back to me when another animal writes a book, or composes a symphony, or contemplates higher math, or builds a rocket ship.”

    Get back to me when humans can echolocate, or detect electrical signals, or can detect minute quantities of chemicals, or see in extremely dim light.

    Having different abilities does not mean that we are fundamentally different. All of your examples are the byproduct of a single organ that is enhanced relative to other animals.

  29. 29
    kairosfocus says:

    Pardon, but there are people who echolocate [mostly blind ones, but some of us who move around in the dark detect presence by faint echoes too], we routinely detect minute quantities by taste and smell, and we can detect down to a single photon if properly accommodated. Such sensory abilities — with all due respect — are a distraction from, and irrelevant to, the issue of a rational, creative, evaluating conscious mind, and those who object like this know it full well, or should. KF

  30. 30
    Tim says:

    Mapou,
    I will stick with my statement concerning computers and UTMs and assume that the egghead comment was directed at me and was a compliment, thank you.

    You write that

    My computer has no problem deciding to stop.

    Please describe that to me. I think you will struggle, but surprise me. Or, if you’d like, describe how Deep Blue manages to beat my nephew, who changes the rules of chess by dropping extra queens on the board whenever he wants.

    Ab@28, answered by KF@29
    But your statement runs into even more trouble when we notice that the evidence shows that only humans know what echolocating is and can decide to echolocate. There is no evidence in the entire animal kingdom that any decision has ever been made.

    But I am just an amateur spouting off, if there is such evidence, please let me know. Of course, I am interested in evidence that the behavior in question is beyond instinct, learned behaviors and reactions.

    I just heard about a “smart dog” who made some type of noise so its owner would let it out. Now the dog makes a noise, the owner goes to let it out, and the dog scampers by and steals the owner’s warmed-up spot on the couch. Cool example, but what is the evidence of intelligence? (The dog’s, not the owner’s.)

  31. 31
    Acartia_bogart says:

    Kairosfocus, yes, some blind people are able to echolocate to a very small extent, but none of them are going to catch for the Yankees. And we can detect small quantities of chemicals, but we still use dogs to detect drugs in luggage. Our senses are simply as good as they have to be for us to get by. But who among us doesn’t wish that we could see or hear or taste better than we do?

    Nobody has shown that any of our mental faculties are completely unique to humans. There are examples of other animals being able to learn, to teach, to reason, to use language. There are gradations in almost all other traits, abilities and senses in the animal kingdom. Some species are faster than others. Some are better at camouflage than others. Some have better senses of smell, or hearing, or sight than other species. Some species can echolocate better than others. Some species are more aggressive than others. Why are we arrogant enough to think that our mental abilities somehow mean that we are the chosen few?

  32. 32
    Eric Anderson says:

    A_B @28:

    So being able to write a book or build a rocket ship is just another biological function? You’re going to have a hard time proving that.

    Don’t worry, I fully understand the materialistic mindset. For the materialist there is no difference between what humans can do with their minds and intelligence (because they think it is just a biological function) and any other biological function. So Darwin writing “The Origin” is no more impressive than when he blew his nose or took a leak. It’s all just biological function, so the thinking goes . . .

    Yes, some animals can do some impressive things. Some of them have biological capabilities that humans don’t. I’m personally not even sure that some don’t think. I’d even be willing to consider the possibility of real intelligence in certain creatures.

    But the idea that humans are “just” animals, and that there is some kind of continuum from the amoeba to the man is not a scientific or observational fact. It is an assertion of the materialistic mindset.

  33. 33
    Piotr says:

    Get back to me when another animal writes a book, or composes a symphony, or contemplates higher math, or builds a rocket ship.

    For 99% of their history humans did not do any of these things. No space flights before 1957, no higher maths before the 17th century (is calculus “higher” enough?), no symphonies before the 16th century or thereabouts (depending on the definition of the genre), no writing (let alone books) before the mid-4th millennium BC. How many commenters on this blog have written a book, composed a symphony, proved a new mathematical theorem, or built a spaceship? Are you human, guys?

  34. 34
    Acartia_bogart says:

    Eric, I never said that there was a continuum from amoeba to man. But your statement proves my point. Your statement implies that you are inherently superior to an amoeba. We are definitely different in magnitude, but different doesn’t mean superior. This is not materialism; it is just a conclusion based on evidence.

    An amoeba is just as evolved as a human.

  35. 35
    Eric Anderson says:

    Piotr @33:

    For 99% of their history humans did not do any of these things. No space flights before 1957, no higher maths before the 17th century (is calculus “higher” enough?), no symphonies before the 16th century or thereabouts (depending on the definition of the genre), no writing (let alone books) before the mid-4th millennium BC.

    Yep. Which shows that such things are not just a product of biology. Thanks for jumping in and proving my point. 🙂

    How many commenters on this blog have written a book, composed a symphony, proved a new mathematical theorem, or built a spaceship? Are you human, guys?

    These things are a matter of conscious decision. Any adult (who is not physically or developmentally disabled) could make more progress toward writing a book over the course of a long weekend than every other member of every other species over the course of Earth’s entire history combined.

    Thanks — again — for proving my point.

    Look, I understand that the idea of humans as just another animal holds a long tradition in materialist thought. It just isn’t supported by the evidence. A moment’s reflection by the objective observer is adequate to reveal that the idea of humans being just another animal exists more in the minds of materialist thinkers than in the real world.

    —–

    A_B @34:

    Is there a difference between something being unique and being superior? I didn’t bring up any claim of being “superior.” On the other hand, you claimed: “Let’s face it, we are no different (better or worse) than other animals.”

    Yet it is quite obvious to anyone who isn’t drinking deeply from materialistic philosophy that there are stark differences between humans and other species. I’m not claiming any “superiority” at this point. Important aspects of uniqueness, yes.

  36. 36
    Piotr says:

    EA:

    My purpose was not to show that humans are “just like other animals” but only that you set the standards of “being human” so high you don’t meet them yourself. So it’s a mere matter of personal choice? What a pity I didn’t decide at some point in my life to be a best-selling novelist, a composer, a mathematician and a spacecraft engineer!

  37. 37
    Piotr says:

    P.S. Get back to me with that symphony when your spaceship is ready.

  38. 38
    Tim says:

    Piotr,

    Not that it matters, but your numbers are off. By some accounts some of the first writing was historical in nature. But, human history is a written account of what happened to and was accomplished by humans. By that reckoning, writing has been around for 99% of human history! Hah!

    Ab writes:

    Nobody has shown that any of our mental faculties are completely unique to humans. There are examples of other animals being able to learn, to teach, to reason, to use language. There are gradations in almost all other traits, abilities and senses in the animal kingdom.

    Thus arguing against himself! Check out the first and third sentences! Hah! If there are gradations in only “almost all” then there are other traits for which there are dichotomies. Therefore, there are mental faculties completely unique to humans — exactly the opposite of what Ab asserted in his first sentence.

    Ab,
    Please clarify.

  39. 39
    Eric Anderson says:

    Piotr:

    Well, then you didn’t read what I wrote, but responded to something I didn’t say. I never said an individual had to be a novelist and a rocket scientist and who knows what else in order to be human. The question was whether humans, as a species, are unique or whether — as asserted by some — humans are no different than the animals.

    The very fact that this discussion is occurring is prime evidence that humans are unique.

  40. 40
    Piotr says:

    Humans are exceptional in some ways: they are highly intelligent and capable of passing on acquired knowledge and keeping a record of the history of their culture (thanks to the use of language). In combination with being highly social, that allowed them to start a process of cultural evolution which, after a few hundred thousand years of slow progress, began to snowball and produced modern civilisation. Whether it’s a good thing for the species is yet to be seen. Quite possibly, instead of conquering outer space, we’ll just recklessly exhaust all available fossil fuels in a few centuries, turn the Earth into a refuse dump, bring our civilisation to a collapse, and slowly die out among its ruins. I hope not, but as Einstein allegedly said, “Two things are infinite, the universe and human stupidity, and I am not yet completely sure about the universe.”

  41. 41
    NetResearchGuy says:

    Mapou:

    Modern computers are reducible to Turing machines, and let me explain why. You claim that inputs to a program from external sources, or interruptibility, or multithreading, or whatever, makes something not a Turing machine. The key point is that for a conceptual Turing machine the input stream and variations in timing can be considered part of the program.

    To give an example: the software I develop at my job takes input from human users, is multithreaded, and has random-duration events like asynchronous I/O. For debugging, we log the inputs of the human user and the timing of when threaded jobs start and when I/O requests complete. With that stream of information we can run the program again and produce an identical result, even though the original inputs and timing were nondeterministic.

    In other words, I’ve shown that the behavior of the program, which you claim has properties that make it not a Turing machine, can be represented as a Turing machine, by appropriate consideration of what constitutes the boundaries of the machine.
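
    The record-and-replay idea described above can be sketched as follows (a toy stand-in of mine, not the commenter’s actual software): once user inputs and I/O-completion events are captured in a log, re-running the computation over that log is fully deterministic.

```python
def process(log):
    """Replay a recorded run: the result depends only on the logged
    event sequence, not on the original wall-clock timing."""
    total = 0
    for kind, value in log:
        if kind == "user_input":      # logged human input
            total += value
        elif kind == "io_complete":   # logged async I/O completion
            total *= value
    return total

# A log captured once from a nondeterministic run...
log = [("user_input", 3), ("io_complete", 2), ("user_input", 5)]
# ...replays to the identical result every time.
assert process(log) == process(log) == 11
```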

    To explain why I believe the human mind is NOT reducible to a Turing machine is more complicated, but I won’t go into that here.
