Uncommon Descent: Serving The Intelligent Design Community

Sentient robots not possible

Mathematicians say:

Crucially, this type of integration requires loss of information, says Maguire: “You have put in two bits, and you get one out. If the brain integrated information in this fashion, it would have to be continuously haemorrhaging information.”

Maguire and his colleagues say the brain is unlikely to do this, because repeated retrieval of memories would eventually destroy them. Instead, they define integration in terms of how difficult information is to edit.
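As a toy illustration of the "two bits in, one bit out" point (this is only an assumed XOR-style combiner, not Maguire's actual formalism), merging two input bits into a single output bit makes distinct inputs indistinguishable, so the originals cannot be recovered from the output:

# Toy example (not Maguire's model): combining two bits into one with XOR
# discards information, because different input pairs map to the same output.

def integrate(a: int, b: int) -> int:
    """Combine two bits into one; XOR is used purely for illustration."""
    return a ^ b

outputs = {}
for a in (0, 1):
    for b in (0, 1):
        outputs.setdefault(integrate(a, b), []).append((a, b))

# Each output value corresponds to two different input pairs,
# so the inputs cannot be reconstructed from the output alone.
print(outputs)  # {0: [(0, 0), (1, 1)], 1: [(0, 1), (1, 0)]}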

Comments
So only God or the designer can create/design sentient beings? And we can never figure out how to do such a thing?
Joe, May 14, 2014, 07:06 AM PDT

I would put my bets on never. We will never get consciousness from a machine. It's one of the best pieces of evidence for a designed world: consciousness simply wouldn't exist if all of matter were just dust in chaos. The fact that things live and think is unexplainable under Darwin. Why isn't a sticky goo, which poisons all that attempts to breed, the best surviving chemical on the planet?
phoodoo, May 14, 2014, 06:58 AM PDT

Sentient robots are not possible now, but don't confuse that for never.
Joe, May 14, 2014, 04:18 AM PDT

Tim: Very well said!
gpuccio, May 14, 2014, 12:50 AM PDT

[crickets chirping]
Tim, May 13, 2014, 10:41 PM PDT

Really, people! Is this not ground already covered? When Turing theorized UTMs and we encountered the Halting Problem, ALL STRONG AI died. End of story. It is silly to talk about machines that can fool people, fuzzy logic, quanta and the like. The speed and power of the computing machine is not the issue.

Easy example: Deep Blue can sink most grandmasters, but Deep Blue can't beat my 6-year-old nephew who has just grabbed a salt shaker and a spool of thread for two extra queens. Deep Blue can't tell when it's been cheated, not until, like everything else, it has been programmed to do so.

The crux of the matter is that for non-trivial problems the problem is undecidable. Simply put: not only does the machine not "know" if it should stop, it doesn't know, indeed it can't know, if it has stopped. The fact that people are intuitively aware of the answers to such questions points to the inevitable: when considering our brains, whatever they are, they are more than physical embodiments of the UTM. There is a ghost in the machine.

The only other option is that our awareness of the reality around us is only and merely an illusion. In other words, it is the awareness that is illusory. This may work for some, truly uninitiated, materialists -- so wedded are they to the view that there are no metaphysics. Of course they stumble a bit when they are confronted with Turing. The Halting Problem tells us that a UTM cannot even have the illusion of awareness.

It would be nice if an advocate of strong AI who believes computers modeled on our brains will someday be sentient would simply address the implications of the UTM; everything else in this endeavor rides on it. You know, "Hi, I am an AI advocate and Turing was just plain wrong." First the drum roll . . . Then,
Tim, May 13, 2014, 10:40 PM PDT

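For readers who want the argument the comment above leans on spelled out, here is a minimal Python sketch of the standard diagonalization behind the Halting Problem. The halts function is hypothetical, assumed only for the sake of contradiction; it is not something that can actually be implemented.

# Sketch of the classic halting-problem contradiction (hypothetical code).
# Assume, for contradiction, that a total decider `halts` exists.

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("Assumed only for the sake of contradiction.")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return               # predicted to loop, so halt immediately

# Asking whether paradox(paradox) halts contradicts the oracle either way,
# so no total, always-correct `halts` can exist.
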
Is this OP somehow related to the hopes of achieving the ultimate goal of the so-called "strong AI" or AGI?
Dionisio, May 13, 2014, 09:36 PM PDT

I think our minds are just priority memory machines. The Bible hints at this. Our soul is what does the thinking, etc. No matter how much memory is in a machine, it never becomes alive. Our memories are just facts. They mean nothing without a thinking being putting them in context.
Robert Byers, May 13, 2014, 07:50 PM PDT

Of course I meant RM/NS. Chalk it up to excitement and lack of sleep. Please don't judge the value of the thought by my poor articulation of it. Peace
fifthmonarchyman, May 13, 2014, 06:31 PM PDT

How is "non-lossy integrated information" different from Irreducible Complexity? If I'm right and they are equivalent, then Maguire's discovery is more profound than even he realizes. He may have just proved that IC is non-computable and therefore cannot be produced by an algorithmic process like RS/NM. I hope some big brain will explore those implications. Peace
fifthmonarchyman, May 13, 2014, 06:25 PM PDT

Your title says too much. I am quite skeptical that there will be sentient robots. However, Maguire only demonstrates a problem with integrated information theory (IIT). As best I can tell from the abstract, he does not prove that there could not be alternative approaches.
Neil Rickert, May 13, 2014, 06:03 PM PDT