As Robert J. Marks put it, non-algorithmic things (things that cannot be calculated) “cannot be uploaded.” Human consciousness, little as we understand it, appears to be one of those non-algorithmic things.
Psychology professor Gregg Henriques argues that consciousness “plays by a different set of rules than the language game of science.”
Also, Adam Nieri’s review of Sprites – an AI replacement for actors?
Two recent remarks in VICE (a telling label, BTW) raise some significant concerns. First, Kevin Buzzard — no, this is not Babylon Bee [itself a sign when it is harder and harder to tell reality from satire] — Sept 26th: Number Theorist Fears All Published Math Is Wrong “I think there is a non-zero chance […]
Robert J. Marks: It’s always easy to determine if you are talking to a computer or a human. You can just ask them to compute the square root of 30 or something because a human would take a while to get the square root of thirty …
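Marks’s quip can be illustrated with a minimal sketch (my own illustration, not from the original): the machine returns the square root of 30 to machine precision in well under a microsecond, while a human would need pencil and paper.

```python
import math

# A computer answers instantly; a human would take a while.
answer = math.sqrt(30)
print(round(answer, 6))  # → 5.477226
```

The asymmetry in response time, not the answer itself, is what gives the machine away.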
The Turing test for design in computers relies on the same principles as the detection of design in nature. The materialist can, in principle, allow intelligence in neither computers nor nature, or allow possible intelligence in both. But he can’t pick and choose.
Jonathan Bartlett, Eric Holloway, and Brendan Dixon explain: Prolific science and science fiction writer Isaac Asimov (1920–1992) developed the Three Laws of Robotics, in the hope of guarding against potentially dangerous artificial intelligence. They first appeared in his 1942 short story Runaround: A robot may not injure a human being or, through inaction, allow a […]
Our friendly godbot, Alfalfa and Omega, would feel constrained to take such an action by the superior logic of its programming.
The failure to do so is consistent with Bill Dembski’s notion of displacement. Put simply, to develop complex functional systems, you can shift design around but you can’t actually get rid of it.
How did that work out at Wikipedia?
Merritt promptly converts the hypothetical question about salvation for aliens—which depends, of course, on the assumption that Martians are beings much like ourselves—into: Are you there, God? It’s I, robot.
As Smith observes, a computer can be programmed to detect instances of the word “betrayal” in scanned texts, but it lacks the concept of betrayal.
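Smith’s distinction can be made concrete with a short sketch (my own hypothetical example, assuming plain-text input): a few lines of pattern matching will count every occurrence of the word, yet nothing in the program grasps what betrayal means.

```python
import re

def count_word(text: str, word: str) -> int:
    # Pure string matching: the program locates the token "betrayal"
    # but has no concept of betrayal itself.
    pattern = rf"\b{re.escape(word)}\b"
    return len(re.findall(pattern, text, flags=re.IGNORECASE))

sample = "Betrayal cuts deep; the betrayal was never forgiven."
print(count_word(sample, "betrayal"))  # → 2
```

The function succeeds at detection and fails at understanding, which is precisely Smith’s point.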
Elon Musk sees technology as taking over the human world, and we’d best consider our options. Ma points out that humans build computers but no computer has ever built a human: For Musk, technology is not a tool to promote humanity. Rather, technology will take humanity’s place of leadership in the world. Humans will have […]
That is, if computers ever get that smart. Kurzweil’s critics believe that the superintelligent computers he needs can’t exist. If the critics are correct, we have misread the AI revolution.
Statistician Gary Smith thinks the real danger today is not that computers are smarter than us, but that we think computers are smarter than us.