The idea that we can upload our brains to computers to avoid death shows a fundamental misunderstanding of the differences between types of thinking.
Jonathan Bartlett put that at #6 on his AI hype list.
If we were able to make intelligent and sentient AIs, wouldn’t that mean we would have to stop programming them? It would be unethical for me to force you to do my will, so wouldn’t the same thing be true with AIs? [Not that it is ever going to happen, but… ]
A Chinese university is dropping intellectual freedom from its charter, yet China hopes to be the world’s top AI power. Is there a contradiction here?
Would you give up your right arm for a robotic device that performs better? Think about it.
This recent technological development has been in the news. It is worth taking a couple of minutes to watch the video describing and showing what was done using AI technologies. It is fascinating what 3D scanning can do. It also, of course, corroborates the known result from the main Dead Sea Scroll finds, that the OT text was Read More…
As with the Dead Sea Scrolls, when researchers deciphered it using AI, they found the same Scriptural texts as elsewhere, which reinforces the fact that ancient peoples were not in the habit of simply rewriting the Scriptures now and then according to taste.
As Robert J. Marks put it, Non-algorithmic things (things that cannot be calculated), “cannot be uploaded.” Human consciousness, little as we understand it, appears to be one of those non-algorithmic things.
Psychology prof Gregg Henriques argues that consciousness “plays by a different set of rules than the language game of science.”
Also: Adam Nieri’s review of Sprites. An AI replacement for actors?
Two recent remarks in VICE (a telling label, BTW) raise some significant concerns. First, Kevin Buzzard (no, this is not the Babylon Bee; it is itself a sign of the times when it is harder and harder to tell reality from satire), September 26: Number Theorist Fears All Published Math Is Wrong. “I think there is a non-zero chance Read More…
Robert J. Marks: It’s always easy to determine whether you are talking to a computer or a human. You can just ask them to compute the square root of 30 or something, because a human would take a while to get the answer …
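Marks’s point can be illustrated with a minimal sketch (not from the original discussion, just an assumption-labeled example): a machine returns the square root of 30 to a dozen decimal places essentially instantly, whereas a human working by hand would need noticeable time.

```python
import math
import time

# Illustration of the distinguisher Marks describes: time how long
# a computer takes to produce sqrt(30) to many digits.
start = time.perf_counter()
answer = math.sqrt(30)
elapsed = time.perf_counter() - start

print(f"sqrt(30) = {answer:.12f}")      # 5.477225575052
print(f"computed in {elapsed:.6f} s")   # a tiny fraction of a second
```

A human asked the same question in conversation would pause, estimate, or reach for paper; the near-instant, many-digit reply is the giveaway.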
The Turing test for design in computers relies on the same principles as the detection of design in nature. The materialist must, in principle, allow intelligence in both computers and nature or in neither. He can’t pick and choose.
Jonathan Bartlett, Eric Holloway, and Brendan Dixon explain: Prolific science and science fiction writer Isaac Asimov (1920–1992) developed the Three Laws of Robotics in the hope of guarding against potentially dangerous artificial intelligence. They first appeared in his 1942 short story “Runaround”: A robot may not injure a human being or, through inaction, allow a Read More…
Our friendly godbot, Alfalfa and Omega, would feel constrained to take such an action by the superior logic of its programming.