Merritt promptly converts the hypothetical question about salvation for aliens—which depends, of course, on the assumption that Martians are beings much like ourselves—into: Are you there, God? It’s I, robot.
As Smith observes, a computer can be programmed to detect instances of the word “betrayal” in scanned texts, but it lacks the concept of betrayal.
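Smith’s point can be made concrete with a toy sketch (hypothetical code, not Smith’s): a program can match the surface string “betrayal” flawlessly while remaining blind to the idea, missing even the cognate “betrayed.”

```python
import re

def count_word(word: str, text: str) -> int:
    """Count case-insensitive whole-word matches of `word` in `text`.
    This is pure character matching; no meaning is involved."""
    return len(re.findall(rf"\b{re.escape(word)}\b", text, flags=re.IGNORECASE))

# Finds the literal string:
print(count_word("betrayal", "The betrayal of the treaty shocked the delegates."))
# Misses the same concept expressed differently:
print(count_word("betrayal", "He felt betrayed by everyone he trusted."))
```

The second call returns 0: the program has no notion that “betrayed” and “betrayal” point to the same idea, which is exactly the gap Smith is describing.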
Elon Musk sees technology as taking over the human world, and he thinks we’d best consider our options. Ma points out that humans build computers but no computer has ever built a human. For Musk, technology is not a tool to promote humanity; rather, it will take humanity’s place of leadership in the world. Humans will have […]
What if computers got that smart? Kurzweil’s critics believe that the superintelligent computers he needs can’t exist. If the critics are correct, we have misread the AI revolution.
Statistician Gary Smith thinks the real danger today is not that computers are smarter than us, but that we think computers are smarter than us.
Like Darwin’s Ascent of Man, Lovelock’s Ascent of the Cyborgs has no ladder and he doesn’t sense the need for one.
As AI types like to say, the system is so easily fooled because it doesn’t “know” anything. We are slowly learning, in consequence, more about what it means for a human being to “know” something.
It seems Watson couldn’t determine which pieces of medical information were more meaningful than others in a given situation.
And how it can transcend them via “intelligent design.” Be warned. In the middle of the bridge to the post-human artificial intelligence future sits a fat troll called the Halting Problem, waiting for an unsuspecting computer idealist to wander by…
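The troll is no mere metaphor: Turing’s diagonal argument shows that no program can decide, for every program, whether it halts. A minimal sketch of the argument (illustrative code, not from the article):

```python
def paradox(halts):
    """Given a purported halting oracle `halts(f, arg) -> bool`,
    build a program g that defeats it (Turing's diagonal argument)."""
    def g():
        if halts(g, None):
            # The oracle says g halts, so g loops forever instead.
            while True:
                pass
        # The oracle says g loops forever, so g halts immediately.
    return g

# Whatever any proposed oracle answers about g, it is wrong.
# Example: an oracle that claims g loops forever...
g = paradox(lambda f, arg: False)
g()  # ...is refuted on the spot: g halts.
```

Whichever answer the oracle gives, `g` does the opposite, so no total, always-correct `halts` can exist. That is the wall the “unsuspecting computer idealist” walks into.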
The famous Jeopardy contest in 2011 worked around the fact that Watson could not grasp the meaning of anything.
Engineering prof Karl D. Stephan: Symbolic logic says nothing about the truth or reality of what you give it. To understand what things really are, you have to get outside the pristine mathematical structure of symbolic logic and embrace what Prof. Kreeft calls Socratic logic.
Marks’s point is that such biases are not a matter of villains taking over; they are a normal feature of the way people think. And people program computers. Doubtless, such bias finds its way into evolution debates in which people claim to have settled a question by running a computer simulation.
It seems that the programmer would have to make the computer smarter than he is, which means smarter than itself. That’s a challenge.
“An abstract mathematical device cannot experience qualia or consciousness. If it could, we would expect mathematical formulas like the quadratic formula or the formula for the area of a sphere to experience consciousness. But that seems absurd, so we must conclude that a computer cannot exhibit consciousness. Put another way, consciousness is not a form of computation.”
The Gaia hypothesis started out as science, then discovered weed. But a digital Gaia movement for the 21st century will not, one suspects, be made up of hippies. Maybe it won’t be as nice, either.