This applies especially to conservation of information theory: This brings us to a more general result known as the conservation of information. Design theorists William Dembski and Robert J. Marks defined the law of conservation of information in their 2009 paper “Conservation of Information in Search” and then proved the result in their follow-on 2010 paper. Read More…
As a jokester recently demonstrated, even “shirts without stripes” is a fundamental, unsolvable problem for computers.
A scientific test should identify precisely what humans can do that computers cannot, avoiding subjective opinion: The “broken checkerboard” is not the ultimate scientific test for intelligence that we need. But it is a truly scientific test in the sense that it is capable of falsifying the theory that the mind is reducible to computation. Read More…
We often hear that what’s hard for humans is easy for computers. But it turns out that many kinds of problems are exceedingly hard for computers to solve. This class of problems, known as NP-Complete (NPC), was independently discovered by Stephen Cook and Leonid Levin.
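As a concrete illustration (my own toy example, not one from the article), subset sum is a classic NP-complete problem: given a list of numbers, is there a subset that adds up to a target? A proposed subset is easy to *check*, but the only known general strategies for *finding* one examine on the order of 2^n subsets, which quickly becomes infeasible. A minimal brute-force sketch (the function name `subset_sum` is hypothetical):

```python
from itertools import combinations

def subset_sum(nums, target):
    """Return a subset of nums summing to target, or None.

    Brute force: tries every subset, so the work grows like 2^n.
    This exponential blow-up is exactly what makes NP-complete
    problems 'exceedingly hard' for computers at scale.
    """
    for size in range(len(nums) + 1):
        for combo in combinations(nums, size):
            if sum(combo) == target:
                return combo  # easy to verify: just re-add the numbers
    return None

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # some subset summing to 15
print(subset_sum([2, 4, 6], 5))            # None: no subset works
```

For a few numbers this runs instantly; for a list of 60 numbers it would already require examining up to 2^60 subsets, which is the practical face of NP-completeness.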
Holloway: This test for intelligence, the Turing Test, was invented by and named after the mid-twentieth century computer pioneer Alan Turing. It is a subjective test in that it depends on whether an artificial intelligence is capable of convincing human testers that it is a human. But fooling humans, while impressive, is not really the same thing as actually possessing human-level intelligence.
Holloway: The fundamental implication is that nothing within math, science, and technology can create information. Yet information is all around us. This problem arises in many areas: evolution, artificial intelligence, economics, and physics.
Holloway: Richard Johns’s argument is a deeper version of Captain Kirk’s scheme to defeat enemy robots in “I, Mudd,” a 1967 episode of Star Trek. Kirk posed a paradox that led to circuit meltdown.
He bought a brain-wave scanning kit and, while playing a game, tested it for physical signs of his abstract thought.
Also, Adam Nieri’s review of Sprites – an AI replacement for actors?
ID-friendly philosopher Eric Holloway wrote ID As A Bridge Between Francis Bacon And Thomas Aquinas here, which garnered a lot of attention. Here, in a science fiction piece, he turns his attention to the consequences of a materialist vs. a non-materialist interpretation of the human mind.
Our friendly godbot, Alfalfa and Omega, would feel constrained to take such an action by the superior logic of its programming.
He says all such theories either deny the very thing they are trying to explain, result in absurd scenarios, or end up requiring an immaterial intervention.
Because the halting problem is undecidable
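The undecidability behind that claim rests on a diagonal argument that can be sketched in a few lines. As a toy illustration (my own, not from the article; the names are hypothetical), imagine a “paradox” program built to do the opposite of whatever a claimed halting decider predicts about it:

```python
def paradox_behavior(prediction):
    """What the 'paradox' program actually does, given a decider's prediction.

    The paradox program is constructed to contradict any claimed halting
    decider: if the decider predicts it halts, it loops forever, and if
    the decider predicts it loops forever, it halts immediately.
    """
    return "loops forever" if prediction == "halts" else "halts"

# Whichever answer the decider gives about the paradox program,
# the program's actual behavior is the opposite -- so no decider
# can be correct about every program.
for prediction in ("halts", "loops forever"):
    assert paradox_behavior(prediction) != prediction
print("every possible prediction is wrong")
```

This is only the shape of Turing's argument, not a proof; the real construction feeds a program its own source code, but the contradiction is the same.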
Based on what we know of how algorithms work, it can be demonstrated mathematically that algorithms cannot deal with non-computable concepts. There is another way to prove a negative besides exhaustively enumerating the possibilities: with artificial general intelligence (AGI), if we can identify something algorithms cannot do, and show that humans can do it, then Read More…
He argues that many arguments for strong artificial intelligence depend on an ideological commitment to explicit, unproven theories about the universe.