Uncommon Descent Serving The Intelligent Design Community

Silicon Valley tried to produce a mind but couldn’t. Why?


In two parts. Futurist George Gilder explains in Gaming AI:

Part I: Why AI geniuses think they can create true thinking machines: Early on, it seemed like a string of unbroken successes …

Why shouldn’t the computer scientists believe it? Silicon Valley’s AI triumphed at chess, Go, StarCraft II, and poker. Then AlphaGo Zero beat AlphaGo 100–0 at Go, and by that point it was all machine vs. machine. A related technology, AlphaFold, predicts how proteins fold (useful in medicine) far more quickly than human competitors can.

Rapid technological development—Moore’s Law, fiber optics, RISCs (reduced instruction set computers) and now quantum computers—certainly helped shape the computer techie’s worldview.

So what is the fatal flaw? It lies hidden in the assumption that every type of mental process is a form of computation. If a mental process is not a form of computation, a computer cannot perform it. And although computer theorists see the human brain (assumed to be the source of consciousness) as a computer, the brain does not resemble one in its operations.

Then …

Part II: Why AI geniuses haven’t created true thinking machines. The problems have been hinting at themselves all along.

Here is one of the crucial findings they defy (or ignore): Philosopher Charles Sanders Peirce (1839–1914) pointed out that, generally, mental activity comes in threes, not twos (so he called it triadic). For example, you see a row of eggs in a carton and think “12.” You connect the objects (eggs) with a symbol, 12.

In Peirce’s terms, you are the interpretant, the one for whom the symbol 12 means something. But eggs are not 12. 12 is not eggs. Your interpretation is the third factor that makes 12 mean something with respect to the eggs.

Gilder reminds us that, in such a case, “the map is not the territory” (p. 37). Just as 12 is not the eggs, a map of California is not California. To mean anything at all, the map must be read by an interpreter. AI supremacy assumes that the machine’s map can somehow be big enough to stand in for the reality of California and eliminate the need for an interpreter.
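Peirce’s triad can be loosely sketched in code. This is purely my illustration, not anything from Gilder’s book: the names (`Sign`, `interpret`) and the structure are assumptions made for the example. The point it models is the one above: pairing a symbol with objects is only a dyad; meaning arises from a third act, the interpretant relating the two.

```python
from dataclasses import dataclass

# Illustrative sketch of a dyadic pairing: a symbol and some objects,
# with no interpreter involved yet.
@dataclass
class Sign:
    symbol: str          # e.g. "12"
    objects: tuple       # e.g. a row of eggs

eggs = tuple("egg" for _ in range(12))
sign = Sign(symbol="12", objects=eggs)

# By itself the pairing carries no meaning: "12" is just a string,
# the eggs just objects. Meaning appears only when an interpretant
# relates symbol to objects -- here, the judgment that the symbol
# counts the eggs.
def interpret(sign: Sign) -> bool:
    """The interpretant's act: judging that the symbol counts the objects."""
    return int(sign.symbol) == len(sign.objects)

print(interpret(sign))  # True: "12" now means something about the eggs
```

Of course, the sketch cuts the other way too: `interpret` is itself just more computation, so on Gilder’s argument the real interpretant still sits outside the machine, in whoever reads the output.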

The problem, he says, is that the map is not and never can be reality.

Note: Quantum computers will not solve this problem. Quantum computers play by the same rules as digital ones: Meaningful information still requires an interpreter (observer) to relate the map to the territory.

You can download the book free here. Good reading!

Charles Aznavour's song "Venecia sin ti" (https://www.youtube.com/watch?v=hm6w2Z9hxlg Spanish version of the original) says that the same visual information received through the author's eyes provoked radically different emotions depending on a factor not directly related to what he sees. How can neuroscience explain that using EEG, fMRI, implanted electrodes, and the whole nine yards? Could an advanced general AI entity ever feel that way too? How would a general AI entity feel and understand love? Would it feel anything at all? How would that work? Can somebody explain it? All that stuff written about general AI becoming conscious like humans falls into the category of what is called in Russian "yerunda": complete nonsense, rubbish. Let's stick to AI without the fictional "general" label. Let's get real and stop fantasizing so immaturely. jawa
"Silicon Valley tried to produce a mind but couldn’t." So, some of the smartest people in the world (Silicon Valley) could not replicate that 'bad' design we hear about from Darwinian clowns ... martin_r
Sandy @2 a good one! martin_r
A.I. is to the mind what the Miller-Urey experiment is to the cell. Sandy
Despite the share value hype, it's not at all clear that quantum "computers" can even do the job of a computer. polistra
