Uncommon Descent Serving The Intelligent Design Community

Why the brain still beats the computer, even from a naturalist perspective


From Liqun Luo at Nautilus:

Over the past decades, engineers have taken inspiration from the brain to improve computer design. The principles of parallel processing and use-dependent modification of connection strength have both been incorporated into modern computers. For example, increased parallelism, such as the use of multiple processors (cores) in a single computer, is a current trend in computer design. As another example, “deep learning” in the discipline of machine learning and artificial intelligence, which has enjoyed great success in recent years and accounts for rapid advances in object and speech recognition in computers and mobile devices, was inspired by findings of the mammalian visual system. As in the mammalian visual system, deep learning employs multiple layers to represent increasingly abstract features (e.g., of visual object or speech), and the weights of connections between different layers are adjusted through learning rather than designed by engineers. These recent advances have expanded the repertoire of tasks the computer is capable of performing. Still, the brain has superior flexibility, generalizability, and learning capability than the state-of-the-art computer. As neuroscientists uncover more secrets about the brain (increasingly aided by the use of computers), engineers can take more inspiration from the working of the brain to further improve the architecture and performance of computers. Whichever emerges as the winner for particular tasks, these interdisciplinary cross-fertilizations will undoubtedly advance both neuroscience and computer engineering. More.
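The deep-learning idea Luo describes, multiple layers whose connection weights are adjusted through learning rather than designed by engineers, can be sketched in miniature. The toy network below (two inputs, two hidden units, one output, trained on XOR by gradient descent) is an illustrative assumption, not anything from the article; it only shows that the weights end up learned, not hand-set:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy two-layer network: 2 inputs -> 2 hidden units -> 1 output.
# Weights start random and are adjusted by learning (gradient descent
# on squared error), not set by hand.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def mean_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial = mean_loss()
lr = 0.5
for _ in range(10000):
    for x, t in data:
        h, y = forward(x)
        dy = 2 * (y - t) * y * (1 - y)            # output-layer error signal
        for j in range(2):
            dh = dy * W2[j] * h[j] * (1 - h[j])   # hidden-layer error signal
            W2[j] -= lr * dy * h[j]
            for i in range(2):
                W1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy
final = mean_loss()

print(f"loss before training: {initial:.3f}, after: {final:.3f}")
```

The engineer specifies only the architecture and the learning rule; the weights that do the actual recognizing are found by training, which is the point Luo draws from the visual system.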

Prediction: Artificial intelligence will get to a certain level and just stop. Lots of people will make a living pretending otherwise.

See also: Present philosophy behind artificial intelligence is false

Luo should be praised for sticking to a lucid analysis comparing machine architectures/components with the brain's biological "design", and for not claiming that future progress in AI will result in conscious AI systems with actual minds (the goal of many AI researchers: "strong AI"). That looks to be a fundamental barrier in the development of computational machines, however complex. See http://www.rawstory.com/2016/03/a-neuroscientist-explains-why-artificially-intelligent-robots-will-never-have-consciousness-like-humans/:

"...intentional behavior from an A.I. would undoubtedly require a mind, as intentionality can only arise when something possesses its own beliefs, desires, and motivations. The type of A.I. that includes these features is known amongst the scientific community as “Strong Artificial Intelligence”. Strong A.I., by definition, should possess the full range of human cognitive abilities. This includes self-awareness, sentience, and consciousness, as these are all features of human cognition. On the other hand, “Weak Artificial Intelligence” refers to non-sentient A.I. The Weak A.I. Hypothesis states that our robots—which run on digital computer programs—can have no conscious states, no mind, no subjective awareness, and no agency. Such A.I. cannot experience the world qualitatively, and although they may exhibit seemingly intelligent behavior, it is forever limited by the lack of a mind. A failure to recognize the importance of this strong/weak distinction could be contributing to Hawking and Musk’s existential worries, both of whom believe that we are already well on a path toward developing Strong A.I. (a.k.a. Artificial General Intelligence). To them it is not a matter of “if”, but “when”. But the fact of the matter is that all current A.I. is fundamentally Weak A.I., and this is reflected by today’s computers’ total absence of any intentional behavior whatsoever. Although there are some very complex and relatively convincing robots out there that appear to be alive, upon closer examination they all reveal themselves to be as motiveless as the common pocket calculator."

"Everything a computer does involves manipulating two symbols in some way. As such, they can be thought of as a practical type of Turing machine—an abstract, hypothetical machine that computes by manipulating symbols. A Turing machine’s operations are said to be “syntactical”, meaning they only recognize symbols and not the meaning of those symbols—i.e., their semantics. Even the word “recognize” is misleading because it implies a subjective experience, so perhaps it is better to simply say that computers are sensitive to symbols, whereas the brain is capable of semantic understanding. It does not matter how fast the computer is, how much memory it has, or how complex and high-level the programming language. The Jeopardy and Chess playing champs Watson and Deep Blue fundamentally work the same as your microwave. Put simply, a strict symbol-processing machine can never be a symbol-understanding machine."

doubter
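The purely syntactic operation described in that quote can be made concrete with a minimal Turing-machine sketch. The transition table below is an illustrative assumption (a unary-increment program); the point is that every step is a lookup on a (state, symbol) pair — nothing in the machine's operation depends on what the symbols mean:

```python
# A minimal Turing machine: the transition table maps (state, symbol)
# to (new state, symbol to write, head move). The machine only matches
# symbols; nothing in its operation depends on what the symbols "mean".
# Illustrative program: unary increment -- scan right past the 1s,
# then write one more 1 on the first blank ("_") and halt.
RULES = {
    ("scan", "1"): ("scan", "1", +1),   # keep moving over the 1s
    ("scan", "_"): ("halt", "1",  0),   # blank found: write a 1, halt
}

def run(tape, state="scan", pos=0):
    tape = list(tape)
    while state != "halt":
        if pos >= len(tape):            # extend the tape with blanks as needed
            tape.append("_")
        symbol = tape[pos]
        state, write, move = RULES[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape)

print(run("111_"))  # three 1s become four, purely by symbol matching
```

Whether the tape "means" a count of three, a temperature, or nothing at all makes no difference to the machine; that indifference to semantics is exactly the distinction doubter's quote is drawing.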
"Prediction: Artificial intelligence will get to a certain level and just stop. Lots of people will make a living pretending otherwise."

The latter is clearly demonstrated by the whole of modern history. Assuming No Free Lunch, the former is easily inferred, though where it stops depends on what can be discovered or done by raw scaling-up of the permutations/structures of those mental processes that we can materially modulate.

LocalMinimum
