
Philosopher suggests another reason why machines can’t think as we do

Michael Polanyi (1891-1976)

As philosopher Michael Polanyi has noted, much that we know is hard to codify or automate.

From Denyse O’Leary at Mind Matters Today

We have all encountered that problem. It’s common in healthcare and personal counseling. Some knowledge simply cannot be conveyed—or understood or accepted—in a propositional form. For example, a nurse counselor may see clearly that her elderly post-operative patient would thrive better in a retirement home than in his rundown private home with several staircases.

The analysis, as such, is straightforward. But that is not the challenge the nurse faces. Her challenge is to convey to the patient, not the information itself, but her tacit knowledge that the proposed move would liberate rather than restrict him. More.

Reality check: That’s why many jobs are not nearly as threatened by AI as some fear. But then many others are.

See also: Why can’t machines learn simple tasks?: They can learn to play chess more easily than to walk. If specifically human intelligence is related to consciousness, robotics engineers might best leave consciousness out of their goals for their products and focus on more tangible ones.

Comments
And isn't that hopeless and inevitable 'intuitive knowledge' deficit in even the most sophisticated robots something most people not running away from God understand intuitively and instantly, the moment the subject is broached? And is not such wisdom or intuitive knowledge the product of our thoughts on the subject, and perhaps experience in it, from an early age until we die? We lose some of the intellectual rigour of a young child, not being as purely motivated to seek truth about everything for its own sake. But surely meditating generally comes naturally to all of us, at least haphazardly, even as adults.

Axel
August 13, 2018 at 12:57 PM PDT
In your article "WHY CAN’T MACHINES LEARN SIMPLE TASKS?", you explained Hans Moravec's paradox thus: the skills that are hardwired through evolution don't take conscious thought, and when you don't have to think about something, it's harder to figure out how to teach a machine to do it.

Moravec is wrong about the reason for the paradox that was named after him. Yes, we find it easier to walk than to do math, but it's not because evolution (or whatever) made us that way. We humans are not born knowing how to walk. Walking is not an inborn skill. Babies have to learn how to walk the hard way: through trial and error. It is not true that it takes no conscious thought on the part of babies to learn how to walk. Babies put a lot of conscious effort into it.

The reason that it's harder for machines to learn how to walk is simple: AI researchers do not know how we do it. Once we figure out how human babies learn how to walk, our intelligent machines will be just as good at it as we are, and they will have just as hard a time learning math and chess as we do. AI researchers do not know how we learn to play chess and do math either. It just so happens that computers and programming languages were originally designed for tasks that are based on logic and math and can be specified using symbolic logic and algorithms. Math and chess fall in that category.

FourFaces
August 13, 2018 at 11:53 AM PDT
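
FourFaces's last point can be made concrete. A game of perfect information really can be written down exhaustively as rules plus a search procedure, which is exactly the kind of specification early computers were built for. Below is a minimal sketch in Python, using plain minimax on simple Nim (take 1 to 3 stones; taking the last stone wins) as a scaled-down stand-in for chess; the game choice and the function names here are illustrative assumptions, not anything from the article or the comments.

# A minimal illustration that a game of perfect information can be
# specified entirely as rules plus search -- no learning required.
# Simple Nim stands in for chess, which is specifiable the same way
# but with a vastly larger state space.

def legal_moves(stones):
    """The rules of the game: you may remove 1, 2, or 3 stones."""
    return [n for n in (1, 2, 3) if n <= stones]

def minimax(stones, maximizing):
    """Exhaustive minimax from the maximizer's perspective:
    +1 if the maximizer can force a win, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won,
        # so the player now to move has lost.
        return -1 if maximizing else 1
    scores = [minimax(stones - m, not maximizing) for m in legal_moves(stones)]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move with the best guaranteed outcome for the player to move."""
    return max(legal_moves(stones), key=lambda m: minimax(stones - m, False))

if __name__ == "__main__":
    for stones in range(1, 9):
        outcome = "win" if minimax(stones, True) == 1 else "loss"
        print(f"{stones} stones: forced {outcome}; best move takes {best_move(stones)}")

The contrast is the point: nothing in this sketch had to be learned from experience, because the whole game fits in a short rulebook, whereas walking admits no comparably compact specification.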
