Philosopher suggests another reason why machines can’t think as we do
August 13, 2018 | Posted by News under Artificial Intelligence, Mind
As philosopher Michael Polanyi has noted, much that we know is hard to codify or automate.
From Denyse O’Leary at Mind Matters Today
We have all encountered that problem. It’s common in healthcare and personal counseling. Some knowledge simply cannot be conveyed—or understood or accepted—in a propositional form. For example, a nurse counselor may see clearly that her elderly post-operative patient would thrive better in a retirement home than in his rundown private home with several staircases.
The analysis, as such, is straightforward. But that is not the challenge the nurse faces. Her challenge is to convey to the patient not the information itself, but her tacit knowledge that the proposed move would liberate rather than restrict him. More.
Reality check: That’s why many jobs are not nearly as threatened by AI as some fear. But then many others are.
See also: Why can’t machines learn simple tasks? They can learn to play chess more easily than to walk. If specifically human intelligence is related to consciousness, robotics engineers might best leave consciousness out of their goals for their products and focus on more tangible ones.