
William Dembski: Why computers will likely never perform abductive inferences

Cover: The Myth of Artificial Intelligence

As Erik J. Larson points out in The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (2021), what computers “know” must be painstakingly programmed:

Larson did an interesting podcast with the Brookings Institution through its Lawfare Blog shortly after the release of his book. It’s well worth a listen, and Larson elucidates in that interview many of the key points in his book. The one place in the interview where I wish he had elaborated further was on the question of abductive inference (aka retroductive inference or inference to the best explanation). For me, the key to understanding why computers cannot, and most likely will never, be able to perform abductive inferences is the problem of underdetermination of explanation by data. This may seem like a mouthful, but the idea is straightforward. For context, if you are going to get a computer to achieve anything like understanding in some subject area, it needs a lot of knowledge. That knowledge, in all the cases we know, needs to be painstakingly programmed. This is true even of machine learning situations where the underlying knowledge framework needs to be explicitly programmed (for instance, even Go programs that achieve world class playing status need many rules and heuristics explicitly programmed).

Humans, on the other hand, need none of this…

William A. Dembski, “Why computers will likely never perform abductive inferences” at Mind Matters News

Takehome: Computers require complete data to come to a correct conclusion, but humans often work very well with incomplete data.
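To make the takehome concrete, here is a minimal, hypothetical Python sketch (not from Larson's book or Dembski's article; the hypotheses, observations, and scoring rule are all invented for illustration) of what machine "abduction" amounts to in practice: the hypothesis space, the background knowledge, and the ranking criterion must all be hand-coded in advance, and the same observations remain compatible with several of the pre-programmed hypotheses.

# Hypothetical sketch: "inference to the best explanation" as a program.
# Everything the program "knows" -- which hypotheses exist and which
# observations each one explains -- has to be supplied by the programmer.

HYPOTHESES = {
    "rain last night":       {"wet lawn", "wet street", "overcast sky"},
    "sprinkler ran":         {"wet lawn"},
    "street cleaner passed": {"wet street"},
}

def best_explanation(observations):
    """Rank the hand-coded hypotheses by how many observations they cover.

    An observation outside the pre-programmed vocabulary simply cannot be
    explained; the program has no way to invent a new hypothesis.
    """
    scored = {h: len(covers & observations) for h, covers in HYPOTHESES.items()}
    return max(scored, key=scored.get), scored

print(best_explanation({"wet lawn", "wet street"}))
# -> ('rain last night', {'rain last night': 2, 'sprinkler ran': 1,
#     'street cleaner passed': 1})

The sketch ranks "rain last night" highest only because the programmer decided which hypotheses exist and how to score them; the data themselves remain compatible with more than one of the encoded explanations, which is the underdetermination problem the post describes.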

PS: By the way, we told you Dembski was back, didn’t we?


You may also wish to read:

Are we spiritual machines? Are we machines at all? Inventor Ray Kurzweil proposed in 1999 that within the next thirty years we will upload ourselves into computers as virtual persons, programs on machines. The themes and misconceptions about computers and artificial intelligence that made headlines in the late 1990s persist to this day.

A critical look at the myth of “deep learning” “Deep learning” is as misnamed a computational technique as exists. The phrase “deep learning” suggests that the machine is doing something profound and beyond the capacity of humans. That’s far from the case.

Artificial intelligence understands by not understanding The secret to writing a program for a sympathetic chatbot is surprisingly simple… We needed to encode grammatical patterns so that we could reflect back what the human wrote, whether as a question or statement.

Automated driving and other failures of AI How would autonomous cars manage in an environment where eye contact with other drivers is important? In cossetted and sanitized environments in the U.S., we have no clue of what AI must achieve to truly match what humans can do.

and

Artificial intelligence: Unseating the inevitability narrative. William Dembski: World-class chess, Go, and Jeopardy-playing programs are impressive, but they prove nothing about whether computers can be made to achieve AGI. In The Myth of Artificial Intelligence, Erik Larson shows that neither science nor philosophy back up the idea of an AI superintelligence taking over.

Comments
Peace & joy. I spent lo! these many decades doing Process Analysis and creating Process Models for various Government programs. So I look on the question as basically BACKWARDS. That is, FIRST you have a need for a System. NEXT you begin identifying Processes (e.g., Pay Bill) that your System with perform. They you begin "decomposing" the system into Sub-Profecesses. You do this either by: 1) making a bunch of wild guesses, or 2) methodically decomposing the High Level Processes into their component processes. The default is "make wild guesses", because this allows the "business analysts" to write up ANY combination of crap as the "system spec". Then some other guys (and gals) create a poorly structured data base in Oracle and then write data entry and data query screen to fiddle with the data base, which is if course not any more "Relational" then a dozen Excel spreadsheets. Keep in mind that THE important consideration is that The Honcho (aka Program Manager) has the final decision on EVERYTHING, at a High Level. Based on my experience, this is true for both government AND commercial systems. And the "development manager" will most likely have moved on to have retired or moved on to another project before it becomes WIDELY recognized that the system DOES NOT WORK. But I'm in the middle of creating a Witches Coven for D&D, and pausing to define what's WRONG with System Development simply delays my selection of 30mm figures to flesh out the Coven.mahuna
April 25, 2021 at 11:09 AM PDT
Key: "the key to understanding why computers cannot, and most likely will never, be able to perform abductive inferences is the problem of underdetermination of explanation by data." WmAD, President Emeritus, UDkairosfocus
April 25, 2021 at 5:12 AM PDT
Dr. Dembski notes that we, basically, need access to an infinite amount of information in order to explain our intuitive ability to make fairly reliable abductive inferences (aka retroductive inference or inference to the best explanation).
"But the “et cetera” here has no end."
That reminds me of these quotes by Gödel:
“Even if the finite brain cannot store an infinite amount of information, the spirit may be able to. The brain is a computing machine connected with a spirit. If the brain is taken to be physical and as [to be] a digital computer, from quantum mechanics [it follows that] there are then only a finite number of states. Only by connecting it [the brain] to a spirit might it work in some other way.” - Kurt Gödel, Section 6.2.14 from A Logical Journey by Hao Wang, MIT Press, 1996.
“Either mathematics is too big for the human mind, or the human mind is more than a machine.” - Kurt Gödel, as quoted in Topoi: The Categorial Analysis of Logic (1979) by Robert Goldblatt, p. 13
bornagain77
April 25, 2021 at 1:27 AM PDT
