Uncommon Descent Serving The Intelligent Design Community

Jonathan Bartlett: Why I Doubt That AI Can Match the Human Mind


There are fundamental limits on what a calculating machine can do:

Computers are exclusively theorem generators, while humans appear to be axiom generators… My primary reason for doubting that AI can match human intelligence is that the difference between mind and machine is a difference of kind, not of quantity. Understanding the distinction will help us exploit the abilities of each to their maximum potential. More.
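
To make the "theorem generator" half of that claim concrete: a machine can grind out consequences of whatever axioms it is handed, but nothing in the loop adds an axiom. Below is a minimal sketch of such a generator (the propositional axioms and the single modus ponens rule are illustrative choices, not anything from Bartlett's article):

```python
# Toy "theorem generator": forward chaining with modus ponens over
# propositional formulas encoded as nested tuples, e.g. ("->", "p", "q")
# for "p implies q". The axioms below are illustrative only.

def modus_ponens(known):
    """One derivation step: from X and ("->", X, Y), infer Y."""
    return {f[2] for f in known
            if isinstance(f, tuple) and f[0] == "->" and f[1] in known}

def generate_theorems(axioms, max_steps=10):
    known = set(axioms)
    for _ in range(max_steps):
        derived = modus_ponens(known) - known
        if not derived:
            break              # closure reached: nothing new will ever appear
        known |= derived
    return known

axioms = {"p", ("->", "p", "q"), ("->", "q", "r")}
print(generate_theorems(axioms))  # derives "q" and "r" -- only consequences
```

Everything the program prints is already implicit in `axioms`; it has no move that corresponds to positing a new one.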

Jonathan Bartlett is the Research and Education Director of the Blyth Institute.

Also by Jonathan Bartlett: Google Search: Its Secret of Success Revealed

and

Did AI show that we are “a peaceful species” triggered by religion?

Also: Human intelligence as a Halting Oracle (Eric Holloway)

Comments
FTR, I haven't found any formalized versions of Gentzen's consistency proof. I believe transfinite induction is implemented in both Coq and Mizar, however. This stuff is well beyond my ken, frankly. Here's a thread on a Coq forum discussing the Gentzen proof, ordinals, Coq, Agda, etc.
daveS
March 1, 2019 at 4:59 AM PDT
No dice yet, but I ran across this humorous quote:
I think I was told that De Bruijn jokingly liked to say that proving the inconsistency of PA was an "interesting but tricky PhD subject".
daveS
February 28, 2019 at 7:23 AM PDT
johnnyb, You have asserted that PA is obviously consistent upon inspection, to which I just shrug in response. I will say that it's barely conceivable to me that PA is inconsistent; I think just about anyone who has read the axioms for PA would agree. Regarding transfinite induction, I don't see any reason why such a proof cannot be formalized and checked mechanically. I believe that should be possible for any mathematical proof. [Edit: Perhaps Gentzen's proof has been formalized in Coq, for example? I'll look into this when I get some time.]
daveS
February 28, 2019 at 5:39 AM PDT
PA is either consistent or not, and that has nothing to do with my personal state of mind.
Correct. However, knowing that it is consistent is a state of mind. As has been discussed here, (a) PA is obviously consistent upon inspection, but (b) actually proving that requires transfinite induction, and therefore it cannot be proven mechanically. Therefore, it requires faith that our mental abilities actually match rationality. As Plantinga (and even Darwin himself) has shown, this does not follow from Naturalism.
johnnyb
February 28, 2019 at 5:12 AM PDT
johnnyb, I can only speak for myself and not the materialists, but your post raises a question: How can one obtain certainty about arithmetic (or about any matter) by exercising faith? PA is either consistent or not, and that has nothing to do with my personal state of mind.
daveS
February 27, 2019 at 6:38 PM PDT
So you essentially have to assume what you want to prove.
One thing that I think this discussion shows is the primacy of faith in the development of knowledge. There are some who believe that doubt is primary in science and epistemology. However, without faith we cannot even be certain of arithmetic - and then what of the rest of math and science? This is one of the long-term materialist traps. It purports to be equivalent to the scientific enterprise, but, in the end, it rips it to shreds.
johnnyb
February 27, 2019 at 5:50 PM PDT
@DaveS
> I did mention that inductive logic programming would be involved from the start, so that if a large number of attempts to prove a proposition fail, then one might attempt to add the negation of that proposition to one's list of axioms.
One counterexample here: a Gödel sentence would then be considered false within the proof system, making it inconsistent. But in general, that does not seem to be how mathematicians arrive at proofs.
EricMH
February 27, 2019 at 3:05 PM PDT
johnnyb,
They are easily verifiable by inspection, and a verification of them has been shown using transfinite induction (which is, again, not mechanically verifiable). One could say that this is assuming the proof, but, in this case, basic logic rests on it.
Yes, my understanding is that you can't prove the consistency of PA without assuming the consistency of some system at least as strong (or stronger, perhaps). So you essentially have to assume what you want to prove. I know of no way to verify that the axioms of PA are consistent by inspection. Furthermore, at least one competent mathematician has believed he had found a proof that PA is inconsistent (which would be incredibly bizarre, I agree). The alleged proof was withdrawn when a problem was found. But the argument was at least taken seriously.
daveS
February 27, 2019 at 2:58 PM PDT
verifying consistency mechanically is not generally possible. But no human has verified that the axioms for Peano Arithmetic are consistent either.
Hold on a minute, there. No human has verified the axioms mechanically. They are easily verifiable by inspection, and a verification of them has been shown using transfinite induction (which is, again, not mechanically verifiable). One could say that this is assuming the proof, but, in this case, basic logic rests on it. If human intuition is so unreliable that we cannot trust our intuitions about Peano arithmetic, then we should give up math and science. If human intuition *is* reliable, then we must posit a cause for this that is sufficient to explain the effect. This is why many of us find materialism to be false - it violates the basic principle of sufficient reason.
johnnyb
February 27, 2019 at 2:45 PM PDT
EricMH, I did mention that inductive logic programming would be involved from the start, so that if a large number of attempts to prove a proposition fail, then one might attempt to add the negation of that proposition to one's list of axioms. Edit: And yes, I'm assuming some intelligently designed computer programs are created as a starting point.
daveS
February 27, 2019 at 12:51 PM PDT
Ah, but notice you've snuck in a new axiom: that once a billion attempts are made, it can conclude the statement is not satisfiable. That axiom came from the programmer, not the number-checking program.
EricMH
February 27, 2019 at 12:30 PM PDT
EricMH, I suppose this depends on what the computer has to work with. If the Peano axioms are available, then the relation ≤ can be expressed, and the computer might attempt to prove the statement (∃x)(∀y)(y ≤ x) and, after a billion failed attempts at a proof, might be programmed to postulate ¬(∃x)(∀y)(y ≤ x).
daveS
February 27, 2019 at 11:57 AM PDT
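
As a toy model of that procedure, consider the sketch below (illustrative throughout: formulas are nested tuples, the bounded forward-chaining search stands in for a real theorem prover, and the failure-to-negation step is exactly the programmer-supplied rule EricMH points to above):

```python
# Sketch of daveS's proposal: search for a derivation of `goal`; if the
# search fails, postulate the negation. Formulas are nested tuples,
# e.g. ("->", "p", "q"); modus ponens stands in for a real proof search.

def derivable(goal, axioms, max_steps=1000):
    """Brute-force forward chaining with modus ponens, bounded by max_steps."""
    known = set(axioms)
    for _ in range(max_steps):
        new = {f[2] for f in known
               if isinstance(f, tuple) and f[0] == "->" and f[1] in known}
        if goal in known | new:
            return True
        if new <= known:       # closure reached: goal is underivable
            return False
        known |= new
    return False               # budget exhausted: treated the same as failure

def maybe_postulate_negation(goal, axioms):
    if derivable(goal, axioms):
        return set(axioms)
    # The rule "failure means false" comes from the programmer, not
    # from the axioms -- EricMH's objection above.
    return set(axioms) | {("not", goal)}

axioms = {"p", ("->", "p", "q")}
print(maybe_postulate_negation("r", axioms))  # adds ("not", "r")
```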
Yes, and this is already done to some extent. For example: Scientists Use Computer to Mathematically Prove Gödel God Theorem http://www.spiegel.de/international/germany/scientists-use-computer-to-mathematically-prove-goedel-god-theorem-a-928668.html However, there is still a fundamental limitation on this sort of work. How many numbers does a computer need to check to verify there is no largest number? How many numbers does a human have to check? In the former case, there is no end to the checks, and in the latter case a human only needs to check a couple before realizing there is no largest number.
EricMH
February 27, 2019 at 11:44 AM PDT
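
For contrast, the human-style argument ("take any n; n + 1 is bigger") becomes a two-line formal proof once the successor function is in hand, with no checking of cases at all. A sketch in Lean 4 (assuming the standard library's Nat.lt_succ_self):

```lean
-- "There is no largest number": for every n there is a strictly larger m.
-- The witness is n + 1; no enumeration of numbers is involved.
theorem no_largest : ∀ n : Nat, ∃ m : Nat, n < m :=
  fun n => ⟨n + 1, Nat.lt_succ_self n⟩
```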
EricMH, I think I agree with this, although now we're talking about whether computers really know things, rather than their ability to create or postulate axioms. I don't see any barrier to a computer being able to:
* generate various axiom systems
* vet them for obvious inconsistencies (search for proofs of reasonable length of 1 = 0, for example) and relevance to established work
* search for proofs of theorems in the systems which survive this vetting
* write up these proofs in natural-language format
perhaps eventually being able to produce work of publishable quality. That is, to simulate the work of a real human mathematician. Without consciousness or the ability to understand what it is doing. Of course, I would expect it would at this point require a lot of training on human-produced mathematics initially. Just as human mathematicians undergo extensive training.
daveS
February 27, 2019 at 11:04 AM PDT
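
A skeleton of that pipeline might look like the following (hypothetical throughout: random_axiom_system, search_for_proof, and the vetting bound are invented stand-ins; only the control flow tracks daveS's list):

```python
# Skeleton of the generate/vet/search pipeline sketched above. Every
# helper is a hypothetical stand-in; only the control flow is the point.

import random

def random_axiom_system(rng):
    """Stand-in: pick a small random set of candidate formula strings."""
    pool = ["0 != S(0)", "x + 0 = x", "x + S(y) = S(x + y)", "x = S(x)"]
    return set(rng.sample(pool, 2))

def search_for_proof(goal, axioms, max_len):
    """Stand-in for a bounded proof search; returns a proof or None."""
    return None  # a real prover harness would go here

def vet(axioms, max_len=20):
    """Reject a system if a reasonable-length proof of 1 = 0 turns up."""
    return search_for_proof("S(0) = 0", axioms, max_len) is None

def run(n_systems=5, seed=0):
    rng = random.Random(seed)
    survivors = [s for s in (random_axiom_system(rng) for _ in range(n_systems))
                 if vet(s)]
    # Later stages (not sketched): search the survivors for theorems,
    # then render any proofs found in natural language.
    return survivors

print(run())
```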
@DaveS, what I mean by "never know" is stronger than our own lack of Cartesian certainty. Not being able to prove consistency means the computer has zero certainty that its axioms are consistent. However, within mathematics we don't believe our axioms are baseless in this way. Mathematics, out of all the academic fields, never has to retract something once it is proven. Science, on the other hand, is constantly revising its theories about reality. The only really stable portions of science are the mathematical portions.
So, back with the AoI example, the computer could randomly assemble such a statement from a grammar; this is true. But it would never know it is true, in contrast with the 'largest number' thought experiment. Even finitists will not say there is a largest number. Or, if they do, then I propose their largest number, plus one :D
EricMH
February 27, 2019 at 10:40 AM PDT
EricMH,
That is true, but computers can never know, even within some confidence interval, whether an axiom is consistent, whereas we humans are pretty confident certain sets of axioms are consistent.
Maybe? I don't know. I mean computers can't "know" anything I suppose. I would bet that PA is consistent, but that's based partly on the fact that no one has found an inconsistency yet. I don't know if my confidence has much basis, ultimately.
For example, the axiom of infinity cannot be derived and has to be postulated. So, an unassisted computer could never realize there is such a thing as an infinite set. However, a child easily realizes there is no limit to the number of integers with the simple thought experiment of trying to imagine the largest number and then adding one to it.
Axioms are generally not derived from other propositions, so I would think they are always postulated. And of course, computers never "realize" anything. However, a computer could select the Axiom of Infinity out of the set of formulas in some language, and derive consequences from the AoI. I don't see any problem with a computer "creating" or "postulating" the AoI. By the way, there are lots of finitists lurking here, so be careful about making any claims that infinite sets exist! :-)
daveS
February 27, 2019 at 9:57 AM PDT
@DaveS That is true, but computers can never know, even within some confidence interval, whether an axiom is consistent, whereas we humans are pretty confident certain sets of axioms are consistent. For example, the axiom of infinity cannot be derived and has to be postulated. So, an unassisted computer could never realize there is such a thing as an infinite set. However, a child easily realizes there is no limit to the number of integers with the simple thought experiment of trying to imagine the largest number and then adding one to it.
EricMH
February 27, 2019 at 9:37 AM PDT
EricMH, Yes, verifying consistency mechanically is not generally possible. But no human has verified that the axioms for Peano Arithmetic are consistent either. Therefore I don't see that this argument illustrates a qualitative difference between humans and computers.
daveS
February 27, 2019 at 8:36 AM PDT
DaveS, the problem is that a random axiom is unlikely to be consistent, and computationally checking for consistency is not possible in general.
EricMH
February 27, 2019 at 8:22 AM PDT
Well this is very convenient: https://mindmatters.ai/2019/02/artificial-intelligence-must-be-possible-really/
AaronS1978
February 26, 2019 at 7:26 PM PDT
EricMH,
Computers cannot create axioms, period.
I've been wondering about this from time to time ever since I read it here a while back, and I don't understand what the roadblock would be. Couldn't you write a program to choose some random well-formed formula in a formal language, and declare that to be an axiom? For example, take the language of Peano arithmetic. I believe that the following is a well-formed formula in PA: (∀x)(∀y)(S(S(x + y)) = x + S(S(y))) That's not one of the usual axioms given for PA, but it is a theorem in PA (not that this really matters). In any case, a computer could generate this formula and then declare it to be an axiom for some theory (whatever that would mean). What am I missing?
daveS
February 26, 2019 at 2:38 PM PDT
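
A minimal sketch of exactly that program, assuming a tiny fragment of PA's grammar (the production rules and depth limit are invented for illustration; note that nothing here checks whether the declared "axiom" is even consistent with PA, which is the objection EricMH raises above):

```python
# Choose a random well-formed formula in a tiny fragment of the language
# of PA and declare it an axiom. The grammar fragment (quantified
# equations over 0, S, +) and the depth limit are illustrative choices.

import random

def random_term(rng, depth):
    """Random PA term built from variables x, y, zero, S, and +."""
    if depth == 0:
        return rng.choice(["0", "x", "y"])
    if rng.random() < 0.5:
        return f"S({random_term(rng, depth - 1)})"
    return f"({random_term(rng, depth - 1)} + {random_term(rng, depth - 1)})"

def random_wff(rng, depth=3):
    """Universally quantified equation; well formed by construction."""
    return f"(∀x)(∀y)({random_term(rng, depth)} = {random_term(rng, depth)})"

rng = random.Random(42)
print("Declared as an axiom:", random_wff(rng))
```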
Computers cannot create axioms, period. That is their fundamental limitation, per Chaitin's incompleteness theorem. And it's also surprising just how complex the simple tasks we do are. Check out Moravec's paradox. Robots are just another tool, and tools only work within their narrowly constrained environments.
EricMH
February 26, 2019 at 12:02 PM PDT
Yes, but we don't have to emulate the mind in an intelligent machine. We only want to emulate intelligence so that we can build a robot that can do tasks like washing and folding the laundry, mowing the lawn, taking care of the elderly and the sick, or walking into a generic kitchen and making a sandwich. It's coming, and we don't need a mind to do it, just causal principles.
FourFaces
February 26, 2019 at 8:24 AM PDT
I have similar doubts, but I'm not very certain that there will never be a day when a powerful computer can effectively emulate the human mind, for a couple of reasons. First, the hardware we have now is still nowhere near as complex as the human brain. Second, perhaps when better hardware is available, approaches such as inductive logic programming will finally take off, enabling computers to compete with humans at axiom generation.
daveS
February 26, 2019 at 7:27 AM PDT
