Uncommon Descent Serving The Intelligent Design Community

Computer engineer Eric Holloway: Artificial intelligence is impossible


Holloway distinguishes between meaningful information and artificial intelligence:

What is meaningful information, and how does it relate to the artificial intelligence question?

First, let's start with Claude Shannon's definition of information. Shannon (1916–2001), a mathematician and computer scientist, stated that an event's information content is the negative logarithm (base 2, when measured in bits) of its probability.

[Photo: Claude Shannon / credit Conrad Jakobs]

So, if I flip a coin, I generate 1 bit of information, according to his theory. The coin came down heads or tails. That’s all the information it provides.
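That arithmetic, as a minimal Python sketch (an illustration of Shannon's formula, not code from the article):

```python
import math

def shannon_info_bits(p: float) -> float:
    """Shannon's measure: an event with probability p carries
    -log2(p) bits of information."""
    return -math.log2(p)

print(shannon_info_bits(0.5))    # fair coin flip -> 1.0 bit
print(shannon_info_bits(1 / 6))  # one roll of a fair die -> ~2.585 bits
```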

However, Shannon’s definition of information does not capture our intuition of information. Suppose I paid money to learn a lot of information at a lecture and the lecturer spent the whole session flipping a coin and calling out the result. I’d consider the event uninformative and ask for my money back.

But what if the lecturer insisted that he had produced an extremely large amount of Shannon information for my money, and thus met the requirement of providing a lot of information? I would not be convinced. Would you?

A quantity that better matches our intuitive notion of information is mutual information. Mutual information measures how much event A reduces our uncertainty about event B. We can see mutual information in action if we picture a sign at a fork in the road. More.  (Eric Holloway, “Artificial intelligence is impossible” at Mind Matters Today)
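To make the fork-in-the-road example concrete, here is a minimal Python sketch (an illustration, not from the article) that computes mutual information from a joint distribution. A sign that reliably tracks which branch leads to town shares a full bit of mutual information with the layout of the roads; a sign that picks a direction at random shares none, however much Shannon information its coin flips generate.

```python
import math
from collections import defaultdict

def mutual_information(joint):
    """I(A;B) = sum over (a,b) of p(a,b) * log2(p(a,b) / (p(a)*p(b))).
    `joint` maps (a, b) pairs to their joint probability."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# A reliable sign: it says "left" exactly when the town is to the left.
reliable = {("left", "left"): 0.5, ("right", "right"): 0.5}
# A random sign: what it says is independent of where the town is.
random_sign = {(s, t): 0.25 for s in ("left", "right")
                            for t in ("left", "right")}

print(mutual_information(reliable))     # 1.0 bit
print(mutual_information(random_sign))  # 0.0 bits
```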

See also: Could one single machine invent everything? (Eric Holloway)

So lifelike … Another firm caught using humans to fake AI: Byzantine claims and counterclaims followed as other interpreters came forward with similar stories. According to Qian, something similar happened last year.

and

The hills go high tech: An American community finding its way in the new digital economy. At present, says Hochschild, Ankur Gopal and Interapt are sourcing as many new hillbillies as they can find: "For now, there is so much demand for I.T. workers — 10,000 estimated openings by 2020 in the Louisville metro area alone — that Mr. Gopal is reaching out to new groups."

Comments
@daveS, yes that's correct. The key is that the axioms must be consistent, i.e., the system cannot prove and disprove the same statement. However, algorithmically certifying that a set of axioms is consistent requires scanning an infinite set, so it is undecidable. This is the reason Gödel's incompleteness theorem works. And all it takes is one inconsistency for the entire system to fall apart: a principle of logic holds that if your axiomatic system contains a single contradiction, then the system can prove absolutely anything, and becomes useless.

I'm not sure why mathematics is not considered a clear-cut example of humans doing something algorithms cannot. The counterarguments I've seen start with the circular premise that humans are algorithmic, or the strawman that humans cannot solve every problem. Neither is a good argument against the notion that mathematics demonstrates the mind is non-computational. But there is probably some good counterargument out there I haven't heard.
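A minimal truth-table check of that explosion principle, sketched in Python: since "P and not P" is never satisfied, the conditional "(P and not P) implies Q" holds for every Q.

```python
from itertools import product

# Explosion ("ex falso quodlibet"): (P and not P) -> Q is true under
# every truth assignment, because the antecedent is never satisfied.
print(all((not (p and not p)) or q
          for p, q in product([True, False], repeat=2)))  # True
```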
EricMH, October 16, 2018 at 06:10 AM PDT
PPS: One last try. If you try to give the theorem prover the ability to add new axioms to its axiom system by including axiom-generating code, you have essentially added those axioms yourself, in compressed form, to the theorem prover.
daveS, September 30, 2018 at 05:21 AM PDT
PS: The two options I describe don't produce exactly the same results, of course. Under option 1 you will get theorems for the theory of abelian groups only.
daveS, September 29, 2018 at 07:01 PM PDT
EricMH,
At least in the realm of mathematics this seems to be happening.
I guess there must be some question about it? Otherwise this would provide a clear-cut example of something the human mind is capable of, but which a computer cannot achieve? Regarding this:
Also, the automated proof system cannot increase its set of axioms to increase the number of things it can prove due to the same problem of the uncomputability of Kolmogorov complexity.
I'm not clear what this means in concrete terms, so I'll try to ask a question using an example. Suppose someone has written a theorem prover for group theory. Initially, it just includes the four basic group axioms. If we wanted this theorem prover to be able to prove theorems about specific kinds of groups (for example, abelian groups), one option would be to explicitly add that axiom to our program. Another option is to write a routine that would generate formulas in the language of the theory (for example, "for all x, for all y, x*y = y*x" would be one) and then add them to our list of axioms. But either way, whether we explicitly add the axiom or write some code to generate axioms, we are essentially bundling all that information for the axiom(s) into the code. Is that an illustration of what it means to say the automated theorem prover cannot increase its set of axioms?
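A minimal Python sketch of those two options (hypothetical axiom strings only; no real prover attached): whether the commutativity axiom arrives as literal data or as the output of generating code, its information is bundled into the program text either way.

```python
# The four basic group axioms, as (hypothetical) strings a prover consumes.
GROUP_AXIOMS = [
    "forall x,y,z: (x*y)*z = x*(y*z)",          # associativity
    "forall x: e*x = x and x*e = x",            # identity
    "forall x: exists y: x*y = e and y*x = e",  # inverses
    "forall x,y: x*y is in G",                  # closure
]

# Option 1: explicitly add the abelian axiom as data.
abelian_v1 = GROUP_AXIOMS + ["forall x,y: x*y = y*x"]

# Option 2: a routine whose code generates the same formula.
def generate_commutativity(u="x", v="y"):
    return f"forall {u},{v}: {u}*{v} = {v}*{u}"

abelian_v2 = GROUP_AXIOMS + [generate_commutativity()]

# In this collapsed sketch the two coincide (a more general generator
# could emit further formulas, per daveS's PS above); either way the
# axiom's information lives in the program text itself.
assert abelian_v1 == abelian_v2
```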
daveS, September 29, 2018 at 06:38 PM PDT
@daveS,
Are these things provable by humans? This is beyond my ken.
That's a good question. At least in the realm of mathematics this seems to be happening. Humans are ever increasing the number of axioms we can work with, and thus ever increasing the number of things that can be proven.
EricMH, September 29, 2018 at 05:28 PM PDT
EricMH, Regarding the first paragraph, that's interesting to know. Are these things provable by humans? This is beyond my ken.

The second paragraph raises questions more accessible to me. Yes, it is crucial that AlphaGo at some point became better at playing Go than humans. I don't conclude from that that human intelligence is computational, though. Human intelligence is a mystery to me.

What I find cool is that we are talking about a game with thousands of years of history and tradition behind it. Now, for the first time ever, machines can beat the best humans at this game. I envy those with a deep knowledge of the game, who can now watch and appreciate matches between AIs at a higher level than humans are capable of. I think it would be fascinating to see if new strategies are developed (or "found") by these machines. And perhaps human players can learn from computers, thus increasing their level of play also. I believe this has happened to some extent in chess (which I don't know anything about either).

Spectators will always want to see matches between competitors at the highest level. I don't care a great deal whether the competitors are humans or computers. And I certainly don't think AI is leading us toward an all-powerful mind or immortality. I just expect to see a gradual increase in the power of these machines (within the bounds of the laws of physics), perhaps with the occasional jump or period of stagnation.
daveS, September 29, 2018 at 02:02 PM PDT
@daveS here's one example of computational limits in the area of automated proof deduction: no automated proof system can prove the Kolmogorov complexity of a bitstring is greater than the size of its axioms. I.e., the set of axioms can be represented as a finite bitstring, and the length of this bitstring limits what can be proven. Also, the automated proof system cannot increase its set of axioms to increase the number of things it can prove, due to the same problem of the uncomputability of Kolmogorov complexity.

Regarding the religion aspect, why are results such as AlphaGo and the other DeepMind game-playing AI projects considered so cool? The games themselves are not useful, so the 'cool' aspect is that AI seems to be exceeding human intelligence, suggesting that human intelligence is itself computational. There is no similar hype surrounding forks and cars, even though both of these exceed human capabilities. So, the question is: what is so astonishing about a tool that seems to replace human intelligence? I submit it is precisely the religious implications: that we can create an all-powerful mind (create a god in our image) and that we could also possibly give our own minds immortality. Exactly the idolatry quote that JohnnyB brought up.
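The limit described in the first paragraph is Chaitin's incompleteness theorem, and the reason it holds fits in a few lines. A minimal Python sketch of the argument, where `enumerate_proofs` and `witness_of` are hypothetical stand-ins for a real proof checker (this illustrates the reasoning; it is not working search code):

```python
def enumerate_proofs(axioms):
    """Hypothetical: yield every proof derivable from `axioms`,
    shortest first (a real system could dovetail over derivations)."""
    raise NotImplementedError  # stand-in for illustration only

def witness_of(proof, bound):
    """Hypothetical: return x if `proof` concludes "K(x) > bound",
    else None."""
    raise NotImplementedError  # stand-in for illustration only

def find_incompressible(axioms, bound):
    # Suppose the axiom set could prove "K(x) > bound" for some x.
    for proof in enumerate_proofs(axioms):
        x = witness_of(proof, bound)
        if x is not None:
            return x  # this short program would then output that x

# Contradiction: find_incompressible plus the axioms is a description
# of x only O(1) bits longer than the axioms themselves, so K(x) can
# never exceed |axioms| + O(1), and no such proof can exist.
```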
EricMH, September 29, 2018 at 01:08 PM PDT
@R J Sawyer, I'm saying there is a quantitative line AI cannot cross. There is some measurable, empirical task that humans can do which AI will never be able to do. I'm making a hard-science sort of claim; that's why I refer to mutual information, which at least in theory could be measured, so we could detect whether it has increased. What the actual experiment would look like is harder to pin down. I've made some attempts, but nothing that I could publish. However, the question does not seem to be outside the realm of science to address.
EricMH, September 29, 2018 at 10:46 AM PDT
Eric and Dave, interesting discussion. Eric, when you say that there is a line that AIs will never be able to cross, I assume that you are referring to consciousness. But I would argue that this is more a line that we would never acknowledge them crossing than one that is not possible for them to cross. Since we can't even agree on what consciousness is, I guess this is understandable. If, through interactions with an AI, using any test that humans would pass 95% of the time, we can't distinguish it from a conscious being, how can we conclude that it is not conscious?
R J Sawyer, September 29, 2018 at 10:07 AM PDT
EricMH, I do agree (to the extent that I understand "AI") with your first two paragraphs. I'm just starting to (attempt to) read a book on automated theorem proving, and even in that realm, I can see how you need human (or other intelligent) input to get off the ground.

Clearly I do disagree with the second two paragraphs, probably because religion is not an integral part of my worldview. I am interested to a degree in religion, because most of those around me are religious, but I don't think of things in religious terms as a rule. For me, the hype (by which I mean something like "irrational interest in") around AI is just the "cool factor". As I said above, when I have my "rational" hat on, I think in terms of self-driving cars and other practical benefits of AI. When I have my "non-rational" hat on, I think of less practical but fascinating things such as AlphaGo, computer-generated art and music, and even mathematical proofs discovered by machine (with modest results so far, I take it).

You might recall that during the match between AlphaGo and Lee Sedol, there was a particular move by AlphaGo that Go experts found perplexing, and IIRC, it appeared that Lee thought Aja Huang might have mistakenly put the stone in the wrong place. It turned out later to be a key move in AlphaGo's victory. That was fascinating to watch. The match in general was a watershed moment; it looks like humans may never again be able to defeat the best Go-playing computer programs. At the same time, I don't have any misconceptions that this computer program can in any way compete with humans in a general setting. It runs on a machine, which could in principle be made out of bricks, I suppose, with no electronics. It's a very cool thing, though.
daveS, September 29, 2018 at 07:52 AM PDT
I see AI as a very useful tool that may free us from having to do tedious or dangerous work, thus allowing us to be more creative, which is one of the attributes that distinguish us from other biological systems. We were made to be positively creative and more efficiently productive for the well-being of all people. AI may help to facilitate that important function. Obviously AI could be used for evil purposes as well; that's inevitable in this sinful world. However, I don't believe that "conscious" AI will ever happen. That's nonsense. We can't create conscious beings having free will. No matter how sophisticated the robots may ever be, they'll remain programmed machines.
PaoloV, September 29, 2018 at 07:21 AM PDT
@daveS, yes I agree there is much practical utility in automation using AI and ML techniques. However, there is a dividing line between what computers can copy pretty well and what is beyond the reach of computers. Unfortunately, there is not much clarity about where that dividing line is, and attempting to cross it can lead to problems. A better approach, known as intelligence augmentation, is where the goal is not to completely eliminate human interaction but to algorithmically improve human interaction.

All that being said, the religious side does seem to be the underlying motivation and the source of the undying and unfulfilled hype. There is a belief that AI can ultimately do what human intelligence can do, so short-term failures do not dissuade the hype. This is similar to the hype around the constant failures of evolutionary theory. Otherwise, it is hard to explain why there is so much hype around what is essentially a fancier form of control systems and non-linear regression. I'm sure if we called AI "non-linear regression" the hype would disappear overnight, which makes it clear the religious dimension is the driving factor behind the AI hype.
EricMH, September 29, 2018 at 07:11 AM PDT
EricMH,
AI does look like the modern form of idolatry, complete with religion, priests and even an afterlife.
:-o I can see we have very different perspectives on this issue. I see the current hype around AI as part and parcel of our free-enterprise system. I witnessed a sales pitch by a team from a multi-billion-dollar company recently, and of course they mentioned that they have an AI (I immediately rolled my eyes; who doesn't have AIs these days?). Now I think it's probably the case that their AI does provide value for their clients, but my impression was that they simply wanted to say the magic word "AI" at some point during the presentation. But their job is to sell their product. They weren't lying about anything. There were no claims of achieving the mathematically or logically impossible. They didn't say their AIs create mutual information. It's just business.

Even the Nectome people aren't lying, as far as I can tell. It's clear from their website that their "clients" will certainly die. And if Nectome does manage to reconstruct their brain/mind at some point in the future, that mind will not be "them", but rather a copy, so "they" won't enjoy an afterlife. But certainly there is a lot of salesmanship going on. Eventually, though, AIs in free enterprise will have to pay for themselves.

My interest in AI (as a layperson and consumer) is more around practical devices which are clearly feasible and which can improve our lives, for example self-driving cars. We had a horrible accident in my area last year: a head-on collision on a desolate, straight highway, during the day in perfect weather, and everyone in both cars died. We actually have quite a few accidents like this, which I think could likely be prevented with (what some call) AI.
daveS, September 29, 2018 at 06:12 AM PDT
@JohnnyB, great point. AI does look like the modern form of idolatry, complete with religion, priests, and even an afterlife. @daveS, yes, but this is largely due to actual humans behind the scenes powering the AI: https://mindmatters.today/2018/09/so-lifelike/ It is ironic that one of the major platforms for this crowdsourcing is called Mechanical Turk, which was the original "AI": a man hidden in a mechanical contraption who could beat humans at chess, tricking them into believing there was a true chess-playing robot.
EricMH, September 28, 2018 at 10:39 AM PDT
EricMH, If I may jump in:
It is only in recent history when we suddenly are obsessed with our latest tool replacing us. A good question to ask is: why?
I think with the advent of virtual assistants such as Siri and Alexa, we are getting to the point where we can envision AIs which are nearly indistinguishable from humans, at least in certain contexts. They are already good enough to "replace" us in some ways. That is both a major technological achievement and a frightening thing.
daveS, September 28, 2018 at 10:11 AM PDT
@EricMH: Actually, it is not so recent:
Surely he cuts cedars for himself, and takes a cypress or an oak and raises it for himself among the trees of the forest. He plants a fir, and the rain makes it grow....Half of it he burns in the fire....But the rest of it he makes into a god, his graven image. He falls down before it and worships; he also prays to it and says, "Deliver me, for you are my god." (Isaiah 44:14-17)
johnnyb, September 28, 2018 at 09:49 AM PDT
EricMH:
@R J Sawyer, another perspective is the goal posts fail to define human intelligence, and we learn this once we reach the goal posts.
I am sure that this is true, at least in part.
R J Sawyer, September 28, 2018 at 09:49 AM PDT
@R J Sawyer, another perspective is that the goal posts fail to define human intelligence, and we learn this once we reach the goal posts. It's also a matter of which goal posts you are talking about. We have been making tools to improve on our native capabilities for millennia, but none of our tools have ever replaced us, nor have we ever mistaken our tools for ourselves. It is only in recent history that we have suddenly become obsessed with our latest tool replacing us. A good question to ask is: why?
EricMH, September 28, 2018 at 09:22 AM PDT
EricMH @25. I am by no means an expert in information theory or AI. My point is simply that it is our nature to keep shifting the goal posts such that we remain exceptional. My example with sign language demonstrates this. But we have done the same thing with AI. Its definition has changed over the years as each ceiling was crashed through.
R J Sawyer, September 28, 2018 at 07:58 AM PDT
@nonlin, I agree we are just representing abstract concepts. If I burn my math books, I have not destroyed mathematics itself.
EricMH, September 28, 2018 at 07:57 AM PDT
@daveS, if it is true that humans create mutual information, then computers will never simulate human behavior to an indistinguishable level, except within a narrow domain.
EricMH, September 28, 2018 at 07:52 AM PDT
EricMH @18: So if mutual information doesn't work for all information (as shown), let's change that. What's the big deal? And what do you mean by this:
Paintings etc share enormous amounts of mutual information with the metaphysical realm. Even modern art is not generated completely at random, or if it is that is precisely the point.
It's not that information shares something with the metaphysical. Information is in fact abstract, like math, not physical. And what passes for information in IT, physics, etc. is actually just data representing that information. Think about it: http://nonlin.org/biological-information/
Nonlin.org, September 28, 2018 at 07:35 AM PDT
EricMH,
The problem is there is a subtle equivocation going on in the field, where people think that by iteratively improving on current AI systems we will get something that equals or exceeds human intelligence. The underlying assumption is that there is no intrinsic difference between an algorithm and the human mind. My argument is that intelligence and algorithms are fundamentally different things, because intelligence can create mutual information and algorithms cannot. So, the transition to human+ level intelligence from existing AI systems is logically impossible.
Do you think it's possible that computers could one day simulate human behavior well enough to be practically indistinguishable from humans? (In other words, pass the Turing test, I guess.) If so, would that have an impact on your position?
daveS, September 28, 2018 at 07:15 AM PDT
@JohnnyB makes a great point: without preexisting teleology there is nothing to make mutual information with. The whole problem is that math notation is syntactic. There is a whole different realm of metaphysical truth that notation can describe but has no intrinsic link to. For example, Gödel proved mathematical notation cannot describe all possible mathematical truths. I'm using the term "notation" instead of math because, per Gödel, mathematics itself is a separate entity from the notation we use to describe it.
EricMH, September 28, 2018 at 06:54 AM PDT
@R J Sawyer, what do you think of my argument that if the human mind creates mutual information, then human-level AI is impossible?
EricMH, September 28, 2018 at 06:51 AM PDT
@daveS, that is a good point. There is certainly a semantic barrier here. We have systems already that people call AI and ML. The problem is there is a subtle equivocation going on in the field, where people think that by iteratively improving on current AI systems we will get something that equals or exceeds human intelligence. The underlying assumption is that there is no intrinsic difference between an algorithm and the human mind. My argument is that intelligence and algorithms are fundamentally different things, because intelligence can create mutual information and algorithms cannot. So the transition from existing AI systems to human+ level intelligence is logically impossible.

Not coincidentally, this equivocation between improving AI systems and reproducing human intelligence is very similar to the equivocation between microevolution and macroevolution. Both equivocations involve the same fundamental problem of information creation.
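EricMH's core claim here, that algorithms cannot create mutual information, parallels the data processing inequality of information theory. A minimal sketch (the connection to his argument is a gloss, not his code): deterministic post-processing of X can preserve or destroy, but never increase, mutual information with Y.

```python
import math
from collections import defaultdict

def mutual_information(joint):  # same helper as in the sketch near the top
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Data processing inequality: for deterministic Z = f(X),
# I(Z;Y) <= I(X;Y). Here Y is the parity of X, and f throws it away.
joint_xy = {(x, x % 2): 0.25 for x in range(4)}  # X uniform on 0..3
f = lambda x: x // 2                             # lossy post-processing
joint_zy = defaultdict(float)
for (x, y), p in joint_xy.items():
    joint_zy[(f(x), y)] += p

print(mutual_information(joint_xy))  # 1.0 bit
print(mutual_information(joint_zy))  # 0.0 bits: processing cannot add MI
```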
EricMH, September 28, 2018 at 06:50 AM PDT
R J Sawyer:
My prediction is that if we are capable of producing an AI that meets the current criteria for AI, we will simply change the criteria.
Typical evo answer for any and everything.
Then we taught a gorilla to use sign language.
And how did it work with other gorillas? Did they understand him?
Then the arguments started flying that the gorilla wasn't using proper grammar and syntax.
Or that it wasn't talking, which was the original point.
We have an obsession with convincing ourselves that we are at the pinnacle of life on this planet.
We don't need convincing, as that is what the science says.
ET, September 28, 2018 at 06:46 AM PDT
When people talk about whether or not AI is possible, human exceptionalism always rears its ugly head. My prediction is that if we are capable of producing an AI that meets the current criteria for AI, we will simply change the criteria. I remember the same thing happening when it was thought that humans were the only animal capable of using language. Then we taught a gorilla to use sign language. Then the arguments started flying that the gorilla wasn't using proper grammar and syntax. We have an obsession with convincing ourselves that we are at the pinnacle of life on this planet.
R J Sawyer, September 28, 2018 at 06:32 AM PDT
daveS:
According to the definitions used by those who work in AI, artificial intelligence and machine learning already exist,...
Any examples? According to Wikipedia:
The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it".
And that has never been accomplished.
ET, September 28, 2018 at 05:49 AM PDT
EricMH, After this back-and-forth, I think my issues with the linked article and title of this thread are all or mostly semantic. Which is not to say they are unimportant to you getting your message out. According to the definitions used by those who work in AI, artificial intelligence and machine learning already exist, so it's jarring to read that AI is impossible or that machines cannot learn. Have you considered using slightly different language, talking about "limits on AI", for example?
daveS, September 28, 2018 at 05:03 AM PDT