Uncommon Descent Serving The Intelligent Design Community

Computer engineer Eric Holloway: Artificial intelligence is impossible


Holloway distinguishes between meaningful information and artificial intelligence:

What is meaningful information, and how does it relate to the artificial intelligence question?

First, let’s start with Claude Shannon’s definition of information. Shannon (1916–2001), a mathematician and computer scientist, stated that an event’s information content is the negative logarithm of its probability.


So, if I flip a coin, I generate 1 bit of information, according to his theory. The coin came down heads or tails. That’s all the information it provides.
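Shannon’s measure is easy to sketch in a few lines of code. The helper below is a minimal illustration of the definition, not something from Holloway’s article:

```python
import math

def self_information_bits(p: float) -> float:
    """Shannon self-information: -log2(p) bits for an event of probability p."""
    return -math.log2(p)

# Each outcome of a fair coin flip has probability 1/2, so one flip yields 1 bit.
print(self_information_bits(0.5))    # 1.0
# Rarer events carry more information: an outcome with probability 1/8 yields 3 bits.
print(self_information_bits(0.125))  # 3.0
```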

However, Shannon’s definition of information does not capture our intuition of information. Suppose I paid money to learn a lot of information at a lecture and the lecturer spent the whole session flipping a coin and calling out the result. I’d consider the event uninformative and ask for my money back.

But what if the lecturer insisted that he has produced an extremely large amount of Shannon information for my money, and thus met the requirement of providing a lot of information? I would not be convinced. Would you?

A quantity that better matches our intuitive notion of information is mutual information. Mutual information measures how much event A reduces our uncertainty about event B. We can see mutual information in action if we picture a sign at a fork in the road. (Eric Holloway, “Artificial intelligence is impossible” at Mind Matters Today)
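The sign-at-a-fork intuition can be made concrete with a toy joint distribution over sign readings (A) and the correct fork (B). The probabilities below are invented purely for illustration:

```python
import math

def mutual_information_bits(joint):
    """I(A;B) = sum over (a,b) of p(a,b) * log2( p(a,b) / (p(a) * p(b)) )."""
    pa = {a: sum(row.values()) for a, row in joint.items()}
    pb = {}
    for row in joint.values():
        for b, p in row.items():
            pb[b] = pb.get(b, 0.0) + p
    mi = 0.0
    for a, row in joint.items():
        for b, p in row.items():
            if p > 0:
                mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

# A perfectly reliable sign: reading "left" always means the left fork is correct.
perfect = {"left": {"go_left": 0.5}, "right": {"go_right": 0.5}}
print(mutual_information_bits(perfect))  # 1.0

# A sign unrelated to the road: its reading says nothing about the correct fork.
useless = {"left": {"go_left": 0.25, "go_right": 0.25},
           "right": {"go_left": 0.25, "go_right": 0.25}}
print(mutual_information_bits(useless))  # 0.0
```

The reliable sign carries one full bit of mutual information (it removes all uncertainty about a fifty-fifty choice), while the uncorrelated sign carries none, no matter how much Shannon information its readings contain on their own.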

See also: Could one single machine invent everything? (Eric Holloway)

So lifelike … Another firm caught using humans to fake AI: Byzantine claims and counterclaims followed as other interpreters came forward with similar stories. According to Qian, something similar happened last year.

and

The hills go high tech: An American community finding its way in the new digital economy. At present, says Hochschild, Ankur Gopal and Interapt are sourcing as many new hillbillies as they can find: “For now, there is so much demand for I.T. workers — 10,000 estimated openings by 2020 in the Louisville metro area alone — that Mr. Gopal is reaching out to new groups.”

Comments
@Nonlin.org mutual information is the consistent usage of the term. Paintings etc. share enormous amounts of mutual information with the metaphysical realm. Even modern art is not generated completely at random, or if it is, that is precisely the point.

@daveS per my argument, if the mind creates information, then it cannot be reproduced computationally. My argument also implies that all AI systems are limited by the fact that they cannot create information, so they require humans in the loop. Further, it implies that machines cannot learn; they can only memorize, and that is what these neural networks are doing. Just as learning by memorizing for a test is deficient, so are these neural networks.

EricMH
September 27, 2018, 07:22 PM PDT
EDTA @11 It's not always so clear-cut; see my examples. In addition, a painting, a song, etc. is information without reducing any uncertainty about a secondary event B, and its impact is most definitely not measurable. Everyone, Shannon included, misuses the word 'information'; they should use 'data' instead. Yes, sometimes that 'data' (not 'information') reduces some uncertainty by a measurable quantity, but not always.

Nonlin.org
September 27, 2018, 06:19 PM PDT
I think that most people are missing why agency is required for this. There are multiple reasons, but here are two:

1) Most people don't realize the supreme reduction in search space that takes place, even for just associating a thermometer and the outside temperature. If a machine is given *no* prior information, then sussing out (a) that something like temperature exists and affects us in profound ways, (b) that this little object has anything at all to do with temperature, (c) that the readings in the object tell me something about that temperature, and (d) how those markings actually correlate to temperature are all HUGE search-space reductions. Now, if someone arbitrarily limited the search space to, say, thermometer readings and outside events, then an AI might be able to establish the correlation. However, that gigantic reduction in search space is mutual information, and it is supplied by the programmer. In my own writing, I often refer to this as "parameterizing the search space", but the function is the same, and mutual information is a more general way to describe it.

2) The second issue is that it requires teleology to decide what sorts of mutual information we should establish. That is, it is pointless to create correlations if there is nothing to use them for, and you can't use them *for* anything without teleology. Machines don't have teleology unless they are programmed with it, again imported from outside.

johnnyb
September 27, 2018, 10:53 AM PDT
@Mung learning also requires creating mutual information. In your example, someone invented the thermometer, and I created the link in my mind between the thermometer reading and the outside temperature.

EricMH
September 27, 2018, 08:51 AM PDT
Given that mutual information is merely a measure of how much knowing one variable, X, can reduce the uncertainty about a second variable, Y, what does it mean to say that humans create mutual information? Let's say that you don't know the temperature outside. Call that X. So you look at the thermometer. Call that Y. Now Y has reduced your uncertainty about X. But in what sense has the mutual information been created?

Mung
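The thermometer setup in this question can be made concrete with a toy calculation; the probabilities below are invented for a slightly noisy thermometer and are purely illustrative:

```python
import math

def entropy_bits(dist):
    """Shannon entropy in bits of a probability table {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical joint distribution of outside temperature X and thermometer reading Y.
# The thermometer usually, but not always, tracks the actual temperature.
joint = {("cold", "low"): 0.45, ("cold", "high"): 0.05,
         ("warm", "low"): 0.05, ("warm", "high"): 0.45}

# Marginal distributions of X and Y, summed out of the joint table.
px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

# I(X;Y) = H(X) + H(Y) - H(X,Y): the expected reduction in uncertainty
# about the temperature X from seeing the reading Y.
h_x = entropy_bits(px)
mi = h_x + entropy_bits(py) - entropy_bits(joint)
print(f"H(X) = {h_x:.3f} bits, I(X;Y) = {mi:.3f} bits")
```

Note that looking at the thermometer does not create this quantity; I(X;Y) is fixed by the joint distribution, which captures the sense of the question: the correlation must already exist before any observation can reduce uncertainty.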
September 27, 2018, 07:14 AM PDT
EricMH, Thanks for the detailed response. I had not heard of Nectome before. For reference, their mission statement:
Our mission is to preserve your brain well enough to keep all its memories intact: from that great chapter of your favorite book to the feeling of cold winter air, baking an apple pie, or having dinner with your friends and family. If memories can truly be preserved by a sufficiently good brain banking technique, we believe that within the century it could become feasible to digitize your preserved brain and use that information to recreate your mind. How close are we to this possibility? Currently, we can preserve the connectomes of animal brains and are working on extending our techniques to human brains in a research context. This is an important first step towards the development of a verified memory preservation protocol, as the connectome plays a vital role in memory storage.
Certainly the part I bolded could turn out to be mathematically or logically impossible, although I believe that's still an unsolved problem. When they talk about possibly achieving some goal within the century, they don't have a very detailed roadmap, to say the least. Perhaps it's outright fraud. I don't know enough about neural networks to comment on issues such as overfitting, hacking, or the critical role of crowdsourcing, but it's not clear to me that there are claims of mathematical or logical impossibility involved. If you are pointing to the fact that companies that use things such as neural networks de-emphasize these issues for short-term gain, then I do agree.

daveS
September 27, 2018, 05:30 AM PDT
@daveS, the most egregious example is the Singularity cult invented by Ray Kurzweil, which believes that since the mind is software, they can digitize their brains and live forever on a CPU. There is a startup now that is based on this premise: http://www.nydailynews.com/life-style/silicon-valley-startup-digitizing-brains-100-fatal-process-article-1.3876200

Less extreme, I'd say the emphasis on neural networks and other very complex machine-learning models is misplaced, as is the AI and ML hype in general. They appear to "learn" a lot of information, but they are actually overfitting, which makes them prime targets for exploitation. So a burgeoning field is AI hacking, which will become even more of an issue as AI is used in critical systems such as driverless cars.

The direction SV should go instead is human-in-the-loop systems, which maximize the use of both AI and our unique ability to create information. And behind the scenes, this is what the big SV companies actually do. They just don't widely publicize how much their fancy algorithms are driven by human crowdsourcing. They all have their own private crowdsourcing platforms, like Amazon's Mechanical Turk service. I learned this at a conference in the field of human computation, HCOMP 2016.

EricMH
September 26, 2018, 09:26 PM PDT
Nonlin @8:
>Is “mutual information” really information?
Yes, it can be measured and quantified, at least in controlled scenarios like the ones Shannon was describing. It is in play anytime learning something helps with learning something else that is related, or when communicating in a common language.

EDTA
September 26, 2018, 06:42 PM PDT
EricMH, I should have been a little more careful with my definition of AI so as to rule out forks. But yes, I agree with your point. :-)

I don't know anything about AI, aside from briefly playing around with simple neural-net models and occasionally browsing Hacker News. But from what I've seen, the people who work in the field view it as just a tool (although a surprisingly effective one) and certainly not something that can achieve the logically impossible, such as a violation of a mathematical theorem. Are there specific projects under way in Silicon Valley that you can point to whose objectives are clearly logically or mathematically impossible?

Edit: I should add that the field of AI is famous for hyping itself and then crashing when it doesn't meet expectations, but I would guess there are many mathematically sophisticated people who work in AI, and they wouldn't be so naive as to think they can violate mathematical laws.

daveS
September 26, 2018, 04:47 PM PDT
@DaveS, yes, if we define AI as machines designed to perform tasks as well as or better than humans, then of course we can build AI. A fork is an instance of AI because it is better at manipulating food than our fingers. A car is an instance of AI because it transports much more effectively than humans. AI becomes another name for the age-old tool. However, if we define AI as something that replicates humans' ability to *create* mutual information, the dream that infects Silicon Valley, then AI is logically impossible.

EricMH
September 26, 2018, 04:17 PM PDT
"Mutual information measures how much event A reduces our uncertainty about event B"
Is "mutual information" really information? Consider that the value of A is whole dependent on us, the users. Show event A to a cat and it might not mean anything. On the other hand, if one way or another I convey the concept of 'a circle', is it not information even when it does not resolve any uncertainty?Nonlin.org
September 26, 2018, 03:13 PM PDT
EricMH,
the programming would be the creation of mutual information.
Hm. That sounds reasonable, but doesn't that allow that AI actually is possible? By AI, I mean intelligently designing machines to perform tasks (usually at a level comparable to or better than that of humans), for example, playing a game or classifying images according to their subject. I don't mean that AI will necessarily rival humans' general intelligence (time will tell), but I think the term AI is usually understood to mean what I described above.

daveS
September 26, 2018, 11:55 AM PDT
@daveS the programming would be the creation of mutual information. And yes, my argument implies that our supra-Turing minds are necessary to do pretty mundane stuff as well as amazing things.

EricMH
September 26, 2018, 11:34 AM PDT
Fasteddious, I agree with most of what you say. The abilities the person exhibits when creating and installing the sign do not seem especially impressive to me. Essentially, the designer extracts information from the environment (the topology of the local road system, for one thing) and builds the sign to convey some of that information. It seems like a rather mundane task, and something that a computer could easily be programmed to handle.

daveS
September 26, 2018, 09:59 AM PDT
Of course computers and related systems can churn out information. Most of the information presented on The Weather Network comes from automated machines collecting raw data and transmitting it to computers, which collate and massage it into what we see as radar images, weather maps, or model forecasts, mostly without direct human intervention. However, all those processes and all that information require a lot of human design and intelligence to make them work and to make sense of them. And it is only humans who understand what it all means. Any sensor feeding signals to a computer, which then converts them to a reading and logs the data, can be said to be generating information, but the information is strictly defined and constrained by human intelligence, and the data is only of use to humans or to other computers designed by humans to process the data in defined ways. Thus, the computer and associated machinery are just extensions of human intelligence, just as a car or an airplane is an extension of our physical abilities.

Fasteddious
September 26, 2018, 09:37 AM PDT
EricMH (and others):
This raises the question: What can create mutual information? A defining aspect of the human mind is its ability to create mutual information. For example, the traffic sign designer in the example above created mutual information. You understood what the sign was meant to convey.
If I understand the example correctly, the sign painter has created and installed a sign, which will allow pedestrians to be more certain about getting to their desired destination. Isn't it true that computers can perform similar feats? I can type my home address and the address of my destination into a computer and receive a detailed set of instructions that will increase the chance of my getting to the destination. In fact, it would seem that this software could actually be used to design the street signs themselves and put the sign designer out of a job.

A different example: Suppose I'm going to play a game of Go with a professional 9-dan player. I don't play at all, but I will almost certainly win if I follow the instructions of a state-of-the-art Go program (say, AlphaGo Zero). Such a program literally provides me with "signs", guiding me through the state space of Go, drastically increasing the certainty that I will win.

daveS
September 26, 2018, 05:37 AM PDT
Shannon didn't really mean information. He was only concerned with data transmission:
Shannon wrote to Vannevar Bush at MIT in 1939: “I have been working on an analysis of some of the fundamental properties of general systems for the transmission of intelligence”… and then a few engineers especially in the telephone lab began speaking of information... A Mathematical Theory of Communication By C. E. SHANNON: “Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.”
http://nonlin.org/biological-information/ 1. ‘Information’, ‘data’ and ‘media’ are distinct concepts. Media is the mechanical support for data and can be any material, including DNA and RNA in biology. Data is the symbols that carry information and is stored and transmitted on the media; ACGT nucleotides forming strands of DNA are biological data. Information is an entity that answers a question and is represented by data encoded on a particular medium. Information is always created by an intelligent agent and used by the same or another intelligent agent. Interpreting the data to extract information requires a deciphering key, such as a language. For example, proteins are made of amino acids selected from nucleotides based on a translation table (the deciphering key). ...

Nonlin.org
September 25, 2018, 05:46 PM PDT
Where in the cited article is the definition of AI?

PavelU
September 25, 2018, 05:32 PM PDT