Computer engineer Eric Holloway: Artificial intelligence is impossible

Holloway explains what meaningful information is and how it bears on the question of artificial intelligence:

What is meaningful information, and how does it relate to the artificial intelligence question?

First, let’s start with Claude Shannon’s definition of information. Shannon (1916–2001), a mathematician and computer scientist, stated that an event’s information content is the negative logarithm of its probability.

[Photo: Claude Shannon / Conrad Jakobs]

So, if I flip a coin, I generate 1 bit of information, according to his theory. The coin came down heads or tails. That’s all the information it provides.
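
To make the formula concrete, here is a minimal Python sketch of Shannon’s self-information, I(x) = -log2 P(x), measured in bits (the example events are ours, chosen for illustration):

    import math

    def self_information(p: float) -> float:
        """Bits of Shannon information carried by an event of probability p."""
        return -math.log2(p)

    print(self_information(0.5))    # a fair coin flip: 1.0 bit
    print(self_information(1 / 6))  # one roll of a fair die: ~2.585 bits

A rare event (small p) carries many bits; a certain event (p = 1) carries none.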

However, Shannon’s definition of information does not capture our intuition of information. Suppose I paid money to learn a lot of information at a lecture and the lecturer spent the whole session flipping a coin and calling out the result. I’d consider the event uninformative and ask for my money back.

But what if the lecturer insisted that he had produced an extremely large amount of Shannon information for my money, and had thus met the requirement of providing a lot of information? I would not be convinced. Would you?

A quantity that better matches our intuitive notion of information is mutual information. Mutual information measures how much event A reduces our uncertainty about event B. We can see mutual information in action if we picture a sign at a fork in the road. More.  (Eric Holloway, “Artificial intelligence is impossible” at Mind Matters Today)
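
In Python, the definition reads as follows; the joint distribution for the sign-at-a-fork example is invented here purely for illustration:

    import math

    def mutual_information(joint):
        """joint: dict mapping (a, b) outcome pairs to probabilities that sum to 1."""
        pa, pb = {}, {}
        for (a, b), p in joint.items():
            pa[a] = pa.get(a, 0.0) + p
            pb[b] = pb.get(b, 0.0) + p
        # I(A;B) = sum over a,b of p(a,b) * log2( p(a,b) / (p(a) * p(b)) )
        return sum(p * math.log2(p / (pa[a] * pb[b]))
                   for (a, b), p in joint.items() if p > 0)

    # (what the sign says, which road is correct), for a sign that is right 90% of the time
    sign = {("left", "left"): 0.45, ("left", "right"): 0.05,
            ("right", "left"): 0.05, ("right", "right"): 0.45}
    print(mutual_information(sign))     # ~0.53 bits: the sign reduces our uncertainty

    perfect = {("left", "left"): 0.5, ("right", "right"): 0.5}
    print(mutual_information(perfect))  # 1.0 bit: a perfectly reliable sign

The coin-flipping lecturer, by contrast, produces output independent of anything we care about, so the mutual information is zero.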

See also: Could one single machine invent everything? (Eric Holloway)

So lifelike … Another firm caught using humans to fake AI: Byzantine claims and counterclaims followed as other interpreters came forward with similar stories. According to Qian, something similar happened last year.

and

The hills go high tech: An American community finding its way in the new digital economy. At present, says Hochschild, Ankur Gopal and Interapt are sourcing as many new hillbillies as they can find: “For now, there is so much demand for I.T. workers — 10,000 estimated openings by 2020 in the Louisville metro area alone — that Mr. Gopal is reaching out to new groups.”

48 Replies to “Computer engineer Eric Holloway: Artificial intelligence is impossible”

  1. PavelU says:

    Where in the cited article is the definition of AI?

  2. Nonlin.org says:

    Shannon didn’t really mean information. He was only concerned with data transmission:

    Shannon wrote to Vannevar Bush at MIT in 1939: “I have been working on an analysis of some of the fundamental properties of general systems for the transmission of intelligence”… and then a few engineers, especially in the telephone lab, began speaking of information…

    A Mathematical Theory of Communication, by C. E. Shannon: “Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.”

    http://nonlin.org/biological-information/
    1. ‘Information’, ‘data’ and ‘media’ are distinct concepts. Media is the mechanical support for data and can be any material, including DNA and RNA in biology. Data is the set of symbols that carry information; it is stored and transmitted on the media. ACGT nucleotides forming strands of DNA are biological data. Information is an entity that answers a question; it is represented by data encoded on a particular medium. Information is always created by an intelligent agent and used by the same or another intelligent agent. Interpreting the data to extract information requires a deciphering key, such as a language. For example, proteins are made of amino acids selected from nucleotides based on a translation table (the deciphering key).

  3. daveS says:

    EricMH (and others):

    This raises the question: What can create mutual information?

    A defining aspect of the human mind is its ability to create mutual information. For example, the traffic sign designer in the example above created mutual information. You understood what the sign was meant to convey.

    If I understand the example correctly, the sign painter has created and installed a sign, which will allow pedestrians to be more certain about getting to their desired destination.

    Isn’t it true that computers can perform similar feats? I can type my home address and the address of my destination into a computer and receive a detailed set of instructions which will increase the chance of my getting to the destination. In fact, it would seem that this software actually could be used to design the street signs themselves and put the sign designer out of a job.

    A different example: Suppose I’m going to play a game of Go with a professional 9-dan player. I don’t play at all, but I will almost certainly win if I follow the instructions of a state-of-the-art Go computer program (say AlphaGo Zero). Such a program literally provides me with “signs”, guiding me through the state-space of Go, drastically increasing the certainty that I will win.

  4. Fasteddious says:

    Of course computers and related systems can churn out information. Most of the information presented on The Weather Network comes from automated machines collecting raw data and transmitting it to computers, which collate and massage it into what we see as radar images, weather maps, or model forecasts, mostly without direct human intervention. However, all those processes and all that information require a lot of human design and intelligence to make them work and to make sense of them. And it is only humans who understand what it all means. Any sensor feeding signals to a computer, which then converts them to a reading and logs the data, can be said to be generating information, but the information is strictly defined and constrained by human intelligence, and the data is only of use to humans or to other computers designed by humans to process the data in defined ways. Thus, the computer and associated machinery are just extensions of human intelligence, just as a car or airplane is an extension of our physical abilities.

  5. daveS says:

    Fasteddious,

    I agree with most of what you say.

    The abilities the person exhibits when creating and installing the sign do not seem especially impressive to me. Essentially the designer extracts information from the environment (the topology of the local road system, for one thing) and builds the sign to convey some of that information. It seems like a rather mundane task, and something that a computer could easily be programmed to handle.

  6. EricMH says:

    @daveS the programming would be the creation of mutual information.

    And yes, my argument implies that our supra-Turing minds are necessary to do pretty mundane stuff as well as amazing things.

  7. daveS says:

    EricMH,

    the programming would be the creation of mutual information.

    Hm. That sounds reasonable, but doesn’t that mean AI actually is possible? By AI, I mean intelligently designing machines to perform tasks (usually at a level comparable to or better than that of humans). For example, playing a game or classifying images according to their subject.

    I don’t mean that AI will necessarily rival humans’ general intelligence (time will tell), but I think the term AI is usually understood to mean what I described above.

  8. Nonlin.org says:

    “Mutual information measures how much event A reduces our uncertainty about event B”

    Is “mutual information” really information? Consider that the value of A is wholly dependent on us, the users. Show event A to a cat and it might not mean anything.

    On the other hand, if one way or another I convey the concept of ‘a circle’, is it not information even when it does not resolve any uncertainty?

  9. EricMH says:

    @DaveS, yes, if we define AI as machines designed to perform tasks equal to or better than humans, then of course we can build AI. A fork is an instance of AI because it is better at manipulating food than our fingers. A car is an instance of AI because it transports much more effectively than humans. AI becomes another name for the age-old tool.

    However, if we define AI as something that replicates humans’ ability to *create* mutual information, the dream that infects Silicon Valley, then AI is logically impossible.

  10. daveS says:

    EricMH,

    I should have been a little more careful with my definition of AI so as to rule out forks. But yes, I agree with your point.

    🙂

    I don’t know anything about AI, aside from briefly playing around with simple neural net models and occasionally browsing Hacker News. But from what I’ve seen, the people who work in the field view it as just a tool (although a surprisingly effective one) and certainly not something that can achieve the logically impossible, such as a violation of a mathematical theorem.

    Are there specific projects under way in Silicon Valley that you can point to whose objectives are clearly logically or mathematically impossible?

    Edit: I should add that obviously the field of AI is famous for hyping itself then crashing when it doesn’t meet expectations, but I would guess there are many mathematically sophisticated people who work in AI and they wouldn’t be so naive as to think they can violate mathematical laws.

  11. EDTA says:

    Nonlin @ 8

    >Is “mutual information” really information?

    Yes, it can be measured and quantified, at least in controlled scenarios like the ones Shannon was describing. It comes into play anytime learning one thing helps with learning something related, or when communicating in a common language.

  12. EricMH says:

    @daveS, the most egregious example is the Singularity cult invented by Ray Kurzweil, whose members believe that, since the mind is software, they can digitize their brains and live forever on a CPU. There is a startup now that is based on this premise:

    http://www.nydailynews.com/lif.....-1.3876200

    Less extreme: I’d say the emphasis on neural networks and other very complex machine learning models is misplaced, as is the AI and ML hype in general. They appear to “learn” a lot of information but are actually overfitting, which makes them prime targets for exploitation. So a burgeoning field is AI hacking, which will become even more of an issue as AI is used in critical systems such as driverless cars.
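
    A toy numerical sketch of the overfitting point (the data and polynomial degrees below are invented for illustration): a model flexible enough to memorize its training set fits it perfectly yet fails on fresh data.

        import numpy as np

        rng = np.random.default_rng(0)
        x_train = np.linspace(0, 1, 10)
        y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)  # noisy samples
        x_test = np.linspace(0, 1, 100)
        y_test = np.sin(2 * np.pi * x_test)                             # the true curve

        for degree in (3, 9):  # degree 9 means one coefficient per training point
            coeffs = np.polyfit(x_train, y_train, degree)
            train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
            test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
            print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

    The degree-9 fit drives training error to roughly zero but is wild between the training points: it has memorized the noise rather than learned the curve, which is exactly what makes such models exploitable.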

    The direction SV should go instead is human-in-the-loop systems, which maximize the use of both AI and our unique ability to create information. And behind the scenes, this is what the big SV companies actually do. They just don’t widely publicize how much their fancy algorithms are driven by human crowdsourcing. They all have their own private crowdsourcing platforms, similar to Amazon’s Mechanical Turk service. I learned this at a conference in the field of human computation, HCOMP 2016.

  13. daveS says:

    EricMH,

    Thanks for the detailed response. I had not heard of Nectome before. For reference, their mission statement:

    Our mission is to preserve your brain well enough to keep all its memories intact: from that great chapter of your favorite book to the feeling of cold winter air, baking an apple pie, or having dinner with your friends and family. If memories can truly be preserved by a sufficiently good brain banking technique, we believe that within the century it could become feasible to digitize your preserved brain and use that information to recreate your mind. How close are we to this possibility? Currently, we can preserve the connectomes of animal brains and are working on extending our techniques to human brains in a research context. This is an important first step towards the development of a verified memory preservation protocol, as the connectome plays a vital role in memory storage.

    Certainly the claim that a preserved brain could one day be digitized to recreate the mind could turn out to be mathematically or logically impossible, although I believe that’s still an unsolved problem. When they talk about possibly achieving some goal within the century, they don’t have a very detailed roadmap, to say the least. Perhaps it’s outright fraud.

    I don’t know enough about neural networks to comment on issues such as overfitting, hacking, or the critical role of crowdsourcing, but it’s not clear to me that there are claims of mathematical or logical impossibilities involved. If you are pointing to the fact that companies that use things such as neural networks de-emphasize these issues for short-term gain, then I do agree.

  14. Mung says:

    Given that mutual information is merely a measure of how much knowing one variable, X, can reduce the uncertainty about a second variable, Y, what does it mean to say that humans create mutual information?

    Let’s say that you don’t know the temperature outside. Call that X. So you look at the thermometer. Call that Y. Now Y has reduced your uncertainty about X.

    But in what sense has the mutual information been created?

  15. EricMH says:

    @Mung learning also requires creating mutual information. In your example someone invented the thermometer, and I created the link in my mind between the thermometer reading and the outside temperature.

  16. johnnyb says:

    I think that most people are missing why agency is required for this. There are multiple reasons, but here are two:

    1) Most people don’t realize the supreme reduction in search space that takes place, even for just associating a thermometer and the outside temperature. If a machine is given *no* prior information, then sussing out (a) that something like temperature exists and affects us in profound ways, (b) that this little object has anything at all to do with temperature, (c) that the readings on the object tell me something about that temperature, and (d) how those markings actually correlate to temperature all require HUGE search-space reductions.

    Now, if someone arbitrarily limited the search space to, say, thermometer readings and outside events, then an AI might be able to establish the correlation. However, that gigantic reduction in search space is mutual information, and it is supplied by the programmer. In my own writing, I often refer to this as “parameterizing the search space”, but the function is the same, and mutual information is a more general way to describe it. (A rough sketch of the numbers follows this comment.)

    The second issue is that it requires teleology to decide what sorts of mutual information we should establish. That is, it is pointless to create correlations if there is nothing to use them for. You can’t use them *for* anything without teleology. Machines don’t have teleology unless they are programmed with it, again imported from outside.
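
    As a rough numerical sketch of point 1 (the feature counts here are invented for illustration), counting candidate hypotheses shows how many bits a programmer’s restriction supplies before any search begins:

        import math

        n = 20  # suppose the machine senses 20 binary features of the world

        # Unconstrained: any boolean rule over the features could be "the" association.
        # There are 2**(2**n) such rules, so singling one out takes 2**n bits.
        bits_unconstrained = 2 ** n  # 1,048,576 bits

        # Programmer-constrained: "compare the thermometer feature to a threshold",
        # i.e. one of n features times, say, 1024 candidate threshold values.
        bits_constrained = math.log2(n * 1024)  # ~14.3 bits

        print(f"bits supplied by the restriction: ~{bits_unconstrained - bits_constrained:,.0f}")
        # Over a million bits of the answer were fixed by the programmer, not by the search.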

  17. Nonlin.org says:

    EDTA @11

    It’s not always so clear-cut. See my examples. In addition, a painting, a song, etc., is information without reducing any uncertainty about a secondary event B. And its impact is most definitely not measurable.

    Everyone, Shannon included, misuses the word ‘information’ when they should use ‘data’ instead. Yes, sometimes that ‘data’ (not ‘information’) reduces some uncertainty by a measurable quantity, but not always.

  18. EricMH says:

    @Nonlin.org, mutual information is the consistent usage of the term. Paintings, etc., share enormous amounts of mutual information with the metaphysical realm. Even modern art is not generated completely at random; or if it is, that is precisely the point.

    @daveS, per my argument, if the mind creates information then it cannot be reproduced computationally. My argument also implies that all AI systems are limited by the fact that they cannot create information, so they require humans in the loop. Further, it implies that machines cannot learn; they can only memorize, and that is what these neural networks are doing. Just as learning by memorizing for a test is deficient, so too are these neural networks deficient.

  19. daveS says:

    EricMH,

    After this back-and-forth, I think my issues with the linked article and the title of this thread are all or mostly semantic. Which is not to say they are unimportant to getting your message out.

    According to the definitions used by those who work in AI, artificial intelligence and machine learning already exist, so it’s jarring to read that AI is impossible or that machines cannot learn. Have you considered using slightly different language, talking about “limits on AI”, for example?

  20. ET says:

    daves:

    According to the definitions used by those who work in AI, artificial intelligence and machine learning already exist,…

    Any examples? According to Wikipedia:

    The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.

    And that has never been accomplished.

  21. R J Sawyer says:

    When people talk about whether or not AI is possible, human exceptionalism always rears its ugly head. My prediction is that if we are capable of producing an AI that meets the current criteria for AI, we will simply change the criteria.

    I remember the same thing happening when it was thought that humans were the only animal capable of using language. Then we taught a gorilla to use sign language. Then the arguments started flying that the gorilla wasn’t using proper grammar and syntax. We have an obsession with convincing ourselves that we are at the pinnacle of life on this planet.

  22. ET says:

    R J Sawyer:

    My prediction is that if we are capable of producing an AI that meets the current criteria for AI, we will simply change the criteria.

    Typical evo answer for anything and everything.

    Then we taught a gorilla to use sign language.

    And how did it work with other gorillas? Did they understand him?

    Then the arguments started flying that the gorilla wasn’t using proper grammar and syntax.

    Or that it wasn’t talking, which was the original point.

    We have an obsession with convincing ourselves that we are at the pinnacle of life on this planet.

    We don’t need convincing as that is what the science says.

  23. EricMH says:

    @daveS, that is a good point. There is certainly a semantic barrier here. We have systems already that people call AI and ML.

    The problem is there is a subtle equivocation going on in the field, where people think that by iteratively improving on current AI systems we will get something that equals or exceeds human intelligence. The underlying assumption is that there is no intrinsic difference between an algorithm and the human mind.

    My argument is that intelligence and algorithms are fundamentally different things, because intelligence can create mutual information and algorithms cannot. So, the transition to human+ level intelligence from existing AI systems is logically impossible.

    Not coincidentally, this equivocation between creating AI systems and reproducing human intelligence is very similar to the equivocation between micro evolution and macro evolution. Both equivocations deal with the same fundamental problem of information creation.

  24. EricMH says:

    @R J Sawyer, what do you think of my argument that if the human mind creates mutual information then human level AI is impossible?

  25. EricMH says:

    @JohnnyB makes a great point that without preexisting teleology there is nothing to make mutual information with.

    The whole problem is that math notation is syntactic. There is a whole different realm of metaphysical truth that notation can describe, but has no intrinsic link to. For example, Gödel proved mathematical notation cannot describe all possible mathematical truths. I’m using the term “notation” instead of math because, per Gödel, mathematics itself is a separate entity from the notation we use to describe mathematics.

  26. daveS says:

    EricMH,

    The problem is there is a subtle equivocation going on in the field, where people think that by iteratively improving on current AI systems we will get something that equals or exceeds human intelligence. The underlying assumption is that there is no intrinsic difference between an algorithm and the human mind.

    My argument is that intelligence and algorithms are fundamentally different things, because intelligence can create mutual information and algorithms cannot. So, the transition to human+ level intelligence from existing AI systems is logically impossible.

    Do you think it’s possible that computers could one day simulate human behavior well enough to be practically indistinguishable from humans? (In other words, pass the Turing test, I guess).

    If so, would that have an impact on your position?

  27. Nonlin.org says:

    EricMH @18

    So if mutual information doesn’t work for all information (as shown), let’s change that. What’s the big deal?

    What do you mean:

    Paintings etc share enormous amounts of mutual information with the metaphysical realm. Even modern art is not generated completely at random, or if it is that is precisely the point.

    ?

    It’s not that information shares something with the metaphysical. Information is in fact abstract like math, not physical. And what passes for information in IT, physics, etc. is actually just data representing that information. Think about it: http://nonlin.org/biological-information/

  28. EricMH says:

    @daveS if it is true that humans create mutual information then computers will never simulate human behavior to an indistinguishable level, except within a narrow domain.

  29. EricMH says:

    @nonlin, I agree we are just representing abstract concepts. If I burn my math books I have not destroyed mathematics itself.

  30. R J Sawyer says:

    EricMH@25. I am by no means an expert in information theory or AI. My point is simply that it is our nature to keep shifting the goal posts such that we remain exceptional. My example with sign language demonstrates this. But we have done the same thing with AI. Its definition has changed over the years as each ceiling was crashed through.

  31. EricMH says:

    @R J Sawyer, another perspective is that the goal posts fail to define human intelligence, and we learn this once we reach them. It’s also a matter of which goal posts you are talking about. We have been making tools to improve on our native capabilities for millennia, but none of our tools has ever replaced us, nor have we ever mistaken our tools for ourselves. It is only in recent history that we have suddenly become obsessed with our latest tool replacing us. A good question to ask is: why?

  32. R J Sawyer says:

    EricMH

    @R J Sawyer, another perspective is that the goal posts fail to define human intelligence, and we learn this once we reach them.

    I am sure that this is true, at least in part.

  33. johnnyb says:

    @EricMH:

    Actually, it is not so recent:

    Surely he cuts cedars for himself, and takes a cypress or an oak and raises it for himself among the trees of the forest. He plants a fir, and the rain makes it grow….Half of it he burns in the fire….But the rest of it he makes into a god, his graven image. He falls down before it and worships; he also prays to it and says, “Deliver me, for you are my god.” (Isaiah 44:14-17)

  34. daveS says:

    EricMH,

    If I may jump in:

    It is only in recent history that we have suddenly become obsessed with our latest tool replacing us. A good question to ask is: why?

    I think with the advent of virtual assistants such as Siri and Alexa, we are getting to the point where we can envision AIs which are nearly indistinguishable from humans, at least in certain contexts. They are already good enough to “replace” us in some ways. That is both a major technological achievement and a frightening thing.

  35. EricMH says:

    @JohnnyB, great point. AI does look like the modern form of idolatry, complete with religion, priests and even an afterlife.

    @daveS, yes, but this is largely due to actual humans behind the scenes powering the AI:

    https://mindmatters.today/2018/09/so-lifelike/

    It is ironic that one of the major platforms for this crowdsourcing is called Mechanical Turk, which was the original “AI”: a man hidden in a mechanical contraption who could beat humans at chess, tricking them into believing there was a true chess-playing robot.

  36. daveS says:

    EricMH,

    AI does look like the modern form of idolatry, complete with religion, priests and even an afterlife.

    😮

    I can see we have very different perspectives on this issue.

    I see the current hype around AI as part and parcel of our free-enterprise system. I witnessed a sales pitch by a team from a multi-billion-dollar company recently, and of course they mentioned that they have an AI (I immediately rolled my eyes; who doesn’t have AIs these days?).

    Now I think it’s probably the case that their AI does provide value for their clients, but my impression was that they simply wanted to say the magic word “AI” at some point during the presentation.

    But their job is to sell their product. They weren’t lying about anything. There were no claims of achieving the mathematically or logically impossible. They didn’t say their AIs create mutual information. It’s just business.

    Even the Nectome people aren’t lying, as far as I can tell. It’s clear from their website that their “clients” will certainly die. And if Nectome does manage to reconstruct their brain/mind at some point in the future, that mind will not be “them”, but rather a copy, so “they” won’t enjoy an afterlife.

    But certainly there is a lot of salesmanship going on. Eventually, though, AIs in free enterprise will have to pay for themselves.

    My interest in AI (as a layperson and consumer) is more around practical devices which are clearly feasible, and which can improve our lives. For example, self-driving cars. We had a horrible accident in my area last year. It was a head-on collision on a desolate, straight highway, during the day in perfect weather, and everyone in both cars died. We actually have quite a few accidents like this, which I think likely could be prevented with (what some call) AI.

  37. EricMH says:

    @daveS, yes, I agree there is much practical utility in automation using AI and ML techniques. However, there is a dividing line between what computers can copy pretty well and what is beyond their reach. Unfortunately, there is not much clarity about where that dividing line is, and attempting to cross it can lead to problems.

    A better approach, known as intelligence augmentation, aims not to eliminate human interaction entirely but to algorithmically improve it.

    All that being said, the religious side does seem to be the underlying motivation and the source of the undying and unfulfilled hype. There is a belief that AI can ultimately do what human intelligence can do, so short-term failures do not dampen the hype. Similar to the hype around the constant failures of evolution theory.

    Otherwise, it is hard to explain why there is so much hype around what is essentially a fancier form of control systems and non-linear regression. I’m sure that if we called AI “non-linear regression” the hype would disappear overnight, which makes it clear that the religious dimension is the driving factor behind the AI hype.
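
    To make the non-linear regression point concrete, here is a bare-bones sketch (the layer size, learning rate, and data are arbitrary choices for illustration) of a one-hidden-layer neural network fit by gradient descent; structurally it is just a parametric curve-fitter:

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(-3, 3, 200).reshape(-1, 1)
        y = np.sin(x)  # the "data" being regressed on

        # Parameters of the regression model f(x) = tanh(x W1 + b1) W2 + b2
        W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
        W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

        lr = 0.01
        for step in range(5000):
            h = np.tanh(x @ W1 + b1)  # hidden layer
            pred = h @ W2 + b2        # network output
            err = pred - y
            # Gradient descent on mean squared error (backpropagation)
            g = 2 * err / len(x)
            gW2, gb2 = h.T @ g, g.sum(axis=0)
            gh = (g @ W2.T) * (1 - h ** 2)
            gW1, gb1 = x.T @ gh, gh.sum(axis=0)
            W1 -= lr * gW1; b1 -= lr * gb1
            W2 -= lr * gW2; b2 -= lr * gb2

        print("final MSE:", float(np.mean(err ** 2)))  # small: the curve has been fit

    Rename the hidden units “basis functions” and this is textbook non-linear least squares.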

  38. PaoloV says:

    I see AI as a very useful tool that may free us from having to do tedious or dangerous work, thus allowing us to be more creative, which is one of the attributes that distinguish us from other biological systems. We were made to be positively creative and more efficiently productive for the well-being of all people. AI may help to facilitate that important function. Obviously AI could be used for evil purposes as well. That’s inevitable in this sinful world.
    However, I don’t believe that “conscious” AI will ever happen. That’s nonsense. We can’t create conscious beings with free will. No matter how sophisticated robots may become, they’ll remain programmed machines.

  39. daveS says:

    EricMH,

    I do agree (to the extent that I understand “AI”) with your first two paragraphs. I’m just starting to (attempt to) read a book on automated theorem proving, and even in that realm, I can see how you need human (or other intelligent) input to get off the ground.

    Clearly I do disagree with the second two paragraphs, probably because religion is not an integral part of my worldview. I am interested to a degree in religion, because most of those around me are religious, but I don’t think of things in religious terms as a rule.

    For me, the hype (by which I mean something like “irrational interest in”) around AI is just the “cool factor”.

    As I said above, when I have my “rational” hat on, I think in terms of self-driving cars and other practical benefits of AI.

    When I have my “non-rational” hat on, I think of less practical, but fascinating things such as AlphaGo, computer-generated art and music, and even mathematical proofs which were discovered by machine (with modest results so far, I take it).

    You might recall that during the match between AlphaGo and Lee Sedol, there was a particular move by AlphaGo that Go experts found perplexing, and IIRC it appeared that Lee thought Aja Huang might have mistakenly put the stone in the wrong place. It later turned out to be a key move in AlphaGo’s victory.

    That was fascinating to watch. The match in general was a watershed moment; it looks like humans may never again be able to defeat the best Go-playing computer programs. At the same time, I don’t have any misconceptions that this computer program can in any way compete with humans in a general setting. It runs on a machine, which could in principle be made out of bricks, I suppose, with no electronics. It’s a very cool thing, though.

  40. R J Sawyer says:

    Eric and Dave, interesting discussion. Eric, when you say that there is a line that AIs will never be able to cross, I assume that you are referring to consciousness. But I would argue that this is more a line that we would never acknowledge them crossing than one that it is not possible for them to cross. Since we can’t even agree on what consciousness is, I guess this is understandable.

    If through interactions with an AI, using any test that humans would pass 95% of the time, we can’t distinguish it from a conscious being, how can we conclude that it is not conscious?

  41. EricMH says:

    @R J Sawyer, I’m saying there is a quantitative line AI cannot cross. There is some measurable empirical task that humans can do which AI will never be able to do. I’m making a hard-science sort of claim; that’s why I refer to mutual information, which at least in theory could be measured, letting us detect whether it has increased.

    What the actual experiment would look like is harder to pin down. I’ve made some attempts, but nothing that I could publish. However, the question does not seem to be outside the realm of science to address.

  42. EricMH says:

    @daveS here’s one example of computational limits in the area of automatic proof deduction:

    No automated proof system can prove that the Kolmogorov complexity of a bitstring is greater than the size of its axioms. I.e., the set of axioms can be represented as a finite bitstring, and the length of this bitstring limits what can be proven. Also, the automated proof system cannot increase its set of axioms to increase the number of things it can prove due to the same problem of the uncomputability of Kolmogorov complexity.
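
    For reference, the result being invoked here is a version of Chaitin’s incompleteness theorem: for any consistent, computably axiomatized formal system F there is a constant c_F, roughly the length of F’s axioms, such that

        \forall x:\quad F \nvdash\; K(x) > c_F

    That is, F cannot certify that any particular string x has Kolmogorov complexity above c_F, even though all but finitely many strings do.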

    Regarding the religion aspect: why are results such as AlphaGo and the other DeepMind game-playing AI projects considered so cool? The games themselves are not useful, so the ‘cool’ aspect is that AI seems to be exceeding human intelligence, suggesting that human intelligence is itself computational. There is no similar hype surrounding forks and cars, even though both exceed human capabilities. So the question is: what is so astonishing about a tool that seems to replace human intelligence? I submit it is precisely the religious implications: that we can create an all-powerful mind (create a god in our image) and that we could also possibly give our own minds immortality. Exactly the idolatry quote that JohnnyB brought up.

  43. daveS says:

    EricMH,

    Regarding the first paragraph, that’s interesting to know. Are these things provable by humans? This is beyond my ken.

    To the second paragraph, these questions are more accessible to me. Yes, it is crucial that AlphaGo at some point became better at playing Go than humans. I don’t conclude from that that human intelligence is computational though. Human intelligence is a mystery to me.

    What I find cool is that we are talking about a game with thousands of years of history and tradition behind it. Now, for the first time ever, machines can beat the best humans at this game.

    I envy those with a deep knowledge of the game, who can now watch and appreciate matches between AIs at a higher level than humans are capable of. I think it would be fascinating to see whether new strategies are developed (or “found”) by these machines. And perhaps human players can learn from computers, thus increasing their level of play too. I believe this has happened to some extent in chess (which I don’t know anything about either).

    Spectators will always want to see matches between competitors at the highest level. I don’t care a great deal whether the competitors are humans or computers.

    And I certainly don’t think AI is leading us toward an all-powerful mind or immortality. I just expect to see a gradual increase in the power of these machines (within the bounds of the laws of physics), perhaps with the occasional jump or period of stagnation.

  44. EricMH says:

    @daveS

    > Are these things provable by humans? This is beyond my ken.

    That’s a good question. At least in the realm of mathematics this seems to be happening. Humans are ever increasing the number of axioms we can work with, and thus ever increasing the number of things that can be proven.

  45. daveS says:

    EricMH,

    At least in the realm of mathematics this seems to be happening.

    I guess there must be some question about it? Otherwise this would provide a clear-cut example of something the human mind is capable of, but which a computer cannot achieve?

    Regarding this:

    Also, the automated proof system cannot increase its set of axioms to increase the number of things it can prove due to the same problem of the uncomputability of Kolmogorov complexity.

    I’m not clear what this means in concrete terms, so I’ll try to ask a question using an example.

    Suppose someone has written a theorem prover for group theory. Initially, it just includes the four basic group axioms.

    If we wanted this theorem prover to be able to prove theorems about specific kinds of groups (for example abelian groups), one option would be to explicitly add that axiom to our program.

    Another option is to write a routine that would generate formulas in the language of the theory (for example “for all x, for all y, x*y = y*x” would be one) and then add them to our list of axioms.

    But either way, whether we explicitly add the axiom, or write some code to generate axioms, we are essentially bundling all that information for the axiom(s) into the code.

    Is that an illustration of what it means to say the automated theorem prover cannot increase its set of axioms?

  46. daveS says:

    PS: The two options I describe don’t produce exactly the same results, of course. Under option 1 you will get theorems for the theory of abelian groups only.

  47. daveS says:

    PPS: One last try. If you give the theorem prover the ability to add new axioms to its axiom system by including axiom-generating code, you have essentially added those axioms yourself, in compressed form, to the theorem prover.
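
    A minimal sketch of the two options in code (the string encoding of the axioms is invented for illustration; a real prover’s representation is far richer):

        GROUP_AXIOMS = [
            "forall x,y,z: (x*y)*z = x*(y*z)",  # associativity
            "forall x: x*e = x",                # right identity
            "forall x: e*x = x",                # left identity
            "forall x: x*inv(x) = e",           # inverses
        ]

        # Option 1: explicitly add the abelian axiom.
        axioms_option1 = GROUP_AXIOMS + ["forall x,y: x*y = y*x"]

        # Option 2: a routine that "generates" axioms for the prover.
        def generate_axioms():
            yield from GROUP_AXIOMS
            yield "forall x,y: x*y = y*x"  # the generator still encodes the axiom

        axioms_option2 = list(generate_axioms())
        assert axioms_option1 == axioms_option2
        # Either way, the commutativity axiom's information was supplied by the
        # programmer; the generating code is the same axiom in compressed form.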

  48. EricMH says:

    @daveS, yes, that’s correct. The key is that the axioms must be consistent, i.e., they cannot prove and disprove the same statement. However, algorithmically certifying that a set of axioms is consistent requires scanning an infinite set, so it is undecidable. This is the reason Gödel’s incompleteness theorem works. And all it takes is one inconsistency for the entire system to fall apart. One principle of logic is that if your axiomatic system contains a single contradiction, then the system can prove absolutely anything and becomes useless.

    I’m not sure why mathematics is not considered a clear-cut example of humans doing something algorithms cannot. The counterarguments I’ve seen start with the circular premise that humans are algorithmic, or the straw man that humans cannot solve every problem. Neither is a good argument against the notion that mathematics demonstrates the mind is non-computational. But there is probably some good counterargument out there I haven’t heard.
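
    For reference, the “prove absolutely anything” step is the principle of explosion: from a single contradiction, P and \neg P, an arbitrary Q follows in two moves:

        P \;\vdash\; P \lor Q \quad \text{(disjunction introduction)}
        P \lor Q,\; \neg P \;\vdash\; Q \quad \text{(disjunctive syllogism)}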
