
Eric Holloway: Why your computer will never talk to you


As a jokester recently demonstrated, even “shirts without stripes” is a fundamental, unsolvable problem for computers:

At first, “shirts without stripes” might not seem like much of an issue but it turns out that many important and interesting problems for computers fundamentally reduce to this “halting problem.” And understanding human language is one of these problems – Mind Matters News

Further thoughts from Eric Holloway on the halting problem:

Human intelligence as a halting oracle. Jonathan Bartlett proposes to model the human mind as a halting oracle.

and

Could one single machine invent everything? A tale.

12 Replies to “Eric Holloway: Why your computer will never talk to you”

  1. ronvanwegen says:

    “shirts -stripes” works a lot better!

  2. AaronS1978 says:

    A customer of mine and I had a ton of fun looking this up.

  3. FourFaces says:

    Hold on a sec. A computer will never understand language because of the Halting Problem? And just because a Google search algorithm has trouble with “shirts without stripes”, it follows that no computer will ever understand language? I’m sorry, but this is silly. First, the Halting Problem only applies to a hypothetical sequential computer called the Turing machine. Modern computers are not Turing machines because they can easily be halted by interrupts. Besides, it’s a trivial thing to prove whether or not most sequential programs will terminate. The HP only stipulates that this is not possible for all programs. Regardless of the hype surrounding Alan Turing (mostly for PC reasons), the HP is merely an academic curiosity that is irrelevant to modern computing. Programmers never think about it when writing software. Now, let’s look at this supposed proof:

    We can use human language to describe any possible computer language. Since computers cannot understand computer languages due to the halting problem, they consequently cannot understand a language that can describe a computer language. Their lack of ability to understand is also due to the halting problem.

    Wow. How did we get to “computers cannot understand computer languages due to the halting problem”? A computer language is a set of symbolic codes specifically designed for a compiler running on a computer. I can assure you that every compiler knows exactly what to do with a set of coded instructions written in a given computer language: they convert them into ones and zeros. The HP has nothing to do with it.
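    As an aside, the distinction being drawn here is easy to make concrete: a *bounded* halting check is trivially computable, while Turing’s theorem only rules out a *total* decider that answers for every program. A minimal Python sketch (the `halts_within` helper and the toy programs below are invented for illustration, not from the article):

    ```python
    def halts_within(prog, fuel=1000):
        """Best-effort halting check: run a program (modeled as a
        generator that yields once per step) for at most `fuel` steps.
        Returns True if it finished, False if the budget ran out --
        i.e. 'don't know'. Sound but incomplete, which is all the
        halting theorem permits."""
        it = prog()
        for _ in range(fuel):
            try:
                next(it)
            except StopIteration:
                return True   # program reached its end: it halts
        return False          # budget exhausted: no verdict

    def counts_down():
        # Obviously terminating loop.
        n = 10
        while n > 0:
            n -= 1
            yield

    def loops_forever():
        # Obviously non-terminating loop.
        while True:
            yield
    ```

    The bounded checker happily classifies both toy programs; what no program can do is replace that `False` / “don’t know” branch with a correct yes-or-no answer for every possible input program.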

    I’m afraid that, PhD or not, Eric Holloway should refrain from writing about something he doesn’t understand. I suspect Mr Holloway is hiding something about his competence in these matters. Artificial general intelligence is coming and it’s coming from the one place that nobody suspects. And I say this as a Christian AGI researcher.

  4. AaronS1978 says:

    @ FourFaces
    So this is not my realm of expertise, although it does relate to some areas that are. But I just had a few questions for you:

    1. AGI. From what I’ve read and from what I’ve seen, no one is near completing or making this. One of the earliest estimates I’ve seen put it at 81 years out. Do you know something that we don’t, and if you do, could you explain?

    2. Where is it coming from that makes it so unexpected? Something like quantum computing?

    3. Do you think strong AI and AGI are one and the same? I don’t think there is a logical jump from nonconscious material to consciousness
    (strong AI referring to the ability to experience, to feel), and I can’t see any feasible pathway to make wires and algorithms of any sort replicate our neurons and mind/brain/soul.

  5. jstanley01 says:

    FourFaces @3 – “Artificial general intelligence is coming and it’s coming from the one place that nobody suspects.”

    That’s an intriguing statement!

    But my characterization of the statement itself poses a philosophical question: Once a computer gains general intelligence, will it be capable of being intrigued by a mystery like I am? Further, will someone be able to characterize any aspects of a particular computer’s personality by what intrigues it, like he or she would be able to characterize certain aspects of my personality by the fact that a mystery along this line intrigues me? Moreover, if I interact with a computer claimed to have general intelligence which displays outsized interest along a certain axis of knowledge, should I assume that it does so because it is the computer which (or at this point, perhaps, “who”) is intrigued by knowledge along that axis? Or should I rather assume that the computer is merely reflecting what intrigues its programmer? And if the latter, is it really “general intelligence”?

    I don’t think the above questions are trivial when it comes to a definition of “general intelligence.” The definition I’m seeing looks problematic to me: “Artificial general intelligence (AGI) is the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can.” Or more simply: “AGI is hypothetically equal to HGI.” Okay, but what are the parameters of HGI?

    You know, sometimes I say the exact opposite of what I mean. This phenomenon may be discernible, or at least suspected, by most people solely from the context of my statement. But it may be discernible only by someone who knows me; that is, someone who has characterized what I am like by observing what intrigues me, along with other measures that the person has made of me. Or maybe even, only by the way I raise my eyebrow in front of someone who knows me. It seems to me that for a computer to be able to claim general intelligence, it would have to be able to correctly parse when I’m saying what I mean and when I am saying the exact opposite of what I mean, most of the time. Especially if it knows me.

    I call this “AI’s Irony Problem.”

  6. FourFaces says:

    AaronS1978 @4

    What I’m about to say will sound like a pile of BS which is just fine by me. I’m not trying to convince anyone. Here are my answers to the questions you posed.

    1. Yes, some of us Christians have a huge advantage over the heathens. We do know stuff that they don’t. Without this advantage, AGI could not be solved for a thousand years or more.
    2. It’s unexpected because the deep secrets of the brain are already here, hiding in plain sight. They can be found in a couple of ancient metaphorical (mystical or occult) scriptures in the Bible: the Book of Revelation and the first 6 chapters of the Book of Zechariah. If you can decode the metaphors, you will know how the brain works and solve AGI. But first, you have to believe.
    3. Strong AI is conscious AGI. You’re correct. Strong AI is impossible because consciousness requires a soul or spirit. General intelligence is possible without consciousness (spirit).

  7. FourFaces says:

    Jstanley01 @5,

    In my opinion, intelligent machines will be “intrigued” only by things that are relevant to whatever goals they’re trying to achieve. Machines will be conditioned to do what we tell them to do and nothing else. They’ll have no sense of abstract things like beauty or ugliness other than noticing how we humans react to certain patterns.

  8. Jim Thibodeau says:

    2. It’s unexpected because the deep secrets of the brain are already here, hiding in plain sight. They can be found in a couple of ancient metaphorical (mystical or occult) scriptures in the Bible: the Book of Revelation and the first 6 chapters of the Book of Zechariah. If you can decode the metaphors, you will know how the brain works and solve AGI. But first, you have to believe.

    If you know these Deep Secrets then just tell us what they are. Or submit them to science journals. Or even the ID one. Or use them to create the world’s best AGI. Do something, instead of just anonymously teasing people on a website.

  9. Jim Thibodeau says:

    Uh-huh.

  10. FourFaces says:

    Thibodeau @9,

    Obviously you don’t really care. So I deleted my previous reply to your post. See you around.

  11. Eugene says:

    FourFaces@3,

    Let me guess, you have not read Penrose’s “Shadows of the Mind”? It is a very worthwhile read before attempting to find holes in the logical arguments involving the HP. Understanding the mathematical proof for it also helps a lot.
    …Btw, modern computers are nothing more than Turing machines. The only things “modern” about them are speed and the amount of storage.

  12. FourFaces says:

    Eugene, you don’t know what you’re talking about. Unfortunately for you, I have neither the time nor the motivation to point out the error of your ways. LOL.
