
Oxford mathematician: Computers will not out-think humans


John Lennox offered Mind Matters News an exclusive interview about his new book, 2084:

Mind Matters News: Dr. Lennox, you quote astronomer Martin Rees as saying, “Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity — spanning tens of millennia at most — will be a brief precursor to the more powerful intellects of the inorganic post-human era.”

Okay. But what reason have we to believe that our artefacts will really be smarter than ourselves? Isn’t that something of a skyhook?

John Lennox: Very little. It is always dangerous to extrapolate exponentially, and our undoubted progress in technology in terms of speed and competence can easily mask the huge barrier that stands in the way of superior intelligent machines: consciousness. Smart humans are conscious, and since we do not even know what consciousness is, we are no further forward in that direction.

News, “Exclusive!: John Lennox answers our questions about AI in 2084” at Mind Matters News

8 Replies to “Oxford mathematician: Computers will not out-think humans”

  1. Seversky says:

    Since, as Dr Lennox admits, we don’t know what consciousness is, we are not in a position to know whether or not a computer will ever be able to emulate it or become conscious itself.

  2. mike1962 says:

    Gödel and Turing proved that algorithms cannot discover new “mathematical truths”, that is, no new axioms.
    Yet (some) humans can do it. See Sir Roger Penrose’s books on A.I.

  3. Retired Physicist says:

    @mike that’s not remotely what Gödel discovered.

  4. ET says:

    @RP that’s not remotely what mike1962 said

  5. mike1962 says:

    Retired Physicist, you forgot Turing, who built on Gödel’s work.
    If you know of an algorithm capable of generating axioms within a formal system, I’d be happy to consider it.

  6. Querius says:

    Actually, John Lennox’s points in the interview are well worth considering a little more deeply. What Dr. Lennox actually noted is that our development of AI is profoundly limited by AI not being conscious, something we are far from being able to create, let alone understand.

    In contrast, some people are completely overwhelmed by technology and have easily been misled to believe that any technology sufficiently large and complex will magically evolve both life and consciousness. This makes for exciting horror science fiction movies, for example, a car that disobeys your controls and drives wildly into a deep, dark forest!

    More dangerous is the misuse of information gathered and processed in large amounts by humans with an unencumbered potential for selfish or malevolent objectives. Even with the GDPR, who can guarantee it’s not being routinely violated? Not to mention the curious silence from US politicians.

    A more immediate issue is: to whom do your data and metadata belong, and what can third parties use them for?

    -Q

  7. daveS says:

    mike1962,

    Maybe I’m not understanding what you’re saying, but isn’t it rather trivial to generate axioms in a formal system?

    As an example, consider Hofstadter’s famous MIU formal system.

    It has the single axiom MI along with four rules of inference. The first rule is: if xI is a theorem, then so is xIM.

    Couldn’t you (for example) simply choose a random string S in this system and declare “if xS is a theorem, so is xSM” to be a new axiom?

    If you program a computer to generate axioms of this form, and connect it to a source of true randomness (e.g., cosmic rays), then the program could generate random axioms all day.
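    The procedure daveS describes is easy to make concrete. Below is a minimal Python sketch of it; the function names are my own, and `secrets` (a cryptographically strong generator) stands in for the "true randomness" source, since a cosmic-ray feed isn't available here:

    ```python
    import secrets  # strong randomness, standing in for a physical random source

    ALPHABET = "MIU"  # the symbols of the MIU system

    def random_string(max_len=8):
        """Choose a random nonempty string S over the MIU alphabet."""
        length = 1 + secrets.randbelow(max_len)
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    def random_axiom_schema():
        """Emit a schema of the form daveS proposes: 'if xS is a theorem, so is xSM'."""
        s = random_string()
        return f"If x{s} is a theorem, then so is x{s}M"

    # Generate a few random schemas
    for _ in range(3):
        print(random_axiom_schema())
    ```

    Each run produces syntactically well-formed schemas, which is all the proposal asks for; whether such mechanically generated axioms are *interesting* or *true* in any intended interpretation is, of course, the separate question mike1962 raises.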

  8. JVL says:

    Mike1962: Gödel and Turing proved that algorithms cannot discover new “mathematical truths”, that is, no new axioms.

    I’m just going to say: I don’t think it was ‘axioms’ Gödel and Turing were talking about.

    The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.
