Artificial Intelligence Intelligent Design News

Artificial intelligence is the devil? More dangerous than nukes?


Wow.

Elon Musk, the chief executive of Tesla, has warned of the danger of artificial intelligence, saying that it is the biggest existential threat facing humanity.

Musk, who was speaking at the Massachusetts Institute of Technology (MIT) Aeronautics and Astronautics department’s Centennial Symposium, said that in developing artificial intelligence (AI) “we are summoning the demon.”

Worst of all, he wants “regulatory oversight.” So those who can’t stop or fix the machine will spy on the rest of us instead?

All this just in time for Hallowe’en too.

Follow UD News at Twitter!

6 Replies to “Artificial intelligence is the devil? More dangerous than nukes?”

  1. Mapou says:

    Musk is also one of the main investors in Vicarious Systems, a leading-edge AI startup whose goal is to emulate the abilities of the human brain. He says that the reason he is an AI investor is that he just wants to keep a close eye on a dangerous technology.

    Of course, Mr. Musk, being a devout Singularitarian, believes in machine consciousness and in the possibility that the superintelligent machines of the not-too-distant future may decide they no longer like us and eliminate us. I think Musk should stick to electric vehicles and reusable rockets. Consciousness is not his forte.

  2. bornagain77 says:

    Although Artificial Intelligence (AI) may produce some interesting, even unexpected, results, there is no danger that some AI supercomputer will ever become conscious and take over the world. In fact, there is no danger that AI will ever generate any information above and beyond what was initially programmed into it.
    Dr. William Dembski and Dr. Robert Marks, who certainly know a thing or two about Artificial Intelligence, have made this point clear in their ‘Conservation of Information’ work. Here is a list of their, and others’, publications:

    Main Publications – Evolutionary Informatics

    Here is a fairly short lecture by Dr. Marks in which he points out the strict limits on the ability of computer programs to generate any information over and above what was initially programmed into them (even though they may have some interesting and unexpected results).

    On Algorithmic Specified Complexity by Robert J. Marks II – video
    Paraphrase: “Computer programs have failed to generate truly novel information.” – Robert Marks

    Here is a short, sweet summary of the Conservation of Information principle as it relates to computers:

    LIFE’S CONSERVATION LAW – William Dembski – Robert Marks – Pg. 13
    Excerpt: (Computer) Simulations such as Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and Schneider’s ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them. … Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case.
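    The WEASEL simulation named in the excerpt is easy to sketch from Dawkins’s published description (this is my own minimal reconstruction, not his original code). Note where the information comes from: the target phrase is written directly into the fitness function, which is exactly the sense in which the excerpt says the answer is “smuggled into” the program.

```python
import random

# Dawkins's WEASEL, sketched from its published description.
# The target phrase appears verbatim in the program: fitness is
# simply distance to a known answer.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # The "smuggled-in" information: we score against the target itself.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(offspring=100):
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generation = 0
    while fitness(parent) < len(TARGET):
        generation += 1
        # Keep the parent in the pool so fitness never decreases.
        parent = max([parent] + [mutate(parent) for _ in range(offspring)],
                     key=fitness)
    return generation, parent

gen, phrase = weasel()
print(f"Converged to {phrase!r} in {gen} generations")
```

    Cumulative selection converges in a few dozen generations precisely because every candidate is compared against the pre-specified target; remove `TARGET` from `fitness` and the search has nothing to climb toward.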

    Here are a few supplemental notes on AI:

    What Is a Mind? More Hype from Big Data – Erik J. Larson – May 6, 2014
    Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, “Understanding Natural Language,” about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland’s article is one of my all time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required “holistic interpretation.” That is, the ambiguities weren’t resolvable except by taking a broader context into account. The words by themselves weren’t enough.
    Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal “test” to see if his claims were still valid today. …
    … Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland’s point about machine translation made afresh, in 2014.
    Erik J. Larson – Founder and CEO of a software company in Austin, Texas

    Why We Can’t Yet Build True Artificial Intelligence, Explained In One Sentence – July 9, 2014
    “We don’t yet understand how brains work, so we can’t build one.” …
    [IBM’s “Jeopardy!”-winning supercomputer] Watson is basically a text search algorithm connected to a database, just like Google search. It doesn’t understand what it’s reading. In fact, “read” is the wrong word. It’s not reading anything because it’s not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there’s no intelligence there. It’s clever, it’s impressive, but it’s absolutely vacuous.
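    The “text search connected to a database” picture can be illustrated with a toy retriever (my own sketch for illustration, not IBM’s actual architecture): rank stored passages by how many words they share with the question. It can return a plausible answer by pure string overlap while modeling nothing about meaning.

```python
import re

def words(text):
    """Lowercase word set -- the only 'representation' this system has."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_passage(question, passages):
    # Pick the stored passage sharing the most words with the question.
    # Pure string matching: no comprehension anywhere in the pipeline.
    q = words(question)
    return max(passages, key=lambda p: len(q & words(p)))

passages = [
    "Toronto is the largest city in Canada.",
    "Chicago is a major city in the United States.",
]
print(best_passage("What is the largest city in Canada?", passages))
# -> "Toronto is the largest city in Canada."
```

    The retriever “answers” correctly here only because the question’s words happen to overlap one passage more than the other; it would be equally confident matching strings it cannot interpret at all.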

  3. bornagain77 says:

    Algorithmic Information Theory, Free Will and the Turing Test – Douglas S. Robertson
    Excerpt: For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information.

    … since a computer has no free will with which to invent information, nor a consciousness with which to take context into consideration, one simple way of defeating the infamous Turing test is to tell, or to invent, a joke. Such as this joke:
    Turing Test Extra Credit – Convince The Examiner That He’s The Computer – cartoon

    “(a computer) lacks the ability to distinguish between language and meta-language. … As is known, jokes are difficult to understand and even more difficult to invent, given their subtle semantic traps and their complex linguistic squirms. The judge can reliably tell the human (from the computer) with a new joke.”
    Per niwrad

  4. bornagain77 says:

    OT: You’re powered by quantum mechanics. No, really… – Jim Al-Khalili and Johnjoe McFadden – Saturday 25 October 2014
    Excerpt: “Schrödinger pointed out that many of life’s properties, such as heredity, depend on molecules made of comparatively few particles – certainly too few to benefit from the order-from-disorder rules of thermodynamics. But life was clearly orderly. Where did this orderliness come from? Schrödinger suggested that life was based on a novel physical principle whereby its macroscopic order is a reflection of quantum-level order, rather than the molecular disorder that characterises the inanimate world. He called this new principle “order from order”. But was he right?
    Up until a decade or so ago, most biologists would have said no. But as 21st-century biology probes the dynamics of ever-smaller systems – even individual atoms and molecules inside living cells – the signs of quantum mechanical behaviour in the building blocks of life are becoming increasingly apparent. Recent research indicates that some of life’s most fundamental processes do indeed depend on weirdness welling up from the quantum undercurrent of reality.”

  5. bornagain77 says:

    OT: podcast – On Human Origins: Ann Gauger Says “There’s Too Much to Do and Not Enough Time”
    On this episode of ID the Future, hear an excerpt of a presentation by Dr. Ann Gauger, recorded at a “Science and Human Origins” conference, sponsored by Discovery Institute in Coeur d’Alene, Idaho on Sept. 20, 2014.

  6. ciphertext says:

    I’m not sure that we really have too much to worry about, based on current research. I guess the extent of the threat would hinge upon the definition being used for intelligence, wouldn’t it?

    I’m going to presume that the term artificial intelligence should be defined as if it could be substituted for human intelligence.

    Note: the modifiers “artificial” and “human” raise interesting questions in their own right! Why should “artificial” be assumed to mean “human equivalent” and not “ant equivalent” or “bug equivalent”, etc.?

    I think that we can say certain software programs have a “level” of intelligence built in. Even the most rudimentary of communication systems have error correction baked in at some level (in software and/or a hardware device). In my mind, that is a level of intelligence which lets the system both detect and perform corrections upon actions that it (the system) performs.
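    The detect-and-correct behaviour described above can be shown with the simplest textbook error-correcting scheme, a triple-repetition code (a generic illustration, not tied to any particular system): each bit is transmitted three times and the receiver takes a majority vote, so any single flipped bit per triple is both detected and corrected mechanically.

```python
def encode(bits):
    # Send every bit three times.
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    # Majority vote over each group of three repeats: any single
    # flipped bit per group is detected and corrected automatically.
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1  # simulate one bit corrupted in transit
print(decode(sent) == message)  # -> True: the error is corrected
```

    Whether this mechanical self-correction counts as a “level of intelligence” is, of course, exactly the definitional question raised above.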

    I doubt, and it is just an opinion, that artificial intelligence will ever become sufficiently self-aware to perceive concepts such as good, bad, and truth, or to recognize concepts (truths) such as symmetry and ratio.
