Elon Musk, the chief executive of Tesla, has warned of the danger of artificial intelligence, saying that it is the biggest existential threat facing humanity.
Musk, who was speaking at the Massachusetts Institute of Technology (MIT) Aeronautics and Astronautics Department’s Centennial Symposium, said that in developing artificial intelligence (AI) “we are summoning the demon.”
Worst of all, he wants “regulatory oversight.” So those who can’t stop or fix the machine will spy on the rest of us instead?
All this just in time for Hallowe’en too.
Musk is also one of the main investors in Vicarious Systems, a leading-edge AI startup whose goal is to emulate the abilities of the human brain. He says that the reason he invests in AI is that he just wants to keep a close eye on a dangerous technology.
Of course, Mr. Musk, being a devout Singularitarian, believes in machine consciousness and in the possibility that the superintelligent machines of the not-too-distant future may decide they no longer like us and eliminate us. I think Musk should stick to electric vehicles and reusable rockets. Consciousness is not his forte.
Although Artificial Intelligence (AI) may produce some interesting, even unexpected, results, there is no danger that some AI supercomputer will ever become conscious and take over the world. In fact, there is no danger that AI will ever generate any information above and beyond what was initially programmed into it.
Dr. William Dembski and Dr. Robert Marks, who certainly know a thing or two about Artificial Intelligence, have made this point clear in their ‘Conservation of Information’ work. Here is a list of their, and others’, publications:
Here is a fairly short lecture by Dr. Marks in which he points out the strict limits on the ability of computer programs to generate any information over and above what was initially programmed into them (even though they may produce some interesting and unexpected results).
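For reference, and going from memory of the Dembski–Marks “Conservation of Information in Search” papers (so please check the originals for the exact definitions), the key quantity they use is “active information”:

I_{\Omega} = -\log_2 p, \qquad I_S = -\log_2 q, \qquad I_{+} = I_{\Omega} - I_S = \log_2 \frac{q}{p}

where p is the probability that a blind, unassisted search hits the target and q is the probability that the assisted (i.e. programmed) search hits it. The claim is that the active information I_+ measures what the programmer put in, not anything the program created on its own.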
Here is a short, sweet summary of the Conservation of Information principle as it relates to computers:
Here are a few supplemental notes on AI:
Since a computer has no free will with which to invent information, nor a consciousness with which to take context into consideration, one simple way of defeating the infamous Turing test is to tell, or to invent, a joke. Such as this joke:
Turing Test Extra Credit – Convince The Examiner That He’s The Computer – cartoon
http://imgs.xkcd.com/comics/turing_test.png
“(a computer) lacks the ability to distinguish between language and meta-language… As is known, jokes are difficult to understand and even more difficult to invent, given their subtle semantic traps and their complex linguistic squirms. The judge can reliably tell the human (from the computer) with a new joke.”
Per niwrad
http://www.uncommondescent.com.....artifices/
OT: You’re powered by quantum mechanics. No, really… – Jim Al-Khalili and Johnjoe McFadden – Saturday 25 October 2014
Excerpt: “Schrödinger pointed out that many of life’s properties, such as heredity, depend on molecules made of comparatively few particles – certainly too few to benefit from the order-from-disorder rules of thermodynamics. But life was clearly orderly. Where did this orderliness come from? Schrödinger suggested that life was based on a novel physical principle whereby its macroscopic order is a reflection of quantum-level order, rather than the molecular disorder that characterises the inanimate world. He called this new principle “order from order”. But was he right?
Up until a decade or so ago, most biologists would have said no. But as 21st-century biology probes the dynamics of ever-smaller systems – even individual atoms and molecules inside living cells – the signs of quantum mechanical behaviour in the building blocks of life are becoming increasingly apparent. Recent research indicates that some of life’s most fundamental processes do indeed depend on weirdness welling up from the quantum undercurrent of reality.”
http://www.theguardian.com/sci.....cs-biology
OT: podcast – On Human Origins: Ann Gauger Says “There’s Too Much to Do and Not Enough Time”
http://www.discovery.org/multi.....ough-time/
On this episode of ID the Future, hear an excerpt of a presentation by Dr. Ann Gauger, recorded at a “Science and Human Origins” conference, sponsored by Discovery Institute in Coeur d’Alene, Idaho on Sept. 20, 2014.
I’m not sure that we really have too much to worry about, based upon current research. I guess the extent of the threat would hinge upon the definition being used for intelligence, wouldn’t it?
I’m going to presume that the term artificial intelligence should be defined as if it could be substituted for human intelligence.
Note: the modifiers “artificial” and “human” raise interesting questions in their own right! Why should “artificial” be assumed to mean “human equivalent” and not “ant equivalent” or “bug equivalent”, etc.?
I think that we can say certain software programs have a “level” of intelligence built in. Even the most rudimentary communication systems have error correction baked in at some level (in software and/or a hardware device). In my mind, that is a level of intelligence which lets the system both detect errors in, and perform corrections upon, the actions that it (the system) performs.
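To make “detect and correct” concrete, here is a minimal sketch in Python using a toy triple-repetition code (my own illustration, not the scheme any particular real system uses): each bit is transmitted three times, and a majority vote on the receiving end recovers the original bit even if one of the three copies gets flipped in transit.

# Toy error-correction sketch: triple-repetition code with majority-vote decoding.
def encode(bits):
    # Send each bit three times.
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    # Majority vote over each group of three received bits.
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)           # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] = 1                      # simulate a single flipped bit in transit
assert decode(sent) == message   # the flip is detected and corrected

Of course, whether a fixed, pre-written voting rule like this counts as “intelligence” is exactly the question at issue; all of the correction behavior was specified in advance by the programmer.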
I doubt, and it is just an opinion, that artificial intelligence will ever become sufficiently self-aware to actually have the potential to perceive concepts such as good, bad, and truth, or to recognize concepts (truths) such as symmetry and ratio.