
Next artificial intelligence doom scenario is 2075


From a review of a new book, Superintelligence: Paths, Dangers, Strategies, by Swedish philosopher Nick Bostrom:

We are still far from real AI despite last month’s widely publicised “Turing test” stunt, in which a computer mimicked a 13-year-old boy with some success in a brief text conversation. About half the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, according to recent surveys, and 90 per cent say it will arrive by 2075. Bostrom takes a cautious view of the timing but believes that, once made, human-level AI is likely to lead to a far higher level of “superintelligence” faster than most experts expect – and that its impact is likely either to be very good or very bad for humanity.

The book enters more original territory when discussing the emergence of superintelligence. The sci-fi scenario of intelligent machines taking over the world could become a reality very soon after their powers surpass the human brain, Bostrom argues. Machines could improve their own capabilities far faster than human computer scientists.

“Machines have a number of fundamental advantages, which will give them overwhelming superiority,” he writes. “Biological humans, even if enhanced, will be outclassed.” He outlines various ways for AI to escape the physical bonds of the hardware in which it developed. More.

But, in the real world, how does one get a machine to want anything?

Follow UD News at Twitter!

Comments
Computers are dumber than animals, not just cats. Computers are only memory machines; all they do is simple memory operations. Memory is not intelligence. It is only a recording/memory of intelligence gained otherwise. Only God/people have intelligence, or rather wisdom, understanding, and knowledge, as Proverbs teaches on this.
Robert Byers
July 22, 2014 at 7:52 PM PDT
It was interesting for me to learn that computers, despite all the advances, have a very hard (impossible?) time with context:
What Is a Mind? More Hype from Big Data - Erik J. Larson - May 6, 2014
Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, "Understanding Natural Language," about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland's article is one of my all time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required "holistic interpretation." That is, the ambiguities weren't resolvable except by taking a broader context into account. The words by themselves weren't enough. Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal "test" to see if his claims were still valid today. ... Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland's point about machine translation made afresh, in 2014.
- Erik J. Larson, Founder and CEO of a software company in Austin, Texas
http://www.evolutionnews.org/2014/05/what_is_a_mind085251.html

Why We Can't Yet Build True Artificial Intelligence, Explained In One Sentence - July 9, 2014
Excerpt: "We don’t yet understand how brains work, so we can’t build one." ... [IBM's "Jeopardy!"-winning supercomputer] Watson is basically a text search algorithm connected to a database, just like Google search. It doesn't understand what it's reading. In fact, "read" is the wrong word. It's not reading anything because it's not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there's no intelligence there. It's clever, it's impressive, but it's absolutely vacuous.
http://finance.yahoo.com/news/why-cant-yet-build-true-133937576.html?soc_src=mediacontentstory
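To make the context problem concrete, here is a toy sketch (in Python; not Larson's actual test, and nothing like how Google Translate really works) of a word-by-word "translator" whose mini-glossary is invented for the example. Because it looks at each word in isolation, it necessarily renders an ambiguous word the same way in radically different contexts:

```python
# Toy sketch: a context-blind, word-by-word "translator".
# The glossary below is invented for illustration only.
GLOSSARY = {
    "the": "das/der/die",
    "river": "Fluss",
    "bank": "Bank?/Ufer?",   # ambiguous: financial institution vs. riverside
    "loan": "Darlehen",
}

def word_by_word(sentence: str) -> str:
    """Render each word in isolation, with no access to the surrounding context."""
    return " ".join(GLOSSARY.get(word.lower(), word) for word in sentence.split())

# The ambiguous word gets an identical rendering in both sentences,
# even though context makes the intended sense obvious to a human reader.
print(word_by_word("the river bank"))   # das/der/die Fluss Bank?/Ufer?
print(word_by_word("the bank loan"))    # das/der/die Bank?/Ufer? Darlehen
```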
Moreover, it was also interesting for me to learn that computers, despite their impressive proficiency at tedious calculations, cannot generate genuinely novel information above and beyond what was originally programmed into them.
Conservation of Information Made Simple - William A. Dembski - August 2012
Excerpt: Biological configuration spaces of possible genes and proteins, for instance, are immense, and finding a functional gene or protein in such spaces via blind search can be vastly more improbable than finding an arbitrary electron in the known physical universe. ... Given this background discussion and motivation, we are now in a position to give a reasonably precise formulation of conservation of information, namely: raising the probability of success of a search does nothing to make attaining the target easier, and may in fact make it more difficult, once the informational costs involved in raising the probability of success are taken into account. Search is costly, and the cost must be paid in terms of information. Searches achieve success not by creating information but by taking advantage of existing information. The information that leads to successful search admits no bargains, only apparent bargains that must be paid in full elsewhere.
http://www.evolutionnews.org/2012/08/conservation_of063671.html

Before They've Even Seen Stephen Meyer's New Book, Darwinists Waste No Time in Criticizing Darwin's Doubt - William A. Dembski - April 4, 2013
Excerpt: In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information, whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled "Conservation of Information Made Simple" (go here). ... Here are the two seminal papers on conservation of information that I've written with Robert Marks:
"The Search for a Search: Measuring the Information Cost of Higher-Level Search," Journal of Advanced Computational Intelligence and Intelligent Informatics 14(5) (2010): 475-486
"Conservation of Information in Search: Measuring the Cost of Success," IEEE Transactions on Systems, Man and Cybernetics A, Systems & Humans, 5(5) (September 2009): 1051-1061
For other papers that Marks, his students, and I have done to extend the results in these papers, visit the publications page at www.evoinfo.org
http://www.evolutionnews.org/2013/04/before_theyve_e070821.html

Algorithmic Information Theory, Free Will and the Turing Test - Douglas S. Robertson
Excerpt: Chaitin's Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous "Turing test" for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information.
http://cires.colorado.edu/~doug/philosophy/info8.pdf
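As a rough illustration of the bookkeeping in the two Dembski-Marks papers cited above (a sketch assuming their published definitions, not code from those papers): endogenous information is -log2(p) for a blind search with success probability p, exogenous information is -log2(q) for an assisted search with success probability q, and the "active information" the assistance must supply is the difference, log2(q/p).

```python
from math import log2

def active_information(p_blind: float, p_assisted: float) -> float:
    """Bits an assisted search imports relative to blind search:
    I_active = -log2(p_blind) - (-log2(p_assisted)) = log2(p_assisted / p_blind)."""
    return log2(p_assisted / p_blind)

# Hypothetical numbers for illustration: a target occupying 1 of 2**40 configurations.
p = 1 / 2**40    # blind search: about 40 bits of endogenous information
q = 1 / 2**10    # an assisted search that succeeds with probability 1/1024
print(active_information(p, q))   # 30.0 -- bits the search's design must supply
```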
Thus, since computers have an extremely difficult (impossible?) time understanding context and creating truly novel information, a simple way to defeat the infamous Turing test is with a joke:
“As known, jokes are difficult to understand and even more difficult to invent, given their subtle semantic traps and their complex linguistic squirms. The judge can reliably tell the human (from the computer).” - Per niwrad, https://uncommondescent.com/intelligent-design/artificial-intelligence-or-intelligent-artifices/
Such as this joke:
Turing Test Extra Credit – Convince The Examiner That He’s The Computer – cartoon http://imgs.xkcd.com/comics/turing_test.png
Supplemental note:
The Turing Test is Dead. Long Live the Lovelace Test. - Robert J. Marks II - July 3, 2014 http://www.evolutionnews.org/2014/07/the_turing_test087391.html
bornagain77
July 22, 2014 at 4:16 AM PDT
