
Claim: Humanity and AI inseparable by 2021


From Russell Brandom at The Verge:

While some predict mass unemployment or all-out war between humans and artificial intelligence, others foresee a less bleak future. Professor Manuela Veloso, head of the machine learning department at Carnegie Mellon University, envisions a future in which humans and intelligent systems are inseparable, bound together in a continual exchange of information and goals that she calls “symbiotic autonomy.” In Veloso’s future, it will be hard to distinguish human agency from automated assistance — but neither people nor software will be much use without the other.

Veloso is already testing out the idea on the CMU campus, building roving, Segway-shaped robots called “cobots” to autonomously escort guests from building to building and ask for human help when they fall short. It’s a new way to think about artificial intelligence, and one that could have profound consequences in the next five years. More.

Another student job tossed out the window, one that all participants enjoyed and learned from. But then the students can always sign up for more courses in the pursuit of grievances, so all’s well.

Why does all this remind one of Paul “Population Bomb” Ehrlich’s “The battle to feed humanity is over,” predicting inevitable widespread famines that, of course, never happened? And of other a-crock-a-lypses?

Most apocalypses actually can’t happen because they are competitive. Subtractive, not additive. The TED talks will, however, assuredly happen.

See also: New Book: Philosophers, AI experts ask, are we living in an AI simulation? Will AI outthink us?

John Searle talks to Google. Searle does a good job showing why computers are not conscious. (johnnyb)

and

Steve Fuller: Humans will merge with AI

Follow UD News at Twitter!

2 Replies to “Claim: Humanity and AI inseparable by 2021”

  1. Seversky says:

    We will all be assimilated! Resistance is futile! Perhaps the concept of a Borg-like culture is not so far-fetched after all.

    Before that, however, automation/robots/AI pose a more immediate threat – the collapse of capitalist free-market economies. Think about it: as robots and AI become more capable and versatile, they will replace human beings in more and more jobs. From the employer’s perspective, it would be great: machines or systems that work 24/7, don’t need to be paid or provided with ancillary benefits, never get sick or take vacations. One burger franchise owner has already promised that, as soon as there are reasonably priced robots that can flip burgers, he will install them.

    There’s just one small problem with that Utopian (from the owner’s perspective) vision. Fairly obviously, a market economy depends on the existence of a market. A market is a population with money to spend on the goods and services being offered by suppliers. That population earns its money by doing paid work. But as robots and AI take over more and more functions, that paid work will dwindle and could eventually vanish. So where’s the market to which all these cheap goods and services are to be sold? Destroyed by the very machines and systems designed to service it.

    We’re not there yet, and it probably won’t happen for some time, but the writing is on the wall. The question is: who’s going to do anything about it? Not the manufacturers; all they see is the prospect of being able to make things without the inconvenience of having to employ people. Not the robot-makers and AI researchers; all they see is the excitement of pushing the boundaries of science and technology, and the money and kudos that will come their way if they are successful.

    Time to put bags of water and chemicals first, I say.

  2. bornagain77 says:

    Much as with Mark Twain, reports of the death of humanity are greatly exaggerated:

    The fallacious belief that human intelligence is nothing but massive amounts of computational ability has been with us since Alan Turing invented computers.

    Alan’s brain tells his mind, “Don’t you blow it.”
    Listen up! (Even though it’s inchoate.)
    “My claim’s neat and clean.
    I’m a Turing Machine!”
    … ‘Tis somewhat curious how he could know it.

    Ironically, Alan Turing, in demonstrating that Gödel’s incompleteness theorem applied to computers as well as to mathematics, i.e. the infamous ‘halting problem’, was himself instrumental in directly falsifying the belief that human intelligence could ever be programmed into computers. You can pick that bit of history up in the later part of the following video:

    Cantor, Gödel, & Turing: Incompleteness of Mathematics – video (excerpted from BBC’s ‘Dangerous Knowledge’ documentary)
    https://www.facebook.com/philip.cunningham.73/videos/vb.100000088262100/1119397401406525/?type=2&theater
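    For readers unfamiliar with the halting problem, Turing’s diagonal argument can be sketched in a few lines of code. This is only an illustrative sketch (the function names are mine, not from the post or the video): suppose someone claims to have written a decider `halts(f)` that correctly predicts whether any program `f()` halts. Then we can always construct a program that does the exact opposite of whatever `halts` predicts for it, so no such decider can exist.

    ```python
    # Illustrative sketch of Turing's diagonal argument.
    # "halts" is a hypothetical decider claimed to predict whether f() halts.

    def make_contrarian(halts):
        """Given any claimed halting-decider, build a program it misjudges."""
        def contrarian():
            if halts(contrarian):
                while True:       # decider said "halts", so loop forever
                    pass
            return "halted"       # decider said "loops", so halt immediately
        return contrarian

    # Any concrete decider is defeated. Example: one that always answers "loops".
    always_no = lambda f: False
    c = make_contrarian(always_no)
    print(c())  # prints "halted" -- the opposite of what always_no predicted
    ```

    Whatever the decider answers for its own contrarian, the contrarian does the reverse, which is the contradiction at the heart of Turing’s proof.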

    As to the implications of his incompleteness theorem as it is applied to computers, Gödel himself stated this:

    “Either mathematics is too big for the human mind, or the human mind is more than a machine.”
    – Kurt Gödel, as quoted in Topoi: The Categorial Analysis of Logic (1979) by Robert Goldblatt, p. 13

    Here are a few quotes backing up Gödel’s claim:

    The mathematical world – James Franklin – 7 April 2014
    Excerpt: the intellect (is) immaterial and immortal. If today’s naturalists do not wish to agree with that, there is a challenge for them. ‘Don’t tell me, show me’: build an artificial intelligence system that imitates genuine mathematical insight. There seem to be no promising plans on the drawing board.,,,
    James Franklin is professor of mathematics at the University of New South Wales in Sydney.
    http://aeon.co/magazine/world-.....-be-about/

    Algorithmic Information Theory, Free Will and the Turing Test – Douglas S. Robertson
    Excerpt: Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information.
    http://cires.colorado.edu/~dou...../info8.pdf

    The danger of artificial stupidity – Saturday, 28 February 2015
    “Computers lack mathematical insight: in his book The Emperor’s New Mind, the Oxford mathematical physicist Sir Roger Penrose deployed Gödel’s first incompleteness theorem to argue that, in general, the way mathematicians provide their “unassailable demonstrations” of the truth of certain mathematical assertions is fundamentally non-algorithmic and non-computational”
    http://machineslikeus.com/news.....-stupidity

    Evolutionary Computing: The Invisible Hand of Intelligence – June 17, 2015
    Excerpt: William Dembski and Robert Marks have shown that no evolutionary algorithm is superior to blind search — unless information is added from an intelligent cause, which means it is not, in the Darwinian sense, an evolutionary algorithm after all. This mathematically proven law, based on the accepted No Free Lunch Theorems, seems to be lost on the champions of evolutionary computing. Researchers keep confusing an evolutionary algorithm (a form of artificial selection) with “natural evolution.” ,,,
    Marks and Dembski account for the invisible hand required in evolutionary computing. The Lab’s website states, “The principal theme of the lab’s research is teasing apart the respective roles of internally generated and externally applied information in the performance of evolutionary systems.” So yes, systems can evolve, but when they appear to solve a problem (such as generating complex specified information or reaching a sufficiently narrow predefined target), intelligence can be shown to be active. Any internally generated information is conserved or degraded by the law of Conservation of Information.,,,
    What Marks and Dembski (mathematically) prove is as scientifically valid and relevant as Gödel’s Incompleteness Theorem in mathematics. You can’t prove a system of mathematics from within the system, and you can’t derive an information-rich pattern from within the pattern.,,,
    http://www.evolutionnews.org/2.....96931.html

    What Does “Life’s Conservation Law” Actually Say? – Winston Ewert – December 3, 2015
    Excerpt: All information must eventually derive from a source external to the universe,
    http://www.evolutionnews.org/2.....01331.html
