
10. Is AI really becoming “human-like”?


AI help, not hype: Here’s #10 of Mind Matters’s Top Ten AI hypes, flops, and spins of 2018

A headline from the UK Telegraph reads “DeepMind’s AlphaZero now showing human-like intuition in historical ‘turning point’ for AI”. Subsequent text reads,

Washington, July 7 (UPI) Deep Mind revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.

Let me quickly confess: I just lied to you. The subsequent text is actually taken from a July 8, 1958, article in The New York Times titled “NEW NAVY DEVICE LEARNS BY DOING.” Replace “Deep Mind” with “The Navy” in the text and you get the original.

The world has changed. You’ve changed. This trick (the machine that thinks like a man) goes back to the ancient world and never dies out.

#9 to follow soon.

See also: Deep learning won’t solve AI. AlphaGo pioneer: We need “another dozen or half-a-dozen breakthroughs”

One Reply to “10. Is AI really becoming ‘human-like’?”

  1. vmahuna says:

    For simple, easily categorized events, decision-making programs are marvelous. They’re MUCH faster and more accurate (consistent in their errors…) than carbon-based clerks, or colonels.

    But, yeah, if the New Event doesn’t fit any of the current categories, both the carbon-based clerk and the silicon-based processor need help from a carbon-based Supervisor.

    Real AI would replace carbon-based Analysts: people who are given some general goal (don’t order unnecessary spare parts) and have to discover and invent facts and generalizations about the problem space. The carbon-based Analyst also has some TINY chance of TALKING to the Manager who has CREATED the problem and convincing that Manager to stop issuing REALLY stupid instructions to people (or computers) who cause the specific problems.

    But see discussions of Karl Marx’s realization just before he died that his cute Dialectic had become THREE sided due to the rise of The New Class: Managers. Managers live to wallow in their own local power, while making decisions WITHOUT consulting the Capitalists or the Proletarians. And Managers are interested in low level Decision Making (by AI or Real-I) ONLY to the extent that such decisions do NOT impact the Manager’s ability to APPEAR to be COMPLETELY in charge of ALL Decisionmaking. (see the GPO Style Guide; it’s one word)

    So, the LAST thing the Manager Class would EVER want to see happen is for an un-overrideable AI to prevent him or her from shuffling money (and other resources) around in ways that defy oversight AND being able to DICTATE actions to lesser humans, best commercial practices be damned.

    I don’t have a good reference handy, but in the last year, US DoD admitted that an EXPENSIVE multi-year attempt to force USAF to obey US civil and criminal law and ACCOUNT for tax money poured down rat-holes by USAF had FAILED. That is, USAF told US DoD that USAF had NO IDEA where the budget money was GOING, what it had actually been spent on, and most especially whether USAF and the USA in general received ANYTHING in return for the cash. The guess is that USA got SOMETHING, but it would cost WAY too much MORE cash and embarrass WAY too many more Managers to figure out what the Something was.

    At the same time, US Navy saw its brandy-new nuclear-powered All Electric (no hydraulics, no pneumatics, no small gas or diesel engines) aircraft carrier FAIL SEA TRIALS (it is UNTHINKABLE that the lead ship in a new Class NOT easily pass Sea Trials). The USS Gerald Ford is probably a COMPLETE write off, as are ALL of the under-construction follow-on All-Electric CVNs in the class. The cost to taxpayers, probably excluding attempts to simply SCRAP the steel hulls contaminated by the nuclear reactors, is already spoken of as not merely “billions” but as TRILLIONS.

    And ya can’t get an AI to screw ANYTHING up THAT bad.
