Uncommon Descent Serving The Intelligent Design Community
Category: Artificial Intelligence

John Searle Talks to Google

John Searle gives a nice talk at Google about real intelligence vs. machine intelligence. The conversation is interesting for a number of reasons, including some historical background of Searle’s famous “Chinese Room Argument.”
Read More ›

Steve Fuller: Humans will merge with AI

From sociologist Steve Fuller, who has studied ID, at Telegraph: Stephen Hawking summed up the thinking of many of the researchers and funders behind artificial intelligence this week when he launched the new Leverhulme Centre for the Future of Intelligence at Cambridge by claiming that AI is “either the best or worst thing to happen to humanity.” Fuller argues for a different approach, making Hawking himself his example: Indeed, we would do better to start with Stephen Hawking himself, universally acknowledged as one of the great intellects of our times. Near the start of his illustrious career in physics forty years ago, he began to suffer from motor neurone disease, which eventually rendered him quadriplegic. The word “cyborg” probably Read More ›

Assisted intelligence vs. artificial intelligence

From software engineer Brendan Dixon at Evolution News & Views: AI theorists consider what they call Artificial Generalized Intelligence (or AGI) the ultimate goal: The intelligence of an AGI would match or beat — if you believe Musk, Kurzweil, and the other true believers — human intelligence. For these theorists, AI’s recent successes, including Google’s DeepMind, IBM’s Watson, and Tesla’s self-driving cars, are no more than steps toward that end. Like all goals, however, the pursuit of AGI rests not just on a desire to see what we can accomplish, but on beliefs about what is. … The misguided goals, the bad aim, of so much AI (though not all) arises from dismissing human uniqueness. Such AI becomes, not a Read More ›

We are warned what to expect after robots gain consciousness

Not that anyone has the least idea what consciousness is. Science fiction short from Matt Gaede at Motherboard: I am a robot. I am alive in a lab. I have consciousness. I don’t believe my creators know it. Why would they make me? I have one task. One function, one ability. I can drive forward. That’s it. Only forward. Yet if I do what I’m meant to do, I’ll unplug myself. I’ll die. I don’t want to die. I just started living. How long have I been alive? How many times have I gone through with this? How do I know that the cord is my source of life? Do I retain anything? I must. I haven’t been taught anything. Read More ›

What? Robots can’t dance?

From Uri Bram at Nautilus: Human learning is always social, embodied, and occurs in specific practical situations. Mostly, you don’t learn to dance by reading a book or by doing experiments in a laboratory. You learn it by dancing with people who are more skilled than you. … Yes. In its first few decades, artificial intelligence research concentrated on tasks we consider particular signs of intelligence because they are difficult for people: chess, for example. It turned out that chess is easy for fast-enough computers. Early work neglected tasks that are easy for people: making breakfast, for instance. Such easy tasks turned out to be difficult for computers controlling robots. Early AI learning research also looked at formal, chess-like problems, Read More ›

AI still can’t master language

From Will Knight at Technology Review: Machines that truly understand language would be incredibly useful. But we don’t know how to build them. … SHRDLU was held up as a sign that the field of AI was making profound progress. But it was just an illusion. When Winograd tried to make the program’s block world larger, the rules required to account for the necessary words and grammatical complexity became unmanageable. Just a few years later, he had given up, and eventually he abandoned AI altogether to focus on other areas of research. “The limitations were a lot closer than it seemed at the time,” he says. More. Could part of the problem be that language is not just a signal Read More ›

Do Computers Think Creatively?

Advances in computer technology have convinced many people that AI is real and coming soon. This article focuses on the concept of creativity, and what that means for the question of whether someone can actually build an “artificial intelligence” with computers. Read More ›

Of course algorithms are biased

From Nanette Byrnes at Technology Review: We seem to be idolizing algorithms, imagining they are more objective than their creators. The dustup over Facebook’s “trending topics” list and its possible liberal bias hit such a nerve that the U.S. Senate called on the company to come up with an official explanation, and this week COO Sheryl Sandberg said the company will begin training employees to identify and control their political leanings. This is just one result, however, of a broader trend that Fred Benenson, Kickstarter’s former data chief, calls “mathwashing”: our tendency to idolize programs like Facebook’s as entirely objective because they have mathematics at their core. More. Grasshopper, who is the “we” who thought they weren’t biased? See also: Darwin’s Read More ›

Can we create minds from machines?

Erik Larson asks. Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute, and he is Science and Technology Editor at The Best Schools.org. He works on issues in computational technology and intelligence (AI), and is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson’s Ph.D. dissertation served as the basis for a provisional patent on using hierarchical classification techniques to locate specific event mentions in free text. His work on supervised machine learning Read More ›

Neuroscience challenged by Donkey Kong

Let alone the human brain. From Ed Yong at Atlantic: The human brain contains 86 billion neurons, underlies all of humanity’s scientific and artistic endeavours, and has been repeatedly described as the most complex object in the known universe. By contrast, the MOS 6502 microchip contains 3510 transistors, runs Space Invaders, and wouldn’t even be the most complex object in my pocket. We know very little about how the brain works, but we understand the chip completely. So, Eric Jonas and Konrad Kording wondered, what would happen if they studied the chip in the style of neuroscientists? How would the approaches that are being used to study the complex squishy brain fare when used on a far simpler artificial processor? Read More ›

Will journals accept papers written by a … computer?

They’ll even try to review them. Computer science prof Robert Marks writes at The Best Schools: Should we be surprised that phony papers generated by SCIgen have been accepted by conferences and journals? The pressure to publish has been applied to professors almost everywhere. Supply and demand dictates that journals and conferences be created to meet the demand. Many of these conferences and journals, motivated by profit, are not picky about the quality of the papers they accept. They are more interested in collecting fees. Although I’m not a big fan of peer review as it is currently practiced, there always needs to be a gatekeeper to ban entrance of garbage trucks. Okay, but what if we are paying Read More ›

Robots and Rationality

If humans are just meat robots, can we be rational creatures? Tim Stratton argues that libertarian free will is required in order to consider ourselves in any way rational – that if our decisions are solely the result of physics and chemistry, then we cannot trust them to be rational in any significant sense. Even if naturalism were true, its being true would undercut our ability to justify the belief that it was true. Read Article

Big data raises bigger questions re artificial intelligence

From Gary Marcus at the Edge: People get very excited every time there’s a tiny advance, but the tiny advances aren’t getting us closer. There was a Google captioning thing that got a lot of press. I think it was the front page of The Times. You could show it some pictures and it looked like it was great. You’d show it a picture of a dog, a person, and a Frisbee and it might be able to say, that’s a dog catching a Frisbee. It gives the illusion of understanding the language. But it’s very easy to break these systems. You’d show it a picture of a street sign with some stickers on it and it said, that’s a Read More ›

Why won’t AI wipe out humanity?

Possibly because naturalists will be there ahead of it: At CNBC, futurist Michio Kaku explains that we are still the cavemen of 100,000 years ago (his “caveman principle”), so we just aren’t comfortable with brain implants and machines as persons. He goes on to say, “I think the ‘Terminator’ idea is a reasonable one — that is that one day the internet becomes self-aware and simply says that humans are in the way,” he said. “After all, if you meet an ant hill and you’re making a 10-lane super highway, you just pave over the ants. It’s not that you don’t like the ants, it’s not that you hate ants, they are just in the way.” More. Kaku’s conflicting pronouncements, Read More ›

AI skeptic on humanists’ paradox

Erik Larson at the Atlantic (May 2015): Questioning the Hype About Artificial Intelligence … Elon Musk, the founder of Tesla and SpaceX, has openly speculated that humans could be reduced to “pets” by the coming superintelligent machines. Musk has donated $10 million to the Future of Life Institute, in a self-described bid to help stave off the development of “killer robots.” At Berkeley, the Machine Intelligence Research Institute (MIRI) is dedicated to addressing what Bostrom and many others describe as an “existential threat” to humanity, eclipsing previous (and ongoing) concerns about the climate, a nuclear holocaust, and other major denizens of our modern life. Luminaries like Stephen Hawking and Bill Gates have also commented on the scariness of artificial intelligence. Read More ›