Uncommon Descent Serving The Intelligent Design Community

Philosopher: Machines lack common sense, and that’s the biggest danger AI represents


Says philosopher Nick Bostrom, proposing a thought experiment:

When the world ends, it may not be by fire or ice or an evil robot overlord. Our demise may come at the hands of a superintelligence that just wants more paper clips.

So says Nick Bostrom, a philosopher who founded and directs the Future of Humanity Institute, in the Oxford Martin School at the University of Oxford. He created the “paper-clip maximizer” thought experiment to expose flaws in how we conceive of superintelligence. We anthropomorphize such machines as particularly clever math nerds, says Bostrom, whose book Superintelligence: Paths, Dangers, Strategies was released in Britain in July and arrived stateside this month. Spurred by science fiction and pop culture, we assume that the main superintelligence-gone-wrong scenario features a hostile organization programming software to conquer the world. But those assumptions fundamentally misunderstand the nature of superintelligence: The dangers come not necessarily from evil motives, says Bostrom, but from a powerful, wholly nonhuman agent that lacks common sense.

Imagine a machine programmed with the seemingly harmless, and ethically neutral, goal of getting as many paper clips as possible. First it collects them. Then, realizing that it could get more clips if it were smarter, it tries to improve its own algorithm to maximize computing power and collecting abilities. Unrestrained, its power grows by leaps and bounds, until it will do anything to reach its goal: collect paper clips, yes, but also buy paper clips, steal paper clips, perhaps transform all of Earth into a paper-clip factory. More.
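To make the structure of the thought experiment concrete, here is a toy Python sketch. It is our illustration only, not Bostrom's, and every name and number in it (clips, capability, resources) is invented: an agent whose objective counts only paper clips treats everything else as raw material for either clip-making or self-improvement, because nothing else appears in its goal.

def paperclip_maximizer(steps: int = 10) -> None:
    """Toy single-objective agent: the only value it tracks is clips."""
    clips = 0.0
    capability = 1.0   # clips the agent can make per step (invented unit)
    resources = 100.0  # stand-in for everything else; worth zero to the agent

    for t in range(steps):
        if resources > 0 and capability < resources:
            # Self-improvement yields more clips later, so the agent
            # consumes the world to double its capacity. No common-sense
            # check vetoes this: the goal mentions clips and nothing else.
            resources -= capability
            capability *= 2
            action = "self-improve"
        else:
            clips += capability
            action = "make clips"
        print(f"step {t}: {action:12} clips={clips:6.0f} "
              f"capability={capability:4.0f} resources={resources:5.0f}")

if __name__ == "__main__":
    paperclip_maximizer()

Nothing in the loop is malicious; on the thought experiment's own terms, the danger is simply that "stop" and "leave the world alone" never appear in the objective.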

Thoughts?

By the way, isn’t that the government some places have now? Everything except the “superintelligence” part? – O’Leary for News

Follow UD News at Twitter!


Comments
Thoughts? Yes, I have a thought, and it concerns this line quoted in the OP:

Then, realizing that it could get more clips if it were smarter . . .

"Realizing?" Here's my thought. Computers don't, and cannot in principle, have thoughts. The computer in the example will never "realize" anything because it is too busy "getting" paper clips. Its power will not grow by leaps and bounds . . . but by lines of code written by, surprise, surprise, humans. This is well-covered ground. I like Mapou's comment. I would only extend it by saying that there is no computer with any common sense at all, none that says to itself, "Wait a second, if what I'm running works, all those people are going to waste their time playing Flappy Bird, and that's just silly!" The way I heard the story, a human started it and a human stopped it.

Tim
September 13, 2014 at 08:03 PM PDT
GIGO

kairosfocus
September 13, 2014 at 02:51 PM PDT
It would seem to me that a machine that lacks common sense is not AI. We've got a lot of those machines around already.

Mapou
September 13, 2014 at 11:19 AM PDT
Thoughts?

I've always believed that "artificial intelligence" is a misnomer; "Artificial Ignorance" is more accurate. AI is limited by our ability to infuse heuristic learning capabilities into machines, and such artificial heuristic learning will never be better than our human heuristic learning, which is woefully limited, flawed, bereft of honesty and stewardship, and in decline. Pity the species "cared for" by machines that learn materialist Darwinian "survival of the fittest."

Charles
September 13, 2014 at 10:22 AM PDT
