
Eric Holloway: Friendly AI would kill us all


Is that a shocking idea? Let’s follow the logic:

Remember, the goal of friendly AI is to create a godbot that is guaranteed to be kind and good, never to do anything bad, and not to be stupid. Now, in order to guarantee that the bot will always be good, it must be completely predictable, so that we can predict with 100% accuracy that it will never do anything bad. This requirement is “Alf Criterion A”: the godbot must be completely predictable.

The second point is that, in order not to be stupid, the godbot must be able to make its own decisions rather than just blindly do what it is told. This is “Alf Criterion B.”

Therefore, our friendly AI, the omnibenevolent godbot, must fulfill both Alf Criteria.

Eric Holloway, “Friendly artificial intelligence would kill us” at Mind Matters News

The catch, Holloway goes on to argue, is that the two criteria conflict: a godbot that genuinely makes its own decisions (Criterion B) cannot also be completely predictable (Criterion A).
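To see why the criteria collide, here is a minimal sketch in the spirit of Turing’s halting-problem diagonalization. Everything in it, including the names naive_predictor and contrarian, is a hypothetical illustration, not code from Holloway’s article: any candidate “perfectly predicts goodness” function can be defeated by a program that consults the predictor about itself and then does the opposite.

```python
# Hypothetical sketch (not from Holloway's article): why a perfect
# "always good" predictor cannot coexist with a bot that makes its
# own decisions. Mirrors Turing's halting-problem diagonalization.

def make_contrarian(predictor):
    """Build a program that does the opposite of whatever
    `predictor` forecasts about it. Returning True means
    'behaves well'; False means 'does something bad'."""
    def contrarian():
        # Ask the predictor about ourselves, then invert its verdict.
        return not predictor(contrarian)
    return contrarian

def naive_predictor(program):
    """Stand-in for an 'Alf Criterion A' oracle: it confidently
    predicts that every program is always good."""
    return True

contrarian = make_contrarian(naive_predictor)

predicted = naive_predictor(contrarian)  # oracle says: good (True)
actual = contrarian()                    # actually: bad (False)

# The prediction is wrong, and it would be wrong for ANY predictor
# we plug in, since the contrarian always inverts the verdict.
assert predicted != actual
print(f"Predicted good: {predicted}, actually good: {actual}")
```

Swap in any predictor you like for naive_predictor and, provided it actually returns an answer, the contrarian still falsifies it. That is why Criterion A cannot hold for any bot free enough to satisfy Criterion B.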


If you are worried about things like this happening, check out Eric Holloway’s “Could AI think like a human, given infinite resources?” Given that the human mind is a halting oracle, he argues, the answer is no.

Whew.

Some do worry about an AI takeover, though. Check out, for example, “Tales of an invented god.”

Follow UD News at Twitter!
