
Why artificial intelligence (AI) cannot produce a Universal Answers Machine


Okay, let’s start with the question: Can an algorithm be racist? The machine has no opinion. It processes vast tracts of data. But, as a result, the troubling hidden roots of some of that data are exposed.

[Yonatan Zunger] offers the example of high school student Kabir Alli’s 2016 Google image searches for “three white teenagers” and “three black teenagers.” The request for white teens turned up stock photography for sale, while the request for black teens turned up local media stories about arrests. The ensuing anger over deep-seated racism submerged the fact that the results were not a decision anyone had made. They were an artifact of what people had published: “When people said ‘three black teenagers’ in media with high-quality images, they were almost always talking about them as criminals, and when they talked about ‘three white teenagers,’ they were almost always advertising stock photography.”*

The machine only shows us what we ask for; it doesn’t tell us what we should wonder about. Zunger cautions that, of course, “Nowadays, either search mostly turns up news stories about this event.” That makes it difficult to determine whether anything has changed.
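
To see how results like these can arise without anyone deciding anything, here is a minimal sketch in Python. The captioned “index,” the queries, and the word-overlap scoring are all invented for illustration; real image search is vastly more complex.

    # A toy retrieval engine: rank captions by how many query words they share.
    # Everything here is hypothetical; it only illustrates that results mirror
    # the corpus, not anyone's opinion.

    corpus = [
        "three white teenagers smiling stock photo for sale",
        "three white teenagers friendship stock photography",
        "three black teenagers arrested local news report",
        "three black teenagers arrest coverage local media",
    ]

    def search(query, docs, top_n=2):
        """Score each caption by shared query words and return the best matches."""
        terms = query.lower().split()
        scored = [(sum(t in doc for t in terms), doc) for doc in docs]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [doc for score, doc in scored[:top_n] if score > 0]

    # The "algorithm" has no opinion; it simply returns whatever the corpus
    # pairs each phrase with.
    print(search("three white teenagers", corpus))
    print(search("three black teenagers", corpus))

Nothing in the scoring function mentions race; the skew lives entirely in what was published and indexed.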

There is no simple way to automatically remove bias because much of it comes down to human judgment. Most Nobel Prize winners are men, but a thoughtful human being will not assume that a winner “must be” a man. A machine learning system “knows” nothing other than the data it is fed. It certainly doesn’t “know” that it might be reinforcing prejudice or giving offense. If we want to prevent that, we must constantly monitor its output. More.
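
What such monitoring might look like can be sketched just as simply. The toy majority-class “model,” the made-up 9-to-1 label split, and the 80% audit threshold below are all assumptions for illustration, not real Nobel data or anyone’s actual pipeline.

    from collections import Counter

    # Hypothetical, deliberately skewed training labels.
    training_labels = ["man"] * 9 + ["woman"] * 1

    def naive_model(features):
        """Predict the majority class seen in training, ignoring the input."""
        return Counter(training_labels).most_common(1)[0][0]

    def audit(predictions, threshold=0.8):
        """Flag output when one class dominates beyond the chosen threshold."""
        counts = Counter(predictions)
        total = len(predictions)
        for label, n in counts.items():
            if n / total > threshold:
                print(f"warning: {label} is {n / total:.0%} of predictions; review for bias")

    preds = [naive_model({}) for _ in range(100)]
    audit(preds)  # the audit flags the skew; a human still judges whether it matters

The audit can flag a skew, but only a person can decide whether that skew reflects reality or prejudice.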

In short, there will always be a job or a business for a person with good judgment. You can’t automate it.

* Stock photography: The generic “faceless face” photos that adorn racks of pamphlets advocating good nutrition or volunteering are sold by stock photography houses. The fact that the face probably doesn’t look much like the kid who lives across the road is part of the package the publisher is buying. If there is a face at all, you naturally look at it, right? But you aren’t supposed to see anything that distracts you from the pamphlet’s message.

See also: Did AI teach itself to “not like” women?

and

Ethics for an Information Society

Follow UD News at Twitter!
