George D. Montañez (shown chatting with Mike Behe in July, left), a machine learning specialist, reflects on Micah 6:8 as a guide to developing ethics for the rapidly growing profession:
The AI and ML systems we have in place today are not sentient, but they are still dangerous. I am not worried about the future of AI, but I am concerned about the dangers artificial learning systems currently pose. There are obvious threats: weaponization, terrorism, fraud. But there are also less intentional threats, such as increased inequality, privacy violations, and negligence resulting in harm. For example, consider the case of the self-driving Uber vehicle that killed an Arizona pedestrian in March. According to some, that fatal crash proved that self-driving technology just isn’t ready yet, and releasing these cars on roads in their current form is an act of extreme negligence.
Cathy O’Neil, in her book Weapons of Math Destruction, highlights several cases of machine learning systems harming poor and minority groups through the uncritical use of questionable correlations in data. For example, models that attempt to predict the likelihood that someone will commit a crime if released from prison (criminal recidivism models) use early run-ins with police as a feature. Because poor minority neighborhoods are more heavily policed and minority youth are prosecuted more often than their rich white counterparts for the same crimes, such as recreational drug use and possession, this predictive feature strongly correlates with race and poverty levels, punishing those with the wrong economic background or skin color.

Another feature of these systems assesses whether a person lives near others who themselves have had trouble with the police. While this feature may in fact correlate with a higher risk of committing a crime in the future, it punishes the person for what others have done, which they have no control over. Given that poorer people cannot just move into wealthier neighborhoods to improve their statistics, the poor are doubly punished, simply for being poor. Other examples are not hard to find. Because of these dangers, ethics has become an area of increasing concern for AI researchers. George D. Montañez, “Think About Ethics Before Trouble Arises” at Mind Matters
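The proxy-feature problem described above can be illustrated with a toy simulation (the rates and names below are hypothetical, chosen only to make the mechanism visible, and do not come from O’Neil’s book or any real model): two populations with identical underlying offense rates, but different policing intensities, produce very different values of a "recorded police contact" feature, so any model scoring risk on that feature penalizes the more heavily policed group.

```python
import random

random.seed(0)

def avg_recorded_contacts(policing_rate, n=10_000):
    """Simulate a population where everyone has the same 10% true rate
    of a minor offense, but the chance that an offense is ever recorded
    as a police contact depends on neighborhood policing intensity."""
    recorded = 0
    for _ in range(n):
        offended = random.random() < 0.10                # same true behavior
        if offended and random.random() < policing_rate:  # unequal detection
            recorded += 1
    return recorded / n

# Hypothetical policing intensities for two neighborhoods
light = avg_recorded_contacts(policing_rate=0.2)  # lightly policed
heavy = avg_recorded_contacts(policing_rate=0.8)  # heavily policed

# A naive risk model that scores people by recorded contacts will rate
# the heavily policed group several times "riskier" on average, even
# though true offense rates are identical by construction.
print(f"avg recorded contacts, lightly policed:  {light:.3f}")
print(f"avg recorded contacts, heavily policed: {heavy:.3f}")
```

The feature measures policing, not behavior, which is exactly why its correlation with race and poverty is so pernicious.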
True, but—as Montañez makes clear—we only care about ethics if we aren’t machines. His piece highlights the way data can embody hidden ethical problems.
George D. Montañez is an assistant professor of computer science at Harvey Mudd College, in Claremont, California, specializing in machine learning.
Also by George Montañez: What is learning anyway?