From Maria Temming at Science News:
Computers get a say in these life-changing decisions because their crime forecasts are supposedly less biased and more accurate than human guesswork.
A comparison of the volunteers’ answers with COMPAS’ predictions for the same 1,000 defendants found that both were about 65 percent accurate. “We were like, ‘Holy crap, that’s amazing,’” says study coauthor Hany Farid, a computer scientist at Dartmouth. “You have this commercial software that’s been used for years in courts around the country — how is it that we just asked a bunch of people online and [the results] are the same?”
There’s nothing inherently wrong with an algorithm that only performs as well as its human counterparts. But this finding, reported online January 17 in Science Advances, should be a wake-up call to law enforcement personnel who might have “a disproportionate confidence in these algorithms,” Farid says.
Farid has his doubts that computers can show much improvement. He and Julia Dressel, the study’s other coauthor, built several simple and complex algorithms that used two to seven defendant features to predict recidivism. Like COMPAS, all their algorithms maxed out at about D-level accuracy. That makes Farid wonder whether trying to predict crime with anything approaching A+ accuracy is an exercise in futility. More.
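For a sense of how simple the simple end of that range is: Dressel and Farid found that a linear classifier given only two features (age and number of prior convictions) performed about as well as COMPAS. Here is a minimal illustrative sketch of such a two-feature rule; the weights, threshold, and toy records below are invented for demonstration and are not the authors’ actual model or data:

```python
# Illustrative only: a hand-weighted two-feature linear rule in the
# spirit of Dressel and Farid's simple predictor (age + prior convictions).
# Weights and data are made up; real models would be fit to real records.

def predict(age, priors, w_age=-0.05, w_priors=0.35):
    """Return 1 (predicted to reoffend) if the linear score is positive."""
    score = w_age * (age - 30) + w_priors * priors  # younger + more priors -> higher score
    return 1 if score > 0 else 0

# Toy records: (age, prior_convictions, actually_reoffended)
records = [
    (22, 4, 1), (45, 0, 0), (19, 6, 1), (60, 1, 0),
    (35, 3, 1), (28, 0, 0), (50, 2, 0), (24, 5, 1),
]

correct = sum(predict(age, priors) == label for age, priors, label in records)
accuracy = correct / len(records)
print(f"accuracy on toy data: {accuracy:.0%}")
```

Even this deliberately crude rule classifies most of the toy records correctly, which echoes the study’s point: a two-number linear score can land in the same accuracy range as a commercial tool.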
Maybe computers would be better at predicting crime among sociopathic robots than among humans. 😉
See also: Math prof asks Rob Sheldon: But how do we know that it isn’t a conscious machine?
Why human beings cannot design a conscious machine: Basic physics would suggest that even a single neuron has properties that cannot be duplicated by all the world’s supercomputers running attoflop simulations.