This time, using big data. Say what you want about them, they’re serious. It’s not clear that these people are “Darwinists,” as the title implies, but they sure place a lot of faith in outmoded ideas about genetics:
Machine-learning algorithms have recently experienced an explosion of uses, some legitimate and others problematic. Still others are downright racist. Several recent examples promise governments and private companies the ability to glean private information from people’s appearances alone. Stanford University researchers have constructed a “gaydar” algorithm that they claim can differentiate straight and gay faces far more accurately than people can.
The researchers claim that their results are consistent with the “prenatal hormone theory” – an idea supposing that fetal exposure to high levels of male sex hormones called androgens helps to determine sexual orientation. Why they assumed that high levels of male hormones would determine homosexuality was not made clear. The researchers cite the “much-contested claim” that these hormone exposures also result in gender-atypical faces. The problem is that, rather than producing objective insights, artificial intelligence (AI) programs often reinforce human biases. These biases, if trusted in practice, can harm already marginalized populations.
One example can be found in China: “University students taking online exams monitored by proctoring algorithms not only have to answer the questions, but also maintain the appearance of a student who is not cheating.” Unfortunately for the students, these algorithms reportedly often make false accusations against minority students, such as those with disabilities who move their faces and hands in atypical ways. Black or dark-skinned students have been required to work under bright lights to have their relevant features more detectable. The most egregious example is the attempt to read faces to identify “criminal types.” – Jerry Bergman, “Darwinists Still Attempting to Prove Criminality is Genetic” at Creation Evolution Headlines
Claimed findings from such algorithms are generally bunk. With a large enough database, you can find pretty much any conclusion you want – and that is exactly what many researchers do. See, for example, an interview with one of the authors of The Phantom Pattern Problem (Oxford, October 1, 2020) here.
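The “find any conclusion you want” point can be illustrated with a toy simulation (a hypothetical sketch, not from the book): if you test enough purely random “features” against a purely random outcome, some feature will appear to predict the outcome far better than chance.

```python
# A minimal sketch of the phantom-pattern problem: all data here is
# pure noise, yet searching many features still "finds" a predictor.
import random

random.seed(0)

n_people = 100
n_features = 1000

# A purely random binary outcome -- a coin flip per person.
outcome = [random.randint(0, 1) for _ in range(n_people)]

best_match = 0.0
for _ in range(n_features):
    # Each candidate "feature" is also noise, independent of the outcome.
    feature = [random.randint(0, 1) for _ in range(n_people)]
    agreement = sum(f == o for f, o in zip(feature, outcome)) / n_people
    best_match = max(best_match, agreement)

# Any single noise feature agrees with the outcome about 50% of the
# time, but the best of 1,000 noise features does noticeably "better."
print(f"best agreement found: {best_match:.0%}")
```

Reporting only that best-performing feature, without accounting for the thousand that were searched, is exactly the kind of data-mining this passage warns about.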