LGBTQ advocacy groups have denounced a “dangerous and flawed” Stanford University study that claims to have used artificial intelligence to determine a person’s sexuality from facial images with remarkable accuracy.
The study, which is in draft form but has been accepted by the Journal of Personality and Social Psychology, reportedly used “deep neural networks to extract features” from more than 35,000 facial images that men and women had posted on an unidentified dating website.
Researchers Michal Kosinski and Yilun Wang said their algorithm could correctly distinguish between gay and straight men 81 percent of the time, The Economist reported Friday. For women, they claimed, the tool was accurate 74 percent of the time.
It’s unclear what benefits, if any, the research would provide. Kosinski and Wang instead described their work as a “preventative measure” that highlights the safety risks many LGBTQ people could face if such technology becomes widely available. In keeping with that mindset, they declined to disclose the dating website used in their research, in an effort to discourage copycats.
“Tech companies and government agencies are well aware of the potential of computer vision algorithm tools,” the men wrote. “In some cases, losing the privacy of one’s sexual orientation can be life-threatening. The members of the LGBTQ community still suffer physical and psychological abuse at the hands of governments, neighbors, and even their own families.”
The research made global headlines, with Newsweek, The Guardian, MIT Technology Review and other publications running their own takes. Officials at GLAAD and the Human Rights Campaign (HRC) quickly condemned it, saying the study suffered from “myriad flaws” and “had not been peer reviewed.”