AI researchers condemn predictive crime software, citing racial bias and flawed methods

Taylor Hatmaker
(Photo caption: Police advance on demonstrators protesting the killing of George Floyd on May 30, 2020 in Minneapolis, Minnesota. Photo by Scott Olson/Getty Images)

A collective of more than 2,000 researchers, academics and experts in artificial intelligence is speaking out against soon-to-be-published research that claims to use neural networks to "predict criminality." At the time of writing, more than 50 employees working on AI at companies like Facebook, Google and Microsoft had signed an open letter opposing the research and imploring its publisher to reconsider.

The controversial research is set to be highlighted in an upcoming book series by Springer, the publisher of Nature. Its authors make the alarming claim that their automated facial recognition software can predict if a person will become a criminal, citing the utility of such work in law enforcement applications for predictive policing.

"By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses," Harrisburg University professor and co-author Nathaniel J.S. Ashby said.

The research's other authors include Harrisburg University assistant professor Roozbeh Sadeghian and Jonathan W. Korn, a Ph.D. student highlighted as an NYPD veteran in a press release. Korn lauded software capable of anticipating criminality as "a significant advantage for law enforcement agencies."

In the open letter opposing the research's publication, AI experts expressed "grave concerns" over the study and urged Springer's review committee to withdraw its offer. The letter also called on other publishers to decline to publish similar future research, citing a litany of reasons why both facial recognition and crime prediction technology should be approached with extreme caution and not leveraged against already vulnerable communities.

The publication's opponents don't just worry that the researchers have opened an ethical can of worms — they also cast doubt on the research itself, criticizing "unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years."

Update: Springer Nature Communications Manager Felicitas Behrendt reached out to TechCrunch and provided the following statement:

“We acknowledge the concern regarding this paper and would like to clarify at no time was this accepted for publication. It was submitted to a forthcoming conference for which Springer will publish the proceedings of in the book series Transactions on Computational Science and Computational Intelligence and went through a thorough peer review process. The series editor’s decision to reject the final paper was made on Tuesday 16th June and was officially communicated to the authors on Monday 22nd June. The details of the review process and conclusions drawn remain confidential between the editor, peer reviewers and authors.”

Facial recognition algorithms have long been criticized for poor performance in identifying non-white faces, among many other scientific and ethical concerns frequently raised about this kind of software. Because the research in question developed facial recognition software intended for predictive policing, the stakes of the technology couldn't be higher.

"Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world," the letter's authors warn.

"The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalize social hierarchies and legitimize violence against marginalized groups."
