As facial recognition software becomes inescapable in everyday life, the developers behind a new web app-slash-art project want to show people exactly how they look in the eyes of AI—and the revelations are often jarring.
The tool, called ImageNet Roulette, detects human faces in any uploaded photo and assigns them labels using ImageNet, an academic training set containing millions of pictures depicting almost anything imaginable, and WordNet, the taxonomy that supplies its text tags. As viral examples on Twitter have shown, the results of this process are more often than not useless: nonsensical at best and racist or otherwise offensive at worst.
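Conceptually, the labeling step boils down to mapping a classifier's predicted WordNet synset ID to its human-readable tag. The sketch below is a simplified, hypothetical illustration of that lookup; the synset IDs and labels are illustrative stand-ins, not ImageNet Roulette's actual taxonomy or code.

```python
# Hypothetical sketch: a classifier trained on ImageNet's "person"
# subtree predicts a WordNet synset ID for a detected face, and that
# ID is mapped to a human-readable label. The mapping below is a tiny
# illustrative stand-in, not the real WordNet hierarchy.

SYNSET_LABELS = {
    "n00007846": "person",
    "n10287213": "man",
    "n10787470": "woman",
}

def label_face(predicted_synset_id: str) -> str:
    """Return the human-readable tag for a predicted synset ID,
    or 'unknown' if the ID is not in the mapping."""
    return SYNSET_LABELS.get(predicted_synset_id, "unknown")

print(label_face("n10287213"))  # prints: man
```

Because the output label is only as good as the categories baked into the taxonomy, any offensive or nonsensical term present in the "person" subtree can surface directly in the tool's results.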
In some cases, it would label black men as “offenders” or “wrongdoers”; at other times, it would spit out racial slurs against Asians or outdated and offensive terms for black people.
The offensiveness was more or less the point, says co-creator Kate Crawford, who is also a co-founder of New York University’s AI Now Institute, which studies the social implications of artificial intelligence.
ImageNet, a joint research project of Stanford University and Princeton University, is often credited as the impetus for an ongoing boom in machine learning research and applications, spanning everything from self-driving cars to shopping apps. But its “person” category, which Crawford describes as “strange, fascinating and often offensive,” leaves much to be desired, a testament to the difficulty of reducing human identities to simple categories and the built-in biases that manifest when doing so.
“It reveals the deep problems with classifying humans—be it race, gender, emotions or characteristics,” Crawford tweeted. “It’s politics all the way down, and there’s no simple way to ‘debias’ it.”
The researchers behind ImageNet plan to try anyway; a spokesperson for the team told BuzzFeed that they submitted research on improving the dataset for peer review last month and are currently awaiting the results.
“We welcome input and suggestions from the research community and beyond on how to build better and fairer datasets for training and evaluating AI systems,” ImageNet’s statement read.
The tool’s release was accompanied by the publication of a two-year investigation by Crawford and fellow ImageNet Roulette co-creator Trevor Paglen into the assumptions baked into machine learning training datasets, going back to the first facial recognition experiments in the 1960s. The whole project is part of a bigger art exhibition from the duo on the same themes, called “Training Humans,” on display at Milan’s Osservatorio Fondazione Prada. (Berlin-based developer Leif Ryge also co-created ImageNet Roulette but is not part of the bigger project.)
Facial recognition’s well-documented tendency toward blind spots around race and other diversity issues has become a flashpoint in an emerging debate over what constitutes the ethical creation and use of deep-learning algorithms. That discussion comes as universities have begun to invest in multidisciplinary approaches to AI research emphasizing social impact, with MIT, Stanford and other prestigious schools establishing their own institutes similar to NYU’s AI Now.
“There is much at stake in the architecture and contents of the training sets used in AI,” Crawford and Paglen write in the conclusion to their report. “They can promote or discriminate, approve or reject, render visible or invisible, judge or enforce. And so we need to examine them—because they are already used to examine us—and to have a wider public discussion about their consequences, rather than keeping it within academic corridors.
“As training sets are increasingly part of our urban, legal, logistical, and commercial infrastructures, they have an important but underexamined role: the power to shape the world in their own images.”