Training Humans, conceived by Kate Crawford, AI researcher, artist and professor, and Trevor Paglen, artist and researcher, is the first major photography exhibition devoted to training images: the collections of photos used to train artificial intelligence (AI) systems in how to “see” and categorize the world.
In this exhibition, Crawford and Paglen reveal the evolution of training image sets from the 1960s to today. As stated by the artists, “when we first started conceptualizing this exhibition over two years ago, we wanted to tell a story about the history of images used to ‘recognize’ humans in computer vision and AI systems. We weren’t interested in either the hyped, marketing version of AI or the tales of dystopian robot futures. Rather, we wanted to engage with the materiality of AI, and to take those everyday images seriously as a part of a rapidly evolving machinic visual culture. That required us to open up the black boxes and look at how these engines of seeing currently operate”.
Training Humans explores two fundamental issues in particular: how humans are represented, interpreted and codified through training datasets, and how technological systems harvest, label and use this material. As AI systems’ classifications of humans become more invasive and complex, their biases and politics become apparent. Within computer vision and AI systems, forms of measurement easily – but surreptitiously – turn into moral judgments.
Of particular interest to Crawford and Paglen are classificatory taxonomies related to human affect and emotions. Based on the heavily criticized theories of psychologist Paul Ekman, who claimed that the breadth of human feeling could be boiled down to six universal emotions, AI systems now measure people’s facial expressions to assess everything from mental health to whether someone should be hired or is likely to commit a crime. Looking at the images in this collection, and seeing how people’s personal photographs have been labeled, raises two essential questions: where are the boundaries between science, history, politics, prejudice and ideology in artificial intelligence? And who has the power to build and benefit from these systems?
As underlined by the artists, “a stark power asymmetry lies at the heart of these tools. What we hope is that Training Humans gives us at least a moment to start to look back at these systems, and understand, in a more forensic way, how they see and categorize us.”
The exhibition will be accompanied by an illustrated publication in the Quaderni series, published by Fondazione Prada, including a conversation between Kate Crawford and Trevor Paglen on the complex topics addressed in their project.
Fondazione Prada
Milan Osservatorio
Galleria Vittorio Emanuele II
20121 Milan
Italy