Saturday 5 October 2024

Guest Blog: Interoperable and Standardized Algorithmic Images: The Domestic War on Drugs and Mugshots Within Facial Recognition Technologies

by Aaron Tucker

Tucker, A. (2024). Interoperable and standardized algorithmic images: The domestic war on drugs and mugshots within facial recognition technologies. Big Data & Society, 11(3). https://doi.org/10.1177/20539517241274593

Generative AI (GenAI) systems, such as Midjourney, Stable Diffusion, and DALL-E, are data visualization systems. Such technologies are the result of their training data in combination with dense algorithmic mathematics: the images produced by such systems surface that original training data, pairings of text and image, for better and worse.
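To see how directly a prompt pulls on that training data, consider a minimal sketch, assuming the Hugging Face diffusers library and an openly released Stable Diffusion checkpoint (the model id, device, and output filename here are illustrative assumptions, not drawn from my article):

```python
# Minimal sketch: text-to-image generation with an open Stable Diffusion
# checkpoint via Hugging Face's diffusers library. The model id, device,
# and prompt below are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

# Loading pretrained weights means loading, in compressed form, the
# text-image pairings the model was trained on.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# A single generic prompt: the output visualizes whatever images the
# training data paired with this text, biases included.
image = pipe("a mugshot").images[0]
image.save("mugshot_sample.png")
```

Nothing in this sketch filters or curates; whatever sat in the scraped text-image pairs travels straight into the generated image.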

This dynamic is especially problematic when image data that are laced with racialized vectors of power, such as mugshots, are freely available to those building GenAI models. From their inception in the late 19th century, by scientists such as Francis Galton and Alphonse Bertillon, mugshots were always meant to be mobile and standardized: the accepted visuality of the mugshot, as a front-facing pose and a side profile taken in light designed to maximize visibility, ensured that the photograph could be “accurately” compared to any face in question across a variety of locations.

Such logics were adopted by computer scientists working on the problem of computational face recognition with mugshots. Reports such as the 1997 “Best Practice Recommendation for the Capture of Mugshots” stressed that mugshots needed to be “interoperable” so that they could move between various facial recognition technologies (FRTs) and applications.

Mugshots are intercut with socio-technical systems such as policing practices, mental health support, and addiction support; mugshots are not neutral images, but rather a composite of affect, lived narrative, social power structures, and, often, violence in many forms. Therefore, as Katherine Biber warned in her 2013 article, “In Crime’s Archive,” there are real dangers when the criminal archive slips uncritically into the cultural sphere. It is crucial that we pay attention to GenAI and the ways that it tells on itself and on the data it is visualizing through its creations. As my article describes, the ability to generate images with the prompt “a mugshot” that are defined by the same biases as mugshot databases is alarming.

The solution is not to ban such prompts or to crack down on the prompt engineering that surfaces such results, but rather to address the root issue: the uncritical use and re-use of problematic data in model training, not just in computer vision systems, but in all AI systems.