Big Data & Society, https://doi.org/10.1177/2053951721996046
In recent years, targeted advertising has gained a prominent place in American politics. In particular, political campaigns, candidates, and advocacy organizations have turned to Facebook for its voluminous array of options for targeting users according to “interests” inferred by machine learning algorithms. In this study, we explored for whom this (algorithmic) classification system for ad targeting works in order to contribute to conversations about the ways such systems produce knowledge about the world rooted in power structures. To do this, we critically analyzed the classification system from a variety of vantage points, particularly focusing on the representation of people of color (POC), women, and LGBTQ+ people. First, we drew on donated user data, which included a list of political ad categories people had been sorted into on Facebook. We also examined Facebook's documentation, training materials, and patents for insight into the inner workings of the system. Finally, we entered into the system via Facebook’s tools for advertisers to explore its contents.
Through this investigation, we catalogued a series of cases that reveal the political order enacted via Facebook’s classification system for ad targeting. We particularly highlight four themes. First, we demonstrate how certain ad categories reflect what Joy Buolamwini calls a “coded gaze,” or the “embedded views that are propagated by those who have the power to code systems” (2016: n.p.). Second, we highlight how a disproportionate number of ad categories for women and people of color hint at an unmarked user and what Tressie McMillan Cottom (2020) calls “predatory inclusion.” Third, we describe cases of ad categories that flatten dimensions of identity and suggest Kimberlé Crenshaw’s (1989) notion of a “single-axis framework,” which fails to capture the intersectionality of identity. Fourth, we illustrate how Facebook’s classification system exhibits something akin to what Whitney Phillips (2018) refers to as “both-sides-ism” by allowing for ad categories that could represent either an interest in civil rights or the endorsement of hateful ideologies.
Through these cases, we argue that Facebook’s classification system for ad targeting is necessarily political as a result of its underlying technical and commercial logics and the human choices embedded in datafication processes. The system prioritizes the interests of the socially and economically powerful and represents those who have been historically marginalized not on their own terms, but on the terms of those occupying more privileged positions. We suggest that, as a tool for political communication, Facebook’s classification system may have downstream implications for the political voice and representation of marginalized communities to the extent that political campaigns, advocacy groups, and activists increasingly rely on it to cultivate and mobilize supporters. As Facebook weighs whether and when to reinstate political advertising, our study urges continued critical reflection on whose “interests” are served by Facebook’s classification system (and others like it).
Buolamwini J (2016) InCoding — In the Beginning Was the Coded Gaze. Available at: https://medium.com/mit-media-lab/incoding-in-the-beginning-4e2a5c51a45d
Cottom TM (2020) Where platform capitalism and racial capitalism meet: The sociology of race and racism in the digital society. Sociology of Race and Ethnicity 6(4): 441–449. DOI: 10.1177/2332649220949473.
Crenshaw K (1989) Demarginalizing the intersection of race and sex: A Black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum 1989(1), Article 8: 139–167.
Phillips W (2018) The oxygen of amplification: Better practices for reporting on extremists, antagonists, and manipulators online. Data & Society. Available at: https://datasociety.net/library/oxygen-of-amplification/