Monday, 9 September 2019

How should we analyze algorithmic normativities?

Special theme issue
Guest editors: Francis Lee* and Lotta Björklund Larsen**

* Uppsala University, Sweden
** TARC (Tax Administration Research Centre) at the University of Exeter Business School, U.K.

Algorithmic normativities shape our world. But how do we analyze them?
Algorithms are making an ever-increasing impact on our world. In the name of efficiency, objectivity, or sheer wonderment, algorithms are increasingly intertwined with society and culture. It is more and more common to argue that algorithms automate inequality, that they are biased black boxes that reproduce racism, and that they control our money and information.1 Implicit, or sometimes very explicit, in many of these observations is that algorithms are meshed with different normativities, and that these normativities come to shape our world. The special theme Algorithmic normativities contains a diversity of analyses and perspectives on how algorithms and normativities are intertwined.

In the editorial How should we theorize algorithms? Five ideal types in analyzing algorithmic normativities (abridged in this blog post), we playfully draw on the metaphor of the engine hood to situate and discuss the articles of the special theme in relation to five analytical ideal types. We all recognize the trope of going “under the hood” to understand the nefarious politics of biased algorithms. But what other tropes of analysis can we identify in critical algorithm studies? And what are the benefits and drawbacks of these different analytical ideal types? To answer this, we categorize the articles in the special theme according to how they theorize, analyze, and understand algorithmic normativities. Our goal is thus to continue the meta-discussion about analytical strategies in critical studies of algorithms (cf. Kitchin 2014).

Under the hood: the politics inscribed in the algorithm
The first ideal type looks “under the hood” of the algorithm, which implies an analysis of the politics, effects, and normativities that are designed into algorithms. In this analytical ideal type, the logic of the algorithm appears like a deus ex machina impinging on society’s material politics (see for instance Winner’s seminal article on the politics of artefacts). This is reflected, for instance, in Christopher Miles’ article, which illustrates how algorithms become intertwined with specific normativities in American farming. Miles shows that although new digital practices are introduced, existing socio-economic normativities are often preserved or even amplified, and come to thwart other imagined futures. Algorithms here emerge as material laws of society that reshape it according to different ideals. But what other ways of approaching algorithmic normativities are productive in understanding the emerging algorithmic society?
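To make the trope concrete, consider a deliberately simplified and entirely hypothetical sketch (in Python; the features, weights, and cutoff are our inventions for illustration, not drawn from any of the articles) of how a normativity can be inscribed “under the hood” as an apparently neutral threshold:

    # Hypothetical scoring rule. Every value here is a normative choice
    # inscribed in the code: which features count, how much they weigh,
    # and where the cutoff sits.
    def approve_loan(income: float, years_employed: float) -> bool:
        score = 0.7 * (income / 10_000) + 0.3 * years_employed
        # The threshold encodes a norm about who counts as "creditworthy";
        # it looks technical, but it is a political decision.
        return score >= 5.0

    # Reading the code reveals, for instance, that applicants with
    # irregular employment histories are systematically disfavoured.
    print(approve_loan(income=45_000, years_employed=1.0))  # False

Going “under the hood” in this sense means treating such design choices, rather than the surrounding practices, as the primary site of algorithmic politics.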

Above the hood: algorithms in practices
At the other end of this spectrum of ideal types, working “above the hood”, algorithms can be seen as a “contingent upshot of practices, rather than [as] a bedrock reality” (Woolgar and Lezaun 2013: 326). In this ideal type, for instance, Malte Ziewitz’s (2017) account of an “algorithmic walk” humorously points us toward the constant work of interpreting, deciding, and debating about algorithmic output. In this vein, Farzana Dudhwala and Lotta Björklund Larsen’s contribution to the special theme proposes that people recalibrate algorithmic outputs: sometimes people accept the algorithmic output, sometimes not, but always with the aim of achieving and establishing a “normal” situation. Patricia de Vries and Willem Schinkel’s article likewise undertakes an analysis of the practical politics of algorithms. Their paper addresses how three artists construct face masks to resist and critique facial recognition systems, thereby exploring the practices and negotiations around algorithms of surveillance and control. In sum, this ideal type brings negotiations around the algorithm into focus, while the politics “under the hood” recedes into the background, perhaps leading to the algorithm “itself” becoming obscured behind human action.
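Schematically, and again only as a hypothetical illustration (the thresholds and the notion of a “normal range” are our inventions), the “above the hood” perspective directs attention to moments like the following, where the algorithmic output is only one input into a human judgement:

    # Hypothetical sketch of recalibration: the algorithmic output is not
    # taken at face value but weighed against a human reviewer's sense of
    # what a "normal" case looks like.
    def recalibrate(algorithmic_risk: float, reviewer_expectation: float) -> float:
        # If the output roughly matches expectations, accept it as-is.
        if abs(algorithmic_risk - reviewer_expectation) < 0.1:
            return algorithmic_risk
        # Otherwise the reviewer adjusts the output toward what they
        # consider normal; the negotiation happens here, above the hood,
        # not inside the algorithm.
        return (algorithmic_risk + reviewer_expectation) / 2

    print(recalibrate(0.9, 0.4))  # 0.65: the output is tempered by human judgement

What such a sketch cannot show, of course, is precisely what the ethnographic studies in this theme document: the situated, contested work through which “normal” is established in the first place.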

Hoods in relations: a relational perspective on algorithms
A possible middle ground between “under the hood” and “above the hood” is offered by an analytical ideal type that highlights the intertwining of algorithmic and human action (cf. Callon and Law 1997). This is the strategy of Francis Lee et al.’s article, which criticizes the current focus on fairness, bias, and oppression in algorithm studies as a step toward objectivism and realism. They instead propose paying attention to operations of folding as a tool to highlight the constant intertwining of, for instance, practices, algorithms, data, or simulations. With a similar relational perspective, Elizabeth Reddy et al.’s article highlights, through the algorithmic experiments of a comic book artist, how algorithms can be used to automate the work of assembling stories. Even though an algorithm might produce “the work”, ideas about authorship and accountability are still organized around human subjectivity and agency. The power of this perspective is that it allows a sensitivity to how agency is constituted in the relation between humans and algorithms, and how action is a negotiation between artefact and human. But it might also appear blind to power struggles and the oppression of weaker actors, as well as apolitical, ignoring the effects that algorithms have on the world (cf. Star 1991; Winner 1993; Galis and Lee 2014).

Lives around hoods: the social effects of algorithms
The fourth ideal type widens the lens and homes in on the social effects of algorithms. This analytical position takes an interest in infrastructures of classification and how they affect society and the lives of people; it highlights how people’s lives become shaped by algorithmic systems.2 In this vein, Helene Gad Ratner and Evelyn Ruppert’s article analyzes the transformation of data for statistical purposes, showing how metadata and data cleaning, as aesthetic practices, are in fact classification struggles with normative effects. Ratner and Ruppert thus highlight the effects of classification work in relation to homeless and student populations: how people are performed with infrastructures.

The mobile mechanics: reflexivity and the study of algorithms
Finally, we wish to highlight “the mobile mechanics”. By being attentive to how we as social scientists relate to algorithms, as well as to those who work with them, our own normativities and presumptions come to the fore. In the special theme, David Moats and Nick Seaver’s article challenges our assumptions about how computer scientists understand the work of social scientists, and vice versa. Moats and Seaver document their attempt to arrange an experiment with computer scientists to test ingrained boundaries: can the quantitative tools of computer science be used for critical social analysis? As it turns out, the authors were instead confronted with their own normative assumptions. Jeremy Grosman and Tyler Reigeluth’s article offers a similarly reflexive approach. In their contribution, the notion of normativity in algorithmic systems is discussed from various analytical positions: technical, sociotechnical, and behavioral. The authors argue that algorithmic systems are inhabited by tensions between different kinds of normativities, and that a fruitful approach is to explore these tensions rather than the normativities themselves.

In conclusion
A point of departure for this special theme was that algorithms are intertwined with normativities at every step of their existence: in their construction and implementation, as well as in their use in practice. The articles in this special theme thus scrutinize ideas of normativities in and around algorithms: how different normativities are enacted with algorithms, and how different normativities are handled when humans tinker with algorithms. The array of theoretical approaches that implicate algorithms (anxieties, pluralities, recalibrations, folds, aesthetics, accountability) forces us to engage with the multiple normative orders that algorithms are entangled with. We wish you good reading and welcome comments and further discussion! Welcome to the special theme Algorithmic normativities.

References
Amoore L (2013) The Politics of Possibility: Risk and Security beyond Probability. Durham, NC: Duke University Press.

Beer D (2009) Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society 11: 985–1002.
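
Bowker GC and Star SL (1999) Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.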

Callon M and Law J (1997) After the individual in society: Lessons on collectivity from science, technology and society. Canadian Journal of Sociology 22(2): 165–182.

Dourish P (2016) Algorithms and their others: Algorithmic culture in context. Big Data & Society 3(2): 1–11.

Galis V and Lee F (2014) A sociology of treason: The construction of weakness. Science, Technology, & Human Values 39: 154–179.

Gillespie T (2014) The relevance of algorithms. In: Gillespie T, Boczkowski PJ and Foot KA (eds) Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press, pp. 167–194.

Kitchin R (2014) Thinking critically about and researching algorithms. SSRN Electronic Journal. DOI: 10.2139/ssrn.2515786.

Neyland D (2016) Bearing account-able witness to the ethical algorithmic system. Science, Technology, & Human Values 41(1): 50–76.

Schüll N (2012) Addiction by Design: Machine Gambling in Las Vegas. Princeton, NJ: Princeton University Press.

Seaver N (2017) Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society 4: 1–12.

Star SL (1991) Power, technologies and the phenomenology of conventions: On being allergic to onions. In: Law J (ed.) A Sociology of Monsters: Essays on Power, Technology and Domination. London: Routledge, pp. 26–56.

Striphas T (2015) Algorithmic culture. European Journal of Cultural Studies 18: 395–412.

Totaro P and Ninno D (2014) The concept of algorithm as an interpretative key of modern rationality. Theory, Culture & Society 31: 29–49.

Winner L (1993) Upon opening the black box and finding it empty: Social constructivism and the philosophy of technology. Science, Technology, & Human Values 18: 362–378.

Woolgar S and Lezaun J (2013) The wrong bin bag: A turn to ontology in science and technology studies. Social Studies of Science 43: 321–340.

Ziewitz M (2017) A not quite random walk: Experimenting with the ethnomethods of the algorithm. Big Data & Society 4(2): 1–13.

Notes
1 See for instance, Amoore 2013; Beer 2009; Dourish 2016; Gillespie 2014; Kitchin 2014; Neyland 2016; Schüll 2012; Seaver 2017; Striphas 2015; Totaro and Ninno 2014; Ziewitz 2017. See also the critical algorithm studies list https://socialmediacollective.org/reading-lists/critical-algorithm-studies.
2 On the “torque” between classification systems and people’s lives, see Bowker and Star (1999).