Tuesday, 1 October 2019

Datafied knowledge production: practices, mechanisms and imaginaries at work in big data analyses

Special Theme Issue
Guest lead editors: Nanna Bonde Thylstrup*, Mikkel Flyverbom**, Rasmus Helles***

* Aarhus University
** Copenhagen Business School
*** University of Copenhagen

Digital transformations, such as datafication and algorithmic sorting, create new conditions for how we come to see, know, feel and act. This special issue explores the intersections between digital transformations and knowledge production by asking questions such as: what does datafied knowledge production look like? Which digital infrastructures support its future development? And what potentialities and limits do datafied forms of analysis and knowledge production contain? The responses we offer include the suggestion that while the resources, material features and analytical operations involved in datafied knowledge production may be different, many fundamental concerns about epistemology, ontology and methods remain relevant to understanding what shapes it. By seeking to understand and explicate such assumptions, operations and consequences, the articles in this special issue sketch the contours of knowledge production in a digital and datafied world.

Editorial: Datafied Knowledge Production
Nanna Bonde Thylstrup, Mikkel Flyverbom, Rasmus Helles

Datastructuring—Organizing and curating digital traces into action
Mikkel Flyverbom and John Murray

Data out of place: data waste and the politics of data recycling
Nanna Bonde Thylstrup

Make data sing: The automation of storytelling
Kristin Veel

The optical unconscious of Big Data: Datafication of vision and care for unknown futures
Daniela Agostinho

Data in the smart city: How incongruent frames challenge the transition from ideal to practice
Anders Koed Madsen

Unsupervised by any other name: Hidden layers of knowledge production in artificial intelligence on social media
Anja Bechmann and Geoffrey C Bowker

Thursday, 26 September 2019

Event: Data Rights - Subjects or Citizens?

The Mile End Institute (Queen Mary, University of London (QMUL)) is hosting an event that will discuss how the exponential accumulation of data from everyday online and offline activities raises tensions about who has the rights to produce and own such data. A panel will feature three speakers from a recently published book edited by Didier Bigo, Engin Isin, and Evelyn Ruppert (Editor, BD&S): Data Politics: Worlds, Subjects, Rights (2019). Engin Isin (QMUL) will chair the panel with Elspeth Guild (QMUL), Jennifer Gabrys (Cambridge; Co-editor, BD&S) and Didier Bigo (Sciences Po and KCL) speaking about their contributions. Click here for more information and to register. The book is Open Access and a PDF copy can be downloaded here.

Monday, 9 September 2019

How should we analyze algorithmic normativities?

Special theme issue
Guest editors: Francis Lee* and Lotta Björklund Larsen**

* Uppsala University, Sweden
** TARC (Tax Administration Research Centre) at the University of Exeter Business School, U.K.

Algorithmic normativities shape our world. But how do we analyze them?
Algorithms are making an ever-increasing impact on our world. In the name of efficiency, objectivity, or sheer wonderment, algorithms are increasingly intertwined with society and culture. It is more and more common to argue that algorithms automate inequality, that they are biased black boxes that reproduce racism, and that they control our money and information.1 Implicit, or sometimes very explicit, in many of these observations is that algorithms are meshed with different normativities and that these normativities come to shape our world. The special theme Algorithmic normativities contains a diversity of analyses and perspectives that deal with how algorithms and normativities are intertwined.

In the editorial, How should we theorize algorithms? Five ideal types in analyzing algorithmic normativities (also available in abridged form in this blogpost), we playfully draw on the metaphor of the engine hood to situate and discuss the articles contained in the special theme in relation to five analytical ideal types. We all recognize the trope of going under the hood to understand the nefarious politics of biased algorithms. But what other tropes of analysis can we identify in critical algorithm studies? And what are the benefits and drawbacks of these different analytical ideal types? To this end, we categorize the articles in the special theme according to how the different contributions theorize, analyze, and understand algorithmic normativities. A goal is thus to continue exploring the meta-discussion about analytical strategies in critical studies of algorithms (cf. Kitchin 2014).

Under the hood: the politics inscribed in the algorithm
The first ideal type looks “under the hood” of the algorithm, which implies an analysis of the politics, effects, and normativities that are designed into algorithms. In this analytical ideal type, the logic of the algorithm appears like a deus ex machina impinging on society’s material politics (see for instance Winner’s seminal article on the politics of artefacts). This is reflected, for instance, in Christopher Miles’ article, where he illustrates how algorithms become intertwined with specific normativities in American farming. Miles shows that although new digital practices are introduced, existing socio-economic normativities are often preserved or even amplified, and also seem to thwart other imagined futures. Algorithms here emerge as material laws of society that reshape it according to different ideals. But what other ways of approaching algorithmic normativities are productive in understanding the emerging algorithmic society?

Above the hood: algorithms in practices
On the other side of this spectrum of ideal types, working “above the hood”, algorithms can be seen as a “contingent upshot of practices, rather than [as] a bedrock reality” (Woolgar and Lezaun 2013, 326). In this ideal type, for instance, Malte Ziewitz’s (2017) account of an “algorithmic walk” humorously points us toward the constant work of interpreting, deciding, and debating about algorithmic output. In this vein, Farzana Dudhwala and Lotta Björklund Larsen’s contribution to the special theme proposes that people recalibrate algorithmic outputs: sometimes people accept the algorithmic output, sometimes not, but always with the aim of achieving and establishing a “normal” situation. Patricia de Vries and Willem Schinkel’s article likewise undertakes an analysis of the practical politics of algorithms. Their paper addresses how three artists construct face masks to resist and critique facial recognition systems, thereby exploring the practices and negotiations around algorithms of surveillance and control. In sum, this ideal type brings negotiations around the algorithm into focus, while the politics of “under the hood” recedes into the background, perhaps leading to the algorithm “itself” becoming obscured behind human action.

Hoods in relations: a relational perspective on algorithms
A middle ground between “under the hood” and “above the hood” might be offered by an analytical ideal type that highlights the intertwining of algorithmic and human action (cf. Callon and Law 1997). This is the strategy in Francis Lee et al.’s article. In it they criticize the current focus on fairness, bias, and oppression in algorithm studies as a step toward objectivism and realism. They instead propose paying attention to operations of folding as a tool to highlight the constant intertwining of, for instance, practices, algorithms, data, or simulations. With a similar relational perspective, Elizabeth Reddy et al.’s article highlights, through the algorithmic experiments of a comic book artist, how algorithms can be used to automate the work of assembling stories. Even though an algorithm might produce “the work”, ideas about authorship and accountability are still organized around human subjectivity and agency. The power of this perspective is that it allows a sensitivity to how agency is constituted in relations between humans and algorithms, and to how action is a negotiation between artefact and human. But it might also appear blind to power struggles and the oppression of weaker actors, as well as apolitical, ignoring the effects that algorithms could have on the world (cf. Star 1991; Winner 1993; Galis and Lee 2014).

Lives around hoods: the social effects of algorithms
The fourth ideal type widens the lens and homes in on the social effects of algorithms. This analytical position takes an interest in infrastructures of classification and how they affect society and the lives of people. This type of analysis highlights how people’s lives become shaped by algorithmic systems.2 In this vein, Helene Gad Ratner and Evelyn Ruppert’s article analyzes the transformation of data for statistical purposes to show how metadata and data cleaning as aesthetic practices are in fact classification struggles with normative effects. Ratner and Ruppert thus highlight the effects of classification work in relation to homeless and student populations—how people are performed with infrastructures.

The mobile mechanics: reflexivity and the study of algorithms
Finally, we wish to highlight “the mobile mechanics”. By being attentive to how we as social scientists relate to algorithms as well as to those who work with them, our inherent normativities and presumptions come to the fore. In the special theme, David Moats and Nick Seaver’s article challenges our thoughts about how computer scientists understand the work of social scientists—and vice versa. Moats and Seaver document their attempt to arrange an experiment with computer scientists to test ingrained boundaries: how can the quantitative tools of computer science be used for critical social analysis? As it turns out, the authors were instead confronted with their own normative assumptions. Jeremy Grosman and Tyler Reigeluth’s article offers a similarly reflexive approach. In their contribution, the notion of normativity in algorithmic systems is discussed from various analytical positions: technical, sociotechnical, and behavioral. The authors argue that algorithmic systems are inhabited by normative tensions between different kinds of normativities, and that a fruitful approach is to explore the tensions instead of the normativities themselves.

In conclusion
A point of departure for this special theme was that algorithms are intertwined with normativities at every step of their existence: in their construction and implementation, as well as in their use in practice. The articles in this special theme thus scrutinize ideas of normativities in and around algorithms: how different normativities are enacted with algorithms, as well as how different normativities are handled when humans tinker with algorithms. The array of theoretical approaches—anxieties, pluralities, recalibrations, folds, aesthetics, accountability—that implicate algorithms forces us to engage with the multiple normative orders that algorithms are entangled with. We wish you good reading and welcome comments and further discussions! Welcome to the special theme Algorithmic normativities.

References
Amoore L (2013) The Politics of Possibility: Risk and Security beyond Probability. Durham, NC: Duke University Press.

Beer D (2009) Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society 11: 985–1002.

Callon M and Law J (1997) After the individual in society: Lessons on collectivity from science, technology and society. Canadian Journal of Sociology 22(2): 165–182. doi: 10.2307/3341747.

Dourish P (2016) Algorithms and their others: Algorithmic culture in context. Big Data & Society 3(2): 1–11.

Galis V and Lee F (2014) A sociology of treason: The construction of weakness. Science, Technology, & Human Values 39: 154–179.

Gillespie T (2014) The relevance of algorithms. In: Gillespie T, Boczkowski PJ and Foot KA (eds) Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press, pp. 167–194. doi: 10.7551/mitpress/9780262525374.003.0009.

Kitchin R (2014) Thinking critically about and researching algorithms. SSRN Electronic Journal. doi: 10.2139/ssrn.2515786.

Neyland D (2016) Bearing account-able witness to the ethical algorithmic system. Science, Technology, & Human Values 41(1): 50–76. doi: 10.1177/0162243915598056.

Schüll N (2012) Addiction by Design: Machine Gambling in Las Vegas. Princeton, NJ: Princeton University Press.

Seaver N (2017) Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society 4: 1–12.

Star SL (1991) Power, technologies and the phenomenology of conventions: On being allergic to onions. In: Law J (ed.) A Sociology of Monsters: Essays on Power, Technology and Domination. London: Routledge, pp. 26–56.

Striphas T (2015) Algorithmic culture. European Journal of Cultural Studies 18: 395–412.

Totaro P and Ninno D (2014) The concept of algorithm as an interpretative key of modern rationality. Theory Culture & Society 31: 29–49.

Winner L (1993) Upon opening the black box and finding it empty: Social constructivism and the philosophy of technology. Science, Technology & Human Values 18: 362–378.

Woolgar S and Lezaun J (2013) The wrong bin bag: A turn to ontology in science and technology studies. Social Studies of Science 43: 321–340.

Ziewitz M (2017) A not quite random walk: Experimenting with the ethnomethods of the algorithm. Big Data & Society 4(2): 1–13. doi: 10.1177/2053951717738105.

Notes
1 See for instance, Amoore 2013; Beer 2009; Dourish 2016; Gillespie 2014; Kitchin 2014; Neyland 2016; Schüll 2012; Seaver 2017; Striphas 2015; Totaro and Ninno 2014; Ziewitz 2017. See also the critical algorithm studies list https://socialmediacollective.org/reading-lists/critical-algorithm-studies.
2 On torque see Bowker & Star (1999).

Sunday, 28 July 2019

The expansion of the health data ecosystem – Rethinking data ethics and governance

Special Theme Issue
Guest Editors: Tamar Sharon* and Federica Lucivero**


* Interdisciplinary Hub for Security, Privacy and Data Governance, Radboud University, NL
** Ethox and Wellcome Centre for Ethics and Humanities, University of Oxford, UK

As in other domains, digital data are taking on an ever more central role in health and medicine today. And as it has in other domains, datafication is contributing to a re-configuration of health and medicine, prompting its expansion to include new spaces, new practices, new techniques and new actors. Indeed, possibilities to quantify areas of life that have not traditionally been considered the remit of biomedicine – such as a person’s consumption patterns, her social media activity or her dietary habits – have contributed to a redefinition of almost any data as health-related data (Lucivero and Prainsack, 2015; Weber et al., 2014). Increasingly, these data are being generated outside the traditional spaces of medicine, as people go about their daily lives interacting with consumer mobile devices. Similarly, the technological tools needed to capture, store, analyze and manage the flow of these data, from wearables and smart phones to cloud platforms and machine learning, increasingly rely on infrastructure and know-how that lie beyond the scope of traditional medical systems and scientists, residing instead with data scientists and ICT specialists. Moreover, new stakeholders are cropping up in these quasi-medical yet still undomesticated territories. On one end of the spectrum, individuals who generate health data as they track and monitor their health are both solicited as research participants and are making demands on researchers to utilize their personal health data (HDE, 2014). On the other end of the spectrum, consumer technology corporations such as Apple and Google are reinventing themselves as obligatory passage points for data-intensive precision medicine (Sharon, 2016). And somewhere in between, not-for-profit organizations, such as Sage Bionetworks and OpenHumans.org, are positioning themselves as mediators in this ecosystem in formation, between the medical research community, individual and collective generators of data, and technology developers.

As proponents maintain, this expansion and decentralization of the health data ecosystem is promising: it may advance data-driven research and healthcare, and it may render research more inclusive (Shen, 2015; Topol, 2015). But, as critical scholars of science and technology have consistently shown, a fuller grasp of our technological present must always include the far-reaching, unexpected and sometimes deleterious social, political and cultural effects of discourses of scientific progress and technologically-enabled democratization and participation. In recent years, such critical scholarship has been particularly wary of the new power asymmetries that datafication contributes to. Rather than levelling power relations, critics observe, these are being redrawn along new digital divides based on data ownership or access, control over digital infrastructures, and new types of computational expertise, where those who generate data, especially citizens, patients, and consumers, are positioned on the losing side of the on-going extraction and scramble for the world’s data driven by state and corporate actors (Andrejevic, 2014; boyd and Crawford, 2012; Taylor, 2017; Zuboff, 2015).

In the context of the data economy, the dominant response to these growing power differentials has been to ensure that individual data subjects acquire more control over the data they produce – what Prainsack calls the “Individual Control” approach in her contribution to this special theme. Examples include the EU’s General Data Protection Regulation or initiatives that allow individuals to monetize their personal data (Lanier, 2013; www.commodify.us). In the context of data-driven medicine, this emphasis on increasing individual control over data has translated into attempts to develop better anonymization techniques and more fine-grained informed consent (Kaye et al., 2015), as well as the configuration of patients as the rightful “owners” of their own medical data (Kish and Topol, 2015).

However, scholars from different disciplines have begun questioning whether enhancing individual control over data is the most effective or desirable means of addressing the new power differentials of digital society. While some scholars emphasize the relational and social nature of persons and data (Taylor, 2012), others question the legal feasibility of individual ownership of data (Evans, 2016), and still others highlight the futility of monetization schemes as a means of redressing inequalities (Casilli, 2019). Most importantly, the emphasis on individual rights and values may result in a reframing of societal concerns as individual ones, all the while undermining the political power of collectives.

Each of the contributions that make up this special theme addresses the reconfiguration of existing relationships and the emergence of new power differentials that result from the expansion of the health data ecosystem. While they do this from different disciplinary perspectives, they all share the same starting point: the understanding that increased individual control of data subjects is insufficient for anticipating the far-reaching risks and preventing the societal, if not individual, harms associated with this expansion. In light of this, they argue for new governance frameworks, technological infrastructures and narratives that are predicated on the shared responsibility of multiple stakeholders and collective decision-making and control.

The commentaries by Brian Bot, Lara Mangravite and John Wilbanks, and by Bart Jacobs and Jean Popma both discuss the types of technical methods and arrangements that need to be developed to enable secure, responsible and equitable data sharing in the context of decentralized medical research. Both groups of authors are involved in the design and implementation of novel data management infrastructures.

A better understanding of the workings of data management infrastructures is discussed in a filmed interview with José van Dijck, on the recent book she has co-authored with Thomas Poell and Martijn de Waal, Platform Society: Public Values in a Connective World (2018). Van Dijck and Sharon discuss the importance of grasping how the material functioning of internet platforms contributes to shaping a new political and social reality.

In their commentary, Alessandro Blasimme, Effy Vayena and Ine Van Hoyweghen scrutinize how the proliferation of citizen generation of medical data, in initiatives like the American “All of Us” program, is unsettling the position of a less commonly studied stakeholder: the private insurance sector. Such initiatives, they argue, create a new “information asymmetry” between private insurers and those of their policy-holders who enroll in such research, which will likely make people more reluctant to donate personal health data for precision medicine research.

Tuukka Lehtiniemi and Minna Ruckenstein focus on data activism as a means of challenging the power asymmetries of datafied societies. Based on their engagement as social scientists with MyData, a data activism initiative originating in Finland, they identify and disentangle two parallel social imaginaries, a “technological” and a “socio-cultural imaginary”. They discuss the benefits and disadvantages of each and call for a greater role for the latter, while acknowledging its weaknesses.

The contributions by Barbara Prainsack and Linnet Taylor & Nadezhda Purtova both address the limitations of the framework of the commons – today’s preferred site of theoretical and practical resistance for those scholars and activists seeking to counter digital power asymmetries by foregrounding collective, rather than individual, control over data. While Prainsack argues that a more systematic discussion of processes of inclusion and exclusion in commons is required, Taylor and Purtova call for more attention to which stakeholders are affected by data practices. Both agree that in light of the multiple nature of data, the original commons framework cannot be easily transposed from physical to data commons.

In her article, Tamar Sharon calls for a closer examination of the different conceptualizations of the common good that are at work in one specific area of the expanding health data ecosystem, what she calls the “Googlization of health research”, or the recent entrance of large consumer tech corporations into the medical domain. Using the framework of justification analysis (Boltanski and Thévenot, 2006), she identifies a plurality of conceptualizations of the common good that different actors mobilize to justify collaborating within these new multi-stakeholder research projects.

We hope that this special theme offers a productive – albeit far from comprehensive – overview of arguments for and examples of infrastructure, governance and ethics that are collective-centric in addressing the challenges posed by the datafication and expansion of the health ecosystem.

References

Andrejevic M (2014) The Big Data Divide. International Journal of Communication 8: 1673–89.

Boltanski L, and Thévenot, L (2006) On Justification: Economies of Worth. Princeton: Princeton University Press.

boyd d and Crawford K (2012) Critical Questions for Big Data. Information Communication & Society 15(5): 662–79.

Casilli A (2019) En Attendant les Robots: Enquête sur le Travail du Clic. Paris: Seuil.

Evans B (2016) Barbarians at the Gate: Consumer-Driven Health Data Commons and the Transformation of Citizen Science. American Journal of Law & Medicine (4): 1–34.

Health Data Exploration Project (HDE) (2014) Personal Data for the Public Good: New Opportunities to Enrich Understanding of Individual and Population Health. Calit2, UC Irvine and UC San Diego. Available at: http://hdexplore.calit2.net/wp-content/uploads/2015/08/hdx_final_report_small.pdf

Kaye J, Whitley E, Lund D, Morrison M, Teare H and Melham K (2015) Dynamic consent: a patient interface for twenty-first century research networks. European Journal of Human Genetics 23(2): 141-146.

Kish L and Topol E (2015) Unpatients – why patients should own their medical data. Nature Biotechnology 33(9): 921-924.

Lanier J (2013) Who Owns the Future? London: Penguin Books.

Lucivero F and Prainsack B (2015) The lifestylisation of healthcare? “Consumer genomics” and mobile health as technologies for healthy lifestyle. Applied and Translational Genomics 4. doi: 10.1016/j.atg.2015.02.001.

Sharon T (2016) The Googlization of health research: from disruptive innovation to disruptive ethics. Personalized Medicine. DOI: 10.2217/pme-2016-0057.

Shen H (2015) Smartphones set to boost large-scale health studies. Nature. DOI:10.1038/nature.2015.17083.

Taylor L (2017) What is data justice? Big Data & Society 4(2): 1-14.

Taylor M (2012) Genetic Data and the Law: A Critical Perspective on Privacy Protection. Cambridge: Cambridge University Press.

Topol E (2015) The Patient Will See You Now: The Future of Medicine Is in Your Hands. New York: Basic Books.

van Dijck J, Poell T and de Waal M (2018) The Platform Society: Public Values in a Connective World. Oxford: Oxford University Press.

Weber GM, Mandl KD and Kohane IS (2014) Finding the missing link for big biomedical data. JAMA 311(24):2479–2480. doi:10.1001/jama.2014.4228

Zuboff S (2015) Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology 30(1): 75–89.

Monday, 8 July 2019

Summer break

The Big Data and Society Editorial Team will be on summer break from July 15th until August 15th. Please expect delays in the processing and reviewing of your submission during that time.

Many thanks for your understanding.

Friday, 14 June 2019

Call for Special Theme Proposals for Big Data & Society

The SAGE open access journal Big Data & Society (BD&S) is soliciting proposals for a Special Theme to be published in late 2020 or early 2021. BD&S is a peer-reviewed, interdisciplinary, scholarly journal that publishes research about the emerging field of Big Data practices and how they are reconfiguring academic, social, industry, business and government relations, expertise, methods, concepts and knowledge. BD&S moves beyond usual notions of Big Data and treats it as an emerging field of practices that is not defined by but generative of (sometimes) novel data qualities such as high volume and granularity and complex analytics such as data linking and mining. It thus attends to digital content generated through online and offline practices in social, commercial, scientific, and government domains. This includes, for instance, content generated on the Internet through social media and search engines but also that which is generated in closed networks (commercial or government transactions) and open networks such as digital archives, open government and crowd-sourced data. Critically, rather than settling on a definition, the Journal makes this an object of interdisciplinary inquiries and debates explored through studies of a variety of topics and themes.

Special Themes can consist of a combination of Original Research Articles (8000 words; maximum 6), Commentaries (3000 words; maximum 4) and one Editorial (3000 words). Article Processing Charges will be waived for all Special Theme content. All submissions will go through the Journal’s standard peer review process.

Past special themes for the journal have included: Knowledge Production, Algorithms in Culture, Data Associations in Global Law and Policy, The Cloud, the Crowd, and the City, Veillance and Transparency, Environmental Data, Spatial Big Data, Critical Data Studies, Social Media & Society, Assumptions of Sociality, Health Data Ecosystems and Data & Agency. See http://journals.sagepub.com/page/bds/collections/index to access these special themes.

Format of Special Theme Proposals
Researchers interested in proposing a Special Theme should submit an outline with the following information.

- An overview of the proposed theme, how it relates to existing research and the aims and scope of the Journal, and the ways it seeks to expand critical scholarly research on Big Data.

- A list of titles, abstracts, authors and brief biographies. For each, the type of submission (ORA, Commentary) should also be indicated. If the proposal is the result of a workshop or conference that should also be indicated.

- Short Bios of the Guest Editors including affiliations and previous work in the field of Big Data studies. Links to homepages, Google Scholar profiles or CVs are welcome, although we don’t require CV submissions.

- A proposed timing for submission to Manuscript Central. This should be in line with the timeline outlined below.

Information on the types of submissions published by the Journal and other guidelines is available at https://us.sagepub.com/en-us/nam/journal/big-data-society#submission-guidelines.

Timeline for Proposals
Please submit proposals by September 1, 2019 to the Managing Editor of the Journal, Prof. Matthew Zook, at zook@uky.edu. The Editorial Team of BD&S will review proposals and make a decision by November 2019. Manuscripts should be submitted to the journal (via Manuscript Central) by or before March 2020. For further information or to discuss potential themes, please contact Matthew Zook at zook@uky.edu.

Saturday, 25 May 2019

Video abstract: Experiments with a data-public

Anders Koed Madsen and Anders Kristian Munk discuss their paper "Experiments with a data-public: Moving digital methods into critical proximity with political practice" in Big Data & Society 6(1), https://doi.org/10.1177/2053951718825357. First Published February 15, 2019.

Video Abstract


Text Abstract
Making publics visible through digital traces has recently generated interest among practitioners of public engagement and scholars within the field of digital methods. This paper presents an experiment in moving such methods into critical proximity with political practice and discusses how digital visualizations of topical debates become appropriated by actors and hardwired into existing ecologies of publics and politics. Through an experiment in rendering a specific data-public visible, it shows how the interplay between diverse conceptions of the public, as well as the specific platforms and data invoked, resulted in a situated affordance-space that allowed specific renderings to take shape while disadvantaging others. Furthermore, it argues that several accepted tropes in the literature on digital methods ended up being problematic guidelines in this space. Among these is the prescription to show heterogeneity by pushing back at established media logics.

Keywords: Digital methods, public engagement, pragmatism, controversy-mapping, critical proximity, multiplicity