Friday, 29 October 2021

Editorial Board Update

We are very happy to announce the renewal of the Editorial Board for Big Data and Society. We take this step every three years to ensure that we refresh and expand the range of researchers engaged in the journal.

We are looking forward to the engaging and productive interactions that are to come.

We also wish to thank everyone who has previously served on the Editorial Board for helping make the journal the success that it is today. 

Thursday, 21 October 2021

A Guest Speaker Series on the COVID-19 infodemic, hosted by the Social Media Lab at Ryerson University

In the new Big Data & Society Special Theme “Studying the COVID-19 Infodemic at Scale”, we treat the rapid circulation of both accurate information and misinformation related to COVID-19 and other health issues as an infodemic. Our editors, Anatoliy Gruzd, Manlio De Domenico, Pier Luigi Sacco, and Sylvie Briand, offer cutting-edge approaches to studying the effects, strategies and challenges associated with the emergent infodemic. They have sought to open up a space for investigating and discussing this phenomenon by selecting six research articles and four commentaries from six countries (see more details here).

Our editors have extended the discussion of the COVID-19 infodemic by initiating a Guest Speaker Series with the Social Media Lab at the Ted Rogers School of Management at Ryerson University, running from Oct 7 to Nov 19. The talks are:

  • Oct 7 (Thu), 2 pm. Kai-Cheng Yang, Observatory on Social Media, Indiana University. The COVID-19 Infodemic: Twitter versus Facebook
  • Oct 21 (Thu), 2 pm. Mark Green, Department of Geography & Planning, University of Liverpool. Identifying how COVID-19-related misinformation reacts to the announcement of the UK national lockdown: An interrupted time-series study
  • Oct 28 (Thu), 2 pm. Jon Roozenbeek, Department of Psychology, University of Cambridge. Towards psychological herd immunity: Cross-cultural evidence for two prebunking interventions against COVID-19 misinformation
  • Nov 4 (Thu), 2 pm. Kacper Gradon, Department of Security and Crime Science, University College London. Countering misinformation: A multidisciplinary approach
  • Nov 11 (Thu), 2 pm. Michael Robert Haupt, Department of Cognitive Science, University of California San Diego. Identifying and characterizing scientific authority-related misinformation discourse about hydroxychloroquine on Twitter using unsupervised machine learning
  • Nov 19 (Fri), 2 pm. Paola Pascual-Ferrá, Department of Communication, Loyola University Maryland. Toxicity and verbal aggression on social media: Polarized discourse on wearing face masks during the COVID-19 pandemic

If you wish to attend this event in real time, please register here in order to receive a Google Meet access link for each talk. All talks are recorded and made available (after the real-time event) via the same webpage.

Everyone is welcome to join the Guest Talk Series. The guest talks will focus on two questions:

1) What are the benefits and challenges of detecting and combating the spread of COVID-19 misinformation on social media?

2) How can the digital method of data-tracing serve as a useful methodology for examining and mitigating the risks that an infodemic poses to individuals and society?

Thursday, 14 October 2021

Reflections on big data as a white method

by Kaarina Nikunen

My new article, 'Ghosts of white methods? The challenges of Big Data research in exploring racism in digital context', is an outcome of the challenges I met while using big data methods to study the complex phenomenon of racism in digital culture. The article reflects on the research process and attempts to develop more nuanced approaches to studying inequalities in digital culture.

The starting point is the concept of white methods, introduced by Bonilla-Silva and Zuberi (2008) in critical race studies. White methods refer to the dominance of whiteness in the social sciences and the tendency to fortify racial inequalities through research design. By reflecting on our research process, the article explores ways to challenge white logic and white methods in big data research.

Three approaches are introduced: 1) using big data analytics to identify the dynamics of how racist discourse is produced online; 2) complementing and going beyond big data via qualitative approaches; and 3) using big data to question or reform the infrastructures that foster hate and racism – in other words, questioning the data-based logics of racism and discrimination.

The first approach points to the need to open up data for reflexive interrogation of categories and possible built-in biases. It discusses what kinds of racial biases arise when research relies only on mainstream media sources or the usual social media databases. The second approach shows how knowledge from experience can offer valuable insight into the structures, sites and experiences of racial injustice that often remain invisible in big data studies. However, to avoid re-categorization, knowledge from experience requires an anti-essentialist approach that is sensitive to complexities within and across experiences.

Finally, the article points out the significance of the institutional context for countering white methods. Understanding the epistemic value of diversity at the institutional level is crucial. Research on a complex phenomenon such as racism requires time and different forms of collaboration that may not be easy in academic institutional contexts that emphasize speedy publishing.

No single method can answer or solve all our problems. Multiple approaches are needed to account for the different articulations of racial power, and these need to be explored in our own research processes as well. By revealing the challenges met in the course of the research, I hope the article might help others who are struggling with their methodological choices.

Racial Formations as Data Formations

by Thao Phan and Scott Wark

In the commentary ‘Racial Formations as Data Formations,’ we outline what we see as the challenges and implications of studying big data’s effect on the category of race itself. The commentary is part of Big Data & Society's special theme, Data, Power and Racial Formations.

This essay is framed around a proposition made by Paul Gilroy in some of his earlier work on the link between race and technology. In the 1990s, Gilroy made the provocative argument that new scientific and medical techniques for imaging bodies ought to allow us to renounce the concept of race, because at the micro-scales imaged by, for instance, MRI scans, all bodies are the same.

Without necessarily endorsing this claim – old forms of racism have an unfortunate habit of persisting – we use Gilroy’s provocation to suggest that scholars need to pay more attention to the link between racialisation and mediation. For Gilroy, the category of race itself has to be understood as an effect of the techniques we use to represent it. 

Big data provides new means of mediating race visually – for example, through techniques like facial recognition. But it also operates using a post-visual logic. Large-scale data collection is post-visual because it happens at scales and through processes that we cannot see, and because it produces effects that are not themselves visible. It occurs in what we call a ‘post-visual regime.’

Whereas much of the – excellent and inspiring – existing research on data emphasises how computational systems reproduce historical inequalities by using biased data, we argue that scholars also need to account for their post-visual effects. Algorithmic techniques transform the inputs they work on. With enough data, for instance, people can be racialised using ‘proxy’ indicators of race, like taste, interests, or location. 

Large-scale data processing doesn’t just reproduce existing racial formations, to use Michael Omi and Howard Winant’s hugely influential term. It produces new ones. This is what we call racial formations as data formations.

This is our essay’s provocation: recapitulated as data formations, racial formations reconceive us in new ways that we cannot see and that we might struggle to access. By transforming the category of race itself, these formations also change the terms in which it might be contested.

Tuesday, 31 August 2021

Algorithms as organizational figuration: The sociotechnical arrangements of a fintech start-up

To further the understanding of sociotechnical arrangements that involve algorithms, our journal paper develops the conceptualization of algorithms as organizational figuration, illustrating how algorithms and organizations shape each other in the case of a fintech start-up.

Organizations increasingly employ machine learning algorithms, whether as part of their original business model or in efforts to optimize and reform existing practices. For this study, we followed a Danish fintech start-up through its process of defining itself as a fintech, developing an algorithmic tool for screening investment portfolios, and launching its core product of sustainable investment opportunities for pensions savings. We found that the company follows a general trajectory of shaping the algorithm, then being shaped by it, but that this process involves a series of more specific sociotechnical arrangements, some of which develop consecutively while others co-exist for shorter or longer periods of time.

To gain a deeper understanding of this process, we turned to the concept of figuration, as developed by Couldry and Hepp (2017) and Braidotti (2011), respectively. Beginning from their shared understanding of figurations as relational arrangements of social and technical elements, which are to be ‘mapped’ rather than interpreted or otherwise explained, we follow Braidotti’s notion that figurations are living maps of organizations that can be identified as ‘conceptual personae’ or performative images of socio-political becoming. Returning to our case, we can now detail the figuration of our case organization as variously foregrounding the agency of human actors and of the algorithm, and show how these agencies are predicated upon each other as well as on other aspects of the sociotechnical arrangement. The algorithm, initially developed to fulfil the organization’s purpose of sustainable investment, increasingly shapes the organization, shifting its purpose from normative potentiality to pragmatic realization. As one of the human members of the organization observes:

I was attracted by it, like cool that it was a pension that would be good for a lot of things. But now it is like it has become more specialized in the direction of climate because it is quantifiable and measurable and easy to report. But the more parameters that have to add up, like human rights and women in management, the more difficult it will be to create a portfolio that still gives a good return. Because it should not cost our users any money to protect something good; that is our entire selling proposition.  

Here, the agency of the algorithm interacts with economic imperatives of profit-maximization (for the organization as well as its users) to figure the organization as a conduit for turning input (investment portfolios) into output (optimal profitability, given set requirements of sustainability). 

Empirically, then, our case is one of an organization that becomes figured more and more as its algorithm. Conceptually, however, this implies understanding algorithms in relation to the organizations they figure – and by which they are figured. An algorithm may be a procedure for turning input into output in accordance with set rules, but that is never all it is. Or rather, the ‘black box’ of that procedure is shaped from the outside, as it were, through the relations between algorithmic and other organizational elements (be they social or technical). Understanding algorithms as organizational figuration, in sum, means providing living maps of their relationality, multiplicity and processuality, identifying the ways in which they give shape to and take shape from processes of organizing, establishing spatial multiplicity within a temporal trajectory.

Monday, 30 August 2021

Dashboard stories: How narratives told by predictive analytics reconfigure roles, risk and sociality in education

Juliane Jarke and Felicitas Macgilchrist introduce a new paper, "Dashboard stories: How narratives told by predictive analytics reconfigure roles, risk and sociality in education", out in Big Data & Society, doi:10.1177/20539517211025561. First published June 29, 2021.

Video abstract


In this paper, we explore how the development and affordances of predictive analytics may impact how teachers and other educational actors think about and teach students and, more broadly, how society understands education. Our particular focus is on the data dashboards of learning support systems which are based on Machine Learning (ML). While previous research has focused on how these systems produce credible knowledge, we explore here how they also produce compelling, persuasive and convincing narratives. Our main argument is that particular kinds of stories are written by predictive analytics and written into their data dashboards. Based on a case study of a leading predictive analytics system, we explore how data dashboards imply causality between the ‘facts’ they are visualising. To do so, we analyse the stories they tell according to their spatial and temporal dimensions, characters and events, sequentiality as well as tellability. In the stories we identify, teachers are managers, students are at greater or lesser risk, and students’ sociality is reduced to machine-readable interactions. Overall, only data marked as individual behaviours becomes relevant to the system, rendering structural inequalities invisible. Reflecting on the implications of these systems, we suggest ways in which the uptake of these systems can interrupt such stories and reshape them in other directions.

Keywords: predictive analytics, learning analytics, education, data visualisation, datafication, storytelling, dashboards, machine learning, artificial intelligence

Monday, 12 July 2021

Big Data & Society summer break

The journal Big Data & Society will be on summer break from July 17th to August 28th. Please expect delays in the processing and reviewing of your submission, and in related correspondence, during that time.

Have a great summer!