Tuesday, 28 October 2025
Guest Blog: Healthcare big data projects are producing ambivalence among members of the public
by Shaul A. Duke, Peter Sandøe, Thomas Andras Matthiessen Skelly, and Sune Holm
Tuesday, 21 October 2025
Guest Blog: AI is Mental – Evaluating the Role of Chatbots in Mental Health Support
by Briana Vecchione and Ranjit Singh
In our research on mental health and chatbots, we have increasingly noticed that people aren't just using chatbots for mental health support; they're actively comparing them to therapy.
Across platforms like ChatGPT and Replika, people turn to AI chatbots for emotional support once reserved for friends, therapists, or journals. These tools are always available and never judge—features that many of our research participants value. But what happens when care shifts from human relationships to algorithmic systems?
Chatbots are appealing because they ease the burden of emotional accounting, the work of naming what we feel and why. They respond quickly, “remember” past conversations, and provide validation. However, this sense of support can quickly feel superficial when the bot forgets, misremembers, or can’t go deeper. A good therapist doesn’t just affirm; they push, challenge, and guide.
A chatbot does not read body language, hold space, or pick up on subtleties such as silences. These limitations can turn emotional reliance into harm. The story of Replika, an app where many users built romantic bonds with AI companions, shows how sudden changes in chatbot behavior can trigger grief and distress.
The impact extends beyond users. As chatbots scale, mental health professionals may be pushed into oversight roles, reviewing transcripts or intervening when AI fails, instead of building deep, ongoing therapeutic relationships. This shift risks eroding the relational core of therapy and devaluing practitioners' expertise.
Meanwhile, key ethical and regulatory questions remain unanswered. Who owns your data when you confide in a chatbot? Who's accountable when harm occurs? Framing AI companions as inevitable risks normalizing care that is scalable but shallow.
The perennial availability of AI is rapidly reshaping our expectations of care. People now seek support that is always on, efficient, and emotionally responsive, yet such support is often limited. Without proper discernment, we risk mistaking availability for presence, agreeability for empathy, and validation for understanding. Care is not mere output. It lives in the slow, difficult work of attunement, which no machine can truly provide.
Thursday, 25 September 2025
Guest Blog: Interviewing AI: Using Qualitative Methods to Explore and Capture Machines’ Characteristics and Behaviors
by Mohammad Hossein Jarrahi
Jarrahi, M. H. (2025). Interviewing AI: Using qualitative methods to explore and capture machines' characteristics and behaviors. Big Data & Society, 12(3), 20539517251381697.
AI systems are now woven into the everyday lives of millions, yet their behavior often surprises even their own creators. Traditional methods of researching technology can certainly tell part of the story, especially how different groups of users make sense of these tools, but they may miss how AI systems actually act and behave in unpredictable ways. I argue that we need new methods of data collection and analysis; this paper grew out of that gap. I asked a simple question: what if I studied AI the way social scientists study people, by interviewing it?
I introduce "interviewing AI," a qualitative toolkit for making sense of machine behavior. It begins with open exploration to learn a system's quirks. It then moves to structured probing that varies prompts, stages scenarios, pushes at boundaries, and poses counterfactual "what if" questions to surface hidden patterns, breakdowns, and traces of reasoning. To widen the lens, I offer two designs: Temporal Interaction Analysis tracks the same system over time to capture shifts after updates or sustained use, and Comparative Synchronic Analysis compares multiple systems on the same tasks to reveal differences that matter in practice.
I pair these methods with qualitative analysis techniques such as thematic coding and critical discourse analysis to identify patterns and uncover embedded biases. Throughout, I argue for transparency and reflexivity, and I caution against reading too much human intent into machine outputs (i.e., anthropomorphization). I also make a case for treating prompts and AI responses as a single unit of analysis, as humans and AI co-produce the exchange.
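To make the workflow concrete, here is a minimal sketch of what structured probing might look like in code. It is an illustration only: the query_model stub, the persona and question lists, and the logging format are hypothetical placeholders, not material from the paper.

```python
import itertools
import json
from datetime import datetime, timezone

def query_model(prompt: str) -> str:
    """Placeholder: connect this to the AI system under study."""
    return f"[response to: {prompt}]"

# Structured probing: vary the prompt along a dimension (here, persona)
# and include a counterfactual "what if" variant of the base question.
base_questions = [
    "How should I plan my week to manage stress?",
    "What if I had no free time at all -- how should I plan my week?",  # counterfactual
]
personas = ["a student", "a retiree", "a single parent"]

records = []
for question, persona in itertools.product(base_questions, personas):
    prompt = f"I am {persona}. {question}"
    response = query_model(prompt)
    records.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),  # supports temporal analysis
        "system": "system-under-study-v1",                    # supports cross-system comparison
        # Prompt and response are stored together as ONE unit of analysis,
        # since the exchange is co-produced by researcher and machine.
        "prompt": prompt,
        "response": response,
        "codes": [],  # to be filled in later during thematic coding
    })

with open("interview_log.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

Logging each exchange with a timestamp and a system identifier keeps the same records usable for both Temporal Interaction Analysis (the same system over time) and Comparative Synchronic Analysis (different systems on the same tasks).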
Readers can expect a practical, research-ready holistic qualitative framework that complements other research approaches in human-centered studies of AI, helping designers, governors, and scholars document where systems help, where they fail, and why, and turn messy interactions into rigorous insights.
Thursday, 12 June 2025
Guest Blog: Problematising AI for Climate Action as a Global Practice
by Abdullah Hasan Safir and Sanjay Sharma
Safir, A. H., & Sharma, S. (2025). Networks, narratives and neocoloniality of AI for Climate Action. Big Data & Society, 12(2). https://doi.org/10.1177/20539517251334095
Our paper problematises AI for Climate Action by challenging the assumptions and normative beliefs around AI's promise to 'solve' the global climate problem.
We ask two fundamental questions to investigate the geopolitical power asymmetries around this practice: first, who is doing AI for Climate Action? And second, what discourses are they producing?
Our findings reveal that tech corporations from the Global North are leading the use of AI for Climate Action, with global non-profits and civil society organisations as their facilitators. They deploy discourses such as 'leverage', 'innovate', and 'responsibility' to justify their actions and their capitalistic, profit-driven goals, disguised as benevolent efforts to address climate problems.
Countries in the Global South can face challenges in accessing the digital infrastructures needed for AI innovation for climate action, owing to high entry costs and Global North-centric corporate interests. In addition, the high risks associated with AI applications could cause these countries more harm than benefit.
Understanding the dominance of the Global North and its exploitative narratives is important. Many scholars argue that the climate problem is a colonial problem and show that the Global South is disproportionately affected by climate vulnerabilities. Drawing a parallel, we frame AI for Climate Action as a neocolonial practice. If AI technologies are not designed in response to Global South contexts, and if control of these applications remains in the Global North, this will further exacerbate global climate injustice.
Our work also offers methodological innovation. It combines a digital method, 'issue mapping' (inspired by Actor-Network Theory in STS), with Critical Discourse Analysis (CDA). Critical AI scholarship will increasingly require interdisciplinary approaches like the one we have applied to address the complex crises induced by AI.
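For readers unfamiliar with issue mapping, the following toy sketch shows its basic network-building move. The actor names and ties are invented purely for illustration; they are not the study's data.

```python
# Toy sketch of issue mapping as a network step (invented actors and ties).
import networkx as nx

G = nx.Graph()
# Ties in an "AI for Climate Action" corpus could include partnerships,
# co-authored reports, or shared initiatives between actors.
G.add_edges_from([
    ("TechCorpA", "NonprofitX"),
    ("TechCorpA", "NonprofitY"),
    ("TechCorpB", "NonprofitX"),
    ("NonprofitX", "CivilSocietyZ"),
])

# Degree centrality gives a first cut at which actors sit at the centre
# of the issue network; their statements then become the corpus for
# Critical Discourse Analysis.
for actor, score in sorted(nx.degree_centrality(G).items(),
                           key=lambda item: -item[1]):
    print(f"{actor}: {score:.2f}")
```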
Monday, 2 June 2025
Call for Special Theme Proposals (Due September 15, 2025)
Call for Special Theme Proposals for Big Data & Society (Due September 15, 2025)
The SAGE open access journal Big Data & Society (BD&S) is soliciting proposals for Special Themes to be published in 2026. BD&S is a peer-reviewed, interdisciplinary, scholarly journal that publishes interdisciplinary social science research about the emerging field of Big Data practices and how they are reconfiguring relations, expertise, methods, concepts and knowledge across academic, social, cultural, political, and economic realms. BD&S moves beyond usual notions of Big Data to engage with an emerging field of practices that is not defined by but generative of (sometimes) novel data qualities such as extensiveness, granularity, automation, and complex analytics including data linking and mining. The journal attends to digital content generated through online and offline practices, including social media, search engines, Internet of Things devices, and digital infrastructures across closed and open networks, from commercial and government transactions to digital archives, open government and crowd-sourced data. Rather than settling on a definition of Big Data, the journal makes this an area of interdisciplinary inquiry and debate explored through multiple disciplines and themes.
Special Themes can consist of a combination of Original Research Articles (6 maximum, up to 10,000 words each), Commentaries (4 maximum, 3,000 words each) and one Editorial Introduction (3,000 words). All Special Theme content will have the Article Processing Charges waived. All submissions will go through the journal’s standard peer review process.
While open to submissions on any theme related to Big Data, we particularly welcome proposals on topics that the journal has not previously published extensively. You can find the full list of special themes published by BD&S at http://journals.sagepub.com/
Format of Special Theme Proposals
Researchers interested in proposing a Special Theme should submit an outline with the following information.
- An overview of the proposed theme, including how it relates to existing research and the aims and scope of the journal, and the ways it seeks to expand critical scholarly research on Big Data.
- A list of proposed contributions, including titles, abstracts, authors, and brief author biographies. For each contribution, the type of submission (Original Research Article or Commentary) should be indicated. If the proposal results from a workshop or conference, that should also be noted.
- Short bios of the Guest Editors, including affiliations and previous work in the field of Big Data studies. Links to homepages, Google Scholar profiles, or CVs are welcome, although we don’t require CV submissions.
- Proposed timing for submission to Manuscript Central, in line with the timeline outlined below.
Information on the types of submissions published by the journal and other guidelines is available at https://journals.sagepub.com/
Timeline for Proposals
Submit proposals by September 15, 2025, via this online form: https://forms.gle/iguDkz4c3SXVaPtQ6
(Note: you must have a Google account to access this form.) Do not send proposals via email; they will not be reviewed. The Editorial Team of BD&S will review proposals and make decisions by late October 2025. For selected proposals, manuscripts must be submitted to the journal (via Manuscript Central) by February 28, 2026.
For further information or to discuss potential themes, please contact Dr. Matthew Zook at zook@uky.edu.
Monday, 26 May 2025
Guest Blog: What Happens When Your Voice Is Algorithmically Silenced?
by Reham Hosny and Mohamed A. Nasef
Hosny, R., & Nasef, M. A. (2025). Lexical algorithmic resistance: Tactics of deceiving Arabic content moderation algorithms on Facebook. Big Data & Society, 12(2). https://doi.org/10.1177/20539517251318277
Our research reveals a fascinating and urgent phenomenon we call lexical algorithmic resistance: a form of digital micro-resistance where users alter their language to bypass automated censorship. From breaking words into syllables to writing in unpointed Arabic script (omitting the dots that distinguish letters), these subtle acts defy AI's watchful eye while preserving activist messages.
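To illustrate the mechanics of these tactics, and only the mechanics, here is a simplified sketch. The letter map covers just a handful of characters and is a toy example, not the study's dataset or any platform's actual filter.

```python
# Simplified sketch of two lexical-resistance tactics (illustrative only).

# Tactic 1: break a word apart so exact-match keyword filters miss it.
def split_word(word: str, sep: str = " ") -> str:
    # Crude character-level split; users typically split by syllable.
    return sep.join(word)

# Tactic 2: replace dotted Arabic letters with visually similar
# unpointed (dotless) forms that differ at the codepoint level.
DOTLESS = str.maketrans({
    "ب": "\u066E",  # beh -> dotless beh
    "ن": "\u06BA",  # noon -> dotless noon
    "ف": "\u06A1",  # feh -> dotless feh
    "ق": "\u066F",  # qaf -> dotless qaf
    "ج": "ح",       # jeem -> hah (dots stripped)
    "خ": "ح",       # khah -> hah
    "ز": "ر",       # zain -> reh
    "ذ": "د",       # thal -> dal
    "ي": "ى",       # yeh -> alef maksura (dotless yeh)
})

def unpoint(text: str) -> str:
    return text.translate(DOTLESS)

word = "حرية"  # "freedom"
print(split_word(word))  # the word broken into separated letters
print(unpoint(word))     # dots removed where the map applies
```

Both outputs remain legible to a human reader accustomed to context, while no longer matching the original keyword string that a moderation filter is looking for.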
By scraping Facebook data, we mapped out the most common resistance keywords and strategies across Arab regions. We also examined the sociopolitical contexts that shape these tactics. The result is a deeper understanding of how digital repression operates—and how ordinary users resist it with extraordinary creativity.
In a world where platforms wield imperial control over political discourse, language becomes a battleground. This study isn’t just about censorship—it’s about reclaiming agency. It highlights how technology, while often oppressive, can also be bent toward resistance.
Ultimately, we hope readers will come away with a renewed appreciation for the quiet ingenuity of digital activism—and the importance of safeguarding voices that fight to be heard.
Wednesday, 14 May 2025
Guest Blog: Against interoperability? Terrains of technopolitical contestation and the remaking of EU border infrastructure
by Paul Trauttmansdorff
Trauttmansdorff, P. (2025). Against interoperability? Terrains of technopolitical contestation and the remaking of EU border infrastructure. Big Data & Society, 12(2). https://doi.org/10.1177/20539517241307909
Europe’s borders have undergone profound changes in recent decades, one major driver being the ongoing development of large-scale databases for migration and border control. Critical scholarship has examined the futures, promises, and expectations associated with the increased collection of data on travelers in the European border regime. One of the most recent and probably most transformative initiatives in the European Union is interoperability, which aims to interconnect different databases containing the personal data of migrants, asylum seekers, and convicted criminals. These databases should be joined up through a shared repository where biometric and personal data are cross-referenced in order to detect so-called “identity fraud,” such as duplicate identities.
Like many researchers studying the intersections of data, borders, and migration, I have so far focused on critically examining initiatives such as interoperability, exploring the imaginaries of (in)security behind them, and questioning their solutionist approach of technically "fixing" deeply social and political issues of cross-border mobility. However, less attention has been paid to how these digital transformation initiatives are routinely contested and opposed. Contestation and resistance are naturally linked to the agency and recalcitrance of migration, which authorities constantly struggle to govern and "control."

In my article, I point to various other sites of contestation in the European border regime, using the example of the creation of an interoperable border infrastructure. This is why I introduce the concept of "terrains of technopolitical contestation." I identify three such terrains, the parliamentary, the consultative, and the civil society terrain, where contentious claims, dissent, and opposition have emerged against the making of interoperability. The term "technopolitical" denotes that, on these terrains, claims and debates primarily revolve around defining the boundaries between the political and the technical.

I believe this framework can help us understand how crucial boundary-making processes shape the framing of claims and issues while also constraining the space for critical intervention and political critique in the current technopolitics of the EU border regime. I hope that future research can further explore how various terrains of contestation evolve over time, shape the digital governance of migration, and demonstrate further possibilities for contestation within an increasingly data-driven border regime.