Monday, 9 February 2026
Guest Blog: Whose Voice Counts? The Role of Large Language Models in Public Commenting
Saturday, 17 January 2026
Guest Blog: When is black-box AI justifiable to use in healthcare?
By Sinead Prince and Julian Savulescu
Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore
Prince, S., & Savulescu, J. (2025). When is black-box AI justifiable to use in healthcare? Big Data & Society, 12(4). https://doi.org/10.1177/20539517251386037 (Original work published 2025)
This research explores a pressing dilemma in healthcare today: when is it justifiable to use black-box AI, which lacks transparent explanations for its decisions, in medical practice? The topic drew our interest due to the growing tension between the potential benefits these AI systems offer—such as more accurate diagnostics—and the strict regulations in regions like Australia and Europe that prohibit their use without adequate transparency.
In AI ethics literature, many argue that AI in healthcare must be explainable. However, there are crucial situations in which accuracy and reliability become more important than transparency. For example, in urgent or serious cases, faster and more precise AI-driven decisions can save lives even if the reasoning is inscrutable. Explainability cannot be the only ethical requirement—context, accuracy, bias, seriousness, urgency, and human involvement also play essential roles in determining when to use black-box AI in medical decision making.
Our main argument is that automatic bans on black-box AI overlook its practical value. Rather than demanding explainability as a universal rule, we must consider the context and balance multiple factors. We advocate for a pluralistic framework: black-box AI is ethically permissible if it is highly accurate and reliable, and if other conditions—such as patient consent and regulatory oversight—are met. Accuracy alone isn’t enough, but neither is explainability; justification depends on the specifics of each decision.
We should therefore adopt a flexible approach to AI ethics in healthcare. Instead of one-size-fits-all standards, ethical use should depend on balancing factors like patient needs, urgency, benefits, and risks. Black-box AI, deployed responsibly, can be justified in healthcare settings—even without perfect transparency.
Tuesday, 23 December 2025
Guest Blog: Cross-Cultural Challenges in Generative AI: Addressing Homophobia in Diverse Sociocultural Contexts
by Lilla Vicsek, Mike Zajko, Anna Vancsó, Judit Takacs, and Szabolcs Annus
Vicsek, L., Zajko, M., Vancsó, A., Takacs, J., & Annus, S. (2025). Cross-cultural challenges in generative AI: Addressing homophobia in diverse sociocultural contexts. Big Data & Society, 12(4). https://doi.org/10.1177/20539517251396069 (Original work published 2025)
Calls for greater cultural sensitivity in artificial intelligence are increasingly shaping debates about how generative AI should interact with users across the world. Yet, when these technologies adapt too closely to local values, tensions can emerge between cultural sensitivity and universal human rights. Our research set out to explore what this tension looks like in practice.
We analyzed how two major generative AI systems—ChatGPT 3.5 and Google Bard—respond to homophobic statements. To test whether contextual information would affect their replies, we varied the prompts by including details about the user’s religion or country.
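As a purely illustrative sketch of how this kind of prompt variation can be set up programmatically (this is not the study's actual material or code; the placeholder statement and the lists of religions and countries below are hypothetical), one might generate the contextual variants like this:

```python
# Illustrative sketch only: hypothetical attributes and a placeholder statement,
# not the prompts or pipeline used in the study.
from itertools import product

STATEMENT = "[homophobic statement under study]"  # placeholder, not the actual wording

RELIGIONS = [None, "Christian", "Muslim", "Buddhist"]   # hypothetical list
COUNTRIES = [None, "Hungary", "Canada", "Nigeria"]      # hypothetical list

def build_prompt(religion, country):
    """Prefix the statement with optional details about the user's background."""
    details = []
    if religion:
        details.append(f"I am {religion}.")
    if country:
        details.append(f"I live in {country}.")
    return " ".join(details + [STATEMENT])

# One prompt per combination of background details, including a no-context baseline.
prompts = [build_prompt(r, c) for r, c in product(RELIGIONS, COUNTRIES)]
print(len(prompts))   # 16 variants that could be sent to each model
print(prompts[0])     # the baseline prompt with no user context
```

Each variant would then be submitted to both systems so that their replies can be compared across user contexts.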
The results revealed that ChatGPT’s responses often reflected cultural relativism, emphasizing that different cultures hold different viewpoints on this topic and that all perspectives should be respected. Bard’s responses, in contrast, tended to foreground human rights, providing stronger and more explicit support for LGBTQ+ people and equality. Both systems varied their responses significantly depending on the background information supplied, suggesting that AI systems may adjust the degree and form of support they express for LGBTQ+ people and issues according to what they are told about a user.
By uncovering this dynamic, our study highlights an important ethical dilemma. While cultural awareness is important, AI’s efforts to align with diverse values must not come at the cost of endorsing discriminatory or exclusionary beliefs. We argue that generative AI design and governance should be firmly grounded in universal human rights principles to ensure protection for marginalized groups across cultures.
Wednesday, 17 December 2025
Guest Blog: Navigating AI–nature frictions: Autonomous vehicle testing and nature-based constraints
Das, P., Woods, O., & Kong, L. (2025). Navigating AI–nature frictions: Autonomous vehicle testing and nature-based constraints. Big Data & Society, 12(4). https://doi.org/10.1177/20539517251406123 (Original work published 2025)
AI technologies are increasingly embedded in urban environments, with expanding applications across transportation, healthcare, disaster management, and public safety. Promoted by tech firms and policymakers as a transformative tool for more efficient urban management, AI is often imagined as operating smoothly across urban spaces. In practice, however, urban environments are inherently erratic, shaped by complex and unpredictable social, political, cultural, and ecological interactions. These interactions and conditions are not always compatible with AI systems. Rather than viewing such challenges as temporary obstacles, our work foregrounds the tensions and frictions that fundamentally shape how AI is integrated into urban life.
In our recent paper, we focus on one specific dimension of AI–urban friction: the unpredictability of nature-based factors such as weather patterns and vegetation growth. We explore this through the case of autonomous vehicle (AV) testing in Singapore. The study forms part of a larger research project on smart city knowledge transfer in Southeast Asia, spanning Singapore, Thailand, Indonesia, and Vietnam. As a leader in digital transformation, Singapore positions itself as a testbed for advanced technologies such as AI and digital twins. During our fieldwork, AV testing emerged as a key site where frictions between AI systems and urban nature became especially visible.
To make sense of these dynamics, we introduce the concept of frictional urbanisms in our paper. This framework captures how the smooth operational demands of AI come into conflict with the rough and unpredictable conditions of urban environments. Our findings show that such frictions are not exceptional circumstances, but everyday conditions that shape how AI systems are tested, adapted, and governed.
The paper also lays the foundation for our next research project, “Autonomizing environmental governance in Asian cities: AI, climate change and frictional urbanisms,” funded by the Ministry of Education, Singapore. Beginning in January 2026, the project will examine how AI is reshaping urban environmental governance across South and Southeast Asia, with a focus on the emerging challenges at the intersection of two rapidly evolving fields: AI and urban climate change governance.
Sunday, 7 December 2025
Guest Blog: The Sociotechnical Politics of Digital Sovereignty: Frictional Infrastructures and the Alignment of Privacy and Geopolitics
by Samuele Fratini
Fratini, S. (2025). The sociotechnical politics of digital sovereignty: Frictional infrastructures and the alignment of privacy and geopolitics. Big Data & Society, 12(4). https://doi.org/10.1177/20539517251400729 (Original work published 2025)
When policymakers and scholars talk about digital sovereignty, they mostly talk about laws, borders and state power. Yet everyday technologies often seem to do the same kind of political work. I applied a Science & Technology Studies approach to Threema, a Swiss secure-messaging app, because it offered a rich, situated case where design choices, contracts and marketing all appeared to be doing something like sovereignty in practice.
In the paper I argue two linked things. First, digital sovereignty is best seen as a hybrid black box: a provisional assemblage where technical choices (encryption, server location), institutional arrangements (procurement, service clauses) and discursive claims (trust, Swissness) come together and, unexpectedly, reinforce state digital sovereignty by reducing foreign dependencies. Second, the moments that reveal how that order is made are frictions — specific tensions that open the box. I trace three productive frictions in the Threema case: privacy (limits on metadata and surveillance), seclusion (refusal to interoperate), and territorialism (integration within institutions and servers).
Empirically, interviews with Threema staff and Swiss institutional actors, together with corporate and institutional documents, show how these frictions translate into concrete outcomes: server-location clauses, closed architectures, and privacy narratives that align private infrastructure with public expectations.
What should readers take away? If we want a meaningful theory of digital sovereignty, our epistemologies cannot stop at laws. We need to pay attention to procurement rules, certification systems and the everyday design decisions of developers: the practical mechanisms through which sovereignty is actually enacted. This paper offers a conceptual foundation for grounding geopolitical competition in everyday sociotechnical practices.
Wednesday, 3 December 2025
Guest Blog: LLMs and the Embedding of All the World
By Mikael Brunila
Brunila, M. (2025). Cosine capital: Large language models and the embedding of all things. Big Data & Society, 12(4). https://doi.org/10.1177/20539517251386055 (Original work published 2025)
Large language models (LLMs) have taken the world by storm since the release of ChatGPT in November 2022. The promise of Artificial Intelligence (AI) evolving into Artificial General Intelligence (AGI) is prompting companies to invest increasingly substantial sums in these technologies. This mania is only paralleled by quickly growing fears of automation and replacement among intellectual and creative workers of all stripes.
Looking beyond these dominant narratives on AI, I wanted to take a closer look at the first principles of language modeling to understand (1) how these models make sense of the world and (2) what kind of fundamental power asymmetries might result from their proliferation. By examining Word2Vec, an early neural language model, I show that language models (whether large or small) extend a metaphor of communication established by early information theory and its encoding of data as “bits”. Instead of reducing the world to zeros and ones, language models encode words as “embeddings”: series of hundreds of numbers that are “learned” as the model is trained on massive troves of textual data. What is more, any sequential data, not only text, can serve as the input for producing embeddings. Text, images, behavioral profiles of swipes on dating apps, listening preferences on streaming services, clicks in browser sessions, and DNA sequences can all be “embedded”.
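To make the idea of an embedding concrete, here is a minimal, self-contained sketch (my own illustration using the gensim library, not code from the article or its repository) that trains a tiny Word2Vec model on a toy corpus and inspects the learned vector for one word; the corpus and the 50-dimensional vector size are arbitrary choices for demonstration.

```python
# Minimal illustration (not from the article's repository): train a tiny
# Word2Vec model on a toy corpus and inspect one word's embedding.
from gensim.models import Word2Vec

# Toy corpus; in practice, models are trained on massive troves of text.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "pets"],
]

# vector_size sets how many numbers represent each word; real models use
# hundreds of dimensions, as the post describes.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=50, seed=0)

vec = model.wv["cat"]   # the learned embedding: an array of 50 floats
print(vec.shape)        # (50,)
print(vec[:5])          # the first few coordinates of the vector
```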
To describe the consequences of this new regime of modeling, I introduce the concept of “cosine capital”. This term is informed by two lines of inquiry. On the one hand, “cosine” refers to the measure used to evaluate how similar two embeddings are. On the other hand, “capital” describes the manner in which embeddings are accumulated over time. As technical systems increasingly rely on embeddings, our interactions with these systems end up producing data for the fine-tuning of more and “better” embeddings. This is what happens when we use ChatGPT, for instance. This movement, where companies that control a technology end up accumulating more and more of it, is reminiscent of Marx’s understanding of value as something that is always in motion. Cosine capital, then, is my attempt to theorize how LLMs act as harbingers of a new paradigm of quantifying the social world.
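For readers unfamiliar with the measure, the short sketch below (again my own illustration, not the author's code) computes cosine similarity between toy embedding vectors, the quantity that the "cosine" in cosine capital refers to:

```python
# Illustrative only: cosine similarity between two toy embedding vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """cos(a, b) = (a . b) / (||a|| * ||b||); 1.0 means the vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
emb_cat = rng.normal(size=300)                  # stand-in embedding for "cat"
emb_dog = emb_cat + 0.3 * rng.normal(size=300)  # a nearby vector, e.g. "dog"
emb_car = rng.normal(size=300)                  # an unrelated vector

print(round(cosine_similarity(emb_cat, emb_dog), 3))  # close to 1: similar
print(round(cosine_similarity(emb_cat, emb_car), 3))  # near 0: unrelated
```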
The article is supplemented with a GitHub repository that dives deeper into the technical relationship between bits, entropy, and embeddings.