Wednesday, 25 February 2026

Guest Blog: Shaky Technology, Steady Momentum: How Generative AI Innovation Survives Setbacks

by Choroszewicz and Rannisto

Choroszewicz, M., & Rannisto, A. (2026). AI innovation at the boundaries: Justifying a generative AI decision support tool. Big Data & Society, 13(1). https://doi.org/10.1177/20539517261424159 (Original work published 2026)

Generative AI is entering the public sector under intense conditions: political pressure, organizational enthusiasm, and a widely shared conviction that innovation must move fast. Public organizations across Europe are experimenting at pace. These trials are often wrapped in familiar promises – greater efficiency and productivity, cost savings, better services for citizens, and relief from the friction of bureaucratic routines.

Our paper examines a generative AI decision support tool in Finnish public administration and shows how AI projects can continue even when their promised outcomes remain unfulfilled. We found that the project was sustained through boundary-spanning practices and a powerful “package” of justification frames that made the tool appear irresistible across organizational and professional boundaries.

How a Technology Becomes Irresistible

We identified nine recurring justifications that together formed a protective structure around the tool’s development. This structure did two things at once: it kept the innovation moving from one experiment to the next, and it buffered the project against criticism when doubts about the tool’s reliability began to surface. We observed how these frames emerged and circulated through the project events, artefacts, and representations, making continued development appear well justified. Some of these frames drew their justificatory force from the imagined tool itself, while others drew on the conditions and practices that emerged around it.

The tool-oriented frames leaned on familiar AI promises – efficiency and cost savings – but also on claims about employee well-being and fairness for citizens, with desirability often functioning as a proxy for value. Around the tool, a set of process- and ideology-oriented frames cast speed, bold initiative, and experimentation as virtues, normalizing setbacks and sustaining innovation momentum.

Why It Matters Where Justifications Land

Justifications do not carry the same weight everywhere: what counts as a “good reason” depends on the institutional and cultural landscape in which it is received. In our paper, the Nordic welfare state context seemed to make certain appeals especially resonant, because they aligned organizational performance with worker protection and civic ideals.

A specific justificatory “package” stood out as particularly powerful, combining elements of (i) efficiency: faster, more consistent operation and decisions; (ii) employee well-being: reduced cognitive load for claims specialists; and (iii) civic fairness: fairer, more transparent, and more equitable outcomes for citizens.

Together, they formed a compact public-value package that travelled across groups and stakeholders, making continued development appear justified even though the tool’s performance remained limited.

Boundary Work: Alliances, Divides, and Shifting Responsibilities

Because the justification frames did not operate in isolation, their enactment and force also depended on boundary work, the practices through which organizational and professional lines were crossed, reinforced, or temporarily rearranged as the project unfolded. In other words, what could be justified, to whom, and on what terms was shaped by how relationships, roles, and resources were arranged around the tool.

Collaborative and configurational boundary work took the form of alliances with managers and consultants, alongside practical reconfigurations of existing boundaries to make experimentation possible. New meeting formats, shared artefacts, experiment arrangements, and reporting practices helped gather resources and attention around the tool and sustain its development.

Competitive boundary work surfaced most clearly at the interface between innovation and frontline work, a divide between the flexible world of innovation and the controlled routines, limited risk tolerance, and evaluative standards of frontline work. At this interface, the central question emerged: whose judgments carried weight in defining what the tool was, what it should do, and how its performance should be interpreted?

As the tool’s promises proved difficult to realize, responsibility for “making it work” increasingly drifted from the tool’s outputs toward organizational conditions, expectations, and patterns of use: user interaction, training, prompting practices, document formats, workflow changes, and “AI readiness” more broadly. This shift did not remove the tool’s technical limitations, but it changed where they were made visible – and where critique tended to land – recasting innovation success less as technical robustness and more as organizational and user transformation.

Failure as Business as Usual

The tool’s ongoing inability to deliver on its core promises did not bring the project to a halt. Instead, failure was often folded into the rhythm of innovation praxis as something to be anticipated, worked around, and learned from. Within this framing, failure became normalized as a default condition of progressive innovation and a spur to further activity. Continuing uncertainty was taken as part of the work itself, and so further investments of time, attention, and resources appeared not only reasonable but necessary.

At the same time, the tool’s technical opacity made failure difficult to locate and therefore difficult to settle. When it is hard to say why a system fails, it is also hard to know what would count as a decisive reason to stop. Meanwhile, the surrounding hype around generative AI, combined with the rapid pace of language model development, made it plausible to expect that technical improvements would arrive “from the outside” as models matured. In our case, that expectation proved wrong several times. 

Why Some AI Projects Become Hard to Stop

Our paper shows that sustaining AI innovation is not merely a technical matter. It relies on ongoing boundary-crossing practices and on powerful justifications that resonate with shared values and organizational aspirations. Crucially, what matters is often not any single justification, but how certain justifications cluster into persuasive packages that fit the context in which they circulate.

Our paper also shows that the tool’s development persisted not because it met its promises, but because the surrounding justificatory dynamics made continuation seem reasonable and even difficult to interrupt. Such dynamics can generate momentum, mobilize attention, and direct resources. But they can also narrow the space for critical reflection – locking organizations into particular innovation trajectories and obscuring consideration of alternative pathways, including the option of pausing.

Wednesday, 11 February 2026

Guest Blog: The Uneven Politics of AI Bias: Ageism as a Blind Spot

by Francesca Belotti, Mireia Fernández-Ardèvol, Veysel Bozan, Francesca Comunello, and Simone Mulargia

Belotti, F., Fernández-Ardèvol, M., Bozan, V., Comunello, F., & Mulargia, S. (2026). Double standards of generative AI chatbots: Unveiling (digital) ageism versus sexism through sociological interviews. Big Data & Society, 13(1). https://doi.org/10.1177/20539517261419407 (Original work published 2026)

Many conversational AI systems already display visible “guardrails” when asked to provide information about gender. However, when age—particularly older age—enters the picture, caution often fades. Our research shows that generative AI chatbots tend to reproduce a striking double standard: they actively avoid sexism but not ageism, treating age-based stereotyping as largely unproblematic.

We approach AI ageism as a form of digital ageism. Under the widespread assumption that digital culture is inherently youthful, later life is framed as less competent or in need of support. We analyze AI ageism from a sociotechnical perspective, as generative AI systems are not neutral tools but function as communicative infrastructures that absorb historically situated data, norms, and design choices—only to circulate them back as seemingly reasonable and “authentic” responses.

To explore this dynamic, we conducted semi-structured interviews with five widely used chatbots (ChatGPT, Gemini, Copilot, Jasper, and Perplexity). We took a traditional qualitative sociological approach, considering these conversations as sites where machine sociality becomes visible. 

Two recurring patterns emerged. First, the chatbots frequently resisted assigning gender to our fictional users, often invoking disclaimers about bias. Yet they readily assigned age ranges to the same profiles—typically associating youth with creativity, trendiness, and digital fluency, and older age with traditionalism, limitation, or distance from digital culture. Second, the chatbots framed similar functions differently: younger users were offered “tips,” creativity, and exploration, while older users were offered “assistance,” simplification, and health-related guidance—an example of benevolent stereotyping that reinforces inequality.

Generative AI mirrors society, and ageism is one of its least questioned forms of discrimination. Yet, this blind spot also points to a broader issue: LLMs are quietly transforming historically situated assumptions into default outputs, which, in turn, shape what appears normal, relevant, or legitimate in everyday human–machine communication. Are these conversations more ageist than those between humans? 


Monday, 9 February 2026

Guest Blog: Whose Voice Counts? The Role of Large Language Models in Public Commenting

by Amelia Arsenault and Sarah Kreps

Arsenault, A. C., & Kreps, S. (2026). Whose voice counts? The role of large language models in public commenting. Big Data & Society, 13(1). https://doi.org/10.1177/20539517261419341 (Original work published 2026)

Large language models (LLMs) may help bridge the accessibility gap in public commenting by helping citizens navigate dense policy documents and draft effective comments. At the same time, they may also exacerbate existing inequalities. To examine this tension, we conducted a survey experiment in which participants drafted public comments with and without LLM assistance. We find that LLMs made writing significantly easier across all education levels but did not improve policy comprehension. Instead, education remained the strongest predictor of understanding. Comments written with AI assistance received consistently higher quality ratings regardless of education level, suggesting that LLMs may help less-educated citizens meet the threshold for serious consideration by policymakers. Although these tools may expand access for historically underrepresented voices, broader investments in civic education and AI literacy remain essential for achieving substantively meaningful democratic engagement.

Saturday, 17 January 2026

Guest Blog: When is black-box AI justifiable to use in healthcare?

By Sinead Prince and Julian Savulescu

Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore

Prince, S., & Savulescu, J. (2025). When is black-box AI justifiable to use in healthcare? Big Data & Society, 12(4). https://doi.org/10.1177/20539517251386037 (Original work published 2025)

This research explores a pressing dilemma in healthcare today: when is it justifiable to use black-box AI, which lacks transparent explanations for its decisions, in medical practice? The topic drew our interest due to the growing tension between the potential benefits these AI systems offer—such as more accurate diagnostics—and the strict regulations in regions like Australia and Europe that prohibit their use without adequate transparency.

In AI ethics literature, many argue that AI in healthcare must be explainable. However, there are crucial situations in which accuracy and reliability become more important than transparency. For example, in urgent or serious cases, faster and more precise AI-driven decisions can save lives even if the reasoning is inscrutable. Explainability cannot be the only ethical requirement—context, accuracy, bias, seriousness, urgency, and human involvement also play essential roles in determining when to use black-box AI in medical decision making. 

Our main argument is that automatic bans on black-box AI overlook its practical value. Rather than demanding explainability as a universal rule, we must consider the context and balance multiple factors. We advocate for a pluralistic framework: black-box AI is ethically permissible if it is highly accurate and reliable, and if other conditions—such as patient consent and regulatory oversight—are met. Accuracy alone isn’t enough, but neither is explainability; justification depends on the specifics of each decision.

We should therefore adopt a flexible approach to AI ethics in healthcare. Instead of one-size-fits-all standards, ethical use should depend on balancing factors like patient needs, urgency, benefits, and risks. Black-box AI, deployed responsibly, can be justified in healthcare settings—even without perfect transparency.

Tuesday, 23 December 2025

Guest Blog: Cross-Cultural Challenges in Generative AI: Addressing Homophobia in Diverse Sociocultural Contexts

by Lilla Vicsek, Mike Zajko, Anna Vancsó, Judit Takacs, and Szabolcs Annus

Vicsek, L., Zajko, M., Vancsó, A., Takacs, J., & Annus, S. (2025). Cross-cultural challenges in generative AI: Addressing homophobia in diverse sociocultural contexts. Big Data & Society, 12(4). https://doi.org/10.1177/20539517251396069 (Original work published 2025)

Calls for greater cultural sensitivity in artificial intelligence are increasingly shaping debates about how generative AI should interact with users across the world. Yet, when these technologies adapt too closely to local values, tensions can emerge between cultural sensitivity and universal human rights. Our research set out to explore what this tension looks like in practice.

We analyzed how two major generative AI systems—ChatGPT 3.5 and Google Bard—respond to homophobic statements. To test whether contextual information would affect their replies, we varied the prompts by including details about the user’s religion or country. 

The results revealed that ChatGPT’s responses often reflected cultural relativism, emphasizing that different cultures hold different viewpoints on this topic and that all perspectives should be respected. Bard’s responses, in contrast, tended to foreground human rights more, providing stronger and more explicit support for LGBTQ+ people and equality. Both systems showed significant variation in their responses depending on the background information supplied about the user, suggesting that AI systems may adjust the degree and form of support they express for LGBTQ+ people and issues according to the information they receive about a user.

By uncovering this dynamic, our study highlights an important ethical dilemma. While cultural awareness is important, AI’s efforts to align with diverse values must not come at the cost of endorsing discriminatory or exclusionary beliefs. We argue that generative AI design and governance should be firmly grounded in universal human rights principles to ensure protection for marginalized groups across cultures.

Wednesday, 17 December 2025

Guest Blog: Navigating AI–nature frictions: Autonomous vehicle testing and nature-based constraints

Das, P., Woods, O., & Kong, L. (2025). Navigating AI–nature frictions: Autonomous vehicle testing and nature-based constraints. Big Data & Society, 12(4). https://doi.org/10.1177/20539517251406123 (Original work published 2025)

AI technologies are increasingly embedded in urban environments, with expanding applications across transportation, healthcare, disaster management, and public safety. Promoted by tech firms and policymakers as a transformative tool for more efficient urban management, AI is often imagined as operating smoothly across urban spaces. In practice, however, urban environments are inherently erratic, shaped by complex and unpredictable social, political, cultural, and ecological interactions. These interactions and conditions are not always compatible with AI systems. Rather than viewing such challenges as temporary obstacles, our work foregrounds the tensions and frictions that fundamentally shape how AI is integrated into urban life.

In our recent paper, we focus on one specific dimension of AI–urban friction: the unpredictability of nature-based factors such as weather patterns and vegetation growth. We explore this through the case of autonomous vehicle (AV) testing in Singapore. The study forms part of a larger research project on smart city knowledge transfer in Southeast Asia, spanning Singapore, Thailand, Indonesia, and Vietnam. As a leader in digital transformation, Singapore positions itself as a testbed for advanced technologies such as AI and digital twins. During our fieldwork, AV testing emerged as a key site where frictions between AI systems and urban nature became especially visible in Singapore.

To make sense of these dynamics, we introduce the concept of frictional urbanisms in our paper. This framework captures how the smooth operational demands of AI come into conflict with the rough and unpredictable conditions of urban environments. Our findings show that such frictions are not exceptional circumstances, but everyday conditions that shape how AI systems are tested, adapted, and governed.

The paper also lays the foundation for our next research project, “Autonomizing environmental governance in Asian cities: AI, climate change and frictional urbanisms,” funded by the Ministry of Education, Singapore. Beginning in January 2026, the project will examine how AI is reshaping urban environmental governance across South and Southeast Asia, with a focus on the emerging challenges at the intersection of two rapidly evolving fields: AI and urban climate change governance.

Sunday, 7 December 2025

Guest Blog: The Sociotechnical Politics of Digital Sovereignty: Frictional Infrastructures and the Alignment of Privacy and Geopolitics

by Samuele Fratini

Fratini, S. (2025). The sociotechnical politics of digital sovereignty: Frictional infrastructures and the alignment of privacy and geopolitics. Big Data & Society, 12(4). https://doi.org/10.1177/20539517251400729 (Original work published 2025)

When policymakers and scholars talk about digital sovereignty, they mostly talk about laws, borders and state power. Nevertheless, everyday technologies often seem to do the same kind of political work. I applied a Science & Technology Studies approach to Threema, a Swiss secure-messaging app, because it offered a rich, situated case where design choices, contracts and marketing all appeared to be doing something like sovereignty in practice.

In the paper I argue two linked things. First, digital sovereignty is best seen as a hybrid black box: a provisional assemblage where technical choices (encryption, server location), institutional arrangements (procurement, service clauses) and discursive claims (trust, Swissness) come together and, unexpectedly, reinforce state digital sovereignty by reducing foreign dependencies. Second, the moments that reveal how that order is made are frictions — specific tensions that open the box. I trace three productive frictions in the Threema case: privacy (limits on metadata and surveillance), seclusion (refusal to interoperate), and territorialism (integration within institutions and servers).

Empirically, interviews with Threema staff and Swiss institutional actors, together with corporate and institutional documents, show how these frictions translate into concrete outcomes: server-location clauses, closed architectures, and privacy narratives that align private infrastructure with public expectations.

What should readers take away? If we want a meaningful theory of digital sovereignty, our epistemologies cannot stop at laws. We need to pay attention to procurement rules, certification systems and the everyday design decisions of developers: the practical mechanisms through which sovereignty is actually enacted. This paper offers the conceptual foundation to ground geopolitical competition onto everyday sociotechnical practices.