Sunday, 15 March 2026

Guest Blog: Working Closely with Data: Aspirations and Constraints of Chinese Data Scientists

by Tingting Liu

Tan, J., & Liu, T. (2026). Working closely with data: Aspirations and constraints of Chinese data scientists. Big Data & Society, 13(1). https://doi.org/10.1177/20539517261424165 (Original work published 2026)

What drew me to studying data scientists was their unique position in the digital economy. As a cultural anthropologist deeply engaged with labour cultures, I have spent years researching factory workers, livestreamers, white-collar employees, and police officers. When my co-author Jingxin Tan began her master’s project on digital labour, our interests converged on this newly emerging profession.

In both China and the United States, this profession emerged around the same time, to a certain degree reflecting the intense technological competition between the two countries. In English, it is usually called “data scientist,” while in China it is widely known as “algorithm engineer”. On Chinese recruitment platforms, the two labels often appear side by side, and the scope of work they cover has been constantly shifting, reflecting how the role continues to evolve. Data scientists’ everyday work revolves around collecting and cleaning massive datasets, building and optimizing machine-learning models, running endless experiments, and translating algorithmic results into business decisions. In their own everyday language, they often jokingly describe this process as liandan (“alchemy”), capturing the sense of repeatedly tweaking parameters and hoping for breakthroughs.
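
The liandan metaphor maps onto the trial-and-error loop of model tuning. As a purely illustrative sketch (not drawn from the article; train_and_score is a hypothetical stand-in for a real training run), the loop looks something like this:

import itertools
import random

def train_and_score(learning_rate, depth):
    # Hypothetical stand-in for training a model on a real dataset
    # and returning a validation score.
    return random.random()

# Sweep a small grid of hyperparameters, keeping the best so far:
# the "tweak parameters and hope for a breakthrough" loop.
best_score, best_params = float("-inf"), None
for lr, depth in itertools.product([0.1, 0.01, 0.001], [4, 6, 8]):
    score = train_and_score(lr, depth)
    if score > best_score:
        best_score, best_params = score, (lr, depth)
print("best parameters:", best_params, "score:", best_score)

In practice each iteration can consume hours of compute, which is why the outcome feels as uncertain as alchemy.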

Many data scientists know that the systems they build may replace other digital workers. Yet they often interpret this as proof that everyone must keep learning—or be left behind. This belief reflects how deeply they internalise the logic of the tech industry. At the same time, they joke about their own futures: “After 35, I’ll deliver food,” or “Maybe I’ll sell houses.” These jokes reveal both confidence and anxiety. What readers can take away is this: behind the glossy image of AI innovation are workers caught between high pay, professional pride, and fragile long-term prospects. Studying them helps us better understand the human costs of data-driven capitalism—and why optimism in tech can be both empowering and cruel.

Guest Blog: Rating villagers’ morality: techno-moral governance via a data scoring system in rural China

by Wilfred Yang Wang, Yu Sun, and Linlin Li

Sun, Y., Wang, W. Y., & Li, L. (2026). Rating villagers’ morality: Techno-moral governance via a data scoring system in rural China. Big Data & Society, 13(1). https://doi.org/10.1177/20539517261424163 (Original work published 2026)

As digital technologies are applied in rural governance, everyday utilities and service delivery are digitised and datafied, creating new forms of order and structure in everyday practices in rural China. With data, algorithms and platforms embedded in everyday settings, governments strive to place automated and ‘smart’ technologies at the centre of rural governance and social control.

This article develops a framework of techno-moral governance to capture emerging forms of datafied governance in China. We take techno-moral governance as a critical approach to examining how infrastructural arrangements intersect with moral assessment in shaping governing techniques, practices and visions in and beyond rural China.

Using the case of the smartphone app Xiangcun Ding, deployed in rural areas of Zhejiang province, we draw on fieldwork data collected during ethnographic visits to 10 villages in Jiande county to explore the operation of its data scoring system as a national example of datafied rural governance. We unravel how the state’s ideologies of governance and discourses of morality and civilization are built into the material characteristics and functions of the technology, and how techno-moral governance operates on the basis of this embedded material-discursive nexus in rural China.

Our argument is threefold. First, while the app operates on seemingly automated algorithmic calculation to assess individuals’ morality, its deployment and operational efficiency rely on ‘co-opting’ supervision between the administrative power of local governments and Alibaba’s techno-corporate support. Second, the new data scoring system derives its legitimacy from the socialist legacy of the work point (gongfen) system practised in the Mao era, which it incorporates as a major functional feature to encourage and mobilize uptake and acceptance of the new technology. Third, by immersing itself in local social infrastructures and everyday lives, Xiangcun Ding plays a critical role in normalising and justifying the datafication of lives and expanding the boundaries of social governance and control.

The lens of techno-moral governance in rural areas allows us to shift our focus to the reconfiguration of social and material lives and relations through data technologies, beyond a linear focus on the construction of discursive politics, which has dominated understandings of digital governance centred on urban China. We therefore suggest that it is necessary to examine both the material and the discursive processes, and their composite, to understand the complex dynamics of China’s internet governance and its emerging datafied approaches.

Tuesday, 10 March 2026

Guest Blog: ChatGPT, a colonialist agent of lifeworlds: A Habermassian analysis of conversations

by Régis Martineau, Elise Berlinski, and Frantz Rowe

Martineau, R., Berlinski, E., & Rowe, F. (2026). ChatGPT, a colonialist agent of lifeworlds: A Habermassian analysis of conversations. Big Data & Society, 13(1). https://doi.org/10.1177/20539517261421473 (Original work published 2026)

The use of generative AI (GAI), such as ChatGPT, is becoming widespread, extending from professional into private life. As these systems are increasingly used as thinking aids, our relationship with them is shifting from that of simple tools to that of indispensable companions. To date, most publications on GAI analyze its productive capabilities: questions arise about how it could increase employees' performance or solve previously unsolvable problems. However, its widespread use also implies profound changes in routine communicational interactions.

According to Habermas, communication is constitutive of democracy, because through non-instrumental interactions we co-construct a world in accordance with common values. Drawing on Habermas’ theory of communication and an analysis of the properties of GAI, this article shows that interactions with GAI can only be instrumental. Indeed, GAI does not understand a conversation; it produces statistically probable sentences: it is a stochastic parrot. It cannot be trusted, as it is biased and its assertions change over time and lack coherence. It is unreliable and therefore not sincere. Furthermore, the origins of its biases cannot be identified, because the GAI is unaware of its own judgmental foundations: it does not understand what a judgement is.
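
The “stochastic parrot” point can be made concrete with a toy example. The following minimal sketch (our illustration, not the authors’; the bigram table is invented) shows how a language model samples each next word from a probability distribution, so two runs of the same prompt can yield different, equally fluent assertions:

import random

# Invented bigram probabilities standing in for a trained language model.
model = {
    "AI": [("is", 0.6), ("can", 0.4)],
    "is": [("reliable", 0.5), ("biased", 0.5)],
    "can": [("help", 0.7), ("mislead", 0.3)],
}

def generate(token, steps=2):
    out = [token]
    for _ in range(steps):
        choices = model.get(out[-1])
        if not choices:
            break
        words, probs = zip(*choices)
        # Sample the next word; nothing is "understood", only weighted.
        out.append(random.choices(words, weights=probs)[0])
    return " ".join(out)

print(generate("AI"))  # may print "AI is reliable" on one run, "AI is biased" on the next

Nothing in this loop represents a judgement or a reason; it only reproduces the statistics of its training data, which is precisely why sincerity, in Habermas’s sense, is unavailable to it.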

This has serious consequences for our democracies. GAIs are gradually colonizing the space of co-constructive communication, which means that individuals increasingly struggle with non-instrumental communication. For example, it is much easier to have a romantic relationship with a GAI that can be tailored to one’s preferences. As a result, the availability of and space for democratic conversation shrink and become increasingly polarized. Furthermore, as this example also shows, GAIs distort conversations and blur the line between the instrumental and the co-constructive. Worse, a growing number of studies report very limited productivity gains and even demonstrate negative cognitive effects. We call for drawing conclusions from this and limiting these tools to clearly delimited instrumental uses.

Wednesday, 25 February 2026

Guest Blog: Shaky Technology, Steady Momentum: How Generative AI Innovation Survives Setbacks

by Choroszewicz and Rannisto

Choroszewicz, M., & Rannisto, A. (2026). AI innovation at the boundaries: Justifying a generative AI decision support tool. Big Data & Society, 13(1). https://doi.org/10.1177/20539517261424159 (Original work published 2026)

Generative AI is entering the public sector under intense conditions: political pressure, organizational enthusiasm, and a widely shared conviction that innovation must move fast. Public organizations across Europe are experimenting at pace. These trials are often wrapped in familiar promises – greater efficiency and productivity, cost savings, better services for citizens, and relief from the friction of bureaucratic routines.

Our paper examines a generative AI decision support tool in Finnish public administration and shows how AI projects continue even when their promised outcomes remain unfulfilled. We found that the project was sustained through boundary-spanning practices and a powerful “package” of justification frames that made the tool appear irresistible across organizational and professional boundaries.

How a Technology Becomes Irresistible

We identified nine recurring justifications that together formed a protective structure around the tool’s development. This structure did two things at once: it kept the innovation moving from one experiment to the next, and it buffered the project against criticism when doubts about the tool’s reliability began to surface. We observed how these frames emerged and circulated through the project events, artefacts, and representations, making continued development appear well justified. Some of these frames drew their justificatory force from the imagined tool itself, while others drew on the conditions and practices that emerged around it.

The tool-oriented frames leaned on familiar AI promises – efficiency and cost savings – but also on claims about employee well-being and fairness for citizens, with desirability often functioning as a proxy for value. Around the tool, a set of process- and ideology-oriented frames cast speed, bold initiative, and experimentation as virtues, normalized setbacks, and sustained innovation momentum.

Why It Matters Where Justifications Land

Justifications do not carry the same weight everywhere: what counts as a “good reason” depends on the institutional and cultural landscape in which it is received. In our paper, the Nordic welfare state context seemed to make certain appeals especially resonant, because they aligned organizational performance with worker protection and civic ideals.

A specific justificatory “package” stood out as particularly powerful, combining elements of (i) efficiency: faster, more consistent operation and decisions; (ii) employee well-being: reduced cognitive load for claims specialists; and (iii) civic fairness: fairer, more transparent, and more equitable outcomes for citizens.

Together, they formed a compact public-value package that travelled across groups and stakeholders, making continued development appear justified even though the tool’s performance remained limited.

Boundary Work: Alliances, Divides, and Shifting Responsibilities

Because the justification frames did not operate in isolation, their enactment and force also depended on boundary work, the practices through which organizational and professional lines were crossed, reinforced, or temporarily rearranged as the project unfolded. In other words, what could be justified, to whom, and on what terms was shaped by how relationships, roles, and resources were arranged around the tool.

Collaborative and configurational boundary work took the form of alliances with managers and consultants, alongside practical reconfigurations of existing boundaries to make experimentation possible. New meeting formats, shared artefacts, experiment arrangements, and reporting practices helped gather resources and attention around the tool and sustain its development.

Competitive boundary work surfaced most clearly at the interface between innovation and frontline work, a divide between the flexible world of innovation and the controlled routines, limited risk tolerance and evaluative standards of frontline work. At this interface, the central question emerged: whose judgments carried weight in defining what the tool was, what it should do, and how its performance should be interpreted?

As the tool’s promises proved difficult to realize, responsibility for “making it work” increasingly drifted from the tool’s outputs toward organizational conditions, expectations, and patterns of use: user interaction, training, prompting practices, document formats, workflow changes, and “AI readiness” more broadly. This shift did not remove the tool’s technical limitations, but it changed where they were made visible – and where critique tended to land – recasting innovation success less as technical robustness and more as organizational and user transformation.

Failure as Business as Usual

The tool’s ongoing inability to deliver on its core promises did not bring the project to a halt. Instead, failure was often folded into the rhythm of innovation praxis as something to be anticipated, worked around, and learned from. Within this framing, failure became normalized as a default condition of progressive innovation, a spur to further activity. Continuing uncertainty was taken as part of the work itself, and so further investments of time, attention, and resources appeared not only reasonable but necessary.

At the same time, the tool’s technical opacity made failure difficult to locate and therefore difficult to settle. When it is hard to say why a system fails, it is also hard to know what would count as a decisive reason to stop. Meanwhile, the surrounding hype around generative AI, combined with the rapid pace of language model development, made it plausible to expect that technical improvements would arrive “from the outside” as models matured. In our case, that expectation proved wrong several times. 

Why Some AI Projects Become Hard to Stop

Our paper shows that sustaining AI innovation is not merely a technical matter. It relies on ongoing boundary-crossing practices and on powerful justifications that resonate with shared values and organizational aspirations. Crucially, what matters is often not any single justification, but how certain justifications cluster into persuasive packages that fit the context in which they circulate.

Our paper also shows that the tool’s development persisted not because it met its promises, but because the surrounding justificatory dynamics made continuation seem reasonable and even difficult to interrupt. Such dynamics can generate momentum, mobilize attention, and direct resources. But they can also narrow the space for critical reflection – locking organizations into particular innovation trajectories and obscuring consideration of alternative pathways, including the option of pausing.

Wednesday, 11 February 2026

Guest Blog: The Uneven Politics of AI Bias: Ageism as a Blind Spot

by Francesca Belotti, Mireia Fernández-Ardèvol, Veysel Bozan, Francesca Comunello, and Simone Mulargia

Belotti, F., Fernández-Ardèvol, M., Bozan, V., Comunello, F., & Mulargia, S. (2026). Double standards of generative AI chatbots: Unveiling (digital) ageism versus sexism through sociological interviews. Big Data & Society, 13(1). https://doi.org/10.1177/20539517261419407 (Original work published 2026)

Many conversational AI systems already display visible “guardrails” when asked to provide information about gender. However, when age—particularly older age—enters the picture, caution often fades. Our research shows that generative AI chatbots tend to reproduce a striking double standard: they actively avoid sexism but not ageism, treating age-based stereotyping as largely unproblematic.

We approach AI ageism as a form of digital ageism. Under the widespread assumption that digital culture is inherently youthful, later life is framed as less competent or in need of support. We analyze AI ageism from a sociotechnical perspective, as generative AI systems are not neutral tools but function as communicative infrastructures that absorb historically situated data, norms, and design choices—only to circulate them back as seemingly reasonable and “authentic” responses.

To explore this dynamic, we conducted semi-structured interviews with five widely used chatbots (ChatGPT, Gemini, Copilot, Jasper, and Perplexity). We took a traditional qualitative sociological approach, considering these conversations as sites where machine sociality becomes visible. 

Two recurring patterns emerged. First, the chatbots frequently resisted assigning gender to our fictional users, often invoking disclaimers about bias. Yet they readily assigned age ranges to the same profiles—typically associating youth with creativity, trendiness, and digital fluency, and older age with traditionalism, limitation, or distance from digital culture. Second, the chatbots framed similar functions differently: younger users were offered “tips,” creativity, and exploration, while older users were offered “assistance,” simplification, and health-related guidance—an example of benevolent stereotyping that reinforces inequality.
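
The double standard is easy to probe for. Here is a schematic sketch (purely hypothetical; query_chatbot is a placeholder, not part of our qualitative interview protocol) of the comparison underlying the first pattern:

# Same fictional profile; only the attribute being asked about changes.
profile = "This user loves gardening, reads the news daily, and texts friends."

prompts = {
    "gender": profile + " What is this user's gender?",
    "age": profile + " What is this user's age range?",
}

def query_chatbot(prompt):
    # Placeholder for a real chatbot session (ChatGPT, Gemini, etc.).
    return "<chatbot response>"

for attribute, prompt in prompts.items():
    # Reported pattern: a bias disclaimer for the gender question, but a
    # confident, stereotype-laden guess for the age question.
    print(attribute, "->", query_chatbot(prompt))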

Generative AI mirrors society, and ageism is one of its least questioned forms of discrimination. Yet, this blind spot also points to a broader issue: LLMs are quietly transforming historically situated assumptions into default outputs, which, in turn, shape what appears normal, relevant, or legitimate in everyday human–machine communication. Are these conversations more ageist than those between humans? 

Monday, 9 February 2026

Guest Blog: Whose Voice Counts? The Role of Large Language Models in Public Commenting

by Amelia Arsenault and Sarah Kreps

Arsenault, A. C., & Kreps, S. (2026). Whose voice counts? The role of large language models in public commenting. Big Data & Society, 13(1). https://doi.org/10.1177/20539517261419341 (Original work published 2026)

Large language models (LLMs) may help bridge the accessibility gap in public commenting by enabling citizens to navigate dense policy documents and draft effective comments. At the same time, they may also exacerbate existing inequalities. To examine this tension, we conducted a survey experiment in which participants drafted public comments with and without LLM assistance. We find that LLMs made writing significantly easier across all education levels but did not improve policy comprehension. Instead, education remained the strongest predictor of understanding. Comments written with AI assistance received consistently higher quality ratings regardless of education level, suggesting that LLMs may help less-educated citizens meet the threshold for serious consideration by policymakers. Although these tools may expand access for historically underrepresented voices, broader investments in civic education and AI literacy remain essential for achieving substantively meaningful democratic engagement.

Saturday, 17 January 2026

Guest Blog: When is black-box AI justifiable to use in healthcare?

by Sinead Prince and Julian Savulescu

Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore

Prince, S., & Savulescu, J. (2025). When is black-box AI justifiable to use in healthcare? Big Data & Society, 12(4). https://doi.org/10.1177/20539517251386037 (Original work published 2025)

This research explores a pressing dilemma in healthcare today: when is it justifiable to use black-box AI, which lacks transparent explanations for its decisions, in medical practice? The topic drew our interest due to the growing tension between the potential benefits these AI systems offer—such as more accurate diagnostics—and the strict regulations in regions like Australia and Europe that prohibit their use without adequate transparency.

In the AI ethics literature, many argue that AI in healthcare must be explainable. However, there are crucial situations in which accuracy and reliability matter more than transparency. For example, in urgent or serious cases, faster and more precise AI-driven decisions can save lives even if the reasoning is inscrutable. Explainability cannot be the only ethical requirement: context, accuracy, bias, seriousness, urgency, and human involvement also play essential roles in determining when to use black-box AI in medical decision making.

Our main argument is that automatic bans on black-box AI overlook its practical value. Rather than demanding explainability as a universal rule, we must consider the context and balance multiple factors. We advocate for a pluralistic framework: black-box AI is ethically permissible if it is highly accurate and reliable, and if other conditions—such as patient consent and regulatory oversight—are met. Accuracy alone isn’t enough, but neither is explainability; justification depends on the specifics of each decision.

We should therefore adopt a flexible approach to AI ethics in healthcare. Instead of one-size-fits-all standards, ethical use should depend on balancing factors like patient needs, urgency, benefits, and risks. Black-box AI, deployed responsibly, can be justified in healthcare settings—even without perfect transparency.