Tuesday, 4 February 2025

Guest Blog: What holds the field of AI together?

by Glen Berman, Kate Williams, and Eliel Cohen

Berman, G., Williams, K., & Cohen, E. (2025). The benefits of being between (many) fields: Mapping the high-dimensional space of AI research. Big Data & Society, 12(1). https://doi.org/10.1177/20539517241306355

Artificial Intelligence is being mobilized across the university sector to coordinate a wide range of research programs and activities. AI is ‘in’, and nobody wants to be left out. But this raises the question: what is it about AI—as a discursive concept—that makes it so adaptable? What is it about the notion of AI that enables such a heterogeneous array of actors—computer scientists, climate scientists, roboticists, ethicists, lawyers, artists, and more—to enroll it in their efforts to attract resources, coordinate research projects, and communicate research impacts? And, across this heterogeneous network, what, if anything, lends the field of actors associated with AI coherence and stability?

In our paper, we begin to answer these questions. Through semi-structured interviews (n = 90) with academics affiliated with AI research networks in the UK, US, and Australia, we explore how notions of the ‘AI researcher’ and the ‘AI research field’ are constructed. Through our empirical account, we describe an AI research field that is uncertain and unstable. The field emerges at the intersection of multiple overlapping boundaries: between disciplines; between academia, industry, and government; and between national and international hierarchies. The field is also marked by commitments to highly applied, interdisciplinary, and intersectoral research activities. Within this context, notions of AI are strategically mobilized by AI researchers and AI research networks to position themselves as intermediaries at these intersections. In this light, we argue that the definitional messiness associated with AI is a strategic commitment of the field—the fact that there are no shared or bounded definitions of what constitutes AI helps AI researchers move between disciplines and sectors.

Yet not just anyone can claim to be an AI researcher. To be legitimized in a high-dimensional field requires cultivating an expertise that can be readily translated into the evaluation structures of many different fields. AI researchers’ commitment to applied research can be interpreted as one strategy for achieving this. Applied research can meet the demands of government funders and industry partners, can translate into news media reporting, and can be published in parallel academic disciplines. And, in the absence of formal qualifications or training programs, legitimization relies on affiliation with organizations or institutions whose value readily translates across fields. For individual researchers and for university administrations, then, establishing new, explicitly intersectoral and interdisciplinary AI-branded research networks is a self-reinforcing response to the high dimensionality of the AI research field.

Wednesday, 8 January 2025

Guest Blog: Exploring Digital Platforms in Agriculture: Oligopolistic Platformization and Its Impacts

Sauvagerd, M., Mayer, M., & Hartmann, M. (2024). Digital platforms in the agricultural sector: Dynamics of oligopolistic platformisation. Big Data & Society, 11(4). https://doi.org/10.1177/20539517241306365

The agricultural sector is undergoing a profound digital transformation. Unlike other industries where new entrants disrupt existing markets, agriculture presents a unique phenomenon which we term “oligopolistic platformization.” Here, multinational upstream agribusinesses like John Deere and Bayer collaborate with Big Tech giants such as Microsoft, Amazon, and Google to integrate digital tools into farming while consolidating their market dominance.

Agribusinesses have developed sectoral platforms offering specialized solutions like real-time field monitoring, weather analytics, and precision farming. These platforms often create “digital twins” of farms, seamlessly integrating data sources such as spatial, climatic, and machine-generated inputs. Big Tech, in turn, provides critical infrastructure – AI, cloud computing, and data analytics – that powers the scalability and innovation of these platforms.

However, this transformation comes with complexities. At the heart of the shift lies datafication (turning farm activities into actionable data), selection (deciding what data is shared and with whom), and commodification (monetizing insights derived from this data). While these processes generate value, they also consolidate power among a few key players, potentially marginalizing smaller startups and reducing farmers’ autonomy.

Our visualization maps out the economic collaborations between Big Tech, agribusinesses, startups, and public institutions to reveal the industry’s reliance on a small number of tech giants for digital infrastructure, underscoring the growing consolidation in this space. Major agribusinesses like John Deere, Syngenta, and BASF have formed partnerships with these infrastructure providers, increasingly blurring boundaries between seeds, agrochemicals, biotechnology, and digital agriculture.*


As digital platforms become integral to agriculture, it is critical to address issues such as power asymmetries, opportunities for smaller players, and equitable data-sharing. Future research should examine the evolving dynamics of oligopolistic platformization under regulations like the EU Data Act, which mandates data access and interoperability. 

* While not fully comprehensive, the visualization represents an overall picture of the economic relationships within the industry.




Tuesday, 10 December 2024

Seeking Assistant Editors for Big Data & Society

The journal Big Data & Society (https://journals.sagepub.com/home/bds) is calling for expressions of interest in serving as an Assistant Editor (AE) for a term of 1 to 2 years. The goal of the journal's AE program is to provide opportunities for early-stage scholars (most often Ph.D. students) to network and gain experience in the peer-review journal publishing process. Assistant Editors will have opportunities to discuss research topics with journal editors and co-editors, will be invited to review for the journal, and will contribute to promoting the journal (via Twitter) and to other projects as they arise. Unfortunately, the AE position does not come with any monetary stipend.

To express interest in becoming an Assistant Editor, please fill out the form available at this Google Form by January 10, 2025. If you have questions, you can contact the Editor-in-Chief of the journal, Dr. Matthew Zook (zook@uky.edu).

You can also find more information about the editorial team and latest papers at the Journal's blog https://bigdatasoc.blogspot.com/ and website https://journals.sagepub.com/articles/BDS.


----
Big Data & Society (BD&S) is an Open Access peer-reviewed scholarly journal that publishes interdisciplinary work principally in the social sciences, humanities and computing, and their intersections with the arts and natural sciences about the implications of Big Data for societies. The Journal's key purpose is to provide a space for connecting debates about the emerging field of Big Data practices and how they are reconfiguring academic, social, industry, business and governmental relations, expertise, methods, concepts and knowledge. 


Sunday, 1 December 2024

Guest Blog: Navigating Crisis with Joint-Sensemaking: Insights from China's Health Code


by Yang Chen


Qu, J., Chen, L., Zou, H., Hui, H., Zheng, W., Luo, J.-D., Gong, Q., Zhang, Y., Wen, T., & Chen, Y. (2024). Joint-sensemaking, innovation, and communication management during crisis: Evidence from the DCT applications in China. Big Data & Society, 11(3). https://doi.org/10.1177/20539517241270714

 

In times of crisis, technology plays a key role in shaping collective responses. The COVID-19 pandemic highlighted the potential of digital contact tracing (DCT) applications, such as China's Health Code, to manage public health challenges. However, these technologies are not just technical solutions; they are sociotechnical phenomena emerging from complex interactions among technology, society, and governance.

 

The study "Joint-sensemaking, innovation, and communication management during crisis" explores how stakeholders in China navigated these complexities. It challenges the conventional view of innovation diffusion as linear, proposing instead a model of joint-sensemaking—a collaborative process of meaning-making among diverse actors.

 

A significant aspect of this research is the application of the structural hole theory, which examines how certain stakeholders, like official media, act as connectors or "structural hole spanners" within the communication network. These entities bridge different groups, facilitating information flow and influencing public sentiment more directly than traditional two-step flow models suggest. This highlights their critical role in shaping how innovations are perceived and adapted during crises.

 

The article provides empirical insights through the analysis of over 113,000 Weibo posts, revealing two pathways of sensemaking: the Patching and Add-in paths. These pathways illustrate how different interventions shape public sentiment and technology acceptance. By focusing on the dynamic interactions and the bridging roles played by key stakeholders, the study offers a nuanced understanding of innovation diffusion in crisis contexts.

 

We describe how these insights can inform strategies for policymakers and public health authorities. By leveraging the influence of structural hole spanners like official media, stakeholders can foster acceptance and trust, ensuring that DCT technologies are effectively integrated into society. This approach underscores the importance of continuous assessment and adaptation of communication strategies to meet the evolving needs of the public.

 

Overall, the study compels us to rethink crisis management as a transdisciplinary endeavor, involving ongoing dialogue and collaboration across different sectors and perspectives. It shifts the focus from problem-solving to problem-opening, encouraging a more comprehensive and inclusive approach to managing innovations during crises.

Tuesday, 26 November 2024

Bookcast: David Golumbia's Cyberlibertarianism

Bookcast details:

Featuring discussants: George Justice and Frank Pasquale
Book: Cyberlibertarianism: The Right-Wing Politics of Digital Technology (University of Minnesota Press, 2024)

This bookcast has been excerpted from a longer discussion: hear more on the University of Minnesota Press podcast here




Thursday, 7 November 2024

Paper Highlight: "Ethical scaling for content moderation: Extreme speech and the (in)significance of artificial intelligence"

Sahana Udupa discusses “Ethical scaling for content moderation: Extreme speech and the (in)significance of artificial intelligence”, published with her colleagues Antonis Maronikolakis and Axel Wisiorek in Big Data & Society. Read it at this link.

The paper received the ICA Outstanding Article Award for 2024. Congratulations!

Click here to learn more about the Centre for Digital Dignity.

Friday, 25 October 2024

Guest Blog: Automated Informed Consent

by Adam John Andreotta and Björn Lundgren

Andreotta, A. J., & Lundgren, B. (2024). Automated informed consent. Big Data & Society, 11(4). https://doi.org/10.1177/20539517241289439

If you are an average internet user, you probably experience prompts for consent daily. For example, you may have been asked, via a pop-up window, whether you would like to accept all cookies, or only those necessary for a website you have visited to function. While it is a good thing that companies actually seek your consent, repeated requests cause so much annoyance that many of us become complacent about the content of the agreement. No one really has the time to read and accept every privacy policy that applies to them online.

One solution to this problem that has emerged in the last few years is so-called “automated consent.” The idea is that software could learn what your privacy preferences are, and then do all the consenting for you. The idea is promising, but there are technical, legal, and ethical issues associated with it. In our paper, we deal with some of the ethical issues around automated consent. We start by articulating three different versions of automated consent: a reject-all option, a semi-automated option, and a fully automated option, which can feature AI and machine learning. Regardless of which version is used, we argue that a series of normative issues need to be wrestled with before wide-scale adoption of the technology. These include concerns about whether automated consent would inhibit people’s ability to give informed consent; whether automated consent might negatively impact people’s online competencies; whether the personal data collection required for automated consent to function raises new privacy concerns; whether automated consent might undermine market models; and how responsibility and accountability might be impacted by automated consent. Of course, there is much to be said regarding the legal implications of the technology itself, and we invite legal scholars and computer scientists to explore the regulatory implications and technological options related to automated consent in our discussion.
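The distinction between the three versions can be made concrete with a minimal sketch. This is purely illustrative: the class, mode names, and cookie categories below are our own hypothetical stand-ins, not an existing system or API.

```python
# Hypothetical sketch of the three automated-consent modes discussed above.
# All names and cookie categories here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ConsentAgent:
    mode: str = "reject_all"  # "reject_all" | "semi" | "full"
    # Learned per-category choices, e.g. {"analytics": "reject"}
    preferences: dict = field(default_factory=dict)

    def decide(self, category: str) -> str:
        """Return 'accept', 'reject', or 'ask_user' for one cookie category."""
        if category == "necessary":
            # Strictly necessary cookies are allowed in every mode.
            return "accept"
        if self.mode == "reject_all":
            # Reject-all option: decline everything non-essential.
            return "reject"
        if self.mode == "semi":
            # Semi-automated option: apply a stored preference if one
            # exists, otherwise defer the decision back to the user.
            return self.preferences.get(category, "ask_user")
        # Fully automated option: never ask; fall back to a learned
        # default when no category-specific preference is available.
        return self.preferences.get(category, self.preferences.get("default", "reject"))

agent = ConsentAgent(mode="semi", preferences={"analytics": "reject"})
print(agent.decide("necessary"))   # accept
print(agent.decide("analytics"))  # reject
print(agent.decide("marketing"))  # ask_user
```

The sketch also makes one of the paper's normative worries visible: in the fully automated mode, the user is never consulted at all, which is precisely where questions about informed consent become sharpest.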