Thursday, 28 February 2019

Weaving seams with data: Conceptualizing City APIs as elements of infrastructures

by Christoph Raetzsch, Gabriel Pereira, Lasse S Vestergaard, and Martin Brynskov

Listen to the authors of this new article discussing how application programming interfaces (APIs) are weaving new seams of data into the urban fabric, and why they are important as elements of infrastructures.

Video Abstract

Text Abstract: This article addresses the role of application programming interfaces (APIs) for integrating data sources in the context of smart cities and communities. On top of the built infrastructures in cities, application programming interfaces make it possible to weave new kinds of seams from static and dynamic data sources into the urban fabric. Contributing to debates about “urban informatics” and the governance of urban information infrastructures, this article provides a technically informed and critically grounded approach to evaluating APIs as crucial but often overlooked elements within these infrastructures. The conceptualization of what we term City APIs is informed by three perspectives: In the first part, we review established criticisms of proprietary social media APIs and their crucial function in current web architectures. In the second part, we discuss how the design process of APIs defines conventions of data exchanges that also reflect negotiations between API producers and API consumers about affordances and mental models of the underlying computer systems involved. In the third part, we present recent urban data innovation initiatives, especially CitySDK and OrganiCity, to underline the centrality of API design and governance for new kinds of civic and commercial services developed within and for cities. By bridging the fields of criticism, design, and implementation, we argue that City APIs as elements of infrastructures reveal how urban renewal processes become crucial sites of socio-political contestation between data science, technological development, urban management, and civic participation.
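The point about APIs defining conventions of data exchange can be made concrete with a minimal sketch. The endpoint-style payload, field names, and values below are hypothetical illustrations, not drawn from CitySDK or OrganiCity: the producer's schema encodes a particular model of the city (sensors as points measuring one quantity at one time), and the consumer must adopt that model to use the data at all.

```python
import json
from dataclasses import dataclass

# Hypothetical payload as a City API producer might return it.
# The key names and nesting are the "seam": they fix what counts
# as a sensor, a measurement, and a location.
RAW = """
{
  "sensors": [
    {"id": "aq-001", "lat": 56.15, "lon": 10.21,
     "quantity": "no2", "value": 38.5, "observed_at": "2019-02-28T09:00:00Z"},
    {"id": "aq-002", "lat": 56.17, "lon": 10.19,
     "quantity": "no2", "value": 41.2, "observed_at": "2019-02-28T09:00:00Z"}
  ]
}
"""

@dataclass
class Reading:
    sensor_id: str
    quantity: str
    value: float

def parse_readings(payload: str) -> list:
    """Consumer-side parsing: to extract anything, the consumer must
    mirror the producer's conventions (key names, types, structure)."""
    doc = json.loads(payload)
    return [Reading(s["id"], s["quantity"], s["value"]) for s in doc["sensors"]]
```

A consumer calling `parse_readings(RAW)` gets back typed `Reading` objects, but only because it has internalized the producer's schema; any renegotiation of that schema (renaming a field, changing units) propagates through every service stitched onto the seam.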

Sunday, 17 February 2019

Jumping to exclusions? Why “data commons” need to pay more attention to exclusion (and why paying people for their data is a bad idea)

Barbara Prainsack

In January, a large German university proudly announced that Facebook had chosen it for a 6.5m Euro grant to establish a research centre on the ethics of artificial intelligence. The public response to this announcement took the university by surprise: instead of applauding its ability to attract big chunks of industry money, many were outraged about its willingness to take money from a company known to spy and lie. If Facebook was so keen to support research on the ethics of artificial intelligence, people said, they should pay their taxes so that governments could fund more research focusing on these aspects.

Resistance against the growing power of large technology companies has left the ivory towers of critical scholarship and has reached public fora. The acronym GAFA, initially a composite of the names of some of the largest tech giants, Google, Amazon, Facebook, and Apple, has become shorthand for multinational technology companies who, as quasi-monopolists, cause a range of societal harms: They stifle innovation by buying up their competition, they evade and avoid tax, and (some more than others) threaten the privacy of their users. They have also, as Frank Pasquale argued, ceased to be market participants and have become de facto market regulators. As I argued elsewhere, they have become an iLeviathan, the rulers of a new commonwealth where people trade freedom for utility. Unlike with Hobbes’ Leviathan, the utility that people obtain is no longer the protection of their life and their property, but the possibility to purchase or exchange services and goods faster and more conveniently, to communicate with others across the globe in real time, and in some instances, to be able to obtain services and goods at all.

As I argue in a new paper in Big Data & Society, responses to this power asymmetry can be largely grouped into two camps: On the one side are those that seek to increase individual-level control of data use. They believe that individuals should own their data, at least in a moral, but possibly also in a legal sense. Some go as far as proposing that, as an expression of the individual ownership of their data, individuals should be paid by corporations that use their data. For them, individual-level monetisation is the epitome of respecting individual data ownership.

On the other side are those who believe that enhancing individual-level control is insufficient to counteract power asymmetries, and that it can also create perverse effects: For example, paying individuals for their data would create even larger temptations for those who cannot pay for services or goods with money to pay with their privacy instead. From this perspective, individual-level monetisation of data would exacerbate the new social division between data givers and data takers. Instead, what is needed, they argue, is greater collective control and ownership of data.

In this second camp, which in my paper I call the “Collective Control” group (and to which I also count my own work), one solution that is being suggested is the creation of digital data commons. Drawing upon the work of scholars such as Elinor Ostrom and David Bollier, some scholars believe that data commons – understood as resources that are jointly owned and governed by people – are an important way to move digital data out of the enclosures of for-profit corporations and into the hands of citizens (in my paper, I discuss what this may look like in practice). A data commons, some of them argue, is a place where nobody is excluded from benefiting from the data that all of us had a share in creating.

But is this so? As I argue in this article, in much of the literature on physical commons – such as the grazing grounds and fisheries that Elinor Ostrom and other commons scholars analysed – the possibility to exclude people from commons is considered a necessary condition for commons to be governed effectively. When everybody has access to something and nobody can be excluded, it is likely that those who are already more powerful will be able to make the best use of the resource, often at the cost of those less privileged. For these reasons, Ostrom and others conceived commons not as governed by open access regimes – meaning that nobody holds property rights – but as ruled by a common property regime. Such a common property regime would allow the owners of the resource to decide how the resource can be used, and who can be excluded. In other words, to avoid inequitable use of commons, those governing the commons must be able to set the rules, and must be able to exclude.

The issue of which actors are or can be excluded from commons, and how, has received very little systematic attention so far in the growing scholarship on digital data commons. In my article, I propose a systematic way to consider which types of exclusion – from contributing data to the commons, from using or benefitting from the data commons, and from partaking in its governance – are harmful, and how forms and practices of exclusion that cause undue harm can be avoided. In this manner, I argue, it is possible for us to distinguish between data commons that will help to counteract existing power imbalances and to increase data justice on the one hand, and those that use the commons rhetoric to serve particularistic and corporate interests on the other.

In this context, it is also apparent that either way, individual-level monetisation in the form of paying people for their data is a bad idea. Not only would it lure the cash-poor into selling their privacy, but it also plays into the hands of those who seek to individualise relationships between data givers and data takers to avoid a collective response to the increasing power asymmetries in the digital data economy.

Monday, 17 December 2018

Call for Special Theme Proposals for Big Data & Society


The SAGE open access journal Big Data & Society (BD&S) is soliciting proposals for a Special Theme to be published in early 2020. BD&S is a peer-reviewed, interdisciplinary, scholarly journal that publishes research about the emerging field of Big Data practices and how they are reconfiguring academic, social, industry, business and government relations, expertise, methods, concepts and knowledge. BD&S moves beyond usual notions of Big Data and treats it as an emerging field of practices that is not defined by but generative of (sometimes) novel data qualities such as high volume and granularity and complex analytics such as data linking and mining. It thus attends to digital content generated through online and offline practices in social, commercial, scientific, and government domains. This includes, for instance, content generated on the Internet through social media and search engines but also that which is generated in closed networks (commercial or government transactions) and open networks such as digital archives, open government and crowd-sourced data.  Critically, rather than settling on a definition the Journal makes this an object of interdisciplinary inquiries and debates explored through studies of a variety of topics and themes.

Special Themes can consist of a combination of Original Research Articles (8000 words; maximum 6), Commentaries (3000 words; maximum 4) and an Editorial (3000 words). Article Processing Charges will be waived for all Special Theme content. All submissions will go through the Journal’s standard peer review process.

Past special themes for the journal have included: Knowledge Production, Algorithms in Culture, Data Associations in Global Law and Policy, The Cloud, the Crowd, and the City, Veillance and Transparency, Environmental Data, Spatial Big Data, Critical Data Studies, Social Media & Society, Health Data Ecosystems, Assumptions of Sociality, and Data & Agency.

Format of Special Theme Proposals
Researchers interested in proposing a Special Theme should submit an outline with the following information.

- An overview of the proposed theme, how it relates to existing research and the aims and scope of the Journal, and the ways it seeks to expand critical scholarly research on Big Data.

- A list of titles, abstracts, authors and brief biographies. For each, the type of submission (ORA, Commentary) should also be indicated. If the proposal is the result of a workshop or conference that should also be indicated.

- Short Bios of the Guest Editors including affiliations and previous work in the field of Big Data studies. Links to homepages, Google Scholar profiles or CVs are welcome, although we don’t require CV submissions.

- A proposed timing for submission to Manuscript Central.

Information on the types of submissions published by the Journal and other guidelines is available on the Journal’s website.

Timeline for Proposals
Please submit proposals by Monday January 14, 2019 to the Managing Editor of the Journal, Prof. Matthew Zook. The Editorial Team of BD&S will review proposals and make a decision by mid- to late January 2019. For further information or to discuss potential themes, please contact Matthew Zook.

Sunday, 16 December 2018

Illustrating Big Data discourses in the healthcare field

Marthe Stevens, Rik Wehrens and Antoinette de Bont

Over the last few years, there has been a growing critical scholarly discourse that reflects on how Big Data shape our knowledge and our understanding. Primarily the fields of Science and Technology Studies and Critical Data Studies have been instrumental in elaborating the neglected and problematic dimensions of Big Data. However, it is unclear how and to what extent such insights become embedded in the healthcare field.

At the same time, we notice that the healthcare field welcomes initiatives that aim to improve healthcare through Big Data. This development is interesting, as the healthcare field is characterized by a strongly institutionalized set of epistemological principles and generally accepted methodologies. The field favors, for example, high-quality evidence from randomized controlled trials and observational studies to guide treatment decisions. Big Data challenge these principles and methodologies as they promise faster and more representative knowledge on the basis of large-scale data analyses.

In our recent article in Big Data & Society, “Conceptualizations of Big Data and their epistemological claims: a discourse analysis”, we studied the various ways in which Big Data is conceptualized in the healthcare field and assessed the consequences of these different conceptualizations. We constructed five ideal-typical discourses that each frame Big Data in specific ways and use different metaphors to describe it. Three of the discourses (the modernist, instrumentalist and pragmatist) frame Big Data in positive terms and disseminate a compelling rhetoric. Metaphors of capturing, illuminating and harnessing data presume that Big Data are benign and lead to valid knowledge. The scientist and critical-interpretive discourses question the objectivity and effectivity claims of Big Data. Their metaphors of selecting and constructing data convey a different political message, framing Big Data as limited.

The modernist discourse: capturing data
Illustration by: Sue Doeksen

During our analysis, it became apparent that especially the critical-interpretive discourse has not broadly infiltrated the healthcare domain, despite the attention that is given to the problematic assumptions and epistemological difficulties of Big Data in fields such as Science and Technology Studies and Critical Data Studies. We argue that the healthcare field would benefit from a more prominent critical-interpretive discourse, as the other discourses do not address important reflections on the normativity and situatedness of Big Data as well as the social and political processes that create Big Data.

For the article, we worked together with an illustrator to visualize the discourses, as we believed that illustrations could help to deepen our and the reader’s understanding of the discourses. We contacted Sue Doeksen, and she was very willing to help us and think along. What followed was an exciting process in which we and Sue inspired each other. She wanted a clear message to present in a simple illustration. We had to make sure that the essence of the discourses was captured in the images.

This paper is part of a broader research project that focuses on the expectations and imaginaries associated with Big Data in healthcare. In the project, we conceptualize Big Data as a collection of practices, and we aim to study what sorts of meaning it receives and is given, and how it changes practices. During the study, we specifically focus on the epistemological claims of Big Data.

About the authors:

Marthe Stevens is a PhD candidate at the department of Healthcare Governance at the Erasmus School of Health Policy and Management (Erasmus University Rotterdam, the Netherlands) and WTMC. She studies the use of Big Data and Artificial Intelligence in hospital settings in the Netherlands and in Europe. Her work focuses on the expectations and imaginaries associated with these new (data-driven) technologies.

Rik Wehrens is an assistant professor at the department of Healthcare Governance at the Erasmus School of Health Policy & Management. His (STS) research work focuses on issues of knowledge translation and ‘epistemological politics’, such as the coordination work between public health researchers and practitioners in negotiating the meaning of ‘practice-based health research’, and ‘valuation work’ in healthcare improvement programs. His current work explores the roles and expectations of Big Data in healthcare through ethnographic and discursive research ‘lenses’. As a part of the EU-funded project Big Medilytics, he is involved in an international comparison of formal and informal rules for Big Data in various European countries.

Antoinette de Bont is an endowed professor at the Erasmus School of Health Policy and Management. Her research agenda addresses national and international policy priorities, like the diversification of healthcare work or the use of Big Data to increase efficiency in healthcare. The research question that defines her agenda is: how do interdependencies between people and technology explain innovation in healthcare?

Thursday, 6 December 2018

Holiday break

The Big Data and Society Editorial Team will be on winter break from December 21st until January 7th. Please expect delays in the processing and reviewing of your submission during that time. Many thanks for your understanding.

Thursday, 4 October 2018

3rd international Data Power conference, 12th/13th September 2019, Bremen, Germany

DATA POWER: Global in/securities 
Thursday 12th and Friday 13th SEPTEMBER 2019, University of Bremen, Germany

With increasingly globalized digital infrastructures and a global digital political economy, we face new concentrations of power, leading to new inequalities and insecurities with respect to data ownership, data geographies and different data-related practices. The Global in/securities theme of the 2019 Data Power conference attends to questions around these phenomena, asking: How does data power further or contest global in/securities? How are global in/securities constructed through or against data? How do civil society actors, governments, and people engage with societal and individual in/securities through and with data? What are appropriate ontologies to think about data and persons? How may we envisage a just data society? And what does decolonizing data in/securities look like?

The organising committee will select papers for a special theme proposal to be submitted to Big Data & Society. For more information and the call for papers (deadline: January 31st 2019), please visit the conference website.

Andreas Hepp, University of Bremen, ZeMKI

Monday, 18 June 2018

Data Associations in Global Law and Policy

Lyria Bennett Moses, Fleur Johns, Daniel Joyce

Shifting forms can create toeholds for thought and action. Complex social phenomena that tend to confound diagnosis may sometimes be grasped obliquely in the course of their transformation. The aim of this special issue is to trace mutations underway in associations rendered or experienced in data. In particular, contributors to this issue reflect upon associations traceable in data that are of a juridical nature (or could be so understood), or that have salience for legal institutions and norms. This is something other than inviting consideration of ‘problems’ that technology makes for law. It is something other, too, than thinking about whether law does or does not determine or reflect socio-technical practice, or vice versa, and how some law-technology correspondence might ‘properly’ be maintained. Instead, contributors engage here in a collective experiment of envisioning data as vectors of lawful relations on the global plane, and at other scales.

This is unfinished business for Big Data & Society. In this journal’s opening issue, Rob Kitchin argued that “the development of digital humanities and computational social sciences… propose radically different ways to make sense of culture, history, economy and society”. But what “sense” could “Big Data empiricism”, as Kitchin described it, make in, of and for global law and policy? This is among the questions that the contributors to this special issue take up. Neither digital technology nor law is pivotal to this inquiry, so much as their irrepressible leaking and morphing into would-be or could-be versions of the other.

As paradigmatic a shift as the turn to epistemologies of Big Data might seem, making connections between these emergent epistemologies and older associations is also an important task of this collection. Sheila Jasanoff traces, for instance, the history of the production of “a panoptic viewpoint from which the entire diversity of human experience can be seen, catalogued, aggregated, and mined” from the mid-twentieth century, especially in the emergence of the “global environment” as an “actionable object for law and policy”. Naveen Thayyil likewise draws an analogy between change in weather and climatological studies from the 1960s onwards (from instrument reading techniques to computer modeling) and parallel shifts in approaches to risk regulation (from conventional risk assessment to precautionary approaches, the latter increasingly advanced through “big data” automation). Ben Hurlbut similarly connects “scientifically authorized imaginations of future risk” on the global plane to earlier incarnations of the “republic of science” assembled around pandemic risk since the nineteenth century. Other contributions to this volume re-frame contemporary phenomena by reference to associations of more recent provenance: Sarah Logan analyses “post 9-11 mass surveillance” and the “anxious information state” it enshrines. Likewise, Gavin Smith; Kath Albury, Jean Burgess, Ben Light, Kane Race, Rowan Wilken; and Daniel Joyce focus on “data cultures” ascendant during the past decade and the legal and political conflicts and connections that surface amid them.

The protagonists and environs of the stories told in these pages vary greatly. Not all are of a kind that one might expect to find featured in a journal about “Big Data and Society”. Scientists keep company with museum designers; government officials rub shoulders with journalists and activists; terrorists and those who hunt them mingle with weather forecasters; software and search engine developers are interspersed with “quantified selves”; dating app users fraternize with bird watchers contributing to citizen science initiatives. What Daniel Joyce calls “the challenge and opportunity of big data” turns out to have stakes for many who may not see themselves as so invested or enrolled. Nonhuman protagonists are similarly diverse and varied in sophistication and scale. They include files (both paper and digitized), reports, remote sensors, satellites and diverse forms of scientific equipment, viruses and the organisms that transmit them, government computer systems and the smart phones ubiquitous across many parts of the world. Settings range from Hawaii’s Mauna Loa observatory to the ICRC’s Red Cross and Red Crescent Museum in Geneva, from Indonesian bird markets to gatherings of scientific experts, from courtrooms and security agencies to the hybrid space of screen-mediated sexual encounters. To draw all these persons, places and things into a collective account of contemporary juridical mediation in data is, from one angle, preposterous. And yet the very preposterousness of this agglomeration conveys something of the voracious indifference, roving opportunism, and endless repurposing characteristic of new analytical methods and software designs that aim to extract actionable insight from massive datasets using machine learning and other automated techniques.

The dilemmas with which these protagonists grapple, or the conditions under which they come to be datafied, are similarly diverse. Nonetheless, common quandaries recur in the stories that our contributors relate. One is the difficulty of trying to generate or project a sense of a whole out of unresolved difference, or making the global – as such – available to experience and asserting sovereignty over its scalar elements. As this volume makes plain, this quintessentially modern challenge persists amid tendencies that seem aimed in another direction: towards data-driven personalization, nano-surveillance and therapeutic attention to the singular.

A second theme that emerges from this collection surrounds the actual and potential substance of legal order. Long-held ideas endure about sociality and culture, on one hand, and market-based exchange, on the other, as that which comprises the “stuff” of which order is made, and that which legal norms and institutions must foster and defend. Yet this collection entertains a further, speculative idea: that there may be forms of relation of growing significance, manifest or realised in data, not reducible to the expression or defence of exchange or socio-emotional connection, but which nonetheless have legal ramifications. That is, digital data may be “lay[ing] the groundwork for new claims and appeals to conscience” and responsibility (Jasanoff in this volume) and constituting “moral and technical borderland[s] where powerful agencies… coalesce” (Smith in this volume). Consider, for example, relations of correlation between data patterns associated with a terror suspect, and data patterns identified with other persons, in the surveillance work of which Sarah Logan writes in this volume. Correlations in data create a basis for supposition and the visualization of juridical futures, in such a setting, without necessarily corresponding to any apparent economic or social relation. Consider, also, the celebrity-follower relation maintained online, of which Daniel Joyce writes in this volume when discussing recent judicial efforts to protect reputation online. This tie is not quite explicable in terms of economic relations, nor in terms of conventional sociality, although both may be imbricated within it.

Thus, when the contributors to this volume write of data associations in global law and policy, they write not just of pre-existing relations finding expression (accurately or otherwise) in data or being reoriented or “nudged” by data-driven operations and designs. They write also of data as a medium for publicly imagining and re-imagining those relations. Distinct legal and political cultures predispose publics towards adopting quite divergent ways of perceiving and representing global conditions in data, contributors to this volume show. Only by taking account of this divergence and variety, Sheila Jasanoff contends herein, might we recognize “official forgetfulness and underestimation” in the “data practices of ruling institutions” and discern the unanswered pleas for justice embedded in those. We invite you to read and engage with these provocative works and look forward to tracing their afterlife in your writings.

Photo credit: Dominik Bartsch via Flickr CC BY 2.0