Barbara Prainsack
In January, a large German university proudly announced that Facebook had chosen it to receive a €6.5 million grant to establish a research centre on the ethics of artificial intelligence. The public response to this announcement took the university by surprise: instead of applauding its ability to attract big chunks of industry money, many were outraged at its willingness to take money from a company known to spy and lie. If Facebook were so keen to support research on the ethics of artificial intelligence, people said, it should pay its taxes so that governments could fund more research on these questions.
Resistance against the growing power of large technology companies has left the ivory towers of critical scholarship and reached public fora. The acronym GAFA, initially a composite of the names of some of the largest tech giants – Google, Amazon, Facebook, and Apple – has become shorthand for multinational technology companies that, as quasi-monopolists, cause a range of societal harms: they stifle innovation by buying up their competition, they evade and avoid tax, and (some more than others) they threaten the privacy of their users. They have also, as Frank Pasquale argued, ceased to be market participants and become de facto market regulators. As I have argued elsewhere, they have become an iLeviathan: the rulers of a new commonwealth in which people trade freedom for utility. Unlike with Hobbes’ Leviathan, the utility that people obtain is no longer the protection of their life and their property, but the possibility to purchase or exchange services and goods faster and more conveniently, to communicate with others across the globe in real time, and, in some instances, to be able to obtain services and goods at all.
As I argue in a new paper in Big Data & Society, responses to this power asymmetry can largely be grouped into two camps: On the one side are those who seek to increase individual-level control over data use. They believe that individuals should own their data, at least in a moral sense, and possibly also in a legal one. Some go so far as to propose that, as an expression of this individual ownership, individuals should be paid by the corporations that use their data. For them, individual-level monetisation is the epitome of respecting individual data ownership.
On the other side are those who believe that enhancing individual-level control is insufficient to counteract power asymmetries, and that it can also create perverse effects: paying individuals for their data, for example, would create an even stronger temptation for those who cannot pay for services or goods with money to pay with their privacy instead. From this perspective, individual-level monetisation of data would exacerbate the new social division between data givers and data takers. Instead, they argue, what is needed is greater collective control and ownership of data.
In this second camp, which in my paper I call the “Collective Control” group (and in which I include my own work), one proposed solution is the creation of digital data commons. Drawing on the work of scholars such as Elinor Ostrom and David Bollier, some believe that data commons – understood as resources that are jointly owned and governed by people – are an important way to move digital data out of the enclosures of for-profit corporations and into the hands of citizens (in my paper, I discuss what this may look like in practice). A data commons, some of them argue, is a place where nobody is excluded from benefiting from the data that all of us have had a share in creating.
But is this so? As I argue in the article, in much of the literature on physical commons – such as the grazing grounds and fisheries that Elinor Ostrom and other commons scholars analysed – the ability to exclude people from the commons is considered a necessary condition for commons to be governed effectively. When everybody has access to something and nobody can be excluded, those who are already more powerful are likely to make the best use of the resource, often at the cost of those less privileged. For these reasons, Ostrom and others conceived of commons not as governed by open-access regimes – meaning that nobody holds property rights – but as ruled by a common property regime. Such a common property regime allows the owners of the resource to decide how the resource can be used, and who can be excluded. In other words, to avoid inequitable use of commons, those governing the commons must be able to set the rules, and must be able to exclude.
The issue of which actors are, or can be, excluded from commons, and how, has so far received very little systematic attention in the growing scholarship on digital data commons. In my article, I propose a systematic way to consider which types of exclusion – from contributing data to the commons, from using or benefitting from the data commons, and from partaking in its governance – are harmful, and how forms and practices of exclusion that cause undue harm can be avoided. In this manner, I argue, we can distinguish between data commons that help to counteract existing power imbalances and increase data justice on the one hand, and those that use the rhetoric of the commons to serve particularistic and corporate interests on the other.
In this context, it is also apparent that either way, individual-level monetisation in the form of paying people for their data is a bad idea. Not only would it lure the cash-poor into selling their privacy, but it also plays into the hands of those who seek to individualise relationships between data givers and data takers in order to avoid a collective response to the increasing power asymmetries in the digital data economy.