Friday, 13 November 2020

The Sale of Heritage on eBay: Market Trends and Cultural Value

by Mark Altaweel and Tasoula Georgiou Hadjitofi

Big Data & Society 7(2), DOI https://doi.org/10.1177/2053951720968865. First published: Nov 11th, 2020.

Understanding the heritage market, which includes sales of portable objects from the ancient to the more recent past, has been a challenge for heritage experts. In part, this is because the market is not only dynamic but has also included illegal sales that are difficult to track. This study attempts to understand the heritage market through a potentially important proxy: eBay’s site for selling heritage on the international market. By looking at eBay, a relatively large volume of sales can be monitored to determine patterns of sales, including which cultures are sold, the types of objects sold, and the materials they are made from. Additionally, it is possible to determine where items are sold from, including which countries sell given objects. While sales data provide potential insights into the wider market, a site like eBay also lets us see how clearly cultural value may drive sales. Cultural value is defined as the appreciation of, or beliefs about, given cultures, including ancient ones, that one develops from media, educational, and other resources. Effectively, cultural value is driven by our experiences in modern life that make us value certain cultures, including ones from the ancient past, more than others. The link between cultural value and monetary value is potentially a strong one, as the results of this work show.

Data on eBay sales are partially unstructured. This requires an approach that can not only obtain relevant data but also apply machine learning methods to determine relevant terms describing the cultures, types of items, and materials of given objects sold. Named entity recognition (NER) is a category of methods that attempts to meaningfully categorise terms so that they can be summarised, including with quantitative methods that capture wider term patterns. Conditional random fields (CRFs) are one way to apply NER: word patterns are represented as an undirected graph, and the probabilities of terms being associated with other terms help to categorise a given term. For instance, “Iron Age” can be categorised as a period, based on the association of “Age” with “Iron”; however, “iron” in the phrase “iron sword” can be categorised as a metal given its association with “sword”. These term associations help to categorise the unstructured descriptive terms provided as part of the sale data on eBay. Additionally, term dictionaries can help where a lack of training data makes categorisation more difficult. Overall, the approach applied in this work allows us to categorise relatively accurately the cultures, types of objects, and the materials objects are made from. Structured data within eBay also tell us the countries where sales happen, the prices items sold at, and even the sellers (who were anonymised in the article but tracked for statistical purposes).
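To make the “Iron Age” versus “iron sword” example concrete, the disambiguation can be illustrated with simple neighbour-context rules of the kind a CRF learns probabilistically from training data. This is only a minimal sketch, not the authors’ actual pipeline: the dictionaries and labels below are invented for illustration, and a real CRF would weight many such context features rather than apply hard rules.

```python
# Hypothetical term dictionaries standing in for the article's
# (much larger) dictionaries of periods, materials, and object types.
PERIOD_HEADS = {"age", "period", "dynasty"}           # e.g. "Iron Age"
OBJECT_TYPES = {"sword", "vase", "figurine", "amulet"}
MATERIALS = {"iron", "bronze", "terracotta", "ivory"}

def categorise(tokens):
    """Label each token using its right-hand neighbour as context,
    mimicking how a CRF uses term associations: 'iron' before 'age'
    marks a period, while 'iron' before 'sword' marks a material."""
    labels = []
    for i, tok in enumerate(tokens):
        low = tok.lower()
        nxt = tokens[i + 1].lower() if i + 1 < len(tokens) else ""
        if low in MATERIALS and nxt in PERIOD_HEADS:
            labels.append("PERIOD")                   # "Iron" in "Iron Age"
        elif low in PERIOD_HEADS and labels and labels[-1] == "PERIOD":
            labels.append("PERIOD")                   # "Age" following "Iron"
        elif low in MATERIALS:
            labels.append("MATERIAL")                 # "iron" in "iron sword"
        elif low in OBJECT_TYPES:
            labels.append("OBJECT")
        else:
            labels.append("O")                        # outside any category
    return labels

print(categorise(["Iron", "Age", "iron", "sword"]))
# → ['PERIOD', 'PERIOD', 'MATERIAL', 'OBJECT']
```

In a trained CRF these context cues would emerge as weighted features over neighbouring tokens rather than hand-written rules, which is what lets the same surface form receive different labels in different listings.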

For the study period (October 21, 2018 – January 3, 2020), countries such as the UK, US, Cyprus, Germany, and Egypt are among the leading sellers of antiquities, or heritage objects, on eBay. These countries have shown strong cultural value for the cultures that appear to sell the most on eBay. Cultures such as the Romans, ancient Egypt, the Vikings (or Norse/Danes), and the ancient Near East have been a fixture in Western education and media and are the top-selling cultures. Additionally, items such as jewellery, statues and figurines, and religious items sold the most; masks and vessels (e.g., vases) may not have as high a sales volume but generally fetch higher prices. Metal, stone, and terracotta are the most commonly sold materials, while ivory, papyrus, and wood obtain some of the higher prices. It is also clear that sales are often driven by a relatively small number of sellers, with about 40% of sales over part of the study period dominated by 10 sellers. Sales disproportionately concentrate in Western countries, but emerging markets such as Thailand are evident. Many countries that dominate sales in one category of items also dominate sales for other objects (Figure 1). Some countries, however, specialise in certain cultures or object types, such as Canada, Latvia, India, and Israel. The key finding of this work is that cultural value is linked with sales on eBay, and that eBay can, at least in part, act as a proxy for wider antiquities sales, given that it demonstrates Western markets dominate sales, similar to what has been noticed anecdotally. Other sites, and even social media, would need to be monitored over the long term to obtain a fuller idea of the market. This work is a start, with the code and data used provided as part of the outputs.


Figure 1. Maps showing where cultures were sold (a) and total sales for countries (USD).

Wednesday, 11 November 2020

Techno-solutionism and the standard human in the making of the COVID-19 pandemic

Stefania Milan introduces her commentary "Techno-solutionism and the standard human in the making of the COVID-19 pandemic" in Big Data & Society 7(2), https://doi.org/10.1177/2053951720966781. First published: October 20, 2020.

Video abstract


Text abstract

Quantification is particularly seductive in times of global uncertainty. Not surprisingly, numbers, indicators, categorizations, and comparisons are central to governmental and popular responses to the COVID-19 pandemic. This essay draws insights from critical data studies, the sociology of quantification and decolonial thinking, with occasional excursions into the biomedical domain, to investigate the role and social consequences of counting, broadly defined, as a way of knowing about the virus. It takes a critical look at two domains of human activity that play a central role in the fight against the virus outbreak, namely medical sciences and technological innovation. It analyzes their efforts to craft solutions for their user base and explores the unwanted social costs of these operations. The essay argues that the over-reliance of biomedical research on “whiteness” for lab testing and the techno-solutionism of the consumer infrastructure devised to curb the social costs of the pandemic are rooted in a distorted idea of a “standard human” based on a partial and exclusive vision of society and its components, which tends to overlook alterity and inequality. It contends that to design our way out of the pandemic, we ought to make space for distinct ways of being and knowing, acknowledging plurality and thinking in terms of social relations, alterity, and interdependence.

Keywords: COVID-19, calculation, whiteness, contact tracing, decolonial, pluriverse

Tuesday, 3 November 2020

The Norwegian Covid-19 tracing app experiment revisited

by Kristin B Sandvik

In my Big Data & Society commentary ‘Smittestopp: If you want your freedom back download now’ (published on July 28, 2020 within the Viral Data symposium) I mapped out the first phase of the Norwegian version of a Covid-19 tracing app. My goal in the commentary was to “engage critically with the Smittestopp app as a specifically Norwegian technofix ... co-created by the mobilization of trust and dugnaðr, resulting in the launch of an incomplete and poorly defined data-hoarding product with significant vulnerabilities.” Since I submitted the final version of the commentary, the app has gone down in flames and a rancorous domestic blame game has ensued, only for the app to be rescheduled for a phoenix-like rebirth in late 2020.

In this blog post, I want to contemplate a set of issues pertaining specifically to the legacy of Smittestopp, but of relevance to other Covid-19 tracing apps. This relates to how democratic government actors respond to criticism of digital initiatives in the context of emergencies; and the type of challenges civil society actors face in holding public and private sector actors accountable. For context, I begin by giving a recap (a longer version here) of the rise and fall of Smittestopp. All translations from Norwegian are my own.

The rise and fall of Smittestopp

On April 16, 2020, the Norwegian COVID-19 tracing app Smittestopp was launched to great fanfare. The Ministry of Health, the Norwegian Institute of Public Health (NIPH), the developer of Smittestopp, Simula – a government-founded research laboratory (and thus a public entity) – and the prime minister, Erna Solberg, underscored the civic duty to download the app. Norway’s total population is 5.3 million. At the time of the launch, “enough people” was thought to be 50–60% of the population over the age of 16. At its height, the app had almost 900,000 active users. By the end, in early June, it had been downloaded 1.6 million times.

Starting weeks before the actual launch of the app in mid-April, the Norwegian Data Protection Authority (DPA), parts of the media, some politicians and a large slice of the domestic tech community had grown exasperated with the procurement processes, functionalities, and designated application of Smittestopp. From the launch date, the app was hampered by technical problems and subjected to a deluge of criticism from self-organizing members of the Norwegian tech and data protection community for being experimental, intrusive, non-transparent, relying on unfounded techno-optimism and abusing trust. As succinctly phrased by a tech activist, the grievance was that NIPH/Simula did not listen to experts, and “made a surveillance app with infection tracing, not an infection tracing app”.

On June 12, the DPA issued a formal warning to the NIPH that it was considering a temporary ban on the app for being disproportionately intrusive with respect to personal data. On June 15, the NIPH stopped collecting data, declaring that this meant a weakening of preparedness against increased infection rates, as we are ‘losing time to develop and test the app’ and, at the same time, losing our capability to fight the spread of COVID-19. On June 16, Amnesty published a report declaring that “Bahrain, Kuwait and Norway have run roughshod over people’s privacy, with highly invasive surveillance tools which go far beyond what is justified in efforts to tackle COVID-19”. On July 6, the DPA issued a temporary ban on the app. On September 28, it was finally over. The Minister of Health, Bent Høie, clarified that 40 million Norwegian kroner of taxpayer money and months of work had to be scrapped and that a new app would only be ready by Christmas.

Dealing with criticism while Norwegian

While the heated online and offline exchanges over why Smittestopp failed and who was to blame are best described as mudslinging, they also shed light on how Norwegian government actors – rightly praised for the Covid-19 response and generally unaccustomed to international condemnation – respond to criticism of digital initiatives in the context of emergencies. After the cancellation of Smittestopp, Simula launched a determined effort to manage the post-Smittestopp narrative, through the publication of a document dedicated to absolving itself of most criticism, and forceful media engagement, including friendly reporting and less friendly tirades on Twitter and elsewhere about the flaws of both criticisms and critics. Similarly, the health authorities have been curiously unwilling to accept the legitimacy of the criticism of the app’s legality and the reasons why it was banned. The storyline projected in these engagements is worth summarizing.

Blaming Big Tech. According to what can best be described as a ‘blaming Big Tech narrative’, Smittestopp was on a path to become a feat of digital innovation but was undermined by opposition from Google and Apple, effectively rendering nation states helpless in the face of the tactics and strategies of Big Tech. Simula contends that “The unique experience of developing Smittestopp in collaboration with the authorities in other countries and Europe's top technologists has been a reminder of how helpless national states can become in the face of global IT companies”, with reference to a critical report (here) from Trinity College on the privacy problems with Google Play. It is vital for nations to regulate Big Tech – but the lack of adequate regulation was not why Smittestopp failed.

Blaming Amnesty. For a country that prides itself on being a humanitarian superpower and human rights trailblazer, the domestic human rights sector has been curiously silent with respect to the government’s Covid-19 efforts, and Smittestopp in particular. When international actors entered the ranks of the critics, further acrimony ensued. Simula has tussled with Amnesty International, suggesting that the organization was not up front about its criticism when it initially contacted the government, calling the June report “absolute trash” and of “exceptionally low quality”, and claiming that Amnesty is abusing its position and power, providing poorly documented conclusions, behaving in an unprofessional manner, and letting itself be abused by activists (!), stating that “the meaninglessness of their claims is high as the sky” (the audio file, in Norwegian, is here).

Blaming the DPA. In radio and television appearances following the formal cancellation of Smittestopp, the Minister of Health blamed the DPA for the fiasco, arguing that the solution being worked on in June would have been effective and would have complied with data protection regulations. The ban also meant that the health authorities did not get access to important COVID-19 tracing data. The Minister and the NIPH furthermore disagreed that the app was illegal and would like “clearer advice” in the future. The head of the DPA replied that it was not a lack of advice but things the stakeholders themselves did, as well as the inability to document the utility of the app, that made it disproportionately intrusive and thus illegal – and that the law applies, also in times of national emergency.

The future of civic activism in emergencies

The passion with which activists, technologists, bureaucrats and scholars have fought against the app, and the strongly worded accusations flying from both sides, suggest that struggles over personal data and privacy – and thus over the rule of law and democratic accountability – will continue to get fiercer. The lack of interest in adopting a lessons-learned perspective, the absence of humility, the government’s continued and willful misunderstanding of how the GDPR – and the DPA – works, and the branding of critics as selfish, abusive and cowardly have been an extraordinary spectacle to watch. As observed by a tech activist:

“We have a high level of trust in the state and the government – because the government rarely demands more of its citizenry than is deemed reasonable. Simula, on the other hand, acts as if this trust gives us the opportunity to intervene into the private sphere in ways we would call scary if undertaken by a country like Russia or even Great Britain.”

However, an important takeaway from this controversy is what it suggests about the future of civic activism in a country like Norway, which has not historically been home to strong foundations, labs, non-profits or grassroots NGOs focused on holding actors democratically accountable in the digital domain. From early March, individual data activists have been using the Freedom of Information Act to painstakingly map interactions between the government, the health authorities and the DPA on Smittestopp. The focus has subsequently turned to efforts to get access to code and to data sets. At present, work is ongoing to clarify what exactly is going on with the remnants of the Smittestopp infrastructure. This attention to the zombification of data structures post-app is important: across the globe, Covid-19 apps have collected an enormous amount of data. Governments must be held accountable for mission and function creep – and for what happens when the apps are set on the path to digital graveyards. This raises questions about what new types of challenges to competencies and methods the established civil society and academia are facing and should be ready to meet.

Smittestopp is dead! Long live Smittestopp!

What have we learned from all of this? While Smittestopp is small fry, it is also an instructive illustration of the affordances of the crisis label and the pitfalls of digital transformation in a highly digitized society. The experiment now continues. The preferred solution for the new Norwegian app is GAEN (Google and Apple Exposure Notification), with data stored mostly locally. Possibly due to the public brouhaha, the tender process received only one offer – which was accepted – from the Danish company Netcompany, developer of the Danish Covid-19 tracing app, also called ‘Smittestopp’. After a public vote of sorts, it is clear that the second installment of the Norwegian app experiment will also be called Smittestopp. The regulations for the initial app were repealed on October 9, and no regulations have been issued for the new app. Netcompany has emphasized the importance of better public communication – but as this blog post has illustrated, public engagement on the legality and legitimacy of digital interventions, and figuring out the how-to of this, remains more important than ever.

Wednesday, 21 October 2020

Revisiting the Black Box Society by Rethinking the Political Economy of Big Data

Special Theme Issue
https://journals.sagepub.com/page/bds/collections/revisitingtheblackboxsociety

Guest lead editors: Benedetta Brevini* and Frank Pasquale**

* University of Sydney
** Brooklyn Law School

Throughout the 2010s, scholars explored the politics and sociology of data, its regulation and its role in informing and guiding policymakers, such as the importance of quality health data in the COVID-19 epidemic to “flatten the curve.” However, all too much of this work is being done in “black box societies”: jurisdictions where the analysis and use of data are opaque, unverifiable, and unchallengeable. As a result, far too often data are used as a tool for social, political, and economic control, with biases often distorting decision making, accompanied by abundant narratives of tech solutionism and even salvationism.

The Black Box Society was one of the first scholarly accounts of algorithmic decision making to synthesize empirical research, normative frameworks, and legal argument, and this symposium of commentaries reflects on what has happened since its publication. Much has happened since 2015 that both vindicates and challenges the book’s main themes. Yet recurring examples of algorithmically driven injustice raise the question of whether transparency – the foundational normative value in The Black Box Society – is a first step toward a more emancipatory deployment of algorithms and AI, is an easily deflected demand, or actually worsens matters by rationalizing the algorithmic ordering of human affairs.

To address these issues, this symposium features the work of leading thinkers who have explored the interplay of politics, economics, and culture in domains ordered algorithmically by managers, bureaucrats, and technology workers. By bringing social scientists and legal experts into dialogue, we aim both to clarify the theoretical foundations of critical algorithm studies and to highlight the importance of engaged scholarship, which translates the insights of the academy into an emancipatory agenda for law and policy reform. While the contributions are diverse, a unifying theme animates them: each offers a sophisticated critique of the interplay between state and market forces in building or eroding the many layers of our common lives, as well as the kaleidoscopic privatization of spheres of reputation, search, and finance. Unsatisfied with narrow methodologies of economics or political science, they advance politico-economic analysis. They therefore succeed in unveiling the foundational role that the turn to big data has in organising economic and social relations. All the contributors help us imagine practical changes to prevailing structures that will advance social and economic justice, mutual understanding, and ecological sustainability. For this and much else, we are deeply grateful for their insightful work.

Editorial by Benedetta Brevini and Frank Pasquale, "Revisiting the Black Box Society by rethinking the political economy of big data"

Ifeoma Ajunwa, in “The Black Box at Work,” describes the data revolution of the workplace, which simultaneously demands workers surrender intimate data and then prevents them from reviewing how it is used.

Mark Andrejevic, in “Shareable and Un-Shareable Knowledge,” focuses on what it means to generate actionable but non-shareable information, reaffirming the urgency of intelligible evaluation as a form of dignity.

Margaret Hu’s article “Cambridge Analytica’s Black Box” surveys a range of legal and policy remedies that have been proposed to better protect consumer data and informational privacy.

Paul Prinsloo examines “Black Boxes and Algorithmic Decision-making in (Higher) Education” to show how the education sector is beginning to adopt technologies of monitoring and personalization that are similar to the way the automated public sphere serves political information to voters.

Benedetta Brevini, in “Black Boxes, not Green: Mythologizing AI and Omitting the Environment” documents how AI runs on technology, machines and infrastructures that deplete scarce resources in their production, consumption and disposal, thus placing escalating demands on energy and accelerating the climate emergency.

Gavin Smith develops the concept of our “right to the face” in “The Face is the Message: Theorising the Politics of Algorithmic Governance in the Black Box City” as he explores how algorithms are now responsible for important surveillance of cities, constantly passing judgment on mundane activities.

Nicole Dewandre’s article, “Big Data: From Fears of the Modern to Wake-up Call for a New Beginning” applies a deeply nuanced critique of modernity to algorithmic societies arguing that Big Data may be hailed as the endpoint or materialisation of a Western modernity, or as a wake-up call for a new beginning.

Jonathan Obar confirms this problem empirically in “Sunlight Alone is Not a Disinfectant: Consent and the Futility of Opening Big Data Black Boxes,” and proposes solutions to more equitably share the burden of understanding.

Kamel Ajji in “Cyborg Finance Mirrors Cyborg Social Media” outlines how The Black Box Society inspired him to found 21 Mirrors, “a nonprofit organization aimed at analyzing, rating and reporting to the public about the policies and practices of social media, web browsers and email services regarding their actual and potential consequences on freedom of expression, privacy, and due process.”

Tuesday, 8 September 2020

Emerging models of data governance in the age of datafication

by Marina Micheli, Marisa Ponti, Max Craglia and Anna Berti Suman 

Big Data & Society 7(2), https://doi.org/10.1177/2053951720948087. First published: Sept 1, 2020.

The article synthesizes and critically examines a ‘moving target’: the various practices being advanced for the governance of personal data. In recent years, following scandals like Cambridge Analytica and new data protection regulations like the GDPR, there has been mounting attention on how data collected by big tech corporations and business entities might be accessed, controlled, and used by other societal actors. Scholars, practitioners and policy makers have been exploring opportunities for agency for data subjects, as well as alternative data regimes that could allow public bodies to use such data for their public interest mission. Yet the current circumstances – the result of a tradition of ‘corporate self-regulation’ in the digital domain and an overall laissez-faire approach (albeit increasingly divergent by geopolitical context) – see a few technology corporations in a hegemonic position, having de facto established ‘quasi-data monopolies’. This is reflected in the asymmetry of power between data corporations, which hold most of the decision-making power over data access and use, and the other stakeholders.

The article increases knowledge about the practices for data governance currently being developed by various societal actors beyond ‘big tech’. It does so by describing four data governance models, emphasizing the power of social actors to control how data is accessed and used to produce different kinds of value. A relevant outcome of the article lies in the heuristic tools it proposes, which could be useful for better understanding and further examining the emerging models of data governance – looking in particular at the relations between stakeholders and the power (un)balances between them.

The idea for this study originates from a workshop that we organised in the context of the project Digitranscope at the Centre of Advanced Studies of the Joint Research Centre of the European Commission. Seventeen invited experts – from academia, the public sector, policymaking, research and consultancy firms – took part in the event, back in October 2018, to discuss the policy implications of the governance of (and with) data. While preparing the workshop, we realised how the various labels circulating in the policy arena to tackle data governance – such as data sovereignty, data commons, data trusts, etc. – tended to be used equivocally to refer to different concepts (technical solutions, legal frameworks, economic partnerships, etc.), with their meaning shifting slightly according to the context. Furthermore, during the workshop, participants highlighted the widespread lack of knowledge and practical understanding of possible alternatives to the ‘data extraction’ approach of big online platforms, as well as the need to find ways to use data collected by private companies for the public interest, and the urgency of considering data subjects as key stakeholders in the governance of data. With all these insights in mind, we decided to engage in the research that led to this article.

The key contributions of this publication, according to our view, are conceptual and empirical.

  • We developed a ‘social-science informed’ definition of data governance that draws from science and technology studies and critical data studies (hence, also from some key publications of this journal). We understood data governance as the power relations between all the actors affected by, or having an effect on, the way data is accessed, controlled, shared and used, the various socio-technical arrangements set in place to generate value from data, and how value is redistributed between actors. Such a definition allows moving beyond concerns of technical feasibility, efficiency discourses and ‘solutionist’ thinking. Instead, it points to the actual goals for which data is managed, emphasizing who benefits from it, the power (un)balances among stakeholders, the kind of value produced, and the mechanisms (including underlying principles and systems of thought) that sustain these approaches.

  • We conducted a review of relevant resources from the scientific and grey literature on the practices of data governance that led to the identification of four emerging models: data sharing pools, data cooperatives, public data trusts and personal data sovereignty. As this is a rapidly evolving field, we did not aim to offer an exhaustive picture of all possible models – hence these four should not be understood as comprehensive. They also have to be contextualised within our conceptual approach, the time span in which the research was conducted, and the European focus of the article. Yet, they provide a basis for understanding how the emerging data governance models are (re)thinking and redressing power asymmetries between big data platforms and other actors. In particular, they show how both civil society and public bodies are key actors for democratising data governance and redistributing the value produced through data.
A social science-informed conceptualisation of data governance allows seeing ‘through the infrastructure’ and encourages asking certain questions, such as: What principles guide data sharing and use? What is done with data, and who can access and participate in its governance? What value is produced, and how is it redistributed? These kinds of questions are particularly relevant today, given that the policy debate around data governance is very active at the moment (especially in Europe). The future of the data governance models examined in this article – and of any model that allows more actors to control data and use it for purposes beyond the generation of profit for big tech corporations – depends on the policy actions and the legal frameworks that will be developed to sustain them.

Vignette of the data governance models examined in the article.

Keywords: Data governance, Big Data, digital platforms, data infrastructure, data politics, data policy

Wednesday, 2 September 2020

COVID-19 is spatial: Ensuring that mobile Big Data is used for social good

by Age Poom, Olle Järv, Matthew Zook and Tuuli Toivonen

Big Data & Society 7(2), https://doi.org/10.1177/2053951720952088. First published: August 28, 2020

The mobility restrictions related to the COVID-19 pandemic have resulted in the biggest disruption to individual mobilities in modern times. Hot spots, quarantine, closed borders, video-conferencing, social distancing and the temporary closure of workplaces, schools, restaurants and recreational facilities are all profoundly about distance, separation, and space. Examining the geographical aspect of the pandemic is important in understanding its broad implications, including the broader societal impacts of containment policies.

The avalanche of mobile Big Data – location and time-stamped data from mobile phone call records, operating systems, social media or apps – makes it possible to study the spatial effects of the crisis in spatiotemporal detail, even at national and global scales. Beyond health care objectives such as understanding how virus transmission is mediated by human mobility or evaluating adherence to restrictions, mobile Big Data also allows us to understand the changes in people’s daily interactions, mobilities and socio-spatial responses across population groups.

Our advocacy for the use of these data, however, is tempered both by our experiences in recent months with the serious limitations of using mobile Big Data and our unease with the power of these same data to track, surveil and discipline social behaviour at the scale of entire populations. 

Thus, we pose the question: How can we use mobile Big Data for social good, while also protecting society from social harm? Drawing on the Estonian and Finnish experiences during the early phases of the COVID-19 pandemic, we highlight issues with quickly developed ad hoc data products as well as the “black box” solutions (Pasquale, 2015) offered by large platform companies that created “new digital divides” among researchers (boyd and Crawford, 2012).

We argue that these examples demonstrate a clear need to re-evaluate the public-private relationships with mobile Big Data and propose two strategic pathways forward.

First, we call for transparent and sound mobile Big Data products that provide relevant up-to-date longitudinal data on the mobility patterns of dynamic populations. To help increase their usefulness, data products should be transparent about their production methodology, and ensure easy access and stability. 

Second, there is also a need to develop trustworthy platforms for collaborative use of raw individual-level data. Secure and privacy-respectful access to near real-time raw data is needed for developing and testing sound methodologies for the above-mentioned data products. This would help bridge the Big Data digital divide, enable scientific innovation, and offer needed flexibility in responding to unanticipated questions about changing locations and mobilities in times of crisis. To be clear, we do not view this as simple to achieve, particularly as we weigh what kind of institution might best fill this role, or how “social good” is defined and operationalized in practice. But addressing these issues through public debate and academic discourse will leave us better prepared for the next crisis.

Summing up,
  • We need harmonized and representative data about human mobility for better crisis preparedness and social good in general;
  • Methodological transparency about mobile Big Data products is vital for open societies and capacity building;
  • Access to mobile Big Data to develop feasible methodologies and baseline knowledge for public decision-making is needed before the next crisis occurs;
  • Recognizing the fundamental spatiality of the current COVID-19 crisis, and of crises more generally, is the most important point of all.

Mobile Big Data can help us to better understand and address the important spatial dimensions of the COVID-19 pandemic and many other social phenomena. The challenge is doing so responsibly (Zook et al., 2017) and not normalizing a lack of spatial privacy.

References

boyd, d, Crawford, K (2012) Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society 15(5): 662–679. https://doi.org/10.1080/1369118X.2012.678878

Pasquale, F (2015) The Black Box Society. Cambridge: Harvard University Press.

Zook, M, Barocas, S, boyd, d, et al. (2017) Ten simple rules for responsible big data research. PLOS Computational Biology 13(3): e1005399. https://doi.org/10.1371/journal.pcbi.1005399

Keywords: mobile Big Data, mobility, COVID-19, spatial data infrastructure, social good, mobile phone data, social media data, privacy


Tuesday, 1 September 2020

Designing for human rights in AI

Evgeni Aizenberg and Jeroen van den Hoven introduce their publication "Designing for human rights in AI" in Big Data & Society 7(2), https://doi.org/10.1177/2053951720949566. First published: Aug 18, 2020.

Video abstract

Text abstract
In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. Artificial intelligence (AI) systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident that these technological developments are consequential to people’s fundamental human rights. Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context and the critical input of societal stakeholders who are impacted by the technology. On the other hand, calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness. Bridging these socio-technical gaps and the deep divide between abstract value language and design requirements is essential to facilitate nuanced, context-dependent design choices that will support moral and social values. In this paper, we bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.

Keywords: Artificial intelligence, human rights, Design for Values, Value Sensitive Design, ethics, stakeholders