Big Data & Society, July 2022, https://doi.org/10.1177/20539517221113774.
Alison Powell @a_b_powell
Funda Ustek-Spilda @fundaustek
Sebastian Lehuede @s_lehuede
Irina Shklovski @quillis
We all know by now that ‘big tech’ data-driven systems can create and perpetuate harm and injustice. The ‘technology for good’ sector (including ‘data for good’ and ‘AI for good’) aims to address this situation by making redress of these harms a business proposition within a technology firm. ‘Technology for good’ businesses are often values-led, and funders and incubators encourage these investments through strategic funding rounds and incubation programmes. Yet despite this ‘ethical turn,’ the statements, products and development processes in ‘tech for good’ spaces embed deterministic views of ethics. They assume that ethics is a ‘problem to be solved’ with more data or more effective technology, and that good intentions and action by an individual or a company can bring about better ethical outcomes. They also fail to attend to the influence of business contexts themselves, and therefore fail to engage with the normative or justice-oriented grounding of these critiques, showing the significant work yet to be done.
In this paper we draw on three years of fieldwork to identify and address enduring ethical gaps in ‘technology for good’ practices. As part of a turn towards ‘ethics in practice,’ our research examines three gaps that persist in ‘technology for good’: (1) a misplaced individualization of virtue; (2) the constraints on ethical action created by social context; and (3) the often unaccounted-for mismatch between ethical intentions and outcomes across time and space. We ask, ‘how can we understand and address contextual gaps and build more complex capacity for work in data and AI ethics, especially given the increasing focus and hype of the “technology for good” space?’
We present two potential ways of addressing these gaps: first, focusing on the social and cultural dimensions of technology development; and second, arguing that ethical considerations need to be positioned across individual and collective experiences. We then reflect on how adding different ethical perspectives, such as the capability approach and care ethics, can enrich current thinking on data and AI ethics – using these approaches to address issues at a systemic level, accepting and addressing harms that might be produced at various scales and times, and that may be subject to legal recourse or regulation. We also identify how research in this space can unwittingly perpetuate these gaps, and reflect on our own practices as a means of attending to this.
Without addressing these paradoxes and ethical gaps, ‘technology for good’ efforts might divert time, energy, and critical attention away from considerations that could transform the way we think about and practise technology development and data and AI ethics. Perhaps something else is possible.