Tuesday, 20 April 2021

One size does not fit all: Constructing complementary digital reskilling strategies using online labour market data

Fabian Stephany introduces his new article, 'One size does not fit all: Constructing complementary digital reskilling strategies using online labour market data' in Big Data & Society doi:10.1177/20539517211003120. First published April 14th 2021.

Video abstract

Digital technologies are radically transforming our work environments and the demand for skills, with certain jobs being automated away and others demanding mastery of new digital techniques. This global challenge of rapidly changing skill requirements due to task automation overwhelms workers. The digital skill gap widens further as technological and social transformation outpaces national education systems and precise skill requirements for mastering emerging technologies, such as Artificial Intelligence, remain opaque. Online labour platforms could help us to understand this grand challenge of reskilling en masse. Online labour platforms form a globally integrated market that mediates between millions of buyers and sellers of remotely deliverable cognitive work. This commentary argues that, over the last decade, online labour platforms have become the ‘laboratories’ of skill rebundling: the combination of skills from different occupational domains. Online labour platform data allow us to establish a new taxonomy of the individual complementarity of skills. For policy makers, education providers and recruiters, a continuous analysis of complementary reskilling trajectories enables automated, individual and far-sighted suggestions on the value of learning a new skill in a future of technological disruption.

Keywords: Artificial intelligence, automation, Big Data, networks, online labour platforms, skills
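
The article's central idea, that the value of learning a new skill depends on how it complements the skills a worker already has, lends itself to a simple illustration. The sketch below is a hypothetical simplification (toy profiles and a plain co-occurrence measure), not the method used in the paper: it scores pairs of skills by pointwise mutual information across freelancer skill bundles and suggests complementary skills to learn next.

    from collections import Counter
    from itertools import combinations
    import math

    # Hypothetical toy data: skill bundles observed on individual freelancer profiles.
    profiles = [
        {"python", "machine learning", "data visualisation"},
        {"python", "machine learning", "statistics"},
        {"graphic design", "illustration", "branding"},
        {"python", "data visualisation", "statistics"},
        {"graphic design", "data visualisation", "branding"},
    ]

    skill_counts = Counter(s for p in profiles for s in p)
    pair_counts = Counter(frozenset(pair) for p in profiles for pair in combinations(sorted(p), 2))
    n = len(profiles)

    def complementarity(a, b):
        """Pointwise mutual information between two skills: positive when
        they appear on the same profiles more often than chance."""
        p_a = skill_counts[a] / n
        p_b = skill_counts[b] / n
        p_ab = pair_counts[frozenset((a, b))] / n
        if p_ab == 0:
            return float("-inf")
        return math.log(p_ab / (p_a * p_b))

    def suggest(skill, top=3):
        """Rank candidate skills to learn next, given a skill a worker already has."""
        candidates = [s for s in skill_counts if s != skill]
        return sorted(candidates, key=lambda s: complementarity(skill, s), reverse=True)[:top]

    print(suggest("python"))

In the article itself, complementarity is derived from large-scale online labour platform data rather than a handful of toy profiles, but the same logic of scoring skill pairs underlies the reskilling suggestions described in the abstract.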


Monday, 12 April 2021

Heritage-based tribalism in Big Data ecologies: Deploying origin myths for antagonistic othering

Chiara Bonacchi introduces her new article with Marta Krzyzanska, 'Heritage-based tribalism in Big Data ecologies: Deploying origin myths for antagonistic othering' in Big Data & Society doi:10.1177/20539517211003310. First published March 23rd 2021.

Video abstract

This article presents a conceptual and methodological framework to study heritage-based tribalism in Big Data ecologies by combining approaches from the humanities, social and computing sciences. We use such a framework to examine how ideas of human origin and ancestry are deployed on Twitter for purposes of antagonistic ‘othering’. Our goal is to equip researchers with theory and analytical tools for investigating divisive online uses of the past in today’s networked societies. In particular, we apply notions of heritage, othering and neo-tribalism, and both data-intensive and qualitative methods to the case of people’s engagements with the news of Cheddar Man’s DNA on Twitter. We show that heritage-based tribalism in Big Data ecologies is uniquely shaped as an assemblage by the coalescing of different forms of antagonistic othering. Those that co-occur most frequently are the ones that draw on ‘Views on Race’, ‘Trust in Experts’ and ‘Political Leaning’. The framings of the news that were most influential in triggering heritage-based tribalism were introduced by both right- and left-leaning newspaper outlets and by activist websites. We conclude that heritage-themed communications that rely on provocative narratives on social media tend to be labelled as political and not to be conducive to positive change in people’s attitudes towards issues such as racism.


Wednesday, 31 March 2021

Welcoming Our New Assistant Editors

I am very happy to introduce the six new assistant editors who have joined the editorial team at Big Data and Society. As assistant editors they will be working with the co-editors on special themes, helping authors set up blog posts, managing the journal's Twitter account, and overseeing a number of special projects related to the journal. Welcome to you all!

Andrew Dwyer: University of Durham, UK
Rohith Jyothish: Jawaharlal Nehru University (JNU), New Delhi, India
Gemma Newlands: University of Amsterdam and BI Norwegian Business School
Natalia Orrego Tapia: Pontifical Catholic University of Chile
Yu-Shan Tseng: University of Helsinki, Finland
Joshua Uyheng: Carnegie Mellon University, USA

More details on the backgrounds, interests and expertise of all our assistant editors can be found at https://bigdatasoc.blogspot.com/p/editoial-team.html

We are also delighted to have Margie Cheesman (Oxford Internet Institute) and Julie Saperstein (University of Kentucky) continue as AEs.

Finally, I also want to say goodbye to Assistant Editor Mei-chun Lee (now Dr. Lee), who is stepping down from this role. Thanks for all your hard work; we wish you all the best for the future.


Thursday, 11 March 2021

“Reach the Right People”: The Politics of “Interests” in Facebook’s Classification System for Ad Targeting

by Kelley Cotter, Mel Medeiros, Chankyung Pak, Kjerstin Thorson

In recent years, targeted advertising has gained a prominent place in American politics. In particular, political campaigns, candidates, and advocacy organizations have turned to Facebook for its voluminous array of options for targeting users according to “interests” inferred by machine learning algorithms. In this study, we explored for whom this (algorithmic) classification system for ad targeting works in order to contribute to conversations about the ways such systems produce knowledge about the world rooted in power structures. To do this, we critically analyzed the classification system from a variety of vantage points, particularly focusing on the representation of people of color (POC), women, and LGBTQ+ people. First, we drew on donated user data, which included a list of political ad categories people had been sorted into on Facebook. We also examined Facebook's documentation, training materials, and patents for insight into the inner workings of the system. Finally, we entered into the system via Facebook’s tools for advertisers to explore its contents.
 
Through this investigation, we catalogued a series of cases that reveal the political order enacted via Facebook’s classification system for ad targeting. We particularly highlight four themes. First, we demonstrate how certain ad categories reflect what Joy Buolamwini calls a "coded gaze," or the “embedded views that are propagated by those who have the power to code systems” (2016: n.p.).  Second, we highlight how a disproportionate number of ad categories for women and people of color hint at an unmarked user and what Tressie McMillan Cottom (2020) calls "predatory inclusion." Third, we describe cases of ad categories that flatten dimensions of identity and suggest KimberlĂ© Crenshaw’s (1989) notion of a "single-axis framework" of identity, which fails to capture the intersectionality of identity. Fourth, we illustrate how Facebook's classification system exhibits something akin to what Whitney Phillips (2018) refers to as "both-sides-ism" by allowing for ad categories that could either represent an interest in civil rights or the endorsement of hateful ideologies.
 
Through these cases, we argue that Facebook's classification system for ad targeting is necessarily political as a result of its underlying technical and commercial logics and the human choices embedded in datafication processes. The system prioritizes the interests of the socially and economically powerful and represents those who have been historically marginalized not on their own terms, but on the terms of those occupying more privileged positions. We suggest that, as a tool for political communication, Facebook’s classification system may have downstream implications for the political voice and representation of marginalized communities to the extent that political campaigns, advocacy groups, and activists increasingly rely on it for cultivating and mobilizing supporters. As Facebook weighs whether and when to reinstate political advertising, our study urges continued critical reflection on whose “interests” are served by Facebook’s classification system (and others like it).

References

Buolamwini J (2016) InCoding — In the Beginning Was the Coded Gaze. Available at: https://medium.com/mit-media-lab/incoding-in-the-beginning-4e2a5c51a45d  

Cottom TM (2020) Where platform capitalism and racial capitalism meet: The sociology of race and racism in the digital society. Sociology of Race and Ethnicity 6(4): 441–449. DOI: 10.1177/2332649220949473

Crenshaw K (1989) Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum Article 8: 139. 

Phillips W (2018) The oxygen of amplification: Better practices for reporting on extremists, antagonists, and manipulators online. Data & Society. https://datasociety.net/library/oxygen-of-amplification/

Tuesday, 9 March 2021

Welcoming Dr. Sachil Singh as a co-editor of Big Data and Society

We are very happy to announce that Dr. Sachil Singh (Queen's University, Canada) is joining the editorial staff of the journal as a co-editor. Dr. Singh will expand our editorial expertise in health and medicine, critical race studies and surveillance/sorting.

Dr. Sachil Singh
Surveillance Studies Centre, Department of Sociology, Queen’s University, Canada

Dr. Singh is a sociologist whose main areas of study are medical sociology, critical race studies and surveillance. The common thread in his work is attention to the racial outcomes of digital sorting technologies, which has allowed him to research topics as varied as credit scoring in South Africa and healthcare in Canada. His current work examines the extent to which healthcare practitioners in Canada rely on pithy conclusions about race and ethnicity from medical apps to inform patient diagnosis and treatment.

 

Monday, 22 February 2021

The Algorithm Audit: Scoring the algorithms that score us

by Shea Brown

Big Data & Society. doi:10.1177/2053951720983865. First published: January 28th 2021.


“The Algorithm Audit: Scoring the algorithms that score us” outlines a conceptual framework for auditing algorithms for potential ethical harms. In recent years, the ethical impact of AI has come under increasing scrutiny, leading to growing mistrust of AI and increased calls for mandated audits of algorithms. While there are many excellent proposals for ethical assessments of algorithms, such as Algorithmic Impact Assessments or the similar Automated Decision System Impact Assessments, these are too high-level to be put directly into practice without further guidance. Other proposals have a narrower focus on technical notions of bias or transparency (Mitchell et al., 2019). Moreover, without a unifying conceptual framework for carrying out these evaluations, there is a worry that the ad hoc nature of the methodology could lead to potential harms being missed.


We present an auditing framework that can serve as a more practical guide for comprehensive ethical assessments of algorithms. We clarify what we mean by an algorithm audit, explain key preliminary steps to any such audit (identifying the purpose of the audit, describing and circumscribing its context) and elaborate on the three main elements of the audit instrument itself: (i) a list of possible interests and rights of stakeholders affected by the algorithm, (ii) a list and assessment of metrics that describe key ethically salient features of the algorithm in the relevant context, and (iii) a relevancy matrix that connects the assessed metrics to the stakeholder interests. We provide a simple example to illustrate how the audit is supposed to work, and discuss the different forms the audit result could take (a quantitative score, a qualitative score, or a narrative assessment).
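
To make the structure of the instrument easier to picture, here is a minimal sketch of how the three elements could be represented and rolled up into a quantitative score. All names, metric values, weights and the aggregation rule are illustrative assumptions, not the instrument prescribed in the article.

    # A minimal, hypothetical sketch of the audit instrument described above;
    # names, metrics, weights and the scoring rule are illustrative assumptions.

    # (i) Stakeholder interests potentially affected by the algorithm.
    interests = ["non-discrimination", "privacy", "accuracy of decisions"]

    # (ii) Assessed metrics describing ethically salient features (0 = poor, 1 = good).
    metrics = {
        "demographic_parity": 0.6,
        "data_minimisation": 0.9,
        "calibration_error": 0.7,
    }

    # (iii) Relevancy matrix: how strongly each metric bears on each interest (0-1).
    relevancy = {
        ("demographic_parity", "non-discrimination"): 1.0,
        ("demographic_parity", "accuracy of decisions"): 0.3,
        ("data_minimisation", "privacy"): 1.0,
        ("calibration_error", "accuracy of decisions"): 0.8,
        ("calibration_error", "non-discrimination"): 0.4,
    }

    def interest_score(interest):
        """Relevancy-weighted average of metric assessments for one stakeholder interest."""
        pairs = [(m, w) for (m, i), w in relevancy.items() if i == interest]
        total_weight = sum(w for _, w in pairs)
        if total_weight == 0:
            return None  # no metric speaks to this interest
        return sum(metrics[m] * w for m, w in pairs) / total_weight

    for interest in interests:
        print(interest, interest_score(interest))

A qualitative or narrative audit result could be produced from the same matrix by reporting, for each interest, which metrics drive its score rather than aggregating them into a single number.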


Our motivations for this separation of descriptive (metrics) and normative (interests) features are many, but one important reason is that this separation forces an auditor to carefully consider each stakeholder explicitly, and to consider the possible relevance of various features of the algorithm (metrics) to that stakeholder’s interests. It is important to note that different stakeholders in the same category (e.g. students, loan applicants, those up for parole, etc.) are often affected in very different ways by the same algorithm, often on the basis of race, ethnicity, gender, age, religion, or sexual orientation (Benjamin, 2019). We argue that understanding the context of an algorithm is a precursor not only to enumerating stakeholder interests generally, but also to identifying particular sub-categories of stakeholders whose identification is relevant for the ethical assessment of an algorithm (e.g. students of color, Hispanic loan applicants, male African-Americans up for parole, etc.). These stakeholders might face particular threats, and attention to context allows us to guard against thinking of groups of stakeholders as homogeneous entities that will be negatively or positively affected simply by virtue of the type of engagement with an algorithm, and to recognize the socio-political and socio-technical factors and power dynamics at play (Benjamin, 2019; D’Ignazio and Klein, 2020; Mohamed et al., 2020).


The proposed audit instrument yields an ethical evaluation of an algorithm that could be used by regulators and others interested in doing due diligence, while paying careful attention to the complex societal context within which the algorithm is deployed. It can also help institutions mitigate the reputational, financial, and ethical risk that a poorly performing algorithm might present.  



  


Sunday, 13 December 2020

How the pandemic made visible a new form of power

By Engin Isin and Evelyn Ruppert

The political pandemic known by its biomedical name COVID-19 has thrown us off balance. We have steadied ourselves (somewhat) by taking a historical view on forms of power and re-evaluating some long-held assumptions about power in social and political theory to see what the present might reveal. The result is this article, in which we broadly provide an account of three forms of power (sovereign, disciplinary and regulatory) and suggest that the coronavirus pandemic has brought these three forms of power into sharp relief while making particularly visible a fourth form of power that we name ‘sensory power’. We place sensory power within an historical series straddling the 17th and 18th (sovereign), 18th and 19th (disciplinary), and 19th and 20th (regulatory) centuries. The birth of sensory power straddles the 20th and 21st centuries and, just like earlier forms of power, it neither replaces nor displaces them but nestles and intertwines with them. So, rather than announcing the birth of a new form of power that marks this age or epoch, we articulate its formation as part of a much more complex configuration of power. We conclude the article with a discussion of the kinds of resistance that sensory power elicits.

This article follows the second edition of our book, Being Digital Citizens, which was published a few months earlier. Its new chapter examines various forms of digital activism. We were inspired by the range, spread, and diffusion of such acts of resistance, yet were unclear about the form of power to which they were a response. Examining these acts of resistance compelled us to take a historical view and to name a fourth form of power. With the onset of the pandemic, that form of power became visible, and through the article we came to name it sensory power.

 

Both the article and the book are available at the links below and are open access.

 

Isin, Engin, and Evelyn Ruppert. 2020. ‘The Birth of Sensory Power: How a Pandemic Made It Visible’. Big Data & Society 7(2). Open access.

 

Isin, Engin F., and Evelyn Ruppert. 2020. Being Digital Citizens. Second edition. London: Rowman & Littlefield. Go to the 'features' tab to download the open access PDF.