Wednesday, 11 January 2023

Ground Truth Tracings (GTT): On the Epistemic Limits of Machine Learning

by Edward B. Kang (@edwardbkang)

Kang, E. B. (2023). Ground truth tracings (GTT): On the epistemic limits of machine learning. Big Data & Society, 10(1). https://doi.org/10.1177/20539517221146122  

This article is a direct response to the growing division I have been seeing between what might be called the “technical” and “sociotechnical” communities in artificial intelligence/machine learning (AI/ML). It started as a foray into the industry of machine “listening,” with the purpose of examining the extent to which practitioners engage with the complexity of voice in developing techniques for listening to and evaluating it. Through my interviews, however, I found that voice, along with many other qualitatively complex phenomena like “employee fit,” “emotion,” and “personality,” gets flattened in the context of machine learning. The piece thus starts with a specific scholarly interest in the interface of voice and machine learning, but ends with a broader commentary on the limitations of machine learning epistemologies as seen through machine listening systems.
 
Specifically, I develop an intentionally non-mathematical methodological schema called “Ground Truth Tracings” (GTT) to make explicit the ontological translations that reconfigure a qualitative phenomenon like voice into a usable quantitative reference, a process also known as “ground-truthing.” Given that every machine learning system requires a referential database that serves as its ground truth – i.e., what the system assumes to be true – examining these assumptions is key to exploring the strengths, weaknesses, and beliefs embedded in AI/ML technologies. In one example, I bring attention to a voice analysis “employee-fit” prediction system that analyzes a potential candidate’s voice to predict whether the individual will be a good fit for a particular team. Using GTT, I qualitatively show why this system is not feasible as an ML use case and is unlikely to be as robust as it is marketed to be.
 
Finally, I acknowledge that although this framework may serve as a useful tool for investigating claims around ML applicability, it does not immediately engage questions of subjectivity, stakes, and power. I thus further splinter this schema along these axes to develop a perhaps imperfect but practical heuristic called the “Learnability-Stakes” table for assessing and thinking about the epistemological and ethical soundness of machine learning systems, writ large. I’m hoping this piece will help foster interdisciplinary dialogue among the wide range of practitioners in the AI/ML community, which includes not just computer scientists and ML engineers, but also social scientists, activists, journalists, policy makers, humanities scholars, and artists, broadly construed.

Tuesday, 13 December 2022

Johann Laux and Fabian Stephany introduce their new paper on "The Concentration-after-Personalisation Index (CAPI)"

Johann Laux and Fabian Stephany introduce their new paper on "The Concentration-after-Personalisation Index (CAPI)", out in Big Data & Society, doi:10.1177/20539517221132535. First published December 5, 2022.

Video abstract



Abstract.

Firms are increasingly personalising their offers and services, leading to an ever finer-grained segmentation of consumers online. Targeted online advertising and online price discrimination are salient examples of this development. While personalisation's overall effects on consumer welfare are expectably ambiguous, it can lead to concentration in the distribution of advertising and commercial offers. Constellations are possible in which a market is generally open to competition, but the targeted consumer is only made aware of one possible seller. For the consumer, such a market could effectively resemble a monopoly. We call such extreme cases ‘targeting pockets’. Competition-law metrics such as the Herfindahl–Hirschman Index and traditional means of public oversight of adverts would not detect this concentration. We, therefore, suggest a novel metric, the Concentration-after-Personalisation Index (CAPI). The CAPI treats every consumer as a separate ‘market’, computes a measure of concentration for personalised adverts and offers for each individual consumer separately, and then averages the result to measure the exposure experienced by an average consumer. We demonstrate how the CAPI can serve as a monitoring tool for regulators and auditors and thus help to enforce existing consumer law as well as proposed new regulations such as the European Union's Digital Services Act and its Artificial Intelligence Act. We further show how adding noise via randomly distributed non-personalised adverts can dilute the potential harm of overly concentrated personalisation. We demonstrate how the CAPI can identify the optimal degree of added noise, balancing the protection of consumer choice with the economic interests of advertisers.
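On a first reading, the CAPI can be summarised as an average of per-consumer concentration indices. Below is a minimal sketch of that reading in Python; the function names, toy data, and the choice of the Herfindahl–Hirschman Index (HHI) as the per-consumer concentration measure are illustrative assumptions on my part, not the authors' replication code.

```python
import random
from collections import Counter

def consumer_hhi(impressions):
    """HHI for one consumer's advert impressions.

    `impressions` is a list of seller identifiers, one per advert shown.
    Squared shares are summed, so the index runs from near 0 (many
    sellers) to 1.0 (one seller only, i.e. a 'targeting pocket').
    """
    total = len(impressions)
    counts = Counter(impressions)
    return sum((n / total) ** 2 for n in counts.values())

def capi(impressions_by_consumer):
    """Treat each consumer as a separate 'market': compute concentration
    per consumer, then average to get the exposure of the average consumer."""
    indices = [consumer_hhi(imps) for imps in impressions_by_consumer]
    return sum(indices) / len(indices)

# Toy example: three sellers, so the aggregate market looks competitive,
# but each consumer is only ever shown one seller.
consumers = [["A"] * 10, ["B"] * 10, ["C"] * 10]
print(capi(consumers))  # 1.0 -- maximal concentration after personalisation

# Diluting with randomly distributed non-personalised adverts, as the
# abstract suggests, pushes the average concentration back down.
random.seed(0)
diluted = [imps + random.choices(["A", "B", "C"], k=5) for imps in consumers]
print(capi(diluted))  # < 1.0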


Keywords: 

  1. Targeted advertising
  2. personalisation
  3. consumer welfare
  4. consumer protection
  5. consumer law
  6. competition law
  7. EU law
  8. platform regulation
  9. Digital Services Act
  10. AI Act
  11. Unfair Commercial Practices Directive
  12. digital markets
  13. law and economics
  14. novel metrics

Tuesday, 15 November 2022

Learning accountable governance: Challenges and perspectives for data-intensive health research networks

by Sam Muller

Muller, S. H. A., Mostert, M., van Delden, J. J. M., Schillemans, T., & van Thiel, G. J. M. W. (2022). Learning accountable governance: Challenges and perspectives for data-intensive health research networks. Big Data & Society, 9(2). https://doi.org/10.1177/20539517221136078

In our article, we address the accountability of large-scale health data research. Accountability is crucial to ensuring democratic control and to steering health data research towards contributing public value. Yet while previous research on health data paid much attention to accountability as a norm for doing and organising health data research, it did not specify what accountability processes should look like in practice. In particular, it did not take into account that much health data research takes place in international networks, in which public and private organisations collaborate in a relatively horizontal way.
 
In our analysis of the current state of accountability, we found that governing such networks to foster accountability faces several challenges. The fact that health data research takes place in complex networks puts considerable pressure on realising clear and stable accountability relationships. Moreover, smooth cooperation is difficult because the norms and principles that could guide accountability processes remain unclear. Lastly, an effective design for information provision and debate is lacking.
 
To address the shortcomings of current accountability in health data research networks, we propose focusing on accountability as a means of learning, from insights and feedback, how good governance can be achieved. We suggest two pathways for pursuing learning accountability. First, an integrated governance structure needs to be developed for learning to occur. Provisional goals need to be established by building on overlapping consensus; this is crucial for developing mutual understanding, shared motivation and common commitment between organisations engaged in health data research. Second, ongoing deliberation and open communication about collaboration are required for reflexive dialogue. Stakeholders (publics and communities affected by health data research) should be represented and enabled to participate. Empowering them in the form of a collective forum enables learning from their experiences and holding health data research to account.

Thursday, 20 October 2022

Jill Rettberg introduces a new paper on "Algorithmic failure as a humanities methodology: Machine learning's mispredictions identify rich cases for qualitative analysis"

Jill Rettberg introduces a new paper on "Algorithmic failure as a humanities methodology: Machine learning's mispredictions identify rich cases for qualitative analysis", out in Big Data & Society, doi:10.1177/20539517221131290. First published October 18, 2022.

Video abstract



Abstract.

This commentary tests a methodology proposed by Munk et al. (2022) for using failed predictions in machine learning as a method to identify ambiguous and rich cases for qualitative analysis. Using a dataset describing actions performed by fictional characters interacting with machine vision technologies in 500 artworks, movies, novels and videogames, I trained a simple machine learning algorithm (using the kNN algorithm in R) to predict whether or not an action was active or passive using only information about the fictional characters. Predictable actions were generally unemotional and unambiguous activities where machine vision technologies were treated as simple tools. Unpredictable actions, that is, actions that the algorithm could not correctly predict, were more ambivalent and emotionally loaded, with more complex power relationships between characters and technologies. The results thus support Munk et al.'s theory that failed predictions can be productively used to identify rich cases for qualitative analysis. This test goes beyond simply replicating Munk et al.'s results by demonstrating that the method can be applied to a broader humanities domain, and that it does not require complex neural networks but can also work with a simpler machine learning algorithm. Further research is needed to develop an understanding of what kinds of data the method is useful for and which kinds of machine learning are most generative. To support this, the R code required to produce the results is included so the test can be replicated. The code can also be reused or adapted to test the method on other datasets.
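For readers who want the shape of the workflow without opening the replication materials: the published code is in R, but the logic translates directly. The sketch below, in Python with scikit-learn and invented feature names and data, is only an analogy to the commentary's setup, not its actual code. The methodological point sits in the last step: it is the misclassified cases, not the accurate predictions, that are surfaced for close qualitative reading.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical dataset: one row per action performed by a fictional
# character, with character traits as features and an active/passive label.
df = pd.DataFrame({
    "char_age":    [12, 34, 56, 23, 45, 67, 19, 28, 51, 40],
    "char_gender": [0, 1, 0, 1, 1, 0, 0, 1, 0, 1],
    "is_active":   [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
})
X, y = df[["char_age", "char_gender"]], df["is_active"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=1
)

# Deliberately simple classifier, echoing the commentary's kNN setup.
model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# Algorithmic failure as methodology: collect the mispredicted rows as
# candidates for qualitative analysis, rather than optimising accuracy.
failures = X_test[model.predict(X_test) != y_test]
print(failures)
```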

Keywords: 

  1. Machine vision
  2. machine learning
  3. qualitative methodology
  4. machine anthropology
  5. digital humanities
  6. algorithmic failure


Monday, 26 September 2022

Comparing algorithmic platforms through differences

Tseng, Y.-S. (2022). Algorithmic empowerment: A comparative ethnography of two open-source algorithmic platforms – Decide Madrid and vTaiwan. Big Data & Society, 9(2). https://doi.org/10.1177/20539517221123505

What can critical algorithm/data studies learn from comparative urbanism?

 

In my latest paper, I repurpose the idea of comparative urbanism to offer an alternative epistemology of algorithmic decision-making, based on two open-source platforms in Taipei and Madrid. In Ordinary Cities, Robinson (2006) develops a cosmopolitan approach for comparing cities across axes of ‘difference’ within and beyond the Global North/South dichotomy. The aim is to destabilise the conventionally Anglo-Saxon understanding of urban modernisation and world cities, and the Northern reduction of the cultural and social richness of Southern cities to impoverished counterparts. For Robinson, the ‘incommensurable difference’ (Robinson, 2006, p. 41) between the Global South and North, between the wealthy and poor, ‘needs to be viewed less as a problem to be avoided and more as a productive means for conceptualising contemporary urbanism’ (McFarlane and Robinson, 2010, p. 767; Robinson, 2011). Since then, there has been much discussion of how critical scholars can retheorise the urban by learning from cities in the Global South and East (Robinson, 2011, 2015; McFarlane, 2011).

 

Just as urban scholars have charted out grounds for comparing cities deemed different in socio-economic and political terms (see Robinson, 2015, 2022), critical algorithm/data studies has begun to identify the need to go beyond the Northern concept of data universalism, as well as the dichotomy of Global North/South (Milan & Treré, 2019). For Milan and Treré (2019, p. 319), what is much needed is to undo the ‘cognitive injustice that fails to recognise non-mainstream ways of knowing the world’ in data universalism. Against this backdrop, my paper demonstrates how comparative urbanism can provide a new epistemology and theorisation of otherwise overlooked algorithmic decision-making for citizen empowerment, through the differences and similarities of the vTaiwan and Decide Madrid platforms.


Urban comparison offers two methodological tactics for bringing the two algorithmic platforms – vTaiwan and Decide Madrid – into dialogue with the current conceptualisation of algorithmic devices and platforms in human geography and critical algorithmic studies.

 

Firstly, the genetic comparative tactic starts not with the technicality of the two platforms but with tracing their shared trajectories. Both platforms were developed by activists and civic hackers involved in, or inspired by, the Occupy movements in Madrid (the 15M) and Taipei (the Sunflower Movement), and both were situated within wider ongoing democratisation in Spain and Taiwan. It is through these genetic trajectories that the comparative study showcases how algorithmic platforms can be imbued with democratic claims and promises by politicians, activists and civic hackers. Crucially, situating algorithmic platforms in their shared trajectories of urban social movements carves out an important conceptual position: one that does not assume algorithmic platforms to be fundamentally anti-democratic, but allows each to develop its own pathway towards becoming more (or less) democratic.

 

Secondly, the generative comparative tactic conceptualises how algorithmic systems empower citizens across different geographical locations, governmental institutions and algorithm-human relationships. The notion of algorithmic empowerment is generated by examining how the two algorithmic systems actualise democratic possibilities and practices across different sets of algorithmic orderings and human actions. Such algorithmic differences, however subtle, matter for opening up or closing down political possibilities (Amoore, 2013, 2019, 2020) – in this case, for empowering citizens in political decisions.

 

Looking beyond this paper, comparative urbanism has much more to offer critical data/algorithm studies. Considering the cultural richness and diversity of the highly digitalised urban societies in the Global East, comparative studies of algorithmic platforms or devices through differences are urgently needed to broaden (or provincialize) the Northern theorisation of algorithms and data practices. As digital geographers have already noted, ‘there is a pressing need to destabilize the dominance of the Global North as a universal placeholder and de facto field site for geographical research about the digital’ (Ash et al., 2018, p. 37).


BD&S Webinar on Decolonizing Data Universalism, September 29th 2022 9:00-11:00 (GMT+1)

Come join the BD&S Webinar on Decolonizing Data Universalism, September 29th 2022, 9:00-11:00 (GMT+1).



Monday, 19 September 2022

BD&S Webinar on Big Data and the Environment, September 22nd 2022 15:30-17:00 (GMT+1)

Come join the BD&S Webinar on Big Data and the Environment, September 22nd 2022, 15:30-17:00 (GMT+1).