by Edward B. Kang (@edwardbkang)
Kang, E. B. (2023). Ground truth tracings (GTT): On the epistemic limits of machine learning. Big Data & Society, 10(1). https://doi.org/10.1177/20539517221146122
This article is a direct response to the growing division I have been seeing between what might be called the “technical” and “sociotechnical” communities in artificial intelligence/machine learning (AI/ML). It started as a foray into the industry of machine “listening,” with the purpose of examining the extent to which practitioners engage with the complexity of voice when developing techniques for listening to and evaluating it. Through my interviews, however, I found that voice, along with many other qualitatively complex phenomena like “employee fit,” “emotion,” and “personality,” gets flattened in the context of machine learning. The piece thus starts with a specific scholarly interest in the interface of voice and machine learning, but ends with a broader commentary on the limitations of machine learning epistemologies as seen through machine listening systems.
Specifically, I develop an intentionally non-mathematical methodological schema called “Ground Truth Tracings” (GTT) to make explicit the ontological translations that reconfigure a qualitative phenomenon like voice into a usable quantitative reference, a process often called “ground-truthing.” Given that every machine learning system requires a referential database that serves as its ground truth (i.e., what the system assumes to be true), examining these assumptions is key to exploring the strengths, weaknesses, and beliefs embedded in AI/ML technologies. In one example, I bring attention to a voice-analysis “employee-fit” prediction system that analyzes a potential candidate’s voice to predict whether the individual will be a good fit for a particular team. Using GTT, I qualitatively show why this is not a feasible ML use case and why the system is unlikely to be as robust as it is marketed to be.
Finally, I acknowledge that although this framework may serve as a useful tool for investigating claims about ML applicability, it does not immediately engage questions of subjectivity, stakes, and power. I thus further splinter the schema along these axes to develop a perhaps imperfect but practical heuristic, the “Learnability-Stakes” table, for assessing and thinking about the epistemological and ethical soundness of machine learning systems writ large. I hope this piece will help foster interdisciplinary dialogue among the wide range of practitioners in the AI/ML community, which includes not just computer scientists and ML engineers, but also social scientists, activists, journalists, policy makers, humanities scholars, and artists, broadly construed.