Wednesday 29 May 2024

Guest Blog: Jascha Bareis on Trust and AI

by Jascha Bareis

Bareis, J. (2024). The trustification of AI. Disclosing the bridging pillars that tie trust and AI together. Big Data & Society, 11(2).

Does it make sense to tie trust with AI?

Everywhere we look, in companies, politics, and research, people are focusing on AI. Because AI is approached as the core technological institution of our times, notions of trust are repeatedly mobilized around it. Policy makers in particular seem to feel urged to highlight the need for trustworthiness in relation to AI. Take the current European AI Act, which claims the EU to be a “leader in the uptake of trustworthy AI” (AIA, Article 1a), or the 2023 US executive order on “Safe, Secure and Trustworthy AI”.

I simply asked myself: despite all this attention, is it at all legitimate to marry a technology with an intersubjective phenomenon that used to be reserved for relations between humans only? I can trust my neighbor next door to take care of my cat, but can I trust the TESLA Smart Cat Toilet’s automated cleaning cycle to take care of its poo-poo, too (indeed, the process of gadgetification and smartification does not spare cat toilets)?

Does it make sense to talk about trust in the latter case at all, or are we just dealing with a conceptual misfit? Doing more research, I realized that the way trust is handled in both the policy and the academic AI debates is very sloppy, remaining undertheorized and somehow simply taken for granted.

I noticed that users approach trust and AI as something intersubjective, expecting great things from their new AI-powered gadget and then being utterly disappointed when it fails to deliver (because even if branded as “smart”, there is actually no AI in the TESLA Smart Cat Toilet). Users also perceive AI as something highly mediated by powerful actors: when Elon Musk trusts that AI will be the cure for the world’s problems, many people seem to follow blindly (but do they then trust AI, or Elon?). And they perceive it as something that can mobilize larger political dimensions and strong sentiments, as when a friend told me that she would certainly distrust AI because she distrusted the integrity of EU politicians who, instead of regulating it, let Big Tech get “greedy and rich”.

Communication, mediation, sentiments, expectations, power, misconceptions: all of these seemed to have a say in the relationship between AI and trust. This created a messy picture, with AI and trust enmeshed in a complex social interplay of overlapping epistemic realms and actors.

As a consequence, I set out to problematize this relationship in the paper. I argue that trust is located in the social, and only if one acknowledges that AI is a deeply social phenomenon as well does this relationship make sense at all. AI produces notions of trust and distrust because it is woven into and negotiated within the everyday realities of users and society, with AI applications mediating human relationships and producing intimacies, social orders and knowledge authorities. I came up with the following analytical scheme.

In the paper I run through the scheme and describe its value and limitations, rendering trustworthy AI as a constant and complex dynamic between actual technological developments and the social realities, political power struggles and futures that are associated with them.