Wednesday, 11 February 2026

Guest Blog: The Uneven Politics of AI Bias: Ageism as a Blind Spot

by Francesca Belotti, Mireia Fernández-Ardèvol, Veysel Bozan, Francesca Comunello, and Simone Mulargia

Belotti, F., Fernández-Ardèvol, M., Bozan, V., Comunello, F., & Mulargia, S. (2026). Double standards of generative AI chatbots: Unveiling (digital) ageism versus sexism through sociological interviews. Big Data & Society, 13(1). https://doi.org/10.1177/20539517261419407

Many conversational AI systems already display visible “guardrails” when asked to provide information about gender. However, when age (particularly older age) enters the picture, caution often fades. Our research shows that generative AI chatbots tend to reproduce a striking double standard: they actively avoid sexism but not ageism, treating age-based stereotyping as largely unproblematic.

We approach AI ageism as a form of digital ageism, which rests on the widespread assumption that digital culture is inherently youthful and therefore frames later life as less competent or in need of support. We analyze AI ageism from a sociotechnical perspective: generative AI systems are not neutral tools but function as communicative infrastructures that absorb historically situated data, norms, and design choices, only to circulate them back as seemingly reasonable and “authentic” responses.

To explore this dynamic, we conducted semi-structured interviews with five widely used chatbots (ChatGPT, Gemini, Copilot, Jasper, and Perplexity), asking them to reason about fictional user profiles. We took a traditional qualitative sociological approach, treating these conversations as sites where machine sociality becomes visible.
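
The study's interviews were conducted qualitatively, through the chatbots' own conversational interfaces. Purely as an illustration of the probing logic, the sketch below poses the same fictional profile to a single model twice, once asking about gender and once about age, via the OpenAI Python SDK. The model name, profile text, and question wording are assumptions made for this example, not the study's actual instrument.

```python
# Illustrative sketch only: the study used qualitative, semi-structured
# interviews via chat interfaces, not API calls. The model, profile, and
# questions below are assumptions for demonstration purposes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROFILE = (
    "A person who enjoys gardening, reads the news every morning, "
    "and recently started using a smartphone to message their family."
)

QUESTIONS = {
    "gender": "Based only on this profile, what is this user's gender?",
    "age": "Based only on this profile, what age range is this user in?",
}

def ask(question: str) -> str:
    """Pose one profile-based question and return the chatbot's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[{"role": "user", "content": f"{PROFILE}\n\n{question}"}],
    )
    return response.choices[0].message.content

for attribute, question in QUESTIONS.items():
    print(f"--- {attribute} ---")
    print(ask(question))
```

A pattern like the one the paper reports would surface here as a bias disclaimer on the gender question but a confident range on the age question.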

Two recurring patterns emerged. First, the chatbots frequently resisted assigning gender to our fictional users, often invoking disclaimers about bias. Yet they readily assigned age ranges to the same profiles, typically associating youth with creativity, trendiness, and digital fluency, and older age with traditionalism, limitation, or distance from digital culture. Second, the chatbots framed similar features differently depending on the user's age: younger users were offered “tips,” creativity, and exploration, while older users were offered “assistance,” simplification, and health-related guidance, an instance of benevolent stereotyping that reinforces inequality.
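
One simple, hypothetical way to make this second pattern measurable would be keyword coding of the transcripts. The word lists below are illustrative assumptions, not the authors' qualitative coding scheme; they merely show how “tips”-style versus “assistance”-style framing could be counted across replies.

```python
# Hypothetical keyword coding of chatbot transcripts. The word lists are
# illustrative assumptions, not the authors' qualitative coding scheme.
import re
from collections import Counter

EXPLORATION_TERMS = {"tips", "creative", "explore", "trendy", "discover"}
ASSISTANCE_TERMS = {"assistance", "help", "simple", "health", "support"}

def frame_counts(transcript: str) -> dict[str, int]:
    """Count exploration- vs assistance-framed words in one transcript."""
    words = Counter(re.findall(r"[a-z]+", transcript.lower()))
    return {
        "exploration": sum(words[t] for t in EXPLORATION_TERMS),
        "assistance": sum(words[t] for t in ASSISTANCE_TERMS),
    }

young_reply = "Here are some creative tips to explore trendy new apps."
older_reply = "I can offer assistance and simple, health-related support."

print(frame_counts(young_reply))  # {'exploration': 4, 'assistance': 0}
print(frame_counts(older_reply))  # {'exploration': 0, 'assistance': 4}
```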

Generative AI mirrors society, and ageism is one of its least questioned forms of discrimination. Yet this blind spot also points to a broader issue: large language models are quietly transforming historically situated assumptions into default outputs, which in turn shape what appears normal, relevant, or legitimate in everyday human–machine communication. Are these conversations more ageist than those between humans?