Identity-related Speech Suppression in Generative AI Content Moderation
Document Type
Conference Proceeding
Role
Author
Standard Number
9798400721403
Journal Title
Proceedings of the 5th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO 2025)
First Page
185
Last Page
217
Publication Date
November 4, 2025
Abstract
Automated content moderation has long been used to help identify and filter undesired user-generated content online, but such systems have a history of incorrectly flagging content by and about marginalized identities for removal. Generative AI systems now use such filters to keep undesired generated content from being created by or shown to users. While much attention has been paid to ensuring these systems do not produce undesired outputs, considerably less has been paid to ensuring that appropriate text can still be generated. From classrooms to Hollywood, as generative AI is increasingly used for creative or expressive text generation, whose stories will these technologies allow to be told, and whose will they suppress?
In this paper, we define and introduce measures of speech suppression, focusing on speech related to different identity groups that is incorrectly filtered by a range of content moderation APIs. Using both short-form, user-generated datasets traditional in content moderation and longer generative AI-focused data, including two datasets we introduce in this work, we create a benchmark for measuring speech suppression across nine identity groups. Across the one traditional and four generative AI-focused automated content moderation services tested, we find that identity-related speech is more likely to be incorrectly suppressed than other speech. We also find that the reasons for incorrect flagging vary by identity group, reflecting stereotypes and text associations: disability-related content, for example, is more likely to be flagged for self-harm or health-related reasons, while non-Christian content is more likely to be flagged as violent or hateful. As generative AI systems are increasingly used for creative work, we urge further attention to how this may impact the creation of identity-related content.
Repository Citation
Proebsting, G., Anigboro, O. I., Crawford, C. M., Metaxa, D., & Friedler, S. A. (2025). Identity-related Speech Suppression in Generative AI Content Moderation. Proceedings of the 5th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 185–217. https://doi.org/10.1145/3757887.3763010
