As artificial intelligence floods Reddit with machine-generated posts, volunteer moderators are scrambling to protect the platform’s authentic, human-driven spirit, a new Cornell study reveals.

Study: "There Has To Be a Lot That We're Missing": Moderating AI-Generated Content on Reddit. Image Credit: Visual Generation / Shutterstock
Reddit bills itself as "the most human place on the internet," but the proliferation of artificial intelligence-generated content is threatening to squeeze some of the humanity out of the news-sharing forum.
Content moderators on some of Reddit's most popular boards see some value in AI-generated content, but they broadly fear it will erode the utility and social value of a community that prides itself on authenticity, according to new Cornell University research.
"They were concerned about it on three levels: decreasing content quality, disrupting social dynamics and being difficult to govern," said Travis Lloyd, doctoral student in the field of information science. "And to respond to this, they were enacting rules in their communities, which set norms, but they also then had to enforce those rules, which is challenging."
New Research Presented at ACM SIGCHI
Lloyd is lead author of "There Has To Be a Lot That We're Missing": Moderating AI-Generated Content on Reddit, which is being presented at the ACM SIGCHI Conference on Computer-Supported Cooperative Work and Social Computing, Oct. 18–22 in Bergen, Norway. The work received an honorable mention for best paper.
Earlier research sought to understand how Reddit communities were responding to AI content; this paper goes a step further, engaging directly with content moderators to see exactly how they try to preserve Reddit's humanity in an increasingly AI-infused world. This work began in 2023, a year after the release of ChatGPT.
Inside the Study: Moderator Perspectives
For this work, the researchers recruited moderators of popular subreddits that also had rules regarding the use of AI content. The researchers wound up with 15 moderators who collectively oversaw more than 100 different subreddits, with memberships ranging from 10 people to more than 32 million.
Of the three main concerns, content quality was top of mind. One moderator interviewed by the authors said AI content "tries to meet the substance and depth of a typical post … however, there are frequent glaring errors in both style and content." Poor style, inaccuracy and divergence from the intended topic were the chief issues.
Threats to Community Interaction
Regarding social dynamics, several respondents expressed fear that AI would negatively impact meaningful one-to-one interactions, citing decreased opportunities for human connection, strained relationships and violation of community values as potential byproducts.
Mor Naaman, senior author and professor of information science, said it is currently left up to the moderators—all volunteers—to help Reddit preserve the humanity it cherishes.
"It remains a huge question of how they will achieve that goal," he said. "A lot of it will inevitably go to the moderators, who are in limited supply and are overburdened. Reddit, the research community, and other platforms need to tackle this challenge, or these online communities will fail under the pressure of AI."
This work was supported in part by funding from the National Science Foundation.
Journal reference:
- Travis Lloyd, Joseph Reagle, and Mor Naaman. 2025. 'There Has To Be a Lot That We're Missing': Moderating AI-Generated Content on Reddit. Proc. ACM Hum.-Comput. Interact. 9, 7, Article CSCW264 (November 2025), 24 pages. DOI:10.1145/3757445, https://dl.acm.org/doi/10.1145/3757445