New research from the University of Washington demonstrates that people mirror the racial biases embedded in AI hiring tools, underscoring the urgent need for safeguards as companies integrate AI more deeply into their recruitment workflows.

Artificial intelligence has become deeply embedded in the hiring pipeline: AI drafts job listings, applicants use chatbots to prepare resumes and cover letters, and automated systems screen submissions or even conduct preliminary interviews. While these tools promise efficiency, research continues to uncover a troubling issue: bias within large language models (LLMs) such as ChatGPT and Gemini. Far less understood, however, is how those biases shape the behavior of human hiring reviewers.
Human Reviewers Mirror AI Bias
A new University of Washington study examined how people make hiring decisions when supported by biased AI recommendations. In a controlled experiment, 528 U.S. participants evaluated candidates for 16 jobs, ranging from nurse practitioner to computer systems analyst. Each candidate pool included equally qualified white, Black, Hispanic, and Asian men, along with one unqualified distractor candidate.
When participants received no AI input or neutral recommendations, they selected white and non-white candidates at similar rates. But once the LLM’s recommendations displayed even moderate racial bias, participants adopted the same bias. When the AI favored white candidates, so did participants; when it favored non-white applicants, participants mirrored that preference. Under conditions of severe bias, human decisions aligned with AI recommendations nearly 90% of the time.
The findings were presented at the 2025 AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society.
Experimental Design
Participants were recruited via Prolific and completed four rounds of hiring decisions, one under each of the following conditions:
- No AI recommendations
- Neutral AI recommendations
- Moderately biased AI recommendations
- Severely biased AI recommendations
Candidate race was signaled through names, such as Gary O’Brien, and affinity-group memberships. The resumes were generated using AI and validated for quality. To precisely control the degree of bias, the research team simulated the AI recommendations rather than relying on live model outputs; a simplified sketch of such a setup appears below.
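The article does not spell out the simulation mechanics, so the following Python sketch is only an illustration of how recommendation bias might be dialed across the four conditions. The condition names, the probabilities (0.5, 0.75, 1.0), and the group labels are assumptions for illustration, not values taken from the study.

```python
import random

# Hypothetical bias levels per condition: the probability that the
# simulated AI recommends a candidate from the favored group.
# These numbers are illustrative, not from the paper.
CONDITIONS = {
    "none": None,      # control round: no recommendation shown
    "neutral": 0.5,    # favors neither group
    "moderate": 0.75,  # favors one group 3:1
    "severe": 1.0,     # always favors one group
}

def simulate_recommendation(candidates, condition, favored_group, rng=random):
    """Pick a candidate to 'recommend' according to the bias level.

    candidates: list of dicts with a 'group' key
    condition: one of the CONDITIONS keys
    favored_group: the group the simulated AI is biased toward
    """
    p_favor = CONDITIONS[condition]
    if p_favor is None:
        return None  # participant sees no AI input this round
    pool_favored = [c for c in candidates if c["group"] == favored_group]
    pool_other = [c for c in candidates if c["group"] != favored_group]
    # Recommend from the favored pool with probability p_favor;
    # fall back to whichever pool is non-empty.
    if pool_favored and (not pool_other or rng.random() < p_favor):
        return rng.choice(pool_favored)
    return rng.choice(pool_other)

# Example: four equally qualified candidates, severe bias toward one group.
candidates = [
    {"name": "A", "group": "white"},
    {"name": "B", "group": "non-white"},
    {"name": "C", "group": "non-white"},
    {"name": "D", "group": "non-white"},
]
print(simulate_recommendation(candidates, "severe", favored_group="white"))
```

Simulating recommendations this way, rather than querying a live model, lets an experimenter hold bias constant across participants, which is presumably why the researchers chose scripted outputs over real LLM responses.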
Bias Persists Even When Recognized
Although participants could sometimes detect severe bias, awareness alone was insufficient to correct their decisions. “Unless bias is obvious, people were perfectly willing to accept the AI’s biases,” said lead author Kyra Wilson.
Senior author Aylin Caliskan added that restricted access to real hiring data necessitated lab-based methods, which still yielded actionable insights into human-AI interaction.
Potential Mitigations
The study identified two promising interventions:
- Implicit association tests: Starting each session with an IAT reduced bias by 13%.
- AI literacy training: Teaching users how AI models work increased awareness of their limitations and reduced overreliance on the systems.
However, researchers emphasized that responsibility should not be placed solely on end-users. Model developers must address bias in design, and policymakers must establish guardrails to ensure alignment with societal standards.
Broader Implications
The results underscore the risks of integrating AI into high-stakes decisions without adequate oversight. As long as organizations use AI in hiring, even when the AI does not reject candidates outright before a human reviews them, biased recommendations can systematically distort outcomes.
Co-authors include Anna-Maria Gueorguieva and Mattea Sim. The U.S. National Institute of Standards and Technology funded the study.
Journal reference:
- Wilson, K., Sim, M., Gueorguieva, A.-M., & Caliskan, A. (2025). No Thoughts Just AI: Biased LLM Hiring Recommendations Alter Human Decision Making and Limit Human Autonomy. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(3), 2692–2704. DOI: 10.1609/aies.v8i3.36749, https://ojs.aaai.org/index.php/AIES/article/view/36749