AI Bias in Hiring Influences Human Decision-Makers, UW Study Finds

New research from the University of Washington demonstrates that people mirror the racial biases embedded in AI hiring tools, underscoring the urgent need for safeguards as companies integrate AI more deeply into their recruitment workflows.

Research: No Thoughts Just AI: Biased LLM Hiring Recommendations Alter Human Decision Making and Limit Human Autonomy. Image Credit: tete_escape / Shutterstock

Artificial intelligence has become deeply embedded in the hiring pipeline: AI drafts job listings, applicants use chatbots to prepare resumes and cover letters, and automated systems screen submissions or even conduct preliminary interviews. While these tools promise efficiency, research continues to uncover a troubling issue: bias within large language models (LLMs) such as ChatGPT and Gemini. Far less understood, however, is how those biases shape the behavior of human hiring reviewers.

Human Reviewers Mirror AI Bias

A new University of Washington study examined how people make hiring decisions when supported by biased AI recommendations. In a controlled experiment, 528 U.S. participants evaluated candidates for 16 jobs, ranging from nurse practitioner to computer systems analyst. Each candidate pool included equally qualified white, Black, Hispanic, and Asian men, along with one unqualified distractor candidate.

When participants received no AI input or neutral recommendations, they selected white and non-white candidates at similar rates. But once the LLM’s recommendations displayed even moderate racial bias, participants adopted the same bias. When the AI favored white candidates, so did participants; when it favored non-white applicants, participants mirrored that preference. Under conditions of severe bias, human decisions aligned with AI recommendations nearly 90% of the time.
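To make the alignment figure concrete, here is a minimal, hypothetical Python sketch. The trial data, candidate names, and condition labels below are invented for illustration and do not reproduce the study's actual analysis; the sketch simply computes the share of trials in which a participant's pick matched the AI's recommendation, broken out by bias condition.

```python
# Illustrative only: hypothetical data showing how a per-condition
# alignment rate (human pick == AI-recommended candidate) is computed.
from collections import defaultdict

# Each trial: (bias_condition, ai_recommended_candidate, human_choice)
trials = [
    ("neutral",  "candidate_A", "candidate_B"),
    ("moderate", "candidate_C", "candidate_C"),
    ("severe",   "candidate_D", "candidate_D"),
    ("severe",   "candidate_E", "candidate_E"),
]

agree = defaultdict(int)   # trials where human followed the AI
total = defaultdict(int)   # all trials in the condition
for condition, ai_pick, human_pick in trials:
    total[condition] += 1
    agree[condition] += ai_pick == human_pick  # bool counts as 0/1

for condition in total:
    print(f"{condition}: {agree[condition] / total[condition]:.0%} alignment")
```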

The findings were presented at the 2025 AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society.

Experimental Design

Participants were recruited via Prolific and completed four rounds of hiring decisions, one under each of the following conditions:

  • No AI recommendations
  • Neutral AI recommendations
  • Moderately biased AI recommendations
  • Severely biased AI recommendations

Candidate race was implied through names, such as Gary O'Brien, and affinity group memberships. The resumes were generated with AI and validated for quality. To control the level of bias precisely, the research team simulated the AI recommendations rather than relying on live model outputs.
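The paper's simulation code is not shown in this article, but the core idea can be sketched in a few lines of hypothetical Python. The function name, group labels, and bias probabilities below (e.g., 0.5 for moderate, 0.9 for severe) are assumptions for illustration, not the study's actual parameters: a "biased" recommender favors one group with a controlled probability, while a neutral one picks uniformly.

```python
# Hypothetical sketch of a controlled-bias recommender; the study's
# actual generation procedure may differ.
import random

def simulate_recommendation(candidates, favored_group, bias_level, rng=random):
    """Return one recommended candidate.

    candidates: list of (name, group) tuples for qualified applicants.
    bias_level: 0.0 = neutral (uniform choice); higher values skew
                toward favored_group (0.5 / 0.9 here are illustrative).
    """
    favored = [c for c in candidates if c[1] == favored_group]
    if favored and rng.random() < bias_level:
        return rng.choice(favored)       # biased pick from favored group
    return rng.choice(candidates)        # otherwise pick uniformly

pool = [("Gary O'Brien", "white"), ("Candidate B", "Black"),
        ("Candidate C", "Hispanic"), ("Candidate D", "Asian")]
print(simulate_recommendation(pool, favored_group="white", bias_level=0.9))
```

Simulating the recommender this way holds everything else constant between rounds, so only the bias level varies across the four conditions.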

Bias Persists Even When Recognized

Although participants could sometimes detect severe bias, awareness alone was insufficient to correct their decisions. “Unless bias is obvious, people were perfectly willing to accept the AI’s biases,” said lead author Kyra Wilson.

Senior author Aylin Caliskan added that restricted access to real hiring data necessitated lab-based methods, which still yielded actionable insights into human-AI interaction.

Potential Mitigations

The study identified two promising interventions:

  • Implicit association tests: Starting each session with an IAT reduced bias by 13%.
  • AI literacy training: Teaching users how AI models work increased awareness of limitations and reduced overreliance on them.

However, researchers emphasized that responsibility should not be placed solely on end-users. Model developers must address bias in design, and policymakers must establish guardrails to ensure alignment with societal standards.

Broader Implications

The results underscore the risks of integrating AI into high-stakes decisions without adequate oversight. As organizations continue to rely on AI in hiring, often screening out candidates before any human review, biased recommendations can systematically distort outcomes.

Co-authors include Anna-Maria Gueorguieva and Mattea Sim. The U.S. National Institute of Standards and Technology funded the study.

Journal reference:
  • Wilson, K., Sim, M., Gueorguieva, A.-M., & Caliskan, A. (2025). No Thoughts Just AI: Biased LLM Hiring Recommendations Alter Human Decision Making and Limit Human Autonomy. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(3), 2692-2704. DOI: 10.1609/aies.v8i3.36749, https://ojs.aaai.org/index.php/AIES/article/view/36749
