Does AI introduce bias in hiring?

AI is rapidly transforming nearly every part of the hiring process, from resume screening to candidate matching to interview scheduling. But as AI tools become more common in HR workflows, they’re also drawing scrutiny. HR professionals, job seekers, and researchers are asking the same question: does AI introduce bias into hiring decisions?
The answer is complicated. Yes, AI can reinforce or even exacerbate bias, but it doesn’t have to. In fact, when designed and deployed with care, AI can help reduce bias and improve fairness.


Cause for concern

AI hiring tools are often trained on historical data like past resumes, employee records, or hiring decisions. That’s where the trouble can start. If those data sets reflect bias (and many do), the AI may learn to replicate it.
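To make that mechanism concrete, here is a toy sketch in Python, with invented feature names (`womens_club` and `skill`), of how a screener trained on skewed historical decisions can learn to penalize a proxy feature. It illustrates the general pattern, not any vendor's actual pipeline.

```python
# Toy illustration: a classifier trained on biased historical hiring decisions
# learns a penalty on a proxy feature. Feature names are made up for the example.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
n = 1000
womens_club = rng.integers(0, 2, n)   # 1 if the resume mentions a "women's" activity
skill = rng.normal(0, 1, n)           # the job-relevant signal we actually care about

# Historical labels: past reviewers under-selected candidates with the proxy
# term regardless of skill -- this is the bias baked into the training data.
hired = ((skill + rng.normal(0, 0.5, n)) > 0) & ~(
    (womens_club == 1) & (rng.random(n) < 0.4)
)

X = np.column_stack([skill, womens_club])
model = LogisticRegression().fit(X, hired)

# The model learns a negative weight on the proxy feature, so it would keep
# penalizing that feature on future resumes.
print("weight on proxy feature:", model.coef_[0][1])
```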
A widely cited 2018 internal experiment by Amazon, reported by Reuters, found that its resume screening tool penalized resumes that included the word “women’s,” as in “women’s chess club captain.” As Reuters reported, the source of this bias was not hard to find:
“That is because Amazon's computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.”
This type of pattern replication is common in large language models (LLMs), which generate text or make predictions based on vast amounts of internet data. As the Brookings Institution notes, these models can absorb and reinforce social biases encoded in that data.

The risks are real, and they’re not just theoretical. In 2023, the Equal Employment Opportunity Commission (EEOC) settled a landmark AI hiring discrimination case against iTutorGroup Inc., where the AI system was found to have rejected applicants over the age of 55. In another ongoing case, an ACLU complaint alleges that an AI-powered interview platform used by Intuit failed to evaluate a deaf and Indigenous applicant fairly, penalizing her for speech patterns that did not conform to the training data.  
A 2024 University of Washington study found that AI resume screeners showed a strong preference for names associated with white applicants over names associated with Black and Latino applicants, echoing a groundbreaking 2003 study on hiring bias. Researchers at NYU Tandon School of Engineering found that some hiring tools appeared to penalize women who had taken time off for caregiving, a phenomenon known as the “mom penalty.”
It’s not hard to understand why some regulators are taking notice. New York City has already implemented the Automated Employment Decision Tool (AEDT) law, which requires employers to audit and disclose any algorithmic tools used in hiring. Similar legislation is being proposed across the country.
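For a sense of what such an audit involves, here is a minimal sketch of one metric commonly reported in hiring-tool bias audits: the impact ratio, each group's selection rate divided by the highest group's rate. The group names and counts below are invented for illustration.

```python
# Minimal sketch of an impact-ratio calculation for an automated screening step.
# Group labels and counts are hypothetical.
selected = {"group_a": 48, "group_b": 30, "group_c": 12}   # candidates advanced by the tool
screened = {"group_a": 120, "group_b": 100, "group_c": 60}  # candidates screened

rates = {g: selected[g] / screened[g] for g in screened}
top = max(rates.values())
impact_ratios = {g: round(rates[g] / top, 2) for g in rates}

print(rates)          # group_a: 0.40, group_b: 0.30, group_c: 0.20
print(impact_ratios)  # ratios below ~0.8 are often flagged for review ("four-fifths rule")
```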


AI can reinforce bias, but it doesn’t have to

It’s easy to walk away from these studies and stories with the impression that AI is inherently biased. But bias doesn’t come from machines; it comes from people, and more specifically from the data they collect and from how tools are designed, tested, and used. That means bias can be mitigated.
As Stanford's Institute for Human-Centered Artificial Intelligence notes, the solution isn't to throw out AI entirely. It's to build better systems using the principles of ethical AI. The Stanford AI Index Report recommends making fairness and explainability critical design principles for any AI system that impacts people's lives. Tools that are transparent, auditable, and designed with bias reduction in mind can help hiring teams make more equitable decisions at scale.


What CLARA does differently

CLARA is not built on large language models. Instead, it uses neuro-symbolic AI, a custom architecture that blends statistical modeling with symbolic reasoning. This approach allows CLARA to make assessments based on logic-driven frameworks rather than just pattern prediction.
More importantly, CLARA was trained with bias reduction in mind, not just as an afterthought. It’s designed to anonymize and standardize candidate applications by removing identifiers like names, schools, and dates that can introduce unconscious bias. Every candidate is evaluated against the same set of job-specific criteria.
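As a rough illustration of what blind screening can look like in practice (a generic sketch, not CLARA's actual code), identifying fields can be stripped from an application before it is ever scored:

```python
# Generic illustration of blind screening: remove fields that can trigger
# unconscious bias before an application is evaluated. Field names are examples.
IDENTIFYING_FIELDS = {"name", "email", "school", "graduation_year", "address", "photo_url"}

def anonymize(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed."""
    return {k: v for k, v in application.items() if k not in IDENTIFYING_FIELDS}

candidate = {
    "name": "Jordan Smith",
    "school": "Example University",
    "graduation_year": 2012,
    "skills": ["SQL", "forecasting"],
    "years_experience": 8,
}
print(anonymize(candidate))  # {'skills': ['SQL', 'forecasting'], 'years_experience': 8}
```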


A smarter way forward

AI alone won’t solve every problem in hiring. But when used intentionally and designed responsibly, it can help teams make better decisions, reduce bias, and spend more time with the right candidates.
If you’ve been hesitant to adopt AI hiring tools because of fairness concerns, you’re not wrong to be cautious. But not all tools are created equal.
Want to see how AI can make your hiring more fair, not less? Talk to our team today.