AI in Hiring: How Automated Systems Could Lead to Workplace Discrimination


Artificial intelligence (AI) is transforming the hiring process, making recruitment faster and more data-driven. Many companies rely on AI to screen resumes, rank candidates, and even conduct initial interviews. However, while these systems aim to improve efficiency, they can also introduce bias and discrimination that disadvantage certain groups of applicants.

AI hiring tools work by analyzing resumes for keywords, assessing facial expressions in video interviews, and ranking candidates based on algorithmic criteria. Since these tools are trained on historical hiring data, they may reflect past discriminatory patterns, unintentionally favoring certain candidates over others. This can create barriers for people of color, women, older job seekers, and individuals with disabilities, among other protected groups.

How AI Can Lead to Workplace Discrimination

One major issue with AI-driven hiring is algorithmic bias. If a company’s past hiring data reflects an unconscious preference for a particular demographic—such as white male candidates—then an AI system trained on that data may continue to reinforce the same biases. Additionally, AI tools may misinterpret non-traditional career paths, penalizing candidates with career gaps due to medical leave, caregiving, or disability accommodations.
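As a simplified illustration, the hypothetical sketch below (written in Python, with invented resume keywords and hiring records that do not represent any vendor's actual screening tool) shows how a scorer trained only on past hiring outcomes can end up penalizing applicants whose resumes mention a career gap:

```python
# Hypothetical sketch: a resume screener trained on past hiring decisions
# can reproduce the bias embedded in those decisions.

# Historical data: each record is (resume keywords, was the candidate hired?).
# Suppose past recruiters favored candidates with uninterrupted tenure.
historical_data = [
    ({"python", "10_years_continuous"}, True),
    ({"python", "10_years_continuous"}, True),
    ({"python", "career_gap"}, False),   # gap due to medical leave
    ({"java", "career_gap"}, False),     # gap due to caregiving
]

def learn_keyword_weights(data):
    """Score each keyword by how often it appeared on a 'hired' resume."""
    counts = {}
    for keywords, hired in data:
        for kw in keywords:
            hires, total = counts.get(kw, (0, 0))
            counts[kw] = (hires + (1 if hired else 0), total + 1)
    return {kw: hires / total for kw, (hires, total) in counts.items()}

weights = learn_keyword_weights(historical_data)

def score_candidate(keywords):
    """Rank a new applicant using weights learned from historical outcomes."""
    return sum(weights.get(kw, 0) for kw in keywords) / len(keywords)

# A qualified applicant whose resume shows a gap for disability accommodation
# inherits the low historical score attached to "career_gap".
print(score_candidate({"python", "career_gap"}))           # low score
print(score_candidate({"python", "10_years_continuous"}))  # high score
```

In this toy model, the candidate with a career gap receives a lower score even though the gap says nothing about their qualifications; the system simply carries forward the pattern baked into the historical decisions it learned from.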

Another concern is a lack of transparency. Unlike a human hiring manager, an AI algorithm does not explain why a candidate was rejected. This makes it difficult for job seekers to challenge discriminatory hiring decisions. Applicants may never know whether their application was filtered out because of biased data or flawed AI decision-making.

People with disabilities also face unique challenges when AI is involved in hiring. Automated screening tools may struggle to evaluate candidates who use adaptive technology for job applications. Video interview software that analyzes facial expressions or speech patterns could unfairly disadvantage neurodiverse individuals or those with speech impairments.

Legal Protections Under California Law

California’s Fair Employment and Housing Act (FEHA) and federal laws like Title VII of the Civil Rights Act prohibit hiring discrimination based on race, gender, disability, and other protected characteristics. Employers who use AI tools must ensure that these systems do not result in unfair hiring practices.

Under California law, applicants who suspect they have been discriminated against in hiring—whether by a human recruiter or an AI system—have the right to file complaints with the California Civil Rights Department (CRD) or take legal action.

What Job Seekers Can Do

If you believe an AI-driven hiring process has resulted in discrimination, you can:

  • Request information about how the hiring system works.
  • Keep records of all application materials and interactions with the employer.
  • Seek legal advice to determine whether discrimination occurred and how to challenge it.

Conclusion

AI technology in hiring is here to stay, but it must be used responsibly. While automation can speed up recruitment, it should not come at the cost of fair hiring practices. California job applicants and employees are protected under state and federal laws, which do not tolerate discrimination, whether it comes from a human or an algorithm.

Need legal guidance? Call Rothschild & Alwill, APC at 661-369-8510 (Bakersfield) or 805-845-1190 (Santa Barbara) for a confidential consultation. Se habla Español.