The Hidden Bias of AI: How Employers May Be Violating the ADEA with Automated Hiring Tools
By Eric D. Morton
As artificial intelligence continues to revolutionize recruitment practices, more employers are relying on AI-driven software to streamline hiring decisions. While these tools promise efficiency, they also carry significant legal and ethical risks—particularly regarding age discrimination. Recently, a federal court in California certified a class action lawsuit in Mobley v. Workday, Inc., in which the plaintiffs allege that Workday’s AI hiring tool was biased against older workers and others.
A growing body of evidence suggests that AI systems trained on historical hiring data may unintentionally perpetuate or even worsen existing biases. For job seekers over the age of 40, this can mean being unfairly screened out before a human ever reviews their application. The Age Discrimination in Employment Act (ADEA), which prohibits employment discrimination against individuals age 40 or older, is increasingly at the center of legal scrutiny surrounding these practices.
AI Bias by Design
Many AI hiring tools rely on machine learning models that are only as fair as the data used to train them. If historical hiring practices favored younger candidates—intentionally or not—the algorithms may “learn” to replicate those patterns. Characteristics correlated with age, such as graduation dates, years of experience, or gaps in employment, can serve as proxies for age, leading to a disparate impact on older applicants.
Even when age is not an explicit input, AI systems can infer it from seemingly neutral data. Employers who fail to audit these systems for bias risk violating the ADEA—even if their intent was to eliminate human prejudice.
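To make the idea of a bias audit concrete, here is a minimal sketch of a disparate-impact check based on the EEOC's longstanding "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures: a selection rate for one group below 80% of the most-favored group's rate is commonly treated as initial evidence of adverse impact. The applicant counts below are hypothetical, and a real audit would involve statistical testing and legal review, not just this ratio.

```python
# Hypothetical screening outcomes pulled from an AI tool's logs.
outcomes = {
    "under_40": {"applicants": 500, "selected": 150},
    "40_and_over": {"applicants": 400, "selected": 60},
}

# Selection rate = fraction of applicants the tool advanced.
rates = {
    group: counts["selected"] / counts["applicants"]
    for group, counts in outcomes.items()
}

# Four-fifths rule: compare each group's rate to the highest rate.
highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, impact ratio={impact_ratio:.2f} -> {flag}")
```

In this illustration, workers 40 and over are selected at 15% versus 30% for younger applicants, an impact ratio of 0.50—well under the 0.80 threshold and the kind of disparity an employer would want flagged before a regulator or plaintiff finds it.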
Legal and Ethical Concerns Mount
Regulators and advocacy groups are paying closer attention. Allegations have emerged that some AI-based recommendation engines and screening tools systematically disadvantage candidates over 40. In some cases, older workers report never receiving interview invitations despite meeting or exceeding job requirements.
The Equal Employment Opportunity Commission (EEOC) has emphasized that employers are responsible for ensuring that third-party tools used in hiring do not cause unlawful discrimination. Simply outsourcing decision-making to an algorithm does not absolve a company of liability.
Recommendations for Employers
To stay compliant and avoid costly litigation, employers should take the following steps:
- Conduct regular audits of AI tools for discriminatory impact, especially age-based disparities.
- Demand transparency from vendors about how their algorithms are developed, trained, and tested.
- Implement human oversight to review automated decisions and ensure fairness.
- Remove proxies for age in data used to train or run hiring algorithms.
- Stay informed on evolving guidance from the EEOC and other regulatory bodies.
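The "remove proxies" recommendation above can be sketched as a simple preprocessing step that strips known age-correlated fields from applicant records before they reach a screening model. The field names here are hypothetical; in practice, identifying proxies requires statistical correlation analysis, not just a blocklist of field names.

```python
# Hypothetical field names that commonly correlate with age.
AGE_PROXY_FIELDS = {
    "graduation_year",
    "date_of_birth",
    "total_years_experience",
    "employment_gap_months",
}

def strip_age_proxies(record: dict) -> dict:
    """Return a copy of an applicant record without known age-proxy fields."""
    return {k: v for k, v in record.items() if k not in AGE_PROXY_FIELDS}

applicant = {
    "skills": ["Python", "SQL"],
    "graduation_year": 1992,
    "certifications": ["PMP"],
    "employment_gap_months": 14,
}

cleaned = strip_age_proxies(applicant)
print(sorted(cleaned))  # proxy fields are gone; job-relevant fields remain
```

A blocklist like this is only a first pass—machine learning models can reconstruct age from combinations of remaining fields, which is why the auditing and human-oversight steps above remain necessary even after proxies are removed.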
Conclusion
AI can be a powerful tool for promoting equity and efficiency—but only when implemented thoughtfully and responsibly. Employers must recognize that the use of biased algorithms can inadvertently violate anti-discrimination laws, including the ADEA. As legal challenges mount, so too does the urgency for organizations to examine the hidden mechanisms of their hiring technology and ensure that no qualified candidate is unfairly excluded due to age.
Eric D. Morton is the principal attorney at Clear Sky Law Group, P.C. He can be reached at emorton@clearskylaw.com, 760-722-6582, or 510-556-0367.