
EEOC Chief Warns: AI System Audits May Comply with Local Anti-Bias Laws, But Not Federal Ones

Keith Sonderling, commissioner of the US Equal Employment Opportunity Commission (EEOC), has been raising concerns about the potential for artificial intelligence (AI) to violate federal anti-discrimination laws, such as Title VII of the Civil Rights Act of 1964.

With the rise of popular generative AI tools like ChatGPT and Bard, lawmakers at various levels of government have started paying attention, and companies have become aware of the risks posed by AI technology in their business processes.

Sonderling has shifted the focus of his speeches to HR officers and employment lawyers toward AI and how companies can remain compliant as they delegate more HR responsibilities to algorithms, tools that can quickly parse thousands of resumes and automate the hiring process.

In an interview with Computerworld, Sonderling discussed the emerging patchwork of laws meant to expose and eliminate bias in AI, stressing that companies must comply with local and federal law alike. Here are some key excerpts from the interview:


EEOC Commissioner Keith Sonderling

How have you and the EEOC been involved in addressing AI’s use in human resources and hiring?

Sonderling explained that the EEOC is responsible for regulating HR and has been discussing AI’s impact on HR with various stakeholders. He highlighted the widespread interest in AI technology and its implications for the workforce.

What is your opinion on how various nations and localities are addressing AI regulation?

Sonderling acknowledged the diverse approaches taken by different countries and cities in regulating AI. He mentioned China’s proactive stance in recognizing AI’s potential and the associated risks. He also pointed out the European Union’s proposed AI Act and its risk-based approach to regulation.

Why is New York’s Local Law 144 important?

Sonderling explained that Local Law 144 in New York is significant because it requires employers to conduct audits of AI systems used in hiring and promotion. He mentioned that while the law is limited to specific criteria such as sex, race, and ethnicity, employers should still be mindful of federal anti-discrimination laws that protect against broader categories of bias.
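The audits Local Law 144 calls for center on comparing selection rates across demographic groups, a calculation closely related to the EEOC's long-standing "four-fifths" rule of thumb from the Uniform Guidelines on Employee Selection Procedures. A minimal sketch of that arithmetic, using hypothetical group names and counts (this is an illustration, not an official audit methodology):

```python
# Illustrative sketch: selection rates and impact ratios for a hiring
# tool's outcomes, in the spirit of Local Law 144 bias audits. All
# group labels and counts below are hypothetical.

def impact_ratios(outcomes):
    """outcomes: {group: (selected, total_applicants)}.
    Returns {group: impact_ratio}, each group's selection rate
    divided by the highest group's selection rate."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening results from an AI applicant tracking system.
results = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

for group, ratio in impact_ratios(results).items():
    # An impact ratio below 0.8 is the traditional four-fifths
    # red flag for adverse impact and warrants closer review.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Even a tool that passes this check against sex, race, and ethnicity could still run afoul of federal law, which protects additional categories such as age, disability, and religion.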

How should companies approach compliance considering some laws are local, some are state, and some are federal?

Sonderling emphasized that companies must not solely rely on compliance with local laws but also ensure compliance with federal anti-discrimination laws, which have been in place since the 1960s. He urged companies to be cautious and not be complacent by assuming that fulfilling local obligations is sufficient to meet federal requirements.

So, if your AI-assisted applicant tracking system is audited, should you feel secure that you’re fully compliant?

Sonderling advised against a false sense of security and reminded companies that compliance with local laws does not automatically guarantee compliance with federal anti-discrimination laws. He cited the example of Illinois, where facial recognition technology in employment interviews requires consent under state law, but federal anti-discrimination laws still apply.

So, where does the liability for ensuring AI-infused or machine learning tools comply with the law lie?

Sonderling emphasized that employers bear the responsibility for compliance with anti-discrimination laws, regardless of the technology used. He cautioned against overlooking existing laws and waiting for new AI-specific regulations, urging companies to apply existing laws to AI tools in the same way they would with any other employment decision.

Do you believe New York’s Local Law 144 is a good baseline or foundation for other laws to mimic?

Sonderling acknowledged the positive aspect of Local Law 144 in raising awareness about employment audits and preventing discrimination. He highlighted the need for companies to proactively audit AI systems and ensure their compliance with anti-discrimination laws.

What makes an AI applicant tracking system problematic in the first place?

Sonderling emphasized that the potential problems lie not in the applicant tracking system itself but in how machine learning tools analyze and implement the data. Biases can arise from outdated job descriptions or pre-existing biases in the applicant pool. He urged companies to critically evaluate the criteria used by AI tools to make employment decisions.
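The mechanism Sonderling describes can be shown with a toy example: a scoring tool that "learns" from historical hires will simply reproduce whatever skew the historical pool contained. The data and scoring rule below are hypothetical, a deliberately simplified stand-in for a real machine learning model:

```python
# Toy illustration of inherited bias (hypothetical data): a "model"
# that scores candidates by how often their attributes appeared among
# past hires reproduces the bias baked into that history.
from collections import Counter

# Skewed historical hiring pool: three of four past hires from one school.
past_hires = ["school_x", "school_x", "school_x", "school_y"]
freq = Counter(past_hires)
total = len(past_hires)

def score(school):
    # The learned "preference" is nothing but historical frequency.
    return freq[school] / total

print(score("school_x"))  # favored purely because of past skew
print(score("school_y"))
```

The point is that nothing in the applicant tracking pipeline itself is discriminatory; the bias enters through the criteria and training data the tool inherits, which is why Sonderling urges companies to scrutinize those inputs.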

Unique Perspective:
AI technology has the potential to revolutionize the HR and hiring process, but it also poses risks in terms of bias and discrimination. While local laws, such as New York’s Local Law 144, aim to regulate AI systems and protect against bias, it is crucial for companies to remember that compliance with federal anti-discrimination laws is equally important. Employers need to proactively audit and evaluate their AI systems to ensure fairness and equal opportunity in the workforce. As AI continues to advance, the responsibility lies with companies to navigate the complexities of AI regulation and uphold the principles of non-discrimination.
