Illinois joins other states, including Utah and Colorado, in passing its own legislation to regulate AI. On August 9, 2024, Illinois Governor JB Pritzker signed into law a new bill (HB 3773) that amends the Illinois Human Rights Act (IHRA) to address the risks associated with the use of AI in the employment context. The amendment affects any employer that currently uses, or intends to use, AI, including generative AI, to make decisions around recruitment, hiring, promotions, renewal of employment, training, discipline, discharge, or any other term or condition of employment. Under the new amendment, employers are prohibited from using AI in a manner that has a discriminatory effect on the basis of any characteristic already protected under the IHRA, and from using zip codes as a proxy for a protected class under the IHRA. Employers are also required to give notice when they use AI for these purposes.

Background

HB 3773 amends the IHRA, which applies to employers who employ one or more employees within Illinois during 20 or more calendar weeks within the calendar year of, or the calendar year preceding, the alleged violation. Under the IHRA, protected characteristics include race, color, ancestry, national origin, disability, religion, sex, sexual orientation, pregnancy, military status, military discharge, age (over 40), order of protection status, marital status, citizenship, work authorization status, language, conviction record, and arrest record. Beginning January 1, 2025, the IHRA will also protect “family responsibilities” and “reproductive health decisions.”

“AI” Definition

“Artificial intelligence” is defined consistently with leading AI laws, such as Colorado’s law Concerning Consumer Protections in Interactions with Artificial Intelligence Systems and the EU AI Act, to mean any “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” The amendment also covers generative AI (i.e., AI that is capable of generating text, images, audio, videos, or other content that would otherwise be produced by humans).

Detailed Guidance on Affirmative Steps Lacking

Illinois’s amendment comes on the heels of Colorado’s AI law, passed in May 2024, which also addresses algorithmic discrimination, and follows New York City’s Local Law 144, which requires employers to conduct, among other things, bias audits. However, unlike the AI laws in Colorado and New York City, Illinois’s law does not provide detailed guidance regarding the affirmative steps employers are required to take to address discriminatory outcomes, such as conducting bias audits, performing AI impact assessments, or implementing risk management systems.

Discriminatory Effect Prohibited and Notice Requirement

The law covers AI that has the effect of discriminating against employees based on protected characteristics, regardless of whether the discrimination was intentional. Employers are also required to notify employees if they use AI for the purposes mentioned above. While the law does not specify the exact form this notice must take, the Illinois Department of Human Rights will adopt rules defining the circumstances and conditions that trigger the notice requirement, the time period for providing notice, and the means of providing it.

Enforcement and Remedies

The Illinois Department of Human Rights and the Illinois Human Rights Commission will enforce the law, and employees can file a charge with the Department of Human Rights if they believe they have been discriminated against through an employer’s use of AI. Remedies may include back pay, reinstatement, emotional distress damages, and attorneys’ fees.

Takeaways

Taken together with these other legislative developments, Illinois’s AI law underscores that the use of AI in the employment context carries heightened legal risk. Employers that are considering using AI for recruiting or other human resources-related decisions should consider conducting a bias audit on their AI systems and/or conducting sufficient diligence on the AI vendor providing such a tool. Those assessments should look at whether AI tools disproportionately impact certain groups, such as those of a particular race, gender, or age. Employers should document their analysis in an AI impact assessment, keep records of any formal audits conducted on the AI system, and continuously monitor the AI system once it is deployed so that adjustments can be made if its performance deviates from expectations. Finally, employers should be transparent about their use of AI through appropriate pre-use notices.
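For readers scoping what a basic disparate-impact check inside such a bias audit might look like, the sketch below is one minimal, illustrative framing: compare selection rates across demographic groups and flag large gaps. It is not drawn from HB 3773 or any regulator’s methodology; the group labels, data layout, and the use of the EEOC “four-fifths” benchmark as a flagging threshold are assumptions made for this example only.

```python
# Illustrative sketch only: a simple selection-rate disparity check of the kind a
# bias audit might include. The data layout and the 80% ("four-fifths") benchmark
# are assumptions for illustration, not requirements of HB 3773.
from collections import defaultdict

def selection_rates(records):
    """Compute per-group selection rates from (group, was_selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Compare each group's rate to the highest-rate group (four-fifths heuristic)."""
    benchmark = max(rates.values())
    return {g: r / benchmark for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: (demographic group, passed AI screen?)
    outcomes = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(outcomes)
    for group, ratio in impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"group {group}: selection rate {rates[group]:.2f}, "
              f"impact ratio {ratio:.2f} ({flag})")
```

A statistical flag of this kind is only a starting point; whether a disparity amounts to a discriminatory effect under the IHRA is a legal question that depends on the facts and should be assessed with counsel.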

Visit us at mayerbrown.com

Mayer Brown is a global services provider comprising associated legal practices that are separate entities, including Mayer Brown LLP (Illinois, USA), Mayer Brown International LLP (England & Wales), Mayer Brown (a Hong Kong partnership) and Tauil & Chequer Advogados (a Brazilian law partnership) and non-legal service providers, which provide consultancy services (collectively, the “Mayer Brown Practices”). The Mayer Brown Practices are established in various jurisdictions and may be a legal person or a partnership. PK Wong & Nair LLC (“PKWN”) is the constituent Singapore law practice of our licensed joint law venture in Singapore, Mayer Brown PK Wong & Nair Pte. Ltd. Details of the individual Mayer Brown Practices and PKWN can be found in the Legal Notices section of our website. “Mayer Brown” and the Mayer Brown logo are the trademarks of Mayer Brown.

© Copyright 2024. The Mayer Brown Practices. All rights reserved.

This Mayer Brown article provides information and comments on legal issues and developments of interest. The foregoing is not a comprehensive treatment of the subject matter covered and is not intended to provide legal advice. Readers should seek specific legal advice before taking any action with respect to the matters discussed herein.