Artificial Intelligence in the Workplace: Benefits, Risks, and Legal Implications

The integration of Artificial Intelligence (AI) in the workplace has become a growing trend, promising numerous benefits while posing significant risks and legal challenges. AI technologies offer employers and management several potential benefits, including increased efficiency, cost savings, and enhanced decision-making. For instance, AI can automate repetitive tasks, freeing employees to focus on more strategic work, and AI-driven analytics can surface insights that improve productivity and innovation. Additionally, AI tools such as resume sifters can streamline the recruitment process, reducing the time and cost associated with hiring new employees.

However, a business may be liable for unlawful decisions made by its own AI program. Amid growing concern over how AI learns and operates, and its potential to commit unlawful bias, make inappropriate decisions, or fail to serve as a proper substitute for professional judgment, the EEOC and New York City have issued new rules, and legislation has been introduced at the state level, holding business owners liable for the unlawful errors of their AI programs, even where the program was purchased from or operated by a third-party vendor. Employers and business owners must establish appropriate safeguards to comply with these new laws.

Challenges of AI Training

One significant concern with employer use of AI is the potential for bias in AI systems. AI training involves collecting large existing datasets, which are used to train algorithms through machine learning techniques, with the goal of enabling the system to recognize patterns, make decisions, and improve over time as new data arrives. However, if the historical data the AI learns from reflects existing biases, the AI can perpetuate them, leading to discriminatory practices, sometimes without the employer being aware it is happening. This is particularly concerning in hiring, where AI resume sifters might inadvertently favor certain demographics over others.
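To make the mechanism concrete, the following minimal Python sketch shows how a model that simply learns from past hiring decisions replays whatever bias those decisions contain. The records and feature names are entirely hypothetical:

    # Minimal sketch: a "model" that learns historical hire rates per
    # feature value. All records and feature names are hypothetical.
    from collections import defaultdict

    # Past records as (school attended, was hired). Suppose past recruiters
    # favored one school for reasons correlated with a protected trait.
    history = ([("school_x", True)] * 80 + [("school_x", False)] * 20
               + [("school_y", True)] * 30 + [("school_y", False)] * 70)

    hired = defaultdict(int)
    seen = defaultdict(int)
    for school, was_hired in history:  # the "training" pass over the data
        seen[school] += 1
        hired[school] += was_hired

    def score(school):
        # The learned model simply replays the historical hire rate.
        return hired[school] / seen[school]

    print(score("school_x"))  # 0.8 -- the favored group keeps its advantage
    print(score("school_y"))  # 0.3 -- the disfavored group stays disfavored

A more sophisticated model does the same thing less visibly: any feature correlated with a protected characteristic can become a proxy for it, which is why an employer may not realize the bias exists.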

Hiring algorithms trained on biased data, for example, can unfairly disadvantage candidates from certain racial backgrounds, perpetuating inequality in employment opportunities. In one well-known case, an algorithm's training data was not inclusive, and the resulting facial recognition technology performed poorly on non-white faces. Racially biased training data can also find its way into predictive policing algorithms, which often disproportionately target communities of color because the underlying statistics reflect that bias, leading to over-policing and exacerbating existing disparities in the criminal justice system. Because biased AI training can reinforce and amplify societal inequities, and the business using the AI can be held accountable, it is essential to ensure diverse and representative training data.

New and Developing Law

EEOC Guidance

The use of AI in the workplace must comply with existing legal frameworks, such as the Americans with Disabilities Act (ADA) and Title VII of the Civil Rights Act. The EEOC has provided guidance on the use of AI and algorithms in employment decisions, emphasizing that employers must ensure their AI tools do not discriminate against individuals with disabilities or other protected groups. Under the ADA, employers are required to provide reasonable accommodations to qualified individuals with disabilities. If an applicant or employee tells the employer that a medical condition may make it difficult to take a test, or may cause an assessment result that is less acceptable to the employer, the applicant or employee has requested a reasonable accommodation.

Algorithmic decision-making tools can inadvertently screen out individuals with disabilities if a disability results in a lower score or less favorable assessment. For example, a chatbot might reject applicants with significant employment gaps, not recognizing that those gaps were caused by disabilities protected under the law. If the employer fails to recognize and reverse the AI software's error, this could constitute a violation of the ADA unless the employer can demonstrate that the selection criterion is job-related and consistent with business necessity.

The EEOC’s guidance on AI in employment decisions highlights the need for employers to regularly audit their AI tools for potential adverse impacts. When deciding whether to rely on an algorithmic decision-making tool developed by a software vendor, an employer should ask whether the tool was developed with individuals with disabilities in mind; employers developing their own systems must take those same considerations into account. Employers should ensure their AI systems are transparent, explainable, and subject to regular review to mitigate the risk of discrimination. The guidance also advises employers to provide training to those involved in the development and implementation of AI tools to recognize and address potential biases.

Federal Trade Commission

The Federal Trade Commission (FTC) oversees consumer protection and fair competition, responsibilities that extend to the use of AI in employment. The Fair Credit Reporting Act (FCRA) plays a crucial role in this context, particularly when AI tools are used to make employment-related decisions. Employers using AI screening tools that access criminal records or typical background-check information must comply with the FCRA and relevant state laws. The FCRA requires providing a written disclosure to the applicant or employee and obtaining their written consent before acquiring a consumer report; AI tools with capabilities similar to those of third-party social media background-check companies may qualify as consumer reporting agencies, triggering these obligations.

New York Law

In 2023, New York City Local Law 144, governing Automated Employment Decision Tools (AEDTs), took effect, prohibiting employers from using an automated employment decision tool unless the tool has been subject to a bias audit within one year of its use. Bias audits must be conducted by an independent auditor who is not involved in using or developing the AEDT, ensuring an unbiased evaluation of whether the tool has a disparate impact based on race, ethnicity, or gender. Employers and employment agencies must make the audit results publicly accessible on their websites. Additionally, NYC employers must notify candidates and employees when AEDTs are being used, providing details about the data collected and the job qualifications assessed. These requirements aim to promote transparency, fairness, and accountability in the use of AEDTs in hiring and promotion decisions.
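The core arithmetic behind such a bias audit can be illustrated in a few lines of Python. The group names and counts below are hypothetical, and an actual Local Law 144 audit must follow the Department of Consumer and Worker Protection's implementing rules and be performed by an independent auditor; this sketch simply computes selection rates and impact ratios, using the EEOC's traditional four-fifths rule as a reference threshold:

    # Illustrative bias-audit arithmetic: selection rates per category and
    # impact ratios relative to the most selected category. All group names
    # and counts are hypothetical.
    outcomes = {
        "group_a": (200, 90),  # (applicants screened, applicants selected)
        "group_b": (150, 45),
        "group_c": (100, 25),
    }

    selection_rates = {g: sel / total for g, (total, sel) in outcomes.items()}
    best_rate = max(selection_rates.values())

    for group, rate in sorted(selection_rates.items()):
        impact_ratio = rate / best_rate
        # The EEOC's four-fifths rule treats ratios below 0.8 as potential
        # evidence of adverse impact warranting further review.
        flag = "review" if impact_ratio < 0.8 else "ok"
        print(f"{group}: rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")

Reporting per-category selection rates and impact ratios in this form, rather than a single pass/fail verdict, is what gives the published audit results their transparency value.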

In parallel, New York State introduced bill A.8129 (2023), which would extend requirements similar to NYC's to employers statewide, emphasizing the need for transparency and fairness in AI-driven employment decisions. New Jersey has introduced bill A.4909, which would require employers to perform impact assessments and maintain transparency regarding the use of AI in employment practices. Maryland and Illinois have enacted laws prohibiting the use of facial recognition and video-analysis tools in job interviews without candidate consent. Meanwhile, the California Fair Employment and Housing Council is considering mandates to ban AI tools and tests that screen applicants based on race, gender, ethnicity, and other protected characteristics.

Liability of Business Owners

Employers using third-party AI services must consider potential liability. Under Title VII of the Civil Rights Act, employers can be held accountable for the discriminatory impacts of algorithmic decision-making tools, even if those tools are developed or managed by external vendors. If the software vendor acts as an authorized agent of the employer, the acts of the vendor's AI system can be attributed to the employer. This is critical when employers use third-party tools to select candidates, as any resulting discrimination based on race, color, religion, sex, or national origin could make the employer liable. Employers must therefore ensure that such tools are not only job-related but also do not unjustly disadvantage protected groups, adhering to the principle of business necessity and minimizing disparate impact. This due diligence is necessary even if the vendor claims the tool is non-discriminatory, as an incorrect assessment can still expose the employer to legal consequences.

Conclusion

The integration of AI in the workplace presents both opportunities and challenges. While AI can drive efficiency and innovation, it also raises significant legal and ethical concerns. As employers increasingly become legally liable for the actions and decisions of the AI software they use, they must navigate these complexities carefully, ensuring compliance with the ADA, Title VII of the Civil Rights Act, EEOC guidance, FTC regulations, and FCRA requirements. By adopting best practices for transparency, accountability, and non-discrimination, employers can harness the benefits of AI while minimizing risk and protecting the rights of their employees.

About the Author

Tyler Caffrey is a dedicated law student at the Benjamin N. Cardozo School of Law, with a strong focus on intellectual property and entertainment law. He has a rich academic background, including a Bachelor of Arts in Environmental Studies from Pace University. Tyler is deeply interested in exploring the intersection of law and technology, particularly in the areas of trademark law and artificial intelligence. His legal experience includes internships at notable organizations, where he has developed a keen understanding of complex legal issues and gained practical skills in contract review and intellectual property management.