In Case You Missed It: New Jersey Addresses Algorithmic Discrimination


Patricia Tsipras

February 13, 2025

Last month, the New Jersey Office of the Attorney General issued guidance and announced a new Civil Rights and Technology Initiative within the Division on Civil Rights to address the risks of “algorithmic discrimination” – discrimination stemming from the use of artificial intelligence and related advanced technologies.

The guidance provides an overview of automated decision-making tools, explores the benefits of those tools and risks of algorithmic discrimination that they pose, and outlines the protections under the New Jersey Law Against Discrimination (LAD) against algorithmic discrimination.

For purposes of New Jersey’s guidance, the term “automated decision-making tool” refers to any technological tool, including, but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process.

In the employment context, automated decision-making tools are used, for example, to help determine who sees a job advertisement, whether a human reviewer reads an applicant’s resume, whether an applicant is hired, whether an employee receives positive reviews or is promoted, and whether an employee is demoted or fired.

The guidance cautions that many automated decision-making tools accomplish their aims by using algorithms.  These algorithms analyze data, uncover correlations within that data, and, based on those correlations, make predictions or recommendations or generate new data.  In doing so, the tools can create classes of individuals who will be either advantaged or disadvantaged in ways that may exclude or burden them based on their protected characteristics.

The guidance addresses some common reasons – including design, training, and deployment – why using automated decision-making tools may result in discriminatory outcomes.

The guidance reminds us that the LAD prohibits all forms of discrimination, regardless of whether discriminatory conduct is facilitated by automated decision-making tools or solely by human practices.  Thus, for example, a covered employer cannot avoid liability under the LAD merely because it used or relied on an automated decision-making tool.

The LAD prohibits algorithmic discrimination on the basis of actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics.  Covered entities may violate the LAD if they use automated decision-making tools that result in disparate treatment[1] or a disparate impact.[2]

The LAD also prohibits algorithmic discrimination when it precludes or impedes reasonable accommodations, or when it precludes or impedes modifications to policies, procedures, or physical structures to ensure accessibility for people based on their disability, religion, pregnancy, or breastfeeding status.

The guidance provides examples of how automated decision-making tools can lead to disparate treatment, a disparate impact, or reasonable accommodation discrimination.  One example from the guidance that relates to reasonable accommodations is an employer who uses a tool to measure applicants’ typing speed.  Such a tool may not fairly or accurately assess the typing speed of an applicant who uses a non-traditional keyboard because of a disability.

New Jersey Employers

To decrease the risk of discriminatory outcomes and, thus, decrease your risk of liability under the LAD, it is critically important that you evaluate the design and testing of automated decision-making tools before you deploy them and that you continue to evaluate the tools regularly after you deploy them.

 

The author of this article, Patricia Tsipras, is a member of the Bar of Pennsylvania.  This article is designed to provide one perspective regarding recent legal developments, and is not intended to serve as legal advice in Pennsylvania, New Jersey, or any other jurisdiction, nor does it establish an attorney-client relationship with any reader of the article where one does not exist.  Always consult an attorney with specific legal issues.

 

[1] “Disparate treatment” involves conduct that treats a person differently because of their membership in an LAD-protected class.  It occurs if a policy or practice is intentionally discriminatory, or if a policy or practice is discriminatory on its face, even if no intention to discriminate exists.

[2] A “disparate impact” exists if use of the tools has a disproportionately negative effect on members of an LAD-protected class.  Policies and practices that result in a disparate impact are prohibited under the LAD unless they are necessary to achieve a substantial, legitimate, nondiscriminatory interest and no less discriminatory alternative exists that would achieve the same interest.  Disparate impact discrimination can occur even if policies and practices are facially neutral and are not motivated by discriminatory intent.

 