
Guidance For Employers on Algorithmic Discrimination in New Jersey

4/30/2026

The New Jersey Attorney General and the Division on Civil Rights (DCR) have released guidance addressing “algorithmic discrimination” and outlining how the New Jersey Law Against Discrimination (NJLAD) applies when public and private organizations use automated decision-making tools. The guidance makes clear that existing anti-discrimination requirements apply with full force to decisions influenced by artificial intelligence (AI).

The DCR’s guidance reflects a recognition that employers across all industries are increasingly relying on AI-driven tools in their day-to-day operations. These tools range from generative AI assistants, such as ChatGPT and Microsoft Copilot, that support drafting, research, and other routine workflows, to applicant screening and hiring platforms, such as Workday, that use algorithms to evaluate resumes and inform personnel decisions. Given this increased reliance, the DCR guidance explains how automated systems can give rise to unlawful discrimination.

The DCR’s guidance applies broadly and is not limited to any particular sector or technology type. With that broad application in mind, we encourage both private- and public-sector organizations, and employers in particular, to focus on three key takeaways.

1. AI Design, Training, and Deployment

The DCR’s guidance underscores that employers should exercise care in selecting and implementing AI-based tools. Bias can enter an automated decision-making tool at three key stages:

  • Design: The choices a developer makes in designing an AI tool can introduce bias, either purposefully or inadvertently.
  • Training: AI tools learn from training data, and if that data is skewed, unrepresentative, lacks diversity, or contains errors, the tool itself can become biased. 
  • Deployment: Discrimination can occur when a tool is used in a purposely discriminatory manner, applied to decisions it was not designed for, or when a negative feedback loop is created.

Organizations should work proactively to ensure that their use of AI tools does not lead to discriminatory outcomes by evaluating how these tools are used and identifying patterns that may indicate disparate treatment or impact on protected groups.
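One common starting point for the kind of pattern-identification described above is an adverse-impact analysis of selection outcomes, such as the "four-fifths rule" long used by the EEOC under the Uniform Guidelines on Employee Selection Procedures: a selection rate for one group below 80% of the highest group's rate is commonly treated as initial evidence of disparate impact. The sketch below illustrates that calculation on purely hypothetical data; the function names and figures are illustrative, not drawn from the DCR's guidance.

```python
# Hedged illustration only: a minimal adverse-impact check using the
# "four-fifths rule." All group names and applicant counts are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate -- initial evidence of disparate impact."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: (rate / top) < threshold for group, rate in rates.items()}

# Hypothetical screening outcomes from an AI-assisted hiring tool.
outcomes = {
    "group_a": (48, 100),  # 48 of 100 applicants advanced
    "group_b": (30, 100),  # 30 of 100 applicants advanced
}
flags = four_fifths_check(outcomes)
# group_b's rate (0.30) is 62.5% of group_a's (0.48), below the 80% threshold,
# so group_b is flagged for further review.
```

A flag under this screen is not itself a legal conclusion; it is a signal that the outcome pattern warrants closer statistical and legal review.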

2. Compliance With Existing Laws

The DCR’s guidance emphasizes that the NJLAD applies regardless of whether discriminatory decisions are made by humans, AI, or through a combination of both. Additionally, the use of third‑party tools will not insulate an organization from liability if those tools contribute to discriminatory conduct. Ultimately, organizations remain responsible for ensuring that automated processes do not result in unlawful denial of opportunities, services, or benefits.

For employers in particular, the DCR’s guidance confirms that hiring and other decisions assisted by AI are subject to the NJLAD. Employers who rely on AI for hiring and other personnel decisions should therefore take proactive measures to ensure that AI-assisted decisions are not informed by latent and impermissible bias.

3. Practical Implications

Employers that use or are considering using AI in New Jersey should expect heightened scrutiny of:

  • How automated systems are incorporated into decision‑making processes, such as interviewing and hiring job candidates; 
  • Whether there are mechanisms in place to detect and remedy discriminatory patterns or outcomes; and
  • The extent to which organizations can explain, document, and justify algorithm‑driven decisions in compliance terms.

Litigation Spotlight: Mobley v. Workday, Inc.

The ongoing litigation in Mobley v. Workday, Inc. illustrates the real and present risks employers face when deploying AI-powered tools in employment decisions. In Mobley, a job applicant alleges that Workday’s screening tools discriminated against him on the basis of race, age, and disability. The plaintiff claims that he applied to over 100 positions at companies using Workday’s AI-driven hiring platform and was rejected each time, despite being qualified for many of the roles.

Mobley serves as a cautionary example for organizations operating under the DCR’s guidance. It demonstrates that the failure to audit, validate, and monitor AI tools can result in significant litigation exposure—and that reliance on vendor assurances alone is unlikely to shield employers from liability. 

Recommendation

Employers in New Jersey that currently use or intend to implement AI‑based tools should evaluate their systems and decision-making processes for potential discriminatory impacts, determine whether enhanced compliance protocols and recordkeeping practices are needed, and collaborate with technical, legal, and compliance personnel to ensure that any automated decision‑making aligns with the DCR’s guidance and the NJLAD.

Specifically, we recommend that employers take the following concrete steps before adopting any AI-based tool: (i) conduct an evaluation of each prospective AI-based tool, analyzing the tool’s design, training data, bias-testing protocols, accuracy, and known limitations; (ii) review any purchase or licensing agreement to assess the AI vendor’s willingness to indemnify the employer for discrimination, privacy, or other claims arising from the employer’s use of the tool; and (iii) involve legal, compliance, and human resources personnel early in the procurement process to ensure that contractual protections, audit rights, and ongoing monitoring obligations are built into the engagement.

Porzio’s team of employment and labor attorneys is ready to assist organizations in ensuring that their use of algorithmic or AI-based tools complies with the DCR’s guidance.
