New York and New Jersey Introduce Bills to Regulate Artificial Intelligence in the Hiring Process
Most employers strive to create a diverse and inclusive workforce. To achieve that, some employers are turning to advanced automated decision tools such as artificial intelligence, data analytics, and statistical modeling in an attempt to eliminate bias from the hiring and retention process once and for all. Presumably, automated computational processes would use hard data and other objective measures, rather than the implicit biases of the human mind, to make decisions about potential and current employees. But the opposite may be true: automated decision tools may inadvertently reinforce certain biases, reproducing them through an artificial process designed and marketed as bias-free.
Potential Problems With Automated Decision Tools
Researchers and academics have for years expressed concern that “automated tools might introduce bias or entrench existing inequality -- especially if they are being inserted into an already discriminatory social system.” See Rachel Courtland, The Bias Detectives, 558 Nature 357 (June 21, 2018). In one real-world example, judges in Broward County, Florida, used commercial software to determine whether a person charged with a crime should be held in jail or released before trial. The software generated a score for each defendant designed to measure the chance of recidivism. A later journalistic report determined that a disproportionate number of black defendants were “false positives,” meaning the software classified them as high risk, but they never committed another crime. The software developer denied this claim. Id.
Automated decision tools have also caught the EEOC’s attention. In an October 2021 press release, EEOC Chair Charlotte A. Burrows stated that “these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”
Recent Legislative Action in New York and New Jersey
In response to these noted concerns, New York City passed a law, set to take effect in April 2023, that will regulate automated decision tools, in part by requiring these tools to be independently audited for bias. These “bias audits” will assess whether the tool in question causes a disparate impact on applicants based on their race, gender, disability, or any other category protected by law. The law also requires that candidates or employees who reside in the city be notified about the use of such tools in their assessment or evaluation for hire or promotion, as well as the job qualifications and characteristics used by the automated employment decision tool. Violators will be subject to a civil penalty.
At the state level, New York and New Jersey have introduced similar bills. New Jersey’s bill, introduced in the General Assembly as A4909 and in the Senate as S1926, requires “automated employment decision tools” sold within the state to be subject to a bias audit within one year before public sale, to include an annual bias audit service at no additional charge, and to carry a notice stating that the tool is subject to statutory requirements. The employer also must notify candidates, within 30 days, that: (1) they have been screened using such tools; (2) the tools are subject to an audit for bias; and (3) the tools assessed the candidate’s “job qualifications or characteristics.” Similar to New York City’s law, the bill includes a civil penalty for violators. It is unclear, at this time, whether the bill will pass. New York State’s bill provides terms similar to New York City’s law, but calls the bias audit a “disparate impact analysis.”
Most interesting of all is what these bills and laws do not include. They do not expressly prohibit employers from using automated decision tools with a suspicious record, or ones that show evidence of discriminatory effect through a bias audit. New York City’s law requires employers to publish a summary of the most recent bias audit on their website, and to advise employees on how to request an alternative selection process or accommodation, but it does not prohibit tools that show a troubling record. New Jersey’s bill does even less, requiring neither publication nor an alternative process. Instead, the bill implies that it is up to individual employees and state agencies to take necessary action, stating that it does not limit or otherwise curtail rights provided to a candidate or employee by law, i.e., the right to sue, and does not limit the right of state agencies to investigate and “enforce rights relating to bias and discrimination in employment.” New York State’s bill goes the farthest, requiring that the audit be submitted annually to the State, and permitting the attorney general to investigate violations of New York’s Labor Law based on the provided data.
Key Takeaways
- New York and New Jersey are attempting to address potential implicit biases inherent in automated decision tools, primarily through using independent “bias audits,” which assess whether the tools cause a disparate impact on a protected class.
- New York City’s law is scheduled to take effect in April 2023. New York State and New Jersey each have bills pending in their respective legislatures.
- These laws do not expressly prohibit tools shown to have a discriminatory effect.
- Employers must nevertheless be wary when using such tools. To the extent an employer is, or becomes, subject to the laws and bills described above, it should pay careful attention to the annual bias audit, which will give candidates, employees, and state agencies information that could fuel a lawsuit or other adverse action.
- Employers should consult with their attorneys when using such tools and when interpreting the data from an annual bias audit, where available. This is particularly true for multi-state employers, who are likely to face varying requirements from each state in which they engage in business.