
Is Your ATS Breaking the Law? NYC’s AI Hiring Rule, Two Years Later

  • Writer: J W
  • Sep 3
  • 5 min read
AI Hiring law in NYC

For many job seekers in New York City, their résumé is first screened by an Applicant Tracking System (ATS) before a human recruiter reviews it. In some cases, the system may even generate automated rejection messages without a recruiter ever reading the application.


These résumé screeners, ranking engines, and scoring tools are exactly the type of Automated Employment Decision Tools (AEDTs) regulated under NYC Local Law 144 of 2021.


Since July 5, 2023, employers in NYC can’t just use an ATS or AI hiring tool without meeting three requirements:


  1. Run an independent bias audit before the tool is used, and update it every year.

  2. Publish a public summary of that audit on their website.

  3. Give candidates 10 business days’ notice that a tool will be used in the hiring process.


Penalties: $500 for a first violation and up to $1,500 for each subsequent one, with each day of noncompliant use counting separately. Enforcement sits with the Department of Consumer and Worker Protection (DCWP).


📊 What we’ve learned so far


A 2024 peer-reviewed study (Wright et al., FAccT 2024; full author list in the references below) examined 391 NYC employers. Researchers found only 18 audit reports and 13 transparency notices posted online. Only 11 employers had both.


They called this “null compliance”: the law is in force, but most job seekers see little evidence of it. 


Since that study (data collected Oct 24–Nov 9, 2023), no new compliance data has been published, and DCWP has not reported detailed enforcement activity.


⚖️ Strengths vs. Gaps

Strengths


  • First U.S. law to require algorithmic hiring audits

  • Keeps accountability with the employer, not just the AI tool vendor 

  • Guarantees candidates advance notice and a public summary


Gaps


  • Scope ambiguity: The law covers tools that “substantially assist or replace” discretionary decisions. Some employers have argued that even minimal human review takes a tool out of scope.

  • No duty to remediate: DCWP’s FAQ confirms employers need only audit and publish, not necessarily fix bias. 

  • Data carve-outs: 

    • Groups representing < 2% of applicants may be excluded from published results.

    • Auditors can use synthetic or pooled data if employer data is lacking. 

    • Example: If 1,000 people apply and only 15 are Asian women (1.5%), that group’s outcomes may be left out of the public audit. This protects privacy but can hide real disparities.

  • Auditor standards: The law requires an “independent auditor” but sets no accreditation. Companies often rely on vendor-arranged audits or consulting firms. This leaves room for uneven quality and conflicts of interest.
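The audit math behind these carve-outs is worth seeing concretely: for each demographic category, an auditor computes a selection rate, then divides it by the rate of the most-selected category to get an impact ratio. Here is a minimal sketch using hypothetical numbers (the DCWP final rule defines the actual categories and methodology; the group names and counts below are illustrative only):

```python
# Sketch of the selection-rate / impact-ratio math used in LL144 bias audits,
# with the 2% applicant-share carve-out flagged. Hypothetical data.

applicants = {
    # category: (number who applied, number selected)
    "Group A": (600, 180),
    "Group B": (385, 77),
    "Asian women": (15, 3),  # 1.5% of applicants -- below the 2% threshold
}

total = sum(n for n, _ in applicants.values())

# Selection rate = selected / applied, per category
rates = {g: sel / n for g, (n, sel) in applicants.items()}
best = max(rates.values())  # rate of the most-selected category

for group, (n, sel) in applicants.items():
    share = n / total
    ratio = rates[group] / best  # impact ratio vs. the highest-rate group
    note = " (may be excluded from published results: <2% of applicants)" if share < 0.02 else ""
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f}{note}")
```

In this example the small group’s impact ratio (0.67) signals a disparity, yet the group could lawfully be omitted from the public summary — which is exactly the transparency gap described above.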


🌐 The bigger picture


  • Federal oversight: The EEOC has confirmed that Title VII liability applies to employers using AI in hiring—even when tools come from third-party vendors.

  • Standards emerging: NIST AI Risk Management Framework (2023) sets a continuous cycle of Govern → Map → Measure → Manage. Also, ISO/IEC 42001:2023 is the first international AI management system standard, offering structured lifecycle governance.

  • New York State Bill A00567: Introduced in January 2023 but stalled in Jan 2024 when its enacting clause was stricken. No new hearings or votes are scheduled. For now, there is no progress, but employers should keep watch. A statewide framework could return with stronger enforcement powers. 

  • Other states: New Jersey and others are considering similar bias audit or transparency rules modeled on NYC’s approach.


🧭 What this means in practice


For employers & recruiters in NYC: Know your duties


  • Check your scope. If your ATS ranks, scores, tags, or filters and substantially assists hiring or promotion, LL144 applies.

  • Be transparent. Post audit summaries with job titles, data sources, and impact ratios. Explain exclusions under the 2% rule or when test data is used.

  • Go beyond the minimum. LL144 doesn’t require fixes, but broader compliance practice points another way. The U.S. DOJ’s “Evaluation of Corporate Compliance Programs” asks, among other things, whether your program actually works in practice. Treat audits as a starting point, not a checkbox.

  • Think in systems. Frameworks like NIST’s AI RMF and ISO/IEC 42001 help assign roles, record decisions, and monitor outcomes, making compliance continuous rather than a one-off project.


For job seekers in NYC: Know your rights


  • You have the right to 10 business days’ advance notice if AI will screen your application.

  • You should be able to find a bias audit summary on the employer’s site. If it’s missing, ask.

  • If these are missing, you can file a complaint with NYC’s DCWP.


My take


Local Law 144 was a bold step. For the first time in the U.S., it put AI-driven hiring tools under a regulatory spotlight. But two years later, we still don’t know how often it is enforced or whether it delivers the accountability promised.


For employers, the direction is clear: don’t wait for regulators. Put auditable processes in place now. Align audits with frameworks like NIST’s AI RMF or ISO/IEC 42001. Treat remediation as part of compliance, not an optional extra. Both regulators and candidates want proof that your programs work in practice. 


For candidates, awareness is power. You have the right to advance notice if an ATS or AI tool is used, and you should be able to see a summary of the bias audit. If you don’t, you can ask.


Albany’s A00567 bill has stalled, but the issue is not going away. A statewide framework is likely, and employers that prepare now will be better positioned when the rules expand.


As I move toward admission to the New York Bar next week, I see a clear need for people who can connect the dots between law, compliance, HR, and AI governance. That is the work I am committed to. If your team is exploring AI in hiring, or if you’re a job seeker facing algorithmic screening, I’d be glad to connect and share perspectives.


References


  • Wright, L., Muenster, R. M., Vecchione, B., Qu, T., Cai, P., Smith, A., COMM/INFO 2450 Student Investigators, Metcalf, J., & Matias, J. N. (2024). Null Compliance: NYC Local Law 144 and the Challenges of Algorithm Accountability. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. ACM.

  • NYC Department of Consumer and Worker Protection. (2023). Automated Employment Decision Tools – Final Rule & FAQs.

  • EEOC. (2023). Technical Assistance: Assessing Adverse Impact in Software, Algorithms, and AI Used in Employment Selection Procedures Under Title VII.

  • NIST. (2023). AI Risk Management Framework (AI RMF 1.0).

  • ISO. (2023). ISO/IEC 42001:2023 – Artificial intelligence — Management system.

  • NY State Assembly Bill A00567, status page (last action Jan 10, 2024: enacting clause stricken).


🔹 This article is part of Workplace Law NY, where I share weekly updates on New York employment law, AI compliance, and workplace rights. I also share visual explainers on Instagram @nylaborlaw.


⚖️ Disclaimer: The views expressed here are my own and are for general educational and informational purposes only. They are not legal advice and do not create an attorney–client relationship. I will be admitted to the New York Bar on September 8, 2025; until then I am not licensed to practice law in New York. Do not send confidential information through LinkedIn or social media. For advice on your facts, consult a qualified attorney.
