Demo
New York — Apex Capital Partners HQ — Friday, 4:30pm
You are the HR Business Partner at Apex Capital Partners — a mid-size investment management firm with 850 employees across New York, Chicago, and Los Angeles.
Most of the office has left for the weekend. You’re about to close your laptop when a new email arrives — a polite enquiry from a rejected candidate.
This is a choose-your-own-adventure scenario. You’ll face real decisions that AI compliance professionals encounter — and your choices shape how the story unfolds.
Tip: Look for highlighted text throughout the scenario:
Legal references — click to read the relevant NYC LL144 section
Key terms — hover for a quick definition
From: Raymond Clarke <r.clarke@outlook.com>
To: Maya Torres <maya.torres@apexcapitalpartners.com>
Subject: Application for Senior Risk Analyst — Request for Feedback
Dear Ms Torres,
I hope this email finds you well. I recently applied for the Senior Risk Analyst position (Ref: ACP-2026-0847) and received notification that my application was not progressed to the interview stage.
I have 30 years of experience in risk management, including 8 years specifically in fintech regulatory compliance. I hold an MSc in Financial Risk Management from Columbia University and am a certified FRM holder.
I appreciate that competition for roles is strong, and I'm not suggesting I'm necessarily the best candidate. However, given my background, I'd genuinely appreciate understanding what areas of my profile fell short of your requirements. Any feedback would be valuable for my ongoing job search.
Thank you for your time and consideration.
Kind regards,
Raymond Clarke
Something about the email nags at you. Raymond's CV is strong — genuinely strong. 30 years in risk management, 8 in fintech compliance, Columbia-educated, FRM-certified. For a Senior Risk Analyst role, he's arguably overqualified.
You open HireLogic AI and pull up his application. The tool assigned him a score of 72 out of 100 — below the 80-point threshold your team set for interview invitations. But the platform doesn't show why. No explanation of the scoring. Just a number.
Curious, you export the rejection data for the last 3 months and sort by age. Your stomach drops.
Of the 11 candidates rejected in the most recent hiring round, 9 were over 50. The common factors dragging their scores down: 'adaptability potential' and 'cultural alignment' — metrics you've never seen defined anywhere. Meanwhile, zero candidates under 35 were rejected. Not one.
It's just past 5pm. The office is nearly empty. Interviews for 12 shortlisted candidates are scheduled for Monday morning. You have a spreadsheet showing a pattern that could be coincidence or could be systematic age discrimination.
Email Derek Langford to pause Monday's interviews until you investigate
The pattern is concerning enough to warrant a pause. If the tool is discriminating, every interview based on its shortlist is tainted. Derek coordinated 12 candidate schedules — he'll be furious. And you might be wrong.
Add Raymond to the shortlist manually and let interviews proceed
Raymond clearly deserves an interview. You can fix this one case now, investigate the broader pattern next week. The system will do this again on the next hire, but at least Raymond gets a fair shot on Monday.
Go home — you need more data before making accusations
9 out of 11 is a pattern, but it's a small sample. If you raise the alarm and you're wrong, you've undermined a tool the VP championed and damaged your credibility for nothing. Raymond has already been rejected — one weekend won't change that.

From: Maya Torres <maya.torres@apexcapitalpartners.com>
To: Derek Langford <derek.langford@apexcapitalpartners.com>
Subject: Urgent: Monday Interview Schedule — Data Review Needed
Derek,
I've identified an anomaly in HireLogic AI's rejection data that I believe warrants review before we proceed with Monday's interviews. I'd prefer to discuss the specifics in person rather than over email.
I know this is extremely late notice and I understand the scheduling implications. I wouldn't raise this if I didn't think it was important — both for the candidates and for our compliance position.
Could we meet first thing Monday at 8am, before the first interview slot?
Maya
Maya, I've spent two weeks coordinating these interviews. The hiring panel has blocked their entire Monday. Three candidates are travelling from other cities. You're asking me to blow up the schedule because of a 'data anomaly'? This had better be serious.
It is. I wouldn't ask if it weren't. 8am Monday — I'll have the data ready.
Fine. But I'm not cancelling the interviews. We'll meet at 8, and they'd better still go ahead at 9.
Derek is frustrated but hasn't refused. You've bought yourself the weekend to prepare and Monday morning to present it. Under § 20-871, it is unlawful to use an AEDT without a compliant bias audit and proper candidate notice.
§ 20-870 classifies recruitment AI as an automated employment decision tool, and § 20-871 prohibits its use without a current bias audit and published results — so when monitoring surfaces a risk the audit missed, it falls to the deployer to act on it.

You log into HireLogic AI and manually add Raymond Clarke to the interview shortlist. The system flags the override with an amber warning: 'Candidate score (72) below threshold (80). Manual override logged.'
Maya, I see you added a candidate manually to the shortlist? Raymond Clarke — score 72? That's below threshold. What happened?
His CV is exceptionally strong for this role. I felt the score didn't reflect his qualifications.
OK, but if we're going to override the AI, what's the point of using it? Derek isn't going to like this.
It's one candidate, Aisha. Let's just make sure he gets a fair interview.
Fine. But if anyone asks why we're cherry-picking candidates outside the AI's recommendations, that's on you.
Raymond will get an interview, but you've patched one symptom without investigating the disease. The 9 other rejected candidates over 50 won't get a manual override. Under Title VII & NYCHRL, human oversight must be effective — systematic, not ad hoc.
Title VII and the NYCHRL hold employers liable for discriminatory outcomes, which makes effective human oversight — including the ability to override decisions — essential. But a single manual override is not oversight; it's an exception. Effective oversight means a repeatable process.

Raymond Clarke
Risk Management Professional
Rejected again. 30 years in risk management. 8 years in fintech regulatory compliance. MSc from Columbia. FRM certified. Didn't even get an interview.
I'm not naming the company — this isn't about them specifically. But I'm starting to wonder whether the 'AI-powered recruitment' tools that companies are adopting are filtering out experience rather than filtering for it.
Is anyone else over 50 experiencing this? I'd genuinely like to know.
1,247 likes
Same experience here. Three rejections in a row from companies using automated screening. 28 years in financial services. Not one interview.
893 likes
I work in HR tech. Some of these tools use 'cultural fit' and 'adaptability' proxies that effectively penalise career stability and age. It's a known problem.
2,341 likes
I'm a journalist at the Financial Times and I'm working on a piece about AI recruitment bias. Raymond, would you be willing to speak with me? DM open.
You close your laptop and go home. Except Raymond doesn't spend the weekend waiting. By Monday morning, the post has 4,200 reactions and 380 comments. Someone has identified Apex Capital Partners.
Derek Langford sees the post before you do. He's at your desk at 7:30am Monday.
Under § 20-871, employers must not use an AEDT without a compliant bias audit. You identified a pattern suggesting the audit was inadequate and chose not to act. Under Title VII and NYCHRL, employers are liable for discriminatory outcomes from their hiring tools.
Regardless of what you did on Friday, the situation has converged. Derek Langford is at your desk. He's heard — through Aisha Osei, through LinkedIn, or through your email — that you've been 'questioning the AI tool.'
His expression is hard to read. He's not hostile, exactly, but he's guarded. He closes your office door and sits down.
Derek was the executive sponsor who brought HireLogic AI to Apex Capital Partners. He presented the business case to the board. He personally reported the 40% reduction in time-to-hire. The tool is, in many ways, his project.
Maya, I'm going to be direct. The tool works. Our time-to-hire is down 40%. The board cited it in last quarter's efficiency report. The CFO loves it. Are you really going to blow this up because one candidate complained?
It's not about one candidate, Derek. I ran the rejection data. Nine out of eleven rejected candidates over fifty scored below threshold. Zero candidates under thirty-five were rejected.
We rejected people under 35 too — for other roles, other rounds. And look at Raymond specifically: he's been at the same company for 12 years. Maybe the tool flagged low adaptability based on career trajectory, not age.
Present the data directly — this is age discrimination, whether the AI intended it or not
The pattern is clear: 9 of 11 over-50 candidates rejected on undefined metrics. Under Title VII and the NYCHRL, you're required to monitor for discriminatory outcomes — and as the deployer, Apex Capital Partners is liable for the tool's decisions, not the vendor.
Agree with Derek publicly but quietly flag the issue to Legal
Derek has a point — you might be wrong. But the risk is too high to ignore entirely. Let Legal investigate discreetly while the interviews proceed.
Accept Derek's explanation — career trajectory, not age, explains the pattern
He might be right. 'Adaptability' could legitimately correlate with career trajectory, not age. You don't have enough data to be certain. Focus on getting Raymond an interview and move on.

Derek, I hear what you're saying about career trajectory. But let me show you something. Here's the rejection data. I've highlighted age, score, and the two metrics that drove the low scores: 'adaptability potential' and 'cultural alignment.' These metrics aren't defined anywhere in the platform documentation. I've checked.
So?
So we're using an automated employment decision tool — recruitment AI is explicitly classified under § 20-870 — and we can't explain how it makes decisions. If Raymond Clarke files a complaint, we have no transparency documentation to show him — or the regulator.
The vendor assured us the tool was compliant.
The vendor's compliance is their problem. Our compliance — as deployers — is ours. § 20-871 is clear: we have to ensure the bias audit is current, the summary is posted, and candidates received notice. The question isn't whether I'm right or wrong about the cause. The question is what we do now that the pattern exists.
What are you proposing?
I want to bring Legal in. Today. And request the vendor's transparency documentation on how those metrics are calculated. If they can explain it, great. If they can't, we have a bigger conversation.
Fine. But I want to be in the room when Legal reviews this. And I want it on record that I'm cooperating, not being investigated.
Of course. This isn't about blame, Derek. It's about getting ahead of a problem before it gets ahead of us.
Derek shifts from 'you're wrong' to 'what do we do.' You've framed this as compliance, not accusation. That's the outcome you needed.
NYC LL144 distinguishes between providers (who build) and deployers (who use). Under § 20-871, deployers cannot use an AEDT without a current bias audit, public posting of results, and proper candidate notice. § 20-873 requires candidates be notified at least 10 business days before use.

From: Maya Torres <maya.torres@apexcapitalpartners.com>
To: Priya Nair <priya.nair@apexcapitalpartners.com>
Subject: Confidential: Potential NYC LL144 Compliance Issue — HireLogic AI
Priya,
I've identified a statistical pattern in our AI recruitment tool's rejection data that may indicate age-based discrimination. 9 of 11 rejected candidates over 50 in the latest round scored below threshold on metrics I can't find documentation for ('adaptability potential' and 'cultural alignment').
I've raised this informally with Derek Langford, who believes the pattern has a non-discriminatory explanation. He may be right. But given that recruitment AI is classified as an AEDT under § 20-870, I believe Legal should review the data independently.
Happy to discuss at your earliest convenience.
Maya
You make a fair point, Derek. The career trajectory explanation could account for some of the pattern. I'll dig into the data more before raising anything formally.
Good. Let's not create a crisis out of a coincidence. The interviews are at 10 — are we good?
We're good.
Priya responds within the hour. You've protected yourself with a paper trail. But the interviews proceed with a potentially tainted shortlist, and Derek believes the matter is closed.
Under § 20-871, it is unlawful to use an AEDT without a compliant bias audit and proper notice. Allowing it to continue while you investigate may constitute ongoing violations — each use is a separate offence.

You're probably right. Career trajectory is a legitimate signal. I'll make sure Raymond gets an interview and we'll keep an eye on the metrics going forward.
That's sensible. Look, I appreciate that you're thorough — that's why you're good at your job. But sometimes a pattern is just a coincidence.
The interviews proceed. Raymond is not among the candidates. Three weeks later, Priya Nair forwards you an FT article: 'AI recruitment tools under scrutiny as NYC Local Law 144 enforcement begins.' Her note: 'Maya — are we exposed here?'
You now have to explain you identified a pattern three weeks ago and accepted Derek's explanation without independent investigation. Under Title VII disparate impact doctrine, intent doesn't matter — only outcomes.
AI systems can discriminate through proxy variables. 'Career stability' correlates with age. Under § 20-872, the bias audit must calculate selection rates and impact ratios by protected categories. The fact the tool doesn't explicitly use 'age' is irrelevant if the outcome is discriminatory.
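The proxy mechanism described above can be made concrete with a toy sketch. Everything here is hypothetical — made-up candidates and an invented scoring rule, not HireLogic's actual model — but it shows how a tool that never sees age can still split outcomes cleanly by age band, because tenure correlates with age:

```python
# Hypothetical candidates: (age, years at current employer).
# The scoring rule below never reads the age field.
candidates = [
    (29, 2), (31, 3), (33, 1), (28, 2),
    (52, 11), (55, 14), (58, 9), (61, 16),
]

def adaptability_score(tenure):
    """Toy 'adaptability' metric: long tenure is read as low adaptability."""
    return max(0, 100 - 5 * tenure)

# With an 80-point threshold, every under-35 candidate advances
# and every over-50 candidate is rejected -- a disparate outcome
# produced entirely through the tenure proxy.
for age, tenure in candidates:
    score = adaptability_score(tenure)
    band = "over 50" if age >= 50 else "under 35"
    outcome = "reject" if score < 80 else "advance"
    print(f"{band}: score {score} -> {outcome}")
```

Under Title VII's disparate impact doctrine, the fact that age was never an input is no defence — the outcome is what counts.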

Legal is now involved. Priya Nair has contacted HireLogic AI's vendor. A video call is scheduled with Marcus Hale, the vendor's Head of Product.
Marcus, we need to understand how 'adaptability potential' and 'cultural alignment' are calculated. What data inputs drive those scores?
Those are part of our proprietary Talent Compatibility Engine. The specific weighting and feature interactions are commercially sensitive.
Under § 20-873 of NYC LL144, candidates must be told what job qualifications and characteristics the tool evaluates. If we can't explain those metrics, we can't provide compliant notice.
We provide a compliance summary document. I can send that over.
We've read it. 'Scores are generated using a multi-factor model incorporating role-relevant competency indicators.' That doesn't tell us how 'cultural alignment' is calculated.
What I can offer is our AI Compliance Audit Package — a comprehensive review by our internal compliance team. 6–8 weeks, $30,000.
Six to eight weeks?
They can't or won't explain how their own tool makes decisions. That's a § 20-873 problem — theirs and ours.
The vendor confirmed what you suspected: a black box. 'Proprietary' is not a defence under NYC LL144. § 20-873 requires the notice to specify what characteristics the tool evaluates. The vendor is offering to self-audit for $30,000 — a clear conflict of interest under § 20-872, which requires an independent auditor.
Marcus just sent through HireLogic's bias audit summary. Under § 20-872, the audit must be conducted by an independent auditor and must publish adverse impact ratios. Read carefully — 5 phrases violate LL144 requirements. Click to flag them. You only get 6 flags total.
HireLogic AI — Compliance Audit Summary (2024)
There are 5 compliance red flags hidden in this document. You can flag up to 6 phrases — choose carefully.
This audit was conducted by our internal compliance team in partnership with the product engineering division. The audit covered all candidate-facing scoring functions deployed across January–December 2024.
HireLogic's scoring model evaluates candidates across role-relevant competency dimensions. Assessment inputs are drawn from application data, structured interview responses, and job-fit indicators. The evaluation framework applies a proprietary validation methodology to score demographic outcomes. Results showed broadly comparable outcomes across candidate groups.
This summary is provided to deployers as part of HireLogic's standard compliance documentation package. Adverse impact ratios are not reported in this summary, as they are considered commercially sensitive intellectual property.
The audit dataset comprised candidates assessed between Q2 and Q4 2024. Dataset size and demographic composition are not disclosed. The tool was evaluated against industry benchmarks for enterprise hiring software. HireLogic holds compliance documentation for all 14 US client deployments.
Under the 4/5ths (80%) rule, the selection rate for any protected group must be at least 80% of the highest group's rate. § 20-872 requires bias audits to report adverse impact ratios. Use the slider to explore the threshold — then see HireLogic's actual figures.
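The 4/5ths arithmetic is simple enough to sketch in a few lines. The applicant counts below are hypothetical — the scenario never discloses Apex's full pipeline:

```python
def impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of group A's selection rate to group B's.
    Under the 4/5ths (80%) rule, a ratio below 0.8 signals
    potential adverse impact against group A."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Hypothetical figures: 2 of 11 over-50 candidates advanced,
# versus 10 of 12 under-35 candidates.
ratio = impact_ratio(2, 11, 10, 12)
print(f"Impact ratio: {ratio:.2f}")  # 0.22 -- far below the 0.8 threshold
print("Adverse impact flag:", ratio < 0.8)
```

This is exactly the figure a § 20-872 bias audit is supposed to report for each protected category — which is why its absence from HireLogic's summary matters.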
Maya, we've invested $200,000 in this platform. The vendor wants $30,000 on top. I've got three open roles we can't fill fast enough. The CFO will ask why time-to-hire went back up. What exactly are you recommending?
Suspend the tool immediately until we get a compliant independent bias audit
If the audit wasn't independent and didn't disclose impact ratios, we've been using an AEDT in violation of § 20-871. Accept the political cost. Apex Capital Partners stops potentially discriminating today, not in 6–8 weeks.
Continue with the tool but add mandatory human review of every AI rejection
Add a human checkpoint: every candidate below threshold gets manual review. Flag any candidate over 50 who fails on 'adaptability' or 'cultural alignment' for senior HR review.
Purchase the vendor's $30,000 compliance audit and continue using the tool
The audit will confirm whether there's a real problem. Six to eight weeks isn't ideal, but it's better than suspending a tool that saves 200 hours per quarter based on an unverified pattern.

Derek, I'm recommending we suspend HireLogic AI effective immediately. I'll draft the formal recommendation for Priya and the CFO today.
Immediately? We process 200 applications a month through that tool. We'll be back to manual screening — that's the 200 hours per quarter I saved us.
I know. But § 20-875 allows fines of up to $500 for a first violation and $500–$1,500 for each subsequent violation, with each affected candidate counting as a separate violation. We process thousands of candidates a year — daily penalties plus class-action exposure under Title VII could run into the millions.
That's the worst case. The DCWP isn't going to come after us for a recruitment tool.
LL144 penalties are just the start. The real exposure is Title VII — a class-action age discrimination suit from rejected candidates. And that's before reputational damage. If the FT runs a story about Apex Capital's AI discriminating against older candidates, what happens to the Chicago office's client relationships?
How long?
Until we get an independent bias audit with proper impact ratios. If the vendor can provide one, we turn it back on. If they can't, we find a vendor who can.
The board is going to want to know why.
Better they hear it from us than from a regulator.
The hardest recommendation to make and the most defensible. You've given Derek a clear path back: the tool isn't banned, it's suspended pending a compliant audit. Proportionate and professional.
§ 20-871 prohibits use of an AEDT without a compliant bias audit within the past year. § 20-875 makes each day of non-compliant use a separate violation. Demonstrating you suspended as soon as you identified the issue is powerful evidence of good faith.
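The order of magnitude behind Maya's warning can be sketched with a back-of-envelope model. The throughput figures and the per-candidate, per-day counting assumption are hypothetical illustrations, not a penalty calculation the DCWP has endorsed:

```python
def ll144_exposure(candidates_per_day, days, first_fine=500, subsequent_fine=1500):
    """Rough worst-case civil penalty estimate under § 20-875.
    Assumes each affected candidate is a separate violation:
    first-day violations at up to $500 each, violations on
    subsequent days at up to $1,500 each."""
    first_day = candidates_per_day * first_fine
    later_days = candidates_per_day * (days - 1) * subsequent_fine
    return first_day + later_days

# Hypothetical: ~10 candidates screened per day over 90 days
# of non-compliant use.
print(f"${ll144_exposure(10, 90):,}")  # $1,340,000
```

Even before any Title VII class action, the daily-accrual structure is what makes prompt suspension the economically rational move — every additional day adds to the total.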

I'm recommending we keep the tool but add mandatory human review. Every rejected candidate gets manual review. Any candidate over 50 below threshold on 'adaptability' or 'cultural alignment' gets escalated to senior HR.
That's more work for your team.
Less work than a regulatory investigation. And we can keep using the tool's screening while we push the vendor for a proper independent audit.
Reasonable interim measure. But this doesn't fix the underlying § 20-871 issue. If the bias audit wasn't independent, we're still technically non-compliant regardless of human review.
Agreed. Reviewers won't see the AI score until after their own assessment. Blind review first, then comparison.
Better. But we still need a compliant independent audit. This is temporary, not permanent.
A pragmatic compromise. But you're adding human oversight to compensate for a system you can't explain. Under Title VII & NYCHRL, human oversight must be effective — which requires understanding the system's logic, which you don't have.
Title VII and NYCHRL require the overseer to be able to detect and correct discriminatory outcomes. Without access to the scoring logic, oversight is limited to pattern detection, not root cause analysis. The § 20-872 independent audit obligation remains unresolved.

I think the audit is the right path. The vendor knows their system best. Six to eight weeks is manageable.
Maya, I have concerns. We're asking the vendor to audit themselves. That's a conflict of interest — and § 20-872 explicitly requires the auditor be independent of the employer and the vendor.
They have internal compliance people. It's standard practice.
Standard practice that LL144 doesn't accept. The law is explicit: the audit must be performed by an independent auditor. If the DCWP investigates, 'we paid the vendor to audit themselves' won't hold up.
If the vendor's audit comes back clean — which it almost certainly will — and a regulator later finds the same pattern you found, where does that leave us?
Let's just do the vendor audit and move on.
A self-audit by the vendor is unlikely to find issues with their own product. Meanwhile, the tool continues screening for 6–8 more weeks. Under § 20-872, the bias audit must be performed by an independent auditor — not the vendor.
§ 20-872 is explicit: the bias audit must be conducted by an independent auditor not employed by the employer or the vendor. Self-audits are not compliant. An independent third-party audit is the only defensible path.
The decisions you made as Maya Torres rippled outward — to Raymond Clarke, to Apex Capital Partners' board, to the next 200 candidates. Here's what happened.
Custom Programme clients get a scenario built around their actual screening tools and workflows — plus an always-on compliance coach that answers questions specific to NYC Local Law 144 and their HR technology.
NYC LL144
AEDT Regulation
§ 20-870
AEDT Definition
§ 20-872
Bias Audit Requirements
§ 20-873
Candidate Notice
Title VII & NYCHRL
Disparate Impact
§ 20-871
Prohibited Use
§ 20-873
Alternative Process
§ 20-875
Civil Penalties
Ask your L&D team to share the team leaderboard from your LMS dashboard. Can your department beat the rest?
Demo Complete
You navigated the compliance dilemma. Try a different path to see how the story changes.