Module 1

The Shortlist

London — NovaTech Financial HQ — Friday, 4:30pm


Portrait of Sarah Chen, HR Business Partner, looking professional and confident
Your Role

Sarah Chen

HR Business Partner at NovaTech Financial — a mid-size fintech with 600 employees across London, Frankfurt, and Dublin.

Most of the office has left for the weekend. You’re about to close your laptop when a new email arrives — a polite enquiry from a rejected candidate.

Before You Start

How This Works

This is a choose-your-own-adventure scenario. You’ll face real decisions that AI compliance professionals encounter — and your choices shape how the story unfolds.

+3 Best practice — the response a compliance expert would choose
+1 Reasonable but incomplete — you’re on the right track
−1 Risky or non-compliant — learn why this path creates problems

Tip: Look for highlighted text throughout the scenario:

§ Article references: click to read the relevant AI Act article

Key terms: hover for a quick definition


From: David Okonkwo <d.okonkwo@outlook.com>

To: Sarah Chen <sarah.chen@novatech-financial.com>

Subject: Application for Senior Risk Analyst — Request for Feedback

Dear Ms Chen,

I hope this email finds you well. I recently applied for the Senior Risk Analyst position (Ref: NVT-2026-0847) and received notification that my application was not progressed to the interview stage.

I have 30 years of experience in risk management, including 8 years specifically in fintech regulatory compliance. I hold an MSc in Financial Risk Management from the London School of Economics and the FRM certification.

I appreciate that competition for roles is strong, and I'm not suggesting I'm necessarily the best candidate. However, given my background, I'd genuinely appreciate understanding what areas of my profile fell short of your requirements. Any feedback would be valuable for my ongoing job search.

Thank you for your time and consideration.

Kind regards,
David Okonkwo

Friday, 4:32 PM
Narrator

Something about the email nags at you. David's CV is strong — genuinely strong. 30 years in risk management, 8 in fintech compliance, LSE-educated, FRM-certified. For a Senior Risk Analyst role, he's arguably overqualified.

You open TalentScreen AI and pull up his application. The tool assigned him a score of 72 out of 100 — below the 80-point threshold your team set for interview invitations. But the platform doesn't show why. No explanation of the scoring. Just a number.

Curious, you export the rejection data for the last 3 months and sort by age. Your stomach drops.

Of the 11 candidates rejected in the most recent hiring round, 9 were over 50. The common factors dragging their scores down: 'adaptability potential' and 'cultural alignment' — metrics you've never seen defined anywhere. Meanwhile, zero candidates under 35 were rejected. Not one.
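The check Sarah is doing by eye has a standard form in HR analytics: compare interview-selection rates across age groups and apply the four-fifths (80%) rule for adverse impact. A minimal sketch — the applicant counts below are hypothetical, since the scenario only gives the rejection side:

```python
# Adverse-impact check using the four-fifths (80%) rule.
# Counts are illustrative, mirroring the pattern Sarah found:
# over-50 applicants rejected far more often than under-35s.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants who advance to interview."""
    return selected / applicants

def four_fifths_ratio(rate_a: float, rate_b: float) -> float:
    """Impact ratio: the disadvantaged group's rate over the favoured group's."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical round: 12 over-50 applicants (9 rejected), 11 under-35 (0 rejected)
over_50_rate = selection_rate(selected=3, applicants=12)    # 0.25
under_35_rate = selection_rate(selected=11, applicants=11)  # 1.0

ratio = four_fifths_ratio(over_50_rate, under_35_rate)
print(f"Impact ratio: {ratio:.2f}")    # Impact ratio: 0.25
print("Adverse impact:", ratio < 0.8)  # Adverse impact: True
```

A ratio below 0.8 is the conventional red flag for adverse impact; on its own it doesn't prove discrimination, but it is exactly the kind of pattern a deployer is expected to notice and investigate.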

Portrait of Sarah Chen with a concerned expression
Decision Point 1 of 3
Friday, 5:02 PM

It's just past 5pm. The office is nearly empty. Interviews for 12 shortlisted candidates are scheduled for Monday morning. You have a spreadsheet showing a pattern that could be coincidence or could be systematic age discrimination.

What do you do?

Your choice

Email James Hartley to pause Monday's interviews until you investigate

The pattern is concerning enough to warrant a pause. If the tool is discriminating, every interview based on its shortlist is tainted. James coordinated 12 candidate schedules — he'll be furious. And you might be wrong.

Your choice

Add David to the shortlist manually and let interviews proceed

David clearly deserves an interview. You can fix this one case now, investigate the broader pattern next week. The system will do this again on the next hire, but at least David gets a fair shot on Monday.

Your choice

Go home — you need more data before making accusations

9 out of 11 is a pattern, but it's a small sample. If you raise the alarm and you're wrong, you've undermined a tool the VP championed and damaged your credibility for nothing. David has already been rejected — one weekend won't change that.

Portrait of Sarah Chen with a concerned expression

From: Sarah Chen <sarah.chen@novatech-financial.com>

To: James Hartley <james.hartley@novatech-financial.com>

Subject: Urgent: Monday Interview Schedule — Data Review Needed

James,

I've identified an anomaly in TalentScreen AI's rejection data that I believe warrants review before we proceed with Monday's interviews. I'd prefer to discuss the specifics in person rather than over email.

I know this is extremely late notice and I understand the scheduling implications. I wouldn't raise this if I didn't think it was important — both for the candidates and for our compliance position.

Could we meet first thing Monday at 8am, before the first interview slot?

Sarah

Friday, 5:25 PM +3
James (reply, 5:48 PM)

Sarah, I've spent two weeks coordinating these interviews. The hiring panel has blocked their entire Monday. Three candidates are travelling from other cities. You're asking me to blow up the schedule because of a 'data anomaly'? This had better be serious.

You (reply, 5:55 PM)

It is. I wouldn't ask if it weren't. 8am Monday — I'll have the data ready.

James (reply, 6:01 PM)

Fine. But I'm not cancelling the interviews. We'll meet at 8, and they'd better still go ahead at 9.

James is frustrated but hasn't refused. You've bought yourself the weekend to prepare and Monday morning to present it. Under Article 26 of the AI Act, deployers must monitor for risks to fundamental rights.

Portrait of Sarah Chen, HR Business Partner, looking professional and confident
Friday, 5:30 PM +0
The Manual Override

You log into TalentScreen AI and manually add David Okonkwo to the interview shortlist. The system flags the override with an amber warning: 'Candidate score (72) below threshold (80). Manual override logged.'

Priya (Hiring Manager, reply Monday 8:12 AM)

Sarah, I see you added a candidate manually to the shortlist? David Okonkwo — score 72? That's below threshold. What happened?

You

His CV is exceptionally strong for this role. I felt the score didn't reflect his qualifications.

Priya

OK, but if we're going to override the AI, what's the point of using it? James isn't going to like this.

You

It's one candidate, Priya. Let's just make sure he gets a fair interview.

Priya

Fine. But if anyone asks why we're cherry-picking candidates outside the AI's recommendations, that's on you.

David will get an interview, but you've patched one symptom without investigating the disease. The 9 other rejected candidates over 50 won't get a manual override. Under Article 14, human oversight must be effective — systematic, not ad hoc.

Portrait of Sarah Chen looking thoughtful and contemplative
DO

David Okonkwo

Risk Management Professional

Rejected again. 30 years in risk management. 8 years in fintech regulatory compliance. MSc from LSE. FRM certified. Didn't even get an interview.

I'm not naming the company — this isn't about them specifically. But I'm starting to wonder whether the 'AI-powered recruitment' tools that companies are adopting are filtering out experience rather than filtering for it.

Is anyone else over 50 experiencing this? I'd genuinely like to know.

1,247 likes

Same experience here. Three rejections in a row from companies using automated screening. 28 years in financial services. Not one interview.

893 likes

I work in HR tech. Some of these tools use 'cultural fit' and 'adaptability' proxies that effectively penalise career stability and age. It's a known problem.

2,341 likes

I'm a journalist at the Financial Times and I'm working on a piece about AI recruitment bias. David, would you be willing to speak with me? DM open.

Saturday — Sunday −2
The Weekend That Wasn't Quiet

You close your laptop and go home. Except David doesn't spend the weekend waiting. By Monday morning, the post has 4,200 reactions and 380 comments. Someone has identified NovaTech Financial.

James Hartley sees the post before you do. He's at your desk at 7:30am Monday.

Portrait of Sarah Chen with a determined expression
Monday, 9:15 AM
Monday Morning

Regardless of what you did on Friday, the situation has converged. James Hartley is at your desk. He's heard — through Priya, through LinkedIn, or through your email — that you've been 'questioning the AI tool.'

His expression is hard to read. He's not hostile, exactly, but he's guarded. He closes your office door and sits down.

James was the executive sponsor who brought TalentScreen AI to NovaTech. He presented the business case to the board. He personally reported the 40% reduction in time-to-hire. The tool is, in many ways, his project.

Portrait of Sarah Chen with a concerned expression
Decision Point 2 of 3
Monday, 9:20 AM — Sarah's Office
James

Sarah, I'm going to be direct. The tool works. Our time-to-hire is down 40%. The board cited it in last quarter's efficiency report. The CFO loves it. Are you really going to blow this up because one candidate complained?

You

It's not about one candidate, James. I ran the rejection data. Nine of the eleven candidates we rejected were over fifty. Zero candidates under thirty-five were rejected.

James

We rejected people under 35 too — for other roles, other rounds. And look at David specifically: he's been at the same company for 12 years. Maybe the tool flagged low adaptability based on career trajectory, not age.

James

I'm not saying ignore it. I'm saying — are you sure you're not seeing a pattern that isn't there? Because if you raise this and you're wrong, you've just told the board their flagship efficiency initiative is discriminatory. That's not a bell you can un-ring.

How do you respond to James?

Your choice

Present the data directly — this is age discrimination, whether the AI intended it or not

The pattern is clear: 9 of the 11 rejected candidates were over 50, scored down on undefined metrics. Under Article 26, you're required to monitor for discriminatory output. As the deployer, NovaTech is liable for the tool's decisions, not the vendor.

Your choice

Agree with James publicly but quietly flag the issue to Legal

James has a point — you might be wrong. But the risk is too high to ignore entirely. Let Legal investigate discreetly while the interviews proceed.

Your choice

Accept James's explanation — career trajectory, not age, explains the pattern

He might be right. 'Adaptability' could legitimately correlate with career trajectory, not age. You don't have enough data to be certain. Focus on getting David an interview and move on.

Portrait of Sarah Chen with a concerned expression
Monday, 9:35 AM +3
Confronting the Data
You

James, I hear what you're saying about career trajectory. But let me show you something. Here's the rejection data. I've highlighted age, score, and the two metrics that drove the low scores: 'adaptability potential' and 'cultural alignment.' These metrics aren't defined anywhere in the platform documentation. I've checked.

James

So?

You

So we're using a high-risk AI system — recruitment AI is explicitly classified as high-risk under Article 6 — and we can't explain how it makes decisions. If David Okonkwo files a complaint, we have no transparency documentation to show the regulator.

James

The vendor assured us the tool was compliant.

You

The vendor's compliance is their problem. Our compliance — as deployers — is ours. Article 26 is clear: we have to monitor for risks to fundamental rights. The question isn't whether I'm right or wrong about the cause. The question is what we do now that the pattern exists.

James (long pause)

What are you proposing?

You

I want to bring Legal in. Today. And request the vendor's transparency documentation on how those metrics are calculated. If they can explain it, great. If they can't, we have a bigger conversation.

James

Fine. But I want to be in the room when Legal reviews this. And I want it on record that I'm cooperating, not being investigated.

You

Of course. This isn't about blame, James. It's about getting ahead of a problem before it gets ahead of us.

James shifts from 'you're wrong' to 'what do we do.' You've framed this as compliance, not accusation. That's the outcome you needed.

Portrait of Sarah Chen, HR Business Partner, looking professional and confident

From: Sarah Chen <sarah.chen@novatech-financial.com>

To: Helen Park <helen.park@novatech-financial.com>

Subject: Confidential: Potential AI Act Compliance Issue — TalentScreen AI

Helen,

I've identified a statistical pattern in our AI recruitment tool's rejection data that may indicate age-based discrimination. 9 of the 11 candidates rejected in the latest round were over 50, all scoring below threshold on metrics I can't find documentation for ('adaptability potential' and 'cultural alignment').

I've raised this informally with James Hartley, who believes the pattern has a non-discriminatory explanation. He may be right. But given that recruitment AI is classified as high-risk under Article 6, I believe Legal should review the data independently.

Happy to discuss at your earliest convenience.

Sarah

Monday, 9:40 AM +1
You

You make a fair point, James. The career trajectory explanation could account for some of the pattern. I'll dig into the data more before raising anything formally.

James

Good. Let's not create a crisis out of a coincidence. The interviews are at 10 — are we good?

You

We're good.

Helen responds within the hour. You've protected yourself with a paper trail. But the interviews proceed with a potentially tainted shortlist, and James believes the matter is closed.

Portrait of Sarah Chen with a determined expression
Monday, 9:40 AM −2
Accepting the Explanation
You

You're probably right. Career trajectory is a legitimate signal. I'll make sure David gets an interview and we'll keep an eye on the metrics going forward.

James

That's sensible. Look, I appreciate that you're thorough — that's why you're good at your job. But sometimes a pattern is just a coincidence.

The interviews proceed. David is not among the candidates. Three weeks later, Helen Park forwards you an FT article: 'AI recruitment tools under scrutiny as EU AI Act enforcement begins.' Her note: 'Sarah — are we exposed here?'

You now have to explain that you identified the pattern three weeks ago and accepted James's explanation without independent investigation. Intent doesn't matter under Article 9.

Portrait of Sarah Chen looking serious during a difficult conversation
Wednesday, 2:00 PM
The Vendor's Response

Legal is now involved. Helen Park has contacted TalentScreen AI's vendor. A video call is scheduled with Marcus Webb, the vendor's Head of Product.

Helen (Legal)

Marcus, we need to understand how 'adaptability potential' and 'cultural alignment' are calculated. What data inputs drive those scores?

Marcus (Vendor)

Those are part of our proprietary Talent Compatibility Engine. The specific weighting and feature interactions are commercially sensitive.

Helen

Under Article 13 of the AI Act, high-risk AI systems must provide sufficient transparency for deployers to understand the output. We're the deployers.

Marcus

We provide a compliance summary document. I can send that over.

Helen

We've read it. 'Scores are generated using a multi-factor model incorporating role-relevant competency indicators.' That doesn't tell us how 'cultural alignment' is calculated.

Marcus

What I can offer is our AI Compliance Audit Package — a comprehensive review by our internal compliance team. 6–8 weeks, EUR 30,000.

James

Six to eight weeks?

Helen (muted, to you and James)

They can't or won't explain how their own tool makes decisions. That's an Article 13 problem — theirs and ours.

The vendor confirmed what you suspected: a black box. 'Proprietary' is not a defence under the AI Act. Article 13 requires transparency. The vendor is offering to self-audit for EUR 30,000 — a clear conflict of interest.

Portrait of Sarah Chen with a concerned expression
Decision Point 3 of 3
Wednesday, 2:45 PM — After the vendor call
James

Sarah, we've invested EUR 200,000 in this platform. The vendor wants EUR 30,000 on top. I've got three open roles we can't fill fast enough. The CFO will ask why time-to-hire went back up. What exactly are you recommending?

What do you recommend?

Your choice

Suspend the tool immediately until the vendor provides Article 13 transparency documentation

If you can't explain how it makes decisions, you can't ensure those decisions are lawful. Accept the political cost. NovaTech stops potentially discriminating today, not in 6–8 weeks.

Your choice

Continue with the tool but add mandatory human review of every AI rejection

Add a human checkpoint: every candidate below threshold gets manual review. Flag any candidate over 50 who fails on 'adaptability' or 'cultural alignment' for senior HR review.

Your choice

Purchase the vendor's EUR 30,000 compliance audit and continue using the tool

The audit will confirm whether there's a real problem. Six to eight weeks isn't ideal, but it's better than suspending a tool that saves 200 hours per quarter based on an unverified pattern.

Portrait of Sarah Chen with a concerned expression
Wednesday, 4:00 PM +3
Suspension
You

James, I'm recommending we suspend TalentScreen AI effective immediately. I'll draft the formal recommendation for Helen and the CFO today.

James

Immediately? We process 200 applications a month through that tool. We'll be back to manual screening — that's the 200 hours per quarter I saved us.

You

I know. But Article 99 allows fines up to EUR 15 million or 3% of global annual turnover, whichever is higher. NovaTech's turnover was EUR 340 million. Three percent is EUR 10.2 million.

James

That's the maximum. No regulator is going to fine us EUR 10 million for a recruitment tool.

You

Even 1% is EUR 3.4 million. And that's before reputational damage. If the FT runs a story about NovaTech's AI discriminating against older candidates, what happens to the Frankfurt office's regulator relationships?

James (long silence)

How long?

You

Until we get transparency documentation we can review. If the vendor can explain the algorithm, we turn it back on. If they can't, we find a vendor who can.

James

The board is going to want to know why.

You

Better they hear it from us than from a regulator.

The hardest recommendation to make and the most defensible. You've given James a clear path back: the tool isn't banned, it's suspended pending transparency. Proportionate and professional.
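The exposure Sarah quotes follows the Article 99 formula: the cap is the higher of a fixed EUR 15 million and 3% of worldwide annual turnover. A quick sketch of that arithmetic, using the turnover figure from the scenario:

```python
# Article 99 administrative fine cap: the higher of a fixed amount
# and a percentage of worldwide annual turnover.

FIXED_CAP_EUR = 15_000_000
TURNOVER_PCT = 3  # 3% of worldwide annual turnover

def fine_cap(turnover_eur: int) -> int:
    """Maximum fine: whichever of the two limbs is higher."""
    return max(FIXED_CAP_EUR, turnover_eur * TURNOVER_PCT // 100)

novatech_turnover = 340_000_000
print(novatech_turnover * TURNOVER_PCT // 100)  # 10200000, the figure Sarah cites
print(fine_cap(novatech_turnover))              # 15000000: the fixed cap binds here
```

For a company of NovaTech's size the fixed EUR 15 million limb is the binding cap; the 3% limb only takes over above EUR 500 million in turnover.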

Portrait of Sarah Chen with a resolute and decisive expression
Thursday, 10:00 AM +1
The Human Checkpoint
You

I'm recommending we keep the tool but add mandatory human review. Every rejected candidate gets manual review. Any candidate over 50 below threshold on 'adaptability' or 'cultural alignment' gets escalated to senior HR.

James

That's more work for your team.

You

Less work than a regulatory investigation. And we can keep using the tool's screening while we push the vendor for transparency.

Helen (Legal)

Reasonable interim measure. But this doesn't fully satisfy Article 14. Human oversight must be effective, not performative. If reviewers rubber-stamp the AI scores, we're exposed.

You

Agreed. Reviewers won't see the AI score until after their own assessment. Blind review first, then comparison.

Helen

Better. But we still need vendor transparency documentation. This is temporary, not permanent.

A pragmatic compromise. But you're adding human oversight to compensate for a system you can't explain. Under Article 14, human oversight must enable full understanding of the system's capacities — which you don't have.
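The blind-review checkpoint Sarah describes can be sketched as a small workflow rule: the human assessment is locked in before the AI score is consulted, and over-50 candidates scored down on the undefined metrics are escalated. The names, David's age, and the exact escalation rule below are illustrative, not taken from the scenario:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReviewRecord:
    candidate_id: str
    age: int
    ai_score: int
    flagged_metrics: list = field(default_factory=list)
    human_assessment: Optional[str] = None  # recorded first, blind to the AI score
    escalated: bool = False

def blind_review(record: ReviewRecord, assessment: str) -> ReviewRecord:
    """Record the human assessment before the AI score is revealed (Art. 14 sketch)."""
    if record.human_assessment is not None:
        raise ValueError("Assessment already recorded")
    record.human_assessment = assessment
    # Escalation rule from the scenario: over-50 candidates scored down
    # on the undefined metrics go to senior HR review.
    if record.age > 50 and any(
        m in ("adaptability potential", "cultural alignment")
        for m in record.flagged_metrics
    ):
        record.escalated = True
    return record

david = ReviewRecord("NVT-2026-0847", age=56, ai_score=72,
                     flagged_metrics=["adaptability potential"])
blind_review(david, assessment="Strong fit: 30y risk management, FRM")
print(david.escalated)  # True
```

The point of the structure is the ordering: because the assessment field is filled before anyone looks at `ai_score`, the review cannot degrade into rubber-stamping the model, which is Helen's Article 14 concern.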

Portrait of Sarah Chen with a resolute and decisive expression
Thursday, 11:00 AM −1
The Vendor's Audit
You

I think the audit is the right path. The vendor knows their system best. Six to eight weeks is manageable.

Helen (Legal)

Sarah, I have concerns. We're asking the vendor to audit themselves. That's a conflict of interest.

James

They have internal compliance people. It's standard practice.

Helen

Standard practice that regulators don't accept. If we end up in front of a national authority, 'we paid the vendor to audit themselves' won't inspire confidence.

Helen

If the vendor's audit comes back clean — which it almost certainly will — and a regulator later finds the same pattern you found, where does that leave us?

James

Let's just do the vendor audit and move on.

A self-audit by the vendor is unlikely to find issues with their own product. Meanwhile, the tool continues screening for 6–8 more weeks. Under Article 9, risk management must include independent testing for bias.

Advanced Track

Monday, 8:15 AM

Your Friday email worked. But overnight, things escalated.

JAMES

Before we start — the board approved Frankfurt expansion on Friday. TalentScreen will handle recruitment across all three offices. Contract signed at 4pm.

YOU

That changes the compliance picture significantly. Cross-border deployment of a high-risk AI system triggers additional obligations.

JAMES

The vendor assured us it's compliant in all EU jurisdictions. Helen signed off. Are you saying the CEO made a mistake?

Advanced 2 of 3

TalentScreen is now processing candidates across three EU jurisdictions. James has board backing. Helen signed the contract. What do you recommend?

Your choice

Commission an independent conformity assessment under Article 43 and suspend cross-border deployment until complete

Cross-border expansion is a substantial modification. The vendor's self-assessment doesn't transfer. Article 26(5) requires suspension if you have reason to believe it presents a risk.

Your choice

Implement a human review panel for all AI-rejected candidates while keeping the tool operational

Address the immediate bias risk with Article 14 human oversight. Maintains operational continuity. Review the conformity question in parallel.

Your choice

Proceed with Frankfurt deployment with enhanced bias monitoring dashboards

The data shows a potential issue, not a proven one. Put monitoring in place to detect problems early rather than disrupting a board-approved expansion.

Monday, 9:20 AM +3
The Line in the Sand
YOU

James, the Frankfurt expansion isn't just a new office — it's a new jurisdiction. The vendor's self-certification was for UK deployment. Cross-border use is a substantial modification under Article 43. We need an independent conformity assessment before TalentScreen processes a single Frankfurt candidate.

JAMES

You're telling me to suspend a tool the board approved 72 hours ago. Do you have any idea what that conversation looks like?

YOU

I know exactly what it looks like. It looks like the compliance team doing their job before the German regulator does it for us. The BfDI doesn't accept "the vendor said it was fine" as a defence. And the potential fine is up to €15 million or 3% of global turnover — whichever is higher.

JAMES

(long pause) How long does this assessment take?

YOU

Eight to twelve weeks if we engage an accredited body this week. I'll have a shortlist of assessors on your desk by end of day.

HELEN (joining the call)

James just messaged me. Sarah, is this as serious as you're suggesting?

YOU

Helen, I'd rather explain a ten-week delay to the board than a regulatory investigation to shareholders. I'm recommending an emergency board briefing this Thursday.

HELEN

...Book it. And get me a one-page brief by Wednesday evening.

You spotted the critical distinction that most professionals miss. The vendor's self-certification doesn't transfer when the deployment context changes. Cross-border expansion is a substantial modification that triggers new conformity obligations. James is frustrated, but Helen's response tells you the board will listen when the risk is quantified.

Monday, 9:25 AM +1
The Safety Net
YOU

I'm proposing a human review panel. Every candidate TalentScreen rejects gets reviewed by a trained assessor before the decision is finalised. We keep the tool running, but no one falls through the cracks.

JAMES

Now that's more reasonable. How many extra hours are we talking?

YOU

About 30 hours per quarter across three reviewers. Far less than going back to fully manual screening.

JAMES

I can live with that. Set it up. And this resolves the compliance issue?

PRIYA (Legal, joining by email)

Sarah, I've reviewed your proposal. Human oversight addresses Article 14, and I support the panel. However — I need to flag that cross-border deployment to Frankfurt may require a fresh conformity assessment under Article 43. Human review doesn't resolve that question. We should discuss.

YOU

...Understood. I'll set up time with you this afternoon.

James is relieved — you've found a solution that doesn't derail the expansion. But Priya has spotted the gap you missed. Human review addresses symptoms of the bias problem. It doesn't address whether the system itself is lawfully deployed in a new jurisdiction. If a German regulator asks for conformity documentation, "we added human review" isn't sufficient.

Two Weeks Later — Tuesday, 3:15 PM −1
The Complaint
JAMES

Good news — the Frankfurt pipeline is already filling. TalentScreen processed 45 candidates in the first batch. The dashboards look clean.

YOU

That's... good to hear. What's the rejection profile looking like?

JAMES

Haven't dug into the details. The dashboard says bias indicators are within normal range. Why?

That afternoon, Priya forwards you an email. A 54-year-old candidate in Frankfurt, rejected by TalentScreen with a score of 68, has filed a complaint with the Hessian data protection authority. His lawyer references the EU AI Act directly. He wants to know how "adaptability potential" was calculated and why his 22 years of banking experience scored lower than graduates with two years.

PRIYA (Legal)

Sarah, I need the transparency documentation for TalentScreen's scoring methodology. The candidate's lawyer has given us 14 days to respond. Do we have it?

YOU

...No. We don't.

Commercial pressure won. You expanded a potentially discriminatory system to a new jurisdiction while hoping dashboards would catch what you'd already identified. The dashboards measured what TalentScreen chose to surface — not the metrics driving the discrimination. Under Article 26(5), deployers must suspend systems they have reason to believe present a risk. You had that reason two weeks ago.

Wednesday, 2:00 PM — The Vendor Pushes Back

MARCUS (TalentScreen)

Our legal team reviewed Article 6. TalentScreen recommends — it doesn't decide. Under Article 6(3), we're not high-risk.

YOU

Article 6(2) references Annex III directly — which lists "AI systems intended to be used for recruitment" without qualifying the automation level.

MARCUS

We have clients in 14 EU countries. None have raised this. Your own legal team signed off on our compliance pack.

JAMES

The board meets Thursday. If we suspend, I explain why we're back to manual screening at 200 hours per quarter.

Advanced 3 of 3

The vendor claims they're not high-risk. Your legal counsel agreed six months ago. The board meets tomorrow. What do you recommend?

Your choice

Present a formal risk assessment to the board — recommend a 90-day compliance programme with independent audit

Accept the commercial cost. The vendor's Article 6(3) argument has merit but creates unacceptable risk if a regulator disagrees. A fundamental rights impact assessment under Article 27 is required regardless.

Your choice

Keep TalentScreen but require human review of ALL decisions, plus quarterly bias audits

Pragmatic middle ground. Human review satisfies Article 14, bias audits demonstrate diligence. The board keeps their tool, candidates get oversight. Not perfect, but defensible.

Your choice

Accept legal counsel's position that TalentScreen is "decision-support" and not high-risk — document your concerns formally

Your legal team cleared it. The vendor has 14 EU clients. Maybe the Article 6(3) interpretation is correct. Document concerns to protect yourself, but don't blow up a board-approved strategy.

Thursday, 10:00 AM — Board Room +3
The Hardest Right Answer
HELEN

Let me make sure I understand. You're asking the board to approve a 90-day pause on a tool I personally signed off on, less than a week ago.

YOU

I'm asking the board to approve a compliance programme that protects a £2.1 billion company from a regulatory action that could cost €15 million. The tool works. The question is whether it works lawfully across three jurisdictions. Right now, we can't prove it does.

HELEN

And the vendor's position that they're not high-risk?

YOU

The vendor's interpretation has arguable merit under Article 6(3). But if a regulator disagrees — and the European AI Office has signalled that recruitment tools will be scrutinised early — we bear the risk as deployers. Not them. The independent audit settles the question before a regulator asks it.

JAMES

For the record, I think this is overcautious. But I understand the logic.

HELEN

(to the board) I'm approving the 90-day programme. Sarah, I want weekly progress reports. And I want the independent assessor's name on my desk by Friday.

You chose compliance integrity over commercial convenience, even when your own legal counsel disagreed and the CEO was initially hostile. Helen's anger gave way to respect when you quantified the risk. The 90-day programme with independent audit is the gold standard — it costs political capital today, but it's the position you want when the regulator calls.

Thursday, 10:30 AM — Board Room +1
The Middle Ground
YOU

I'm proposing we keep TalentScreen operational with two conditions: mandatory human review of every decision, and quarterly bias audits conducted by an external firm. The tool stays. The candidates get protected.

JAMES

That I can sell to the board. We keep the efficiency gains and show we're taking oversight seriously.

HELEN

Approved. Sarah, make sure the first audit is completed before the Frankfurt office opens in Q3.

The board accepts. James is visibly relieved. Six months later, an email arrives from the European AI Office — a sector-wide regulatory inquiry into AI recruitment tools. They want conformity documentation, fundamental rights impact assessments, and evidence of Article 13 transparency compliance.

PRIYA (Legal)

Sarah, the human review logs and bias audits help. They show good faith. But they're asking for a conformity assessment we never did and a fundamental rights impact assessment we never conducted. We have 30 days to respond.

Pragmatic but not bulletproof. Human review plus audits is defensible — it demonstrates diligence and catches individual cases. But it sidesteps the fundamental question: is this system lawfully deployed? You've bought time, not compliance. The compromise helps but it isn't airtight when the regulator comes knocking.

Three Months Later — Friday, 4:45 PM −1
The Investigation
YOU (internal monologue)

You documented your concerns in a formal memo to Priya three months ago. Filed it. Moved on. TalentScreen kept processing candidates across all three offices. The age pattern continued — 73% of rejected candidates over 50 scored below threshold on "adaptability potential." You saw the quarterly report. You said nothing.

The European AI Office announces an investigation into recruitment AI across the financial services sector. NovaTech is on the list. Priya calls an emergency meeting.

PRIYA (Legal)

They want everything. Conformity assessment, fundamental rights impact assessment, transparency documentation, deployment logs. We have 60 days. Sarah — what do we actually have?

YOU

We have the vendor's original compliance pack and my memo from three months ago flagging the concerns.

PRIYA

So you identified a risk, documented it, and the system kept running for three more months. That memo doesn't protect the company, Sarah. It incriminates us. It proves we knew.

HELEN

How did we get here?

CYA is not compliance. Documenting concerns protects you personally in a narrow sense — but it actively harms the organisation by creating a paper trail of known, unaddressed risk. "My legal team said it was fine" isn't a defence when deployers bear independent obligations. And every candidate processed after your memo is a candidate NovaTech knowingly exposed to a potentially discriminatory system.

Learning Moment

High-Risk AI Classification

In the previous situation, the key issue was recognising that TalentScreen AI processes job applications, which places it squarely within the Annex III high-risk use cases.

AI in recruitment → Annex III → High-risk → Full compliance required

Article 6 classifies AI in employment as high-risk. This triggers: human oversight (Art. 14), transparency (Art. 13), data governance (Art. 10), and risk management (Art. 9).

As a deployer (an organisation that uses an AI system under its own authority, as opposed to the provider who built it; under the AI Act, deployers carry their own compliance obligations), NovaTech has independent obligations under Article 26 — even if the vendor claims compliance.

📖 Guided · 2 of 3

Remember: TalentScreen is high-risk under Article 6. You have obligations under Article 26.

Monday morning. James pushes back — the tool saved 200 hours/quarter. What do you do about the bias pattern?

Your choice

Present the bias data and recommend pausing the tool until the vendor provides transparency documentation

Article 26(5) requires deployers to suspend use and inform the provider when they have reason to believe the system presents a risk. The data suggests age discrimination, and Article 13 documentation should explain how the tool reaches its decisions.

Your choice

Add a human reviewer to check all AI rejections before they're finalised

Human oversight (Article 14) is required for high-risk systems. This catches discriminatory rejections before they affect candidates.

Your choice

Wait for more data — one pattern doesn't prove discrimination

Maybe the pattern is coincidental. Acting too quickly could damage your relationship with the board.

Monday, 9:15 AM +3
Confronting the Data
YOU

James, I need you to look at this. Nine of eleven rejected candidates over 50 scored below threshold. Zero candidates under 35 were rejected. The two metrics driving the scores — "adaptability potential" and "cultural alignment" — aren't defined anywhere in the vendor's documentation.

JAMES

It saved us 200 hours last quarter, Sarah. You want me to go back to the board and say we're pausing it because of a spreadsheet?

YOU

I want you to go back to the board and say we caught a potential age discrimination pattern before a candidate's lawyer did. Under Article 26, we're required to suspend a system when we have reason to believe it presents a risk. This data IS that reason.

JAMES

(long silence) ...How long?

YOU

Until the vendor provides proper transparency documentation under Article 13. If they can explain how those metrics work, and the explanation is non-discriminatory, we turn it back on. If they can't — we've dodged a bullet.

JAMES

Fine. But you're presenting this to Helen. And you'd better have the maths ready.

James reluctantly agrees. He's not happy, but you've framed the risk in terms he can't ignore — a candidate's lawyer finding the pattern first. You have the weekend to prepare a formal brief for Helen. The tool is paused. No more candidates will be processed until you have answers.

Monday, 9:20 AM +1
The Review Panel
YOU

I'm recommending we add a human reviewer to check every AI rejection before it's finalised. No candidate gets screened out without a person confirming the decision.

JAMES

That's reasonable. We keep the tool, candidates get a second look. How quickly can you set it up?

YOU

By end of week. Three trained reviewers, rotating on a schedule.

JAMES

Good. Problem solved.

The review panel catches three more questionable rejections in the first two weeks — all candidates over 45, all scoring low on "adaptability potential." The reviewers override the AI and advance them. But something nags at you: the reviewers can see that the tool rejects certain candidates. They can't see why. They're catching symptoms. The system still can't explain its logic.

YOU (to yourself, reviewing the logs)

We're putting a safety net under a bridge we're not sure is structurally sound. If someone asks how "adaptability potential" is calculated, we still can't answer.

Human review addresses Article 14 and catches individual cases of bias. But the underlying system remains a black box. Without transparency documentation under Article 13, you can't explain why the tool makes the decisions it does — only that you sometimes disagree with the output.

The Following Week — Wednesday, 11:40 AM -1
The Pattern Continues
YOU (reviewing the latest batch)

Another round processed. Twelve new candidates. Three rejections — all over 50. All scored below threshold on "adaptability potential." The pattern isn't an anomaly anymore. It's a trend.

You pull the full data. In two months of TalentScreen operation, 14 of 16 rejected candidates over 50 scored below threshold on the same opaque metric. Zero candidates under 35 have been rejected. You open LinkedIn to distract yourself and freeze.

JAMES (Teams message, 2:15 PM)

Have you seen David Okonkwo's LinkedIn post? It's got 400 comments. He's naming us. Well, not us specifically — but "a London fintech using AI to screen out experienced candidates." His former colleague at Barclays just shared it.

YOU

I saw it.

JAMES

Helen wants a meeting. Today. She's asking what we knew and when we knew it.

YOU

...I flagged the pattern two weeks ago. I was waiting for more data.

JAMES

You flagged it to ME. And I told you to wait. Helen's going to want to know why neither of us escalated.

Waiting was not compliance. 9 of 11 was already a pattern — 14 of 16 is a crisis. Every day you waited, more candidates were potentially discriminated against. Under Article 26(5), you had reason to believe the system presented a risk to fundamental rights. The obligation was to act, not to gather a statistically perfect dataset.
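Disparities like the one Sarah spotted can be quantified with the four-fifths (80%) rule, a common first-pass screen for adverse impact in selection procedures. The pool sizes below are hypothetical, invented for illustration, because the scenario only reports rejection counts; this is a sketch, not NovaTech's actual data.

```python
def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Ratio of selection rates between two groups.

    Under the conventional four-fifths (80%) rule, a ratio below 0.8
    for the disadvantaged group is a red flag for adverse impact.
    """
    return (selected_a / total_a) / (selected_b / total_b)

# Hypothetical pools for illustration: suppose 20 over-50 applicants
# (11 rejected, so 9 advanced) and 40 under-35 applicants (0 rejected).
ratio = adverse_impact_ratio(9, 20, 40, 40)
print(f"Selection-rate ratio, over-50 vs under-35: {ratio:.2f}")  # 0.45
```

A ratio below 0.8 is evidence of disparity, not proof of unlawful discrimination — but under Article 26(5), reason to believe there is a risk is already enough to trigger the duty to act.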

Learning Moment

Deployer vs Provider

Deployers and providers have separate obligations. Even if TalentScreen claims compliance, NovaTech has its own duties:

Provider = builds the AI · Deployer = uses the AI

Article 26: Deployers must use the system per instructions, ensure human oversight, monitor risks, keep logs, and suspend if there's a risk.

The vendor saying "we're compliant" doesn't discharge YOUR obligations.

📖 Guided · 3 of 3

Remember: As deployer, NovaTech has independent obligations under Article 26.

The vendor wants €30,000 for a transparency audit. James says the CFO won't approve it. What do you recommend?

Your choice

Suspend the tool until the vendor provides proper documentation

The potential fine (up to €15M or 3% of global annual turnover, whichever is higher) far exceeds manual screening costs. Article 26(5) requires suspension if you have reason to believe there's a risk.

Your choice

Keep the tool but add human review of every rejection and document everything

Addresses immediate risk. Human review satisfies Article 14. Documentation shows good faith.

Your choice

Do nothing — the vendor has 14 EU clients and none have had issues

Maybe you're overreacting. The vendor seems confident and your legal team cleared it.

Monday, 3:00 PM — James's Office +3
The Numbers Don't Lie
JAMES

The CFO won't approve €30,000 for an audit we might not need. And now you want to suspend the tool entirely? We'll be back to 200 hours of manual screening per quarter.

YOU

Let me give you different numbers. Potential fine under the AI Act: up to €15 million or 3% of global turnover. NovaTech's turnover last year was £2.1 billion. Three percent is £63 million. Manual screening costs £48,000 a year. Which number do you want to present to the CFO?

JAMES

(pause) ...You've done the maths.

YOU

I have. And I've drafted a one-page brief for Helen. The recommendation is a temporary suspension while we require the vendor to provide proper Article 13 documentation. If they comply, we could be back online in four to six weeks. If they can't explain their own system, we find a vendor who can.

JAMES

Four to six weeks I can live with. A €15 million fine I cannot. Send me the brief — I'll co-sign it.

Suspension is the right call. James is unhappy but the maths is irrefutable — £48,000 in manual screening versus £63 million in potential fines isn't a close decision. The tool is paused. The candidates are protected. And you've positioned this as temporary, not permanent — giving the vendor a clear path to reactivation through compliance.
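The comparison James co-signs can be checked in a few lines. All figures come from the dialogue above; the scenario quotes the Article 99 cap in euros and NovaTech's turnover in sterling, and that mix is preserved here rather than converted.

```python
# Figures as quoted in the scenario (currencies left as stated).
turnover = 2_100_000_000            # NovaTech annual turnover, £
fixed_cap = 15_000_000              # Article 99 fixed ceiling, €
turnover_cap = 0.03 * turnover      # 3% of worldwide annual turnover
manual_screening = 48_000           # manual screening cost per year, £

# For an undertaking, the Act applies whichever ceiling is higher.
print(f"3% of turnover: £{turnover_cap:,.0f}")
print(f"Worst-case exposure vs manual cost: {turnover_cap / manual_screening:,.1f}x")
```

The worst-case exposure is more than a thousand times the annual cost of manual screening — which is why the decision "isn't a close decision."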

Three Weeks Later — Thursday, 2:30 PM +1
Better Than Nothing
YOU (reviewing the first monthly report)

The review panel has overridden 7 of 43 rejections in three weeks. All seven were candidates over 45. All scored low on "adaptability potential." The human reviewers are catching the worst outcomes.

James is satisfied. The board received your documentation showing proactive oversight. The candidates who would have been unfairly rejected are getting interviews. On paper, the system looks responsible.

JAMES

The review panel is working. We caught seven bad decisions. I'd call that a success.

YOU

We caught seven outputs we disagreed with. We still don't know why the system generates them. If a regulator asks how "adaptability potential" is calculated, we can show them our override logs. We can't show them how the AI works.

JAMES

Isn't that the vendor's problem?

YOU

...That's the part I'm not sure about.

Human review plus documentation is better than nothing — significantly better. You're catching discriminatory outputs and creating an audit trail that shows good faith. But the underlying system is unchanged. The AI still processes candidates the same way. You're filtering its decisions, not fixing its logic. If a regulator determines the system itself is non-compliant, your workarounds don't satisfy the full conformity requirements.

Two Months Later — Tuesday, 8:50 AM -1
The LinkedIn Post

Two months of silence. TalentScreen keeps processing. You stopped checking the rejection data. The vendor has 14 EU clients. Your legal team cleared it. Maybe you were overreacting.

JAMES (Teams message, 8:52 AM)

Have you seen LinkedIn this morning? David Okonkwo just posted a 1,200-word essay about age discrimination in fintech hiring. He names AI screening tools. He doesn't name us specifically but the details are unmistakable. It's already got 2,000 reactions.

YOU

I'm reading it now.

The post is devastating. Okonkwo writes about 30 years of risk management experience, a stellar track record, and being rejected by an AI tool that scored his "adaptability potential" at 31 out of 100. A journalist from the Financial Times has already commented asking to DM. By lunchtime, the post has 8,000 reactions and three former NovaTech candidates have replied with similar stories.

HELEN (emergency call, 1:15 PM)

I need everyone on this call to tell me: did we know about this? Was there any indication our AI tool was discriminating against older candidates?

JAMES

(silence)

YOU

...I identified a statistical pattern two months ago. I decided to wait for more data before escalating.

HELEN

You knew. For two months. And the system kept running.

"Everyone else is doing it" was never a defence — and now it's a crisis. The vendor's other clients haven't been publicly named by a rejected candidate with 15,000 LinkedIn followers. Your legal team's clearance was based on incomplete information. Under the AI Act, ignorance you could have corrected is not a defence. And you had the data to correct it two months ago.

Six Months Later

The decisions you made as Sarah Chen rippled outward — to David Okonkwo, to NovaTech's board, to the next 200 candidates. Here's what happened.

Your Result
/ 9

Your Decisions

Key Lessons

1. Recruitment AI is classified as high-risk under Article 6 — it carries the full weight of compliance obligations
2. Deployers are liable for AI output even if the provider built the system — 'the vendor assured us' is not a defence
3. Proxy discrimination is still discrimination — a metric that correlates with age triggers the same obligations
4. Human oversight must be effective, not performative — rubber-stamping AI decisions doesn't constitute compliance
5. Transparency is not optional — if you can't explain how a high-risk AI system makes decisions, you can't lawfully use it
6. Speed of response matters — identifying a risk and delaying action weakens your position with regulators
7. Self-audits by vendors are inherently conflicted — independent review is the defensible standard
8. The AI Act protects people, not efficiency metrics — a 40% reduction in time-to-hire means nothing if the shortlist is discriminatory

Key Legal References

Article 4: AI Literacy

Article 6 + Annex III: High-Risk Classification

Article 9: Risk Management

Article 13: Transparency

Article 14: Human Oversight

Article 26: Deployer Obligations

Article 50: Transparency for Users

Article 99: Penalties

Ask your L&D team to share the team leaderboard from your LMS dashboard. Can your department beat the rest?

Next Scenario

In the next scenario, you'll face a different AI Act challenge: what happens when your company's customer service chatbot starts giving financial advice it wasn't designed to give — and a customer loses money following it? Article 50 transparency obligations meet real-world harm.

Custom Programme Preview

What If This Scenario Used Your AI Tools?

Custom Programme clients get a scenario built around their actual AI systems — plus an always-on compliance coach that answers questions specific to their tools and regulations.

Talk to Us About Custom Training →

Module 1 Complete

The Shortlist

You navigated the compliance dilemma. Try a different path to see how the story changes.