Module 1
London — NovaTech Financial HQ — Friday, 4:30pm
You are an HR Business Partner at NovaTech Financial — a mid-size fintech with 600 employees across London, Frankfurt, and Dublin.
Most of the office has left for the weekend. You’re about to close your laptop when a new email arrives — a polite enquiry from a rejected candidate.
This is a choose-your-own-adventure scenario. You’ll face real decisions that AI compliance professionals encounter — and your choices shape how the story unfolds.
Tip: Look for highlighted text throughout the scenario:
§ Article references — click to read the relevant AI Act article
Key terms — hover for a quick definition
From: David Okonkwo <d.okonkwo@outlook.com>
To: Sarah Chen <sarah.chen@novatech-financial.com>
Subject: Application for Senior Risk Analyst — Request for Feedback
Dear Ms Chen,
I hope this email finds you well. I recently applied for the Senior Risk Analyst position (Ref: NVT-2026-0847) and received notification that my application was not progressed to the interview stage.
I have 30 years of experience in risk management, including 8 years specifically in fintech regulatory compliance. I hold an MSc in Financial Risk Management from the London School of Economics and am a certified FRM holder.
I appreciate that competition for roles is strong, and I'm not suggesting I'm necessarily the best candidate. However, given my background, I'd genuinely appreciate understanding what areas of my profile fell short of your requirements. Any feedback would be valuable for my ongoing job search.
Thank you for your time and consideration.
Kind regards,
David Okonkwo
Something about the email nags at you. David's CV is strong — genuinely strong. 30 years in risk management, 8 in fintech compliance, LSE-educated, FRM-certified. For a Senior Risk Analyst role, he's arguably overqualified.
You open TalentScreen AI and pull up his application. The tool assigned him a score of 72 out of 100 — below the 80-point threshold your team set for interview invitations. But the platform doesn't show why. No explanation of the scoring. Just a number.
Curious, you export the rejection data for the last 3 months and sort by age. Your stomach drops.
Of the 11 candidates rejected in the most recent hiring round, 9 were over 50. The common factors dragging their scores down: 'adaptability potential' and 'cultural alignment' — metrics you've never seen defined anywhere. Meanwhile, zero candidates under 35 were rejected. Not one.
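A check like this takes minutes to script. Here is a minimal sketch of the analysis, assuming the export is a CSV with hypothetical age and score columns (TalentScreen's actual export format isn't shown in the scenario). It applies the four-fifths rule, a common first screen for disparate impact: flag any group whose selection rate falls below 80% of the best-performing group's.

```python
# Minimal disparate-impact screen over a rejection export.
# ASSUMPTIONS: the CSV path and the 'age'/'score' column names are
# hypothetical; 80 is the interview threshold from the scenario.
import csv
from collections import defaultdict

def selection_rates(path, threshold=80):
    totals, advanced = defaultdict(int), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            band = "50+" if int(row["age"]) >= 50 else "under 50"
            totals[band] += 1
            if int(row["score"]) >= threshold:
                advanced[band] += 1
    return {band: advanced[band] / totals[band] for band in totals}

rates = selection_rates("talentscreen_export.csv")
best = max(rates.values())
for band, rate in sorted(rates.items()):
    # Four-fifths rule: a rate below 80% of the best group's is a red flag.
    print(f"{band}: {rate:.0%}" + ("  <-- FLAG" if rate < 0.8 * best else ""))
```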
It's just past 5pm. The office is nearly empty. Interviews for 12 shortlisted candidates are scheduled for Monday morning. You have a spreadsheet showing a pattern that could be coincidence or could be systematic age discrimination.
Email James Hartley to pause Monday's interviews until you investigate
The pattern is concerning enough to warrant a pause. If the tool is discriminating, every interview based on its shortlist is tainted. James coordinated 12 candidate schedules — he'll be furious. And you might be wrong.
Add David to the shortlist manually and let interviews proceed
David clearly deserves an interview. You can fix this one case now, investigate the broader pattern next week. The system will do this again on the next hire, but at least David gets a fair shot on Monday.
Go home — you need more data before making accusations
9 out of 11 is a pattern, but it's a small sample. If you raise the alarm and you're wrong, you've undermined a tool the VP championed and damaged your credibility for nothing. David has already been rejected — one weekend won't change that.
From: Sarah Chen <sarah.chen@novatech-financial.com>
To: James Hartley <james.hartley@novatech-financial.com>
Subject: Urgent: Monday Interview Schedule — Data Review Needed
James,
I've identified an anomaly in TalentScreen AI's rejection data that I believe warrants review before we proceed with Monday's interviews. I'd prefer to discuss the specifics in person rather than over email.
I know this is extremely late notice and I understand the scheduling implications. I wouldn't raise this if I didn't think it was important — both for the candidates and for our compliance position.
Could we meet first thing Monday at 8am, before the first interview slot?
Sarah
Sarah, I've spent two weeks coordinating these interviews. The hiring panel has blocked their entire Monday. Three candidates are travelling from other cities. You're asking me to blow up the schedule because of a 'data anomaly'? This had better be serious.
It is. I wouldn't ask if it weren't. 8am Monday — I'll have the data ready.
Fine. But I'm not cancelling the interviews. We'll meet at 8, and they'd better still go ahead at 9.
James is frustrated but hasn't refused. You've bought yourself the weekend to prepare and Monday morning to present it. Under Article 26 of the AI Act, deployers must monitor for risks to fundamental rights.
Article 6 classifies recruitment AI as high-risk under Annex III. Article 26 requires deployers to monitor output and take action when they identify risks.
You log into TalentScreen AI and manually add David Okonkwo to the interview shortlist. The system flags the override with an amber warning: 'Candidate score (72) below threshold (80). Manual override logged.'
Sarah, I see you added a candidate manually to the shortlist? David Okonkwo — score 72? That's below threshold. What happened?
His CV is exceptionally strong for this role. I felt the score didn't reflect his qualifications.
OK, but if we're going to override the AI, what's the point of using it? James isn't going to like this.
It's one candidate, Priya. Let's just make sure he gets a fair interview.
Fine. But if anyone asks why we're cherry-picking candidates outside the AI's recommendations, that's on you.
David will get an interview, but you've patched one symptom without investigating the disease. The 9 other rejected candidates over 50 won't get a manual override. Under Article 14, human oversight must be effective — systematic, not ad hoc.
Article 14 requires effective human oversight, including the ability to override decisions. But a single manual override is not oversight — it's an exception. Effective oversight means a repeatable process.
David Okonkwo
Risk Management Professional
Rejected again. 30 years in risk management. 8 years in fintech regulatory compliance. MSc from LSE. FRM certified. Didn't even get an interview.
I'm not naming the company — this isn't about them specifically. But I'm starting to wonder whether the 'AI-powered recruitment' tools that companies are adopting are filtering out experience rather than filtering for it.
Is anyone else over 50 experiencing this? I'd genuinely like to know.
1,247 likes
Same experience here. Three rejections in a row from companies using automated screening. 28 years in financial services. Not one interview.
893 likes
I work in HR tech. Some of these tools use 'cultural fit' and 'adaptability' proxies that effectively penalise career stability and age. It's a known problem.
2,341 likes
I'm a journalist at the Financial Times and I'm working on a piece about AI recruitment bias. David, would you be willing to speak with me? DM open.
You close your laptop and go home. Except David doesn't spend the weekend waiting. By Monday morning, the post has 4,200 reactions and 380 comments. Someone has identified NovaTech Financial.
James Hartley sees the post before you do. He's at your desk at 7:30am Monday.
Article 26 requires deployers to act on identified risks. You identified a potential pattern and chose not to act. Under Article 4 (AI Literacy), in force since February 2025, organisations must ensure staff can recognise and respond to AI risks.
Regardless of what you did on Friday, the situation has converged. James Hartley is at your desk. He's heard — through Priya, through LinkedIn, or through your email — that you've been 'questioning the AI tool.'
His expression is hard to read. He's not hostile, exactly, but he's guarded. He closes your office door and sits down.
James was the executive sponsor who brought TalentScreen AI to NovaTech. He presented the business case to the board. He personally reported the 40% reduction in time-to-hire. The tool is, in many ways, his project.
Sarah, I'm going to be direct. The tool works. Our time-to-hire is down 40%. The board cited it in last quarter's efficiency report. The CFO loves it. Are you really going to blow this up because one candidate complained?
It's not about one candidate, James. I ran the rejection data. Nine out of eleven rejected candidates over fifty scored below threshold. Zero candidates under thirty-five were rejected.
We rejected people under 35 too — for other roles, other rounds. And look at David specifically: he's been at the same company for 12 years. Maybe the tool flagged low adaptability based on career trajectory, not age.
I'm not saying ignore it. I'm saying — are you sure you're not seeing a pattern that isn't there? Because if you raise this and you're wrong, you've just told the board their flagship efficiency initiative is discriminatory. That's not a bell you can un-ring.
Present the data directly — this is age discrimination, whether the AI intended it or not
The pattern is clear. Of the 11 rejected candidates, 9 were over 50, scored down on undefined metrics. Under Article 26, you're required to monitor the system's output for discriminatory outcomes. As the deployer, NovaTech is liable for the tool's decisions, not the vendor.
Agree with James publicly but quietly flag the issue to Legal
James has a point — you might be wrong. But the risk is too high to ignore entirely. Let Legal investigate discreetly while the interviews proceed.
Accept James's explanation — career trajectory, not age, explains the pattern
He might be right. 'Adaptability' could legitimately correlate with career trajectory, not age. You don't have enough data to be certain. Focus on getting David an interview and move on.
James, I hear what you're saying about career trajectory. But let me show you something. Here's the rejection data. I've highlighted age, score, and the two metrics that drove the low scores: 'adaptability potential' and 'cultural alignment.' These metrics aren't defined anywhere in the platform documentation. I've checked.
So?
So we're using a high-risk AI system — recruitment AI is explicitly classified as high-risk under Article 6 — and we can't explain how it makes decisions. If David Okonkwo files a complaint, we have no transparency documentation to show a regulator.
The vendor assured us the tool was compliant.
The vendor's compliance is their problem. Our compliance — as deployers — is ours. Article 26 is clear: we have to monitor for risks to fundamental rights. The question isn't whether I'm right or wrong about the cause. The question is what we do now that the pattern exists.
What are you proposing?
I want to bring Legal in. Today. And request the vendor's transparency documentation on how those metrics are calculated. If they can explain it, great. If they can't, we have a bigger conversation.
Fine. But I want to be in the room when Legal reviews this. And I want it on record that I'm cooperating, not being investigated.
Of course. This isn't about blame, James. It's about getting ahead of a problem before it gets ahead of us.
James shifts from 'you're wrong' to 'what do we do.' You've framed this as compliance, not accusation. That's the outcome you needed.
The AI Act distinguishes between providers (who build) and deployers (who use). Under Article 26, deployers must monitor output and report incidents. Article 13 requires the system to be transparent enough for deployers to interpret its output.
From: Sarah Chen <sarah.chen@novatech-financial.com>
To: Helen Park <helen.park@novatech-financial.com>
Subject: Confidential: Potential AI Act Compliance Issue — TalentScreen AI
Helen,
I've identified a statistical pattern in our AI recruitment tool's rejection data that may indicate age-based discrimination. Of the 11 candidates rejected in the latest round, 9 were over 50, all scoring below threshold on metrics I can't find documentation for ('adaptability potential' and 'cultural alignment').
I've raised this informally with James Hartley, who believes the pattern has a non-discriminatory explanation. He may be right. But given that recruitment AI is classified as high-risk under Article 6, I believe Legal should review the data independently.
Happy to discuss at your earliest convenience.
Sarah
You make a fair point, James. The career trajectory explanation could account for some of the pattern. I'll dig into the data more before raising anything formally.
Good. Let's not create a crisis out of a coincidence. The interviews are at 10 — are we good?
We're good.
Helen responds within the hour. You've protected yourself with a paper trail. But the interviews proceed with a potentially tainted shortlist, and James believes the matter is closed.
Under Article 26, deployers must take action when they identify risks — not just report internally while the system continues to operate. Allowing it to continue may be seen as knowingly tolerating the risk.
You're probably right. Career trajectory is a legitimate signal. I'll make sure David gets an interview and we'll keep an eye on the metrics going forward.
That's sensible. Look, I appreciate that you're thorough — that's why you're good at your job. But sometimes a pattern is just a coincidence.
The interviews proceed — the shortlist is never amended, and David is not among the candidates. Three weeks later, Helen Park forwards you an FT article: 'AI recruitment tools under scrutiny as EU AI Act enforcement begins.' Her note: 'Sarah — are we exposed here?'
You now have to explain that you identified a pattern three weeks ago and accepted James's explanation without independent investigation. Intent doesn't matter under Article 9.
AI systems can discriminate through proxy variables. 'Career stability' correlates with age. Under Article 9, providers and deployers must identify and mitigate these risks. The fact that the tool doesn't explicitly use 'age' is irrelevant if the outcome is discriminatory.
Legal is now involved. Helen Park has contacted TalentScreen AI's vendor. A video call is scheduled with Marcus Webb, the vendor's Head of Product.
Marcus, we need to understand how 'adaptability potential' and 'cultural alignment' are calculated. What data inputs drive those scores?
Those are part of our proprietary Talent Compatibility Engine. The specific weighting and feature interactions are commercially sensitive.
Under Article 13 of the AI Act, high-risk AI systems must provide sufficient transparency for deployers to understand the output. We're the deployers.
We provide a compliance summary document. I can send that over.
We've read it. 'Scores are generated using a multi-factor model incorporating role-relevant competency indicators.' That doesn't tell us how 'cultural alignment' is calculated.
What I can offer is our AI Compliance Audit Package — a comprehensive review by our internal compliance team. 6–8 weeks, EUR 30,000.
Six to eight weeks?
They can't or won't explain how their own tool makes decisions. That's an Article 13 problem — theirs and ours.
The vendor confirmed what you suspected: a black box. 'Proprietary' is not a defence under the AI Act. Article 13 requires transparency. The vendor is offering to self-audit for EUR 30,000 — a clear conflict of interest.
Sarah, we've invested EUR 200,000 in this platform. The vendor wants EUR 30,000 on top. I've got three open roles we can't fill fast enough. The CFO will ask why time-to-hire went back up. What exactly are you recommending?
Suspend the tool immediately until the vendor provides Article 13 transparency documentation
If you can't explain how it makes decisions, you can't ensure those decisions are lawful. Accept the political cost. NovaTech stops potentially discriminating today, not in 6–8 weeks.
Continue with the tool but add mandatory human review of every AI rejection
Add a human checkpoint: every candidate below threshold gets manual review. Flag any candidate over 50 who fails on 'adaptability' or 'cultural alignment' for senior HR review.
Purchase the vendor's EUR 30,000 compliance audit and continue using the tool
The audit will confirm whether there's a real problem. Six to eight weeks isn't ideal, but it's better than suspending a tool that saves 200 hours per quarter based on an unverified pattern.
James, I'm recommending we suspend TalentScreen AI effective immediately. I'll draft the formal recommendation for Helen and the CFO today.
Immediately? We process 200 applications a month through that tool. We'll be back to manual screening — that's the 200 hours per quarter I saved us.
I know. But Article 99 allows fines up to EUR 15 million or 3% of global annual turnover, whichever is higher. NovaTech's turnover was EUR 340 million. Three percent is EUR 10.2 million, which makes the ceiling for us the full EUR 15 million.
That's the maximum. No regulator is going to fine us EUR 15 million for a recruitment tool.
Even 1% is EUR 3.4 million. And that's before reputational damage. If the FT runs a story about NovaTech's AI discriminating against older candidates, what happens to the Frankfurt office's regulator relationships?
How long?
Until we get transparency documentation we can review. If the vendor can explain the algorithm, we turn it back on. If they can't, we find a vendor who can.
The board is going to want to know why.
Better they hear it from us than from a regulator.
The hardest recommendation to make and the most defensible. You've given James a clear path back: the tool isn't banned, it's suspended pending transparency. Proportionate and professional.
Article 26 requires deployers to suspend when they have reason to believe the system presents a risk to fundamental rights. Article 99 sets penalties up to 3% of turnover. Demonstrating you suspended as soon as you identified the risk is powerful evidence of good faith.
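For reference, the exposure maths Sarah quotes, as a minimal worked sketch (the turnover figure is the scenario's own; Article 99 applies the higher of the two caps):

```python
# Penalty ceiling under Article 99(4): up to EUR 15m or 3% of worldwide
# annual turnover, whichever is higher. Figures are from the scenario.
def penalty_ceiling_eur(turnover_eur):
    return max(15_000_000, 0.03 * turnover_eur)

turnover = 340_000_000  # NovaTech's stated annual turnover
print(f"3% of turnover:  EUR {0.03 * turnover:,.0f}")               # EUR 10,200,000
print(f"Penalty ceiling: EUR {penalty_ceiling_eur(turnover):,.0f}") # EUR 15,000,000
```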
I'm recommending we keep the tool but add mandatory human review. Every rejected candidate gets manual review. Any candidate over 50 below threshold on 'adaptability' or 'cultural alignment' gets escalated to senior HR.
That's more work for your team.
Less work than a regulatory investigation. And we can keep using the tool's screening while we push the vendor for transparency.
Reasonable interim measure. But this doesn't fully satisfy Article 14. Human oversight must be effective, not performative. If reviewers rubber-stamp the AI scores, we're exposed.
Agreed. Reviewers won't see the AI score until after their own assessment. Blind review first, then comparison.
Better. But we still need vendor transparency documentation. This is temporary, not permanent.
A pragmatic compromise. But you're adding human oversight to compensate for a system you can't explain. Under Article 14, human oversight must enable full understanding of the system's capacities — which you don't have.
Article 14 requires the overseer to understand the AI's output and detect anomalies. Without access to the scoring logic, oversight is limited to pattern detection, not root cause analysis. The Article 13 transparency obligation remains unresolved.
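A control like this blind review can be enforced in tooling rather than left to discipline. The sketch below is illustrative only (the class, field names, and candidate ID are hypothetical, not TalentScreen's API), but it captures the agreed rule: the AI score stays hidden until an independent human assessment is on record.

```python
# Blind-review workflow: the reviewer must record their own assessment
# before the AI score can be revealed, and both events are logged.
# All names here are hypothetical, not TalentScreen's actual API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class BlindReview:
    candidate_id: str
    _ai_score: int                        # withheld until an assessment is logged
    human_assessment: Optional[str] = None
    audit_log: list = field(default_factory=list)

    def _log(self, event):
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), event))

    def record_assessment(self, assessment):
        self.human_assessment = assessment
        self._log("human_assessment_recorded")

    def reveal_ai_score(self):
        if self.human_assessment is None:
            raise PermissionError("AI score withheld until a blind assessment is recorded")
        self._log("ai_score_revealed")
        return self._ai_score

review = BlindReview(candidate_id="C-1042", _ai_score=72)
review.record_assessment("Strong fit: 30 yrs risk management, FRM. Advance to interview.")
print(review.reveal_ai_score())  # 72, visible only after the blind assessment
```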
I think the audit is the right path. The vendor knows their system best. Six to eight weeks is manageable.
Sarah, I have concerns. We're asking the vendor to audit themselves. That's a conflict of interest.
They have internal compliance people. It's standard practice.
Standard practice that regulators don't accept. If we end up in front of a national authority, 'we paid the vendor to audit themselves' won't inspire confidence.
If the vendor's audit comes back clean — which it almost certainly will — and a regulator later finds the same pattern you found, where does that leave us?
Let's just do the vendor audit and move on.
A self-audit by the vendor is unlikely to find issues with their own product. Meanwhile, the tool continues screening for 6–8 more weeks. Under Article 9, risk management must include independent testing for bias.
Article 9 requires risk management that identifies and mitigates discrimination risks. Self-audits are inherently conflicted. An independent third-party audit is far more defensible with national authorities.
Monday, 8:15am
Your Friday email worked. But overnight, things escalated.
Before we start — the board approved Frankfurt expansion on Friday. TalentScreen will handle recruitment across all three offices. Contract signed at 4pm.
That changes the compliance picture significantly. Cross-border deployment of a high-risk AI system triggers additional obligations.
The vendor assured us it's compliant in all EU jurisdictions. Helen signed off. Are you saying the CEO made a mistake?
TalentScreen is now processing candidates across three EU jurisdictions. James has board backing. Helen signed the contract. What do you recommend?
Commission an independent conformity assessment under Article 43 and suspend cross-border deployment until complete
Cross-border expansion is a substantial modification. The vendor's self-assessment doesn't transfer. Article 26(5) requires suspension if you have reason to believe it presents a risk.
Implement a human review panel for all AI-rejected candidates while keeping the tool operational
Address the immediate bias risk with Article 14 human oversight. Maintains operational continuity. Review the conformity question in parallel.
Proceed with Frankfurt deployment with enhanced bias monitoring dashboards
The data shows a potential issue, not a proven one. Put monitoring in place to detect problems early rather than disrupting a board-approved expansion.
James, the Frankfurt expansion isn't just a new office — it's a new jurisdiction. The vendor's self-certification was for UK deployment. Cross-border use is a substantial modification under Article 43. We need an independent conformity assessment before TalentScreen processes a single Frankfurt candidate.
You're telling me to suspend a tool the board approved 72 hours ago. Do you have any idea what that conversation looks like?
I know exactly what it looks like. It looks like the compliance team doing their job before the German regulator does it for us. The BfDI doesn't accept "the vendor said it was fine" as a defence. And the potential fine is up to €15 million or 3% of global turnover — whichever is higher.
(long pause) How long does this assessment take?
Eight to twelve weeks if we engage an accredited body this week. I'll have a shortlist of assessors on your desk by end of day.
James just messaged me. Sarah, is this as serious as you're suggesting?
Helen, I'd rather explain a ten-week delay to the board than a regulatory investigation to shareholders. I'm recommending an emergency board briefing this Thursday.
...Book it. And get me a one-page brief by Wednesday evening.
You spotted the critical distinction that most professionals miss. The vendor's self-certification doesn't transfer when the deployment context changes. Cross-border expansion is a substantial modification that triggers new conformity obligations. James is frustrated, but Helen's response tells you the board will listen when the risk is quantified.
When a high-risk AI system undergoes substantial modification — including deployment in new jurisdictions — a new conformity assessment may be required. The original assessment covers the original deployment context only. Cross-border expansion to a new EU member state changes the regulatory landscape, data protection framework, and risk profile.
I'm proposing a human review panel. Every candidate TalentScreen rejects gets reviewed by a trained assessor before the decision is finalised. We keep the tool running, but no one falls through the cracks.
Now that's more reasonable. How many extra hours are we talking?
About 30 hours per quarter across three reviewers. Far less than going back to fully manual screening.
I can live with that. Set it up. And this resolves the compliance issue?
Sarah, I've reviewed your proposal. Human oversight addresses Article 14, and I support the panel. However — I need to flag that cross-border deployment to Frankfurt may require a fresh conformity assessment under Article 43. Human review doesn't resolve that question. We should discuss.
...Understood. I'll set up time with you this afternoon.
James is relieved — you've found a solution that doesn't derail the expansion. But Priya has spotted the gap you missed. Human review addresses symptoms of the bias problem. It doesn't address whether the system itself is lawfully deployed in a new jurisdiction. If a German regulator asks for conformity documentation, "we added human review" isn't sufficient.
Human oversight is a core requirement for high-risk systems. But it's one obligation among many. A review panel catches discriminatory outputs but can't explain discriminatory logic. If the system's conformity status is uncertain, oversight alone doesn't resolve the deployer's obligations under Articles 26 and 43.
Good news — the Frankfurt pipeline is already filling. TalentScreen processed 45 candidates in the first batch. The dashboards look clean.
That's... good to hear. What's the rejection profile looking like?
Haven't dug into the details. The dashboard says bias indicators are within normal range. Why?
That afternoon, Priya forwards you an email. A 54-year-old candidate in Frankfurt, rejected by TalentScreen with a score of 68, has filed a complaint with the Hessian data protection authority. His lawyer references the EU AI Act directly. He wants to know how "adaptability potential" was calculated and why his 22 years of banking experience scored lower than graduates with two years.
Sarah, I need the transparency documentation for TalentScreen's scoring methodology. The candidate's lawyer has given us 14 days to respond. Do we have it?
...No. We don't.
Commercial pressure won. You expanded a potentially discriminatory system to a new jurisdiction while hoping dashboards would catch what you'd already identified. The dashboards measured what TalentScreen chose to surface — not the metrics driving the discrimination. Under Article 26(5), deployers must suspend systems they have reason to believe present a risk. You had that reason two weeks ago.
Deployers who have reason to believe a high-risk system presents a risk must suspend its use and inform the provider. "Enhanced monitoring" doesn't satisfy this obligation. The age discrimination pattern you identified in the UK data was that reason — and it followed TalentScreen to Frankfurt.
Wednesday, 2:00pm — The Vendor Pushes Back
Our legal team reviewed Article 6. TalentScreen recommends — it doesn't decide. Under Article 6(3), we're not high-risk.
Article 6(2) references Annex III directly — which lists "AI systems intended to be used for recruitment" without qualifying the automation level.
We have clients in 14 EU countries. None have raised this. Your own legal team signed off on our compliance pack.
The board meets Thursday. If we suspend, I explain why we're back to manual screening at 200 hours per quarter.
The vendor claims they're not high-risk. Your legal counsel agreed six months ago. The board meets tomorrow. What do you recommend?
Present a formal risk assessment to the board — recommend a 90-day compliance programme with independent audit
Accept the commercial cost. The vendor's Article 6(3) argument has merit but creates unacceptable risk if a regulator disagrees. A fundamental rights impact assessment under Article 27 may be required in any case — and is worth conducting regardless.
Keep TalentScreen but require human review of ALL decisions, plus quarterly bias audits
Pragmatic middle ground. Human review satisfies Article 14, bias audits demonstrate diligence. The board keeps their tool, candidates get oversight. Not perfect, but defensible.
Accept legal counsel's position that TalentScreen is "decision-support" and not high-risk — document your concerns formally
Your legal team cleared it. The vendor has 14 EU clients. Maybe the Article 6(3) interpretation is correct. Document concerns to protect yourself, but don't blow up a board-approved strategy.
Let me make sure I understand. You're asking the board to approve a 90-day pause on a tool I personally signed off on, less than a week ago.
I'm asking the board to approve a compliance programme that protects a company with €340 million in annual turnover from a regulatory action that could cost up to €15 million. The tool works. The question is whether it works lawfully across three jurisdictions. Right now, we can't prove it does.
And the vendor's position that they're not high-risk?
The vendor's interpretation has arguable merit under Article 6(3). But if a regulator disagrees — and the European AI Office has signalled that recruitment tools will be scrutinised early — we bear the risk as deployers. Not them. The independent audit settles the question before a regulator asks it.
For the record, I think this is overcautious. But I understand the logic.
(to the board) I'm approving the 90-day programme. Sarah, I want weekly progress reports. And I want the independent assessor's name on my desk by Friday.
You chose compliance integrity over commercial convenience, even when your own legal counsel disagreed and the CEO was initially hostile. Helen's anger gave way to respect when you quantified the risk. The 90-day programme with independent audit is the gold standard — it costs political capital today, but it's the position you want when the regulator calls.
Article 27 requires certain deployers of high-risk AI to conduct a fundamental rights impact assessment before putting the system into use, and even where the obligation is arguable, conducting one is strong evidence of diligence. The responsibility sits with the deployer, regardless of what the provider claims. An independent audit programme satisfies this and creates a defensible compliance record.
I'm proposing we keep TalentScreen operational with two conditions: mandatory human review of every decision, and quarterly bias audits conducted by an external firm. The tool stays. The candidates get protected.
That I can sell to the board. We keep the efficiency gains and show we're taking oversight seriously.
Approved. Sarah, make sure the first audit is completed before the Frankfurt office opens in Q3.
The board accepts. James is visibly relieved. Six months later, an email arrives from the European AI Office — a sector-wide regulatory inquiry into AI recruitment tools. They want conformity documentation, fundamental rights impact assessments, and evidence of Article 13 transparency compliance.
Sarah, the human review logs and bias audits help. They show good faith. But they're asking for a conformity assessment we never did and a fundamental rights impact assessment we never conducted. We have 30 days to respond.
Pragmatic but not bulletproof. Human review plus audits is defensible — it demonstrates diligence and catches individual cases. But it sidesteps the fundamental question: is this system lawfully deployed? You've bought time, not compliance. The compromise helps but it isn't airtight when the regulator comes knocking.
Human oversight (Article 14) is one obligation among many. It addresses output risk but not systemic compliance. A regulator won't accept "we reviewed every decision" as a substitute for conformity documentation (Article 43) or a fundamental rights impact assessment (Article 27). The compromise reduces harm but doesn't eliminate legal exposure.
You documented your concerns in a formal memo to Priya three months ago. Filed it. Moved on. TalentScreen kept processing candidates across all three offices. The age pattern continued — 73% of rejected candidates over 50 scored below threshold on "adaptability potential." You saw the quarterly report. You said nothing.
The European AI Office announces an investigation into recruitment AI across the financial services sector. NovaTech is on the list. Priya calls an emergency meeting.
They want everything. Conformity assessment, fundamental rights impact assessment, transparency documentation, deployment logs. We have 60 days. Sarah — what do we actually have?
We have the vendor's original compliance pack and my memo from three months ago flagging the concerns.
So you identified a risk, documented it, and the system kept running for three more months. That memo doesn't protect the company, Sarah. It incriminates us. It proves we knew.
How did we get here?
CYA is not compliance. Documenting concerns protects you personally in a narrow sense — but it actively harms the organisation by creating a paper trail of known, unaddressed risk. "My legal team said it was fine" isn't a defence when deployers bear independent obligations. And every candidate processed after your memo is a candidate NovaTech knowingly exposed to a potentially discriminatory system.
Deployers have independent obligations under the AI Act. Deferring to the vendor's legal interpretation doesn't discharge those duties. When you have evidence of risk and document it without acting, you've created the worst possible regulatory position: proven knowledge plus continued deployment. The AI Act doesn't recognise "I wrote a memo" as risk mitigation.
In the previous situation, the key issue was recognising that because TalentScreen AI screens job applications, it falls into the AI Act's high-risk category.
Article 6 classifies AI in employment as high-risk. This triggers: human oversight (Art. 14), transparency (Art. 13), data governance (Art. 10), and risk management (Art. 9).
As a deployer (an organisation that uses an AI system under its authority, as opposed to the provider who built it), NovaTech has independent obligations under Article 26 — even if the vendor claims compliance.
Remember: TalentScreen is high-risk under Article 6. You have obligations under Article 26.
Monday morning. James pushes back — the tool saved 200 hours/quarter. What do you do about the bias pattern?
Present the bias data and recommend pausing the tool until the vendor provides transparency documentation
Article 26(5) says deployers must suspend if they believe there's a risk. The data suggests age discrimination. Article 13 documentation should explain how the tool decides.
Add a human reviewer to check all AI rejections before they're finalised
Human oversight (Article 14) is required for high-risk systems. This catches discriminatory rejections before they affect candidates.
Wait for more data — one pattern doesn't prove discrimination
Maybe the pattern is coincidental. Acting too quickly could damage your relationship with the board.
James, I need you to look at this. Nine of eleven rejected candidates over 50 scored below threshold. Zero candidates under 35 were rejected. The two metrics driving the scores — "adaptability potential" and "cultural alignment" — aren't defined anywhere in the vendor's documentation.
It saved us 200 hours last quarter, Sarah. You want me to go back to the board and say we're pausing it because of a spreadsheet?
I want you to go back to the board and say we caught a potential age discrimination pattern before a candidate's lawyer did. Under Article 26, we're required to suspend a system when we have reason to believe it presents a risk. This data IS that reason.
(long silence) ...How long?
Until the vendor provides proper transparency documentation under Article 13. If they can explain how those metrics work, and the explanation is non-discriminatory, we turn it back on. If they can't — we've dodged a bullet.
Fine. But you're presenting this to Helen. And you'd better have the maths ready.
James reluctantly agrees. He's not happy, but you've framed the risk in terms he can't ignore — a candidate's lawyer finding the pattern first. You have the weekend to prepare a formal brief for Helen. The tool is paused. No more candidates will be processed until you have answers.
When a deployer has reason to believe a high-risk AI system presents a risk to health, safety, or fundamental rights, they must suspend its use and inform the provider. The bias data — 9 of 11 over-50 candidates rejected on unexplained metrics — constitutes that reason. Requesting Article 13 transparency documentation is the correct next step.
I'm recommending we add a human reviewer to check every AI rejection before it's finalised. No candidate gets screened out without a person confirming the decision.
That's reasonable. We keep the tool, candidates get a second look. How quickly can you set it up?
By end of week. Three trained reviewers, rotating on a schedule.
Good. Problem solved.
The review panel catches three more questionable rejections in the first two weeks — all candidates over 45, all scoring low on "adaptability potential." The reviewers override the AI and advance them. But something nags at you: the reviewers can see that the tool rejects certain candidates. They can't see why. They're catching symptoms. The system still can't explain its logic.
We're putting a safety net under a bridge we're not sure is structurally sound. If someone asks how "adaptability potential" is calculated, we still can't answer.
Human review addresses Article 14 and catches individual cases of bias. But the underlying system remains a black box. Without transparency documentation under Article 13, you can't explain why the tool makes the decisions it does — only that you sometimes disagree with the output.
Human oversight is a core requirement for high-risk systems — so this instinct is right. But Article 14 works alongside Article 13 (transparency), not instead of it. A human reviewer who can override decisions but can't understand the system's logic has limited ability to identify systemic discrimination versus individual errors.
Another round processed. Twelve new candidates. Three rejections — all over 50. All scored below threshold on 'adaptability potential'. It's no longer a pattern in a single round. It's a repeating one.
You pull the full data. In two months of TalentScreen operation, 14 of the 16 rejected candidates were over 50, all scoring below threshold on the same opaque metric. Zero candidates under 35 have been rejected. You open LinkedIn to distract yourself and freeze.
Have you seen David Okonkwo's LinkedIn post? It's got 400 comments. He's naming us. Well, not us specifically — but "a London fintech using AI to screen out experienced candidates." His former colleague at Barclays just shared it.
I saw it.
Helen wants a meeting. Today. She's asking what we knew and when we knew it.
...I flagged the pattern two weeks ago. I was waiting for more data.
You flagged it to ME. And I told you to wait. Helen's going to want to know why neither of us escalated.
Waiting was not compliance. Nine of eleven was already a pattern — 14 of 16 is a crisis. Every day you waited, more candidates were potentially discriminated against. Under Article 26(5), you had reason to believe the system presented a risk to fundamental rights. The obligation was to act, not to gather a statistically perfect dataset.
The threshold is "reason to believe" — not "proof beyond reasonable doubt." When 9 of 11 candidates in a protected category score below threshold on an unexplained metric, that IS reason to believe. The AI Act doesn't require you to complete a peer-reviewed study before acting. It requires you to protect fundamental rights when you have credible evidence of risk.
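To put a number on 'reason to believe', a one-line significance check is enough for a first read. The sketch below rests on a loudly hypothetical assumption: that over-50s were about 30% of the applicant pool, a figure the scenario never states.

```python
# If over-50 candidates were ~30% of applicants (HYPOTHETICAL: the
# scenario never gives the pool mix), how likely is it that 9 of 11
# rejections fall on that group by chance alone?
from scipy.stats import binomtest

result = binomtest(k=9, n=11, p=0.30, alternative="greater")
print(f"p = {result.pvalue:.4f}")  # ~0.0006: far below any conventional alarm level
```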
Deployers and providers have separate obligations. Even if TalentScreen claims compliance, NovaTech has its own duties:
Article 26: Deployers must use the system per instructions, ensure human oversight, monitor risks, keep logs, and suspend if there's a risk.
The vendor saying "we're compliant" doesn't discharge YOUR obligations.
Remember: As deployer, NovaTech has independent obligations under Article 26.
The vendor wants €30,000 for a transparency audit. James says the CFO won't approve it. What do you recommend?
Suspend the tool until the vendor provides proper documentation
The potential fine (up to €15M or 3% of turnover) far exceeds manual screening costs. Article 26(5) requires suspension if you believe there's a risk.
Keep the tool but add human review of every rejection and document everything
Addresses immediate risk. Human review satisfies Article 14. Documentation shows good faith.
Do nothing — the vendor has 14 EU clients and none have had issues
Maybe you're overreacting. The vendor seems confident and your legal team cleared it.
The CFO won't approve €30,000 for an audit we might not need. And now you want to suspend the tool entirely? We'll be back to 200 hours of manual screening per quarter.
Let me give you different numbers. Potential fine under the AI Act: up to €15 million or 3% of global turnover, whichever is higher. NovaTech's turnover last year was €340 million, so the ceiling for us is the full €15 million. Manual screening costs £48,000 a year. Which number do you want to present to the CFO?
(pause) ...You've done the maths.
I have. And I've drafted a one-page brief for Helen. The recommendation is a temporary suspension while we require the vendor to provide proper Article 13 documentation. If they comply, we could be back online in four to six weeks. If they can't explain their own system, we find a vendor who can.
Four to six weeks I can live with. A €15 million fine I cannot. Send me the brief — I'll co-sign it.
Suspension is the right call. James is unhappy but the maths is irrefutable — £48,000 a year in manual screening versus a potential €15 million fine isn't a close decision. The tool is paused. The candidates are protected. And you've positioned this as temporary, not permanent — giving the vendor a clear path to reactivation through compliance.
Suspension isn't punishment — it's risk management. The AI Act requires deployers to suspend high-risk systems when they have reason to believe they present a risk to fundamental rights. The cost of temporary manual processes is always lower than the cost of regulatory enforcement. Framing suspension as a business decision, not a compliance lecture, is what gets buy-in.
The review panel has overridden 7 of 43 rejections in three weeks. All seven were candidates over 45. All scored low on "adaptability potential." The human reviewers are catching the worst outcomes.
James is satisfied. The board received your documentation showing proactive oversight. The candidates who would have been unfairly rejected are getting interviews. On paper, the system looks responsible.
The review panel is working. We caught seven bad decisions. I'd call that a success.
We caught seven outputs we disagreed with. We still don't know why the system generates them. If a regulator asks how "adaptability potential" is calculated, we can show them our override logs. We can't show them how the AI works.
Isn't that the vendor's problem?
...That's the part I'm not sure about.
Human review plus documentation is better than nothing — significantly better. You're catching discriminatory outputs and creating an audit trail that shows good faith. But the underlying system is unchanged. The AI still processes candidates the same way. You're filtering its decisions, not fixing its logic. If a regulator determines the system itself is non-compliant, your workarounds don't satisfy the full conformity requirements.
High-risk AI systems must be designed with sufficient transparency to enable deployers to interpret and use outputs appropriately. If you can't explain how "adaptability potential" is calculated, you can't satisfy this requirement — regardless of how many human reviewers you add. Oversight without understanding is damage limitation, not compliance.
Two months of silence. TalentScreen keeps processing. You stopped checking the rejection data. The vendor has 14 EU clients. Your legal team cleared it. Maybe you were overreacting.
Have you seen LinkedIn this morning? David Okonkwo just posted a 1,200-word essay about age discrimination in fintech hiring. He names AI screening tools. He doesn't name us specifically but the details are unmistakable. It's already got 2,000 reactions.
I'm reading it now.
The post is devastating. Okonkwo writes about 30 years of risk management experience, a stellar track record, and being rejected by an AI tool that scored his "adaptability potential" at 31 out of 100. A journalist from the Financial Times has already commented asking to DM. By lunchtime, the post has 8,000 reactions and three former NovaTech candidates have replied with similar stories.
I need everyone on this call to tell me: did we know about this? Was there any indication our AI tool was discriminating against older candidates?
(silence)
...I identified a statistical pattern two months ago. I decided to wait for more data before escalating.
You knew. For two months. And the system kept running.
"Everyone else is doing it" was never a defence — and now it's a crisis. The vendor's other clients haven't been publicly named by a rejected candidate with 15,000 LinkedIn followers. Your legal team's clearance was based on incomplete information. Under the AI Act, ignorance you could have corrected is not a defence. And you had the data to correct it two months ago.
The AI Act imposes independent obligations on deployers. "The vendor has 14 EU clients" and "our legal team cleared it" are not defences when you have evidence of risk. Inaction in the face of known risk is itself a compliance failure. The reputational damage — a viral LinkedIn post, FT interest, candidates comparing notes publicly — compounds the regulatory exposure.
The decisions you made as Sarah Chen rippled outward — to David Okonkwo, to NovaTech's board, to the next 200 candidates. Here's what happened.
Article 4 — AI Literacy
Article 6 + Annex III — High-Risk Classification
Article 9 — Risk Management
Article 13 — Transparency
Article 14 — Human Oversight
Article 26 — Deployer Obligations
Article 50 — Transparency for Users
Article 99 — Penalties
Ask your L&D team to share the team leaderboard from your LMS dashboard. Can your department beat the rest?
In the next scenario, you'll face a different AI Act challenge: what happens when your company's customer service chatbot starts giving financial advice it wasn't designed to give — and a customer loses money following it? Article 50 transparency obligations meet real-world harm.
Custom Programme clients get a scenario built around their actual AI systems — plus an always-on compliance coach that answers questions specific to their tools and regulations.
Module 1 Complete
You navigated the compliance dilemma. Try a different path to see how the story changes.