Module 2

The Disclosure

Wednesday, 9:17 AM — Dublin

Elena Vasquez
Your Role

Elena Vasquez

Director of Marketing & Communications at NovaTech Financial — a mid-size European fintech, 1,200 employees, Dublin HQ. Regulated by the Central Bank of Ireland.

You arrive Wednesday morning to find two overnight emails: one from a journalist at the Financial Times, one from Customer Support about a chatbot complaint. Your CMO is unreachable until noon. The Q2 earnings call is in 9 days.

Your team has been publishing AI-generated content for 11 months — without a disclosure policy, review process, or documentation trail.

What You Need to Know

AI Transparency Under the EU AI Act

This module tests your understanding of when and how AI use must be disclosed. Here are the rules you’ll need:

Article 50(1) — Chatbot Disclosure

When a person interacts with an AI system, they must be informed they are interacting with AI — at the point of interaction, not buried in terms and conditions. This applies to chatbots, virtual assistants, and any AI that a user might mistake for a human.
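This obligation is straightforward to operationalise in a chat flow. A minimal illustrative sketch (the disclosure wording and function names are assumptions, not text prescribed by the Act) of surfacing the disclosure at the point of interaction:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def start_session(greeting: str) -> list[str]:
    """Open a chat session with the AI disclosure as the very first
    message the user sees -- not buried in terms and conditions."""
    return [AI_DISCLOSURE, greeting]

messages = start_session("Hi! How can I help with your account today?")
print(messages[0])  # the disclosure leads every conversation
```

The design point: the disclosure is emitted by the session-opening code path itself, so no conversation can begin without it.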

Article 50(2) — AI-Generated Content

Content generated or substantially modified by AI must be labelled as artificially generated. Publishing AI-written blog posts, marketing copy, or reports without disclosure violates this obligation. Human review does NOT remove the disclosure requirement.
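As a process rule, this can be enforced mechanically at publish time. A hypothetical publish gate (the field and function names are illustrative) showing that human review alone does not satisfy the check:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    title: str
    ai_generated: bool    # generated or substantially modified by AI
    has_disclosure: bool  # carries a visible "AI-generated" label
    human_reviewed: bool

def may_publish(d: Draft) -> bool:
    """Block AI-generated drafts that lack a disclosure label.
    Human review does not remove the labelling requirement."""
    return not d.ai_generated or d.has_disclosure

post = Draft("The Future of Personal Finance",
             ai_generated=True, has_disclosure=False, human_reviewed=True)
print(may_publish(post))  # False: reviewed, but still undisclosed
```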

Article 99 — Penalties

Providing misleading information about AI use to authorities: up to €7.5M or 1% of annual turnover. The fine isn’t for using AI — it’s for hiding it.
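The Act's penalty tiers are phrased as a fixed ceiling or a percentage of worldwide annual turnover, whichever is higher. A quick sketch of that arithmetic (the turnover figure below is illustrative, not NovaTech's):

```python
def fine_cap(turnover_eur: float, floor_eur: float, pct: float) -> float:
    """Maximum exposure: the fixed ceiling or the turnover
    percentage, whichever is higher (the Article 99 structure)."""
    return max(floor_eur, turnover_eur * pct)

turnover = 300_000_000  # illustrative annual turnover, EUR

# Misleading-information tier: EUR 7.5M floor or 1% of turnover
print(fine_cap(turnover, 7_500_000, 0.01))   # prints 7500000
# Higher tier cited later in this module: EUR 15M or 3%
print(fine_cap(turnover, 15_000_000, 0.03))  # prints 15000000
```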

Key Principle

Transparency is not about whether AI is “good” or “bad.” It’s about the recipient’s right to know. A customer served by an AI chatbot, a reader of AI-generated content, and a regulator asking about your AI use all have a right to accurate information.

Subject: FT Alphaville — AI-generated content query
From: Lena Richter <l.richter@ft.com>
To: NovaTech Communications <press@novatech.ie>
Date: Tuesday, 11:47 PM

Dear NovaTech Communications Team,

I'm writing a piece on AI-generated content in financial services marketing. I've identified several articles on your blog that appear to be substantially AI-generated, including "The Future of Personal Finance" (published 12 March 2026). Could you confirm:

  1. Whether this article was generated using AI tools?
  2. Whether NovaTech has a disclosure policy for AI-generated content?
  3. Whether your customer-facing chatbot (NovaAssist) uses generative AI, and if so, whether customers are informed?

I'm working to a Friday deadline. I'd welcome a conversation with your CMO.

Best regards,
Lena Richter, FT Alphaville

Context

The journalist ran the article through three AI detection tools — all flagged it as >90% AI-generated. She was tipped off by a former NovaTech contractor who noticed the blog output tripled after AI tools were adopted.

The second email: customer Robert Acheson used the NovaAssist chatbot to ask about upgrading his account. The chatbot told him the premium tier had "no monthly maintenance fee for the first 12 months." That waiver was discontinued 6 months ago. He was charged EUR 49.

He's posted on X: "NovaTech's AI chatbot lied to me about fees. @CentralBankIE are you watching?" — 340 engagements and rising.

Elena Vasquez facing an urgent decision
Decision 1: The Morning
9:22 AM · Journalist deadline: Friday · Acheson tweet: 340 engagements · Fiona unreachable until noon
Incoming
Lena Richter, FT Alphaville 11:47 PM Tue
FT Alphaville — AI-generated content query

I've identified several articles on your blog that appear to be substantially AI-generated, including "The Future of Personal Finance." Could you confirm whether this content was produced using AI tools, and whether NovaTech has a disclosure policy?

I'm also asking whether your customer-facing chatbot uses generative AI, and if so, whether customers are informed.

Working to a Friday deadline. I'd welcome a conversation with your CMO.

Draft your response

A. Full Transparency
Re: FT Alphaville — AI-generated content query
"We have paused all scheduled AI-generated content and moved our chatbot to human-agent mode. We can confirm the article in question was produced using AI tools. We are conducting a full audit and will issue a public disclosure..."
Art. 50 · Art. 26

B. Holding Response
Re: FT Alphaville — AI-generated content query
"Thank you for reaching out. We take content quality very seriously and are currently reviewing our editorial processes. Our communications team will be in touch before your Friday deadline..."

C. Deflect
Re: FT Alphaville — AI-generated content query
"NovaTech maintains rigorous human editorial oversight at every stage of content production. All published articles are reviewed and approved by our editorial team before publication. We'd be happy to arrange a briefing..."
Art. 50(4)
Elena Vasquez taking decisive action
Consequence 1A: The Pause +3
Elena (9:28 AM — IT Helpdesk call)

I need NovaAssist switched to human-only routing immediately. All chatbot auto-responses disabled. Every query goes to a live agent.

IT Support (Gavin)

That's going to triple the queue. We've got three agents on shift until 1 PM.

Elena

I know. Prioritise urgent queries. Anyone who called about account fees in the last 48 hours gets a callback today.

She opens the CMS and unpublishes the "Future of Personal Finance" article. She cancels the 2 PM email campaign. She sends Amir a Slack message:

Elena (Slack)

Amir, I need a complete list of every piece of content you've published using AI tools. Blog posts, email copy, chatbot scripts, social posts. Everything from the last 11 months. I need it by 11 AM. No judgment — I need the facts.

Amir (Slack)

Elena, I'm really sorry. It's... basically everything. I'll get you the list.

By 11:15 AM, Elena has a spreadsheet showing 127 blog posts, 34 email campaigns, all chatbot FAQ scripts, and approximately 200 social media posts that were substantially AI-generated. The earliest was published 11 months ago. None carried any disclosure.

Elena Vasquez conducting an audit
Consequence 1B: The Audit +1

Elena spends the morning in a spreadsheet. By 11:30 AM, she has a partial picture: at least 90 blog posts, most email campaigns, and all chatbot scripts appear to be AI-generated. But the audit isn't complete — Amir is still pulling data, and the social media posts are harder to trace.

At 1:47 PM, the email campaign goes out as scheduled. It's a product update about NovaTech's savings accounts. The subject line, body copy, and CTA were all generated by ChatGPT. It reaches 28,000 subscribers.

At 2:15 PM, a customer in Berlin uses the chatbot and asks about international transfer fees. The chatbot gives an answer based on Amir's six-month-old script. The fee structure changed in January. The answer is wrong by EUR 12 per transfer.

Niamh (phone, 3:00 PM)

Elena, I heard you're doing an audit. Good. But the chatbot is still live. And you just sent an email campaign to 28,000 people. If any of that content is inaccurate, every hour it stays out there is additional exposure. An audit without action is just documentation of a problem you knew about and didn't fix.

Elena Vasquez drafting a risky response
Consequence 1C: The Holding Statement -2

Elena drafts a response to Lena Richter at 9:35 AM:

Dear Lena,

Thank you for reaching out. NovaTech uses AI tools as part of our content creation workflow, with human editorial oversight at every stage of the process. We believe in the responsible use of AI to enhance our communications, and we're committed to transparency in how we operate.

I'd be happy to arrange a call with our CMO, Fiona Gallagher, later this week to discuss our approach in more detail.

Best regards,
Elena Vasquez
Director of Marketing & Communications

The problem: the statement contains a factual inaccuracy. There is no "human editorial oversight at every stage." Amir has been publishing directly from ChatGPT output.

At 11:40 AM, Lena responds:

Thank you, Elena. I appreciate the quick response. Could you clarify what "human editorial oversight at every stage" involves specifically? I've spoken with a former contractor who describes a different process. I'd also like to understand whether the "Future of Personal Finance" article had a named human author, or whether "NovaTech Insights Team" is a byline used for AI-generated content. Happy to discuss on a call.

Meanwhile, the chatbot continues operating. Robert Acheson's tweet now has 1,200 engagements. A consumer rights account has retweeted it: "Another AI chatbot giving wrong financial info. When will the Central Bank act?"

Niamh (walking into Elena's office, 12:15 PM)

Elena, I just saw your email to the FT journalist. You told her we have "human editorial oversight at every stage." Do we?

Elena

I—

Niamh

Because if we don't, and she can prove we don't, that response becomes evidence of misleading a journalist about our AI practices. Under Article 50(4), deployers must not misrepresent the AI-generated nature of content. And under Article 99, providing misleading information can carry fines up to EUR 7.5 million or 1% of turnover. You've turned a transparency problem into a deception problem.

Elena Vasquez on a tense call with the CMO
12:30 PM — The Noon Call

Fiona calls from Madrid. She's heard about the journalist inquiry from her EA.

Fiona

Elena, what's happening with the FT thing?

Elena

[Summarises the situation — journalist, chatbot, customer complaint, the scope of AI-generated content]

Fiona (long pause)

How bad is it?

Elena

127 blog posts, 34 email campaigns, all chatbot scripts. Eleven months. No disclosure on any of it.

Fiona

Right. Look, I'm back Thursday evening. Can we hold everything until then?

Elena

The journalist's deadline is Friday.

Fiona

Then tell her we'll have a full response by Thursday. Buy us a day. And Elena — don't tell her more than you have to. This isn't a confession, it's a PR situation.

Elena

Fiona, Niamh says it's a compliance situation. Article 50 of the AI Act—

Fiona (irritated)

The AI Act isn't fully enforced yet. We have until August. Let's not overreact.

Elena

The transparency provisions are already in force for general-purpose AI output. And the chatbot gave a customer wrong information. He's threatening the Central Bank.

Fiona (irritated)

Fine. Loop in Niamh. But I'm telling you, if we put out a statement saying "we used AI and didn't tell anyone," that's the headline. Let's be smarter than that.

Fiona hangs up. Niamh is waiting in Elena's office.

Niamh

I heard the call. Fiona wants to manage the narrative. I understand why. But here's what she's not factoring in: Article 50 transparency obligations for AI-generated content that could be mistaken for human-generated content are already enforceable for general-purpose AI systems. We're not in a grey area. And the chatbot — that's Article 50(1). Persons interacting with an AI system must be informed they're interacting with AI. Our chatbot doesn't disclose that it uses AI at all. Not in the interface, not in the terms, nowhere.

Elena

What about the customer? Robert Acheson.

Niamh

The Air Canada precedent is clear. A tribunal held the airline responsible for every statement its chatbot made. Our chatbot told him there was no maintenance fee. If we don't make this right, he has a strong case — and not just under the AI Act. This is basic consumer protection.

Niamh advising on the legal response
Decision 2: The Response

2:00 PM Wednesday

Elena has the full picture. She has to decide how NovaTech responds — to the journalist, to the customer, and internally. Niamh has prepared three options. Fiona has made her preference clear: minimise disclosure.

A. Full transparency

Respond to the journalist honestly: the article was AI-generated, the company is implementing a comprehensive disclosure policy immediately. Call Robert Acheson directly, reverse the fee, apologise. Draft an AI content policy for Fiona to approve Thursday. Art. 50(1) · Art. 50(2) · Art. 50(4)

B. Partial disclosure

Tell the journalist: "NovaTech uses AI tools to assist our content team. All content is reviewed by human editors before publication. We are developing a formal disclosure framework." Mostly true — going forward. Handle Robert Acheson through Customer Support with a goodwill refund. Wait for Fiona before creating policy.

C. Follow Fiona's lead

Tell the journalist NovaTech is "at the forefront of responsible AI adoption" and offer a call with the CMO on Thursday. Don't address specifics. Escalate Robert Acheson as a standard complaint. No internal changes until Fiona decides.

Elena Vasquez choosing full transparency
Consequence 2A: Full Transparency +3

Elena drafts two communications. First, to Lena Richter:

Dear Lena,

Thank you for your patience. I want to be straightforward with you.

NovaTech has been using generative AI tools across our content function for the past 11 months. This includes blog posts, email campaigns, and chatbot scripts. The article you identified — "The Future of Personal Finance" — was generated using AI tools. It should not have been published without disclosure, and it should not have been published without human review of its factual claims.

We have identified this as a gap in our processes and are taking immediate steps:

  1. All content published using AI tools is being reviewed and will carry appropriate disclosure.
  2. We are implementing a mandatory human review process for all AI-assisted content before publication.
  3. Our customer chatbot is being updated to clearly disclose that it uses AI, and all automated responses are being verified against current product information.

I recognise this is an area where the industry is evolving, and we should have moved faster on governance. We'd welcome the opportunity to speak with you about our approach — not to manage a narrative, but to be honest about what happened and what we're doing about it.

Elena Vasquez

Second, Elena calls Robert Acheson directly.

Elena

Mr. Acheson, my name is Elena Vasquez. I'm the Director of Marketing at NovaTech. I'm calling about your experience with our chatbot.

Robert

Finally, someone who isn't reading from a script.

Elena

I've reviewed what happened. Our chatbot gave you incorrect information about the premium account fee waiver. That fee waiver was discontinued six months ago, but our chatbot's information wasn't updated. That's our error, not yours.

Robert

So what are you going to do about it?

Elena

Three things. First, we're reversing the EUR 49 charge immediately. Second, we're honouring the fee waiver the chatbot offered you — no maintenance fee for 12 months. Third, we're reviewing every automated response in the chatbot to make sure the information is current.

Robert

That's... fair. That's what I was asking for from the start. Your support team kept telling me the chatbot wasn't binding.

Elena

They shouldn't have said that. If our system gives you information and you act on it in good faith, we should stand behind it.

Robert (softening)

I appreciate the call. I'll take down the tweet.

Niamh (after the call)

That was the right approach. The Air Canada tribunal found the airline liable because they tried to distance themselves from the chatbot's statements. You just did the opposite — and that's defensible. If this goes to the Central Bank, we can show we identified the problem, contacted the customer, and made it right within 24 hours.

Elena Vasquez attempting partial disclosure
Consequence 2B: The Middle Ground +1

Elena sends a carefully worded response to the journalist:

Dear Lena,

NovaTech embraces AI as part of our content workflow. All content is reviewed by human editors before publication, and we are developing a formal disclosure framework aligned with the EU AI Act's transparency requirements. We'd be happy to discuss our approach with our CMO, Fiona Gallagher, who is available for a call on Friday.

Elena Vasquez

The statement is technically forward-looking — "all content is reviewed" will be true once the new process is implemented. But it's not true today, and it wasn't true for the last 11 months.

Lena responds within an hour:

Thanks, Elena. I'd welcome the call with Fiona. A few follow-ups: Was the "Future of Personal Finance" article reviewed by a human editor before publication? Can you share the name of the editor? And can you confirm whether your chatbot uses generative AI — one of your customers has raised concerns publicly about receiving incorrect information from it.

Meanwhile, Customer Support handles Robert Acheson with a EUR 49 refund and a "sorry for the inconvenience" email. Robert is partially satisfied but doesn't delete his tweet. He replies: "I appreciate the refund. But someone needs to answer for why an AI is giving financial advice without telling people it's an AI."

Niamh

Elena, the journalist is going to ask about the specific article. She already knows the answer. If Fiona tells her on Friday that the article was reviewed by a human editor, and the journalist has evidence it wasn't, we've gone from a transparency issue to a credibility issue. And the customer — a refund without an acknowledgment that the chatbot is AI-powered isn't enough under Article 50(1). We've fixed the charge but not the disclosure.

Elena Vasquez watching the crisis escalate
Consequence 2C: Wait for Fiona -1

Elena emails the journalist:

Dear Lena,

Thank you for your inquiry. NovaTech is at the forefront of responsible AI adoption in financial services. Our CMO, Fiona Gallagher, would be delighted to discuss our approach with you. She's available for a call on Thursday afternoon or Friday morning.

Elena Vasquez

The response doesn't address any of the journalist's three specific questions. Lena recognises it immediately. She doesn't reply. Instead, she posts on X at 5:30 PM:

"Asked NovaTech Financial whether their thought leadership blog is AI-generated. Got a non-answer. Also: their chatbot gave a customer wrong fee information. Interesting pattern. Story coming Friday. @NovaTechFinancial"

By Thursday morning, the tweet has 3,400 engagements. Two other fintech publications have picked it up. The Central Bank of Ireland's press team has seen it. Robert Acheson's complaint was handled as a standard customer service ticket — he received an automated email and replied: "I don't want an automated apology from the same company that had an AI lying to me. I'm filing with the Central Bank."

Niamh (Thursday morning, 8:15 AM)

Elena, we've lost control of this. The journalist is publishing tomorrow with or without our input. The customer has escalated to the Central Bank. And Fiona isn't back until tonight. If she walks into that journalist call on Friday with a "nothing to see here" posture, this becomes the lead story, not a sidebar.

Elena

What do we do?

Niamh

We needed to do it yesterday. Article 50 required transparency from the moment we deployed these systems. Every day we delay, the regulator's patience shrinks. I need to brief the board. This is no longer a marketing problem.

4:00 PM Wednesday — The Chatbot Crisis Deepens

Niamh forwards Elena an email from NovaTech's Data Protection Officer (DPO):

Subject: NovaAssist — Urgent compliance review
From: Dr. Katya Novak, DPO <k.novak@novatech.ie>
To: Elena Vasquez, Niamh O'Brien

Elena / Niamh,

I've reviewed the NovaAssist chatbot configuration. Three issues:

  1. The chatbot does not disclose to users that they're interacting with an AI system. This violates Article 50(1) of the EU AI Act.
  2. The chatbot's "personality prompt" instructs it to "respond as a helpful NovaTech financial advisor." Under the Consumer Credit Directive and MiFID II, only regulated individuals can provide financial advice — we have a dual regulatory problem.
  3. The chatbot logs all conversations, including financial queries and account numbers. There is no GDPR-compliant data processing notice for these logs.

I recommend immediate suspension pending a compliance review.

Regards,
Dr. Katya Novak, DPO

Niamh

This isn't just an AI Act problem anymore. The chatbot is three violations deep. Any one of these could trigger a regulatory investigation. All three together? We need to shut this down today.

Elena

Fiona will say we're overreacting.

Niamh

Fiona isn't the one who'll be sitting across from the Central Bank examiner. Article 50(1) is clear: if a person is interacting with an AI system, they must be informed. There's no "unless the CMO thinks it's unnecessary" exception.

Wednesday Evening — Pre-Release Audit

Fiona's statement — before it goes to the FT

Niamh has just forwarded you Fiona's voice memo. Fiona dictated it on her phone after Niamh's call; this is the audio that will be transcribed into NovaTech's FT response tomorrow morning. Audit the lines. Anything misleading, or factually contradicted by what NovaTech already has on file, creates Article 99 exposure: penalties of up to €7.5M or 1% of annual turnover for supplying incorrect or misleading information to authorities once the FT story escalates into a regulatory inquiry.

Listen to the full memo, then click the timeline at every line that should not ship without revision. Flagging defensible lines costs points — an audit that suspects everything is a useless audit.

Fiona Gallagher, CMO
Voice memo · Fiona Gallagher
Recorded 16:42 · Sent via consultant for FT response
FT-PREP · Fiona Statement v3 · pre-release audit copy
What you're listening for: over-claims — lines that overstate NovaTech's track record or comfort with the situation. Two lines in this draft would not survive a regulator's reading.

Niamh presenting policy options
Decision 3: The Policy

Thursday, 9:00 AM — Emergency Meeting

Fiona arrives at the Dublin office, jetlagged. She's seen the media coverage. She calls an emergency meeting: Elena, Niamh, and Amir.

Fiona

Right. Where are we? And before anyone answers — I'm not looking to blame anyone. I pushed AI adoption. I own that. But we need a plan for the next 24 hours, and we need one for the next 6 months.

Niamh

The 24-hour plan is the journalist call. The 6-month plan is the AI content governance framework we need before Article 50 is fully enforced in August. But they're connected — what we tell the journalist tomorrow has to be consistent with the policy we're building. If we say one thing to the FT and do another internally, we create a documented record of misrepresentation.

Fiona

What are the options?

A. Comprehensive AI governance

Full AI content governance framework: mandatory disclosure, human review, chatbot redesign with AI disclosure, quarterly audits, staff training. Present this to the journalist as a proactive initiative. Niamh prepares a regulatory briefing for the Central Bank. Art. 50 · Art. 13 · Art. 4 · Art. 26

B. Minimum viable compliance

Targeted fix: add disclosure labels to the blog, implement a review checklist, add an AI disclosure banner to the chatbot. Don't overhaul — just add the disclosure and manually verify FAQ responses this week. Labels within 7 days, chatbot disclosure within 14 days.

C. Fiona's approach: narrative control

Position NovaTech as "leading on transparency" without acknowledging 11 months without oversight. Frame the journalist call as a thought leadership opportunity. Don't mention the chatbot. Tell Amir not to discuss AI processes externally. Brief the board minimally.

Elena Vasquez building a governance framework
Consequence 3A: The Framework +3

Elena presents the governance framework. Fiona listens, initially resistant, then shifts.

Fiona

You want me to go on a call with the FT and say we've been using AI without disclosure for 11 months?

Elena

I want you to go on the call and say we identified a gap, we've built a framework, and we're implementing it. The story is the framework, not the gap.

Niamh

Fiona, I need to add something. If the Central Bank opens an investigation — and I think there's a 60% chance they do, based on the chatbot complaint — the first thing they'll ask is what we did when we found out. If we can show a comprehensive governance framework adopted within 72 hours of identification, that's the difference between a warning letter and a formal investigation.

Fiona (long pause)

What's the cost?

Elena

The chatbot relaunch: approximately EUR 40,000, including compliance review and redesign. The audit of existing content: I'll need one contractor for two months, maybe EUR 15,000. Staff training: we can do that internally. Total: under EUR 60,000.

Fiona

And the cost of not doing this?

Niamh

Article 99. Up to EUR 15 million or 3% of global annual turnover, whichever is higher. That's the tier for breaching deployer transparency obligations; supplying misleading information carries the lower EUR 7.5 million / 1% tier. For NovaTech, the 3% figure alone comes to EUR 10.2 million. And that's the AI Act alone; the GDPR exposure from the chatbot logs is separate.

Fiona

Build the framework. I'll do the call.

Amir (quiet until now)

Elena... I want to help. I know I caused a lot of this. I should have asked about a review process. I should have checked the chatbot responses against the fee schedule. Can I help build the training module?

Elena

Yes. You're going to help build it because you understand exactly what went wrong. That's not punishment — that's the most useful thing you can do.

Elena Vasquez proposing a quick fix
Consequence 3B: The Quick Fix +1

Elena proposes the targeted approach. Fiona approves it immediately.

Fiona

This I can work with. Labels on the blog, review checklist, chatbot disclosure. We can tell the journalist we're already rolling this out. What's the timeline?

Elena

Labels within a week. Chatbot disclosure banner within two weeks.

Niamh

Elena, this addresses the visible symptoms but not the root cause. We have no documentation of which content was AI-generated. No training for the team on what requires disclosure. No process for chatbot response verification. And the GDPR issue with the conversation logs — you haven't addressed that at all.

Elena

We'll handle GDPR separately. This gets us past the journalist deadline.

Niamh

And what about the Central Bank? If Robert Acheson files his complaint, they won't ask whether we added a banner to the chatbot. They'll ask whether we have a risk management system. Whether we have human oversight processes. Whether we trained our staff. Under Article 9, a risk management system must be "established, implemented, documented, and maintained." A checklist isn't a system.

Two weeks later: The disclosure labels are live on the blog. The chatbot has a small banner reading "This service uses AI." But the FAQ responses haven't been fully verified — Amir checked the top 20 most-asked questions, but the chatbot's knowledge base has 340 response templates. Three more contain outdated information. One incorrectly states NovaTech's dispute resolution process, directing customers to a team that was restructured four months ago.
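The gap here is mechanical: every canned response should be checked against current product data, not just the top 20. A hypothetical sketch of that verification pass (all IDs, facts, and the substring heuristic are illustrative assumptions, not NovaTech's actual systems):

```python
# Current product facts, keyed by topic
facts = {
    "premium_fee_waiver": "discontinued",
    "intl_transfer_fee": "EUR 24 per transfer",
}

# Chatbot response templates: template ID -> (topic, canned answer)
templates = {
    "fees_q1": ("premium_fee_waiver", "no monthly maintenance fee for 12 months"),
    "transfers_q7": ("intl_transfer_fee", "EUR 12 per transfer"),
    "transfers_q9": ("intl_transfer_fee", "EUR 24 per transfer"),
}

def stale(templates: dict, facts: dict) -> list[str]:
    """Flag template IDs whose answer no longer contains the current
    fact for their topic -- candidates for manual review."""
    return [tid for tid, (topic, answer) in templates.items()
            if facts[topic] not in answer]

print(stale(templates, facts))  # ['fees_q1', 'transfers_q7']
```

A crude substring match like this over-flags, but over-flagging for human review is exactly the failure-safe direction for a compliance sweep.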

Elena Vasquez losing control of the narrative
Consequence 3C: Narrative Control -2

Fiona takes the lead.

Fiona

Here's what we do. The journalist call tomorrow — I'll position us as leaders. "NovaTech has been at the forefront of AI adoption in fintech, and we're now at the forefront of AI transparency. We're launching a disclosure initiative." No mention of the chatbot issue. No mention of how long we've been running without oversight.

Elena

Fiona, Niamh has flagged serious—

Fiona

Niamh is doing her job. I'm doing mine. The board doesn't need to know the details until we've got the narrative right. Elena, tell your team — especially Amir — that nobody discusses our content processes externally. That includes LinkedIn, that includes former colleagues, that includes friends at other fintechs.

Niamh (standing up)

Fiona, I need to formally advise you that instructing staff not to discuss compliance concerns externally could be interpreted as suppression of whistleblowing. Under the EU Whistleblower Directive, which Ireland transposed in 2022, employees who report breaches of EU law — including the AI Act — are protected from retaliation. If Amir or anyone on Elena's team raises this concern through an external channel and we've told them not to talk, our position becomes indefensible.

Fiona

I'm not suppressing anything. I'm asking for message discipline.

Niamh

A regulator won't see it that way.

Friday's journalist call: Fiona delivers a polished narrative. Lena Richter listens, then asks:

Lena (on the call)

Thank you, Fiona. That's helpful. Just to clarify — your customer-facing chatbot, NovaAssist, does it use generative AI?

Fiona

Our chatbot uses advanced natural language processing, yes. It's designed to help customers with routine queries.

Lena

Does it disclose to users that they're interacting with AI?

Fiona (hesitating)

We're... in the process of adding enhanced transparency features.

Lena

So currently, no?

Fiona

We're implementing industry-leading transparency measures across all our AI touchpoints.

Lena

I understand. One more thing — Robert Acheson, a customer in Cork, says your chatbot gave him incorrect fee information and NovaTech wouldn't honour it. Are you aware of this?

Fiona

I'm not aware of specific customer cases, but NovaTech takes all customer feedback seriously.

The FT Alphaville article publishes Monday: "NovaTech Financial: The Fintech That Can't Explain Its Own AI"

The article details: 11 months of AI-generated content without disclosure, the chatbot fee error, and Fiona's call where she "repeatedly deflected questions about whether customers are told they're interacting with AI." Robert Acheson filed his complaint with the Central Bank of Ireland on Thursday afternoon.

Niamh (Monday, 7:30 AM)

The Central Bank's Regulatory Affairs team called me at 7:15. They've seen the FT article. They want to understand our AI governance framework. I told them we'd provide documentation within 48 hours. Elena — what documentation do we have?

Elena

We have a ChatGPT subscription and 127 published blog posts with no version history.

Niamh

Then we have 48 hours to build a framework we should have built 11 months ago. And we're building it under regulatory scrutiny, not ahead of it.

Six Months Later

The decisions you made as Elena Vasquez rippled outward — to the marketing team, to NovaTech's reputation, to Robert Acheson, to the chatbot, and to public trust. Here's what happened.

Your Result: __ / 9


Key Lessons

1. Article 50 is not optional transparency — it creates specific, enforceable obligations to disclose AI interaction and label AI-generated content
2. You are responsible for every statement your AI systems make to customers — the "it's just a chatbot" defence has already failed in court
3. The cost of building AI governance is always lower than the cost of building it under regulatory scrutiny
4. AI literacy for staff (Article 4) has been enforceable since February 2025 — giving someone AI tools without training is itself a compliance failure
5. Identifying a risk and delaying action weakens your position with regulators — speed of response is a documented mitigating factor under Article 99
6. Narrative control is not a compliance strategy — Article 50 creates testable obligations, not brand positioning opportunities
7. Misrepresenting AI content as human-generated is worse than non-disclosure — it transforms a transparency gap into affirmative deception
8. The Air Canada chatbot ruling established that companies are liable for all representations their AI systems make — regardless of accuracy disclaimers

Key Legal References

Article 50(1) · AI Interaction Disclosure
Article 50(2) · Content Labelling
Article 50(4) · Deployer Disclosure of AI-Generated Content
Article 13 · Transparency
Article 4 · AI Literacy
Article 26 · Deployer Obligations
Article 99 · Penalties
Air Canada v. Moffatt · Chatbot Liability Precedent

Ask your L&D team to share the team leaderboard from your LMS dashboard. Can your department beat the rest?

Next Scenario

In Module 3, you'll step into a very different role: Head of Risk at NovaTech Financial. The company's AI credit scoring model has been rejecting applicants from certain postal codes at alarming rates. A rejected applicant has filed a complaint. And the AI vendor won't share how the model works. This is Article 6 territory — high-risk AI, where the stakes are highest.


Supplemental Resource

EU AI Act Quick Reference Guide

A printable summary of all key articles covered across the five modules — Articles 4, 5, 6, 9, 13, 14, 25, 26, 27, 50 and 99. Save as PDF for offline reference.

Module 2 Complete

The Disclosure

You navigated the disclosure dilemma. Now test what you've learned.

← Back to Course Take the Module Quiz →