
DORA — Digital Operational Resilience Act

The Outage

Meridian Payments — Monday, 9:47 AM — Month-End

An interactive scenario about a critical vendor outage, the 4-hour reporting clock, and what happens when the CEO is on a plane and the regulator is on the line.

4-hour window. CEO unreachable. The regulator is waiting.

All languages available in the full course

Your Role

Jordan Adams

Head of Compliance at Meridian Payments — a mid-size EU payments firm processing €2.3 billion annually. DORA-regulated. 850 employees.

It's 9:47 AM on the last business day of the month. Your critical cloud provider's status page just turned amber.

Monday, 9:47 AM — Month-End Processing Day

Your monitoring dashboard changes colour. Amber. The payment gateway response times are climbing: 200ms → 800ms → 1,400ms.

■ DEGRADED — Payment Gateway (CloudVault EU-West)
■ OPERATIONAL — Core Banking API
■ OPERATIONAL — Fraud Detection
■ DEGRADED — Merchant Settlement Queue
Last updated: 09:47:12 UTC

It's month-end. 14,000 merchant settlements are queued. Your largest client — a retail bank processing €180M today — has an SLA that triggers penalty clauses at 99.5% uptime. You're already below that.

Emma Powell, your CTO, messages from the engineering floor: "Seeing latency spikes on CloudVault. Checking with their team. Probably their side."

Emma Powell — CTO

"Jordan, it's worse than I thought. CloudVault's EU-West region is having a major issue. Their status page says 'investigating' but I've been on hold with their support for 20 minutes."

"Payment processing is down to 40% capacity. The settlement queue is backing up. If this isn't resolved in the next hour, we'll miss the month-end window for 8,000 merchants."

You "What about failover?"

Emma "I can failover to EU-Central. Forty-five minutes. But Jordan — twelve million in mid-flight. If even one of those is a retail bank client, that's a direct debit failure for their customers. Real people bouncing rent payments."

Your phone buzzes. A text from David Chen, your CEO, sent before his flight: "Month-end is clean, right? Singapore board presentation depends on it. Don't let anything blow up while I'm in the air." He's now unreachable for 10 hours.

Decision 1 of 3 — Incident Classification

Your monitoring shows 14,000 merchants affected. Payment processing at 40% capacity. €180M in month-end settlements at risk. The question on your desk: how serious is this?

Emma thinks it's CloudVault's problem. The status page says "investigating." You have incomplete information — but the queue is growing by the minute.

Do you classify this as a major ICT-related incident?

Classify as major — start the 4-hour clock now
14,000 merchants affected, €180M at risk, critical payment function degraded. This meets the criteria. Notify the competent authority. Better early than late.
Wait for Emma's assessment — classify in 90 minutes
Give the CTO time to determine if it's really a major incident or just CloudVault having a bad morning. You'll have better data. But the clock is already ticking.
Classify as significant but not major — monitor and escalate if needed
It's a vendor issue, not an internal failure. "Significant" doesn't trigger the 4-hour notification. If it gets worse, you can reclassify. If it resolves, no harm done.
You

"This is a major incident. 14,000 merchants, €180M in settlements, critical payment function below threshold. I'm classifying it now and starting the notification process."

Emma "Jordan, it might resolve in 20 minutes. CloudVault has had blips before. If you file a major incident report and it turns out to be nothing, we look like we overreacted."

You "If it resolves in 20 minutes, I update the report. If it doesn't and I haven't classified it, we're explaining to the regulator why we waited. Which conversation would you rather have?"

You open the incident management platform and log the classification. The 4-hour notification window starts now: 10:15 AM. Deadline: 2:15 PM.

+3 Regulator | +1 Board
90 Minutes Later — 11:17 AM

Emma's assessment: "It's definitely CloudVault. Their EU-West storage layer failed. They're restoring from backups. ETA: unknown."

You classify it as major at 11:17 AM. The 4-hour window starts now. Deadline: 3:17 PM.

But the incident started at 9:47. When the regulator, Dr. Rossi, asks "when did you become aware?", the answer is 9:47. When she asks "when did you classify?", the answer is 11:17. The gap is 90 minutes.

"Why did it take you 90 minutes to determine that 14,000 affected merchants and €180M in at-risk settlements constituted a major incident?"

-1 Regulator | +1 Operations
2 Hours Later — 11:47 AM

The incident hasn't resolved. It's gotten worse. Payment processing is now at 15% capacity. Three merchant clients have escalated. Your largest client's SLA breach penalty is now active: €50,000 per hour.

You reclassify to major at 11:47. The 4-hour window starts at 11:47. Deadline: 3:47 PM.

At 11:22, Fatima Khoury in Rotterdam calls her bank. She runs a 12-person logistics company. Her payroll settlement didn't process. Twelve people expecting their salary today won't get it. She's been on hold for 40 minutes. Nobody can tell her why.

The regulator will see the timeline. 9:47 to 11:47 — two hours of a critical payment function being degraded before classification. Under Article 18, classification should happen "as soon as the incident is detected." Two hours isn't "as soon as."

-2 Regulator | +1 Vendor
10:15 AM — Incident Triage

While you deal with the CloudVault outage, four more alerts have hit your queue. Under DORA Article 18, each must be classified by severity. You have 60 seconds.

Classify each as CRITICAL, MAJOR, or MINOR.


ATM Network — 340 machines offline across 3 regions

Customer-facing service, financial impact, multi-region scope

Internal HR portal — slow load times (8s vs normal 2s)

Internal system only, no financial or customer impact

Mobile banking app — intermittent login failures for 12% of users

Customer-facing, partial disruption, single service

Core banking — settlement engine processing at 40% capacity

Settlement risk, regulatory reporting affected, systemic impact
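The triage criteria above can be sketched as a toy rule set. Everything here — the `Alert` fields, the thresholds, the bucket logic — is illustrative for this exercise, not language from DORA or its technical standards:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    customer_facing: bool   # disrupts a client-facing service?
    financial_impact: bool  # settlements, payments, or penalties at stake?
    multi_region: bool      # scope spans more than one region?
    systemic: bool          # settlement or regulatory reporting affected?

def triage(alert: Alert) -> str:
    """Rough severity bucket mirroring the hints in the exercise above."""
    # Systemic impact, or a broad customer-facing financial hit, is critical.
    if alert.systemic or (alert.customer_facing
                          and alert.financial_impact
                          and alert.multi_region):
        return "CRITICAL"
    # Any customer-facing or financial disruption is at least major.
    if alert.customer_facing or alert.financial_impact:
        return "MAJOR"
    return "MINOR"

alerts = [
    Alert("ATM network outage", True, True, True, False),
    Alert("HR portal slow", False, False, False, False),
    Alert("Mobile app logins failing (12%)", True, False, False, False),
    Alert("Settlement engine at 40%", False, True, False, True),
]
for a in alerts:
    print(f"{a.name}: {triage(a)}")
# ATM network outage: CRITICAL
# HR portal slow: MINOR
# Mobile app logins failing (12%): MAJOR
# Settlement engine at 40%: CRITICAL
```

Real classification under the DORA framework weighs more dimensions (clients affected, duration, geographical spread, data losses, criticality of services, economic impact); the point of the sketch is only that the call must be rule-driven and fast, not improvised under pressure.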

11:30 AM — The CEO Problem

Your CEO, David Chen, is on a flight to Singapore. He took off 40 minutes ago. He's unreachable for the next 10 hours.

Meanwhile, CloudVault's account manager finally calls back:

Marcus Hahn — CloudVault "Jordan, look — I know this is bad timing. We're seeing issues in EU-West. Our team is on it."

You "Marcus, when we signed the contract, you personally assured me EU-West had full redundancy. You said — I have the email — 'no single point of failure in our EU infrastructure.' What happened?"

Marcus A pause. "The redundancy is at the application layer. This is a storage controller issue. We... I'll be honest, Jordan, I'm not sure our maintenance schedule covered the firmware on those controllers. I need to check."

You "So the 'no single point of failure' had a single point of failure."

Marcus "I'll update you in an hour."

The notification deadline is approaching. Your CEO is unreachable. The vendor is vague. You need to decide what to file.

Decision 2 of 3 — Regulatory Notification

The 4-hour window under Article 19 is closing. You must file an initial notification with your national competent authority. The CEO is on a plane. The vendor has promised an update "in an hour." You don't have a root cause.

What do you file?

File with what you know — incomplete but honest
"Major ICT incident affecting payment processing. Vendor-side failure. 14,000 merchants impacted. Root cause unknown. CEO unreachable — filing under delegated authority." Honest. Incomplete. On time.
Wait for Marcus's update — file 1 hour late with better data
Marcus promised an update in an hour. If you have the root cause, the report is stronger. But you'll miss the 4-hour window by 60 minutes.
File a minimal report — "ICT incident under investigation"
Technically meets the filing requirement. Doesn't reveal the severity. Buys you time to get better information without technically being late.
You

You file at 2:12 PM — 3 minutes before deadline.

Dr. Elena Rossi — National Competent Authority "Mr. Adams, I've received your initial notification. Thank you for filing within the window. I note the root cause is unknown — when do you expect the intermediate report?"

You "Within 72 hours. We're working with CloudVault to determine root cause."

Elena "The report notes your CEO was unreachable. Who authorised the filing?"

You "I did. Under the delegated authority in our ICT incident management policy, section 4.3."

Elena "Good. That's exactly the kind of documentation we expect to see."

+3 Regulator | +1 Board
3:15 PM — One Hour Late

Marcus's update arrives at 3:08 PM: "Root cause identified — storage controller firmware bug in EU-West. Patch deploying now. Full resolution by 5 PM."

You file at 3:15 PM. Better data. But 60 minutes past the deadline.

Dr. Rossi "Mr. Adams, your initial notification was due at 2:15. It arrived at 3:15. Under Article 19, the initial report must be submitted within 4 hours of classification. Can you explain the delay?"

You "We were waiting for the vendor to confirm root cause—"

Dr. Rossi "The initial report doesn't require root cause. It requires what you know. You could have filed at 2:15 and updated when the root cause was confirmed. That's what the intermediate report is for."

-2 Regulator | 0 Board
Dr. Elena Rossi

"Mr. Adams, I've received your initial notification. It says 'ICT incident under investigation.' That's it."

"Under Article 19, the initial notification must include: the nature and classification of the incident, a first assessment of impact, and a point of contact. Your filing contains none of these."

You "We're still determining the scope—"

Elena "You classified this as major. That means you already determined it meets the criteria. The initial report should reflect that assessment. I'll need a revised submission within 2 hours."

-1 Regulator | -1 Board
5:30 PM — The Root Cause

CloudVault's incident is resolved. Payments are processing normally. The settlement queue clears by 7 PM. No data was lost — but 6 hours of degraded service on month-end.

Marcus Hahn "It was a firmware bug in our storage controller. Affected EU-West only. We've deployed the patch. It won't happen again."

You "Marcus, this outage lasted 6 hours and affected our critical payment function. Under DORA Article 28, we need to assess whether your resilience arrangements meet the standards in our outsourcing agreement."

Marcus "That's... let me check with our compliance team and get back to you."

He doesn't get back to you. The board meeting is Thursday. The CEO is back tomorrow. You need to decide how to frame this.

Decision 3 of 3 — The Board Review

Thursday morning. David Chen is back. The board wants to understand what happened. Your SLA penalties total €300,000. Three merchant clients have requested "assurance meetings."

The question isn't just "what went wrong." It's "whose fault was it?" CloudVault's firmware bug caused the outage. But DORA Article 28 says you are responsible for managing third-party ICT risk. The vendor failed. But did your oversight fail too?

Full accountability — "Our vendor management failed. Here's the fix."
Present the timeline honestly. Acknowledge that your third-party risk monitoring didn't catch that CloudVault's EU-West had no firmware update schedule. Propose a remediation plan.
Shared responsibility — "CloudVault failed, and here's how we responded"
Present your incident response positively. Highlight what worked (classification, notification, recovery). Note CloudVault's failure as the root cause. Recommend enhanced vendor oversight.
Vendor blame — "CloudVault is the problem. We're reviewing the contract."
The firmware bug was CloudVault's. The SLA breach was CloudVault's. Position Meridian as the victim. Recommend contract penalties or vendor switch.
You

"The outage was caused by CloudVault's firmware bug. But our third-party risk management didn't identify that their EU-West maintenance schedule had a gap. That's on us."

"Here's the remediation plan: quarterly vendor resilience reviews, automated monitoring of our critical ICT providers' maintenance schedules, and updated exit strategy documentation for CloudVault."

The CEO nods. The board appreciates the honesty. When Dr. Rossi reviews your intermediate report and sees the same accountability, she notes: "Meridian's incident response demonstrates a mature approach to third-party ICT risk management."

+3 Regulator | +2 Board | -1 Vendor
You

"CloudVault experienced a firmware failure in their EU-West region. Here's how we responded: classified within [X] minutes, notified the regulator within the 4-hour window, initiated failover procedures, and recovered full processing by 7 PM."

"Recommendation: enhanced vendor oversight including quarterly resilience reviews."

The board is satisfied. But Dr. Rossi's intermediate report review notes: "The response timeline is well-documented. However, the entity's third-party risk assessment for CloudVault was last updated 14 months ago. DORA Article 28 requires ongoing monitoring, not periodic review."

+1 Board | 0 Regulator
You

"CloudVault's infrastructure failed. Their firmware management was inadequate. We're reviewing the contract and considering penalties."

The CEO asks: "Did we know about this risk?" The honest answer is no — because your last vendor risk assessment was 14 months ago. You don't say that.

Dr. Rossi's report is less charitable: "The entity's presentation to the management body attributed the incident entirely to the ICT third-party service provider. However, under Article 28(2), the financial entity retains full responsibility for compliance with DORA, including oversight of ICT third-party service providers. The absence of an updated risk assessment suggests insufficient ongoing monitoring."

-2 Regulator | -1 Board | +1 Vendor
Meridian Payments — Incident Management System
Logged in: Jordan Adams, Head of Compliance

System Status

Payment Gateway: DOWN
Core Banking API: OK
Fraud Detection: OK
Settlement Queue: DOWN
Merchant Portal: DEGRADED

Art. 19 Filing Deadline

01:42:18

remaining

Incident Feed

09:47 Gateway response: 200ms → 800ms → 1,400ms
09:52 CloudVault EU-West: “investigating”
10:03 Emma: “Worse than expected. 40% capacity.”
10:15 CEO SMS: “Month-end running clean?”
10:22 Settlement queue backing up — 14,000 pending
10:38 CloudVault Account Manager calls back
11:02 Classification decision made
11:18 Filing window: 01:42 remaining

Initial Incident Notification — Article 19

File Your Report

Complete all fields. The competent authority is waiting.

Incident Review

Compliance Score: 0
Regulator Satisfaction: 50%
Client Impact: 50%
Board Confidence: 50%

Your Incident Timeline

What Happened

DORA Articles in Play

Article 18 — ICT incident classification criteria
Article 19 — Reporting obligations (4h initial, 72h intermediate, 1 month final)
Article 28 — Third-party ICT risk management
Article 30 — Key contractual provisions for ICT services
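The Article 19 cadence listed above can be sketched as a small deadline helper. Note two assumptions: "one month" is approximated here as 30 days, and the timestamps are the scenario's, not real filing data:

```python
from datetime import datetime, timedelta

def article19_deadlines(classified_at: datetime) -> dict:
    """Reporting deadlines counted from classification, per the
    cadence above: initial report within 4 hours, intermediate
    within 72 hours, final within one month (~30 days here)."""
    return {
        "initial": classified_at + timedelta(hours=4),
        "intermediate": classified_at + timedelta(hours=72),
        "final": classified_at + timedelta(days=30),
    }

# Jordan classifies at 10:15 AM on month-end Monday.
classified = datetime(2024, 9, 30, 10, 15)
deadlines = article19_deadlines(classified)
print(deadlines["initial"])       # 2024-09-30 14:15:00 — the 2:15 PM deadline
print(deadlines["intermediate"])  # 2024-10-03 10:15:00
```

The helper also makes the scenario's central trap visible: the clock runs from *classification*, so delaying classification (the 11:17 and 11:47 branches) doesn't buy time with the regulator — it just widens the gap between "aware" and "classified" that Dr. Rossi asks about.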

Your Decisions

What Happened Next

You scored . The 4-hour clock doesn't wait. Try a different path?

Ready to train your team? Take the DORA readiness assessment