NIS2 Compliance Training: Slides vs. Simulations — What Auditors Actually Accept
Article 20 doesn't say 'complete a course.' It says gain sufficient knowledge to assess cybersecurity practices. Here's why your training format determines whether you pass the audit.
As the NIS2 Directive was transposed into national law across EU member states, most compliance teams reached for the tools they already had: a webinar recording, a slide deck someone put together, or a generic eLearning module from their existing library. The thinking was straightforward: Article 20 says management must receive training, and training means courses, so book the course and tick the box.
That logic has a problem. Auditors are not checking whether training occurred. They are checking whether it worked.
This article looks at the three training formats organisations most commonly use for NIS2, what evidence each one produces, and whether that evidence holds up against what Article 20 actually demands.
What Article 20 Actually Says
Before comparing formats, it is worth reading the directive's language carefully. Article 20(2) of NIS2 states:
"Member States shall ensure that the members of the management bodies of essential and important entities are required to follow training, and shall encourage essential and important entities to offer similar training to their employees on a regular basis, so as to gain sufficient knowledge and skills in order to identify risks and assess cybersecurity risk-management practices and their impact on the services provided by the entity."
Three elements of this text carry legal weight.
First, the obligation sits with management bodies — not just the IT team. Directors, board members, and senior executives are personally in scope.
Second, the goal is described in active, functional terms: to identify risks and to assess cybersecurity risk-management practices. The verb is "assess." Not "be aware of." Not "understand the principles of." Assess — implying the capacity to make judgements, not simply to recall information.
Third, knowledge must be sufficient relative to a defined task: evaluating the cybersecurity posture of the organisation and its impact on services. This is not a pass/fail quiz threshold. It is a functional competence standard.
The format of training is not specified in the directive. That silence is significant. It means the burden falls on the organisation to demonstrate that whatever format was used actually produced the required competence. The question is not "did we run training?" It is "can we prove the training worked?"
Format 1: Webinars and Presentations
Live or recorded sessions delivered to a group — a supplier briefing, an internal awareness hour, an external consultant presenting the NIS2 framework to the board.
What they produce: An attendance log. Possibly a calendar invite. If well-organised, a registration record showing who joined and for how long.
What they prove: Presence. A person was in the room, or on the call, for a period of time.
Attendance records do not distinguish between a director who engaged carefully and one who had the session running in the background while responding to email. They record exposure to content, not absorption of it. They cannot demonstrate that a board member can now identify a reportable incident, evaluate a vendor's security posture, or challenge a CISO's risk assessment.
This is not a criticism of webinars as a learning format. They have legitimate uses — communicating policy changes, raising awareness of emerging threats, creating shared language across a team. But as the primary evidence of Article 20 compliance, an attendance log is thin. If an authority asks "how do you know your management body can assess cybersecurity risk," the answer "they attended a 60-minute presentation" is unlikely to satisfy.
Format 2: eLearning Modules and Slide-Based Courses
Self-paced digital courses — typically a series of slides, reading passages, or short videos followed by a multiple-choice quiz. The learner completes the module, passes the quiz, and receives a certificate.
What they produce: A completion record and a quiz score. These are storable, exportable, and auditable. They are the standard output of most LMS platforms.
What they prove: Information recall at a specific moment.
Multiple-choice assessment is well suited to testing whether someone read the material. It is poorly suited to testing whether they can use it. Selecting the correct answer to "which of the following is a reportable NIS2 incident?" does not demonstrate the ability to sit in a board meeting and determine whether a specific operational disruption meets the reporting threshold.
The learning science here is long established. In meta-analytic research, Kirkpatrick's Level 1 and Level 2 measures (satisfaction ratings and knowledge tests) correlate at approximately 0.09 with on-the-job performance, and studies consistently find that 70 to 74 per cent of compliance training content is forgotten within a month of completion. The certificate documents an event. It does not document a change in capability.
This matters for NIS2 because the competence standard is forward-looking. The directive asks whether the management body can assess risk-management practices at the point of governance — in a meeting, in a crisis, in an incident response. A quiz score from last quarter does not speak to that.
Completion records are still worth holding. They demonstrate organisational intent and procedural diligence. But absent any other evidence, they describe a training event, not a training outcome.
Format 3: Interactive Simulations
Scenario-based learning that places participants in realistic governance situations — a simulated ransomware incident, a vendor due diligence decision, a regulatory reporting deadline — and requires them to make sequenced decisions with consequences.
What they produce: A decision audit trail. The simulation records which choices were made, in what order, against what constraints, and how those decisions affected the modelled outcome. Scores can be attributed to specific competence dimensions: risk identification, escalation judgement, regulatory knowledge, stakeholder communication.
What they prove: Demonstrated ability to govern under realistic pressure.
The distinction is not cosmetic. A participant who completes a ransomware simulation has not just read about incident response — they have had to decide, in sequence, whether to isolate affected systems, whether the incident meets the 24-hour notification threshold, what to communicate to the board, and whether to engage external forensics. Each decision is logged against the correct course of action.
That log is qualitatively different evidence. It does not say "this person attended training." It says "this person, faced with a simulated incident meeting the NIS2 reporting criteria, made the correct escalation decision." That is closer to what Article 20 is actually requiring.
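To make that concrete, here is a minimal sketch of what a decision log might look like as data, with a function that rolls individual decisions up into per-dimension competence scores. Everything here is an assumption made for illustration: the field names, the scenario steps, and the scoring rule are invented, not the schema of any particular simulation platform.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One logged decision from a simulated governance scenario."""
    step: int           # position in the decision sequence
    prompt: str         # the situation presented to the participant
    chosen: str         # what the participant decided
    correct: str        # the course of action the scenario rewards
    dimension: str      # competence dimension the decision exercises
    decided_at: datetime

    @property
    def is_correct(self) -> bool:
        return self.chosen == self.correct


def competence_scores(log: list[DecisionRecord]) -> dict[str, float]:
    """Aggregate a decision log into per-dimension scores between 0.0 and 1.0."""
    marks: dict[str, list[int]] = defaultdict(list)
    for record in log:
        marks[record.dimension].append(1 if record.is_correct else 0)
    return {dim: sum(ms) / len(ms) for dim, ms in marks.items()}


# An invented log from a simulated ransomware incident.
log = [
    DecisionRecord(1, "Unusual encryption activity on file servers",
                   chosen="isolate affected systems",
                   correct="isolate affected systems",
                   dimension="risk identification",
                   decided_at=datetime.now(timezone.utc)),
    DecisionRecord(2, "Does the disruption trigger the 24-hour early warning?",
                   chosen="notify the CSIRT within 24 hours",
                   correct="notify the CSIRT within 24 hours",
                   dimension="regulatory knowledge",
                   decided_at=datetime.now(timezone.utc)),
    DecisionRecord(3, "The board asks for a status position",
                   chosen="wait for full forensics",
                   correct="brief the board with known facts",
                   dimension="stakeholder communication",
                   decided_at=datetime.now(timezone.utc)),
]

print(competence_scores(log))
# {'risk identification': 1.0, 'regulatory knowledge': 1.0, 'stakeholder communication': 0.0}
```

A record like this says something an attendance log cannot: which judgement was made, at which step, and whether it matched the course of action the scenario rewards.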
The US Department of Justice, in guidance on corporate compliance programmes, has been explicit that checkbox compliance — training completion without demonstrated behavioural change — is insufficient to establish an effective compliance programme. While NIS2 operates in a different jurisdiction, European supervisory authorities are likely to take a similar view as enforcement matures. The question will be whether training changed behaviour, not whether it happened.
The Evidence Question
Here is a practical test. If a national authority contacted your organisation tomorrow and requested evidence of Article 20 compliance for your management body, what would you hand them?
Webinar records: A list of names and attendance durations. No indication of what was understood or retained.
eLearning certificates: A PDF showing completion and a quiz score. Evidence of exposure and short-term recall.
Simulation decision logs: A record showing each participant's decisions against a realistic cybersecurity scenario, with competence scores across the relevant governance dimensions.
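Laid side by side as records, the difference in richness is easy to see. A purely hypothetical sketch, with every name, date, and value invented:

```python
# What each format typically leaves behind. All values are illustrative.
webinar_evidence = {
    "attendee": "J. Smith",
    "session": "NIS2 management briefing",
    "joined": "2025-03-04T09:00Z",
    "duration_minutes": 58,          # presence, nothing more
}

elearning_evidence = {
    "learner": "J. Smith",
    "module": "NIS2 fundamentals",
    "completed": "2025-03-04",
    "quiz_score": 0.85,              # recall at one moment in time
}

simulation_evidence = {
    "participant": "J. Smith",
    "scenario": "ransomware incident",
    "decisions": [                   # every choice, in sequence, scored
        {"step": 1, "chosen": "isolate affected systems", "correct": True},
        {"step": 2, "chosen": "notify CSIRT within 24 hours", "correct": True},
        {"step": 3, "chosen": "wait for full forensics", "correct": False},
    ],
    "competence_scores": {
        "risk identification": 1.0,
        "regulatory knowledge": 1.0,
        "stakeholder communication": 0.0,
    },
}
```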
None of these formats is inherently sufficient on its own. Context matters: an organisation with strong internal governance processes, regular board-level cyber discussions, and documented incident response rehearsals is in a different position from one relying on a single annual training event, regardless of format.
But if you are trying to construct an evidence base that addresses the specific language of Article 20 — the ability to identify risks and assess cybersecurity risk-management practices — then the format that produces the richest evidence is the one that tests those exact capabilities.
Why the "Assess" Verb Changes the Standard
It is worth dwelling on the word "assess" for a moment, because it does a lot of work in Article 20.
Awareness-level training — the kind that explains what NIS2 is, why it matters, and what the key obligations are — produces awareness. That is a meaningful outcome. An executive who understands the regulatory landscape is better placed than one who does not.
But assessment is a higher-order skill. It requires the ability to look at a specific situation — a vendor contract, an incident report, a security architecture proposal — and make a judgement about its adequacy. That requires not just knowledge but the ability to apply knowledge under conditions of ambiguity and time pressure.
Learning science has a useful framework for this distinction. Bloom's Taxonomy separates remembering and understanding (lower-order) from applying, analysing, evaluating, and creating (higher-order). Article 20's competence standard sits squarely in the higher-order range. Training formats that only engage the lower-order capabilities — presentations, slide-based courses — may produce knowledge without producing the competence the directive requires.
Simulation-based approaches are specifically designed for the higher-order range. The learner cannot succeed by recalling a fact. They succeed by applying a framework to a novel situation, making a judgement, and living with the consequences within the simulation. That is a closer analogue to what governance actually requires.
Practical Recommendations
Choosing a training approach for Article 20 compliance involves trade-offs between cost, time, logistics, and evidence quality. A few observations that may help.
Use webinars and presentations for awareness, not compliance evidence. They are efficient for communicating changes, maintaining cultural awareness, and creating shared language. They should not be the primary mechanism for demonstrating management competence.
Use eLearning for foundational knowledge and documentation. Completion records and quiz scores are easy to maintain and demonstrate procedural diligence. Pair them with something that tests application, or they will not address the Article 20 standard on their own.
Use simulations to generate competence evidence. The decision log from a well-designed scenario is the closest available proxy for demonstrated governance capability. It directly addresses the "assess" standard. It is also the format most likely to produce genuine learning that persists — applied practice with meaningful feedback is how adults build durable skill.
Build a layered evidence base. The organisations best placed for an audit will not rely on a single format. They will have a combination of foundational knowledge records, applied practice evidence, and ongoing governance activity (board minutes, risk committee records, incident response rehearsals) that together demonstrate a functioning compliance culture. A sketch of what that layered record set might look like follows this list.
Check whether your training evidence addresses the right role. Article 20 is explicitly a management body obligation. Training programmes designed for IT staff or general employees will not satisfy the specific competence standard for directors and senior executives, even if the certificates look identical.
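As a purely illustrative sketch of what "layered" might mean in practice, an evidence register for a single management body member could be structured along these lines. Every field name and entry is invented for the example:

```python
# A hypothetical per-director evidence register combining the three layers.
evidence_register = {
    "management_body_member": "Director A",
    "foundational_knowledge": [      # completion records and quiz scores
        {"type": "elearning_completion", "module": "NIS2 fundamentals",
         "date": "2025-01-15", "quiz_score": 0.90},
    ],
    "applied_practice": [            # decision logs from scenario work
        {"type": "simulation_log", "scenario": "ransomware incident",
         "date": "2025-02-20",
         "competence_scores": {"risk identification": 1.0,
                               "escalation judgement": 0.8}},
    ],
    "ongoing_governance": [          # evidence the competence is exercised
        {"type": "board_minutes", "topic": "quarterly cyber risk review",
         "date": "2025-03-10"},
        {"type": "incident_rehearsal", "date": "2025-04-02"},
    ],
}
```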
What This Means for Your Training Decision
The point of this comparison is not to declare one format the winner. It is to be honest about what each format proves, because that is the question auditors will eventually ask.
A programme built entirely on webinar attendance and eLearning certificates is not necessarily non-compliant. But it leaves a meaningful gap between what the directive requires — demonstrated ability to assess cybersecurity risk — and what the evidence shows. That gap is a risk.
A programme that includes scenario-based elements produces evidence that maps more directly to Article 20's language. It is also more likely to result in the actual capability change the directive is trying to achieve — which, beyond the compliance calculation, is the real point.
If you want to see what simulation-based NIS2 training looks like in practice before committing to a format, the NIS2 ransomware scenario is available to try directly. It runs a board-level incident response decision sequence and produces the kind of scored decision log described above.
For a structured view of where your current training approach sits against the Article 20 standard, the NIS2 readiness assessment takes about two minutes and gives you a clear picture of your evidence gaps.
The full NIS2 board training programme is available for organisations that want to build a complete evidence base — combining regulatory knowledge, applied scenario practice, and documented competence records across the management body.
The compliance question and the learning question have the same answer here. Training that produces evidence of assessed competence is better evidence and better training. The format you choose determines both.