
EU AI Act Article 4: What Evidence Regulators Will Actually Ask For

Article 4 requires "sufficient AI literacy" — but what does that mean in practice? A breakdown of the evidence national authorities will expect after 2 August 2026.

By Tom Payani

Article 4 of the EU AI Act has applied since 2 February 2025, and on 2 August 2026 the Act's full enforcement framework comes into force. Every organisation that provides or deploys AI systems in the EU must ensure their staff have "sufficient AI literacy" — a standard that applies whether the AI in question is ChatGPT, an automated hiring tool, a credit-scoring model, or a chatbot handling customer complaints.

Enforcement sits with national market surveillance authorities: likely the CNIL in France, BaFin and the Bundesnetzagentur in Germany, DNB in the Netherlands, AgID in Italy, the Spanish AI agency AESIA, and their counterparts across the bloc. Penalties under the Act reach €15 million or 3 per cent of global annual turnover for most infringements, and €7.5 million or 1 per cent for supplying incorrect information to authorities.

The question most compliance officers are working through right now is not whether to do AI literacy training. Most organisations have accepted that obligation. The question is what evidence a regulator will actually want to see — because "sufficient AI literacy" is written into the Regulation without a prescriptive definition, and each national authority will interpret it differently in the first 18 months of enforcement.

This article breaks down the four categories of evidence regulators are likely to ask for, why standard e-learning certificates don't meet the threshold, and what compliance-grade evidence looks like in practice. For a primer on what Article 4 requires at a policy level, see our earlier post on Article 4 literacy requirements. For the broader compliance picture, see our AI Act compliance checklist.


What "Sufficient" Actually Means

Article 4 reads as follows: "Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used."

The word doing the work here is "sufficient." Three principles flow from it.

Proportionality. Training must match the risk level and complexity of the AI system. Staff using a general-purpose language model (low-risk, everyday tool) need different training from staff using an automated hiring system (Annex III high-risk AI). A one-size-fits-all course fails this principle.

Context-specificity. Training must reflect how the AI is actually used in the organisation. Generic "AI for business" training won't demonstrate that staff understand this company's AI systems, their particular risks, or their operational procedures.

Impact on affected persons. Training must cover the rights of people affected by the AI. For Annex III systems — hiring, credit, healthcare, law enforcement — this includes Article 86 rights to explanation, the right to contest, and procedural fairness requirements.

Each of these principles is an evidence question in disguise. Can you show that your training was proportionate to the AI systems actually in use? Can you show it was context-specific to your operations? Can you show it covered the rights of affected persons?

If the answer to any of these is "we'd have to dig", you're not ready.
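
To see what the proportionality principle means operationally, here is a minimal sketch of a risk-tier-to-training mapping in Python; the module names and durations are illustrative, not prescribed by the Regulation.

```python
# Hypothetical mapping from the Act's risk tiers to training depth.
# Module names and durations are illustrative, not prescribed by the Act.
TRAINING_BY_TIER = {
    "minimal": ["General AI Literacy (1h)"],
    "limited": ["General AI Literacy (1h)"],
    "high-risk (Annex III)": [
        "General AI Literacy (1h)",
        "Annex III Deployer Obligations (3h)",
        "Article 86 Affected-Person Rights (1h)",
    ],
}
```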


The Four Categories of Evidence Regulators Will Ask For

Drawing on how supervisory and market surveillance authorities have enforced adjacent regulations (GDPR, MiFID II, the ePrivacy Directive), four categories of evidence consistently surface.

1. An AI Inventory

Every organisation subject to Article 4 needs a documented list of the AI systems in use, who uses them, and the associated risk classification.

This isn't optional. Without an inventory, training cannot be proportionate (you don't know which systems staff need training for), and the regulator cannot assess whether the training matched the actual AI footprint.

The minimum inventory fields:

  • System name and vendor
  • Category of AI under the Act's risk taxonomy (minimal, limited, high-risk under Annex III, prohibited)
  • Purpose and deployment context
  • Departments / roles using it
  • Date of inclusion in the inventory and last review

Many organisations discover during this exercise that their AI footprint is larger than assumed. Shadow AI usage — staff using consumer tools like ChatGPT or Claude without formal procurement — is a common blind spot. Regulators will ask how shadow AI is identified and brought under governance.
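
For teams that maintain the inventory in code rather than a spreadsheet, here is a minimal sketch of one record, assuming a Python dataclass representation; the fields mirror the list above, and the sample entries are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    name: str            # system name
    vendor: str          # vendor
    risk_tier: str       # classification under the Act's risk taxonomy
    purpose: str         # purpose and deployment context
    used_by: list[str]   # departments / roles using it
    added_on: date       # date of inclusion in the inventory
    last_reviewed: date  # date of last review

inventory = [
    AISystemRecord("HireVue Assessments", "HireVue", "high-risk (Annex III)",
                   "Automated candidate screening", ["Recruitment"],
                   date(2025, 9, 1), date(2026, 1, 15)),
    # Shadow AI brought under governance once discovered
    AISystemRecord("ChatGPT Enterprise", "OpenAI", "limited",
                   "Drafting and summarisation", ["All departments"],
                   date(2025, 11, 3), date(2026, 1, 15)),
]
```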

2. Training Records Matched to the Inventory

The training record is the core evidence artefact. For each member of staff in scope, the record must show what AI systems they use and what training they received for each.

Minimum fields per learner:

| Field | Example |
| --- | --- |
| Learner name | Marie Laurent |
| Role | Head of Recruitment |
| AI systems used | HireVue Assessments (Annex III), ChatGPT Enterprise (limited risk) |
| Training completed | Annex III Deployer Obligations (3h), General AI Literacy (1h) |
| Completion date | 15 March 2026 |
| Comprehension evidence | Scenario decision log, score 8/10, reviewed by DPO |
| Next refresh due | 15 March 2027 |

The key is that training is mapped to AI use, not delivered as a generic one-off. A learner using two different AI systems should have two separate training completions logged — one for each. A regulator asking "how did this learner know what to do with your hiring AI?" will be answered by the record, not an ad-hoc explanation.
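
As a sketch of what "mapped to AI use" means in practice, the following example cross-checks training records against the systems each learner uses and flags any learner-system pair with no logged completion. The structures loosely follow the record fields in the table above and are assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    learner: str
    role: str
    systems_used: list[str]       # AI systems this learner works with
    completions: dict[str, date]  # system name -> training completion date

def coverage_gaps(records: list[TrainingRecord]) -> list[tuple[str, str]]:
    """Return (learner, system) pairs where a system in use has no logged
    training completion -- exactly the gap a regulator would probe."""
    return [(rec.learner, system)
            for rec in records
            for system in rec.systems_used
            if system not in rec.completions]

records = [
    TrainingRecord("Marie Laurent", "Head of Recruitment",
                   ["HireVue Assessments", "ChatGPT Enterprise"],
                   {"HireVue Assessments": date(2026, 3, 15)}),
]
print(coverage_gaps(records))  # [('Marie Laurent', 'ChatGPT Enterprise')]
```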

3. Proof of Comprehension — Not Just Completion

This is where most organisations fail.

Standard corporate e-learning produces a completion certificate: "Learner X completed the AI Act Training course on Date Y." This proves attendance. It does not prove comprehension.

The distinction matters because Article 4 is explicit about "sufficient AI literacy." Literacy is applied knowledge, not absorbed information. The regulator's implicit question is: "can this person recognise a biased AI output, know what to do about it, and understand the affected person's rights?"

A multiple-choice quiz at the end of a video course is weak evidence of this. Multiple-choice tests reward recognition, not application. A learner can click the correct answer while still not being able to apply the concept under pressure.

Stronger evidence options:

  • Scenario-based exercises where the learner makes decisions inside a simulated AI-use case, with every decision logged and reviewed
  • Post-training applied assessments where the learner explains in their own words how they would handle a specific AI-related situation
  • Supervisor-signed comprehension confirmations where a line manager attests that the learner has demonstrated applied understanding in a real work context

All three produce evidence that goes beyond "did the learner finish the course." This is the distinction Article 4's "sufficient" standard implicitly demands.
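
One way to capture decision-level evidence is an append-only log of every choice a learner makes inside a scenario. A minimal sketch, assuming a JSON-lines file; the fields (decision, rationale, reviewer) are one possible shape for such a log, not a prescribed format.

```python
import json
from datetime import datetime, timezone

def log_scenario_decision(log_path, learner, scenario, decision,
                          rationale, reviewer=None):
    """Append one scenario decision to a JSON-lines evidence log.

    Each entry records what the learner decided and why, so the evidence
    shows applied judgement rather than mere completion."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "learner": learner,
        "scenario": scenario,
        "decision": decision,
        "rationale": rationale,   # learner's own words
        "reviewed_by": reviewer,  # e.g. DPO sign-off, filled in later
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_scenario_decision(
    "evidence.jsonl",
    learner="Marie Laurent",
    scenario="Biased shortlist from AI screening tool",
    decision="Escalate to DPO and suspend automated shortlisting",
    rationale="Output showed disparate pass rates; Annex III system requires human oversight",
)
```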

For a deeper discussion of why scenario-based training produces stronger regulatory evidence, see our piece on scenario-based compliance training.

4. Evidence of Review and Continuous Improvement

AI systems change. Your AI Act compliance must demonstrate that you're keeping up.

Minimum evidence:

  • Annual review of the AI inventory — signed and dated
  • Post-incident reviews for any near-misses (biased output discovered, incorrect automated decision, candidate complaint about AI screening)
  • External learning log — evidence the DPO or Compliance Officer tracks regulatory guidance updates from the EU AI Office and national authorities, and considers whether training content needs updating
  • Training content version control — when training is updated, evidence of what changed and when learners received the revised version

None of this needs to be burdensome. A quarterly 30-minute review meeting with the Responsible Person, a short written note on file, and a formally documented annual procedure review will meet the standard a reasonable national authority is likely to find credible in its first year of enforcement.
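
The annual-review cadence is also easy to monitor programmatically. A small sketch, assuming last-review dates are tracked per system; it flags anything reviewed more than a year ago.

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # annual review cadence

# Hypothetical last-review dates per inventoried system
inventory_reviews = {
    "HireVue Assessments": date(2026, 1, 15),
    "ChatGPT Enterprise": date(2025, 6, 1),
}

def overdue_reviews(reviews: dict[str, date], today: date) -> list[str]:
    """Return systems whose last inventory review is more than a year old."""
    return [name for name, last in reviews.items()
            if today - last > REVIEW_INTERVAL]

print(overdue_reviews(inventory_reviews, today=date(2026, 8, 2)))
# ['ChatGPT Enterprise'] -- last reviewed over a year before the check date
```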


Why Completion Certificates Are Not Evidence

Most enterprise LMS platforms produce completion certificates as the default training record. These show the learner's name, the course name, the completion date, and sometimes a score.

A completion certificate is necessary. It is not sufficient.

Consider what a completion certificate actually proves. The learner logged in. They advanced through the modules. They may have scored well on a multiple-choice test. None of this answers the question a regulator will ask: can this person recognise the AI systems in use, understand their risks, and apply the Act's principles under pressure?

The gap between completion and competence is the gap regulators care about. It is also the gap that national authorities in adjacent compliance domains — the ICO on GDPR, the FCA on conduct rules — have increasingly asked compliance teams to close with decision-level evidence rather than attendance records.

The AI Act's "sufficient" standard is new enough that no national authority has published definitive enforcement guidance yet. But the direction is clear from how these authorities have enforced similar open-textured standards. Completion certificates are the minimum. Applied evidence is where defensible compliance sits.


The AI Literacy Evidence Checklist

For compliance officers running a self-audit before the 2 August 2026 deadline, the following 12-point checklist surfaces the most common gaps:

Inventory

☐ 1. AI inventory exists and is current (last reviewed within 3 months)
☐ 2. Inventory includes shadow AI (consumer tools used without formal procurement)
☐ 3. Each AI system is classified under the Act's risk taxonomy

Training programme

☐ 4. Training content differs by AI risk level (high-risk Annex III systems trained to a higher standard)
☐ 5. Training is role-specific (HR staff receive different content from customer-service staff)
☐ 6. Training covers Article 86 affected-person rights (for Annex III deployers)
☐ 7. Training content is reviewed when AI systems change or when new guidance is published

Evidence

☐ 8. Training records are matched to the AI inventory (each learner's record shows what systems they use and what training they completed)
☐ 9. Evidence of comprehension exists beyond completion certificates (scenario outcomes, applied assessments, supervisor confirmations)
☐ 10. Records are retrievable within one business day of a regulator's request

Governance

☐ 11. Annual review is scheduled and documented with the Responsible Person's signature
☐ 12. Post-incident reviews are captured when AI-related near-misses occur

Organisations scoring 10 or above have a credible position. Below 7, there is substantial remediation work to do before the deadline.
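
For those who prefer to script the self-audit, a minimal sketch of the scoring logic, using the thresholds above; the item labels abbreviate the 12 points, and the label for the 7-9 band is an assumption, since only the top and bottom thresholds are defined here.

```python
CHECKLIST = [
    "inventory_current", "shadow_ai_included", "risk_classified",
    "training_by_risk_tier", "role_specific", "article_86_covered",
    "content_reviewed_on_change", "records_matched_to_inventory",
    "comprehension_evidence", "retrievable_in_one_day",
    "annual_review_signed", "incident_reviews_captured",
]

def self_audit(answers: dict[str, bool]) -> str:
    """Score the 12-point checklist and return a verdict string."""
    score = sum(answers.get(item, False) for item in CHECKLIST)
    if score >= 10:
        verdict = "credible position"
    elif score >= 7:
        verdict = "gaps to close before the deadline"  # assumed middle band
    else:
        verdict = "substantial remediation required"
    return f"{score}/12 -- {verdict}"

print(self_audit({item: True for item in CHECKLIST[:9]}))
# '9/12 -- gaps to close before the deadline'
```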


The Practical Path to August 2026

Fourteen months is both a lot of time and not much time, depending on where the organisation starts.

For a company with an existing AI governance function, a maintained inventory, and annual compliance training already in place — the path is incremental. Map existing training to Article 4 requirements, upgrade the comprehension-evidence layer, add the Annex III high-risk modules where AI systems require it.

For a company without existing AI governance — which is most mid-market organisations — the path is longer but still feasible. The sequence that works:

  1. Build the AI inventory in month 1. This drives every subsequent decision. If you don't know what AI you use, you cannot train staff to use it properly.
  2. Design the training programme by risk tier in month 2. Each AI system in the inventory maps to a training requirement — general literacy for low-risk, deeper Article 86 coverage for Annex III.
  3. Roll out training in months 3-8. Allow six months for completion, accounting for staff turnover, new starters, and shift patterns.
  4. Capture evidence rigorously. Don't rely on the default LMS completion record. Require decision-level outputs (scenario logs, applied assessments) as standard.
  5. Complete the first annual review in month 9-10. This gives you time to remediate gaps before the enforcement date.
  6. Audit your evidence package in month 12. Simulate the regulator's request; if you can produce every artefact within one business day, you're ready. A sketch of such a retrieval check follows this list.
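
Step 6 can be rehearsed in code. A hedged sketch of the retrieval check, assuming evidence artefacts are stored as files under a per-learner directory; the layout and file names are hypothetical, and the point is that one query should assemble everything within a business day.

```python
from pathlib import Path

# Hypothetical artefact set, one of each evidence category per learner
REQUIRED_ARTEFACTS = [
    "training_record.json",  # completions matched to inventory (category 2)
    "scenario_log.jsonl",    # decision-level comprehension evidence (category 3)
    "annual_review.pdf",     # signed review note (category 4)
]

def simulate_regulator_request(evidence_root: str, learner_id: str) -> dict[str, bool]:
    """Check whether every required artefact for a learner is retrievable.

    Any False in the result is a gap to remediate before the deadline."""
    learner_dir = Path(evidence_root) / learner_id
    return {name: (learner_dir / name).is_file() for name in REQUIRED_ARTEFACTS}

print(simulate_regulator_request("evidence", "marie.laurent"))
```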

Our EU AI Act Article 4 training course is a scenario-based module specifically designed to produce the decision-level evidence described above. Learners play an HR director using AI-assisted hiring, face bias and transparency decisions, and every choice is logged with legal citations. The output is a decision-path log exportable for your compliance records — not a tick-box certificate.

For a 2-minute self-assessment on where your current readiness stands, the AI Act Article 4 Readiness Calculator gives a score across 7 dimensions including the evidence layer.

The organisations that will pass their first national-authority review are the ones treating "sufficient AI literacy" as an evidence-generation problem, not a training-completion problem. The distinction is subtle. The consequence of missing it can run to fines in the millions of euros.

Tags: EU AI Act, Article 4, AI literacy, compliance evidence, DPO, AI governance, regulatory compliance
