AI Act Compliance Checklist for L&D Directors
A practical AI Act compliance checklist for L&D directors covering AI inventory, risk classification, training obligations, and evidence trails.
The AI Act does not arrive all at once. Different obligations phase in across different timelines: the Article 4 AI literacy requirements have applied since 2 February 2025, and the bulk of the Act, including the high-risk regime, applies from 2 August 2026. For L&D directors, this creates a specific problem: you need a structured approach to something that most organisations have never had to think about before.
This checklist is designed to be a working document. It covers the six areas that L&D teams need to address between now and the August 2026 deadline, and it is structured so you can use it as the basis for internal reporting, project planning, or board-level updates.
If you want to understand the Article 4 literacy requirements in more detail before working through this checklist, our breakdown of AI Act Article 4 and what it means for employee training covers the legal text and its practical implications.
Step 1: Inventory Every AI System in Your Organisation
You cannot train people on AI systems they do not know about. The first step — and the one most organisations underestimate — is building a comprehensive inventory of every AI system deployed, procured, or under evaluation across the business.
This is broader than most people expect. AI systems under the AI Act are not limited to large language models or chatbots. The definition in Article 3 covers any machine-based system that operates with varying levels of autonomy and infers from the input it receives how to generate outputs such as predictions, recommendations, decisions, or content that can influence physical or virtual environments.
In practical terms, your inventory should include:
- Recruitment tools — CV screening software, automated shortlisting, video interview analysis, psychometric scoring platforms
- HR and workforce management — automated scheduling, performance prediction, attrition risk scoring, compensation benchmarking tools
- Customer-facing AI — chatbots, recommendation engines, dynamic pricing, personalised marketing tools
- Operational AI — predictive maintenance, supply chain optimisation, fraud detection, quality control automation
- Productivity tools — AI writing assistants, code generation tools, AI-powered search, meeting transcription and summarisation
- Decision-support systems — any tool that provides recommendations or scores that influence human decisions, even if a person makes the final call
For each system, record:
- System name and vendor
- Business function and department
- Who uses it (roles, not just team names)
- What decisions it influences
- Data inputs and outputs
- Current training provision (if any)
The goal is not perfection on the first pass. The goal is visibility. Most organisations discover AI systems during this inventory that no central team knew about — particularly in departments that procured SaaS tools independently.
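If you want to keep the inventory in a structured, machine-readable form rather than a spreadsheet, the fields above translate directly into a simple record type. This is a minimal sketch in Python; the field names and the example entry are illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI system inventory, using the fields from Step 1."""
    name: str                       # system name
    vendor: str
    business_function: str          # e.g. "Recruitment"
    department: str
    user_roles: list[str]           # roles, not just team names
    decisions_influenced: str
    data_inputs: str
    data_outputs: str
    current_training: str = "none"  # current training provision, if any

# Hypothetical example entry
cv_screener = AISystemRecord(
    name="CV Screener",
    vendor="ExampleVendor",
    business_function="Recruitment",
    department="HR",
    user_roles=["Recruiter", "Hiring Manager"],
    decisions_influenced="Candidate shortlisting",
    data_inputs="CVs, job descriptions",
    data_outputs="Ranked candidate scores",
)
```

A structure like this makes the later steps easier: risk classifications and training records can reference the same entries rather than a free-text spreadsheet.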
Owner: Assign a single person to lead the inventory. In most organisations, this sits with IT or procurement, but L&D needs to be involved from the start because the training obligations flow directly from this list.
Deadline: Complete the initial inventory at least four months before the August 2026 deadline. You will need the remaining time to classify, plan, and deliver training.
Step 2: Classify Each System by Risk Level
The AI Act introduces a risk-based classification framework. Where a system falls in this framework determines what obligations apply — and by extension, what training your people need.
The four tiers are:
Unacceptable risk — Prohibited outright. This includes social scoring systems, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and AI that exploits the vulnerabilities of specific groups. If anything on your inventory falls here, escalate immediately. This is not a training issue; it is a legal one.
High risk — Subject to the most extensive obligations. The AI Act lists specific use cases in Annex III, including: AI in recruitment and worker management, creditworthiness assessment, risk assessment and pricing in life and health insurance, access to essential services, law enforcement, and safety components in regulated products. If your organisation uses AI for any of these purposes, those systems carry the heaviest compliance burden.
Limited risk — Systems with transparency obligations. Chatbots, deepfake generators, and emotion recognition systems must disclose that users are interacting with AI. Training here focuses on ensuring the people who deploy and manage these systems understand disclosure requirements.
Minimal risk — Most AI systems fall here. Spam filters, AI-powered search, recommendation engines for internal use. The AI Act does not impose specific technical requirements on these, but Article 4 literacy obligations still apply to the people who use them.
For your checklist, mark each system on the inventory with its risk classification. Where classification is ambiguous — and it will be for some systems — document your reasoning. This documentation itself becomes part of your compliance evidence.
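To make the classification auditable, it helps to store the tier and the documented reasoning together with each inventory row. The sketch below is a deliberately simplified illustration, not a legal test: the keyword set and the `classify` helper are hypothetical, and real classification decisions need human judgement against the Act's actual text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright; escalate, do not train
    HIGH = "high"                   # Annex III use cases
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # Article 4 literacy still applies

# Illustrative shorthand only; Annex III defines the real high-risk list
HIGH_RISK_FUNCTIONS = {"recruitment", "credit scoring", "insurance risk assessment"}

def classify(business_function: str, reasoning: str) -> dict:
    """Attach a tier and its documented reasoning to an inventory row.

    The reasoning string is stored because ambiguous classifications
    must be documented as compliance evidence (Step 2).
    """
    is_high = business_function.lower() in HIGH_RISK_FUNCTIONS
    tier = RiskTier.HIGH if is_high else RiskTier.MINIMAL
    return {"function": business_function, "tier": tier, "reasoning": reasoning}

row = classify("Recruitment",
               "Used to shortlist candidates; Annex III employment use case")
```

The point is not the lookup logic, which is trivially simplified here, but that tier and reasoning travel together so the documentation exists from the moment a decision is made.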
Step 3: Map Training Obligations to Roles
Article 4 of the AI Act requires that providers and deployers of AI systems ensure "a sufficient level of AI literacy" among their staff, taking into account their technical knowledge, experience, education and training, the context in which the AI system is used, and the persons or groups on whom the system will be used.
This means training is not one-size-fits-all. The obligation scales with the person's role and the system's risk level.
Here is how to map it:
Executive and board level — Need to understand the AI Act's risk framework, the organisation's obligations, the penalties for non-compliance, and their governance responsibilities. They do not need technical depth. They need enough to make informed decisions and provide meaningful oversight.
AI system operators (people who use AI tools daily) — Need practical training on the specific systems they use: what the system does, what it does not do, how to interpret its outputs, when to override or escalate, and how to document their interactions. This is the largest training population in most organisations and where the Article 4 obligation has its most direct application.
HR and recruitment teams — If your organisation uses AI in any part of the hiring process, these teams carry heightened obligations. AI in recruitment is classified as high-risk under Annex III. Training must cover bias awareness, the limits of automated scoring, human oversight requirements, and candidate rights.
IT and data teams — Need deeper technical training on how AI systems work, how to monitor them, how to detect drift or degradation, and how to maintain documentation. For high-risk systems, this extends to conformity assessment and risk management requirements.
Procurement — Anyone involved in purchasing or evaluating AI tools needs to understand the due diligence the AI Act expects. They should know what questions to ask vendors, what documentation to request, and how to assess whether a system will create compliance obligations.
For each role group, document:
- Which AI systems they interact with
- What level of training they need
- The format and frequency of that training
- How completion and comprehension will be assessed
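One way to keep this mapping consistent across departments is a single role-to-training matrix that records all four fields per role group. The roles, systems, and values below are hypothetical examples of the shape such a matrix might take, not recommended content.

```python
# Hypothetical role-to-training matrix covering the four fields in Step 3
TRAINING_MATRIX = {
    "Executive": {
        "systems": ["all (governance view)"],
        "level": "risk framework, obligations, penalties, oversight",
        "format": "half-day briefing",
        "frequency": "annual",
        "assessment": "scenario discussion with documented sign-off",
    },
    "Recruiter": {
        "systems": ["CV Screener"],
        "level": "bias awareness, limits of scoring, human oversight, candidate rights",
        "format": "scenario-based e-learning",
        "frequency": "annual, plus on system change",
        "assessment": "scenario-based knowledge check",
    },
}

def training_plan(role: str) -> dict:
    """Look up the documented training requirements for a role group."""
    return TRAINING_MATRIX[role]
```

Because the matrix names the specific systems each role touches, it doubles as evidence that training was relevant to the role, which Step 4 asks you to demonstrate.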
Our AI Act scenario-based course is built around exactly this mapping — different learning paths for different roles, each tied to the specific obligations those roles carry.
Step 4: Build Your Evidence Trail
Compliance under the AI Act is not just about doing the right things. It is about being able to prove you did them. This is a documentation-first regulation, and your training programme needs to produce auditable evidence from day one.
Your evidence trail should include:
Training records — Who completed what training, when, and with what result. Completion certificates alone are insufficient. You need records that show the training was relevant to the person's role and the AI systems they use.
Competency assessments — Evidence that people understood what they were taught, not just that they sat through it. This can be scenario-based assessments, practical exercises, or knowledge checks — but it needs to demonstrate comprehension, not just attendance.
Policy documentation — Written policies on AI use, acceptable use guidelines, escalation procedures, and governance structures. These policies should reference the specific AI systems in your inventory and the risk classifications you have assigned.
Update records — The AI landscape changes. New systems are deployed, existing ones are updated, roles shift. Your evidence trail needs to show that training is reviewed and updated at appropriate intervals, not treated as a one-off compliance exercise.
Gap analysis records — Documentation showing that you assessed your training needs systematically, identified gaps, and addressed them. This is particularly important for the initial compliance phase, where auditors will want to see how you moved from your pre-regulation state to compliance.
Store all of this centrally. If it is spread across five different systems with no single point of access, it will not serve you well when you need it.
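As one illustration of what centrally stored, auditable evidence can look like, the sketch below models a single training record that ties completion, a comprehension result, and the relevant inventory systems together. The field names and the 80% pass mark are assumptions for the example, not requirements from the Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingEvidence:
    """One auditable record: who, what, when, result, and relevance."""
    person: str
    role: str
    course: str
    completed_on: date
    assessment_score: float      # comprehension, not just attendance
    assessment_type: str         # e.g. "scenario-based"
    relevant_systems: list[str]  # ties the training back to the Step 1 inventory

    def is_sufficient(self, pass_mark: float = 0.8) -> bool:
        # A record only counts if it shows comprehension AND relevance;
        # the 0.8 threshold is an illustrative assumption
        return self.assessment_score >= pass_mark and bool(self.relevant_systems)

record = TrainingEvidence(
    person="A. Example",
    role="Recruiter",
    course="AI in Recruitment: Oversight and Candidate Rights",
    completed_on=date(2026, 6, 1),
    assessment_score=0.9,
    assessment_type="scenario-based",
    relevant_systems=["CV Screener"],
)
```

Note that a bare completion certificate would fail this check: the record must link to the systems the person actually uses and show an assessment result, which is exactly the distinction drawn above.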
Step 5: Set Deadlines and Milestones
Article 4's AI literacy obligations already apply, and 2 August 2026 is when the bulk of the Act, including the high-risk regime, takes effect. Working backwards from that date, here is a realistic timeline:
Now through April 2026 — Complete AI system inventory and risk classification. Begin stakeholder engagement with department heads to identify AI systems you may have missed.
April to May 2026 — Finalise role-to-training mapping. Identify which training you can deliver with existing resources and where you need external content or expertise. Make procurement decisions.
May to June 2026 — Begin delivering training to priority groups. High-risk system operators and executive leadership should be trained first. These are the groups most likely to attract early scrutiny.
June to July 2026 — Extend training across the organisation. Run assessments. Begin building your evidence trail with real completion and competency data.
August 2026 — The Act's remaining obligations, including the high-risk regime, take effect. Your organisation should be able to demonstrate that staff who interact with AI systems have received training appropriate to their role, and that you can evidence this.
Post-August 2026 — Ongoing. New hires need onboarding into the programme. System changes trigger training updates. Annual reviews keep the programme current.
These dates assume a standing start. If your organisation has already begun AI literacy work, you can compress the early stages. If you have done nothing, starting after May 2026 leaves very little margin.
Step 6: Assign Ownership and Governance
The final step is the one that determines whether everything else actually happens. Without clear ownership, checklists remain checklists.
Programme owner — One named individual who is accountable for AI Act training compliance. In most organisations, this sits within L&D, but it requires a formal mandate and reporting line to a senior sponsor. The programme owner does not need to deliver all the training. They need to ensure it gets delivered, tracked, and evidenced.
Senior sponsor — A member of the leadership team who owns the compliance outcome at board level. This person receives regular updates, removes blockers, and ensures AI Act compliance has budget and visibility.
Departmental leads — Each department with AI systems in the inventory needs a point of contact who manages training completion for their team. They ensure the right people take the right training and report gaps back to the programme owner.
Review cadence — Set a recurring governance review — quarterly is typical for new compliance programmes, moving to biannual once the programme is mature. This review covers: new AI systems deployed, changes to risk classifications, training completion rates, assessment results, and any incidents or near-misses.
Escalation path — Define what happens when a gap is found. If a department is not completing training, or a new high-risk system is deployed without a training plan, there needs to be a clear escalation route with defined response times.
Using This Checklist
Print it, share it, adapt it. The value of a checklist is in its use, not its existence.
If you want to benchmark where your organisation currently stands across these six areas, our free diagnostic scores your readiness and highlights the gaps that need attention first.
And if you are looking for scenario-based training content that maps directly to these obligations — built for L&D teams who need to evidence Article 4 compliance — take a look at our AI Act training course. It is designed around the same role-based framework this checklist uses, with built-in assessment and evidence trails.
The August 2026 deadline is close enough to plan against and far enough away to meet. The organisations that start now will have a working programme in place. The organisations that wait will be scrambling. This checklist is designed to help you be in the first group.