AI Act Penalties: What Happens If You Miss the August 2026 Deadline?
AI Act penalties reach EUR 35M or 7% of global turnover. Learn which violations trigger which fines and how enforcement will work from August 2026.
The AI Act's penalty framework is, on paper, among the most severe in EU regulatory history. Fines of up to EUR 35 million or 7% of global annual turnover — whichever is higher — place it above GDPR's maximum of EUR 20 million or 4% of turnover. For many organisations, the numbers alone are enough to concentrate attention.
But headline penalty figures rarely tell the full story. What matters in practice is which violations attract which penalties, how enforcement will actually operate in the early years, and where regulators are most likely to focus their limited resources. Understanding this is the difference between panic-driven compliance and strategic, proportionate preparation.
The Three Penalty Tiers
The AI Act establishes three tiers of administrative fines, each tied to specific categories of violation. These are set out in Article 99.
Tier 1: Up to EUR 35 million or 7% of global annual turnover
This is the maximum tier. Article 99(3) reserves it for a single category of violation: deploying AI practices prohibited under Article 5 — social scoring, exploitative AI targeting vulnerable groups, real-time remote biometric identification in publicly accessible spaces (outside the narrow permitted exceptions), and the other banned practices.
The 7% turnover figure is striking. For a company with EUR 1 billion in global revenue, the theoretical maximum is EUR 70 million. This tier exists primarily as a deterrent against the most egregious uses of AI, and the prohibited practices it covers are genuinely harmful — manipulative, exploitative, or surveillance-oriented systems that the EU has decided have no legitimate place in the market.
Tier 2: Up to EUR 15 million or 3% of global annual turnover
This covers non-compliance with other AI Act obligations, including:
- Failing to meet high-risk system requirements (risk management, data governance, transparency, human oversight, accuracy, robustness, cybersecurity)
- Non-compliance with obligations for providers, deployers, importers, and distributors
- Failing to meet the Article 4 AI literacy obligation
- Non-compliance with requirements for general-purpose AI models
This is the tier most relevant to L&D directors and compliance teams. The Article 4 literacy requirement — ensuring staff who interact with AI systems have sufficient AI literacy — falls here. Missing the August 2026 deadline for AI literacy training is a Tier 2 violation.
For context: EUR 15 million or 3% of global turnover sits between GDPR's two tiers (EUR 10 million or 2% at the lower tier, EUR 20 million or 4% at the upper). It is serious money, but it is not the headline figure that tends to dominate media coverage.
Tier 3: Up to EUR 7.5 million or 1% of global annual turnover
This tier applies to the supply of incorrect, incomplete, or misleading information to notified bodies or national competent authorities. It is essentially a penalty for obstructing the regulatory process rather than for substantive non-compliance.
Reduced caps for SMEs and startups: For small and medium-sized enterprises, including startups, the AI Act caps each fine at the lower of the two figures (the fixed amount or the turnover percentage) rather than the higher. This is a meaningful concession, particularly for smaller organisations where a fixed EUR 15 million fine would be disproportionate to their scale. The short sketch below shows how the tiers and the SME rule combine.
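To make the structure concrete, here is a minimal sketch of the theoretical caps in Python. The tier figures come from Article 99 as described above; the function and the example turnover figures are illustrative assumptions rather than anything the regulation prescribes, and any real fine would also turn on the proportionality factors discussed later in this piece.

```python
# Illustrative sketch of the Article 99 fine caps described above.
# The tier figures are from the regulation; everything else is a
# simplification for illustration, not legal advice.

TIERS = {
    1: (35_000_000, 0.07),  # Art. 5 prohibited practices
    2: (15_000_000, 0.03),  # most other obligations, incl. Art. 4 literacy
    3: (7_500_000, 0.01),   # misleading information to authorities
}

def max_fine(tier: int, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Theoretical cap: the higher of the fixed amount and the turnover
    percentage, or the lower of the two for SMEs and startups."""
    fixed, pct = TIERS[tier]
    turnover_based = pct * global_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

print(max_fine(1, 1_000_000_000))            # 70,000,000 -- the EUR 1bn example above
print(max_fine(2, 1_000_000_000))            # 30,000,000 -- 3% outweighs the fixed EUR 15m
print(max_fine(2, 20_000_000, is_sme=True))  # 600,000 -- the SME cap takes the lower figure
```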
How Enforcement Will Actually Work
The AI Act does not create a single EU-wide enforcement body equivalent to a "European AI Authority." Instead, it follows the same model as most EU regulations: enforcement is delegated to national authorities in each member state.
National market surveillance authorities will be the primary enforcement bodies. Each member state must designate at least one authority responsible for supervising AI systems within its jurisdiction. In many cases, these will be existing regulatory bodies — data protection authorities, sector-specific regulators, or consumer protection agencies — with an expanded mandate.
The European AI Office, established within the European Commission, plays a coordinating role. It oversees compliance for general-purpose AI models, supports national authorities, facilitates cross-border enforcement, and develops guidance. But it is not the body that will knock on your door. That will be your national authority.
This matters because enforcement capacity and priorities will vary by member state. Some countries — Germany, France, the Netherlands — have well-resourced regulatory infrastructure and are likely to build AI enforcement capacity relatively quickly. Others will take longer. The early enforcement landscape will be uneven, but that unevenness is not something to rely on. Cross-border coordination mechanisms mean that a complaint or investigation in one member state can trigger attention in others.
The complaint mechanism is worth noting. The AI Act gives individuals and organisations the right to lodge complaints with national authorities. This means enforcement will not be purely top-down. Employees, candidates, consumers, and civil society organisations can trigger investigations by reporting suspected non-compliance. In the GDPR context, complaints have driven a significant proportion of enforcement actions, and there is every reason to expect the same pattern here.
What "Proportionate" Enforcement Means
Article 99 requires that fines be "effective, proportionate and dissuasive." This is standard EU regulatory language, but the proportionality principle is genuinely important in practice.
When determining the amount of a fine, national authorities must consider:
- The nature, gravity, and duration of the infringement
- Whether the infringement was intentional or negligent
- Actions taken to mitigate harm
- The degree of responsibility, taking into account technical and organisational measures the organisation had in place
- Previous infringements
- The degree of cooperation with the authority
- The manner in which the infringement became known to the authority (self-reported versus discovered through complaint or investigation)
- The size and market share of the organisation
This list reveals something important: enforcement is not binary. An organisation that has made genuine, documented efforts to comply but has gaps is in a fundamentally different position from an organisation that has done nothing. The fine for a company that built a training programme, identified most of its AI systems, and was working through its evidence trail when the deadline arrived will not be the same as the fine for a company that was unaware the regulation existed.
This is why documentation matters even when your programme is incomplete. Every step you take — every inventory entry, every risk classification, every training session delivered — creates evidence of good-faith effort that directly influences how any enforcement action would be assessed.
Where Early Enforcement Will Likely Focus
No regulatory authority has infinite resources. In the early months and years of AI Act enforcement, national authorities will need to prioritise. Based on how similar regulations (GDPR, the Digital Services Act, sector-specific directives) have been enforced in their early phases, several patterns are likely.
Prohibited practices will be the first priority. The ban on prohibited AI systems under Article 5 takes effect in February 2025 — well before most other obligations. Any organisation still deploying social scoring, manipulative AI, or banned biometric surveillance after that date is an immediate enforcement target. These cases are clear-cut, high-profile, and politically salient. National authorities will want early wins here.
Demonstrable training gaps will attract early attention. The Article 4 literacy obligation is, in enforcement terms, relatively easy to assess. An authority can ask a straightforward question: can you demonstrate that the people in your organisation who use AI systems have received appropriate training? If the answer is "no, we have not started," that is a clear, documentable violation.
Compare this to assessing whether a high-risk AI system's risk management framework meets every technical requirement of Article 9 — that is a complex, resource-intensive investigation that requires technical expertise. Checking whether an organisation has a training programme in place, with records, is comparatively straightforward.
This is why the Article 4 deadline matters disproportionately for L&D teams. It is not the highest-penalty obligation, but it is one of the most visible and most easily enforced. Our AI Act compliance checklist covers the practical steps for building a programme that withstands this scrutiny.
High-risk system deployers in sensitive sectors will face scrutiny. Organisations using AI in recruitment, credit scoring, insurance, and access to essential services are deploying high-risk systems that directly affect individuals' rights and opportunities. These are the use cases that generate complaints and media attention. National authorities, particularly those with existing mandates in employment or consumer protection, will gravitate toward these cases.
Complaints will drive investigations. As with GDPR, authorities are likely to be reactive as much as proactive. Organisations that generate complaints — from employees subjected to opaque AI-driven decisions, candidates rejected by automated screening, or consumers affected by AI-powered services — will find themselves under scrutiny before organisations that fly under the radar.
The Real Cost of Non-Compliance
Fines are the most visible consequence, but they are rarely the most significant one.
Reputational damage in the AI governance space is substantial and growing. Organisations that are publicly found to be non-compliant with AI regulations face scrutiny from customers, partners, investors, and talent. In sectors where trust is a competitive asset — financial services, healthcare, professional services — a public enforcement action on AI compliance can be more damaging than the fine itself.
Operational disruption follows enforcement. If a national authority finds that an AI system is non-compliant, it can require the system to be withdrawn from the market or its use to be suspended. For organisations that have built AI into core business processes — recruitment workflows, customer service, operational decision-making — forced suspension of an AI system is an operational crisis, not just a compliance one.
Contractual consequences are emerging. Large enterprises and public sector bodies are increasingly including AI Act compliance requirements in procurement contracts. If you supply AI-powered services or deploy AI within a client relationship, non-compliance can trigger contractual penalties, loss of contracts, or exclusion from tenders. This is already happening in advance of the enforcement deadline.
Competitive disadvantage is the subtler cost. Organisations that build compliant AI governance early will move faster with AI adoption in the long run. They will deploy new AI systems with confidence, secure in the knowledge that their governance framework can accommodate them. Organisations that treat compliance as an afterthought will find themselves hesitating, second-guessing, and moving slower — not because they lack ambition, but because they lack the governance infrastructure to act on it.
What to Do Between Now and August 2026
The penalty framework is designed to motivate action, not to punish organisations that are making genuine progress. The proportionality provisions in Article 99 make this explicit: your efforts matter, and they are taken into account.
Here is what that means practically:
Start now, even if you cannot finish. A partially complete compliance programme is categorically different from no programme at all. Begin your AI system inventory, start mapping training obligations, deliver training to your highest-priority groups. Every documented step reduces your enforcement risk.
Prioritise Article 4 literacy. This is the obligation with the nearest deadline and the most straightforward enforcement path. It is also the obligation that L&D teams can most directly influence. If you can demonstrate a functioning, evidenced AI literacy programme by August 2026, you have addressed one of the most likely early enforcement triggers.
Build evidence from the start. Do not wait until your programme is complete to begin documenting it. Record your inventory process, your classification decisions, your training delivery, your assessment results. If enforcement arrives, the quality of your evidence trail will determine the outcome more than the perfection of your programme.
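As one illustration of what that evidence trail can look like, here is a hypothetical sketch of a training record in Python. The fields and their names are assumptions for the sake of the example, not a format the AI Act prescribes; the point is that each record ties a person, a system, a training event, and an assessment result together with a date.

```python
# Hypothetical training evidence record -- the fields are illustrative
# assumptions, not a format prescribed by the AI Act.
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    employee_id: str
    role: str                   # e.g. "recruiter", "credit analyst"
    ai_systems_used: list[str]  # cross-references to your AI system inventory
    module: str                 # which training module was delivered
    completed_on: date
    assessment_score: float     # evidence of comprehension, not just attendance

# One entry in the trail an authority could later review:
record = TrainingRecord(
    employee_id="E-1042",
    role="recruiter",
    ai_systems_used=["cv-screening-tool"],
    module="AI literacy: automated hiring decisions",
    completed_on=date(2026, 3, 14),
    assessment_score=0.92,
)
```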
Use the diagnostic to find your gaps. Our free readiness diagnostic assesses where your organisation stands against the AI Act's requirements and identifies the areas that need attention first. It takes three minutes and gives you a clear picture of your exposure.
Get the training component right. The AI Act training course we have built is designed specifically for the Article 4 obligation — scenario-based, role-appropriate, with built-in assessments and evidence trails. It is not a general AI awareness module. It is structured to produce the documentation that compliance teams and auditors will need.
The August 2026 deadline will arrive. The question is not whether enforcement will happen — the regulatory infrastructure is being built right now. The question is whether your organisation will be among those that prepared or those that hoped it would not apply to them. The penalty framework makes the answer to that question consequential.