EU AI Act Deadlines: Every Compliance Date Developers Need to Know

The EU AI Act (Regulation (EU) 2024/1689) does not switch on all at once. It rolls out across four staggered application dates between February 2025 and August 2027. Knowing which provisions apply to your system on which date is the difference between being compliant on time and being fined up to €35 million.

This page is the compact reference. For the full developer implementation guide — risk classification, audit logging, human oversight, documentation — see EU AI Act Compliance: Developer Implementation Guide.

At a glance

| Date | What applies | Who is affected |
| --- | --- | --- |
| 2 February 2025 | Prohibited AI practices (Art. 5) and AI literacy obligations (Art. 4) | Everyone placing AI on the EU market or using AI in the EU |
| 2 August 2025 | General-purpose AI (GPAI) model rules, governance bodies, penalties, notifying authorities | GPAI providers, Member State authorities, all providers (penalties chapter) |
| 2 August 2026 | General application — most of the Act including Annex III high-risk obligations and Article 50 transparency | Most providers and deployers of high-risk and limited-risk AI systems |
| 2 August 2027 | Annex I high-risk obligations (product-safety integration) and GPAI models placed on the market before 2 August 2025 | Providers of AI systems embedded in regulated products; GPAI providers with legacy models |

The Act entered into force on 1 August 2024. The dates above are application dates — the points at which specific obligations become legally enforceable.


2 February 2025 — Prohibitions and AI literacy

Applies from 6 months after entry into force.

What applies

Article 5 — Prohibited AI practices. Eight categories of AI systems are banned outright:

  1. Subliminal manipulation that materially distorts behaviour and causes harm
  2. Exploitation of vulnerabilities of specific groups (age, disability, social or economic situation)
  3. Social scoring by public or private actors leading to detrimental treatment
  4. Predictive policing based solely on profiling or personality traits
  5. Untargeted scraping of facial images for facial-recognition databases
  6. Emotion inference in workplaces and educational institutions (with narrow medical exceptions)
  7. Biometric categorisation that infers protected characteristics (race, political views, sexual orientation, etc.)
  8. Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)

Article 4 — AI literacy. Providers and deployers must ensure their staff and anyone operating AI systems on their behalf have an appropriate level of AI literacy, taking into account their technical knowledge, experience, education and training, and the context in which the systems are used.

Developer checklist

  • Audit your system inventory for any feature that could fall under Article 5. Even features that look "merely" personalised can cross the line if they exploit vulnerabilities.
  • Document AI literacy training for your team. There is no certification requirement, but you must be able to demonstrate the team understands what their AI systems do, their limits, and their risks.
  • If you operate emotion-recognition or biometric-categorisation features, this deadline has already passed: any feature falling under Article 5 should already be off the EU market.
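The inventory audit in the first checklist item can start as a simple capability screen. This is a minimal sketch: the `Feature` record and the capability tags are hypothetical internal names, not terms from the Act, and a match means "escalate to legal review", not "prohibited".

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """Hypothetical internal inventory record; field names are illustrative."""
    name: str
    capabilities: set[str] = field(default_factory=set)

# Capability tags loosely mapping onto the eight Article 5 categories.
# A hit flags the feature for legal review; it is not a compliance verdict.
ARTICLE_5_FLAGS = {
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "social_scoring",
    "predictive_policing_profiling_only",
    "untargeted_face_scraping",
    "emotion_inference_workplace_or_education",
    "biometric_categorisation_protected_traits",
    "realtime_remote_biometric_id_public",
}

def screen_for_article_5(features: list[Feature]) -> list[str]:
    """Return names of features whose capabilities intersect Article 5 flags."""
    return [f.name for f in features if f.capabilities & ARTICLE_5_FLAGS]
```

Running the screen over an inventory of two features where only an HR mood-monitoring tool carries a flagged capability would surface just that one feature for review.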

2 August 2025 — GPAI rules, governance, penalties

Applies from 12 months after entry into force.

What applies

Chapter V — General-purpose AI models (Articles 51–56). Providers of GPAI models must:

  • Maintain technical documentation about the model
  • Provide downstream documentation to integrators
  • Comply with EU copyright law (including text and data mining opt-outs)
  • Publish a sufficiently detailed summary of training content

GPAI models classified as having systemic risk (compute threshold of 10²⁵ FLOPs or designated by the Commission) carry additional obligations: model evaluations, adversarial testing, serious incident reporting, and cybersecurity protections.
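The 10²⁵ FLOP figure is a cumulative training-compute threshold. A quick way to sanity-check where a model lands is the common 6·N·D approximation for transformer training compute (parameters × tokens, forward plus backward pass); note the heuristic itself is a community rule of thumb, not something the Act prescribes.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51(2) cumulative-compute threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough transformer training compute via the common 6*N*D heuristic.
    The Act sets only the cumulative threshold, not an estimation method."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the presumption threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_FLOPS

# A 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the 1e25 threshold.
```

The Commission can also designate models below the threshold as systemic-risk, so the arithmetic is a first check, not the final word.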

Chapter VII — Governance. The European AI Office and the European Artificial Intelligence Board become operational. Member States must designate national competent authorities.

Chapter XII — Penalties (Articles 99–101). The fine schedule is now legally enforceable:

  • Up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices (Article 5)
  • Up to €15 million or 3%, whichever is higher, for non-compliance with high-risk and GPAI obligations
  • Up to €7.5 million or 1%, whichever is higher, for supplying incorrect or misleading information to authorities

Article 78 — Confidentiality also applies from this date.
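The "whichever is higher" structure of the fine schedule above is easy to encode. A minimal sketch, with the three tiers keyed by illustrative names of my own choosing:

```python
# Fine caps from Articles 99(3)-(5): the higher of a fixed amount or a
# percentage of total worldwide annual turnover. Keys are illustrative.
FINE_SCHEDULE = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_or_gpai_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of the fixed cap or the
    turnover percentage for that violation tier."""
    fixed, pct = FINE_SCHEDULE[violation]
    return max(fixed, pct * annual_turnover_eur)
```

For an undertaking with €1 billion turnover, a prohibited-practice violation caps at 7% of turnover (€70 million) because that exceeds the €35 million fixed amount; for small companies the fixed amounts dominate.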

Developer checklist

  • If you train or fine-tune your own foundation models and place them on the EU market, you are a GPAI provider. If you only access someone else's model over an API, you are a deployer, not a provider.
  • The training-content summary obligation is enforceable from this date but uses a Commission-published template (still being finalised at time of writing — re-check the AI Office page).
  • The penalties chapter is in force, but most substantive obligations don't apply until 2 August 2026. Penalties apply to whatever obligations are also in force at the time of the violation.

2 August 2026 — General application

Applies from 24 months after entry into force. This is the date most engineering teams care about.

What applies

The vast majority of the Act becomes enforceable, including:

Article 6(2) and Annex III — High-risk AI systems. AI systems used in the following areas are classified as high-risk and must comply with the full Chapter III obligations:

  • Biometrics (categorisation, emotion recognition outside workplace/education)
  • Critical infrastructure (digital, road traffic, water, gas, electricity)
  • Education and vocational training (admission, assessment, monitoring of behaviour)
  • Employment, worker management, access to self-employment (recruitment, screening, evaluation, termination decisions)
  • Access to essential private and public services (credit scoring, insurance pricing, benefits eligibility, emergency dispatch)
  • Law enforcement (risk assessment, lie detection, evidence reliability assessment, profiling)
  • Migration, asylum, and border control
  • Administration of justice and democratic processes
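A first-pass Annex III screen can be as simple as a set intersection over declared use areas. This sketch only answers "is this a candidate for high-risk classification?"; the area tags paraphrase the list above, and the Article 6(3) filter for systems posing no significant risk still needs separate analysis.

```python
# Paraphrased Annex III areas; tag names are my own shorthand.
ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border",
    "justice_democracy",
}

def annex_iii_candidate(declared_use_areas: set[str]) -> bool:
    """True if any declared use area intersects the Annex III list,
    meaning the feature needs a full Article 6(2) analysis."""
    return bool(declared_use_areas & ANNEX_III_AREAS)
```

A recruitment-screening feature tagged `employment` comes back as a candidate; a pure marketing-copy generator does not.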

For high-risk systems the obligations are: risk-management system (Art. 9), data and data governance (Art. 10), technical documentation (Art. 11), record-keeping/logs (Art. 12), transparency to deployers (Art. 13), human oversight (Art. 14), accuracy, robustness, cybersecurity (Art. 15), conformity assessment (Art. 43), CE marking (Art. 48), registration in the EU database (Art. 49), and post-market monitoring (Art. 72).

Article 50 — Transparency obligations. Even outside high-risk:

  • Users must be informed they are interacting with an AI system (chatbots)
  • AI-generated synthetic content must be marked as such in a machine-readable format (deepfakes, AI-generated images, audio, video)
  • Emotion-recognition and biometric-categorisation systems (where not prohibited) must inform exposed natural persons
  • Providers of GPAI must mark synthetic content as artificially generated
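In code, the chatbot-disclosure and machine-readable-marking obligations reduce to attaching structured metadata to every output. The Act requires a machine-readable format but does not mandate a specific scheme, so the sidecar-record structure below is purely illustrative (real deployments often reach for standards like C2PA); the field names and `model_id` value are assumptions.

```python
from datetime import datetime, timezone

def mark_synthetic(content: bytes, model_id: str) -> dict:
    """Build an illustrative machine-readable provenance record for a
    piece of generated content. Not a mandated format."""
    return {
        "content_length": len(content),
        "ai_generated": True,
        "generator_model": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

def chatbot_disclosure(reply: str) -> dict:
    """Pair a chatbot reply with the user-facing disclosure that the
    interlocutor is an AI system."""
    return {
        "reply": reply,
        "disclosure": "You are chatting with an AI system.",
    }
```

Serving the provenance record alongside the asset (or embedding it in the file's metadata) satisfies the "machine-readable" idea; the disclosure string belongs in the UI, not buried in an API response.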

Developer checklist

  • Run a risk-tier classification on every AI feature you ship to EU users; this classification is the first concrete output the implementation guide walks you through.
  • For high-risk systems, conformity assessment is not a same-day job. Most teams need 6–12 months. Start now.
  • Article 50 affects far more teams than they realise. If you ship a chatbot, generate images, or use voice synthesis — you are in scope.
  • Record-keeping under Article 12 requires automatic logging of inputs, outputs, timestamps, and model versions. JuiceFactory's stateless inference plus your own audit-log middleware is the standard pattern; the implementation guide walks through it with code.
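The audit-log middleware mentioned in the last checklist item can be sketched as a decorator around your inference call. This is a minimal illustration, not the implementation guide's code: field names are my own, and hashing the prompt and output (rather than storing them raw) is one design choice for keeping personal data out of the log itself; whether hashes alone satisfy your Article 12 obligations depends on your system.

```python
import functools
import hashlib
import time

def audit_logged(model_version: str, sink: list):
    """Decorator sketch for Article 12-style record-keeping: append a
    timestamped record (model version, input/output hashes) per call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(prompt: str) -> str:
            output = fn(prompt)
            sink.append({
                "ts": time.time(),
                "model_version": model_version,
                "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
                "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
            })
            return output
        return wrapper
    return decorator

audit_log: list = []

@audit_logged("my-model-2026-01", audit_log)
def infer(prompt: str) -> str:
    return prompt.upper()  # stand-in for the real inference call
```

In production the `sink` would be an append-only store rather than an in-memory list, since Article 12 logs must survive for the retention period.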

2 August 2027 — Annex I high-risk and legacy GPAI

Applies from 36 months after entry into force.

What applies

Article 6(1) and Annex I — High-risk AI systems integrated into regulated products. Where AI is a safety component of a product covered by existing EU product-safety regulation (machinery, toys, lifts, radio equipment, in-vitro diagnostics, medical devices, civil aviation, motor vehicles, marine equipment, agriculture, etc.), the high-risk obligations apply.

This is later than Annex III because these products already have third-party conformity assessment regimes (machinery directive, MDR, etc.) and the Act integrates with them rather than duplicating.

Legacy GPAI models. GPAI models that were placed on the EU market before 2 August 2025 must comply with the GPAI obligations from this date.

Developer checklist

  • If your AI is embedded in a regulated product, your existing product-safety conformity-assessment process now needs to include AI-specific controls. Talk to your notified body early — they may not have AI Act capacity.
  • If you provide GPAI under a legacy version, document compliance work and update your model card to include training-content summary and copyright disclosures.

What this means in practice

Most developers building chat applications, RAG pipelines, internal tools, or SaaS products will encounter the AI Act on three dates:

  1. 2 February 2025: Have we ruled out Article 5 prohibitions? (likely yes for normal SaaS)
  2. 2 August 2026: Is any of our user-facing AI in scope of Annex III? (more often than you'd think — recruitment, credit, education, healthcare triage tools all are)
  3. 2 August 2026: Have we wired up Article 50 transparency for our chatbot, code generator, image generator, voice features? (yes, almost certainly)

The compliance posture is layered: GDPR sits underneath, the AI Act sits on top, and your inference provider's data-handling determines whether either is even tractable.

This is where stateless EU inference matters. If your provider retains prompts and outputs, you inherit their data-retention story under both GDPR and Article 12. If your provider runs inference outside the EU, you inherit their transfer-mechanism story under GDPR. JuiceFactory's stateless EU-only inference removes both questions — the data path is fully inside GDPR territory and nothing is retained server-side by default.

What to do this week

  • Run risk-tier classification on every AI feature. The risk classification API in the implementation guide gives you a working template.
  • For anything that lands in high-risk, schedule conformity-assessment planning with a notified body before end of Q1 of the year you ship.
  • For anything that uses chatbots, image/audio generation, or biometric features, plan Article 50 transparency UI changes now — they are small but easy to forget.
  • Move inference to an EU-hosted, stateless provider. Get a JuiceFactory API key and run the migration in 10 minutes.
