
Preparing for AI Act Compliance: A Practical Timeline

Key milestones and preparation steps for organisations subject to EU AI Act requirements

February 2026 · 13 min read

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) introduces the world's most comprehensive regulatory framework for AI systems. Its obligations unfold over several years and apply not only to developers ("providers") but also to deployers, importers, distributors, and organisations integrating third-party AI systems.

Because enforcement is phased, organisations that begin preparation early can avoid operational disruption, reduce compliance costs, and build a defensible governance posture by the time obligations become binding.

This article provides a practical, phased compliance timeline for 2024–2027, outlining the key tasks every organisation should complete.

1. Months 0–6: Foundational Readiness (Immediately to Early 2025)

Objective: Build internal visibility, governance structure, and classification capability.

1.1 Establish an AI Governance Function

Organisations should define a central governance unit that oversees all AI-related activities. This includes:

  • appointing an AI governance lead,
  • mapping reporting lines to risk, legal, and technical teams,
  • defining approval and escalation pathways for AI-related decisions.

1.2 Create an AI System Inventory

A comprehensive inventory is the foundation of every compliance requirement. The inventory must identify:

  • all internally developed AI systems,
  • all procured or integrated systems,
  • all third-party models or APIs,
  • intended purpose and deployment context,
  • functionality, risk exposure, and data flows.
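The inventory fields above can be sketched as a simple record type. The schema and field names here are illustrative choices, not prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the organisation-wide AI system inventory (illustrative schema)."""
    name: str
    origin: str                # "internal", "procured", or "third-party API/model"
    intended_purpose: str      # why the system is deployed
    deployment_context: str    # where and by whom it is used
    functionality: str         # what the system actually does
    data_flows: list[str] = field(default_factory=list)  # datasets and sources touched
    risk_notes: str = ""       # preliminary risk-exposure observations

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="invoice-triage",
        origin="internal",
        intended_purpose="route supplier invoices to approvers",
        deployment_context="finance back office",
        functionality="document classification",
        data_flows=["supplier master data", "scanned invoices"],
    )
]
```

Keeping the inventory in a structured, queryable form (rather than a spreadsheet of free text) makes the later classification and gap-analysis steps much easier to automate.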

1.3 Conduct Preliminary Risk Classification

Organisations must assess whether each system falls into one of the Act's categories:

  • prohibited AI practices (Article 5),
  • high-risk AI (Article 6 and Annex III),
  • AI systems used as safety components of regulated products (Article 6(1) and Annex I),
  • general-purpose AI models (GPAI, Chapter V),
  • limited-risk systems subject to transparency obligations (Article 50), or minimal-risk systems.

This classification determines the entirety of downstream obligations.
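A first-pass triage of these categories can be expressed as a rule-based check to queue systems for full legal review. The trigger lists below are placeholders for a proper assessment of Article 5, Article 6, and Annexes I and III, not a substitute for it:

```python
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    GPAI = "general-purpose"
    LIMITED_OR_MINIMAL = "limited/minimal"

# Illustrative trigger lists only; real classification requires legal review.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
ANNEX_III_AREAS = {"employment", "education", "credit scoring", "law enforcement"}

def preliminary_classification(use_case: str, is_general_purpose: bool = False) -> RiskClass:
    """Rough first-pass triage to route systems to the right compliance track."""
    if use_case in PROHIBITED_USES:
        return RiskClass.PROHIBITED
    if is_general_purpose:
        return RiskClass.GPAI
    if use_case in ANNEX_III_AREAS:
        return RiskClass.HIGH_RISK
    return RiskClass.LIMITED_OR_MINIMAL
```

Even a crude triage like this surfaces which systems need the full high-risk control set described in the next phase.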

1.4 Identify Compliance Gaps

A gap analysis should compare current processes with upcoming obligations, focusing on:

  • data governance maturity,
  • documentation practices,
  • human oversight,
  • cybersecurity,
  • vendor governance.

This stage enables early budgeting and staffing decisions.

2. Months 6–18: High-Risk Systems Implementation (Mid-2025 to Late 2026)

Objective: Build legally defensible governance and documentation infrastructure.

Once systems classified as high-risk are identified, organisations should begin implementing the mandatory controls.

2.1 Build the AI Risk Management System (Article 9)

This includes:

  • risk identification methodologies,
  • risk severity/likelihood matrices,
  • mitigation strategies and residual-risk acceptance processes,
  • version-controlled decision records.
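One common way to implement the severity/likelihood matrix mentioned above is a lookup that maps both dimensions onto a qualitative risk level. The scales and thresholds here are illustrative, and each organisation should calibrate its own:

```python
SEVERITY = {"negligible": 1, "minor": 2, "major": 3, "critical": 4}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}

def risk_level(severity: str, likelihood: str) -> str:
    """Combine severity and likelihood into a qualitative level (illustrative thresholds)."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 12:
        return "unacceptable"   # must be mitigated before deployment
    if score >= 6:
        return "high"           # mitigation plan and residual-risk sign-off required
    if score >= 3:
        return "medium"         # monitor and document
    return "low"                # accept with a recorded rationale
```

Recording the inputs and the resulting level for every assessment feeds directly into the version-controlled decision records the Article 9 process requires.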

2.2 Implement Data Governance Controls (Article 10)

Tasks include:

  • documenting dataset provenance,
  • evaluating bias and representativeness,
  • establishing data quality protocols,
  • ensuring lawful data collection and usage.
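A minimal representativeness check, one of the evaluations listed above, can compare group shares in a training dataset against a reference population. The 5% tolerance below is an assumed working value, not a regulatory threshold:

```python
def representativeness_gaps(dataset_counts: dict[str, int],
                            reference_shares: dict[str, float],
                            tolerance: float = 0.05) -> dict[str, float]:
    """Flag groups whose share in the dataset deviates from the reference
    population by more than `tolerance` (absolute difference in share)."""
    total = sum(dataset_counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = dataset_counts.get(group, 0) / total
        diff = observed - expected
        if abs(diff) > tolerance:
            gaps[group] = round(diff, 3)
    return gaps
```

Checks like this do not prove a dataset is unbiased, but they produce the documented, repeatable evidence of evaluation that Article 10 expects.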

2.3 Develop Technical Documentation (Article 11)

This is a structured dossier enabling regulators to reconstruct design choices, data handling, evaluation methods, and governance decisions.

2.4 Establish Logging and Record-Keeping (Article 12)

Organisations must ensure traceability of:

  • system behaviour,
  • model decisions,
  • training/evaluation processes,
  • human interventions.
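Traceability of the items above is often implemented as append-only structured logs. This sketch uses Python's standard `logging` module with JSON-formatted records; the field names are illustrative:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai-audit-trail")
logging.basicConfig(level=logging.INFO)

def log_decision(system_id: str, model_version: str,
                 decision: str, human_override: bool = False) -> str:
    """Emit one append-only audit record for a model decision (illustrative schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "decision": decision,
        "human_override": human_override,
    }
    line = json.dumps(record)
    logger.info(line)
    return line
```

In production this would write to tamper-evident storage with retention aligned to the Act's record-keeping periods; the key design point is that every record ties a decision to a specific model version and timestamp.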

2.5 Formalise Human Oversight (Article 14)

Oversight processes must be:

  • meaningful,
  • technically empowered (override capability),
  • role-specific,
  • documented with competency requirements.

2.6 Integrate Cybersecurity and Robustness Measures (Article 15)

This includes resilience testing, vulnerability monitoring, and adversarial risk assessment.

2.7 Vendor and Supply-Chain Governance

Organisations integrating third-party AI must:

  • review provider documentation,
  • ensure conformity assessments,
  • conduct contractual due diligence,
  • maintain ongoing monitoring of external risks.

3. Months 18–30: Post-Market Monitoring and Operational Maturity (2026–2027)

Objective: Ensure continuous lifecycle compliance and audit readiness.

3.1 Establish Post-Market Monitoring (Article 72)

Organisations should develop:

  • monitoring metrics,
  • performance dashboards,
  • anomaly detection thresholds,
  • user-feedback channels,
  • incident logs.
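The anomaly-detection thresholds above can start as a simple comparison against a rolling baseline. The z-score cutoff is an assumed operating choice, not a regulatory requirement:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, z_cutoff: float = 3.0) -> bool:
    """Flag the latest metric value if it deviates from the baseline of recent
    values by more than `z_cutoff` standard deviations (illustrative rule)."""
    if len(history) < 2:
        return False            # not enough data to form a baseline
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest != baseline
    return abs(latest - baseline) / spread > z_cutoff
```

Applied to a monitored metric such as weekly model accuracy, a flagged value would open an entry in the incident log and trigger the investigation workflow described next.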

3.2 Develop Incident Reporting Workflows (Article 73)

This involves:

  • internal investigation procedures,
  • regulatory notification templates,
  • timelines for reporting serious incidents.
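Reporting timelines can be encoded so that deadlines are computed automatically once an incident is confirmed. The day counts below reflect one reading of the tiers in Article 73 and should be verified against the final text and any implementing guidance before use:

```python
from datetime import date, timedelta

# Illustrative reporting windows, keyed by incident tier; confirm the exact
# day counts against Article 73 before relying on them.
REPORTING_WINDOW_DAYS = {
    "serious_incident": 15,
    "death": 10,
    "widespread_infringement": 2,
}

def reporting_deadline(awareness_date: date, tier: str) -> date:
    """Latest date by which the regulator should be notified for a given tier."""
    return awareness_date + timedelta(days=REPORTING_WINDOW_DAYS[tier])
```

Wiring the deadline calculation into the incident-logging workflow removes a common failure mode: investigations that run past the notification window because no one tracked the clock.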

3.3 Implement Internal Audits

Audits should test:

  • documentation coherence,
  • classification accuracy,
  • technical performance,
  • governance effectiveness.

3.4 Update Governance Documentation

Documentation must be:

  • contemporaneous,
  • version-controlled,
  • aligned with system behaviour,
  • able to withstand supervisory scrutiny.

3.5 Staff Training & Competency Development

Training modules should cover:

  • AI governance,
  • operational controls,
  • risk monitoring,
  • oversight responsibilities,
  • regulatory duties for deployers and providers.

4. Full Enforcement (2027 and Beyond): Continuous Compliance

When enforcement becomes fully active, regulators will scrutinise not only the documentation itself but also its alignment with operational reality.

Organisations should:

  • run continuous monitoring cycles,
  • maintain risk registers,
  • update documentation during system changes,
  • integrate regulatory updates and new standards,
  • ensure vendor compliance for all third-party AI components.

Compliance becomes an ongoing governance discipline, not a one-time task.

Practical Takeaway

The EU AI Act imposes a multi-year, multi-layered compliance journey. Organisations that begin preparation early will significantly reduce operational risk, regulatory exposure, and implementation costs.

Early action is not only strategic but essential: the Act's obligations are extensive, interdependent, and evidence-driven. Organisations that build governance capability now will be positioned to comply confidently and demonstrate accountability under the first global AI regulatory regime.

This article is provided for informational purposes only and does not constitute legal advice. Readers should consult qualified legal counsel for advice on specific compliance matters.