
Understanding High-Risk AI Classifications Under the EU AI Act

January 2026 · 12 min read

1. Why "high-risk" classification is the legal pivot of the AI Act

The EU AI Act is built around a tiered risk architecture. Within that architecture, the "high-risk" category is the decisive compliance trigger: once a system is classified as high-risk, the provider (and, in defined ways, the deployer and other actors in the supply chain) is pulled into the Act's most demanding regime. That regime covers lifecycle risk management, data governance, technical documentation, logging, transparency, human oversight, robustness and cybersecurity, conformity assessment, registration, post-market monitoring, and incident reporting obligations.

The practical reality is that classification disputes will often be the first enforcement battlefield. The Act anticipates this: it requires documented assessments where a provider concludes that an Annex III system is not high-risk, and it provides a dedicated supervisory pathway for authorities to scrutinise such "non-high-risk" self-classification decisions.

2. The legal definition: two routes into "high-risk"

Under the AI Act, a system can become high-risk through two distinct legal routes.

A. Route 1: High-risk by integration into regulated products (Annex I logic)

The first route captures AI systems that are safety components of products (or are themselves products) governed by specified EU harmonisation legislation (the "New Legislative Framework" family and related sectoral regimes). In effect, where EU product-safety law already mandates third-party or formal conformity pathways, the AI Act "hooks" into that ecosystem and treats relevant AI safety components as high-risk. This route reflects the Act's judgment that safety-critical contexts warrant elevated ex ante controls.

B. Route 2: High-risk by standalone "use-case" listing (Annex III logic)

The second route is use-case based. Annex III lists "High-risk AI systems referred to in Article 6(2)" across eight areas, meaning that intended use in any listed area presumptively triggers high-risk status. The Annex III catalogue is the Act's most operational classification tool: it is where organisations will spend most of their interpretive energy.

3. Annex III in detail: the eight "high-risk" areas

Annex III defines high-risk AI systems by reference to intended purpose (not marketing labels, not model architecture, not whether the system is "AI-powered" in a colloquial sense). The eight areas are:

  1. Biometrics (where permitted under EU/national law): remote biometric identification; biometric categorisation based on sensitive/protected inferences; and emotion recognition.
  2. Critical infrastructure: safety components used in managing/operating critical digital infrastructure, road traffic, or essential utilities (water, gas, heating, electricity).
  3. Education and vocational training: systems determining admissions/placement; evaluating learning outcomes; assessing educational level access; and detecting prohibited behaviour during tests.
  4. Employment, workers' management and access to self-employment: systems used for recruitment/selection (including targeted job ads, filtering applications, evaluating candidates) and systems affecting terms of work, promotion/termination, task allocation, and monitoring/evaluating worker performance/behaviour.
  5. Access to and enjoyment of essential private services and essential public services and benefits: eligibility decisions for public assistance/essential services; creditworthiness/credit scoring (with a fraud-detection exception); life/health insurance risk assessment and pricing; emergency call triage and dispatch prioritisation.
  6. Law enforcement (where permitted under EU/national law): systems assessing victimisation risk; polygraph-type tools; reliability of evidence; offending/re-offending risk assessments (with constraints); and profiling in detection/investigation/prosecution.
  7. Migration, asylum and border control management (where permitted): polygraph-type tools; risk assessments (security, irregular migration, health); assistance for asylum/visa/residence applications (including evidence reliability); and certain detection/recognition/identification uses (excluding travel document verification).
  8. Administration of justice and democratic processes: systems assisting judicial authorities in researching/interpreting facts and law and applying law to facts (including analogous ADR use), and systems intended to influence election/referendum outcomes or voting behaviour (with an express carve-out for non-exposed administrative/logistical tools).

Key interpretive point: "intended to be used" is doing the legal work

Annex III repeatedly uses the phrase "intended to be used". That drafting choice matters because it anchors classification to (i) the provider's stated purpose, (ii) how the system is placed on the market, (iii) its instructions for use, and (iv) foreseeable deployment contexts. In practice, classification analysis is rarely purely textual: it is an evidence exercise about what the system is for and how it is actually deployed.

4. The Article 6(3) "derogation": when an Annex III system is not high-risk

A major late-stage refinement of the AI Act is the derogation mechanism allowing some Annex III systems to be treated as not high-risk if they do not pose a significant risk of harm to health, safety, or fundamental rights—including by not materially influencing the outcome of decision-making.

The Act then sets out conditions under which this derogation can apply (for example, where the system performs a preparatory task to an assessment relevant to Annex III use cases). Crucially, the Act also creates a bright-line override: where an Annex III system performs profiling of natural persons, it "shall always be considered" high-risk.
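The derogation logic described above can be sketched as a simple decision function. This is an illustrative simplification only, not a substitute for legal analysis: the parameter names are this sketch's own labels, and each boolean input would in practice be a documented legal conclusion.

```python
from enum import Enum

class Classification(Enum):
    HIGH_RISK = "high-risk"
    NOT_HIGH_RISK = "not high-risk (documented assessment required)"

def article_6_3_assessment(performs_profiling: bool,
                           significant_risk_of_harm: bool,
                           materially_influences_outcome: bool) -> Classification:
    """Simplified sketch of the Article 6(3) derogation for an Annex III
    system. Parameter names are illustrative, not statutory terms."""
    # Bright-line override: profiling of natural persons in the Annex III
    # context is always high-risk; the derogation is unavailable.
    if performs_profiling:
        return Classification.HIGH_RISK
    # The derogation requires both the absence of a significant risk of harm
    # and the absence of material influence on the decision outcome.
    if not significant_risk_of_harm and not materially_influences_outcome:
        return Classification.NOT_HIGH_RISK
    return Classification.HIGH_RISK
```

Note the ordering: the profiling check comes first, mirroring the Act's structure, under which the conditions for derogation are never reached once profiling is present.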

Two legal consequences follow:

  • Documentation duty: if the provider treats an Annex III system as non-high-risk, it must document the assessment.
  • Regulatory scrutiny pathway: the Act contains a specific procedure enabling market surveillance authorities to challenge a provider's "non-high-risk" conclusion (this is not merely theoretical; it is a designed enforcement channel).

Academic commentary has already flagged that Article 6(3) will be a central site of interpretive contest—particularly around what counts as "materially influencing" decision outcomes, and whether the listed conditions risk creating loopholes or disproportionate burdens depending on context.

5. Commission power to evolve Annex III: delegated acts and the criteria test

High-risk classification is not static. The Commission is empowered to amend Annex III by adding or modifying use cases via delegated acts, subject to a two-part condition: the AI systems must (a) be used in an Annex III area, and (b) pose a risk of harm/adverse fundamental-rights impact equivalent to or greater than existing Annex III high-risk systems. The Act further specifies assessment criteria, including the system's intended purpose.

This matters for compliance strategy: organisations should treat Annex III as a living perimeter, not a one-time checklist.

6. A practical legal methodology for classifying a system (provider-side)

In practice, defensible classification requires an auditable sequence:

  1. Identify the AI Act "AI system": confirm the system falls within the Act's definitional scope (and isolate the relevant "system" boundary—model vs product feature vs service).
  2. Fix the "intended purpose" evidence base: product documentation, UI flows, user instructions, marketing claims, contractual descriptions, and technical design objectives.
  3. Check Route 1 (product/safety component): is it a safety component of a regulated product under the relevant EU harmonisation framework? If yes, the high-risk regime is likely engaged.
  4. Check Route 2 (Annex III mapping): map the intended purpose to Annex III categories and sub-points; document why the mapping fits (or does not).
  5. Apply Article 6(3) only if Annex III applies: treat the derogation as an exception requiring strict justification and clear evidence that the system does not significantly risk harm and does not materially influence decision outcomes.
  6. Profiling tripwire: if the Annex III system performs profiling of natural persons in the relevant sense, treat it as automatically high-risk (no derogation).
  7. Prepare for scrutiny: if concluding "non-high-risk," produce a structured memo and technical annex that could survive regulator review (because the Act is expressly built to re-test that conclusion).
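The triage in steps 3 to 6 can be summarised as a single decision sequence. The sketch below is a simplification under stated assumptions: the function and parameter names are invented for illustration, and each input represents a legal conclusion that must itself be evidenced (step 2) and documented (step 7).

```python
def classify(safety_component_of_regulated_product: bool,
             intended_purpose_in_annex_iii: bool,
             performs_profiling: bool,
             derogation_conditions_met: bool) -> str:
    """Illustrative triage mirroring steps 3-6 of the methodology above.
    Inputs are legal conclusions drawn from the evidence base, not facts a
    program could determine."""
    # Step 3 - Route 1: safety component of a product under the relevant
    # EU harmonisation framework (Annex I logic).
    if safety_component_of_regulated_product:
        return "high-risk (Route 1)"
    # Step 4 - Route 2: Annex III use-case mapping by intended purpose.
    if not intended_purpose_in_annex_iii:
        return "not high-risk (outside both routes)"
    # Step 6 - Profiling tripwire: no derogation is available.
    if performs_profiling:
        return "high-risk (Annex III, profiling)"
    # Step 5 - Article 6(3) derogation, applied strictly and documented.
    if derogation_conditions_met:
        return "not high-risk (Article 6(3), documented assessment)"
    return "high-risk (Annex III)"
```

The value of writing the sequence down this way is that it exposes the order of operations: Route 1 is checked before Annex III mapping, and the profiling check precedes any derogation analysis.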

7. Why classification choices cascade into obligations

Once a system is high-risk, the obligations that follow are extensive. At minimum, providers must ensure compliance with Chapter III Section 2 requirements and perform conformity assessment before placing on the market/putting into service; maintain documentation and logs; implement a quality management system; affix CE marking; and comply with registration obligations.

The Act also differentiates registration and access arrangements for certain Annex III areas (notably some law enforcement, migration, and border control contexts), which indicates the legislator's sensitivity to security-linked deployments.

8. Conclusion: high-risk classification is not a label—it's a litigable position

Under the EU AI Act, "high-risk" is not a vague ethical descriptor; it is a legal status generated by two routes (product-safety integration and Annex III use-case listing), bounded by a structured derogation mechanism, and backed by amendment powers and supervisory procedures. For providers, the safest posture is to treat classification as compliance engineering plus legal argument: a disciplined evidence record that can withstand post-hoc review.

Footnotes (OSCOLA)

  1. Regulation (EU) 2024/1689 (Artificial Intelligence Act) OJ L 2024/1689, ch III (esp arts 9–15, 16–23, 43–49).
  2. Artificial Intelligence Act, art 6(4) (documentation duty where provider considers Annex III system not high-risk) and art 80 (procedure for dealing with systems classified as non-high-risk).
  3. Artificial Intelligence Act, annexes addressing product/safety component integration and related harmonisation legislation lists.
  4. Artificial Intelligence Act, Annex III heading and introductory text.
  5. Artificial Intelligence Act, Annex III(1)(a)–(c).
  6. Artificial Intelligence Act, Annex III(2).
  7. Artificial Intelligence Act, Annex III(3)(a)–(d).
  8. Artificial Intelligence Act, Annex III(4)(a)–(b).
  9. Artificial Intelligence Act, Annex III(5)(a)–(d).
  10. Artificial Intelligence Act, Annex III(6)(a)–(e).
  11. Artificial Intelligence Act, Annex III(7)(a)–(d).
  12. Artificial Intelligence Act, Annex III(8)(a)–(b).
  13. Artificial Intelligence Act, art 6(3) first subparagraph (significant risk; "materially influencing the outcome of decision making").
  14. Artificial Intelligence Act, art 6(3) conditions (including preparatory task condition).
  15. Artificial Intelligence Act, art 6(3) last sentence (profiling always high-risk in Annex III context).
  16. Artificial Intelligence Act, art 6(4).
  17. Artificial Intelligence Act, art 80 (procedure for authorities regarding Annex III systems classified as non-high-risk by provider).
  18. See eg Francesca Palmiotto, 'The AI Act Roller Coaster: The Evolution of Fundamental Rights Protection in the Legislative Process and the Future of the Regulation' (2024) European Journal of Risk Regulation.
  19. Artificial Intelligence Act, art 7(1).
  20. Artificial Intelligence Act, art 7(2) (criteria including intended purpose).
  21. See eg Marco Almada, 'How the AI Act Can Reduce the Global Reach of EU Policy' (2024).
  22. Artificial Intelligence Act, Annex III.
  23. Artificial Intelligence Act, art 6(3).
  24. Artificial Intelligence Act, art 6(3) (profiling rule).
  25. See eg 'The Academic Guide to AI Act Compliance' (SSRN, 2025).
  26. Artificial Intelligence Act, art 16 (provider obligations) and linked provisions on conformity assessment, CE marking and registration.
  27. Artificial Intelligence Act, art 49 (registration) including restricted sections/access references for certain Annex III areas.

This article is provided for informational purposes only and does not constitute legal advice. Readers should consult qualified legal counsel for advice on specific compliance matters.