
EU AI Act Annex III: which financial services AI is actually high-risk?

30 March 2026  ·  9 min read

The EU AI Act is now in force. Annex III sets out the categories of AI systems classified as high-risk, and several of them map directly to the AI tools that regulated financial institutions are already using or building. Understanding whether your AI systems fall within scope is the first compliance step, and for many firms it is not as straightforward as it seems.

This article explains which AI systems in financial services are explicitly listed in Annex III, how the provider vs deployer distinction affects your obligations, what the AML fraud detection carve-out actually says, and what timeline you are working against.

What Annex III actually says about financial services AI

Annex III lists eight categories of high-risk AI. Three of them directly affect regulated financial institutions.

Point 5(b), creditworthiness assessment and credit scoring (in scope): AI systems used to evaluate the creditworthiness of natural persons or to establish their credit score. Applies to lending, BNPL, credit card and overdraft decisioning AI.

Point 5(c), life and health insurance risk and pricing (in scope): AI systems used for risk assessment and pricing decisions in life and health insurance products.

Point 4(a), employment and recruitment (in scope): AI used to screen, filter, rank or select job candidates. Applies to HR tech and ATS providers whose tools process EU-based candidates.

AML fraud detection AI (carved out): AI used specifically for fraud detection and AML transaction monitoring, explicitly excluded from Annex III high-risk classification (discussed below).

General AI features such as chatbots, summarisation and recommendations (limited risk only): AI tools that do not make decisions affecting natural persons in the listed contexts, subject only to Article 50 transparency obligations where they involve interaction with humans.

The boundary question that matters most for fintechs: Many fintech AI systems touch both credit scoring and fraud detection. The carve-out for AML fraud detection AI does not automatically cover a system that also scores creditworthiness. If the same model influences both a fraud flag and a credit decision, the credit scoring function brings it into Annex III scope. The boundary requires legal analysis, not a general assumption that "fraud AI is exempt."
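To make that boundary concrete, here is a minimal triage sketch in Python. It illustrates the reasoning only: the profile fields and messages are our own shorthand, not the Act's language, and a first-pass triage like this can flag systems for review but never clear them.

```python
# Illustrative triage sketch: encodes the scoping logic described above.
# Field names and messages are our shorthand, not the Act's wording.

from dataclasses import dataclass

@dataclass
class AISystemProfile:
    scores_creditworthiness: bool = False          # Annex III point 5(b)
    prices_life_or_health_insurance: bool = False  # Annex III point 5(c)
    screens_job_candidates: bool = False           # Annex III point 4(a)
    fraud_detection_only: bool = False             # carve-out candidate

def triage(profile: AISystemProfile) -> str:
    """First-pass scoping: flags systems for legal review, never clears them."""
    if (profile.scores_creditworthiness
            or profile.prices_life_or_health_insurance
            or profile.screens_job_candidates):
        # Any listed function dominates: a fraud flag on the same model
        # does not remove the credit-scoring function from Annex III.
        return "likely high-risk: obtain a classification opinion"
    if profile.fraud_detection_only:
        return "carve-out candidate: confirm the system is used specifically for fraud/AML"
    return "likely limited risk: check Article 50 transparency duties"

# A model that scores credit is in scope even if it also flags fraud:
print(triage(AISystemProfile(scores_creditworthiness=True)))
# -> likely high-risk: obtain a classification opinion
```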

Provider vs deployer: why the distinction matters

The EU AI Act allocates obligations between two principal roles: providers, who develop AI systems and place them on the market, and deployers, who use third-party AI systems in a professional context. The two sets of obligations differ significantly.

Provider obligations (Chapter III)

You are a provider if you develop an AI system and place it on the EU market under your own name, even if built on a third-party foundation model. Full Chapter III obligations apply: technical documentation (Annex IV), quality management system (Article 17), conformity assessment, EU database registration, and ongoing post-market monitoring (Article 72).

Deployer obligations (Article 26)

You are a deployer if you use a third-party AI system in a professional context, such as a vendor credit tool or an external ATS. Obligations are lighter but significant: human oversight procedures, logging and record-keeping, Fundamental Rights Impact Assessment under Article 27 (for credit and insurance AI), staff AI literacy training, and notification to affected workers (for HR AI).

The provider/deployer boundary is not always obvious. Fine-tuning a third-party model on your own data, substantially configuring an AI tool, or combining multiple AI components into a system you place on the market under your name may make you a provider under Article 25, regardless of whether the underlying components were built by someone else. This determination requires legal analysis applied to the specific facts of your AI development and deployment.
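As an illustration of how those triggers combine, the sketch below encodes the role logic in the same style. The trigger names are our shorthand for the Article 25 scenarios described above; the legal test turns on specific facts, not boolean flags.

```python
# Illustrative sketch of the provider/deployer triggers described above.
# Trigger names are our shorthand; Article 25 analysis is fact-specific.

def likely_role(places_on_market_under_own_name: bool,
                substantially_modifies_third_party_system: bool,
                uses_vendor_system_as_supplied: bool) -> str:
    if places_on_market_under_own_name or substantially_modifies_third_party_system:
        # Fine-tuning, heavy configuration, or rebranding can each
        # convert a deployer into a provider under Article 25.
        return "likely provider: full Chapter III obligations"
    if uses_vendor_system_as_supplied:
        return "likely deployer: Article 26 obligations, plus Article 27 FRIA where applicable"
    return "role unclear: seek legal analysis"

# A firm that fine-tunes a vendor model and ships it under its own brand:
print(likely_role(True, True, False))
# -> likely provider: full Chapter III obligations
```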

The AML fraud detection carve-out: what it actually says

The carve-out is written into Annex III point 5(b) itself, which excepts AI systems used for the purpose of detecting financial fraud, with the reasoning set out in Recital 58. In practice it covers AI used specifically for fraud detection and AML transaction monitoring. This carve-out is widely referenced but frequently misapplied.

What the carve-out covers: AI transaction monitoring systems, AI-assisted sanctions screening, and AI fraud detection tools used purely in the context of AML/CFT compliance functions.

What the carve-out does not cover: systems that merely touch fraud or AML incidentally. The carve-out requires that the system is used specifically for AML/fraud detection. A system with multiple outputs, only one of which is a fraud flag, is not automatically carved out on the basis of that one output.

What high-risk classification means: the full obligation set

For a system classified as high-risk under Annex III, the following obligations apply before deployment (or before continued operation if the system was already deployed when obligations came into force):

A risk management system operated across the system's lifecycle (Article 9)
Data and data governance requirements for training, validation and testing data (Article 10)
Technical documentation to the Annex IV standard (Article 11)
Automatic logging and record-keeping (Article 12)
Transparency and instructions for use for deployers (Article 13)
Human oversight measures designed into the system (Article 14)
Accuracy, robustness and cybersecurity requirements (Article 15)
For providers: a quality management system (Article 17), conformity assessment (Article 43) and EU database registration (Article 49)
For deployers: human oversight procedures, a Fundamental Rights Impact Assessment where Article 27 applies, and incident reporting

The enforcement timeline

When obligations apply

The EU AI Act entered into force on 1 August 2024. Article 5 (prohibited AI practices) and Article 4 (AI literacy) obligations applied from 2 February 2025, and obligations for general-purpose AI models followed from August 2025.

Annex III high-risk obligations (the full Chapter III compliance set) apply from 2 August 2026. Providers and deployers of Annex III systems placed on the market or put into service from that date must have completed conformity assessments and produced the required documentation before deployment. Systems already on the market before that date are drawn into full scope when they undergo significant design changes (Article 111), so legacy status is not a durable shelter.

Post-market monitoring and incident reporting obligations (Articles 72-73) also apply from August 2026 and are ongoing.

August 2026 is not a distant deadline. Building a complete Annex III compliance programme, including technical documentation, QMS, FRIA, and conformity assessment preparation, typically takes 8 to 16 weeks once a decision to proceed is made. Firms that have not started the classification and scoping process have less runway than they may assume.
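The arithmetic is simple enough to sketch. The deadline below is from the Act; the 16-week programme duration is this article's planning estimate, not a legal requirement.

```python
# Rough runway arithmetic for the Annex III application date.
# The 16-week programme length is an assumption from the text above.

from datetime import date

DEADLINE = date(2026, 8, 2)  # Annex III high-risk obligations apply

def slack_weeks(today: date, programme_weeks: int = 16) -> int:
    """Weeks remaining once the assumed programme duration is subtracted."""
    return (DEADLINE - today).days // 7 - programme_weeks

# From this article's publication date, a 16-week build leaves about
# one week of slack before the obligations bite:
print(slack_weeks(date(2026, 3, 30)))  # -> 1
```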

The GDPR and AML interaction: why integrated advice matters

For regulated financial institutions, the EU AI Act does not exist in isolation. Credit scoring AI interacts with GDPR Article 22 (automated individual decision-making). Insurance pricing AI interacts with GDPR legitimate interest assessments and DORA operational resilience requirements. AML transaction monitoring AI sits at the boundary of the Annex III carve-out and must be assessed against the specific functions the system performs.

Handling these frameworks through separate advisors (one for the AI Act, one for GDPR, one for AML) creates gaps at every seam. A classification opinion that ignores the GDPR interaction is incomplete, and an AI Act FRIA conducted without understanding the AML implications of the system being assessed will miss material obligations.

AI compliance advisory for regulated financial institutions

Our team provides AI classification opinions, FRIA and cross-regulatory advisory for regulated financial institutions using AI in credit decisions, insurance pricing or HR screening. Integrated with your existing AML and GDPR framework. Written classification opinion delivered within two weeks of engagement confirmation.

View AI Compliance service →

For regulated firms that also need AML compliance support alongside AI compliance advisory, see our KYC / ODD Team Outsourcing and AML Audit and Advisory services.

Frequently asked questions

Which financial services AI systems fall under EU AI Act Annex III?
Annex III lists AI systems used to evaluate the creditworthiness of individuals or establish their credit score, AI used in life and health insurance pricing and underwriting, and AI used in recruitment and employment decisions. The AML transaction monitoring carve-out means fraud detection systems used solely for that purpose fall outside high-risk classification, but AI that also informs customer risk profiling may still be in scope. Classification depends on the specific use case, not the technology category.
What is a Fundamental Rights Impact Assessment (FRIA) and who must conduct one?

A FRIA is a documented assessment of the potential impact of a high-risk AI system on fundamental rights, conducted before deployment. It is required under the EU AI Act for deployers of Annex III systems in financial services, including credit scoring and insurance underwriting AI.

The FRIA must cover the categories of persons affected, the likelihood and severity of potential harm, the safeguards in place, and the process for challenging AI-assisted decisions. It must be kept up to date and made available to market surveillance authorities on request.
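As a sketch only, the record below captures those Article 27 elements as fields. The structure and field names are our own illustration; the Act prescribes what the assessment must cover, not a data format.

```python
# Minimal FRIA record sketch, assuming the Article 27 elements above.
# Field names are illustrative, not a prescribed format.

from dataclasses import dataclass
from datetime import date

@dataclass
class FRIARecord:
    system_name: str
    affected_person_categories: list[str]  # categories of persons affected
    harms: list[tuple[str, str, str]]      # (harm, likelihood, severity)
    safeguards: list[str]                  # mitigations in place
    challenge_process: str                 # how AI-assisted decisions are contested
    last_reviewed: date                    # must be kept up to date

fria = FRIARecord(
    system_name="vendor credit decisioning model",
    affected_person_categories=["retail loan applicants"],
    harms=[("wrongful credit denial", "medium", "high")],
    safeguards=["human review of declines", "ongoing model monitoring"],
    challenge_process="manual re-assessment on customer request",
    last_reviewed=date(2026, 3, 30),
)
```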

What is the difference between a provider and a deployer under the EU AI Act?
A provider is the entity that develops or places a high-risk AI system on the market, whether as a standalone product or embedded in a service. A deployer is the entity that uses a high-risk AI system in a professional context. Financial institutions that buy credit-scoring or insurance AI from a third-party vendor are typically deployers. Compliance obligations differ: providers carry the conformity assessment burden; deployers carry the FRIA, human oversight, and incident reporting obligations.
When do EU AI Act obligations apply to financial institutions?
The EU AI Act entered into force in August 2024. For high-risk AI systems under Annex III, full compliance obligations apply from August 2026. Financial institutions using or developing Annex III AI systems should be conducting classification assessments and preparing FRIA documentation now. The conformity assessment process for providers can take significant time if technical documentation is not already in place.
How does the EU AI Act interact with DORA and CRD?
The EU AI Act does not displace DORA or CRD obligations: it adds to them. AI systems used in ICT risk management or operational resilience functions at financial entities are subject to both DORA's ICT governance requirements and EU AI Act compliance obligations where they also qualify as high-risk under Annex III. Credit risk models subject to CRD internal model governance are similarly exposed to dual requirements. Cross-regulatory mapping is essential before the August 2026 deadline.

Legal references

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

[2] Annex III of Regulation (EU) 2024/1689: list of high-risk AI systems subject to full compliance obligations.

[3] Regulation (EU) 2022/2554 (Digital Operational Resilience Act): overlapping obligations for AI systems used in ICT risk management by financial entities.

[4] Directive (EU) 2024/1619 (Capital Requirements Directive VI): governance and internal model requirements relevant to credit-scoring AI systems.

This article is provided for informational purposes only and does not constitute legal advice.

Get in Touch

Tell us your situation. We will respond within one business day.

A senior advisor will review your details and come back with an honest assessment of which service fits your situation. No obligation.

EU-based analysts and compliance officers
KYC/ODD teams operational within two weeks
Fixed fees agreed in writing before engagement
Response within one business day guaranteed

GDPR-compliant · Data never shared with third parties