EU AI Act Compliance for Fintech & Regulated AI

EU AI Act deadlines
are live. Is your
AI estate legally
defensible?

We classify your AI systems under the EU AI Act, determine your obligations as a provider or deployer, and deliver the documentation your business needs to be legally defensible — before a regulator, investor, or enterprise customer asks.

Annex III specialists
Legally defensible output
Senior advisor assigned
Response within 24h
Cross-regulatory coverage
No commitment required
Request a Free Assessment
A senior advisor will contact you within one business day to assess your EU AI Act exposure — at no cost or obligation.

By submitting you agree to our Privacy Policy. GDPR-compliant data handling guaranteed.

Who this service is for

✓ Fintech, BNPL & Credit Lenders

Credit scoring and creditworthiness AI is explicitly listed under Annex III, point 5(b). Any fintech using AI to determine loan eligibility is likely a high-risk provider or deployer.

✓ Insurtech & Life/Health Insurers

AI for risk assessment and pricing in life and health insurance falls under Annex III, point 5(c). Providers carry full Chapter III obligations, and deployers must conduct a mandatory Fundamental Rights Impact Assessment (FRIA) under Article 27.

✓ HR Tech & Recruitment Platforms

AI used to screen, score, or filter candidates falls under Annex III, point 4(a). If you build or deploy AI-powered ATS or candidate ranking tools, your compliance window is open now.

✓ RegTech & Compliance Technology

RegTech companies building AI for regulated clients are providers with full Chapter III obligations. Enterprise procurement increasingly demands AI Act compliance evidence as a condition of contract.

✓ Crypto & Digital Asset Firms

Under MiCA and growing investor scrutiny, AI governance is a due diligence standard. If you have existing AML/compliance infrastructure, AI Act readiness is a natural and urgent extension.

✓ Non-EU Companies with EU Customers

The EU AI Act applies extraterritorially — where AI outputs affect EU residents, not where the provider is based. US, UK, and APAC companies with EU-facing AI products are fully in scope.

€35M
Maximum fine — EU AI Act Article 99
7%
Global turnover — alternative penalty cap
Aug '26
Current Annex III enforcement deadline
8–12
Weeks minimum to compliance readiness
Serving regulated AI operators across
🏦 Fintech & Lending
🛡 Insurtech
👥 HR Tech
⚙️ RegTech
₿ Crypto & Digital Assets
🌍 Non-EU Companies
When Companies Call Us

The situations we
see most often

Most companies act when a specific commercial pressure hits — a funding round requiring AI governance evidence, an enterprise contract conditional on compliance, or a product launch with a regulatory deadline. These are the situations where we can help fastest.

First: understand your role — are you a provider or a deployer?
Provider · Building or white-labelling AI

If you develop an AI system and place it on the EU market under your own name — even if built on a third-party model — you are a provider. Full Chapter III obligations apply: technical documentation, QMS, conformity assessment, EU database registration.

Deployer · Integrating third-party AI

If you use a third-party AI system in a professional context — an AI credit tool, an ATS from a vendor — you are a deployer. Lighter but significant obligations apply: human oversight, logging, FRIA (for credit and insurance AI), and staff notification.

The distinction is not always obvious. Fine-tuning or substantially configuring a third-party model may make you a provider under Article 25 — and documenting your classification is itself a legal obligation under Article 6(4). Both require legal judgment, not a template.

Get a classification opinion
Critical
Investor or acquirer due diligence

Series B/C closes and M&A transactions increasingly require AI governance evidence. Non-compliance discovered in due diligence can block or reprice a round. This is the strongest commercial driver we see.

Critical
Enterprise customer procurement request

A large customer sends a vendor questionnaire requiring EU AI Act compliance evidence. This is deal-blocking and immediate. We have rapid-response offerings for exactly this situation.

High
New AI product launch in the EU market

Launching an AI-powered product in a regulated context without prior classification and compliance creates legal exposure. Pre-launch compliance review is a non-negotiable entry condition for Annex III systems.

High
EU market entry by a non-EU company

The AI Act applies wherever outputs affect EU residents. US, UK, and APAC companies expanding into the EU need scope assessment, compliance implementation, and potentially an authorised representative under Article 22.

High
Regulatory inquiry or market surveillance contact

Contact from a national competent authority is an emergency. We provide rapid-response gap assessment and remediation support with priority response times.

Moderate
Board or senior management mandate

When the board directs legal or compliance to address AI regulatory risk, you need structured deliverables — a classification opinion, gap report, and governance framework — not internal research.

Moderate
Internal AI system incident

A bias finding, a data error, or an AI-driven decision that harms a customer. Post-incident, Article 73 reporting obligations trigger and a documented remediation plan becomes legally necessary.

Not sure if your situation applies? The free scoping call is the fastest way to find out — no commitment required.

Book Free Call
The Regulatory Reality

The EU AI Act is already
in force. What applies now.

Enforcement is staggered — but several obligations are active today, and the window for structured compliance before the August 2026 Annex III deadline is narrowing.

🚫

Prohibited AI — banned since February 2025

Social scoring, emotion recognition in workplaces, real-time biometric surveillance, and certain profiling practices are already prohibited. Operating these carries the highest penalty tier.

📚

AI literacy obligation — active since February 2025

All providers and deployers must ensure staff operating or overseeing AI systems have sufficient AI literacy. Broadly applicable and already law.

⚠️

Annex III high-risk obligations — August 2026

Full conformity assessment, technical documentation, QMS, human oversight, and EU database registration for credit, insurance, HR, and other listed systems. Requires 8–12 weeks minimum preparation.

🤖

GPAI model obligations — active since August 2025

Providers of general-purpose AI models must maintain technical documentation, provide information to downstream providers, and assess and mitigate systemic risk where applicable.

Enforcement timeline at a glance
Feb 2025
Prohibited AI banned · AI literacy obligation active
Active Now
Aug 2025
Penalty regime live · GPAI model obligations apply
Active Now
Aug 2026
Full Annex III high-risk obligations · Transparency (Art. 50)
Current Deadline
Dec 2027
Proposed Omnibus extension — not yet law
Proposed Only
Penalty Structure — EU AI Act Article 99
Prohibited AI violations (Art. 5) · €35M / 7%
High-risk non-compliance (Art. 6–27) · €15M / 3%
Incorrect info to authorities · €7.5M / 1%

Penalties apply as the greater of the fixed amount or percentage of global annual turnover. SME thresholds exist but do not eliminate liability. The penalty regime has been active since 2 August 2025.
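The "greater of the fixed amount or percentage of turnover" rule can be sketched in a few lines (illustrative only — `penalty_cap` and its parameter names are hypothetical, and the amounts are the Article 99 caps, not fines a firm would actually receive):

```python
def penalty_cap(fixed_cap_eur: int, pct_cap: float, global_turnover_eur: int) -> float:
    """Return the applicable upper bound under the Article 99 'greater of'
    rule: the fixed amount or the percentage of global annual turnover,
    whichever is higher. Illustrative sketch, not legal advice."""
    return float(max(fixed_cap_eur, pct_cap * global_turnover_eur))

# Prohibited-AI tier (Art. 5): €35M or 7% of global annual turnover.
print(penalty_cap(35_000_000, 0.07, 1_000_000_000))  # 70000000.0 — 7% of €1bn exceeds the €35M floor
print(penalty_cap(35_000_000, 0.07, 100_000_000))    # 35000000.0 — the fixed amount dominates at €100M turnover
```

The same pattern applies at the lower tiers (€15M / 3% and €7.5M / 1%); only the cap inputs change.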

⚠️
On the Digital Omnibus delay: A proposed amendment (COM(2025) 836) would extend the Annex III deadline to December 2027. This proposal is not yet law — August 2026 remains the current legal deadline. Even if adopted, it extends the window, not the obligation. Starting now is the only commercially rational position.
The Practice

Regulated-sector AI compliance
built for legal leaders.

We are a specialist EU AI Act and GDPR compliance consultancy working with legal, compliance, and technology leadership in fintech, insurtech, HR tech, RegTech, and crypto — the sectors where Annex III exposure is real and willingness to act is high.

We combine deep EU regulatory expertise with genuine technical fluency in AI systems. This dual capability is what separates us from law firms (who understand the law but not the systems) and template sellers (who understand neither). Our outputs — classification opinions, technical documentation, conformity assessments — are legally defensible, not just plausible.

For regulated-sector clients, AI Act compliance must work alongside GDPR, DORA, AML, and MiCA. We handle all of this as one integrated engagement — no gaps between advisors, no conflicting controls.

⚖️
Legal-Grade Classification

Written classification opinions — not checklists. Provider vs. deployer, Annex III scope, and Article 6 assessment require legal judgment.

🔬
Technical Fluency

We read Annex IV documentation requirements and model architecture. Our gap analyses are accurate because we understand how AI systems actually work.

🔗
Cross-Regulatory

EU AI Act + GDPR + DORA + AML + MiCA as one integrated engagement. No disjointed advice, no gaps, no conflicting controls.

📄
Defensible Output

Every deliverable is produced to withstand market surveillance authority scrutiny — not just satisfy an internal audit.

Speak with a Senior Advisor
40+
Compliance mandates delivered
12
EU member states served
100%
Clients compliant ahead of deadline
24h
Response on all enquiries
EU AI Act — Risk Classification Framework
Unacceptable Risk · Prohibited
High-Risk AI (Annex III) · Full Compliance
Limited Risk AI · Transparency
Minimal Risk AI · Best Practice

Credit scoring, insurance pricing AI, recruitment systems, and AI in essential services fall under Annex III high-risk categories. Full conformity assessment is mandatory before deployment or continued operation. Fraud detection AI is explicitly excluded — but the boundary requires legal analysis.

"

They understood our AI systems technically and translated that into a regulatory programme our board could approve and our engineers could execute. We were audit-ready within eleven weeks of engagement.

VP Legal & Compliance — Enterprise SaaS, Paris
🏆
Why not a law firm?

Law firm rates typically start at €500/hour and their output is legal opinion — valuable, but not the operational compliance programme your organisation needs to implement. We deliver fixed-fee, board-ready programmes at a price point accessible to scale-ups and mid-market companies.

Full Regulatory Coverage

One firm. Every framework
your AI touches.

Regulated-sector companies using AI don't face a single regulation — they face a stack of overlapping frameworks, each with its own obligations, timelines, and enforcement bodies. Most firms advise on one at a time. That means gaps between advisors, conflicting controls, and compliance work done twice.

We handle EU AI Act, GDPR, AML, DORA, DPFT, and MiCA as a single integrated engagement — mapping obligations across every relevant framework, consolidating overlapping work, and producing a unified compliance architecture, not a collection of siloed reports.

For fintech and crypto clients in particular: your AI systems touch AML transaction monitoring, GDPR data flows, DORA operational resilience, and EU AI Act classification simultaneously. Handling these separately creates legal exposure at every seam. We close those seams.

💬
"We were dealing with three separate advisors for GDPR, DORA, and now the AI Act." This is the most common situation we encounter. It creates gaps, contradictions, and duplication. An integrated engagement costs less and produces a more defensible output.
Single engagement vs. multiple advisors
Multiple Advisors
Scanlex
Frameworks covered
One at a time
All simultaneously
Cross-framework gaps
Common, unmanaged
Mapped & closed
Duplicated work
High — DPIA vs FRIA etc.
Consolidated
Conflicting controls
Risk at every handoff
Single architecture
Total cost
Higher — separate fees
One fixed-fee engagement
Discuss your regulatory stack
GDPR
General Data Protection Regulation

Automated decision-making (Art. 22), DPIAs (Art. 35), lawful basis for AI training data, data minimisation. GDPR obligations run alongside and partially overlap with EU AI Act FRIA and transparency requirements.

Overlap: DPIA ↔ AI Act FRIA · Art. 22 ↔ AI Act Art. 14 human oversight
AML / CFT
Anti-Money Laundering & Counter-Terrorist Financing

AML fraud detection AI is explicitly carved out of Annex III high-risk classification — but this boundary requires legal analysis. AML obligations interact with AI Act data governance and human oversight requirements.

Overlap: Annex III carve-out · AML AI governance ↔ Art. 9 risk management
DORA
Digital Operational Resilience Act

DORA's ICT risk management, third-party oversight, and operational resilience requirements apply to financial entities and overlap significantly with EU AI Act QMS (Art. 17), post-market monitoring (Art. 72), and incident reporting (Art. 73).

Overlap: DORA ICT risk ↔ AI Act Art. 9/17 · DORA incident ↔ AI Act Art. 73
DPFT
Data Protection in Financial Services

Sector-specific data protection in financial services — PSD2, open banking, national supervisory guidance — interacts with both GDPR and EU AI Act transparency and data governance requirements.

Overlap: Sectoral data rules ↔ AI Act Art. 10 data governance
MiCA
Markets in Crypto-Assets Regulation

As crypto firms professionalise under MiCA, AI governance is becoming an investor and regulatory expectation. MiCA operational requirements complement AI Act obligations for crypto firms using AI in trading or risk assessment.

Overlap: MiCA governance ↔ AI Act AI governance framework
ISO 42001
AI Management System Standard

ISO 42001 certification is increasingly requested by enterprise procurement alongside EU AI Act compliance. We build governance frameworks that simultaneously satisfy ISO 42001 and EU AI Act QMS obligations — avoiding duplicated effort.

Overlap: ISO 42001 AIMS ↔ AI Act Art. 17 QMS

Not sure which frameworks apply? We map your full regulatory stack in the initial scoping call — at no cost.

Book Free Call
What Companies Get Wrong

What you think you need
vs. what the law actually requires.

Most organisations approaching EU AI Act compliance for the first time have reasonable-sounding assumptions about what the work involves. Almost all underestimate it — not from negligence, but because the regulation is technically and legally demanding in ways that only become clear on close reading. These are the gaps we close in every engagement.

✗  What most organisations assume
✓  What the EU AI Act actually requires
Assumption 01
"We just need a checklist to confirm compliance."

Most teams assume AI Act compliance is a tick-box exercise — fill in a form, confirm a few answers, done.

What the law requires
A legally defensible written classification opinion — not a completed form.

Under Article 6(4), any provider who considers their Annex III system is not high-risk must document that assessment before placing it on the market. The Commission's Article 6 guidelines were due February 2026 and have not been published. An incorrect classification is itself a compliance violation.

Article 6(4) · Annex III
Assumption 02
"We use a vendor's AI tool — the vendor is responsible, not us."

A widespread assumption — and one that creates real legal exposure. Teams believe that using a third-party AI product passes all obligations to the provider.

What the law requires
Deployers of high-risk AI carry their own mandatory obligations — independently of the vendor.

Article 26 requires deployers to assign human oversight, maintain logs for at least 6 months, monitor the system, and notify affected persons. For credit and insurance AI, deployers must also conduct a Fundamental Rights Impact Assessment (FRIA) under Article 27, notified to the market surveillance authority.

Article 26 · Article 27 (FRIA)
Assumption 03
"We already did a GDPR DPIA — that covers the AI Act."

Teams with existing GDPR programmes often assume their DPIAs carry over. A logical assumption — but legally incorrect.

What the law requires
A DPIA satisfies GDPR. The AI Act FRIA has a broader scope and is a separate legal obligation.

The FRIA under Article 27 covers fundamental rights beyond data protection — discrimination risks, access to services, democratic rights. A GDPR DPIA partially satisfies FRIA requirements but not fully. Both must exist as distinct documents.

Article 27 · GDPR Art. 35
Assumption 04
"We can wait — the real deadline is December 2027."

Many teams have heard about the proposed Digital Omnibus delay and are using it to defer compliance planning entirely.

What the law requires
August 2026 is the current legal deadline. The Omnibus is a proposal — not yet law.

COM(2025) 836 proposed extending Annex III obligations to December 2027, but as of March 2026 it has not been adopted. Even if it passes, it extends the window — it does not remove the obligation. For organisations facing investor due diligence or procurement requirements, your commercial deadline exists regardless.

COM(2025) 836 · Current deadline: Aug 2026
Assumption 05
"Our AI only helps internal decisions — it doesn't affect customers directly."

Teams often assume internal-facing AI tools are outside scope because end users don't interact with them directly.

What the law requires
Annex III scope is defined by impact on natural persons — not by who operates the system.

AI used to assess creditworthiness, determine insurance pricing, or screen job candidates is high-risk regardless of whether the affected person interacts with the AI directly. An underwriter using an AI pricing model is operating a high-risk system that affects insurance applicants.

Annex III · Article 6
Assumption 06
"We just need a one-page AI policy for the board."

Compliance teams often propose a board AI policy as the primary deliverable — a reasonable starting point, but one that satisfies none of the legal obligations under Article 9, Article 17, or Annex IV.

What the law requires
High-risk providers need a Quality Management System, a Risk Management System, and a full Annex IV technical file — per AI system.

Article 9 requires a documented, iterative risk management system. Article 17 requires a full Quality Management System. Annex IV specifies a detailed technical documentation file per system, including architecture, training data, accuracy metrics, and human oversight design — required before market placement. A board AI policy satisfies none of these.

Article 9 · Article 17 · Annex IV

Recognise any of these conversations? These are the assumptions we address in every initial scoping call — before any engagement begins. The free call costs nothing. Understanding your actual legal position is the only way to build a compliance programme that holds up.

Book the Free Call
Start Here

Not sure if the EU AI Act
applies to you? Start with
a free tool.

Before engaging a compliance advisor, use the free EU AI Act Compliance Checker from the Future of Life Institute to understand whether your AI systems are likely in scope.

The tool gives you a starting point. When you need a legally defensible written opinion — one that holds up under regulatory scrutiny, investor due diligence, or a procurement audit — that is where we come in.

⚖️

Free tool vs. professional opinion: The compliance checker tells you whether obligations likely apply. It cannot classify your specific system under Article 6(4), determine your provider vs. deployer status under Article 25, or produce documentation that satisfies a notified body or market surveillance authority. That requires legal judgment — and that is what we provide.

Open the Free Compliance Checker
How to use the checker
1
Select your role — provider (you build AI), deployer (you use AI), importer, or distributor
2
Answer questions about your AI system's intended purpose and use case
3
The tool determines your risk tier — unacceptable, high-risk, limited, or minimal
4
Review the obligations that apply to your specific role and risk classification
5
If you fall into high-risk or are uncertain — contact us for a legally defensible classification opinion
What the free tool cannot do
Issue a written legal classification opinion (Article 6(4))
Determine provider vs. deployer under Article 25 fine-tuning rules
Produce Annex IV technical documentation
Satisfy investor due diligence or procurement audit requirements
Cover GDPR, DORA, AML or cross-regulatory obligations
Speak to a Senior Advisor
Practice Areas

Six services.
One outcome.

Every engagement starts with understanding your role (provider or deployer), your sector, and your Annex III exposure. We do not run generic compliance programmes — all work is scoped to your specific systems and obligations.

S2 — Classification

AI Act Scope & Classification Audit

Written legal classification opinion per AI system: provider vs. deployer role, Annex III scope, and full regulatory obligations matrix. Classification is itself a legal obligation under Article 6(4) — and it requires legal judgment, not a template.

For: Companies uncertain of their Annex III scope. The mandatory foundation for all downstream compliance work.
Annex III classification · provider vs deployer · Article 6
S3 — Provider Compliance

High-Risk AI Compliance Implementation

Full Chapter III obligations for Annex III providers: risk management system (Art. 9), technical documentation (Annex IV), QMS (Art. 17), human oversight design (Art. 14), conformity assessment preparation, and EU database registration.

For: Fintech lenders, HR tech SaaS, insurtech pricing tools, RegTech AI products — any high-risk provider needing full Annex III compliance.
Annex IV documentation · QMS Article 17 · conformity assessment
S4 — Deployer Compliance

Deployer AI Governance Package

Article 26 obligations assessment, human oversight protocol design, Fundamental Rights Impact Assessment (FRIA, Art. 27) for credit and insurance AI, logging protocols, vendor contract review, and staff AI literacy training.

For: Banks, insurers, and employers deploying vendor AI tools for credit, insurance pricing, or HR. FRIA is mandatory for credit and insurance deployers.
Article 26 deployer · FRIA Article 27 · vendor contracts
S5 — Cross-Regulatory

GDPR, AML, DORA & Multi-Framework Integration

AI Act compliance integrated with GDPR, DORA, AML/CFT, DPFT, and MiCA. We map obligations across all applicable regimes — ensuring no gaps, no conflicts, and a single coherent compliance architecture. One engagement, not four.

For: Fintech and crypto clients with layered regulatory obligations. Avoids duplicate effort and contradictory controls across advisors.
GDPR + AI Act · DORA alignment · AML + MiCA
S6 — Ongoing

AI Compliance Retainer

Monthly regulatory intelligence briefings, quarterly AI system reviews, post-market monitoring support (Art. 72), incident triage (Art. 73), and investor/customer inquiry responses. Your permanent AI compliance function.

For: Clients post-implementation needing ongoing governance as AI systems and regulation continue to evolve.
post-market monitoring · Article 72/73 · regulatory watch
Why Us

How we differ from law firms,
Big4, and template sellers

🎯

Regulated-Sector Focus

We work only in fintech, insurtech, HR tech, RegTech, and crypto — the sectors where Annex III exposure and commercial urgency are real. Our entire methodology is built for these clients, not adapted from generic frameworks.

📝

Written Legal Opinions, Not Checklists

Our classification opinions and compliance assessments are legally defensible written documents. This distinction matters if a market surveillance authority asks to see your compliance evidence.

⚙️

Technical + Legal Fluency

Annex IV documentation requires understanding AI system architecture, not just regulatory text. We bring both — which is why our documentation holds up under technical and legal scrutiny simultaneously.

🔗

Cross-Regulatory by Default

EU AI Act + GDPR + DORA + AML + MiCA as a single advisory engagement. No disjointed advice, no gaps between advisors, no contradictory controls. Structurally different from how law firms work.

Industries Served

The regulated sectors where
Annex III exposure is real

🏦

Fintech, Lending & BNPL

Credit scoring and creditworthiness AI is explicitly listed under Annex III, point 5(b). Any fintech using AI to determine loan eligibility is likely a high-risk provider or deployer. Fraud detection AI is carved out — but the boundary requires legal analysis.

Annex III, Point 5(b) · Highest commercial urgency
🛡

Insurtech & Life/Health Insurance

AI for risk assessment and pricing in life and health insurance falls under Annex III, point 5(c). Providers carry full Chapter III obligations, and deployers must conduct the mandatory Fundamental Rights Impact Assessment (FRIA) under Article 27.

Annex III, Point 5(c) · FRIA mandatory
👥

HR Tech & Recruitment

AI used to screen, filter, or rank job candidates falls under Annex III, point 4(a). Any ATS with AI-driven candidate scoring is likely a high-risk provider. Employers deploying such tools are deployers with Article 26 obligations including mandatory worker notification.

Annex III, Point 4(a) · Provider & deployer obligations
⚙️

RegTech & Compliance Tech

RegTech companies building AI-powered compliance tools for regulated clients become providers with full Chapter III obligations. Enterprise procurement increasingly requires AI Act compliance evidence as a direct commercial driver.

Provider obligations · Procurement-driven urgency

₿

Crypto & Digital Assets

Most crypto AI (AML/fraud detection) is carved out of Annex III high-risk — but MiCA compliance culture and investor due diligence are driving AI governance spend. We position AI Act readiness as an extension of your existing AML compliance architecture.

MiCA + AI Act · Investor DD driver
🌍

Non-EU Companies with EU Customers

The EU AI Act is extraterritorial — it applies where AI outputs affect EU residents, regardless of where the company is based. US, UK, Israeli, and APAC companies with EU-facing AI products are in scope. We handle scope assessment, compliance implementation, and EU authorised representative appointments under Article 22.

Extraterritorial scope · Article 22 authorised rep
Methodology

From first call to
certified compliance.

A structured four-phase engagement designed for speed without shortcuts. Most mandates achieve initial compliance readiness within 8–12 weeks. All outputs are immediately usable as regulatory evidence and investor-grade documentation.

1
Phase 01 · Week 1

Discovery & Scoping

Map your AI systems, intended purposes, data flows, and full regulatory exposure. Determine provider vs. deployer status. Identify Annex III scope and immediate obligations.

Output: AI inventory + role determination + risk flags
2
Phase 02 · Weeks 2–3

Classification & Gap Analysis

Written classification opinion per AI system. Full gap analysis against applicable obligations ranked by legal severity, with regulatory citations and owner assignments.

Output: Legal classification opinions + gap report
3
Phase 03 · Weeks 4–10

Remediation

Embedded alongside your legal, compliance, and engineering teams. We draft Annex IV technical documentation, design the QMS and risk management system, prepare FRIA where applicable, and deliver staff AI literacy training.

Output: Full conformity documentation package
4
Phase 04 · Ongoing

Certification & Watch

Conformity declaration, EU database registration, regulatory watch programme, and board-level reporting. Ongoing post-market monitoring (Art. 72) and incident support (Art. 73).

Output: Conformity declaration + retainer programme
Fee Structure

Transparent fees.
No surprises.

Every engagement begins with a complimentary 30-minute scoping call. We will tell you candidly which programme fits your situation — no obligation to proceed. Minimum engagement €5,500.

Immediate — Active Now
Entry Point

Prohibition & Literacy Audit

Audit of current AI practices against Article 5 prohibited AI and the Article 4 AI literacy obligation — both already in force.

€5,500 / fixed fee
Best for: Any company needing to confirm immediate legal exposure before tackling Annex III.
  • Full AI system inventory
  • Article 5 prohibition screening opinion
  • AI literacy gap assessment (Article 4)
  • Executive risk flag report
  • 30–60 day priority action plan
Engage →
Classification

Scope & Classification Audit

Written legal classification opinion per AI system — provider vs. deployer role, Annex III scope, and regulatory obligations matrix.

€8,500 / fixed fee
Best for: Companies uncertain of their Annex III scope. The mandatory foundation before any implementation work.
  • AI inventory + intended purpose mapping
  • Written classification opinion per system
  • Provider vs. deployer determination
  • Regulatory obligations matrix
  • Prioritised compliance roadmap
Engage →
Ongoing

Retainer Partnership

Ongoing AI compliance governance — regulatory watch, post-market monitoring, incident support, and investor inquiry responses.

Custom
Best for: Clients post-implementation needing permanent compliance infrastructure as regulation and AI systems evolve.
  • Monthly regulatory intelligence briefings
  • Digital Omnibus & EC guidance tracking
  • Quarterly compliance reviews
  • Article 72 post-market monitoring
  • Article 73 incident triage support
  • Investor / customer inquiry responses
Enquire →
🔒 All engagements begin with a complimentary 30-minute scoping call · No commitment required · Minimum engagement €5,500 · GDPR-compliant terms
Client Testimony

Trusted by compliance and legal
teams across Europe.

"

Our Series B investor asked for AI governance evidence six weeks before close. Scanlex produced a classification opinion, gap analysis, and interim governance framework in three weeks. The diligence process completed without an AI Act condition. That engagement paid for itself many times over.

CEO — Fintech Lending Platform, Amsterdam
"

We were uncertain whether our AI pricing tool made us a provider or a deployer. That classification question was blocking our entire compliance programme. Scanlex produced a written legal opinion in two weeks that gave our legal team the foundation to proceed.

Chief Legal Officer — Insurtech, Berlin
"

A large enterprise client required EU AI Act compliance evidence as a procurement condition. We had eight weeks. Scanlex scoped, classified, and produced the required documentation on time. We retained the contract.

Head of Compliance — HR Tech SaaS, Stockholm
FAQ

Questions from
legal & compliance
leaders

Can't find your answer? Contact us directly — we respond within one business day.

Does the EU AI Act apply if we are based outside the EU?
Yes. The EU AI Act has extraterritorial scope under Article 2 — it applies wherever AI system outputs are used in the EU, regardless of where the provider or deployer is established. US, UK, Israeli, and APAC companies with EU-facing AI products are fully in scope. Non-EU providers must also appoint an EU authorised representative under Article 22 in most cases. We handle this for non-EU clients as a standard component of our scope assessment.
Am I a provider or a deployer — and why does it matter?
This is one of the most commercially important questions in the EU AI Act. A provider develops an AI system and places it on the market under their own name — they carry the heaviest obligations (full Chapter III, conformity assessment, Annex IV documentation). A deployer uses a third-party AI system professionally — lighter but still significant obligations apply (Article 26, FRIA, logging). The distinction becomes complex when a company fine-tunes or substantially configures a third-party model — Article 25 may reclassify them as a provider. We produce a written legal opinion on this question as part of our Scope & Classification Audit.
We use a third-party AI tool for credit decisions. Are we affected?
Almost certainly yes. Creditworthiness and credit scoring AI is explicitly listed under Annex III, point 5(b). As the deployer, you carry Article 26 obligations: human oversight, logging for at least 6 months, informing affected persons, monitoring the system, and reporting incidents. Crucially, you also have a mandatory Fundamental Rights Impact Assessment (FRIA) obligation under Article 27 — notified to the market surveillance authority. These obligations exist regardless of what your vendor contract says.
Our fraud detection AI was excluded from Annex III. Does that mean no obligations?
Not entirely. Annex III point 5(b) explicitly excludes AI used for fraud detection and prevention from the credit scoring high-risk category — an important carve-out for fintech clients. However, this exclusion applies only to the Annex III high-risk classification. Article 4 AI literacy obligations, Article 50 transparency obligations (if customer-facing), and GDPR Article 22 automated decision-making obligations may still apply. We include this boundary analysis in our classification audits as a standard component.
What is a FRIA and do we need one?
A Fundamental Rights Impact Assessment (FRIA) is mandatory under Article 27 for: public bodies, entities providing public services, and private entities deploying Annex III credit scoring (point 5(b)) or insurance risk AI (point 5(c)). The FRIA must describe the system's use, who is affected, risks to fundamental rights, how human oversight is implemented, and risk mitigation measures. It must be notified to the relevant market surveillance authority. A GDPR DPIA partially satisfies FRIA requirements but not fully — the FRIA has a broader fundamental rights scope.
Should we wait for the Digital Omnibus delay before acting?
No. The proposed Digital Omnibus (COM(2025) 836) would extend the Annex III deadline from August 2026 to December 2027. As of March 2026 it has not been adopted — August 2026 remains the current legal deadline. Even if adopted, it extends the window — it does not remove the obligation. For commercially motivated companies, investor due diligence, enterprise procurement requirements, and product launches create compliance pressure independent of regulatory deadlines. Starting now means your compliance is completed properly, not rushed.
How does the EU AI Act interact with GDPR, DORA, and AML?
The EU AI Act, GDPR, DORA, and AML rules are complementary but distinct frameworks. Key interactions: GDPR Article 22 (automated decision-making) intersects with AI Act Article 14 human oversight requirements. A DPIA under GDPR Article 35 partially satisfies the FRIA, but not fully. DORA operational resilience requirements for financial entities overlap with the AI Act's quality management system (Article 17) and incident reporting (Article 73) obligations. For regulated financial institutions with AML obligations, fraud detection AI has a specific Annex III carve-out — but the boundary requires legal analysis. We handle all of these as a single integrated engagement.
What does the free call cover and what is the first step?
Submit the form on this page. A senior advisor will respond within one business day to arrange a complimentary 30-minute scoping call. In that call we will: identify your likely role (provider or deployer), assess your probable Annex III exposure based on your AI systems and use cases, tell you which service tier is appropriate, and give you an honest view of urgency and timeline. There is no obligation to proceed. If your situation is outside Annex III scope, we will tell you — and we will not propose an engagement you do not need.
Get in Touch

The compliance window
is open now.
Use it.

Send us a brief message and a senior advisor will respond within one business day — with an honest assessment of what you need and what it will cost. No sales team. No obligation.

The strongest triggers we see: investor due diligence requiring AI governance evidence, enterprise customer procurement requests, and new product launches in regulated EU contexts. If any of these apply, the window for structured compliance is now.
Complimentary 30-minute scoping call — no cost
Senior advisor assigned — not a sales or junior team
Honest scope assessment — we will tell you if you don't need us
Written classification opinions, not checklists
Response within one business day guaranteed
GDPR-compliant data handling on all enquiries

We respond within one business day · Your data is handled under GDPR and never shared with third parties