We classify your AI systems under the EU AI Act, determine your obligations as a provider or deployer, and deliver the documentation your business needs to be legally defensible — before a regulator, investor, or enterprise customer asks.
Credit scoring and creditworthiness AI is explicitly listed under Annex III, point 5(b). Any fintech using AI to determine loan eligibility is likely a high-risk provider or deployer.
AI for risk assessment and pricing in life and health insurance falls under Annex III, point 5(c). Providers carry full Chapter III obligations; deployers must additionally conduct a mandatory FRIA under Article 27.
AI used to screen, score, or filter candidates falls under Annex III, point 4(a). If you build or deploy AI-powered ATS or candidate ranking tools, your compliance window is open now.
RegTech companies building AI for regulated clients are providers with full Chapter III obligations. Enterprise procurement increasingly demands AI Act compliance evidence as a condition of contract.
Under MiCA and amid growing investor scrutiny, AI governance is becoming a due diligence standard. If you have existing AML/compliance infrastructure, AI Act readiness is a natural and urgent extension.
The EU AI Act applies extraterritorially — where AI outputs affect EU residents, not where the provider is based. US, UK, and APAC companies with EU-facing AI products are fully in scope.
Most companies act when a specific commercial pressure hits — a funding round requiring AI governance evidence, an enterprise contract conditional on compliance, or a product launch with a regulatory deadline. These are the situations where we can help fastest.
If you develop an AI system and place it on the EU market under your own name — even if built on a third-party model — you are a provider. Full Chapter III obligations apply: technical documentation, QMS, conformity assessment, EU database registration.
If you use a third-party AI system in a professional context — an AI credit tool, an ATS from a vendor — you are a deployer. Lighter but significant obligations apply: human oversight, logging, FRIA (for credit and insurance AI), and staff notification.
The distinction is not always obvious. Fine-tuning or substantially modifying a third-party model may make you a provider under Article 25 — and classification itself is a legal obligation under Article 6(4) that requires legal judgment, not a template.
Get a classification opinion
Series B/C funding rounds and M&A transactions increasingly require AI governance evidence. Non-compliance discovered in due diligence can block or reprice a round. This is the strongest commercial driver we see.
A large customer sends a vendor questionnaire requiring EU AI Act compliance evidence. This is deal-blocking and immediate. We have rapid-response offerings for exactly this situation.
Launching an AI-powered product in a regulated context without prior classification and compliance is a legal exposure. Pre-launch compliance review is a non-negotiable entry condition for Annex III systems.
The AI Act applies wherever outputs affect EU residents. US, UK, and APAC companies expanding into the EU need scope assessment, compliance implementation, and potentially an authorised representative under Article 22.
Contact from a national competent authority is an emergency. We provide rapid-response gap assessment and remediation support with priority response times.
When the board directs legal or compliance to address AI regulatory risk, you need structured deliverables — a classification opinion, gap report, and governance framework — not internal research.
A bias finding, a data error, or an AI-driven decision that harms a customer. Post-incident, Article 73 reporting obligations trigger and a documented remediation plan becomes legally necessary.
Not sure if your situation applies? The free scoping call is the fastest way to find out — no commitment required.
Book Free Call
Enforcement is staggered — but several obligations are active today, and the window for structured compliance before the August 2026 Annex III deadline is narrowing.
Social scoring, emotion recognition in workplaces, real-time biometric surveillance, and certain profiling practices are already prohibited. Operating these carries the highest penalty tier.
All providers and deployers must ensure staff operating or overseeing AI systems have sufficient AI literacy. Broadly applicable and already law.
Full conformity assessment, technical documentation, QMS, human oversight, and EU database registration for credit, insurance, HR, and other listed systems. Requires a minimum of 8–12 weeks of preparation.
Providers of general-purpose AI models must register, maintain technical documentation, and assess systemic risk.
Penalties apply as the greater of the fixed amount or percentage of global annual turnover. SME thresholds exist but do not eliminate liability. The penalty regime has been active since 2 August 2025.
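The "greater of" rule above can be made concrete with a short sketch. The tier figures used here (EUR 35M or 7% of worldwide annual turnover, the commonly cited Article 99 cap for prohibited practices) are included for illustration only; the applicable tier and any SME adjustment depend on the specific infringement.

```python
def penalty_cap(fixed_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Applicable cap: the greater of the fixed amount or the
    stated percentage of worldwide annual turnover."""
    return max(fixed_eur, turnover_pct * global_turnover_eur)

# Illustrative prohibited-practice tier (EUR 35M or 7% of turnover):
# a firm with EUR 1bn turnover faces a cap of EUR 70M, not EUR 35M.
print(penalty_cap(35_000_000, 0.07, 1_000_000_000))
```

Note that for large firms the percentage branch dominates, which is why "global annual turnover" is the figure diligence teams ask about first.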
We are a specialist EU AI Act and GDPR compliance consultancy working with legal, compliance, and technology leadership in fintech, insurtech, HR tech, RegTech, and crypto — the sectors where Annex III exposure is real and willingness to act is high.
We combine deep EU regulatory expertise with genuine technical fluency in AI systems. This dual capability is what separates us from law firms (who understand the law but not the systems) and template sellers (who understand neither). Our outputs — classification opinions, technical documentation, conformity assessments — are legally defensible, not just plausible.
For regulated-sector clients, AI Act compliance must work alongside GDPR, DORA, AML, and MiCA. We handle all of this as one integrated engagement — no gaps between advisors, no conflicting controls.
Written classification opinions — not checklists. Provider vs. deployer, Annex III scope, and Article 6 assessment require legal judgment.
We read Annex IV documentation requirements and model architecture. Our gap analyses are accurate because we understand how AI systems actually work.
EU AI Act + GDPR + DORA + AML + MiCA as one integrated engagement. No disjointed advice, no gaps, no conflicting controls.
Every deliverable is produced to withstand market surveillance authority scrutiny — not just satisfy an internal audit.
Credit scoring, insurance pricing AI, recruitment systems, and AI in essential services fall under Annex III high-risk categories. Full conformity assessment is mandatory before deployment or continued operation. Fraud detection AI is explicitly excluded — but the boundary requires legal analysis.
They understood our AI systems technically and translated that into a regulatory programme our board could approve and our engineers could execute. We were audit-ready within eleven weeks of engagement.
Law firm rates typically start at €500/hour and their output is legal opinion — valuable, but not the operational compliance programme your organisation needs to implement. We deliver fixed-fee, board-ready programmes at a price point accessible to scale-ups and mid-market companies.
Regulated-sector companies using AI don't face a single regulation — they face a stack of overlapping frameworks, each with its own obligations, timelines, and enforcement bodies. Most firms advise on one at a time. That means gaps between advisors, conflicting controls, and compliance work done twice.
We handle EU AI Act, GDPR, AML, DORA, DPFT, and MiCA as a single integrated engagement — mapping obligations across every relevant framework, consolidating overlapping work, and producing a unified compliance architecture, not a collection of siloed reports.
For fintech and crypto clients in particular: your AI systems touch AML transaction monitoring, GDPR data flows, DORA operational resilience, and EU AI Act classification simultaneously. Handling these separately creates legal exposure at every seam. We close those seams.
Automated decision-making (Art. 22), DPIAs (Art. 35), lawful basis for AI training data, data minimisation. GDPR obligations run alongside and partially overlap with EU AI Act FRIA and transparency requirements.
AML fraud detection AI is explicitly carved out of Annex III high-risk classification — but this boundary requires legal analysis. AML obligations interact with AI Act data governance and human oversight requirements.
DORA's ICT risk management, third-party oversight, and operational resilience requirements apply to financial entities and overlap significantly with EU AI Act QMS (Art. 17), post-market monitoring (Art. 72), and incident reporting (Art. 73).
Sector-specific data protection in financial services — PSD2, open banking, national supervisory guidance — interacts with both GDPR and EU AI Act transparency and data governance requirements.
As crypto firms professionalise under MiCA, AI governance is becoming an investor and regulatory expectation. MiCA operational requirements complement AI Act obligations for crypto firms using AI in trading or risk assessment.
ISO 42001 certification is increasingly requested by enterprise procurement alongside EU AI Act compliance. We build governance frameworks that simultaneously satisfy ISO 42001 and EU AI Act QMS obligations — avoiding duplicated effort.
Not sure which frameworks apply? We map your full regulatory stack in the initial scoping call — at no cost.
Book Free Call
Most organisations approaching EU AI Act compliance for the first time have reasonable-sounding assumptions about what the work involves. Almost all underestimate it — not from negligence, but because the regulation is technically and legally demanding in ways that only become clear on close reading. These are the gaps we close in every engagement.
Most teams assume AI Act compliance is a tick-box exercise — fill in a form, confirm a few answers, done.
Under Article 6(4), any provider who considers their Annex III system is not high-risk must document that assessment before placing it on the market. The Commission's Article 6 guidelines were due February 2026 and have not been published. An incorrect classification is itself a compliance violation.
Article 6(4) · Annex III
A widespread assumption — and one that creates real legal exposure. Teams believe that using a third-party AI product shifts all obligations to the provider.
Article 26 requires deployers to assign human oversight, maintain logs for at least 6 months, monitor the system, and notify affected persons. For credit and insurance AI, deployers must also conduct a Fundamental Rights Impact Assessment (FRIA) under Article 27, notified to the national authority.
Article 26 · Article 27 (FRIA)
Teams with existing GDPR programmes often assume their DPIAs carry over. A logical assumption — but legally incorrect.
The FRIA under Article 27 covers fundamental rights beyond data protection — discrimination risks, access to services, democratic rights. A GDPR DPIA partially satisfies FRIA requirements but not fully. Both must exist as distinct documents.
Article 27 · GDPR Art. 35
Many teams have heard about the proposed Digital Omnibus delay and are using it to defer compliance planning entirely.
COM(2025) 836 proposed extending Annex III obligations to December 2027, but as of March 2026 it has not been adopted. Even if it passes, it extends the window — it does not remove the obligation. For organisations facing investor due diligence or procurement requirements, your commercial deadline exists regardless.
COM(2025) 836 · Current deadline: Aug 2026
Teams often assume internal-facing AI tools are outside scope because end users don't interact with them directly.
AI used to assess creditworthiness, determine insurance pricing, or screen job candidates is high-risk regardless of whether the affected person interacts with the AI directly. An underwriter using an AI pricing model is operating a high-risk system on behalf of insurance applicants.
Annex III · Article 6
Compliance teams often propose a board AI policy as the primary deliverable — a reasonable starting point, but one that satisfies none of the legal obligations under Article 9, Article 17, or Annex IV.
Article 9 requires a documented, iterative risk management system. Article 17 requires a full Quality Management System. Annex IV specifies a detailed technical documentation file per system, including architecture, training data, accuracy metrics, and human oversight design — required before market placement. A board AI policy satisfies none of these.
Article 9 · Article 17 · Annex IV
Recognise any of these conversations? These are the assumptions we address in every initial scoping call — before any engagement begins. The free call costs nothing. Understanding your actual legal position is the only way to build a compliance programme that holds up.
Book the Free Call
Before engaging a compliance advisor, use the free EU AI Act Compliance Checker from the Future of Life Institute to understand whether your AI systems are likely in scope.
The tool gives you a starting point. When you need a legally defensible written opinion — one that holds up under regulatory scrutiny, investor due diligence, or a procurement audit — that is where we come in.
Free tool vs. professional opinion: The compliance checker tells you whether obligations likely apply. It cannot classify your specific system under Article 6(4), determine your provider vs. deployer status under Article 25, or produce documentation that satisfies a notified body or market surveillance authority. That requires legal judgment — and that is what we provide.
Every engagement starts with understanding your role (provider or deployer), your sector, and your Annex III exposure. We do not run generic compliance programmes — all work is scoped to your specific systems and obligations.
Are any of your current AI practices already illegal? We audit your full AI estate against Article 5 prohibited practices and Article 4 AI literacy obligations — both already in force. Written legal opinion + 30–60 day action plan.
Written legal classification opinion per AI system: provider vs. deployer role, Annex III scope, and full regulatory obligations matrix. Classification is itself a legal obligation under Article 6(4) — and it requires legal judgment, not a template.
Full Chapter III obligations for Annex III providers: risk management system (Art. 9), technical documentation (Annex IV), QMS (Art. 17), human oversight design (Art. 14), conformity assessment preparation, and EU database registration.
Article 26 obligations assessment, human oversight protocol design, Fundamental Rights Impact Assessment (FRIA, Art. 27) for credit and insurance AI, logging protocols, vendor contract review, and staff AI literacy training.
AI Act compliance integrated with GDPR, DORA, AML/CFT, DPFT, and MiCA. We map obligations across all applicable regimes — ensuring no gaps, no conflicts, and a single coherent compliance architecture. One engagement instead of several.
Monthly regulatory intelligence briefings, quarterly AI system reviews, post-market monitoring support (Art. 72), incident triage (Art. 73), and investor/customer inquiry responses. Your permanent AI compliance function.
We work only in fintech, insurtech, HR tech, and RegTech — the sectors where Annex III exposure is real. Our entire methodology is built for these clients, not adapted from generic frameworks.
Our classification opinions and compliance assessments are legally defensible written documents. This distinction matters if a market surveillance authority asks to see your compliance evidence.
Annex IV documentation requires understanding AI system architecture, not just regulatory text. We bring both — which is why our documentation holds up under technical and legal scrutiny simultaneously.
EU AI Act + GDPR + DORA + AML + MiCA as a single advisory engagement. No disjointed advice, no gaps between advisors, no contradictory controls. Structurally different from how law firms work.
Credit scoring and creditworthiness AI is explicitly listed under Annex III, point 5(b). Any fintech using AI to determine loan eligibility is likely a high-risk provider or deployer. Fraud detection AI is carved out — but the boundary requires legal analysis.
AI for risk assessment and pricing in life and health insurance falls under Annex III, point 5(c). Providers carry full Chapter III obligations; deployers must additionally conduct the mandatory Fundamental Rights Impact Assessment (FRIA) under Article 27.
AI used to screen, filter, or rank job candidates falls under Annex III, point 4(a). Any ATS with AI-driven candidate scoring is likely a high-risk provider. Employers deploying such tools are deployers with Article 26 obligations including mandatory worker notification.
RegTech companies building AI-powered compliance tools for regulated clients become providers with full Chapter III obligations. Enterprise procurement increasingly requires AI Act compliance evidence — a direct commercial driver.
Most crypto AI (AML/fraud detection) is carved out of Annex III high-risk — but MiCA compliance culture and investor due diligence are driving AI governance spend. We position AI Act readiness as an extension of your existing AML compliance architecture.
The EU AI Act is extraterritorial — it applies where AI outputs affect EU residents, regardless of where the company is based. US, UK, Israeli, and APAC companies with EU-facing AI products are in scope. We handle scope assessment, compliance implementation, and EU authorised representative appointments under Article 22.
A structured four-phase engagement designed for speed without shortcuts. Most mandates achieve initial compliance readiness within 8–12 weeks. All outputs are immediately usable as regulatory evidence and investor-grade documentation.
Map your AI systems, intended purposes, data flows, and full regulatory exposure. Determine provider vs. deployer status. Identify Annex III scope and immediate obligations.
Written classification opinion per AI system. Full gap analysis against applicable obligations ranked by legal severity, with regulatory citations and owner assignments.
Embedded alongside your legal, compliance, and engineering teams. We draft Annex IV technical documentation, design the QMS and risk management system, prepare FRIA where applicable, and deliver staff AI literacy training.
Conformity declaration, EU database registration, regulatory watch programme, and board-level reporting. Ongoing post-market monitoring (Art. 72) and incident support (Art. 73).
Every engagement begins with a complimentary 30-minute scoping call. We will tell you candidly which programme fits your situation — no obligation to proceed. Minimum engagement €5,500.
Audit of current AI practices against Article 5 prohibited AI and the Article 4 AI literacy obligation — both already in force.
Written legal classification opinion per AI system — provider vs. deployer role, Annex III scope, and regulatory obligations matrix.
End-to-end Annex III compliance for high-risk AI providers and deployers — from classification through to conformity declaration.
Ongoing AI compliance governance — regulatory watch, post-market monitoring, incident support, and investor inquiry responses.
Our Series B investor asked for AI governance evidence six weeks before close. Scanlex produced a classification opinion, gap analysis, and interim governance framework in three weeks. The diligence process completed without an AI Act condition. That engagement paid for itself many times over.
We were uncertain whether our AI pricing tool made us a provider or a deployer. That classification question was blocking our entire compliance programme. Scanlex produced a written legal opinion in two weeks that gave our legal team the foundation to proceed.
A large enterprise client required EU AI Act compliance evidence as a procurement condition. We had eight weeks. Scanlex scoped, classified, and produced the required documentation on time. We retained the contract.
Can't find your answer? Contact us directly — we respond within one business day.
Send us a brief message and a senior advisor will respond within one business day — with an honest assessment of what you need and what it will cost. No sales team. No obligation.