Ignorance is NOT bliss - how the EU AI Act’s global reach can catch your business out (and what to do to avoid it…)

10 June 2025

Neil Jennings


➡️ Fact #1: The EU AI Act has global application and steep fines for non-compliance

➡️ Fact #2: Mitigating your AI risk is methodical and provides certainty

➡️ Fact #3: Assumptions are the mother of all… problems

➡️ Myth #1: The EU AI Act only applies to EU companies

➡️ Myth #2: AI risks are purely regulatory, with no commercial consequences

➡️ Myth #3: It’s too complicated to start mitigating AI supply chain risks, so don’t bother


The EU AI Act seems to be everywhere these days. Many of its obligations may already apply to your business, even if you’re not based in Europe. Why? Because the Act regulates how AI is built, sold, embedded, and used, not just where.



This article breaks down the hidden risks and shows how to navigate them with confidence.


Spot Quiz

What is an AI system?

What is a GPAI model?

Where is your AI system or GPAI model used?

What contractual provisions protect you right now?

Do you have an AI-specific contract due diligence process, either as seller or buyer?

What is an operator?

What is a ‘high risk’ AI system?

What’s the difference between Annex I and Annex III?

Do you need a QMS in place?

Do you even know what a QMS is?


If you struggled to answer any of the questions above, you are living in a world of uncertainty.


The Basics

Simply put, the Act regulates the creation and use of AI systems and GPAI models across different ‘operators’ and different risk categories. It is being phased in over several years, some provisions are already in force, and it imposes steep fines for non-compliance.


As a reminder:

👉 An AI System is any machine-based tool that infers from its inputs (data, prompts) how to generate outputs (predictions, content, recommendations, decisions) that can influence physical or virtual environments.

👉 A GPAI Model is a foundational AI model (e.g. GPT-4, DALL·E, Gemini) that displays significant generality and can power multiple downstream AI systems. A GPAI model is not an AI system in itself, but it carries its own specific obligations under the Act.


The different operators are:

🔷 Provider: Develops the AI system, or has it developed and places it on the market under their own name or brand

🔷 Deployer: Uses the AI in the course of business

🔷 Distributor: Makes the AI system available on the EU market (other than as the provider or importer)

🔷 Importer: Established in the EU and places an AI system from a non-EU provider on the EU market


The different risk levels are:

🔴 Prohibited AI: Defined in the Act and banned since February 2025. Includes practices such as social scoring and certain forms of biometric surveillance.

🟠 High risk: A complex definition that captures AI in products already subject to EU safety legislation and CE marking (Annex I), and standalone use cases listed in the Act itself (Annex III). Includes medical imaging, recruitment screening, and credit scoring systems.

🟡 Limited risk: Includes AI systems that interact with humans (e.g. chatbots), where transparency obligations apply (i.e. users must be clearly informed that they are interacting with AI).

🟢 No / minimal risk: These systems present negligible risk and are not subject to specific obligations under the Act (e.g. AI-enabled email spam filters).


It’s More Complex Than It Seems

The main issue is that you could be a provider (and thus assume all relevant obligations) without knowing it.


Things become complex because the creation and distribution of any software (including AI) is often built on an intricate, widespread network of builders, integrators, resellers, cloud providers, and end-users. The highly distributed and somewhat opaque nature of this network is where the complexities lie, and where unsuspecting companies can trip up. Where there is uncertainty about the ultimate destination, use, and branding of an AI system, obligations will naturally be obscured.


🔊 We just build the AI, we’re a SaaS company…

🔊 We thought the distributor handled all of that stuff…

🔊 We don’t even sell to the EU…


Real-World Operator Blind Spots

These complexities play out in the real world more often than you might think. Here are just a few scenarios where businesses are caught off guard:


‼️ The provider based outside the EU who ends up with a monumental fine because they simply didn’t know their AI system was being used in the EU

‼️ The importer or distributor who has a duty to verify the provider has performed a conformity assessment, affixed the CE mark, drawn up technical documentation, and appointed an EU authorised representative before placing the system on the market

‼️ The deployer who becomes a provider by accident when they modify the AI tool they purchased and it becomes a high risk system, such as one used for employee performance evaluation


Where Businesses Trip Up

There are countless examples of where businesses can trip up and be left holding a very expensive bill for a meal they didn’t know they ate…


🧩 The invisible provider - a Canadian company that builds an AI tool, which gets resold by a UK firm and lands in Germany. It’s now in scope, but the Canadian firm has no visibility and no idea. Yes, the importer has obligations to check before resale, but they aren’t responsible for obtaining the CE mark.

🧩 The silent importer - an EU-based partner casually adds your AI system to a suite of AI tools they sell, but your team didn’t know about it, and didn’t realise they needed to get the CE marking or appoint an authorised rep. Again, the importer needs to check before resale, but if they don’t, you’re still on the hook and need to rely on contractual provisions.

🧩 The internal modification provider - a French HR team modifies a chatbot to undertake employee evaluations. It began as a limited risk system, but the modifications have made it high risk and, crucially, have morphed the company into a provider, triggering all the high risk obligations.

🧩 Countless other examples exist:

  • Tiered reselling, where resellers have no obligation to report the AI tool’s final destination back to its creator
  • Cloud / API infrastructures, where the EU is not ‘targeted’ as a destination, but the AI system is nonetheless made available there
  • The internal-to-external deployer, who becomes a provider when an AI tool previously used only outside the EU is rolled out to its EU offices

The Path To Certainty

There is no magic wand or silver bullet. Mitigating supply chain uncertainty under the EU AI Act is tedious but methodical: it’s about mapping AI systems from provider to deployer, understanding the various risks and gaps, and asking the right questions to obtain the right information, so that you can make risk-aware decisions when buying and selling AI systems and GPAI models.


Step 1: AI Systems Inventory & Risk Mapping

  • Map all AI systems in use or being developed
  • Assess risk categories of each
  • Identify current and foreseeable end markets
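
For teams who want to operationalise Step 1 right away, the minimal sketch below shows one way an AI systems register could look in code. It is purely illustrative: the field names, risk labels, and country codes are our own assumptions rather than terms defined by the Act, and flagging logic this simple is a starting point for asking the right questions, not a compliance determination.

```python
# Purely illustrative sketch of an AI systems inventory (Step 1).
# The categories and codes below are assumptions for illustration only,
# not definitions taken from the EU AI Act; this is not legal advice.
from dataclasses import dataclass, field
from enum import Enum

class Risk(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    DISTRIBUTOR = "distributor"
    IMPORTER = "importer"

@dataclass
class AISystem:
    name: str
    our_role: Role        # how we touch this system today
    risk: Risk            # our current working risk classification
    end_markets: list[str] = field(default_factory=list)  # where it actually lands

EU = {"DE", "FR", "IE", "NL"}  # illustrative subset of EU markets

def eu_exposure(inventory: list[AISystem]) -> list[AISystem]:
    """Flag systems that reach the EU and sit in the high risk (or worse) bucket."""
    return [
        s for s in inventory
        if set(s.end_markets) & EU and s.risk in (Risk.HIGH, Risk.PROHIBITED)
    ]

inventory = [
    AISystem("CV screening tool", Role.PROVIDER, Risk.HIGH, ["CA", "UK", "DE"]),
    AISystem("Support chatbot", Role.DEPLOYER, Risk.LIMITED, ["US"]),
]

for system in eu_exposure(inventory):
    print(f"Review: {system.name} ({system.risk.value} risk, role: {system.our_role.value})")
```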

Step 2: Supply Chain Due Diligence

  • Map the AI’s journey from development to end user
  • Identify all parties involved (integrators, distributors, cloud providers, resellers, end-users)
  • Review all contracts for EU AI Act obligations
  • Note: if you are not based in the EU but you are a provider of high risk AI systems in the EU, you must appoint an EU authorised representative to liaise with EU authorities

Step 3: Establish a Compliance Framework

  • This will be tailored to your risk tolerance
  • High risk - a QMS (quality management system), technical documentation, human oversight, data governance, post-market monitoring, etc.
  • Limited risk - Transparency to users, etc.
  • GPAI Models - technical documentation, training data transparency, a copyright policy, and potentially model evaluations and incident reporting mechanisms.

Step 4: Training & AI Literacy

  • Put in place role-specific AI Literacy training for relevant staff and contractors
  • Additional commercial training should include supply chain and global risk awareness
  • Reach out for a FREE copy of GLF’s AI Literacy Starter Kit or comment “LITERACY” on the LinkedIn post

Step 5: Monitor & Update

  • Keep up to date with AI regulatory developments, and monitor internal operations to ensure the AI systems you sell or use do not compromise your business

The Final Say

This is not about ticking boxes. It’s about identifying the gaps that expose you to commercial and regulatory risk. When you know where you stand, you can make risk-aware decisions. In some cases, that means accepting the risk based on your specific circumstances and risk tolerance. In other cases, you will want to mitigate the risks and put action plans in place to address the gaps you have identified.


The problem is not the risks. The problem is uncertainty. When you don’t know your role or what your AI is doing in-market, you’re exposed: commercially, reputationally, and legally.


At GLF Strategic Compliance, we bring clarity to complexity. We help businesses identify real exposure and build smart, scalable risk strategies before those blind spots turn into expensive problems.


The sooner the better. Your customers, staff, and investors will thank you for it. Avoid financial penalties and keep your hard-earned reputation intact. Ask us about our AI Operator Mapping Global Liability Package to:

Know Your Role, Know Your Liabilities

🔎 Uncover Hidden Exposure

👉 Prioritise Action


This content is informational only and not legal advice. GLF is not a law firm regulated by the SRA.

Secure Your Business With Us

Get in touch to talk about AI governance, compliance and risk management solutions!