6 March 2026
Neil Jennings
The EU AI Act is now in force. Not all obligations apply yet, and implementation timelines remain uncertain across member states. But organisations still face a practical question: how should they begin thinking about compliance?
This article looks at that reality from the business side. There is a slight irony here: for all the complexity of AI, classification starts with something more basic - visibility into an organisation’s own systems.
As we know from European data protection regulation, geographic distance does not mean distance from regulatory exposure. Both GDPR and the EU AI Act have extraterritorial reach, meaning that organisations that target or reach EU users will find themselves in scope regardless of where they are headquartered.
EU businesses are by now pretty familiar with EU-wide obligations. As markets have become more global, and data-based services have expanded, non-EU organisations have also experienced the application of EU law in practice. Yes, it is true that the effectiveness of enforcement is up for debate, but the fact of exposure is not. The AI Act sits in that category. Some practices are already prohibited, like the use of AI systems for social scoring or certain emotion recognition - the ‘unacceptable’ risk category.
Other obligations, notably around ‘high risk’ systems and transparency, are subject to evolving timelines and guidance. Implementation dates for high risk AI systems appear likely to shift further into 2027 and 2028, whereas some obligations were originally set to come into force in August 2026.
Codes of practice are being finalised, and most member states have yet to establish full national frameworks and regulatory bodies. Ireland is the most recent example: its national implementation law illustrates the complexity - a distributed, multi-entity regulator model of AI oversight. Multiply that across member states and the enforcement landscape becomes highly fragmented.
None of this simplifies exposure or compliance. Waiting for clarity doesn’t help; it just narrows the runway. And, all the while, AI capabilities and adoption continue to increase.
Where entities operate across multiple jurisdictions - even with only a small EU presence - complexity compounds and data flows can become complicated very quickly. To make things harder, AI adoption is inconsistent, and many companies are using a wide range of tools for different teams in different contexts. The siloes don’t help.
Without a reasonably accurate picture of what AI tools are in use - and how they are used - it is very difficult to assess scope or obligations in a meaningful way. While relatively few tools may fall within the high risk category under the AI Act (i.e. Annex I and Annex III), that is only part of the analysis required to understand exposure. Even simple ‘administrative’ failures can attract significant penalties.
That lack of visibility is a symptom of the inconsistent adoption - different decision makers saying yes to different tools with different parameters and desired results. In other words, coordinated governance rarely exists. As is common in many legal and compliance fields, governance and operational controls can be an afterthought. Anyone who has lived through GDPR implementation will recognise the pattern.
But the key takeaway is this: a clear internal picture is a prerequisite for companies to begin assessing their AI tools - are they internal or external, what AI Act risk categories apply, what operator roles are being assumed? For most companies, there was never a clear picture to begin with, and those questions have been answered informally at best.
Operator analysis looks quite simple at first. The first gate: does the AI system or GPAI model touch the EU market? If yes, the second gate is to assess operator roles.
The complexity lies not in the definitions of ‘provider’, ‘deployer’, ‘importer’ or ‘distributor’, but in the corporate structure itself - and no two structures are identical. Consider a global business headquartered in the United States with a small EU subsidiary. Most AI tools may be internal and used only by US-based staff. Those tools may sit entirely outside the scope of the AI Act.
Now consider that the EU subsidiary employs one person and uses AI-enabled recruitment software under a white-labelled arrangement. Exposure may arise through that local use alone. The EU entity could qualify as an importer and, in certain circumstances, assume obligations typically associated with a provider. That exposure may not be immediately visible, and organisations need clarity on (i) entities, (ii) tools, (iii) usage patterns and (iv) market touchpoints.
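Purely as an illustrative sketch, that two-gate triage can be written down. Everything below is a hypothetical simplification - the field names, the flags, and the mapping from facts to roles are invented for illustration and are not the Act’s actual legal tests:

```python
from dataclasses import dataclass
from enum import Enum, auto

class OperatorRole(Enum):
    PROVIDER = auto()
    DEPLOYER = auto()
    IMPORTER = auto()
    DISTRIBUTOR = auto()

@dataclass
class AITool:
    name: str
    touches_eu_market: bool             # gate 1: placed on or used in the EU market?
    used_by_eu_entity: bool             # local use by an EU group entity
    supplied_by_non_eu_provider: bool   # supply-chain fact that may point to importer duties
    white_labelled: bool = False        # rebranding can pull in provider-type obligations

def triage(tool: AITool) -> set[OperatorRole]:
    """First-pass triage only; real role analysis needs legal review."""
    roles: set[OperatorRole] = set()
    if not tool.touches_eu_market:
        return roles                    # gate 1 failed: likely outside territorial scope
    if tool.used_by_eu_entity:
        roles.add(OperatorRole.DEPLOYER)
        if tool.supplied_by_non_eu_provider:
            roles.add(OperatorRole.IMPORTER)
    if tool.white_labelled:
        roles.add(OperatorRole.PROVIDER)  # provider-type obligations may attach
    return roles

# The recruitment example above: one EU employee, white-labelled tool.
hr_tool = AITool("recruitment-screener", touches_eu_market=True,
                 used_by_eu_entity=True, supplied_by_non_eu_provider=True,
                 white_labelled=True)
print(triage(hr_tool))  # deployer, importer and provider roles all surface
```

The point of a structure like this is not the code - it is that it forces the four questions (entities, tools, usage patterns, market touchpoints) to be answered explicitly, per tool, rather than informally.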
Many problems arise from systemic deficiencies. Some AI tools are implemented quietly, or without appropriate technical, legal or management oversight. Others are connected to, or layered on top of, legacy systems. Gradually, adoption of AI tools and agentic systems creates a tangled web of data processing - information flows that must be lawful and properly protected.
As technical debt accumulates, so does legal and compliance debt. Often, that legal and compliance debt can be handled with time and investigation. Not a new policy or training course, but a dive into the systems, tools, uses, and data flows. Robust classification - and good advice - starts with visibility into internal systems. Where data originates. Which entities process it. Who has access. What permissions exist. How outputs are relied upon. How long information is retained. How systems interact.
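To make that concrete: a minimal, hypothetical inventory record, with those questions as fields. The schema and field names are invented for illustration and are not drawn from the Act or any standard:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in a hypothetical internal AI inventory (illustrative only)."""
    system_name: str
    data_origin: str                 # where data originates
    processing_entities: list[str]   # which entities process it
    access: list[str]                # who has access
    permissions: str                 # what permissions exist (e.g. lawful basis, approvals)
    output_reliance: str             # how outputs are relied upon
    retention_period: str            # how long information is retained
    integrations: list[str] = field(default_factory=list)  # how systems interact
```

Even a spreadsheet with these columns is a serious start; the value lies in answering each question per system, in writing.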
There is still confusion around the legal effectiveness of EU AI Act obligations. The last few weeks of delays, discussions and meetings of different EU bodies have not helped. But what is fairly certain is that things will not become simpler from an accountability standpoint. We have seen this with GDPR - there is always complexity and nuance, like the joint controller issue in Russmedia, or the issue of pseudonymised data in the SRB decision - but none of that has eliminated the requirement to clearly and accurately describe what data is processed, for what purposes, what third parties it is shared with, how it’s protected, and for how long it is kept. AI systems are no different, and now is the time to undertake this exercise. In the long run, it will be the smart thing to do.
In effect, complexity doesn’t remove the requirement to be able to describe, accurately and coherently, what data is processed, for what purposes, by which entities, under what permissions, and with what safeguards.
AI systems, automation and agentic configurations amplify everything: more tools, more data flows, more entities, more interdependencies.
Companies cannot classify AI systems under the AI Act until they understand their own systems and data flows. The first step is visibility. Understanding what systems exist, where data flows, which entities are involved, and how outputs are used allows for meaningful role and risk classification.
This article is for information only and not intended as legal advice.
Get in touch to talk about AI governance, compliance and risk management solutions!