30 March 2025
The EU AI Act is a landmark regulation aimed at creating safe and trustworthy AI systems and General Purpose AI (GPAI) models. The Act defines an AI system as
“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
AI systems, in other words, can include products such as chatbots, recommendation engines, and generative AI tools.
A GPAI model is often an essential component of an AI system. The EU AI Act does not define GPAI models in the same level of detail as AI systems, but broadly describes them as ‘having the capability to serve a wide range of distinct tasks and that can be integrated into other AI systems’.
Some examples of GPAI models include large language models (LLMs) such as GPT-4.
In a sense, this means that AI products like chatbots or generative AI tools are the AI systems, and the LLMs used to power them, like GPT-4, are the GPAI models.
How does the EU AI Act work?
The Act takes a risk-based approach to obligations, where higher risk AI systems (and GPAI models with ‘systemic risk’) are subject to stricter controls. Some types of AI are considered unacceptable, and have been prohibited since February 2025. The Act sets out obligations on four different ‘operators’, each of which has specific obligations in relation to high and limited risk AI systems, and systemic risk GPAI models. The various requirements and obligations are being phased in over a period of years, with the earliest deadlines having already passed and the final provisions coming into force in 2030.
The majority of the obligations that businesses should be aware of and plan for will come into force in August 2026. This includes important AI governance requirements as well as the majority of enforcement activities.
However, there are transitional provisions for specific systems. For example, high risk AI systems placed on the market or put into service before August 2026 need only comply once they undergo significant changes to their design, and GPAI models placed on the market before August 2025 have until August 2027 to comply.
What is the implementation timeline?
The Act will be phased in over a number of years, and there are many specifics when it comes to obligations, enforcement, and other requirements. However, the broad dates to watch out for (subject to the transitional provisions mentioned above) are as follows:
- February 2025: prohibitions on unacceptable risk AI take effect
- August 2025: obligations for GPAI models begin to apply
- August 2026: the majority of obligations, including AI governance requirements and most enforcement activities, come into force
- August 2027: deadline for GPAI models placed on the market before August 2025 to comply
- 2030: the final provisions come into force
Who are the four operators?
The four operators are (i) provider, (ii) deployer, (iii) distributor, and (iv) importer. The first two are subject to the most requirements. Each of these four operators takes on a different role in the creation, use, development, or implementation of AI, and each has a distinct set of obligations that relates directly to the level of risk as determined by the Act, set out in more detail below.
As an overview, AI posing unacceptable risk is no longer permitted, while minimal/low risk AI is not regulated under the Act. This leaves high risk and limited risk, and each of the operators must observe certain obligations in relation to the specific risk level.
Importantly, all operators can become Providers (or at least be subject to Provider obligations) if they undertake certain activities, such as making substantial changes to an AI system or marketing it under their own name.
What are the different levels of risk?
Risks are broken down into four thresholds: unacceptable (prohibited since February 2025), high (numerous obligations), limited (some obligations) and no/minimal risk (not regulated). As mentioned above, different operators have different obligations depending on the level of risk of the AI system or GPAI model.
The different levels of risk are:
- Unacceptable risk: AI practices considered a clear threat to people, which are prohibited outright
- High risk: AI systems subject to the most extensive obligations, both before and after being placed on the market
- Limited risk: AI systems subject to lighter obligations, primarily around transparency
- No/minimal risk: AI systems not regulated under the Act
What are the obligations?
This is a very detailed and lengthy subject, which I will discuss in a later article. However, as a broad overview, some of the obligations include:
- establishing a risk management system and data governance practices for high risk AI systems
- preparing technical documentation and keeping records
- ensuring transparency, such as informing people when they are interacting with AI
- providing for human oversight, accuracy, robustness, and cybersecurity
- registering high risk AI systems and completing conformity assessments
What are the consequences for failure to comply?
Similar to the GDPR, the Act places hefty financial penalties on businesses that fail to observe their obligations. Of course, the amounts imposed will depend on the specific circumstances; just as GDPR fines are considered on a case by case basis, the same will happen under the EU AI Act. At their highest (for breaches of the prohibitions), the fines will be the higher of:
- €35 million; or
- 7% of total worldwide annual turnover for the preceding financial year.
Again, we will need to wait and see how fines are handed out over time. But as we have seen with GDPR, some fines have run into the hundreds of millions, and one even exceeded a billion euros back in 2023.
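Because the top-tier fine is expressed as the higher of a fixed amount and a turnover percentage (€35 million or 7% of total worldwide annual turnover for breaches of the prohibitions), the calculation reduces to a simple maximum. A minimal sketch, with an illustrative function name of my own choosing:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Top-tier EU AI Act fine (prohibited AI practices): the higher of
    EUR 35 million or 7% of total worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For a business with EUR 1 billion turnover, 7% (EUR 70m) exceeds EUR 35m:
print(max_fine_eur(1_000_000_000))  # → 70000000.0

# For a business with EUR 100 million turnover, the EUR 35m floor applies:
print(max_fine_eur(100_000_000))  # → 35000000.0
```

Note the asymmetry this creates: for any business with turnover above €500 million, the percentage prong is the binding one.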
What is the best path to compliance?
This is the million-dollar question. Navigating the Act can be complex: it requires a solid understanding of where a business fits within the operator definitions, the risk level of the AI system or GPAI model, and the transitional implementation timelines, as well as the ability to plan for implementation and compliance. Compliance cannot happen overnight, and taking a reactive approach will lead to confusion and fines.
Taking time to think about risk appetite, the product, the consumer, the financial and reputational implications, as well as the resourcing required to get to the point of desired compliance will be critical.
Are you thinking about EU AI Act compliance? Or maybe you want to talk about AI governance frameworks to make sure your business uses or develops AI in a responsible and trustworthy manner?
Send me an email if you want to talk more!