An Introduction to the EU AI Act


30 March 2025


The EU AI Act is a landmark regulation aimed at creating safe and trustworthy AI systems and General Purpose AI (GPAI) models. The Act defines an AI system as


“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”



AI systems, in other words, can include:

  • Recruitment tools, where software uses algorithms to perform a specific task, like reviewing resumes and cover letters. It then predicts which candidates are best suited for the role. The input is resume information, while the output is essentially a ranking of candidates.
  • Medical imaging diagnosis. Software can review images (x-rays, MRIs, etc) to either detect or assist in diagnosis. The input is the image and the output is a prediction of diagnosis.
  • Customer service chatbots. Tools that use natural language processing can interact with customers to understand queries and respond. The input is the customer’s message, and the output is a generated response.
  • Email spam filtering. The filter reviews incoming emails and predicts whether each message is spam.

A GPAI model can be an essential component of an AI system. The EU AI Act does not define GPAI models exhaustively; they are broadly described as ‘having the capability to serve a wide range of distinct tasks and that can be integrated into other AI systems’.

Some examples of GPAI models include:

  • Large Language Models (LLMs) (e.g. GPT-4) - these models can perform a wide range of tasks, such as text generation, translation, answering questions and generating code. They can be used as the building blocks for various downstream AI systems. It is likely that, when you use an AI tool, there will be an underlying LLM created by a third party.
  • Image generation models (e.g. DALL-E) - ability to create images from text prompts and have a wide variety of applications.
  • Foundation models (e.g. Google Gemini) - a broader term that includes LLMs. These models are trained on large datasets and can be adapted to various tasks, including NLP and image recognition. All LLMs and image generation models are foundation models.

In a sense, this means that AI products like chatbots or generative AI tools are the AI systems, and the LLMs used to power them, like GPT-4, are the GPAI models.


How does the EU AI Act work?

The Act takes a risk-based approach to obligations: higher-risk AI systems (and GPAI models with ‘systemic risk’) are subject to stricter controls. Some types of AI are considered unacceptable and have been prohibited since February 2025. The Act sets out obligations for four different ‘operators’, each of which has specific obligations in relation to high-risk and limited-risk AI systems and systemic-risk GPAI models. The various requirements and obligations are being phased in over a period of years: the earliest deadlines have already passed, and the final ones come into force in 2030.


The majority of the obligations that businesses should be aware of and plan for will come into force in August 2026. This includes important AI governance requirements as well as the majority of enforcement activities.


However, there are transitional provisions for specific systems. High-risk AI systems placed on the market or put into service before August 2026 need only comply once they undergo significant changes, and GPAI models placed on the market before August 2025 have until August 2027 to comply.


What is the implementation timeline?

The Act will be phased in over a number of years, and there are many specifics when it comes to obligations, enforcement, and other requirements. However, the broad dates to watch (subject to the transitional provisions mentioned above) are as follows:

  • August 2024: Act enters into force
  • February 2025: Unacceptable-risk AI prohibited
  • May 2025: Final Codes of Practice for GPAI models expected
  • August 2025: GPAI model provider obligations in force
  • August 2026: Obligations for some high-risk AI systems in force
  • August 2027: Obligations for remaining high-risk AI systems in force

Who are the four operators?

The four operators are (i) provider, (ii) deployer, (iii) distributor, and (iv) importer. The first two are subject to the most requirements. Each of these four operators plays a different role in the creation, development, use, or implementation of AI, and each has a distinct set of obligations that relates directly to the level of risk as determined by the Act, set out in more detail below.


As an overview, unacceptable risk is no longer permitted, while minimal/low risk AI is not regulated under the Act. This leaves high risk and limited risk, and each of the below operators must observe certain obligations in relation to the specific risk level.

  • Provider - the entity that develops or commissions the AI system and places it on the EU market, or that first places the AI system on the EU market under its own name. An example is a tech company that develops and sells AI recruitment software.
  • Deployer - the entity that uses the AI system for professional purposes, like an HR department that uses the AI recruitment software.
  • Distributor - an entity that supplies an AI system but is not the provider, like a tech sales company that re-sells the AI recruitment software to the Deployer.
  • Importer - an EU-based entity that brings AI systems into the EU market from outside the EU.

Importantly, any operator can become a Provider (or at least be subject to Provider obligations) if it undertakes certain activities, such as making substantial changes to the AI system in its own name.


What are the different levels of risk?

Risks are broken down into different thresholds: unacceptable (prohibited since Feb 2025), high (numerous obligations), limited (some obligations) and no/minimal risk (not regulated). As mentioned above, different operators have different obligations depending on the level of risk of the AI system or GPAI model.


The different levels of risk are:

  • Unacceptable - this level of risk is now strictly prohibited. It includes things like using real-time biometric identification in public spaces for law enforcement (think CCTV cameras that know who you are and predict your behaviour); creating social scoring mechanisms based on personal characteristics (such as denial of a bank loan based on your personal and professional profile); and using subliminal techniques to influence people and impair their ability to make informed decisions.
  • High risk - this category is the meat of the EU AI Act and is the most heavily regulated part. This level of risk relates to AI systems with a high risk to the health and safety of EU citizens, such as: (i) safety products that are already regulated by EU product safety law, (ii) biometrics used for remote identification, (iii) safety components of critical infrastructure, (iv) recruitment or promotion selection, (v) access to services like benefits or healthcare, and (vi) access to education or vocational training. The obligations associated include technical requirements, technical documentation, provision of information, registration in the EU database, data governance, human oversight, and other obligations.
  • Limited risk - these risks carry fewer obligations than high risk, but must still be observed by the Provider and the Deployer. These include, for example, transparency provisions and ensuring that consumers know when / if they are communicating with AI. Limited risk obligations do not extend to Distributors or Importers, unless they, for example, substantially change the AI system in their own name, and de facto become subject to the Provider’s obligations.

What are the obligations?

This is a very detailed and lengthy subject, and I will discuss in a later article. However, as a broad overview, some of the obligations include:

  • Preparing and maintaining appropriate technical documentation
  • Maintaining record logs
  • Ensuring sufficient human oversight is in place
  • Registration within the EU database (and keeping registration current)
  • Obtaining the CE conformity certification
  • Continuous monitoring (for example, safety, security, incident response, bias, etc.)
  • Compliance with authorities

What are the consequences for failure to comply?

Similar to the GDPR, the Act imposes hefty financial penalties on businesses that do not observe their obligations. Of course, actual amounts will depend on the specific circumstances: just as GDPR fines are assessed on a case-by-case basis, so too will fines under the EU AI Act. At their maximum, fines will be the higher of:

  • Prohibited AI breach: 7% of global annual turnover or €35M
  • Other AI breach: 3% of global annual turnover or €15M
  • Incorrect or misleading information: 1% of global annual turnover or €7.5M
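To illustrate how the ‘higher of’ mechanism works in practice, here is a minimal sketch. The tier percentages and floor amounts come from the list above; the function itself and the turnover figures are hypothetical and purely for illustration:

```python
def max_fine(annual_global_turnover_eur: float, tier: str) -> float:
    """Illustrative maximum fine under the EU AI Act's 'higher of' rule.

    Tier percentages/floors are taken from the Act's penalty provisions;
    this helper is a sketch, not legal advice.
    """
    tiers = {
        "prohibited": (0.07, 35_000_000),   # prohibited-AI breaches
        "other":      (0.03, 15_000_000),   # most other breaches
        "misleading": (0.01, 7_500_000),    # incorrect/misleading information
    }
    pct, floor = tiers[tier]
    # The fine cap is whichever is HIGHER: the percentage of turnover or the fixed floor.
    return float(max(pct * annual_global_turnover_eur, floor))

# A company with €2bn turnover: 7% (€140M) exceeds the €35M floor.
print(max_fine(2_000_000_000, "prohibited"))  # 140000000.0
# A small company with €10M turnover: the €35M floor applies instead.
print(max_fine(10_000_000, "prohibited"))     # 35000000.0
```

The point to notice is that the fixed floor means even a company with modest turnover faces the full €35M exposure for a prohibited-AI breach.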

Again, we will need to wait and see how fines are handed out over time. But as we have seen with the GDPR, some fines have run into the hundreds of millions, and one exceeded a billion euros in 2023.



What is the best path to compliance?

This is the million euro question. Navigating the Act can be complex: it requires an understanding of where a business fits into the operator definitions, the risk level of the AI system or GPAI model, and the transitional implementation timelines. Above all, it requires a solid understanding of all of the foregoing and the ability to plan for implementation and compliance. Compliance cannot happen overnight, and taking a reactive approach will lead to confusion and fines.


Taking time to think about risk appetite, the product, the consumer, the financial and reputational implications, as well as the resourcing required to get to the point of desired compliance will be critical.


Are you thinking about EU AI Act compliance? Or maybe you want to talk about AI governance frameworks to make sure your business uses or develops AI in a responsible and trustworthy manner?

Send me an email if you want to talk more!
