Hidden legal obligations - Why California AI developers should care about Europe

17 Oct 2025

Neil Jennings


For an overview of the EU AI Act, see this article. For an overview of obligations under California’s Transparency in Frontier AI Act (TFAI Act), see this overview. This article is specific to California-based AI companies and their potential obligations under European law.



Background

The EU AI Act is a world-first in AI regulation. It places obligations on different ‘operators’ across the AI supply chain in relation to ‘AI systems’ and ‘General Purpose AI Models’ (GPAI models), based on a defined risk spectrum. The Act is extraterritorial: it doesn’t matter where an entity is located; what matters is where the entity sits within the AI supply chain.


California, on the other hand, has introduced the Transparency in Frontier AI Act in relation to ‘frontier developers’, who build ‘frontier models’. Where annual turnover exceeds USD 500 million, the entity is a ‘large frontier developer’. The TFAI Act doesn’t apply to any other entity within the AI supply chain - it’s localised to California-based frontier developers.


On the surface, these are two very separate regimes - they have different goals and different obligations for different players.


So why should California developers care about Europe?


Compute power thresholds

One key area of overlap relates to the compute power used to build, train and/or fine-tune the AI. In particular:

  • The EU AI Act says that, where training compute exceeds 10^25 FLOPs*, the model is considered a GPAI model with systemic risk
  • The TFAI Act says that, where training compute exceeds 10^26 FLOPs, the model is considered a frontier model

* In July 2025, the European Commission released GPAI Guidelines stating that a ‘downstream modifier’ is considered the provider of a GPAI model if the training compute used for the modification is greater than one-third of the original training compute. This downstream-modifier rule is highly unlikely to apply here.
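
To make the arithmetic concrete, here is a minimal sketch in Python. It is purely illustrative: the two threshold values are the figures discussed above, while the function name and the example compute figure are my own inventions.

```python
# Illustrative only - not a compliance tool. Threshold values are the
# figures discussed above; the exact comparators are set out in the statutes.

EU_SYSTEMIC_RISK_FLOPS = 1e25   # EU AI Act: GPAI model with systemic risk
TFAI_FRONTIER_FLOPS = 1e26      # TFAI Act: frontier model

def statutory_labels(training_flops: float) -> list[str]:
    """Return which statutory labels a given training-compute figure triggers."""
    labels = []
    if training_flops > EU_SYSTEMIC_RISK_FLOPS:
        labels.append("EU AI Act: GPAI model with systemic risk")
    if training_flops > TFAI_FRONTIER_FLOPS:
        labels.append("TFAI Act: frontier model")
    return labels

# A hypothetical model trained with 2 x 10^26 FLOPs clears both thresholds:
print(statutory_labels(2e26))
# ['EU AI Act: GPAI model with systemic risk', 'TFAI Act: frontier model']
```

The point is simply that 10^26 is an order of magnitude above 10^25: any frontier model under the TFAI Act automatically sits above the EU systemic-risk line.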


It’s a simple overlap, but the risk of tripping up is huge for California developers. The simple message is this: if you are a frontier developer - large or not - and you make your frontier model available to the EU market, then you will almost certainly be considered a ‘provider’ of a GPAI model with systemic risk under the EU AI Act.


Why?

  1. Compute power (the compute required to build a frontier model under the TFAI Act exceeds the threshold for a GPAI model with systemic risk under the EU AI Act); and
  2. Availability in the EU market.

While this will likely only apply to a handful of businesses at the time of writing, the cost of computational power will almost certainly decrease, meaning a greater number of California-based entities will create frontier models and potentially come within the extraterritorial scope of the EU AI Act. The TFAI Act contains provisions to review the appropriateness of the compute power threshold, but I would anticipate more and more frontier developers appearing in the coming years.


How market availability triggers EU obligations

The key to whether an entity falls within scope of the EU AI Act is market access. Crucially, that access can be indirect, such that a California-based frontier developer is subject to the EU AI Act without providing direct access to the EU market, or without even knowing it.


Art 2 of the EU AI Act states that the Act applies to providers:

“Placing on the market GPAI models in the Union, irrespective of whether those providers are established or located within the Union or a third country.”


This means it’s all about the supply chain. Hosting and development location are entirely irrelevant. In practical terms, a California-based frontier developer would place their AI on the EU market by:

  • Selling or licensing directly to an EU-based entity
  • Making a version available to individuals based in the EU (e.g. by way of an API endpoint)
  • Having the model integrated into a product available in the EU (e.g. an AI product using your GPAI model / frontier model sold on an EU app store)
  • Having the model enter the EU by way of a re-seller, distributor, etc. (this is the biggest blind spot and could happen without your knowledge or agreement)
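
As a rough self-assessment aid, those four routes can be written down as a checklist. This is a hypothetical sketch, not legal logic; the route names and the function are illustrative inventions:

```python
# Hypothetical checklist of the four routes to the EU market described above.
# Any single route being true suggests the model may have been 'placed on
# the market' in the Union - hosting and development location are irrelevant.

def possibly_on_eu_market(routes: dict[str, bool]) -> bool:
    return any(routes.values())

routes = {
    "direct_sale_or_license_to_eu_entity": False,
    "version_available_to_eu_individuals_via_api": False,
    "integrated_into_product_available_in_eu": False,
    "entered_eu_via_reseller_or_distributor": True,  # the blind spot
}

print(possibly_on_eu_market(routes))  # True
```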

When do both laws apply?

If you are a frontier developer under the California law, and your model is available in the EU (either directly or through the supply chain), then you must evaluate whether both laws apply. In some situations, set out below, you will not be considered the provider of a downstream derivative model - but you will remain the provider of your own base model.


The EU AI Act is drafted in such a way that no GPAI model provider ever truly escapes liability if their GPAI model eventually ends up available in the EU, even if they never knew about it or intended it to happen. In effect, to avoid EU AI Act regulatory requirements, the frontier developer (i.e. base model developer) must ensure the model is not made available to any EU-based users or entities, directly or indirectly.


This is rarely realistic for global AI platforms.


Supply chain examples

Non-EU licensing and re-sale chains can quickly bring the base-model provider back in scope, whether or not a downstream operator has rebranded or modified the base GPAI model.


🇺🇸 → 🇪🇺 Direct: A frontier developer licenses their frontier model to a business based in Germany. The business in Germany does not rebrand, modify, or do anything to establish themselves as a new provider. The CA entity is the only provider.


🇺🇸 → 🇪🇺 + 🤖 Direct + derivative: A frontier developer licenses their frontier model to a business based in Germany. The business in Germany retrains the base GPAI model with more than ⅓ of the original training compute, thereby creating a derivative GPAI model with systemic risk. Both the CA and German entities are providers.


🇺🇸 → 🇮🇳 → 🇪🇺 Indirect: A frontier developer licenses their frontier model to a business based in India, and the Indian entity licenses the same model to a business in Germany. Neither the business in India nor the business in Germany rebrand, modify, or do anything to establish themselves as a new provider. The CA entity is the only provider.


🇺🇸 → 🇮🇳 → 🇪🇺 + 🤖 Indirect + derivative: A frontier developer licenses their frontier model to a business based in India, and the Indian entity licenses the same model to a business in Germany. The business in Germany then rebrands the model with its own trademarks and corporate logos, but does not retrain or otherwise modify the base model. Both the CA and German entities are providers. The Indian entity would be considered a distributor under the EU AI Act. (If the Indian entity rebranded, substantially modified or retrained the base model, the CA business would still be the provider of the base model, and the Indian entity would be the provider of a derivative model.)
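
To tie the four scenarios together, here is a hypothetical sketch of the provider analysis. It compresses the reasoning into two triggers - rebranding, or retraining with more than one-third of the original training compute (per the GPAI Guidelines) - and the `Operator` class and `providers` function are illustrative inventions, not anything drawn from either statute:

```python
# Simplified, illustrative model of the supply-chain scenarios above.
# Rule 1: the base-model developer is always a provider once the model
#         reaches the EU, directly or indirectly.
# Rule 2: a downstream entity becomes the provider of a *derivative* model
#         if it rebrands, or retrains with > 1/3 of the original compute.
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    rebrands: bool = False
    retrain_fraction: float = 0.0  # fraction of the original training compute

def providers(base_developer: str, chain: list[Operator]) -> list[str]:
    result = [f"{base_developer} (provider of the base model)"]
    for op in chain:
        if op.rebrands or op.retrain_fraction > 1 / 3:
            result.append(f"{op.name} (provider of a derivative model)")
    return result

# Indirect + derivative: CA -> India (distributor) -> Germany (rebrands)
chain = [Operator("Indian licensee"), Operator("German licensee", rebrands=True)]
print(providers("CA frontier developer", chain))
# ['CA frontier developer (provider of the base model)',
#  'German licensee (provider of a derivative model)']
```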


GPAI lineage and who’s on the hook

GPAI models that are retrained, substantially modified, or white-labelled are potentially derivative (i.e. ‘new’) GPAI models. The result is that the downstream entity that retrains, modifies or white-labels the base GPAI model becomes the provider of a new GPAI model.


We are left with a situation in which there are two providers of two GPAI models - the base GPAI model and the new GPAI model. This is why the original provider of the base GPAI model is never off the hook, no matter what modifications or retraining are undertaken by the downstream entity. The original California frontier developer therefore remains responsible for the base GPAI model.


In the derivative scenarios above, the original frontier developer would not be considered the provider of the new GPAI model, but it would remain a provider of the base model that was made available in the EU, and it continues to owe duties (such as information-sharing) to the new GPAI model provider. The new GPAI model provider (the entity that rebrands, substantially retrains, or otherwise materially modifies the model and makes it available under its own name) assumes full EU AI Act compliance for its derivative.


Key GPAI model obligations

The EU AI Act is prescriptive and provides detail on exact obligations. The obligations below came into force on 2 Aug 2025 and become enforceable on 2 Aug 2026. The main obligations for providers of all GPAI models are:

  • Technical documentation - maintain documentation covering training and testing, and be able to provide it to authorities (including information about the model’s energy consumption)
  • Compliance information for AI system providers who intend to integrate the GPAI model into their systems - includes technical documentation and any other information needed to give those providers “a good understanding of the capabilities and limitations” of the model so they can comply with their own obligations under the Act
  • Copyright policy - put in place a policy to ensure EU copyright rules are respected by the model
  • Training data summary - release to the public a sufficiently detailed summary of the content used for training

In addition, frontier developers will need to comply with obligations for GPAI models with systemic risk:

  • Art. 52: Notify the Commission within 2 weeks of the systemic-risk threshold being met
  • Art. 53: Maintain technical documentation and a detailed summary of training, testing and validation, and provide sufficient information to downstream providers so they can meet their own obligations
  • Art. 54: Appoint an authorised representative established in the Union (this applies because the provider is established outside the EU)
  • Art. 55: Systemic risk evaluation and mitigation, and serious incident reporting (without ‘undue delay’)

Remember:

  • A downstream operator only becomes a provider where it creates a derivative GPAI model; otherwise, it will be a deployer or other operator in the supply chain.
  • Systemic risk classification is automatic once the compute power threshold is crossed
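
For the dates in play, a small illustrative helper may be useful. The dates come from the text above; the function itself is hypothetical:

```python
# Illustrative timeline helper - the dates are those discussed above.
from datetime import date, timedelta

GPAI_OBLIGATIONS_IN_FORCE = date(2025, 8, 2)
GPAI_ENFORCEMENT_BEGINS = date(2026, 8, 2)

def commission_notification_deadline(threshold_met_on: date) -> date:
    """Art. 52: notify the Commission within two weeks of the
    systemic-risk threshold being met."""
    return threshold_met_on + timedelta(weeks=2)

# Hypothetical example: threshold crossed on 1 Sep 2025
print(commission_notification_deadline(date(2025, 9, 1)))  # 2025-09-15
```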

Enforcement & penalties

The EU AI Office can request documentation. Penalties for negligently or intentionally contravening the GPAI provisions are up to EUR 15 million or 3% of global annual turnover, whichever is higher. Supplying incorrect, incomplete or misleading information carries a penalty of up to EUR 7.5 million or 1% of global annual turnover. National competent authorities are responsible for on-the-ground enforcement and penalties.
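
As a back-of-the-envelope illustration of those ceilings (the percentages and fixed amounts are the figures above; the turnover figure is invented, and I assume the higher of the two amounts applies in both cases):

```python
# Illustrative penalty ceilings using the figures above. Assumes the
# higher of the fixed amount and the turnover percentage applies.

def max_gpai_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(15_000_000, 0.03 * global_annual_turnover_eur)

def max_misleading_info_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(7_500_000, 0.01 * global_annual_turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2bn global annual turnover
print(f"GPAI contravention ceiling: EUR {max_gpai_fine_eur(turnover):,.0f}")
print(f"Misleading info ceiling:    EUR {max_misleading_info_fine_eur(turnover):,.0f}")
# GPAI contravention ceiling: EUR 60,000,000
# Misleading info ceiling:    EUR 20,000,000
```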


Risk mitigation & contractual controls

You can’t contract your way out of the EU AI Act. However, it is entirely possible to build territorial-use provisions, flow-down obligations, audit rights, and indemnifications into commercial contracts. Requiring downstream operators in the supply chain to comply with all relevant EU AI Act obligations, and being able to check their homework, provides a level of comfort and certainty. In a way, it’s no different from complying with third-party service provider obligations under privacy laws like the GDPR - the main difference is that the GDPR requires contractual provisions to be put in place from the start.


The bottom line

Always perform robust supply chain due diligence, in both directions. Seek to understand exactly what’s happening within the commercial transaction, and be clear about expectations. Ensure appropriate contractual provisions are in place. Comply when you need to!


If your organisation is building or using AI and you are unclear on your obligations under international AI or privacy regulatory frameworks, reach out today to ask about our AI Risk & Governance Baseline or our AI Governance Program Builder packages!


This content is informational only and not legal advice. GLF is not a law firm regulated by the SRA.
