Caught Red Handed

1 October 2025

Neil Jennings


Biometric data isn’t new. But it is risky. It sits in the highest-risk category of personal data, called ‘special category’ data under GDPR and ‘sensitive’ information elsewhere. In the UK, the EU, and other major jurisdictions (like Texas and Illinois), obligations are heightened when it comes to how businesses obtain and use such information.


So how can we link a zoo in sunny Spain with a biometric blunder resulting in a €250,000 (yes, a quarter of a million euros) fine?


The answer is fingerprints.


The Loro Parque Zoo in Spain has two sites. As part of a promotion, members of the public could obtain access to both sites for a certain price. The catch? The only method of access to the sites was fingerprint scanning. No ticket, no ID check.


Now Spain seems a little less sunny, right?!



What’s the big issue?

The zoo case may sound quirky, but the rules it tripped over are exactly the same ones your organisation is bound by.


OK. The main message is this: fingerprints (and other biometric identifiers) aren’t just ‘better passwords’; they very often form a distinct - and high-risk - category of personal data. Obligations are higher, regulators are stricter, and without due care and attention, your business could sleepwalk right into a similar situation.


And if your business is adding AI into the biometrics mix, you could be multiplying your problems. Biometric data fed into AI can be used to predict your mood and your behaviour, and even to create a fake you - all enormous risks for businesses to navigate.


The zoo’s mistake is easy to dismiss as a one-off, but it sits in a much bigger global story. So have a read, take a minute, and reassess where you’re at with your privacy program.


What privacy landscape are we looking at?

The one-and-only GDPR is our starting point here. In short, biometric data used to uniquely identify someone (like fingerprints) is ‘special category data’. That means you need both a lawful basis (Art 6) and an extra exception (Art 9). Most businesses reach for consent, but unless it’s a genuine choice with a real alternative, it’s not valid consent.


If it’s fingerprints or nothin’, that ain’t gonna fly…


The UK GDPR and the Information Commissioner’s Office reflect the EU approach.

What does the global landscape look like?

Globally, as usual, there is a patchwork of regulation. Europe leads the way in terms of regulatory regime and enforcement action. Other major jurisdictions have their own idiosyncrasies.

  • 🇪🇺 / 🇬🇧 EU & UK - GDPR says that special category data can only be processed in certain circumstances, and the standard is much higher than for non-special category data. Spoiler alert: biometrics and legitimate interests don’t mix.
  • 🇨🇦 Canada - the OPC’s recent guidance on PIPEDA confirms that fingerprints are considered ‘physical biometrics’, and that biometric information is (almost) always sensitive and therefore requires explicit, informed consent.
  • ⚜️ Quebec - Law 25 not only requires consent from individuals, but requires businesses to inform the regulator at least 60 days in advance of using a system that processes individuals’ biometric data.
  • 🇺🇸 USA - different states have different privacy laws. California’s CCPA does not single out biometric data for opt-in consent. In contrast, states like Illinois and Texas have their own biometric-specific laws, requiring express consent from individuals.
  • 🇦🇺 Australia - biometric information is considered ‘sensitive’, and the OAIC’s guidance confirms that processing sensitive information requires express consent.

If you operate internationally, you can’t copy-paste biometric policies!


What did the zoo do wrong?

The zoo’s fine resulted directly from a violation of Article 9. The zoo thought it had obtained consent, because visitors agreed to the scans. What it hadn’t realised was that the consent was no good - with no alternative on offer, it wasn’t freely given. No real choice = no real consent.


This case is a clear demonstration of how consent is misunderstood in practice. Words have meaning, and this is an easy place to trip up, especially for businesses handling biometric data for the first time - perhaps trying to make the user experience better, or to create a more appealing customer journey.


The intention does not match the impact.


Where else might we see this?

A similar situation is one most of us experience every day: using a face scan to unlock our devices and, by extension, to unlock specific apps (like banking apps) on our phones. The phone manufacturers say that they don’t actually process our biometric data because nothing is processed centrally - it’s stored locally on the device - and they don’t have access to it. Likewise, the companies behind the apps (e.g. the banks) say that they also don’t process biometric data and simply rely on the device’s yes/no authentication token.


But the similarity stands. If our biometric data were not stored locally (and were instead processed centrally by Apple), then under GDPR logic our consent to use such security features would only be effective where an alternative option was provided, like a PIN or password.
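
To make that architecture concrete, here is a minimal sketch using Apple’s LocalAuthentication framework (the function name and on-screen message are my own illustration, not from any particular bank’s app). It shows why app providers argue they don’t process biometric data: the operating system matches the face or fingerprint on-device and hands the app only a yes/no result, and the chosen policy also permits a passcode fallback - the kind of non-biometric alternative the ‘freely given consent’ analysis expects.

import LocalAuthentication
// Illustrative sketch: the app never sees the face or fingerprint template.
// The OS matches it on-device and returns only a success/failure signal.
func unlockSensitiveFeature(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?
    // .deviceOwnerAuthentication allows a passcode fallback - a non-biometric
    // alternative route into the same feature.
    guard context.canEvaluatePolicy(.deviceOwnerAuthentication, error: &error) else {
        completion(false) // No biometrics or passcode enrolled; offer another route.
        return
    }
    context.evaluatePolicy(.deviceOwnerAuthentication,
                           localizedReason: "Unlock your account") { success, _ in
        // `success` is the only signal the app receives; no biometric data
        // crosses the app boundary.
        completion(success)
    }
}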


Office buildings, gyms, even ATMs have fingerprint options. If that’s the only option, then you might want to ask a few questions. And if you’re that business, reach out to me!


What about AI?

The issue is that AI goes beyond using biometrics to identify people: AI can use biometrics to infer emotions, predict behaviour, check liveness, and generate deepfakes. And along with this comes a wide range of bias, accuracy, security, safety and misuse risks.


AI regulations (like the EU AI Act and Colorado’s AI Act) have specific provisions for classifying and regulating the use of biometric data within AI systems. There is also an overlap with privacy laws like GDPR in terms of underlying principles, such as having a lawful basis to process such information, and specific requirements when processing, such as ensuring fairness and accuracy and minimising the amount of data being processed.


Where do biometrics come up in the EU AI Act?

For an overview of the EU AI Act, read my previous article, An Introduction To The EU AI Act.


The Act slices up biometrics into three categories: prohibited, high-risk and limited-risk, as follows:

  • Use of biometric data is prohibited where it would be used to categorise people to infer race, political opinion, etc., and where used in real-time in public spaces for law enforcement purposes.
  • Biometric data is considered ‘high risk’* under the Act (Annex III) where such data is used for
    • remote ID systems (“who are you in a crowd?”)
    • biometric categorisation according to sensitive or protected information (“what group of people do you belong to by virtue of a protected trait?”)
    • emotion recognition systems (“how are you feeling right now?”)
    • (An exception applies where the biometric data is used for the sole purpose of verifying that a person is who they claim to be.)

*Businesses have onerous regulatory obligations in relation to high-risk AI systems.

  • Biometric data is considered ‘limited risk’ under the Act where it is used in lower-risk capacities, like gauging consumer satisfaction. Where a business is a deployer of this type of AI system, it must inform the individuals concerned.

The zoo case won’t be the last

Even if the EU AI Act permits the use of biometric data, businesses must still comply with all relevant privacy laws. There is no way around this. That means, among other things, compliance with the seven core principles of GDPR:

  • Lawfulness, fairness & transparency
  • Purpose limitation
  • Data minimisation
  • Accuracy
  • Storage limitation
  • Integrity & confidentiality
  • Accountability

With AI systems that use biometric data, you could fall at the very first hurdle: you need to offer alternatives to the biometric option for that consent to be valid. If you don’t, there’s no way you will be able to demonstrate a lawful basis for processing personal data.


The zoo case won’t be the last. The question is whether your business wants to be the next example in the regulator’s press release…


If your organisation is using biometrics, or planning to layer AI on top, now is the time to check whether your consent model, alternatives, and proportionality tests would stand up to regulatory scrutiny. Reach out today to ask about our AI Risk & Governance Baseline or our AI Governance Program Builder packages!


This content is informational only and not legal advice. GLF is not a law firm regulated by the SRA.

Secure Your Business With Us

Get in touch to talk about AI governance, compliance and risk management solutions!