Five Key GDPR Issues When AI Intersects with Privacy

5 May 2025


Disclaimer - this is a very complex and nuanced area of law. When it comes to using AI, every use case is different and has varied risks and rewards. The below discussion points are to spark conversation and to serve as a reminder of the complexity of privacy and AI, not to provide specific guidance. Think about them carefully!


At the corporate level, AI governance needs legal, compliance and risk management input. Of course, the creation and development of AI is first and foremost a highly complex and technical endeavour, one that requires data teams, engineers, software devs and LLM experts.


However, the decision to move forward with using or creating a certain type of AI cannot be a technical decision only. There needs to be alignment between technical, business and compliance teams. Amongst other things, privacy is a key consideration when it comes to using AI tools.


There are many privacy considerations, and I want to focus on the following for the purposes of this article:

  1. Security does not equal privacy - in other words, GDPR compliance goes far beyond protecting personal data
  2. Anonymous data is best - when data is no longer personal, GDPR no longer applies
  3. Lawful basis & appropriate purpose - these are non-negotiable aspects of GDPR compliance
  4. AI as a third party processor - don’t overlook third party obligations just because it’s AI
  5. Personal data to train AI - what happens when you use personal data to train AI tools?

One question that applies here, as it does in so many situations: just because we can, does that mean we should?


The GDPR is a legal framework, but it is built on ethical principles. Principles that place the individual first. Principles that, in theory, provide power and confidence to the consumer, the app user, the human. In reality, a lot of ‘compliance information’ is hidden in dense policies and made purposefully difficult to discover. Executives have a responsibility to make fully-informed decisions. In the context of AI, this includes understanding the intersecting regulatory risks and acting appropriately.  



Discussion point #1 - security ≠ privacy

There can be a tendency in some businesses to tilt heavily towards the technical/security aspect of privacy. This is not helped by the term ‘data protection’, because the implication is that securing information is sufficient. But this is not the case. With AI use and development, this tendency can be even more pronounced. A critical piece of the AI x privacy compliance puzzle is the building block of AI itself - data. When we talk about data, the first question we need to ask is “do we have permission to use that data?”. This is a privacy question, not a security question. Where the answer is NO, that’s where everything stops. Or, at least, there is a discussion about how to approach the situation. This Forbes article provides a little more context and is definitely worth reading!


Example: Putting in place data security measures (like encryption) is incredibly important. However, such security measures do not address whether you have the right to process personal data in the first place.
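To make the distinction concrete, here is a minimal Python sketch. The consent registry, user ID and purpose label are hypothetical illustrations, not a real compliance mechanism: encryption protects the data, while a separate lawful-basis check decides whether processing may happen at all.

```python
# Illustration only: a security control vs a privacy control.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Security control: encryption protects the data itself.
key = Fernet.generate_key()
fernet = Fernet(key)

# Privacy control: a hypothetical registry of recorded lawful bases.
# In reality this would be your consent records / Legitimate Interest Assessment.
LAWFUL_BASIS = {("user-123", "marketing_analysis"): "consent"}

def process_personal_data(user_id: str, purpose: str, payload: bytes) -> bytes:
    """Encrypt and process data, but only if a lawful basis is recorded."""
    if (user_id, purpose) not in LAWFUL_BASIS:
        # Encryption cannot fix this: with no permission, everything stops.
        raise PermissionError(f"No lawful basis recorded for purpose '{purpose}'")
    return fernet.encrypt(payload)

token = process_personal_data("user-123", "marketing_analysis", b"jane@example.com")
```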


Discussion point #2 - anonymous means anonymous (not anonymous for now)

When personal data comes into play, there is a key axiom to remember: if it doesn't involve personal data, then there is a very good chance that privacy laws don't apply! This seems a little contradictory, but the point is, if you start with personal data and then de-identify, aggregate, or otherwise make that data anonymous, you will go a long way towards reducing your privacy law headaches. While techniques like pseudonymisation offer enhanced protection, it's crucial to remember that pseudonymised data can still be re-identified. This means that such methods are not sufficient to ensure anonymity, although they do help reduce the risk of identification - in risk language, they mitigate the risk rather than eliminate it. In terms of risks, the first is strictly legal - that the de-identified information will be re-identified. The second is commercial, and involves weighing the cost and complexity of adequate de-identification. This Georgetown Law Technology Review Article provides some great insights!


Example: Proper anonymisation means individuals cannot be re-identified. This can be achieved, for example, by removing identifiers like name and date of birth, or by aggregation, which deals in broad numbers rather than specific cases. In any case, if data can be re-identified, it is still considered personal data.
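As a rough illustration of the difference, consider the Python sketch below (the records are invented). Pseudonymisation keeps one row per person and a mapping back to individuals still exists, so the data remains personal data; aggregation keeps only broad counts.

```python
# Illustration only: pseudonymisation vs aggregation on invented records.
from collections import Counter

records = [
    {"name": "Ana",  "dob": "1990-01-01", "plan": "premium"},
    {"name": "Ben",  "dob": "1985-06-12", "plan": "basic"},
    {"name": "Cora", "dob": "1992-03-30", "plan": "premium"},
]

# Pseudonymisation: direct identifiers are replaced with tokens, but the
# mapping back to individuals still exists somewhere -> still personal data.
pseudonym_map = {f"user-{i}": r["name"] for i, r in enumerate(records)}
pseudonymised = [
    {"id": f"user-{i}", "plan": r["plan"]} for i, r in enumerate(records)
]

# Aggregation: broad numbers, no per-individual rows. Note that aggregates
# over very small groups can still enable re-identification.
plan_counts = Counter(r["plan"] for r in records)
print(plan_counts)  # Counter({'premium': 2, 'basic': 1})
```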


Discussion point #3 - you need a lawful basis and appropriate purpose for processing

Under GDPR, once you know that you are dealing with personal data, you must establish your lawful basis for processing it. There are six separate lawful bases, but typically two of them are the most relevant in commercial contexts:

  1. Consent
  2. Legitimate business interest

Relying on consent is a little more restrictive than legitimate business interest. Consent is very specific and cannot apply broadly or as a ‘blanket’ consent - it must be freely given, specific, informed, and unambiguous. Legitimate business interest is more broadly applicable, but must meet the legitimate interest test. To meet this test, you must show that processing the personal data (i) is necessary, (ii) is for an appropriate purpose, and (iii) appropriately balances the risk to the individual’s rights. This three-part test is known as the ‘Legitimate Interest Assessment’.


Purpose is critical and is one of the core concepts within the entire GDPR framework. If you don’t tell people what you plan to do with their personal data, you can’t do it. Simple! This concept is also inextricably linked to the lawful basis. You always need to have an appropriate purpose for processing personal data. The lawful basis justifies the purpose, and the purpose can evidence a lawful basis. However, even when relying on legitimate business interest, you must still identify a specific, separate purpose. The crux of purpose limitation is transparency - the purpose needs to be specified at the time of collecting personal data. As always, the International Association of Privacy Professionals has published this great overview document!


Example: You use AI to analyse subscriber interactions and to assist with sales and marketing strategy. Assuming you have been transparent about this (e.g. your privacy policy says you will use personal data for the analysis and review of membership data to improve and enhance the membership offering), you have good grounds to rely on legitimate business interest as your lawful basis.
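One lightweight way to keep purpose and lawful basis visible is to record them together against each processing activity. The Python sketch below is a hypothetical structure, not a full Article 30 record of processing activities; the field names and values are illustrative.

```python
# Illustration only: pairing each processing activity with its stated
# purpose and lawful basis, so neither can be overlooked.
from dataclasses import dataclass, field

@dataclass
class ProcessingActivity:
    name: str
    purpose: str                       # must match what individuals were told
    lawful_basis: str                  # e.g. "consent" or "legitimate_interest"
    data_categories: list[str] = field(default_factory=list)
    disclosed_in_privacy_policy: bool = False

activity = ProcessingActivity(
    name="subscriber_interaction_analysis",
    purpose="Analyse membership data to improve the membership offering",
    lawful_basis="legitimate_interest",
    data_categories=["usage events", "subscription tier"],
    disclosed_in_privacy_policy=True,
)

# Transparency check: the purpose must have been specified at collection.
assert activity.disclosed_in_privacy_policy, "Purpose not disclosed - stop."
```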


Discussion point #4 - third party processors

All businesses use third parties to process personal data, and these increasingly include third party AI tools. Whether it’s AWS or Google Cloud for storage, a payroll provider, or CRM software, it’s an important and necessary part of doing business. Under GDPR, there are certain steps to take before entering into a relationship with a third party processor. Step 1 is always to undertake proper due diligence on the provider - who are they, what do their relevant terms say, and so on. This will feed into a privacy impact assessment, which is a requirement where use of the third party is high risk. You will also need to negotiate a data protection agreement and maintain a record of processing activities - check out this Guidance Note from the Irish Data Protection Commission.


Example: If you use an AI-powered customer service chatbot, and it requires personal data to function properly, you need to treat the AI provider as you would any other third party service provider under the GDPR. As well as undertaking proper due diligence, which might require a privacy impact assessment, you also need to make sure you enter into a data protection agreement, covering issues like data breaches, destruction of personal data, and data security. It will also need to cover sub-processors where relevant.
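A simple way to operationalise the contractual side is a coverage check of the data protection agreement against the topics it must address. The sketch below is illustrative only; the term names are placeholders, not legal clause text.

```python
# Illustration only: checking a draft DPA covers the topics mentioned above.
REQUIRED_DPA_TERMS = {
    "breach_notification",
    "destruction_of_personal_data",
    "security_measures",
    "sub_processors",  # where relevant
}

def dpa_gaps(agreed_terms: set[str]) -> set[str]:
    """Return required topics the negotiated agreement does not yet cover."""
    return REQUIRED_DPA_TERMS - agreed_terms

# Example: the draft agreement is silent on sub-processors.
draft = {"breach_notification", "destruction_of_personal_data", "security_measures"}
print(dpa_gaps(draft))  # {'sub_processors'}
```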


Discussion point #5 - training AI models with personal data

This is where things get a little more complex. If you are using a third party AI tool to process personal data, and the third party wants to use that same personal data to train its AI model, GDPR essentially requires both parties to be treated as separate data controllers (as opposed to controller and processor, or joint controllers). There may be significant roadblocks here, so the simplest solution (and likely the most compliant one) is to prohibit the third party AI company, via contract such as the data protection agreement, from using the personal data to train its AI model. There are a few conceptual issues here:

  • GDPR requires the data processor to act on the specific instructions of the data controller, so if the processor uses personal data for its own purposes, it is no longer acting as a processor
  • Each controller must have a lawful basis and ensure purpose limitation, which the third party is unlikely to be able to meet
  • You can never 100% prevent a third party from doing the wrong thing with personal data, but your due diligence, privacy impact assessment, and data protection agreement should provide you with a level of confidence to proceed in a risk-aware manner.

The concept of data processors and data controllers is discussed in the European Data Protection Board’s guideline document. You are unlikely to be in a position to permit the third party AI company to use personal data to train its AI model, and the third party is unlikely to be able to demonstrate a lawful basis or purpose limitation of its own. In some situations this may be different - for example, where the model is being trained solely to improve the effectiveness of the tool for your purposes, and not in a more broadly applicable way.


Example: You use the same AI-powered customer service tool, but the third party wants to use the personal data to train its AI model as well. It is likely (based on lawful basis and purpose limitation) that (i) you are not able to grant the third party permission to use the personal data for AI training purposes, and (ii) you need to exercise a high level of caution when undertaking your due diligence and privacy impact assessment, ensuring that critical provisions restricting the use of personal data are made crystal clear in the data protection agreement.


Thinking through the above discussion points should prompt further discussion about risk tolerance and the commercial need to use third party AI tools. An important reminder: every use of AI is different depending on context. The above discussion points are helpful, but they are just the starting point! As AI and privacy regulation overlap to a large degree, one thing is clear: ethical AI is not possible without ethical privacy. GDPR is rooted in respect for individual rights, and that applies to the use of AI too.


If any of the above issues are cause for concern, feel free to reach out.
