
Last December, the European Data Protection Board (the “EDPB”) released its highly anticipated opinion on AI models and GDPR compliance: Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models (the “Opinion”).

Unfortunately, the Opinion did not meet stakeholders’ expectations. Rather than providing direct guidance, it requires a case-by-case analysis of the GDPR’s applicability, notably focusing on accountability and record-keeping. Moreover, the Opinion proposes legitimate interest as a potential legal basis in certain scenarios.

At MyData-TRUST, our in-house legal team has thoroughly reviewed the Opinion. We’re now ready to share our analysis along with tailored guidance to help you navigate this new recommendation.

Is the EDPB’s approach sufficiently clear to stakeholders?

Not quite. In lieu of authoritative guidance, the EDPB provides certain parameters to be considered when determining whether an AI model falls within the scope of data protection laws. The Opinion further emphasises documentation requirements as a means for stakeholders to demonstrate legal compliance.

Can an AI model be considered anonymous if it is trained using personal data?

The EDPB opines that AI models trained using personal data may not always be considered ‘anonymous’. A prerequisite for an AI model to be considered ‘anonymous’ is that the likelihood of extracting data subjects’ personal data from the trained model, whether directly or through queries, must be insignificant.

The Opinion provides a non-exhaustive list of methods which may be implemented to achieve anonymity within the AI model, including inter alia: data minimisation, reducing identifiability, and testing the AI model’s resilience to attacks. The Opinion further requires companies to adequately document any form of processing implemented to train the AI model, including where personal data is anonymised.

In what circumstances may legitimate interest be considered an appropriate legal basis to train AI models?

The EDPB recognises that “legitimate interest” may be used as a lawful basis for training and implementing AI models. Nevertheless, companies must demonstrate that processing personal data is essential to fulfil the stated purpose and must minimise the associated risks to individuals’ rights. Notably, the extent of data collection (particularly through methods such as web-scraping publicly accessible information) weighs heavily in this determination. The Opinion further highlights the necessity of conducting and documenting a “balancing test” on a case-by-case basis.

What if an AI model was trained using illegally sourced data?

Where an AI model is trained using unlawfully sourced personal data, its further use may be impaired. The EDPB encourages all those deploying AI systems to perform an appropriate assessment, commensurate with the form and degree of risks associated with the AI model’s development and deployment, to verify that personal data was not unlawfully processed during the development of the model they use.

How should organisations ready themselves for the future evolution of regulations?

While insightful, the EDPB’s guidance raises more questions than it answers. When evaluating AI’s benefits and opportunities, companies must closely monitor regulatory developments and prepare for enhanced scrutiny of AI implementation within Europe. The EDPB emphasises accountability, lawfulness, and record-keeping. Moreover, it encourages organisations to maintain detailed records of all processing activities across both the AI model’s development and deployment phases. This includes conducting data protection impact assessments (“DPIAs”), performing balancing tests (where applicable), and maintaining appropriate records of processing activities (“ROPAs”). Regularly reviewing and updating data protection processes to ensure GDPR compliance is also highly recommended.

Does this apply to you? Be proactive!

As AI regulations continue to evolve, ensuring compliance with the GDPR and the AI Act may prove challenging. Organisations must proactively assess their AI models, document their compliance efforts, and stay ahead of regulatory expectations.

Need Support? Our AI Experts Are Here to Help

At MyData-TRUST, our in-house team of AI and Data Protection experts is ready to support you in navigating these challenges. Whether you require guidance on the GDPR’s applicability, legal basis assessments, or AI governance best practices, MyData-TRUST is here to help.

🔍 Need expert advice? Contact us today to discuss how we may assist you in achieving AI compliance.

📩 Get in touch: [email protected]

AUTHORS:

Noelia Fernandez Freire

Attorney | CIPP/E | Data Protection Lawyer

Emeraude Camberlin

Senior DPO & Transformation Manager
