Meta plans to train AI on EU user data from May 27 without consent

May 16, 2025 - 11:08

Meta plans to train AI on EU user data from May 27 without consent; privacy group noyb threatens lawsuit over lack of explicit opt-in.

Meta plans to use EU user data for AI training starting May 27 without explicit consent. Austrian privacy group noyb threatens a class action lawsuit if the social network giant does not desist.

In April, Meta announced that it would start training its AI models using public data from adults in the EU, after pausing the plan last year over data protection concerns raised by Irish regulators.

In June 2024, the social media giant had announced it was delaying the training of its large language models (LLMs) on public content shared by adults on Facebook and Instagram, following a request from the Irish Data Protection Commission (DPC).

Meta was disappointed by the DPC request; the company described it as a step “backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe.”

The company explained that its AI, including the Llama LLM, is already available in other parts of the world. To provide a better service to its European communities, Meta said it needs to train the models on relevant information that reflects the diverse languages, geography and cultural references of people in Europe. For this reason, the company initially planned to train its large language models on the content that its users in the EU have publicly shared on its products and services.

Meta has now confirmed that it will resume training its AI models with public data from EU individuals.

“In the EU, we will soon begin training our AI models on the interactions that people have with AI at Meta, as well as public content shared by adults on Meta Products.” reads a post published by the company. “This training will better support millions of people and businesses in Europe, by teaching our generative AI models to better understand and reflect their cultures, languages and history.”

The company pointed out that users based in the EU can choose to object to their public data being used for training purposes. Starting this week, EU users will get notices about their data being used to improve AI, with an option to easily object at any time.

Meta remarked that it does not use people’s private messages with friends and family to train its generative AI models. The company also added that public data from the accounts of people in the EU under the age of 18 is not being used for training purposes.

This week, noyb sent a cease-and-desist letter to Meta as a formal settlement proposal, while other consumer groups are also taking action against the company.

“Meta has announced it will use EU personal data from Instagram and Facebook users to train its new AI systems from 27 May onwards. Instead of asking consumers for opt-in consent, Meta relies on an alleged ‘legitimate interest’ to just suck up all user data. The new EU Collective Redress Directive allows Qualified Entities such as noyb to issue EU-wide injunctions. As a first step, noyb has now sent a formal settlement proposal in the form of a so-called Cease and Desist letter to Meta. Other consumer groups also take action.” reads the post published by noyb. “If injunctions are filed and won, Meta may also be liable for damages to consumers, which could be brought in a separate EU class action. Damages could reach billions. In summary, Meta may face massive legal risks – just because it relies on an “opt-out” instead of an “opt-in” system for AI training.”

The Austrian privacy group states that Meta’s AI training practices likely breach GDPR. Instead of requiring opt-in consent, Meta claims a ‘legitimate interest’ to use EU user data, limiting users to an opt-out option. This undermines key GDPR rights like data access, correction, and deletion. Once Meta’s AI models are released as open-source, they can’t be recalled or updated, further complicating compliance with GDPR obligations.

“The European Court of Justice has already held that Meta cannot claim a ‘legitimate interest’ in targeting users with advertising. How should it have a ‘legitimate interest’ to suck up all data for AI training? While the ‘legitimate interest’ assessment is always a multi-factor test, all factors seem to point in the wrong direction for Meta. Meta simply says that it’s interest in making money is more important than the rights of its users.” said Max Schrems. “This fight is essentially about whether to ask people for consent or simply take their data without it. Meta starts a huge fight just to have an opt-out system instead of an opt-in system. Instead, they rely on an alleged ‘legitimate interest’ to just take the data and run with it. This is neither legal nor necessary. Meta’s absurd claims that stealing everyone’s person data is necessary for AI training is laughable. Other AI providers do not use social network data – and generate even better models than Meta.”

According to noyb, the GDPR solution for Meta is straightforward: ask users for opt-in consent to use their personal data for AI training. Despite Meta’s claims, even if just 10% of users consented, that would be enough to train effective AI models, especially for EU languages. Most competitors, like OpenAI or Mistral, succeed without access to social media data, proving Meta doesn’t need to use the personal data of everyone who has used Facebook or Instagram over the past 20 years.

Meta dismissed noyb’s claims, stating its AI data practices align with European Data Protection Board guidance and discussions with the Irish privacy regulator.

“NOYB’s arguments are wrong on the facts and the law,” a Meta spokesperson told Reuters. “We’ve provided EU users with a clear way to object to their data being used for training AI at Meta, notifying them via email and in-app notifications that they can object at any time.”


Pierluigi Paganini
