This Isn’t the Future I Thought We’d Be Regulating
Why the AI Act Can’t Stop Big Tech from Training on Your Private Life
“Irish privacy watchdog OKs Meta to train AI on EU folks' posts”.
That was the headline that landed in my inbox earlier this month. At first, I assumed it was just another push from Meta’s side. But no: it was the official “greenlight” from the privacy authority, effectively legitimising these plans.
Many privacy activists and expert groups had actively campaigned against this for months (some would argue, years). And now, the data of millions of people in the EU could be swept into Meta’s training pipeline unless they had previously opted out.
I work in Privacy and Data Protection.
Reading this, I wasn’t exactly surprised. But I was tired. Kind of… done.
Tired in that quiet, existential way that creeps in when you realize the work you’ve devoted years to (data rights, transparency, privacy protections) still leaves most people defenseless when it matters.
No, this isn’t the future I thought we’d be regulating.
But that doesn't mean that it’s all bad news.
What Is the AI Act, and What Does It Regulate?
The EU AI Act is the first comprehensive law in the world that tries to regulate AI systems before they cause harm.
It doesn’t just wait for damage and lawsuits. Instead, it classifies AI systems based on how risky they are and applies rules accordingly.
Sounds good, right?
The problem is… not everyone knows what this actually means in practice. Or how it affects them.
The AI Act splits AI systems into categories based on risk (there’s a rough sketch of the tiering after the list below):
Some AI uses are banned outright: things like real-time biometric surveillance in public spaces, systems that manipulate people through psychological tricks, social scoring, or AI that tries to predict crimes based on someone’s personality. These are called unacceptable risk systems.
Some others are considered “high-risk”: systems used in hiring, education, border control, credit scoring, or law enforcement. These use cases are allowed, but only if developers and companies meet the necessary safety, documentation, and oversight standards.
Other, limited-risk systems carry basic transparency obligations, like telling users that they’re interacting with AI and warning them that it can make mistakes.
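If it helps to see that tiering laid out more concretely, here’s a minimal sketch of the three categories as a simple lookup table in Python. The tier names and examples mirror the descriptions above; the structure and the describe() helper are purely illustrative, not anything defined in the Act itself.

```python
# Illustrative sketch of the AI Act's risk tiers as a lookup table.
# The tier names and examples mirror the article above; this is not
# a legal classification tool.

RISK_TIERS = {
    "unacceptable": {
        "examples": [
            "real-time biometric surveillance in public spaces",
            "manipulative psychological techniques",
            "social scoring",
            "crime prediction based on personality",
        ],
        "consequence": "banned outright",
    },
    "high": {
        "examples": [
            "hiring", "education", "border control",
            "credit scoring", "law enforcement",
        ],
        "consequence": "allowed only with safety, documentation and oversight requirements",
    },
    "limited": {
        "examples": ["customer-facing chatbots", "AI-generated content"],
        "consequence": "basic transparency obligations (disclose that it's AI, flag its limits)",
    },
}


def describe(tier: str) -> str:
    """Return a one-line summary of what a given risk tier implies."""
    info = RISK_TIERS[tier]
    return f"{tier}-risk AI (e.g. {', '.join(info['examples'][:2])}): {info['consequence']}"


if __name__ == "__main__":
    for tier in RISK_TIERS:
        print(describe(tier))
```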
So, if you’re wondering, “Where does something like ChatGPT fall?”, here’s where things get confusing.
ChatGPT Is Not an “AI System”: Here’s Why That Matters
Chatbots like ChatGPT or Gemini are powered by general-purpose AI models: a kind of foundation layer that other tools and applications can be built on top of.
At the user level, we interact with these chatbots directly, via application interfaces. Most people use ChatGPT to write emails, generate code, answer random questions, or get advice on something specific. But companies can also use these models to build their own products on top of that infrastructure.
For example: a telecommunications company can create a customer-facing chatbot using ChatGPT’s foundational model, adding their own user experience layers to make the bot better at answering questions their clients typically ask.
You already interact with chatbots like these in many everyday scenarios; the difference is that a chatbot built on top of ChatGPT is designed to offer a more tailored or “intelligent” interaction.
But these use cases are generally not directly regulated under the AI Act, because they aren’t being used within a high-risk system.
A chatbot embedded in a hiring platform that decides whether you get an interview could be considered high-risk.
Meanwhile, the telecoms chatbot telling you how much internet data you’ve used, or how to change your mobile plan, would not.
So when someone says, “The AI Act protects us from generative AI” they’re missing the point. In most cases, it won’t.
The law only steps in more strictly when a general-purpose model is considered powerful enough to pose a systemic risk.
That presumption applies to models trained with massive computing resources (the Act sets the threshold at 10²⁵ floating-point operations of training compute) and capable of influencing decisions across finance, healthcare, education, media, and more, potentially amplifying errors, biases, or misinformation at scale.
If a model crosses that threshold, its provider is required to test it, monitor incidents, and put cybersecurity safeguards in place.
But even then, these obligations remain largely out of sight, handled behind the scenes, with little visible impact on how people actually experience or interact with GenAI day to day.
Invisible Regulation, Visible Consequences
When I explain the AI Act to people outside of tech, I don’t talk about Articles or Annexes. I say this:
It bans some uses of AI altogether.
It restricts others based on risk.
It requires transparency in certain public-facing tools.
It tries to ensure humans stay meaningfully involved in high-impact decisions.
But these principles often get buried under compliance theatre.
The companies already doing safety work will embrace it. The ones that aren’t will find workarounds.
And the people affected? They’ll still have no obvious place to turn.
That’s exactly what makes the Meta decision so frustrating and so revealing.
People may assume the AI Act would protect them from having all their personal data scraped to train a model like Meta’s AI.
But it doesn’t.
The AI Act can’t “stop” companies from training AI on user data if they claim to have a lawful basis under existing data protection law. Especially if they receive a green light from a Data Protection Regulator.
One could argue that social media and Big Data already pose systemic risk: we’ve seen what happens when these data are used in aggregate to manipulate how people think, vote, or spend. (Who even remembers Cambridge Analytica anymore?)
But what’s the solution we’ve come up with?
Not banning the use of everything you’ve ever posted. Just politely asking Meta not to use its AI model to influence political outcomes or financial decisions, because that kind of behavior would cross into “prohibited” territory under the AI Act.
So, in theory, as long as Meta AI stays within use cases considered low risk, it’s allowed to train on almost everything.
Is that enough? Maybe, if we trust that everyone who ever deploys Meta AI will follow the rules.
And so, even as Europe celebrates the arrival of the world’s first major AI regulation, the only real protection users have against this kind of data grab is still the same fragile mechanism we’ve had for years: the GDPR right to opt out.
Under the GDPR, people have the right to see what data companies hold about them, correct it if it’s wrong, delete it when there’s no reason to keep it, and limit or object to how it’s used.
In theory, exercising these rights is as simple as sending an email. Some companies make it even easier, providing templates to fill out and send from your customer profile, or letting customers download or modify their data in a user-friendly interface.
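And if it helps to see what that email can look like, here’s a minimal sketch in Python that assembles a GDPR Article 21 objection as a draft message. Everything in it (the controller name, the contact address, the account details) is a placeholder, and the wording is illustrative rather than an official template; you’d swap in your own details and the company’s actual privacy or DPO contact before sending anything.

```python
# Minimal sketch: drafting a GDPR Article 21 objection as an email.
# All names, addresses and identifiers below are placeholders; replace
# them with your own details and the controller's real privacy/DPO
# contact address before sending.

from email.message import EmailMessage


def draft_objection(controller: str, dpo_email: str, your_name: str,
                    your_email: str, account_id: str) -> EmailMessage:
    """Build an email draft objecting to the use of personal data for AI training."""
    msg = EmailMessage()
    msg["From"] = your_email
    msg["To"] = dpo_email
    msg["Subject"] = f"Objection to processing of my personal data (GDPR Art. 21) - {account_id}"
    msg.set_content(
        f"Dear {controller} Data Protection Team,\n\n"
        "Under Article 21 GDPR, I object to the processing of my personal data "
        "for the purpose of training AI models, insofar as that processing is "
        "based on legitimate interest.\n\n"
        "Please confirm in writing that my data will no longer be used for this "
        "purpose, and provide a copy of the personal data you currently hold "
        "about me (Article 15).\n\n"
        f"Account / profile: {account_id}\n\n"
        f"Kind regards,\n{your_name}"
    )
    return msg


if __name__ == "__main__":
    draft = draft_objection(
        controller="ExampleCorp",           # placeholder controller name
        dpo_email="privacy@example.com",    # placeholder contact address
        your_name="Jane Doe",
        your_email="jane.doe@example.org",
        account_id="jane_doe_1990",
    )
    print(draft)  # review the draft, then send it from your own mail client
```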
But that ideal only works when people know their rights, AND believe using them will matter.
And in the case of Meta? Most users never knew they had to opt out in the first place.
That’s not public-facing AI safety. That’s not systemic risk mitigation. That’s a buried toggle in your privacy settings, if you even know where to look.
Even when the law seems clear, the enforcement choices can feel arbitrary.
Data protection authorities’ duty is to the best interests of citizens, their data, and their privacy: not to the so-called “legitimate interest” of Big Tech.
But, sometimes, the institutions meant to be on your side… aren’t.
And that’s when the real erosion of trust begins.
How to Push Back, Even If You’re Not a Lawyer
If you’re uncomfortable with Meta using your personal data to train its AI models, you’re not powerless. Yes, they’ve already started. But you still have the legal right to ask them to stop using your data for that specific purpose.
Last year, Citizen8 launched a simple two-click opt-out tool. No legal knowledge required: citizen8.eu/how-to-object-to-metas-ai-training-in-two-clicks
MIT Tech Review has a helpful breakdown of how opt-out rights vary by country: technologyreview.com – How to opt out
If you’re ready to walk away entirely, CNET explains how to delete Facebook or Instagram (it's not as seamless as you'd think): cnet.com – Delete Facebook
But opting out is just one form of action. If you’re someone who cares about digital rights but doesn’t feel confident navigating the legal side, you don’t have to do it alone.
Most of the privacy professionals I know would help a stranger draft a GDPR request for free! And many of the breakthroughs we’ve seen (whether in cookie banners, facial recognition bans, or default privacy settings) have come from non-profits and independent advocates pushing back until lawmakers had no choice but to listen.
If you're wondering how to help: support the ones doing the heavy lifting.
This directory lists dozens of privacy organizations and petitions worth your time: internxt.com/privacy-directory.
And if you ever need to report a violation or contact your national privacy authority, this link will take you straight to the relevant office for your country: GDPR contact map
AI Advocacy Isn’t Just for Researchers
If you’re concerned about AI and not sure where to begin, start here.
ControlAI is one of the few organizations making it easy for everyday citizens to speak up. They’ve created an easy-to-follow, three-step system to help you contact your elected representatives and push for AI regulation in concrete, specific ways.
You don’t need to be a policy expert. You just need to care.
AI authorities in many countries are still being formed, as some provisions of the AI Act have not yet come fully into force.
That means NGOs, researchers, and advocacy groups are shaping the foundations. If you want to join them (or just stay informed), this is a solid starting point: aiethicist.org/ai-organizations
And for a plain-language explainer of the AI Act itself, focusing on what citizens actually need to understand, this is one of the best summaries available: Future of Life Institute – AI Act Overview
You don’t have to be an expert to hold the line. You just have to not look away.
Closing Thought
I became a data protection officer because I believed in defending the fundamental right to privacy. I believed we were designing a future where privacy and autonomy would be preserved by default.
But lately, I’m not sure we’ve done that.
Maybe the erosion of agency doesn’t begin with dystopia.
Maybe it begins with default settings and subtle opt-ins.
Maybe it begins when we stop asking who benefits from our silence.
I still believe this work matters. But the traditional levers of accountability (complaints, audits, enforcement letters) often lag behind the systems we’re trying to govern.
And when the system itself becomes a black box, maybe the response can’t just be procedural.
Privacy and advocacy work? This will only matter if the people it’s meant to protect can feel it, use it, and believe in it again.
Without that, all we’re left with is compliance without conviction.