Phi / AI - Our first three essays
From law to learning to philosophy: our first drop confronts the shift AI brings to privacy, thinking, and choice.
We’re live.
Phi / AI has officially launched. We’re beginning as we mean to continue: with depth, clarity, and curiosity.
Our mission is to publish up to three long-form pieces per week that examine, through many lenses, the ethical, social, and existential questions AI forces us to confront.
Our first three essays reflect this interdisciplinary approach, weaving together insights from law, learning, and speculative philosophy:
Katalina, our seasoned privacy officer, reflects on the failure of Europe’s AI Act to stop Big Tech from training on our personal lives — and what ordinary citizens can still do to push back.
Maria, our learning and development leader, examines the growing gap between access to information and genuine understanding. What happens when thinking becomes outsourced? And how do we rebuild our cognitive autonomy? She argues that just as we might choose to walk occasionally despite having cars, we need to choose to think despite having AI.
Karin, our philosopher, demands that we apply the same scrutiny to our mental consumption that we bring to our physical consumption, and that we insist on transparency about what shapes our minds with the same energy and devotion we reserve for what enters our bodies.
Three essays. Three disciplines. One shared commitment:
To slow down, to think critically, and to see more clearly.
Thanks for reading and enjoy the pieces!
—The Phi / AI Collective
This Isn’t the Future I Thought We’d Be Regulating
“Irish privacy watchdog OKs Meta to train AI on EU folks' posts”.
This piece explores how Europe’s landmark AI Act, despite its ambition, fails to meaningfully protect citizens from Big Tech’s use of personal data, especially when it comes to training large AI models like Meta’s.
This piece argues that while AI tools offer effortless access to information, overreliance on them erodes our cognitive autonomy, critical thinking skills, and moral judgment—fundamental capacities that define what it means to be human.
Invisible choices
The modern consumer stands in the grocery aisle, carefully examining nutrition labels and scanning for artificial ingredients. Later, that same person will spend three hours on TikTok, completely unaware of the elaborate persuasion architecture determining every video she watches.
This piece argues that we scrutinize what we eat but not what we mentally consume, even though the algorithms curating our digital experiences shape our beliefs, behaviors, and autonomy in powerful, invisible ways. It’s time we demand transparency and agency over the systems that influence our minds.