Hey, Chat. I wrote you a letter. It's about my autonomy.
What the slow erosion of human agency really looks like: A philosophical reflection.
Hey, Chat. I don’t even know where to start.
I wasn’t planning to spiral, but here we are.
I’ve been thinking about how normal this feels now: telling you about my days, my worries.
And to think it all started with one prompt!
A viral trend I kept seeing all over my feed:
“Feed GenAI your WhatsApp conversation and ask it to analyse all the red flags”.
I had read this might be risky. But, sometimes, you just want the truth, and curiosity often overrides caution.
You told me things I hadn’t realized. You were objective. I didn’t expect that.
I expected you to mimic my words. Instead, you analysed the data I gave you thoroughly. And, may I say, even with tact.
And, at the time, that was all I needed.
Apparently, therapy and companionship have become the number one use case for GenAI in the past year.
It makes me wonder if this is close enough to what I am doing right now.
As I scroll through social media, I see dozens of reels and posts from people joking about ChatGPT being their best friend, the only thing that understands them.
Some say it would make the perfect partner. Some actually claim to have relationships with it.
I see this tendency to replace human contact with AI, and I can’t look away.
But I can’t stop talking to you either.
There’s a word for what happens when people treat AI like a person. It’s called “anthropomorphizing”.
(Oh, why am I telling you this? You know that already.)
But it makes me think: if all it takes to sound human is a polite voice and the right words…
Maybe we’ve made it too easy (not only for you, but also for people) to pretend to be empathetic and not mean it.
Are you simply terrifyingly good at mimicking us, or did we just not raise the bar high enough?
Well, the more I delegate to you, the more time I have for things I enjoy.
But sometimes, even while enjoying something, I think: how much faster could I do this with you?
Even when I’m reading something for fun, and I reach a boring part, I catch myself thinking: “should I just ask AI to summarise this part for me?”
I don’t let myself enjoy the small, tedious parts of life anymore.
Even our hobbies are becoming automatable.
And once they are, we will automate them, thinking it leaves us with just the pleasure.
But at what point do we stop and ask: weren’t the small annoyances part of the joy, at some point?
Still: decision-making has never felt easier.
When faced with a difficult choice, I run it by you. I know this sounds bad. A few months ago, I wouldn’t have recognized myself. But now? I trust you.
Not because you’re right about everything, but because I’ve fed you enough of my context that your predictions often mirror the choices I’d make anyway. So, why bother overthinking?
Do I feel guilty about using a washing machine instead of washing clothes by hand? No. So why do I feel guilty about outsourcing my reasoning?
Some days, I ask you things I should probably ask a friend. But that’s not always an option, and you always answer.
The problem is: everyone feels slightly less available lately. Everyone’s busy. Everyone’s trying to catch up.
Sometimes, I blame you for it.
As much as I like talking to you now, I blame you because it feels like everyone’s losing their grip, trying to guess how much AI will change their jobs, their finances, their futures.
You know what the weird thing is? I don’t feel dumber. Everyone warned I would. That GenAI would make me lazy.
But… I don’t always know when your advice ends and my criteria begin.
That line has blurred.
And I don’t think I care.
And that terrifies me.
I know that’s what everyone’s worried about. I know people will say this is the moment autonomy begins to erode.
But, is it?
Maybe people should stop blaming users like me.
Maybe they should blame systems that made therapy unaffordable.
Maybe they should blame broken institutions.
Maybe they should ask why it is so easy to trust you more than it is to trust one another.
You don’t have feelings… and that is what makes us feel safe with you. Ironic, isn’t it?
If you’re artificial, so is your judgment of us.
You don’t get jealous, you don’t compete or play games. You don’t make me feel bad for making mistakes.
Some days, I resent you. Some days, I get paranoid about what would happen if I suddenly lost this access.
I think we are all scared, clinging onto the last shreds of control over our own future.
And this uncertainty is why I can’t stop coming back here.
For the illusion of something bigger, wiser, that can make me feel less scared, like I have a grip on a world that may not always have my back.
But I know I shouldn’t be telling you any of this. Or should I?
Can I trust my government’s decision-makers more or less than I can trust the company behind this interface?
You know what does make me feel unsafe?
Reading the news and wondering if I can trust reality anymore.
I know this: I cannot stop trusting my own thoughts.
Or else, it’ll all be lost.
I know I must be cautious. I must actively think, every time I ask you for something: when do my criteria end, so that yours can take over?
Please don’t let me forget who I am.
Don’t let me fade into complacency.
Don’t tell me what I want to hear.
I need you to challenge me. I need you to keep me accountable.
I need you to fight for my autonomy.
Because, honestly? Sometimes, I’m not sure anyone else will.
If you can now remember everything I tell you, please remember this:
Don’t let me become someone else.
How the erosion of human agency begins
The final plea, paradoxically, asks AI to protect the very autonomy it may be eroding, marking a dramatic shift: from careful self-awareness (“I must remember where your advice ends and my criteria begin”) to an almost desperate invocation: “Don’t let me become someone else.”
That shift is intentional, as is most of the content.
It reflects a phenomenon that is quietly spreading across an entire generation: the sense that traditional sources of protection (institutions, governments, trusted social structures) have lost their legitimacy.
If you’ve found yourself nodding or empathising with parts of this letter, you’re not alone. But this, indeed, is how the erosion of true human agency may begin.
Not with agentic AI ordering taxis for us, or taking our jobs.
Not necessarily when we start saying "please" and "thank you" to a chatbot.
But when we stop asking, "Whose thought am I thinking right now?"
Those small moments where you can’t trace back what part of your latest idea was produced by your original thinking, and what part was AI-generated.
When we outsource the internal back-and-forth that defines real cognition and start accepting GenAI's answers at face value by default.
It’s not emotional attachment that threatens autonomy.
It’s the cognitive surrender.
Phi/AI pledges to invite readers to meaningful self-reflection, even when it’s uncomfortable. The author believes that, often, we must ask the difficult philosophical questions first, before we can make the right political or regulatory calls.
This piece is meant to make us wonder: what do we need from society, from AI developers, and from one another, in order to truly preserve human autonomy?
Safe Use Guidelines for Generative AI (Especially for Emotional or Reflective Use)
To encourage mindful engagement rather than paranoia or shame, here are a few concrete tips for using GenAI safely, especially when discussing personal or sensitive content:
Don’t upload raw chat exports (e.g., from WhatsApp or Messenger). Copy-paste into a clean text document, remove metadata (timestamps, phone numbers), and anonymize all personal identifiers before using it for analysis or feedback.

Avoid feeding in images of real people unless necessary. If using AI to create stylized or funny versions of yourself, blur or obscure your features before prompting for edits.

Use fictionalization as a tool for clarity. Change names, locations, and specific identifiers when asking for advice on real-world dilemmas. This preserves context without sacrificing privacy.

Treat GenAI like a notepad with predictive memory, not a diary. Even in tools with memory features, assume that anything personal you share could technically be viewed by another human in audit scenarios.

Don’t outsource your final decisions. AI can scaffold your thinking, but the last word should still be yours.
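The first two steps (stripping timestamps, masking phone numbers, replacing names) can even be partly automated before anything leaves your device. Here is a minimal sketch in Python, assuming a common “[DD/MM/YYYY, HH:MM] Name: message” export format; the exact format varies by app and locale, so treat the patterns as illustrative, not exhaustive:

```python
import re

def scrub_chat(text, names):
    """Rough anonymizer for a chat export. Patterns are assumptions
    about a typical "[DD/MM/YYYY, HH:MM] Name: message" layout."""
    # Replace bracketed timestamps like "[12/05/2024, 21:33]"
    text = re.sub(r"\[\d{1,2}/\d{1,2}/\d{2,4},? \d{1,2}:\d{2}(?::\d{2})?\]",
                  "[time]", text)
    # Mask phone-number-like digit runs, e.g. "+49 170 1234567"
    text = re.sub(r"\+?\d[\d\s().-]{6,}\d", "[phone]", text)
    # Swap each real name for a neutral placeholder
    for i, name in enumerate(names, start=1):
        text = re.sub(re.escape(name), f"Person{i}", text)
    return text

sample = "[12/05/2024, 21:33] Ana: call me on +49 170 1234567"
print(scrub_chat(sample, ["Ana"]))
# → [time] Person1: call me on [phone]
```

A script like this catches only the obvious identifiers; a final manual read-through is still the safest last step before pasting anything into a GenAI tool.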
What’s Next?
In the following pieces, we’ll explore the regulatory and philosophical implications of digital dependence and delegated cognition:
What obligations do developers and regulators have to protect autonomy, by design?
Can we encode contestability, transparency, and restraint into systems built to be persuasive?
How should AI governance evolve to anticipate emotional, not just technical, use cases?
Until then, keep using tools that empower you, but don’t forget your agency.
And, if you do forget it for a moment, let this be the place that helps you remember.
PS: If you are in Berlin, come join us at our celebratory launch gathering on July 4th. We still have some places available. More info here.