Thinking is hard. AI makes it easy - or does it?
The argument for thinking for ourselves when LLMs can do it for us.
"ChatGPT, how do black holes work?"
I asked this question and got a response I could parrot back at a dinner party. But a nagging feeling of incompetence stayed with me: I don't actually understand the answer, and I'll likely forget it within a few hours. I can merely repeat memorized facts.
Today, as large language models seemingly answer any question and solve complex problems with a simple prompt, we face a profound philosophical crisis: What happens to human thinking when machines can think for us?
The truth is unsettling: we lose our ability to think for ourselves little by little. When we keep turning to AI to reason for us, we slowly stop using the mental muscles we need to make our own judgments. It doesn’t happen all at once - it’s a quiet, gradual shift that chips away at what makes us human.
The Erosion of Autonomy
Autonomy is the capacity for self-governance and decision-making, and it depends fundamentally on our ability to think for ourselves. Philosophers from Aristotle to contemporary thinkers have argued that this capacity for rational thought makes us distinctly human. Martha Nussbaum captures this perfectly in Not for Profit:
"Our mind does not gain true freedom by acquiring materials for knowledge and possessing other people's ideas but by forming its own standards of judgement and producing its own thoughts."
Thinking is an exertion of free will, an exercise of liberty. But thinking is difficult and uncomfortable, which explains these tools' seductive appeal. The erosion happens through three interconnected mechanisms that exploit our cognitive vulnerabilities.
The knowledge illusion is magnified due to AI’s speed and context window.
The Dunning-Kruger effect - where people with limited knowledge of a topic overestimate their competence in it - becomes dangerously amplified with AI. This cognitive bias is particularly insidious, as researchers Steven Sloman and Philip Fernbach explain in their book The Knowledge Illusion: "Because we confuse the knowledge in our heads with the knowledge we have access to, we are largely unaware of how little we understand."
When AI can instantly provide any answer, we start believing this external capability is our own understanding. We feel smart and informed, but it's a mirage. Easy access to AI-generated information makes us think we have deep understanding when we really possess only superficial familiarity. Without self-awareness about the limits of our knowledge, we can't engage with LLMs effectively - and more critically, we stop digging deeper into topics that matter.
Critical thinking muscles atrophy as we’re drawn to cognitive shortcuts.
While writing this article - on a deadline I'd long since passed - the pull to "just use the LLM" was incredibly strong. But handing over thinking to AI is like avoiding exercise: it weakens our critical thinking capacity and our ability to distinguish right from wrong.
Hannah Arendt saw this kind of thinking as a human duty - what she termed "active thinking":
"Active thinking is a highly engaged form of thinking that prepares one to act in the real world…one is aware that one is a responsible participant in the world… thinking became a tool with which people can bring new awareness into their actions. [Thinking is] a powerful tool of engagement."
When we consistently outsource this engagement to machines, we lose more than just cognitive skills - we lose our capacity for moral reasoning and independent judgment that autonomous beings require.
A recent meta-analysis aptly titled "The Unpleasantness of Thinking" studied 170 research papers across 4,670 participants. Regardless of population, task, or geography, people get grumpy when they have to do mentally demanding work. The study found that "for every point increase in effort there was a .85 increase in negative affect," leading researchers to conclude that "we really don't like thinking."
This aversion aligns with what psychologists call the "law of less work" - when given a choice, humans naturally opt for the path of least resistance. Our brains gravitate toward energy-efficient actions, making these cognitive shortcuts irresistible.
Of course, the allure of LLMs is strong. Humans love get-rich-quick schemes, and AI promises get-smart-quick results. Why struggle through complex reasoning when ChatGPT provides instant, articulate responses? Why wrestle with uncertainty when algorithms offer confident answers? The very difficulty of thinking makes AI's promise of effortless cognition deeply seductive.
But efficiency isn't everything. These tools remain prone to hallucination and error, so we need to practice judgment. To judge something, however, you need to have an opinion, an internal framework to compare A versus B rather than blindly accepting what's presented. Without developing this internal capacity, we become passive consumers of machine-generated content rather than active, autonomous thinkers.
The Path Forward: Reclaiming Cognitive Agency
This isn't an argument for rejecting AI entirely - that ship has sailed, and in many ways, AI offers genuine benefits. Instead, it's a call for intentional resistance to cognitive outsourcing in domains crucial to human flourishing.
Strategic Use, Not Replacement
I used Claude and ChatGPT to write this piece and will continue doing so. AI helps me find references I'd otherwise need academic credentials to access. It helps me trace a vague recollection back to its source, something I likely couldn't have done on my own. Having attended university when you still had to pull books from library shelves and ask reference librarians for help, I still feel like I'm cheating when I use AI. But that's a reflection for another piece.
The key is pairing AI with our own thinking - augmenting knowledge rather than replacing judgment.
Preserving the Struggle
We need spaces - in education, in work, in our personal lives - where we commit to doing our own thinking. Just as we might choose to walk occasionally despite having cars, we need to choose to think despite having AI.
Despite finding mental effort unpleasant, research shows that "people still voluntarily engage in mentally challenging tasks" because they "learn that exerting mental effort in some specific activities is likely to lead to reward." We need to cultivate appreciation for the rewards that come specifically from the struggle of thinking - the insights that emerge only through sustained mental effort, the satisfaction of wrestling with complexity, the growth that comes from cognitive challenge.
We must become more conscious of the difference between having information at our fingertips and truly understanding something. Active learning methods like self-testing, explaining concepts in our own words, and applying knowledge in novel contexts can help us avoid the illusion of knowledge that passive consumption creates.
There is something delicious about thinking: that moment when information clicks in your brain, the endorphin rush when you finally understand something, the sense of accomplishment that follows.
The Choice Is Ours
Every time we let AI write our emails, summarize our articles, or solve our problems, we're making a choice about the kind of humans we want to be. Each delegation is small, seemingly harmless, but collectively they risk something profound: our capacity for independent thought.
The effort of thinking, its difficulty and frustration, isn't a bug to be solved by technology. It's the feature that makes us human. And that's worth preserving, even if, no, especially if, it's hard.