AI: Can We Let Go of Thought?
What if AI could carry the burden of memory and repetition, leaving us the space to play, to imagine, to live?
Let’s play with the idea that outsourcing our thinking might help us free ourselves from thought. At first glance, it makes little sense. We often define the “self” through thought, and we describe our species as the thinking creature among animals, even among other kingdoms. If, like me, you take pleasure in thinking, this may be unsettling. We’ve seen a recent and welcome call to revive critical thinking in our engagement with AI. Decades ago, a philosopher I deeply admire suggested, repeatedly across the 1960s and 1970s, that technology may already have replaced our capacity to think in a logical, mathematical sense. The philosopher I have in mind is Jiddu Krishnamurti. From a spiritual vantage point, he often urged us to ask, from the depths of the mind, who we really are, noting that computers can surpass our cognitive, mathematical, and mnemonic skills. Inspired by his talks and writings, I’ll sketch an optimistic idea: outsourcing thought to AI or technology so that astonishing qualities of being can blossom1. If you’ve watched Stanley Kubrick’s 2001: A Space Odyssey (1968), this shouldn’t surprise you; we’ve been probing the nature of our relationship with intelligent machines since at least then. Why, then, do we treat our present moment as uniquely unprecedented?
I start by drawing a line between what I call living-thinking and death-thinking, taking cues from Krishnamurti. I frame this difference with Huizinga’s idea of learning as play, and with Wittgenstein’s2 notion of language-games from his later period. In this essay, I present AI as a real opportunity for us to delegate death-thinking so we can devote ourselves to living-thinking, a more joyful and playful creative movement. Knowledge may need to step down from its pedestal, so that AI can be seen as an individual, psychological, and spiritual opportunity. Aware that intellectual games are flexible but life is stubborn, I’ll use concrete examples showing how, in non-idealized ways, AI frees me from certain kinds of thoughts.
We tend to define thinking in purely cognitive terms and forget its ludic nature, learning through experimentation. Our obsession with knowledge and memory feels like a refusal to let go of what is dead; it echoes our primitive relationship with death.
I’ll often step into psychological and sometimes spiritual terrain. This remains an essay; I’m not instructing anyone on how to engage with AI. I hope to offer a perspective that helps readers build their own ethical framework in resonance with their values.
Why might we need to free ourselves from thought?
We are both incredibly creative beings, capable of walking into the unknown, and trained, conditioned creatures, taught to repeat and conform. We need to step out of our conditioning, out of the known, so that we can experience something beyond, something new, something alive. Death- and living-thinking cohabit in us. By evolution and by fear of exclusion, we drift toward death-thinking as we age, god-fearing and driven by belonging.
What is death-thinking?
We’ve learned, individually and collectively, that inherited patterns, our defining narratives, our silent and spoken ideas, have enabled survival. They’ve also confined us. Thought contours society for survival, and society returns the favor: a functional, ancient cycle. From a pragmatic point of view, there are so many forces molding the “self” that it is hard to sustain the belief in “free will”. From evolutionary forces to the psychological unconscious and social pressures, our decisions may be less singular than our Judeo-Christian lens suggests. We tend to feel that decisions and responsibilities are ours, but we often forget to ask who is asking the question, who is the one deciding, and to inquire deeply enough to see the illusion dissolve.
We may need to free ourselves from thought to better recognize who we are. We are movement, while knowledge is static. From Krishnamurti’s view, change cannot be grounded in ideas or collective pressure, since consensus rests on knowledge, and knowledge resists the unknown. Change begins with individual attention and a great amount of energy. Facing the unknown, which is everything before us, requires awareness of the “self”, which is a distillate of what we know about ourselves, braided with the conscious and unconscious history of humankind and with the information of our species living in our DNA. Knowledge is necessary for survival, yet we have layered it exponentially, perhaps as a playful flaw, paradoxically blinding ourselves to the unknown. Seeing, living, meditating may mean observing what is, both internally and externally, if there is such a difference.
Memory keeps us alive: how to find resources, how to return home. Yet, through complex relations, language, technology, and cultural artifacts, we’ve used our boundless capacity to limit ourselves in co-existence. We are social creatures. Belonging was, and still feels, vital. We adapt to fit the mold, even when it is rotten. Rooted in fear, collective memory organizes around tribalism and the survival impulse of belonging, manifesting in nations, nationalisms, and symbolic group identities, which produce separation and war.
I argue that the immense training data of LLMs and their statistical algorithms are not different from what we’ve long used to operate: knowledge. Psychologically, knowledge has served fear, defining ourselves by separation to secure belonging. We sublimated this into nations and group identities with socio-economic, geographic, religious, or racial labels. Machines differ not only in speed and memory; we may need memory to remember who we are, but machines don’t care who they are. Machines do not need a why for memory; memory is the very reason they exist.
Everything we store in the “self” in order to belong to a group, country, class, race, or title, everything we use to identify ourselves by separation, is what I call death-thinking: rebuilding with dead parts of what was, its heaviness limiting our capacity to see what is.
We can outsource a huge portion of this operation to machines, to AI. Death-thinking is deeply rooted in language; it is necessary, and now we have large language models (LLMs). Naming, labeling, and wording enable evocation, but mediate perception3.
What is living-thinking?
However, there is a quality of intelligence beyond programming or prediction that LLMs do not achieve. I call it living-thinking. It arises from our experimental, ludic way of learning and observing, evoked memorably in the ape sequences of Kubrick’s 1968 film. Huizinga argued for the priority of play nearly a century ago, reinforcing a post-humanist view:
Play is older than culture, for culture, however inadequately defined, always presupposes human society, and animals have not waited for man to teach them their playing4.
Living-thinking requires chaos, mistakes, friction, and a step outside the known. It is fueled by our animal nature and instinct. We often create most freely as children, in play, when we don’t “know” as much. The price is disobedience, ignorance, even conflict and violence, refusing what consensus has accepted, whether grounded in science, tradition, or memory. This can trigger the fear of rejection and real exclusion. We are terrified of ostracism. Though death- and living-thinking intersect, they’re often incompatible. We are rigorously conditioned to trust thinking and memory as a safe place; indeed, they seem safe, but they also bind and poison us.
Here lies a paradox in our urge to belong, our quest for connection. When seeking connection, we face a choice: (a) recognize our singularity and serve the group from that place, risking rejection (living-thinking), or (b) recognize the group’s mold and do whatever it takes to fit, killing authenticity (death-thinking). This choice is often situational and graded rather than purely binary; it is a living, perpetual choice.
Living-thinking is not a safe place. It locates meaning in relationships, in how we order words, our singular way of infusing spirit. It precedes language; a baby is born into living-thinking and is trained into death-thinking. Our injection of being through syntax, words, context, body language, and imperfections can resonate with others. We play language games5 to test the connection by evoking images, experiences, and emotions. LLMs predict; they don’t intend or experience. That’s why we often sense AI-generated content, not by counting em dashes, but by the absence of felt connection.
To summarize, death-thinking is identification with the group, fitting the mold through language and its symbols. It feels safe, and it is culture and society. Beyond it, and limitless, living-thinking rises from our ludic impulse to test, to seek authentic connection, to face the unknown with a child’s bravery, accepting ignorance, and risking everything. Precisely because AI can shoulder so much memory, synthesis, and reproduction, we gain a chance to keep more of our energy in living-thinking while delegating death-thinking to machines.
Why is AI an opportunity to set ourselves free from thought?
Two points make AI a tool for freedom. First, for the first time, we can play with results, scenarios, and prototypes through language, programming, mathematics, and symbols at unprecedented speed, learning or abandoning ideas via fast experimentation and simulation. Second, in this era of over-production of information, AI offers a way to lean into living-thinking while we outsource much of death-thinking to LLMs and other AI models.
The three examples below illustrate how I have found AI taking over some of the burden of death-thinking:
My first example comes from my own experience in academia. Seventeen years ago, when I wrote my undergraduate thesis in History (2008), three tools extended my reach while cutting research time: (1) Excel and Access, to structure a database from unstructured primary sources; (2) Word, to write without handwriting or typewritten copyedits; and (3) JSTOR, which kept me current with global scholarship. Still, most of my time was spent with notebooks and physical books; campus terminals were sometimes the only way to access JSTOR. I’m nostalgic about that tactile work, like film photography’s analog charm, but nostalgia isn’t function.
The non-creative load, what I call death-thinking, was heavy. Academic systems often reward demonstration of mastery over genuine novelty. If I’d had today’s AI, I could have redirected time from literature demonstration toward the unknown, communication, boundary pushing, and new approaches. AI could have handled much of the knowledge marshalling, while I focused on living-thinking as a trained historian, perhaps reaching a wider audience and freeing my imagination6.
I am not arguing that we should get rid of death-thinking. It’s crucial for consensus, especially in academia and science. I am arguing that we can engage AI to take over more of our death-thinking so that we can spend more time playing and less time carrying the load that machines can bear. Going back to my undergraduate thesis, if I were writing it today, my approach would be different: I would locate candidate passages by searching authoritative editions, verify page numbers and edition details, and store citations in a bibliographic manager for reproducibility. I would also cross-check with digital archives to ensure that quotations are not hallucinated or mistranscribed, as in the small sketch below.
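As a rough illustration of that last step, here is a minimal sketch of how one might flag suspect quotations by fuzzy-matching them against a local plain-text copy of an authoritative edition. The file names and the 0.9 threshold are assumptions for illustration, not a fixed method.

```python
# Minimal sketch: flag quotations that do not closely match a local,
# authoritative plain-text edition. File names and threshold are illustrative.
from difflib import SequenceMatcher

def best_match_ratio(quote: str, source_text: str) -> float:
    """Slide a window of the quote's length over the source text and
    return the best similarity ratio found (1.0 is an exact match)."""
    window = len(quote)
    step = max(1, window // 4)
    best = 0.0
    for start in range(0, max(1, len(source_text) - window + 1), step):
        candidate = source_text[start:start + window]
        best = max(best, SequenceMatcher(None, quote.lower(), candidate.lower()).ratio())
    return best

if __name__ == "__main__":
    with open("authoritative_edition.txt", encoding="utf-8") as f:
        source = f.read()
    with open("quotes_to_check.txt", encoding="utf-8") as f:
        quotes = [line.strip() for line in f if line.strip()]
    for quote in quotes:
        ratio = best_match_ratio(quote, source)
        verdict = "ok" if ratio > 0.9 else "CHECK: possible hallucination or mistranscription"
        print(f"{ratio:.2f}  {verdict}  {quote[:60]}")
```

Nothing here replaces reading the edition; it only surfaces which quotations deserve a second look before they enter the text.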
My second example comes from my experience as an entrepreneur. Seven years ago, I learned about Web3, digital assets, and crypto market cycles. High-quality asset assessments (Do Your Own Research, DYOR) are laborious, requiring technical audits of smart contracts, blockchain block flows, tokenomics, social signals, use cases, and more. With rigor and patience, and with external experts, you spend weeks. Worth it, but costly.
A colleague and mentor shared a strong prompt for ChatGPT’s deep research mode, or for Perplexity, to run DYOR. Judgment remains mine, as does the triage of hallucinations. But that workflow saves time, letting me focus on risk management and evaluate assets I’d otherwise miss in a fast market.
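To show only the shape of such a workflow, here is a minimal sketch of assembling a DYOR prompt from a fixed checklist. The checklist items, the asset name, and the wording are illustrative assumptions; my colleague’s actual prompt is not reproduced here.

```python
# Minimal sketch: assemble a structured DYOR prompt from a checklist.
# The checklist, asset name, and wording are illustrative, not an actual prompt.
DYOR_CHECKLIST = [
    "Smart-contract audits: auditors, dates, unresolved findings",
    "Tokenomics: supply schedule, unlocks, holder concentration",
    "On-chain activity: transaction volume, active addresses, bridge flows",
    "Team and governance: track record, treasury management, voting power",
    "Use case and competition: who actually needs this, and why now",
    "Social signals: developer activity, community quality, paid-promotion red flags",
]

def build_dyor_prompt(asset: str) -> str:
    lines = [
        f"Research the digital asset '{asset}'.",
        "For each point below, cite sources and state what could not be verified:",
    ]
    lines += [f"{i}. {item}" for i, item in enumerate(DYOR_CHECKLIST, start=1)]
    lines.append("Finish with the three strongest reasons NOT to invest.")
    return "\n".join(lines)

if __name__ == "__main__":
    # Paste the result into a deep-research tool; judgment and
    # hallucination triage stay with the human reader.
    print(build_dyor_prompt("EXAMPLE-TOKEN"))
```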
In short, AI helps me externalize the fueling of decisions (gathering, scraping, computing) so I can spend more energy on living-thinking: responsiveness to context, emerging opportunities, and risks.
The third example comes from my writing practice. This is perhaps the clearest case of outsourcing death-thinking to AI. Approaching prose from living-thinking is often more productive and joyful. Knowledge is required, but much of its handling can be externalized. For this post, Huizinga’s thesis on play adds force. I recalled the core idea, then asked an AI assistant for relevant quotations and bibliographic details. The choice to include Huizinga, where to place him, and which quote to use remained mine. I didn’t need to hunt for the book or re-read chapters to extract passages; AI did that part.
A second process worth outsourcing is copyediting. I keep drafts and prior essays in an AI workspace, and I spin up specialized threads to copyedit in my voice. English isn’t my first language, so I seek balance: consensus and clarity without sacrificing authenticity. AI helps me tune for an Anglophone audience without carrying all the death-thinking load myself. I verify quotations against primary editions, avoid unsourced block quotes, and reject paraphrases that do not fit my purpose or resonate with my voice.
My workflow: (1) draft the entire post in English, a living-thinking practice in my second language; (2) enrich it with quotes and references that emerged while writing; (3) run multiple self-edits, aware of my second-language limits; (4) do a full copyedit in my AI project; (5) selectively accept edits, as in the sketch below; (6) send it to my human editor and iterate. If my voice is authentic and the message clear, I’m satisfied. AI saves time and defends space for free, ludic writing.
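For step (5), here is a minimal sketch of one way to keep the acceptance of edits deliberate: diff the AI-edited text against my own draft and review every change explicitly. The file names are hypothetical, and a unified diff is only one of many ways to do this.

```python
# Minimal sketch: show AI-proposed copyedits as a unified diff so each
# change can be accepted or rejected deliberately. File names are hypothetical.
import difflib

with open("draft.txt", encoding="utf-8") as f:
    draft = f.readlines()
with open("ai_copyedit.txt", encoding="utf-8") as f:
    edited = f.readlines()

for line in difflib.unified_diff(draft, edited, fromfile="draft.txt", tofile="ai_copyedit.txt"):
    print(line, end="")
```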
Based on this thread of thought, I have a brief observation on the potential impact on organizations, given the reports of high failure rates in enterprise GenAI7: my hypothesis is that there is a lot of confusion in identifying which processes are dead but functional (standardizable and automatable) and which depend on living-thinking (creative, relational, situational). What is dead can be automated; the unknown cannot. Hence the push to standardize and make processes predictable, as clearly explained by Olga Trögger in her recent post in Phi/Ai.
Overall, these examples show that current AI capabilities cannot replace living-thinking, the ability that precedes language and arises from singularities and team synergies that resist capture by statistical models. Living-thinking has practical applicability but no fixed methodology; it requires ownership of spirit, not just calculation. I hope to develop this beyond my current observations.
If this holds water, the focus of AI implementation is not technology or frameworks but people and their living potential. Can we teach people to cultivate and amplify living-thinking skills? Better still, can we unlearn operating from death-thinking as the default?
Some thoughts before closing
I believe we can outsource death-thinking to AI so we can reclaim our nature and step into living-thinking more fully. Can machines reduce the energy we spend in survival mode so we can create? My answer, from experience, is yes, but it’s our choice, and it requires energy and attention. I am aware that even though this is a great opportunity, it is not necessarily the way we’ll choose. We must not yield ourselves to LLMs or surrender our voices to their outputs. Current AI is great at carrying out the processes characteristic of death-thinking, which are a huge burden on our shoulders and spirits: memory, knowledge, processing and referencing information, better grammar and syntax, frameworks, methodologies, and all the rest of it.
We find ourselves in others, through resonance, through suffering, and through experiment. We need to play with the tools at hand, including AI and LLMs. Applying our ludic nature, personally, academically, scientifically, and organizationally, is an immense benefit in our service, but it’s not an easy task, since we are fighting our conditioning. The movement is a deeply spiritual and psychological impulse: letting go of what we were trained to do and of what we fear by experience, and stepping into the out-of-the-box feature we already have, living-thinking.
Is the fear of ostracism and our urge to belong by connecting through consensus something we can delegate fully to machines or AI? Not yet; we are emotionally and spiritually very slow creatures, and we won’t move as fast as we can envision or think. My intuition tells me that enterprises, startups, venture capital, and corporations, by allocating human beings as human resources or assets, treat them in many ways as non-living, functional creatures serving predictable, often repetitive processes; these inheritances of the Industrial Revolution make replacing humans a corporate and capitalist dream. What these environments often forget is how important our living-thinking nature is for keeping all the dead, repetitive processes alive, since those processes also interact with living matter and present circumstances. We infuse living-thinking into the rest of it; organizations won’t, and never will, be as singular as some have predicted (‘solo-startups’), because at the very least they need to serve other living beings, which we often call the market. The irony of capitalism is that it dreams of dead matter to relate to: a huge consumer mass with predictable patterns, behaving predictably enough for corporations and sellers to project their profits and losses. That is a dystopian capitalist dream.
More than AI making companies incredibly productive, cutting costs and creating efficiencies, we’re facing the opportunity to create a new type of organization, one that relates to nature, and to the market, as living matter. Those who are determined to let go of the burden of what is known, and to let machines and AI carry it for us, can bet on ventures, spin-offs, or internal experiments with a totally different quality and reason for existence, which can benefit all of us.
I find it fascinating that current AI, and most likely the next generations of AI models, will probably serve individual freedom, in a deeper psychological and spiritual sense, better than they will serve capitalist dystopias. As I stated at the beginning, I know this is an optimistic approach, but I hope I have uncovered how real and plausible the opportunity is.
J. Krishnamurti, “Saanen 1981, Public Talk 1,” video recording, Official J. Krishnamurti YouTube Channel.
Ludwig Wittgenstein, Philosophical Investigations, posthumously published 1953, English edition 1958 (Basil Blackwell). See also The Blue and Brown Books (preliminary studies; lectures 1933–35; published 1958).
J. Krishnamurti, “The First and Last Freedom,” 1954, p. 92: “The word is not the thing. The description is not the described. The word ‘tree’ is not the tree.”
Johan Huizinga, Homo Ludens, English translation, Routledge, 1949, p. 1.
Ludwig Wittgenstein, Philosophical Investigations, posthumous publication 1953, English ed. 1958.
A recognized French historian, Georges Duby, highlighted imagination as essential to the historian's work: “L’histoire exige de la clarté, de la lucidité, de la patience mais aussi du style et de l’imagination. Du lyrisme en somme.” (“History demands clarity, lucidity, and patience, but also style and imagination. Lyricism, in short.”) Interview with Antoine de Gaudemar, October 1984 (“Entretien avec Antoine de Gaudemar – Octobre 1984”).
“State of AI in Business 2025: The GenAI Divide,” MIT NANDA, July 2025.