Better Humans?
How Transhumanism shapes tech and what an alternative could look like.
I feel tingly. Something is off.
I am at an AI meet-up in Berlin, one of many such gatherings in tech hubs all over the world. Yet this one feels different. I came to the dimly-lit hacker space while researching a photo project on AI and spirituality. Unlike other meet-ups, tonight there is almost no discussion of startup ideas or business models. Instead, people share experiments using AI to find traces of consciousness, build artistic experiences or pursue other commercially unviable endeavours.
The night gets later. The people more interesting. I meet Mars, who introduces me to Cyberdelics, a group that explores how technology might expand the human experience. Over the following months, I will meet the group again at conferences, meet-ups, late-night sessions and a hackathon. And I will come away with a very different idea of what tech could do than the one held in the startup culture I usually move in.
The photos from that journey are now part of an exhibition opening this Friday in Berlin. This article explores the philosophy behind them.
The Ideology behind the AI Race
The tech world runs on Transhumanism, a philosophy whose implicit values shape research priorities and the design of AI systems. These values do not remain abstract. They influence which problems are prioritised, which futures are imagined, and ultimately the direction of a society that is increasingly dependent on AI.
And yet, that night in that basement, I unexpectedly encountered an alternative. Not a competing product vision, but a fundamentally different understanding of humanity and its relationship to technology. Before turning to this alternative, it is necessary to examine Transhumanism itself more closely.
The central idea of Transhumanism is to use technology to overcome human limitations and biological constraints.
With that, it speaks to ancient human desires to live longer, suffer less and not be limited by our fallible bodies.
Transhumanism has emerged as a central ideological driver shaping contemporary technological developments. It not only inspires technological progress but also legitimises the enormous energy, capital and attention devoted to it.
And it is not only big enterprises: plenty of employees and founders invest years of their lives into startups that promise financial returns as well as longer human lives through technology. The ideas for these improvements focus either on biology or on digital means.
The biological approach is concerned with the human body and how to extend its lifespan. Some ideas in this space are already so commonplace that a startup manager I talked to exclaimed after some confusion: „ah, you just mean longevity“. Adherents of longevity try to extend their lives through physical routines like exercise, sleep, meditation and ice bathing, and through special diets that range from taking supplements to exclusively eating raw meat; the latter being one of the weirder gatherings I attended.
The poster child of the longevity camp is former entrepreneur Bryan Johnson, whose main goal, „don’t die“, is purely quantitative and overlooks the quality of the life he is trying to extend. Scientific research, in contrast, is concerned with prolonging the „joy span“, not only the „life span“.
An event that already carries its Transhumanist agenda in its name is the Enhanced Games, which will take place in May 2026 in Las Vegas and allow athletes to use any substance, without drug testing.
At the extreme end, life-prolongation arrives at cryonics, a technology that promises to freeze your body for resurrection after death; or just your brain, if you want it cheaper. Cryonics has been around for some time. Peter Thiel considered making it an employee benefit in the early PayPal days. His investment in a cryonics startup aligns with a broader trend among tech billionaires, who have poured billions into Transhumanist startups. Some companies focus on prolonging current lifespans, while others aim to design the genetic makeup of future generations through gene editing.
A second, more radical strand of Transhumanism abandons the biological body altogether and focuses on digital means. Elon Musk’s brain-computer interface company Neuralink explicitly gestures toward a cyborg future, in which human limitations are overcome through digital integration as opposed to „simply” biological optimisation.
I first encountered these ideas as a teenager reading Tad Williams’ Otherland. It depicts a caste of billionaires attempting to upload their consciousness from their failing bodies into a digital world. Often, the relationship between tech companies and science fiction is reciprocal: science fiction extrapolates emerging technologies into the future, while tech culture repeatedly draws inspiration from fictional artifacts such as Star Trek’s communicator or William Gibson’s Neuromancer.
The latest developments in LLMs have reanimated the „mind upload“ idea and given rise to several companies that create avatars for families to interact with after a person’s death. We have also seen an avatar give an impact statement in court, and traditional churches lean into the new technologies by holding AI sermons or having AI hear confessions.
Longtermism, Effective Altruism, and the Utilitarian Logic
Philosophically, digital consciousness serves as a thought experiment in longtermism, a movement concerned with the long-term future of humanity. Nick Bostrom, one of its thought leaders, proposes that the future might contain huge numbers of „digital humans“, far more than the people alive today. A moral philosophy that cares about each human equally should therefore give these future beings great weight and make decisions today that ensure, and even accelerate, their creation.
Even in its non-digital form, longtermism is concerned with the lives of future people. A prominent voice, William MacAskill, promotes the idea „that positively influencing the longterm future is a key moral priority of our time“. Jeff Bezos similarly justifies his investments in space technology with the „a thousand Einsteins“ humanity would produce if it expanded to a trillion people. Elon Musk promotes his SpaceX company as a contribution to humanity’s survival as a „multi-planetary“ civilisation.
Longtermism stands in the philosophical tradition of Utilitarianism, which seeks the „greatest good for the greatest number”. It does not see a person as intrinsically valuable but as an instrument for creating utility, such as aggregate well-being.
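The expected-value arithmetic behind this reasoning can be made concrete. The numbers below are illustrative placeholders, not figures from Bostrom or this article; the point is only that once vast hypothetical populations enter a utilitarian calculation, they dominate even after steep probability discounts.

```python
# Illustrative sketch: longtermist expected-value weighting.
# All numbers are hypothetical placeholders, chosen only to show the mechanism.

present_people = 8e9   # roughly the number of people alive today
future_people = 1e16   # a hypothetical vast future population of "digital humans"
p_future = 1e-4        # assume only a tiny chance this future ever exists

def expected_moral_weight(population: float, probability: float = 1.0) -> float:
    """Utilitarian expected value: every life counts equally,
    so moral weight scales with probability-discounted head count."""
    return population * probability

# Even heavily discounted, the hypothetical future swamps the present:
# 1e16 * 1e-4 = 1e12, i.e. 125 times the weight of everyone alive today.
print(expected_moral_weight(present_people))
print(expected_moral_weight(future_people, p_future))
```

Under this logic, almost any present-day priority can be outweighed by a sufficiently large imagined future, which is precisely the move critics of longtermism object to.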
MacAskill is also one of the founders of Effective Altruism (EA), a movement that wants to maximise the good in the world. He founded a platform that helps job seekers find a career that maximises their positive impact over a lifetime. On this view, taking a high-paying finance job and donating a large part of the salary to philanthropic causes can be the optimal way to do good. This logic departs from the common understanding of „doing good” as social or pro-bono work: for EA, optimising for income can, from a strictly rational perspective, be the greatest contribution to society. The causes worth supporting are conveniently selected by another EA platform on the basis of evidence-based evaluations that explicitly aim to exclude emotional considerations.
„Existential risk” and the role of EA in the AI Discourse
As future people are potentially many and thus morally weighty, EA looked early on at „existential risks“ that could threaten humanity’s survival. Artificial general intelligence (AGI) was quickly identified as one such risk. EA heavily influenced the resulting discourse, popularising terms such as „AI safety“ and „AI alignment“.
OpenAI, for example, was initially funded by EA donors and was supposed to focus on AI safety research; Elon Musk justified his early funding with concerns about AI safety. The short-lived removal of OpenAI’s CEO Sam Altman in late 2023 was a power struggle in which EA-affiliated board members worried that Altman was harming AI safety.
Other famous adherents include Sam Bankman-Fried, an EA follower and major donor before his FTX cryptocurrency exchange collapsed. The „Zizians”, an EA-affiliated group concerned with existential AI risks, later became notorious for sect-like dynamics that culminated in violent crimes.
In the bigger picture, debates about AGI serve as a major distraction. On the one hand, they fuel investor fantasies and inflated promises by U.S. tech firms. As with biological Transhumanism, many of the loudest proponents have direct financial interests in the field’s expansion.
On the other hand, they shift attention to speculative futures and doomsday scenarios, sidelining discussion of AI’s present impact on employment, public discourse, education, and society.
For example, discussions about AGI do not merely warn of existential risk; they also fuel a business vision in which artificial agents replace most employees, allowing executives to make money without dealing with workers.
In Europe, AI development is still met with a degree of caution and regulatory guardrails, while in the United States the emphasis remains on rapid build-out and scale. This contrast became tangible during an AI training session I attended at a prestigious French business school. The invited AI evangelist had little to say when I asked about AI’s risks at dinner. He acknowledged that recent graduates face increasing difficulty entering the workforce, but for him this was less a societal concern than a business opportunity.
Cyberdelics: Techno-optimists with Humanistic Values
The search for other, less hyped and less growth-at-all-costs technological futures brought me to that dark hackerspace in Berlin and the conversation with Mars about Cyberdelics.
Cyberdelics are immersive experiences that aim to induce psychological states similar to those of psychedelics, such as presence, awe or ego dissolution, but without the substance. They give access to these experiences to people who cannot or do not want to take drugs. The experiences often involve virtual-reality headsets with the goal of creating these „altered states of consciousness“. The hope is that such exceptional experiences create lasting „altered traits“ that develop human capabilities like empathy beyond the specific experience. The Berlin chapter is part of a larger community with origins in Mexico and groups in multiple cities worldwide, connected by highly mobile members. The people I met were developers, artists, musicians and, above all, idealists.
Many of them had been involved in other community-building projects. Money only surfaced as a concern when, after an event, they realised there were insufficient funds to cover outstanding costs. The work was sustained by personal effort and dedication. For the hackathon, they invited outside participants to spend a weekend creating prototypes that combine technology, body feedback and art, on the condition that the resulting projects be put under an open licence. There was some money from sponsors, and apparently the mobile members had financial resources of their own, but none of the „created value“ was „captured“, as startup lingo would call the extraction of resources out of a system for the benefit of external shareholders.
Money or Community
This rejection of commercialisation is one of the key differences between Cyberdelics and Transhumanism. Cyberdelics is not extractivist but community-driven, prioritising shared experiences now over protecting individual contributions for profit; Transhumanism, by contrast, is driven by a few very loud egos.
When asked, the Cyberdelics members strictly distanced themselves from Transhumanism and its cold, ego-driven culture. Instead they promoted community, which is hard to believe when the person telling you this sits encapsulated in their VR goggles.
Maybe Transhumanist organisations have to be organised that way because they are large and influential, while Cyberdelics remains a small, less profit-driven movement. But non-commercial, idealist communities need organisation too, and without money there have to be other tools for coordination. There are, after all, large-scale projects grounded in similar ethics, such as Wikipedia or open-source software.
Trust functions as the primary governance mechanism. Members share insights openly without legal protection, relying on community norms and reputation, which is closer to science or art than to business. Status is awarded by contribution, not capital. This is intrinsically rewarding but creates vulnerability: the fusion of friendship and shared mission can lead to the exclusion of perceived „outsiders”, and the intensity of this communal commitment harbours a risk of burnout.
There are also similarities between Transhumanism and Cyberdelics. Both are tech-optimistic. Both are at odds with the way things are currently done. Both strongly believe in their own ideas, are driven by active builders rather than passive recipients, use technical language and tools, and are overwhelmingly male.
Two Visions for a „Better Human”
The central difference between Transhumanism and Cyberdelics lies in their underlying assumptions about human value and progress.
Transhumanism is grounded in a utilitarian framework that subordinates the individual to societal benefit or even to hypothetical future populations. Human worth is measured by contribution to that utility rather than recognized as intrinsic. This logic is fundamentally anti-democratic: the individual does not possess value in itself and is not regarded as equally entitled to participate in decision-making.
Applied to human enhancement, this framework risks deepening social stratification. Access to enhancement technologies will be uneven, dividing society into those who can afford biological optimisation and those who cannot. In its most extreme form, a person’s life chances are effectively determined before birth through genetic selection. Inequality is not merely reproduced but amplified, as a small elite gains longer lifespans, enhanced capabilities, and the power to shape both the development and distribution of these technologies.
Cyberdelics give us an idea of what a world could look like in which technology is used not merely to overcome our human limitations but to deepen our humanity. In that logic, we are not just biological machines but embodied beings with capacities that technology can help activate. And the movement shows different ways of working together with these technologies.
This contrast matters at a time when technological capabilities are accelerating and regions with different value systems are racing to develop advanced AI and enhancement technologies. The question is which assumptions will guide that advancement. Whose values are encoded in the systems we build, and whose interests are prioritised? As a society, we must decide which way we want to go: what would a democratic, humane technological future actually look like, and what is the world we want to live in?
It is not about rejecting technology but about first deciding what society we want, and letting that decision guide what we build. Maybe the goal should not be to overcome our current condition but to become fully human.
This also raises the question of contribution beyond markets. Not all technological work needs to be commercial. How can we deploy our skills developed in business and engineering in non-commercial contexts, supporting communal infrastructures, shared knowledge or alternative futures that are not driven by extraction or scale alone?
The photos that kicked off this research will be part of an upcoming exhibition at Kunstquartier Bethanien in Berlin. The vernissage is this Friday, January 30, at 7pm. Consider yourself invited. The exhibition will run until March 13, 2026. Some of the artwork is presented below as well.