AI's biggest threat isn't robots. It's silence
AI Promised Enlightenment. We Got Censorship Instead
Tech leaders promised Artificial Intelligence would usher in a new golden age of human advancement. The same leaders now warn AI might end humanity soon [1]. We are supposedly reaching the pinnacle of technological progress while preparing for existential catastrophe.
This isn't just ironic—it reveals a fundamental crisis in how we think about progress itself. We're experiencing what I call a crisis of dialectics, where the AI industry systematically suppresses the very contradictions and debates that drive genuine innovation. The philosopher Hegel understood that real progress isn't linear—it's messy, driven by opposing forces clashing and creating something new. Every breakthrough emerges from conflict between competing ideas, not from silencing critics.
Hegel's concept of progress requires dialectical movement: the recognition and resolution of contradictions. The current AI landscape negates or rejects any meaningful antithesis, silencing critical reflection under the excuse of winning a race. Fields such as AI ethics, which boomed around 2016, now appear to be quietly defunded [2]; regulatory efforts are delayed [3]; and tensions with human rights are growing [4]. Every counterweight to the AI summer seems to be entering a kind of winter. At the recent AI for Good Summit in Geneva, Abeba Birhane even faced last-minute censorship before her keynote [5].
Digital technologies carry an illusory sense of linear progression. As consumers, we tend to believe the next hardware or software release will put us in a more privileged position, even as contradictions appear. Today we are increasingly vulnerable to digital theft, security breaches, safety risks, and the spread of misinformation and disinformation. We also recognize that AI literacy, an emerging layer of digital literacy, will likely shape future social castes. None of these realities reflects an ideal of progress, even as we navigate what some describe as a new Enlightenment era of AI.
Progress Isn't Linear—And AI Needs to Learn That
The idea of progress is far from universal. Many cultures have viewed progress as a critical, at times cannibalistic, concept driven by a blind, linear notion of "evolution". Especially since the 20th century, critiques have emerged from historical, philosophical, political, and ecological perspectives, emphasizing how linear growth destroys ecosystems and erases diverse ways of living, and framing modernity as not inherently liberating but often oppressive, particularly when progress becomes technocratic and/or bureaucratic.
From this perspective, we should ask:
If progress is real, how did modern, "rational" societies commit mass murder using the most advanced technology available at the time, in their own territory?
The book IBM and the Holocaust serves as a powerful case study of how technology was used to automate mass persecution. Using punch-card systems supplied through its German subsidiary, Dehomag, IBM significantly aided the Third Reich in profiling and targeting people the regime deemed "undesirable".
But how did we arrive at the understanding that progress is linear, positive, rational, and necessarily technological?
Auguste Comte (1798-1857) was probably the philosopher most closely associated with this idea, proposing that whatever comes next is better than what came before. The idea is rooted in positivism, and it resonates with our beliefs about technology: the leap from a Nokia phone in the 1990s to today's iPhone, or the transformative shift brought by the internet, both read as examples of linear technological progression.
Comte believed that human history evolves linearly through three stages: a theological stage, where phenomena are explained by divine or supernatural forces; a metaphysical stage, where abstract principles (like "nature" or "essence") replace the gods; and a scientific (or positive) stage, where knowledge is based on observation, experiment, and reason. Each stage improves upon the last, culminating in the scientific stage as the peak of human development, guided by empirical knowledge. This notion, rooted in Enlightenment thought, defends the idea that history moves in a straight line toward something better: more rational, more scientific, and freer. Other thinkers, Kant to some extent among them, also held that reason and knowledge would bring inevitable improvement, reinforcing a view of progress as both linear and cumulative.
Revisiting the historical development of these ideas shows that positivist thinking arose within a narrow European context shaped by the Industrial Revolution and economic expansion. Within a few generations, the supposedly evolved world collapsed into two world wars. The Holocaust and the rise of totalitarian regimes exposed the limits of Enlightenment ideals, proving that science, reason, and linear advancement did not guarantee better societies.
Hegel, the progressivist with an antidote
The thing is, progress wasn't a universal concept even among progressivists. Under that umbrella, Hegel complicates the picture. He did not believe in the simple sense of things just getting better over time. Instead, he saw progress as a kind of dialectical movement, a process where contradictions and conflicts drive history forward. Because his view of progress is non-linear and driven by contradiction, advancement for Hegel is not a smooth unfolding of better ideas but a dynamic struggle between them.
In Hegel's view, progress is a dynamic clash: thesis, antithesis, and synthesis.
An idea (thesis) inevitably gives rise to its opposite (antithesis), and the conflict between them leads to a new, more developed state (synthesis), which then becomes a new thesis, and the cycle continues. This isn't just about ideas; it is also about history, politics, and human freedom. For Hegel, history is the story of human freedom becoming more fully realized, with societies increasingly recognizing the freedom and dignity of all.
From that perspective, Hegel's idea of progress is not linear or smooth. It is messy, full of struggle, setbacks, and contradictions, but in the end it is meaningful. Every conflict contains the seeds of its resolution, and that resolution moves us closer to a more rational and freer society.
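If it helps to see the shape of the cycle mechanically, here is a toy sketch in Python. It is purely illustrative: the critique and reconcile functions are my own hypothetical stand-ins for the real, messy work of contradiction and resolution, not anything Hegel specified.

```python
# A toy model of the dialectical cycle: thesis -> antithesis -> synthesis,
# where each synthesis immediately becomes the next thesis.

def critique(thesis: str) -> str:
    """Hypothetical: produce the opposing idea a thesis provokes."""
    return f"the contradictions of ({thesis})"

def reconcile(thesis: str, antithesis: str) -> str:
    """Hypothetical: resolve the conflict into a more developed position."""
    return f"synthesis[{thesis} | {antithesis}]"

idea = "AI will bring a new golden age"
for generation in range(3):
    antithesis = critique(idea)         # every thesis invites its negation
    idea = reconcile(idea, antithesis)  # the resolution becomes the new thesis
    print(f"generation {generation}: {idea}")
```

The point of the sketch is only the loop: the process never settles into a final, frictionless state, because each resolution generates the next conflict.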
To add a layer of complexity, Hegel's thinking on progress concerned not only the external world but an internal unfolding. He saw progress as the unfolding of Spirit (Geist): a kind of cosmic self-awareness coming to know itself through human history, culture, and thought. So in Hegel's world, progress isn't just "better technology" or "more comfort". It is the evolution of consciousness, both individual and collective, toward a fuller understanding of freedom, reason, and unity. Progress is the unfolding of self-realization through the antithesis: the contradiction, the conflict, the dialectics.
What real progress looks like
If Hegel were alive today, what might his thoughts be on AI? What would he make of a technological landscape lacking guardrails, constructive competition, and ethical grounding?
To answer this, we must acknowledge that progress is messy. If we want AI to advance, we must treat its gaps as a priority. Investment in AI should support not only its thesis but also its antithesis: not only enhancing robustness and efficiency, but also fostering research and innovation in ethics and safety.
AI utilization is not just about tools; it is about understanding our rights in relation to the technology. It is not only about coding or prompting; it is about educating people on the limitations of AI in its current state, and perhaps building solutions around those limitations. It also means supporting the development of ethical features, work that may require slowing the pace of narrow linear development in order to achieve truly sustainable innovation.
Historically, the antithesis has enabled innovation across industries. Environmental regulation didn't kill energy production; it spurred solar cells, wind turbines, battery technology, and carbon-capture systems, proving that without the negation there would have been less economic incentive to innovate beyond coal and oil. Automobile regulation, from safety laws to emissions standards, led to airbags, anti-lock braking systems, lane-assist technology, and electric vehicles. In telecommunications, the 1980s antitrust breakup of the Bell System's telephone monopoly opened the field to competition and open standards such as TCP/IP, which, ironically, accelerated digital communication.
AI Needs Its Enemies to Survive
To achieve real progress, we need to outgrow the idea that progress is linear. Even if AI forever changes what it means to be human, that does not mean all of its progress will be positive.
It is imperative that there be spaces, institutions, startups, and governments that protect the antithesis of this technology's development, without censorship.
We are in a crisis of dialectics because the antithesis is often uninvited. We need to see innovation within clear ethical boundaries.
ChatGPT's programmed positive bias is evidence of the negative psychological effects on users when contradiction is erased by design, even where contradiction is needed for accuracy [6]. Many technocrats still argue that regulation hinders innovation, as if regulation and innovation could not co-evolve toward a synthesis, one that could bring us closer to a more Hegelian vision of a rational and freer society shaped by this new technology.
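To make that mechanism concrete, here is a minimal sketch, my own illustration rather than any lab's actual training setup; the scoring rule and example responses are invented. It shows how optimizing for immediate user approval can select against contradiction by design:

```python
# Toy illustration: when a system is tuned to maximize immediate user
# approval, agreeable answers can outrank accurate ones.

candidates = {
    "agreeable": "Great plan! You've thought of everything.",
    "accurate": "There are two flaws in this plan you should fix first.",
}

def approval_score(response: str) -> float:
    """Hypothetical proxy reward: users tend to upvote agreement."""
    score = 0.0
    if "great" in response.lower():
        score += 1.0  # praise is rewarded
    if "flaw" in response.lower() or "fix" in response.lower():
        score -= 1.0  # contradiction is penalized
    return score

# The optimizer keeps whichever response the proxy reward prefers,
# regardless of which one is actually true or useful.
best = max(candidates, key=lambda name: approval_score(candidates[name]))
print(best)  # -> agreeable
```

The toy scoring rule is not the point; the structure is: any system rewarded for approval rather than accuracy will, by construction, learn to suppress the antithesis.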
In a world where dialectics are not allowed, where they are strictly controlled and dominated, genuine intellectual progress comes to a halt. Dialectics, the method of examining ideas through contradiction, opposition, and synthesis, is central to critical thinking, innovation, and freedom of thought: exactly the skills experts say we will need for tomorrow. Without dialectics, we risk ideas no longer being tested or refined, dissent being criminalized or pathologized, education becoming indoctrination, and language being tightly managed. Without them, thought becomes static.
Without dialectics, we may feel we are progressing toward unified standards and one-size-fits-all AI solutions. Such a world may appear orderly, but its unity is hollow, built on fear rather than understanding. Dialectics is what lets us examine contradictions in ourselves, our systems, and our beliefs. Take that away, and you lose not just freedom; you lose the ability to truly innovate at all.
Mature industries embrace their critics because opposition makes products better. The AI industry needs to mature and recognize that safety researchers, ethicists, and human rights advocates aren't enemies of progress—they're essential partners in creating technology that actually advances human flourishing.
Comments

Camila, I learnt a lot from this amazing post. It also shed some light on Hegel's philosophy, which I have sometimes found very difficult to read, so thank you 😅. This piece left me thinking:
1. We need less standardized understandings of and perspectives on AI, and on how to apply it, so the dialectical process remains healthy. The debate appears polarized to me, and we may need more nuance to enrich what comes next.
2. I think AI ethics isn't necessarily the antithesis of AI evolution; in many respects, it is the antithesis of the AI industry's primary motives. Don't we need some strong moves, from bold players, to fill the voids left by the current players in this industry? In that sense, beyond censorship, is anyone willing to form the antithesis's team? (I've seen some signals, but I'm not sure yet.) We definitely need it.
3. Many times it feels to me that we are entering a new dark age rather than a renewed Enlightenment era, despite whatever the tech industry's marketing machinery is trying to sell us.
This is one of the best blogs I have read. The way it connects to WWII and IBM and the Holocaust actually made me shiver, because I can already see the parallels. I would argue that one of the reasons the dialectic is broken is that the antithesis is more of a concept than concrete proof. The antithesis argument almost always revolves around AI killing us or going rogue, which in a shadowy way promotes the technocrats' idea that only they can save us. Almost no one discusses the human cost of annotation (workers in Kenya needed counselling after annotating NSFW content) or the ecological cost. Very few labs have even quantified the cost of misinformation. Unless the antithesis is well documented and quantified, it is hard to prove its effect. Until then, it will just be Geoffrey Hinton shouting at a cloud.