It's Time for a High-Culture Revolution
We are a 'high-tech, low-culture' society, wielding god-like tools with social structures stuck in the Roman Era.
Last month, Anthropic’s head of AI safety, Mrinank Sharma, resigned with a two-page manifesto. In it, he declared an urgent need to evolve our culture to meet the power we now wield through our technology. “We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” he wrote. “Moreover, throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions.” As an anthropologist and expert in paradigm shifts, I wholeheartedly agree.
The high-tech revolution seems to be moving at an ever-quickening pace. But is our culture keeping up? As Sharma states, the issue is not exclusive to AI. We live in a system of incentives that rewards actions often antithetical to human values. Market incentives prize competition, growth, and maximized financial returns, while human values encompass empathy, responsibility, and a holistic view of interrelated systems. When we allow such incentives, rather than our wisdom, to direct our behavior, things go very, very wrong.
Minding the Gap
The AI revolution—the latest tech paradigm shift—is well underway. Though rapid adoption is in full swing, culture has yet to settle many open questions.
This is what I call the Quicksand Phase: the period of cultural liquefaction when everything is messy and no collective cultural agreement has yet been reached. It is the period in which we are most vulnerable—and most empowered—our actions having far-reaching consequences in a ‘never-been-done-before’ world.
The Luddites understood this. It was not fear of technology that drove the Luddites to burn a new textile mill in 1812. It was the wholesale destruction of hundreds of livelihoods in favor of cost savings (that is, the money incentive). “This dynamic of predatory managers using technology to destabilize the lives of workers or eliminate their jobs entirely is hardly just a nineteenth-century phenomenon,” says Greg Epstein in his book Tech Agnostic. “It is still how tech operates today, and it has been a foundational aspect of capitalism since the original Luddites... Luddism is not really about the rejection of technology at all—it’s about the rejection of a certain kind of political and economic deployment of tech.”
As Sharma said, when money is the goal, humanity is thrown out the window. “Nowhere do we see capitalism froth at the mouth more than in the VC room. Nobody is asked what their values are,” continues Epstein. As a startup founder, I’ve watched firsthand as the slimiest men shuffle millions between companies, funding the most outlandish ideas without first pausing to wonder ‘should we?’ The money incentive is the beginning and end of thought.
Data shifted the incentives in the early 2000s, when Google rewrote its value proposition from search results to data commodification, increasing its revenue by roughly 3,500% over the following few years. Shoshana Zuboff has been tracking the shift ever since, ringing alarm bells as the incentives slide from a product-driven to a data-driven economy. “Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data,” says Zuboff in her book The Age of Surveillance Capitalism.
And AI in particular is hungry for data. Starving, in fact. “The internet is a vast ocean of human knowledge, but it isn’t infinite,” an article in Nature stated. “Artificial intelligence researchers have nearly sucked it dry.” The danger is manifold: AI is not yet profitable and yet needs more data, so it must force its way into our lives any way it can — money, again, at the incentives wheel. “This entanglement of overwhelming powers for which no American has ever directly cast a vote is so dangerous because it represents an era in which surveillance is an essentially ungovernable social norm, a kind of modern force of nature to which we can only submit,” adds Epstein in his book.
When Decisions Are Made For Us
During the Quicksand Phase of a paradigm shift, boundaries dissolve. The new will try to get away with whatever it can, often engaging in scope creep that blows past what is healthy for a society, because the incentives reward doing so.
We already see it in everyday life. AI is being added to doctors’ visits, job-application review, and even government surveillance, long before it has been designed to account for its biases. Compound the issue with synthetic data—the proposed solution to the data wall AI has hit. Only when we step in (usually in some sort of public outcry) do boundaries become firm. As Sharma said in his resignation, we must become adept at doing so much earlier in the Quicksand Phase of any paradigm shift.
AI usage in private business is small fish compared to the AI-powered surveillance programs governments worldwide now employ. We’ve all heard it: “I have nothing to hide.” That phrase is volleyed back and forth across social media and dinner tables whenever concerns are raised about data collection, which now occurs through your daily interactions with social media, your TV, and even your car. This logical fallacy likely stemmed from the tech industry itself, where your data is more valuable than any dollar. A dollar can only be spent once; data can be bought and sold over and over, ad nauseam. It is too valuable for the industry to let public concern balloon.
“It is obscene to suppose that this harm can be reduced to the obvious fact that users receive no fee for the raw material they supply,” says Zuboff. “That critique is a feat of misdirection that would use a pricing mechanism to institutionalize and therefore legitimate the extraction of human behavior for manufacturing and sale. It ignores the key point that the essence of the exploitation here is the rendering of our lives as behavioral data for the sake of others’ improved control of us.”
Be Better than the Romans
Today, the cutting edge of tech includes AI surveillance and influencer bots built especially for you based on your data, because you had ‘nothing to hide.’ Humanity is smart enough to see through such brush-off logic, but we are not yet exercising that intelligence as the norm.
We claim to be at the height of human development, but really we are at the height of technological development—high-tech but not high-culture. Human development is still very much stuck in the Roman Era: we continue to use the same kind of democracy perfected during its reign and define civilization by access to the latest technology. But our technology has grown far beyond Roman tech; so must our society.
If we are going to evolve our culture beyond Roman achievement and adopt the wisdom Sharma invokes, we must learn to accept all of our humanity. And we must mend these cracks in our culture if we are to survive. Yes, survive, for technology brings not only loneliness but the threat of unfathomable destruction from weapons still untested because of their egregious might.
We are the only being on this planet proven capable of metacognition. We can think about our thoughts. Cultural evolution isn’t isolated to private citizens: it shapes those who become corporate and political leaders. The era of political leaders who fight like children, as if words don’t work, must end. “You want to get physical with me? Like an ape?” cries the main character in Marty Supreme. He’s exactly right. We need a culture that normalizes catching leaders at playground thinking and admonishes them for it.
Evolving Culture
Incentives are a tricky thing—they are of and by culture. That is, our held beliefs, institutions, and behaviors all feed into shaping incentives at scale. There are negative incentives (go to jail if you steal) and positive incentives (earn a living if you give your time, labor, and expertise). Culture shapes these, and these have changed over time. They go hand-in-hand, a clasp ever-enduring.
And culture is shaped by—you guessed it—people. Culture is the result of hundreds of interactions, all seemingly banal—until they aren’t.
Culture can be changed: just as during the civil rights movement (or any major movement), a paradigm shift is possible when enough people assert their agency over a system of incentives. That is what I propose we do now, while AI is in the Quicksand Phase.
You know the saying ‘if you didn’t vote, then you don’t get to complain about the government’? Culture in the Quicksand Phase is kind of like that. Culture is going to solidify whether or not you take action; isn’t it better to get involved so you don’t end up living in a society you hate? The biggest lie we’ve been told is that our actions have no consequences. But if you leave ChatGPT because OpenAI signed a U.S. military contract, that’s a vote. If you’re in charge of corporate AI usage and implement a privacy-first, ethical AI, that’s an even bigger vote. If you don’t let your kids use chatbots, that’s a vote.
This is the culture we have to work towards: one that actively participates in every aspect of its evolution. As of now, the incentives Sharma names push individuals to remain passive and weigh heavily in favor of large corporate entities and governments with military might. That means new innovations favor those two groups more than any other, and will be driven by and for them.
But we all participate in culture, because culture is developed by and of the masses. A culture that involves itself in its own innovation, rather than passively accepting the innovation handed to it, is more likely to be equitable, diverse, and sustainable.
If you are okay with nameless government officials and nefarious CEOs making decisions about your life, then by all means, take no action. But with culture in a liquid state around AI, every act you take is a vote one way or the other, and companies are waiting to see what you will do and how far you will let it go. Set your boundaries. Take action. Submit your vote.
Action in the Quicksand
If you have the ability and the time to read this, you have the privilege and latitude to exercise agency in a paradigm shift. These early days of development are the most important time to use it.
“Each of us must decide how much we can afford to participate in an endeavor that oppresses and divides at least as much as it uplifts and heals,” says Epstein. The people must determine the direction of AI, not the corporations and governments who stand to benefit from it financially. We each must insert ourselves into the conversation—because the creators and funders of AI would rather we didn’t. Here are some ways you can vote for a new culture, now:
Start watching how corporate and political powers interact. OpenAI and other mega tech powers lobby for a clean slate of operation, and attack their employees when they blow the whistle. “Two men at Google who do not enjoy the legitimacy of the vote, democratic oversight, or the demands of shareholder governance exercise control over the organization and presentation of the world’s information,” points out Zuboff. Their activities point to the vulnerabilities they intend to exploit next. This awareness gives you the opportunity to decide what matters before they decide for you.
Build the habit of checking tech’s claims. “What if the billions of people who live in or near destitution and poverty do so not in spite of the efforts of hyperconnected, superinformed experts... but because of them?” questions Epstein. “If you can truly predict the future skillfully enough to imagine jobs for everyone, you can also plot out—perhaps even subconsciously—how to hold onto and consolidate your own preexisting power, privilege, and dominance in that future.” There is no way to check the work of a tech visionary: they have a future in mind, but the future they describe to you is the one you can agree to. The work they do, and the vision they hold, may conflict directly with the ideal they tout. A healthy skepticism keeps you immune to their advertising and manipulation so you can act freely.
Push for regulation and protections while we’re in the Quicksand Phase. For example, we desperately need privacy protections that cover this new era of tech. We need our biometrics protected like our passwords, so government officials can’t force a phone open via Face ID; our personal visage copyrighted, so strangers can’t make deepfake videos of us or of our public figures; and a DEI review of every AI deployed at any level of commercial use, so that every public implementation, from HR to medicine, promotes inclusion rather than discrimination. Make the case to your human representatives before corporate lobbies speak for you.
Take a stand at your workplace. The majority of people are still uncertain about adopting AI, and much of AI is not yet ready for use by populations broader than white men. That means your boss is likely unsure which technology to choose and how far to take it. You can shape its rollout, if only you try.
What’s more, take a stand in your home. In many ways, we are living on fumes, surviving on the last vestiges of ethics and values left from the last analogue generation(s). Upcoming generations won’t have the privilege of those experiences. What then? We haven’t prepared them for a fully tech-driven world; our mores, values, and etiquette have not evolved as technology has. It is up to each parent and mentor to pass on the tools for coping with human existence, its every high and low, so technology does not become inexorable.
Sharma’s resignation letter should alarm you. Even more, his call to action must not be ignored: it is time we evolve our culture to meet the might that is our technology and use metacognition to think critically about ourselves and our future. That requires action from all of us, for each of us participates in culture. The consequences if we don’t are too awful to fathom.
Invitation to an in-person event in Berlin
We’re doing something on April 23rd in Berlin that we are truly excited about:
Three people we deeply admire are joining to think through AI, Memory and Migration:
→ Roshan Melwani (Oxford Institute for Technology and Justice)
→ Manuela Verduci (Kiron Digital Learning Solutions)
→ Mekonnen Mesghena (Heinrich Böll Foundation)
A human rights lawyer, a social entrepreneur, and a policy thinker, each looking at the same question from a different angle.
The evening explores how two of our oldest human instincts — the need to move and to remember — intersect with our newest technology. And it asks what happens when we let algorithms touch the stories that make us who we are.
We’ve also prepared something so that attendees don’t just listen. Our goal is for you to feel what’s at stake — and then sit with that feeling in a room full of others who felt it too.
Come join the conversation.
→ Register here