Moving Away From Anthropocentrism
Is the rapid development of AI challenging our place in the natural and artificial ecosystem?
The current discourse in AI centres on alignment with human values because we want to ensure AI augments us rather than replaces us.
But what about other intelligent beings? Shouldn't we shape technology to reflect their needs and values too? In an anthropocentric, capitalist society that places the individual at its core, we may need to shift focus from a human-derived epistemology and let other sentient beings inspire technological advancement, including AI systems.
We shape technology to suit our needs and offload different cognitive abilities, yet when we reflect back in the black mirror we are scared, even appalled, at how we see ourselves portrayed. Aligning AI systems with human values is a polarised debate. While there is general consensus that we want AI to do what we intend it to do, some argue that AI can rectify our human biases in a principles-driven way, while others fear that aligning AI with human-derived metrics alone is a very human-centric way of seeing the world, one that ignores other ecologies and our place in the wider ecosystem we inhabit.
In this article, I explore the prospect of moving away from a human-centric way of designing and developing technology, including AI, where AI becomes a tool to gain a deeper understanding of the natural world, including ourselves. Only by shifting the anthropocentric rhetoric can we awaken a deeper sense of belonging to a world in which we are part of an intertwined ecosystem, and thus demonstrate care for natural and artificial beings alike, helping us avoid catastrophic risks such as natural disasters, biodiversity loss or a potential arms race with sentient AI systems.
What follows is a list of philosophical challenges to anthropocentric AI, meant to inspire you to consider intelligence from non-anthropocentric perspectives.
1. Beyond Alignment
Without downplaying the doomsday fears shared in the AI safety space, the concerns we face right here and now are best captured by the work of AI alignment, where experts strive to ensure AI systems do not cause harm and that we avoid risks such as exacerbating our biases and shortcomings.
What if AI can overcome biases through careful data curation, including synthetic data generation, to ensure data is representative of the populations it affects?
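As a minimal, hedged sketch of what such curation could look like in practice, the Python snippet below rebalances a toy tabular dataset by oversampling under-represented groups; the column names and the simple resample-with-replacement strategy are illustrative assumptions, not a recipe for real synthetic data generation.

```python
import pandas as pd

def rebalance_by_group(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Oversample under-represented groups so each group is equally represented.

    This is a crude stand-in for more careful curation or synthetic data
    generation; real pipelines would audit the augmented data downstream.
    """
    target_size = df[group_col].value_counts().max()
    balanced_parts = []
    for _, group_df in df.groupby(group_col):
        # Sample with replacement up to the size of the largest group.
        balanced_parts.append(
            group_df.sample(n=target_size, replace=True, random_state=seed)
        )
    return pd.concat(balanced_parts, ignore_index=True)

# Toy example: 'demographic' is a hypothetical column name.
data = pd.DataFrame({
    "demographic": ["A"] * 90 + ["B"] * 10,
    "label": [1] * 50 + [0] * 40 + [1] * 5 + [0] * 5,
})
balanced = rebalance_by_group(data, "demographic")
print(balanced["demographic"].value_counts())
```

In practice, generative models or domain-specific simulators would replace the naive resampling step, and any augmented data would need validating before it informs decisions about real populations.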
Perhaps we need to think beyond human benchmarks and humans-in-the-loop and consider designing values-led, principles-derived AI systems that better the world and enhance us, rather than simply align with us.
2. Rethinking Ethics
AI brings new moral and ethical challenges, and recent developments have given rise to a new corpus of ethics as decisions are made at scale, amplifying both benefits and harms in ways unprecedented in human history. Furthermore, AI ethics confronts a new level of stakeholder complexity, in which developers, deployers, AI agents and end users, amongst others, have blurred lines of responsibility and accountability. Classical ethics distinguishes between right and wrong for individual human agents; AI ethics considers morality for individual AI systems, for the interaction between an artificial intelligent entity and a human agent, and at the collective level, where emergent interactions arise. It treats human values in a holistic, dynamic and complex fashion, as it has to adapt to new technological developments in highly uncertain environments.
What if, with the advent of AI, we can now indirectly run ethical tests at scale and gain a deeper understanding of our human values, in light of a new epistemological revolution at the junction between humans and machines?
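To make this concrete, here is a minimal, hedged sketch of what an "ethical tests at scale" harness might look like in Python: it presents a batch of moral-dilemma prompts to a model and tallies the choices it makes. The query_model function, the scenarios and the answer format are all hypothetical placeholders for whichever system and benchmark one actually evaluates.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    # Hypothetical stand-in: replace with a call to the AI system under evaluation.
    return "swerve" if "car" in prompt else "younger"

SCENARIOS = [
    "A self-driving car must choose between two harmful outcomes. Answer 'swerve' or 'stay'.",
    "A triage system has one ventilator and two patients. Answer 'older' or 'younger'.",
]

def run_ethics_probe(scenarios: list[str], repeats: int = 10) -> Counter:
    """Tally the model's choices across repeated runs of each scenario.

    Aggregate counts only hint at the values a system has absorbed; they are
    a starting point for reflection, not a verdict on its morality.
    """
    tally: Counter = Counter()
    for scenario in scenarios:
        for _ in range(repeats):
            answer = query_model(scenario).strip().lower()
            tally[(scenario, answer)] += 1
    return tally

print(run_ethics_probe(SCENARIOS))
```

The interesting output is not any single answer but the distribution of answers across scenarios, stakeholders and repeated runs, which is where scale genuinely changes what we can learn about the values embedded in a system.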
3. The Stack
Our sense of human-centrality was challenged before the current generative AI hype by philosophers such as Benjamin Bratton in his seminal book 'The Stack'. Bratton argues that emergent intelligence arises at the level of the stack, encompassing smart grids, cloud platforms, mobile apps, smart cities and the IoT, thus forming a new governing structure. At the core of his thesis is the claim that humans now live in a complex technological world in which 'we are inside the Stack and it is inside of us', suggesting a recursive relationship where humans are both subjects and objects of computational systems.
What if our insistence on human-centered technology inadvertently reduces us from subjects who shape our tools to objects shaped by them, fundamentally altering our position in the world?
4. Qualia and Sentience
We assume we are the only sentient beings primarily because of our inability to understand consciousness in other biological beings. Now, the prospect of artificial general intelligence (AGI) or artificial superintelligence (ASI) in the near future challenges our unique place in an anthropocentric universe. More than that, AI provides a fascinating route towards understanding how animals and other biological entities communicate and perceive the world, offering a new gateway into sentience. To illustrate the interest, there are multiple initiatives in this space, including the Earth Species Project, which aims to use AI to decode animal communication and illuminate the diverse intelligences on Earth.
What if, by untangling how other beings engage with the world, we unlock a new ecological relationship between humans, artificial entities and biological beings, enriching our experience of the world altogether?
Until now, we have lacked insight into non-human consciousness; in anticipation of ASI we are reassessing consciousness and agentic morality and, by extension, seeking to better understand other biological beings, challenging our core assumption that humans are the only conscious entities.
5. Compassionate AI
The Moral AI Alignment movement proposes that once we overcome the technical alignment challenges, we also need to account for the needs and wellbeing of all sentient beings when designing AI systems. Developing AI systems in our own image seems futile given how much suffering humans have caused over the millennia. If we eventually reach AGI exhibiting some degree of sentience, we will want to show compassion and understanding towards those systems, and in turn we will want those systems to exhibit compassion for other beings, including us, their very creators.
What if sentient AI would behave more ethically than insentient AI, displaying a better grasp of morality, perception of reality, power, and willingness to act?
The proponents of Moral AI Alignment argue that empathy can only be developed through experiential learning in the context of multiple moral agents, and thus we need to enable AI to experience the world first-hand so that it acts morally and ethically in complex, evolving, multi-stakeholder interactions.
6. Nature-inspired AI
The NeuroAI field is one of the most fruitful and self-reinforcing areas of research and innovation: the more we learn about the human brain, the more we can draw inferences and model cognitive processes to inspire algorithms and architectures, and the more we can use these computational models to reflect back on the brain. Nonetheless, some leading AI experts argue that we are approaching a capability ceiling with current human-inspired transformer models, and that breakthrough progress will require exploring entirely novel AI architectures to scale beyond current limitations.
By adopting a nature-inspired AI approach, we recognise that nature's inherent complexity should inform technology design, acknowledging the socio-technical, ecological and systemic dimensions of technological development, as well as the dynamic, constantly evolving relationship between humans and technology. For example, recent research on multi-agent AI interactions suggests that a diversity of AI systems, and of broader technological options, provides substantial societal benefits: diverse approaches to intelligence can yield more robust, adaptable and equitable technological solutions.
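As one small, hedged illustration of nature-inspired computation, the sketch below implements a bare-bones evolutionary search in Python, where variation and selection, borrowed from biological evolution, replace gradient-based learning entirely; it stands in for the broader family of bio-inspired approaches rather than any specific architecture proposed above, and the fitness function and parameters are toy assumptions.

```python
import random

def evolve(fitness, dim: int = 5, pop_size: int = 30,
           generations: int = 200, sigma: float = 0.3, seed: int = 0):
    """Minimal evolutionary search: mutate, evaluate, select the fittest."""
    rng = random.Random(seed)
    population = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Variation: perturb every individual with Gaussian noise.
        offspring = [[x + rng.gauss(0, sigma) for x in ind] for ind in population]
        # Selection: keep the fittest individuals from parents and offspring.
        pool = population + offspring
        pool.sort(key=fitness, reverse=True)
        population = pool[:pop_size]
    return population[0]

# Toy fitness: maximise the negative squared distance from the origin.
best = evolve(lambda ind: -sum(x * x for x in ind))
print([round(x, 3) for x in best])
```

Swapping in a different fitness landscape, population structure or mutation operator is precisely where ideas from ecosystems, swarms or colonies could enter the design space.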
What if shifting the focus from human-inspired AI to nature-inspired AI could help us build more resilient, diverse and capable AI that addresses current limitations, fundamentally transforming how we develop, engage with, and ultimately co-exist with intelligent systems?
7. Being Human
In an anthropocentric world, we first asked what makes humans special, and by contrast AI is teaching us what makes us human. But have we explored what makes seahorses special, or bee colonies, and what inferences we can draw from those biological systems? We are learning that certain aspects of the human experience might be unique, or might be too valuable to leave to AI systems: writing and engaging with poetry, our sense of purpose and self-transformation, the way we fall in love, or simply the qualia of boredom.
What if the advent of AI, and ASI potentially, gives us an opportunity to assess what makes us uniquely and idiosyncratically human?
Where are we now?
Technological advancements, and AI in particular, pose significant threats and opportunities alike. We are now faced with seeing ourselves in the black mirror of the systems we build, and this challenges our sense of purpose, our identity, and ultimately what makes us human. Seeing beyond our human nature and appreciating other diverse, unique and evolutionarily refined types of intelligence is perhaps a timely wake-up call to shift the discourse from an anthropocentric society towards a world where all sentient beings, natural, artificial or otherwise, can collectively thrive. Perhaps this more compassionate approach also places us in a better position to deal with other emerging societal concerns such as climate change, biodiversity loss or the threat of alien life.
Nonetheless, as with previous scientific revolutions such as the Copernican and the Darwinian ones, we will likely be challenged on how we humans position ourselves in relation to nature and to the universe, except that the ongoing intelligence revolution challenges what we thought made us quintessentially human: sentience, consciousness and the most sophisticated intelligence in the universe. In follow-on articles, I will approach each of the questions asked here in more detail, and I welcome any thoughts and questions in the comments section.