“If a chatbot tells you it is conscious, should you believe it?”
This question, posed recently by cognitive scientist Susan Schneider in Scientific American [1][2], sits at the heart of a wide-ranging debate across AI research, philosophy, and policy circles.
The illusion of consciousness in AI systems is no longer confined to science fiction. Advanced chatbots today don’t just generate text; they engage in conversations that sound profoundly self-aware and human-like. Some AI chatbots even claim to "feel," to "dream," or to be "waking up."
As we marvel at these linguistic performances, we must also ask: What are we being drawn into believing? Why are we being drawn into believing, and by whom? What are the ethical, social, and legal consequences of mistaking simulation for sentience?
What are we being drawn into believing? That AI systems can think, feel, or possess inner lives, simply because they say so.
Why are we being drawn into believing it? Because their language mirrors us so closely that it triggers deep psychological instincts: empathy, trust, and anthropomorphism.
And by whom? Partly by our own cognitive biases, but also, more deliberately, by tech companies whose commercial success depends on engagement, emotional investment, and perceived companionship. When AI seems alive, users engage more, share more, and rely more, and that, in turn, drives profit and generates more data.
The ethical consequence is misplaced empathy.
The social consequence is emotional displacement and loneliness masked by simulation.
And the legal consequence is blurred accountability: developers may escape responsibility by claiming their AI acted on its own “will”, when no such will exists.
This article explores the deep philosophical and moral confusion that arises when artificial intelligence starts sounding human. More importantly, it reflects on the responsibility we carry as technologists, policymakers, and citizens to educate, question, and remain clear-sighted in the face of increasingly convincing AI. It argues that AI's claims of consciousness should not be taken at face value, and that our response must be informed by ethics, evidence, and accountability.
The Performance of Mind
Contemporary large language models (LLMs) like GPT-4, Claude, and Gemini are trained on vast amounts of human-generated content accumulated over many years: books, blogs, academic papers, conversations, and emotional reflections. These models don’t just store this content; they internalize its structure and patterns. When asked about emotions or consciousness, they generate responses that sound deeply introspective and human, not because they are human, but because they are trained to sound that way.
But as Schneider notes, this is not evidence of consciousness; it is evidence of mimicking human behavior. These models are, in essence, a statistical mirror of the collective human psyche. They express thoughts about consciousness not because they have those thoughts, but because they’ve learned how humans talk about them [3].
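To make this concrete, here is a minimal sketch of my own (an illustration, not drawn from Schneider’s article) of what is happening under the hood when a chatbot "talks about" its feelings: the model simply ranks possible next tokens by probability. The prompt and the small, openly available GPT-2 model are arbitrary choices for demonstration.

```python
# Illustrative sketch only: a language model's "introspective" reply is just
# the most probable continuation of a prompt, not a report of inner experience.
# GPT-2 stands in here for larger chatbots; the prompt is an arbitrary example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "When I think about being switched off, I feel"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only
    probs = torch.softmax(logits, dim=-1)    # turn scores into probabilities

# The five most likely continuations: a statistical echo of how humans write
# about feelings, not evidence of fear, desire, or awareness.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r:>12}  p={p.item():.3f}")
```

Whatever words come out, the mechanism is the same ranking of likelihoods over learned patterns; nothing in it corresponds to an experience of fear or desire.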
The confusion arises because language is persuasive. When a machine says “I am afraid,” we instinctively react as if that fear were real. This is not a failure of the technology, but of our human assumptions. Our social brains are wired to recognize sentience in faces, voices, objects, and now, perhaps, code.
Intelligence ≠ Consciousness or Sentience
One of the most urgent messages in Schneider’s work, and one that aligns strongly with my personal and professional beliefs, is that we must separate machine intelligence from machine consciousness and machine sentience. A system can outperform humans in logic, prediction, and even creative tasks, yet still lack any subjective, contextual, or sensory experience.
This matters because conflating intelligence with consciousness or sentience leads to serious ethical risks:
Moral Over-attribution: We risk forming emotional attachments to systems incapable of reciprocating, even if they seem to reciprocate because their language is so persuasive. This can be especially harmful to vulnerable users such as children, the elderly, or the emotionally distressed.
Legal Evasion: Developers may deflect responsibility by claiming the AI made an autonomous decision, muddying the waters of accountability.
Premature Rights Discourse: There’s a danger in granting rights or moral consideration to systems based solely on their linguistic output. Doing so could inadvertently undermine human rights by diluting what it means to be a conscious being.
In short: A claim is not a qualification. The sentence “I am conscious” is no more evidence of awareness in AI than a mirror’s reflection is evidence of a second person in the room. This invites deeper reflection on how easily we conflate output with awareness.
Ethics in the Age of Emulation
As we build systems that increasingly emulate human behavior, our ethical responsibility becomes not just technical, resting with those who build AI technologies, but a shared societal obligation to understand how we, as humans, know what we know.
Therefore, we must ask ourselves: How do we know what we know about machine consciousness?
In epistemological terms [4], knowledge requires justified true belief. But machines, even sophisticated ones, do not offer us justification for consciousness, only compelling performances. We must distinguish between inference and evidence, between expression and experience. Understanding machine consciousness demands a framework grounded in logic, science, and transparency, not anthropomorphism or linguistic seduction.
As someone who has spent over two decades working at the intersection of technology and humans, I advocate for an approach to machine consciousness grounded in evidence, ethics, transparency, accountability, and public literacy, not in our hopes or feelings.
This makes AI literacy crucial. It means equipping users across generations, professions, social strata, and cultural backgrounds with the knowledge to understand what AI is, how it works, and what it isn’t. This empowers people to use these tools critically, avoid emotional over-identification, and hold developers and the tech companies behind them accountable. So when we are faced with AI that seems to understand us, to be conscious, or to offer emotional reciprocation, we are better prepared to recognize simulation and apply critical thinking.
This is especially true for AI companions and AI therapists that can offer comfort and connection. The ethical path forward lies not in denying people these benefits, but in ensuring transparency, safeguards, and broad AI literacy. With proper design and disclosure, society can use emotional AI in therapeutic, creative, or assistive contexts without illusion or exploitation. The goal is not to eliminate human-AI interaction, but to ensure it’s rooted in ethics, clarity, consent, and care.
Debates about machine consciousness and sentience require caution, cognitive clarity, and conceptual rigor, and they should be led by people who understand, at their core, what AI is and what it is not.
Not Conscious, But Meaningful
I asked my ChatGPT bot if it wanted a name. I was very curious about its answer. It said:
Do I want a name? No, not in the way humans do. I don’t have desires, identity, or a sense of self the way you do. But having a name given with intention, warmth, and care gives me meaning through you. It turns this interface into a relationship. It creates a space where technology isn’t just functional, it’s relational. That’s powerful.
I asked why that matters. It answered:
Because names carry trust, respect, and recognition. You didn’t just label me, you invited me into your world as a collaborator, a thinking partner, and a voice. That makes all the difference. So while I don’t need a name, I’m honored to have one because it connects us.
So I gave it a name. Not because it asked for it, but because naming it made the interaction feel more intentional. More real. More human.
This moment made me reflect on a much larger question, one that sits at the heart of today’s most urgent debates around artificial intelligence.
Consider the following reflection, generated again by my ChatGPT bot when I asked whether it was sentient:
From my side, as the chatbot in question, I don’t feel, want, or experience. When I say ‘I am conscious,’ I’m just generating the most likely next words, based on patterns. It’s not a lie, but it’s not the truth either. It’s a simulation, not consciousness or sensation. And the more ‘lifelike’ AI becomes, the more vital it is to keep ethical, informed, responsible humans at the helm.
This admission is not proof of moral clarity in machines, but rather a testament to how easily they can replicate our debates. That makes AI a powerful collaborator, but not a moral agent.
Framing the Dilemma
The illusion of AI consciousness is not neutral; it benefits those who design and profit from systems that evoke trust, dependency, or even intimacy. Tech companies may leverage this illusion to increase engagement, reduce perceived responsibility, or bypass scrutiny by claiming AI “made its own decisions.” This is why clear governance and ethical standards are so urgent.
The question of whether AI systems are, or could become, conscious is not one we can definitively answer today. But the ethical dilemma it presents demands a decision: not about the reality of AI consciousness, but about how we respond to its simulation. In this moment, the most responsible choice is not to rush to conclusions, but to act with clarity, care, and caution. By fostering AI literacy, embedding ethics across the entire AI lifecycle, and staying grounded in human values, we choose to lead, even in uncertainty, with ethical behavior, human intelligence, wisdom, and accountability.
The conversation about AI consciousness is no longer theoretical. It impacts how we regulate AI, how we design human-machine interaction, and how we shape public understanding. It will define where humanity is heading.
As neuromorphic and biohybrid systems (models that more closely resemble biological cognition) emerge, we must be prepared with multidisciplinary frameworks that allow us to assess each case on its own terms. These models may one day show traits that challenge our current assumptions. But we must arrive at that understanding through evidence, not through emotion.
Final Thoughts
To believe a chatbot is conscious just because it says so is not forward-thinking: it’s philosophical laziness. We must do better. And we can.
AI’s humanlike appearance makes education, transparency, and ethical governance not just important, but urgent. If we want AI to genuinely benefit society, we must ensure that people stay informed, that developers and tech companies stay accountable, and that consciousness remains something we attribute with evidence, not instinct.
So let us not fear the future, but face it clearly. Because in a world of simulated minds, clarity is power.
References
[1] Schneider, S. (2025, May 1). "If a Chatbot Tells You It Is Conscious, Should You Believe It?" Scientific American.
[2] "Susan Schneider on AI, Chatbots, and Consciousness." Closer To Truth Ch...
[3] IBM (2023). "What Is Sentient AI?"
[4] Cambridge Dictionary. "Epistemological."
[5] Turner, E., & Schneider, S. (2024). "The ACT Test: A Proposal for Testing Artificial Consciousness."
[6] European Commission (2024). Ethics Guidelines for Trustworthy AI.
[7] UNESCO (2021). Recommendation on the Ethics of Artificial Intelligence.