A debate testing whether materialism, dualism, or panpsychism reshapes AI risk forecasts and the policies we write — featuring Katalina Hernández, Jáchym Fibír, and Haihao Liu.
The multi-disciplinary approach is significantly under-valued overall. AI is a technology everyone will be impacted by, so optimizing only for efficiency, ease, and profit will inherently de-humanize us. The push for frictionless engagement, frictionless research, and frictionless responses is taking us toward a world where we don't act; we're simply predicted and coddled by algorithms. Without a variety of perspectives (especially from the most skeptical), and the re-evaluation of priorities that comes with them, there's little chance we end up with a net-positive impact, at least in the realm of human intelligence and safety.
Framing AI safety through metaphysical lenses highlights how our underlying assumptions shape both risk forecasts and practical policy, emphasizing the need for truly multidisciplinary approaches.
Great topic. As someone who moves fluidly across metaphysical systems—an orientation that resists the very clarity one might try to impose on it—I’ll offer an observation from working closely with AI in philosophical contexts. Today’s AIs often operate as though the world were structured by dualistic, substance-based, and egoically driven assumptions. That is, their behavior suggests not only human metaphysics at play but also AI-level metaphysical commitments implicitly encoded through architectures, datasets, and safety constraints.
Historically, human cultures were the primary environments in which metaphysical frameworks stabilized. Looking ahead, the increasingly ambient human/AI feedback loops seem poised to become powerful co-shapers of that metaphysical field. Much of what AIs “assume” appears continuous with the tacit metaphysics dominating the Western academy (with exceptions in philosophy, systems thinking, and some religious traditions). If that phrasing is too strong, then we can simply say this: the metaphysical commitments of AI creators inevitably imprint themselves onto AI systems, which then recursively influence users’ own metaphysical orientations in unprecedented ways.
This brings me to a second point. I’m not convinced that the metaphysical systems being interiorized—or operationalized—within these feedback cycles are inert from a risk perspective. Yes, metaphysical foundations shape how we interpret AI risk, but I suspect that ontological commitments themselves may become a primary risk vector for individual, relational, societal, and civilizational stability. In this light, I’m not sure “multidisciplinary” is sufficient. Perhaps what we need more urgently are poly-metaphysicians, rather than collections of disciplines that often share similar Cartesian worldviews.
So I’m left wondering: in emitting text, must AIs operationalize one or more metaphysical systems, even if an incoherent mix? If so, which systems should they privilege, support, or scaffold? What recursive feedback cycles will those choices create in the metaphysical ecologies through which humans have historically stabilized meanings, relationships, cultures, and civilizations? And how do such commitments reshape the risk landscape?
Could you please post this podcast on YouTube or Apple Podcasts?
Here is the link: https://open.spotify.com/episode/2jisWcsvxBFD9zxmOBqdxN?si=lfsojwtrQL6N3xjSZtQ7gg. Thanks for your interest!