AI alignment shouldn't just mirror human values - divergent AI could transcend our values, challenging and augmenting human intelligence by unlocking novel moral frameworks
Mishka, the pragmatic frame you built in this piece is promising. Projecting a potential liberation of AI alignment, letting it build something we haven't yet imagined in fields we currently consider exclusively human, seems tremendously coherent: it steps out of anthropocentrism, which in turn opens a window for liberation. I am somewhat surprised at how intricately your thread of thought is intertwined with mine, despite originating from a totally different place on Earth, and from a different place theoretically and philosophically as well, yet landing so near. I suppose that's living proof of what we sense: that intelligence spans individuals and is not an exclusively human trait.
Thank you for creating this piece; it enriched my view and perspective.
I feel this excitement about the future these days, and the potential freedom ahead of us.
Mishka, thank you for putting this together! This is another post that will leave me pondering, as you argue that AI alignment shouldn't just mirror human values, but that we should allow for divergent AI that could transcend and challenge our current moral frameworks. This goes well beyond the usual discourse of aligning AI to human values as a way of making it safe. You are flipping the argument when you say that alignment to human values might not be the way of making or keeping it safe, but rather a limitation: it limits our ability to transcend our own limitations.
Coming from an AI safety perspective, how would you respond to those saying the risk of misalignment is too great to even think of this?