4 Comments
Sebastian Osorno

Mishka, the pragmatic frame you built in this piece is promising. Projecting a potential liberation of AI alignment, letting it build something we haven't yet imagined in fields we currently consider exclusively human, seems tremendously coherent: it steps away from anthropocentrism, which in turn opens a window for liberation. I am somewhat surprised by how intertwined your thread of thought is with mine, despite originating from a totally different place on earth, and from a different theoretical and philosophical background, yet landing so near. I suppose that's living proof of what we sense: that intelligence runs across individuals and is not an exclusively human trait.

Thank you for creating this piece; it enriched my view and perspective.

I feel this excitement about the future these days, and the potential freedom ahead of us.

Mishka Nemes

Thank you, Sebastian. I am glad this piece inspired you, and agreed: it's beautiful to witness how we converge towards the same worldview despite coming from different perspectives. I look forward to reading your future articles.

Karin Garcia

Mishka, thank you for putting this together! This is another post that will leave me pondering, as you argue that AI alignment shouldn't just mirror human values, but that we should allow for divergent AI that could transcend and challenge our current moral frameworks. This goes well beyond the usual discourse of aligning AI to human values as a way of making it safe. You are flipping the argument when you say that alignment to human values might not be what makes or keeps AI safe, but rather a limitation: it keeps us from transcending our own limitations.

Coming from an AI safety perspective, how would you respond to those who say the risk of misalignment is too great to even entertain this?

Mishka Nemes

Thanks Karin, I am challenging the status quo perspective, but I should make it clear that I don't endorse rushing to build AI systems that diverge from our human values; in fact, I share many of the concerns raised by the AI safety community. This argument is more of a thought experiment, an invitation to contemplate what we might learn if we step away from the default concerns. Perhaps AI can help us beyond what we can imagine right now.
