Discussion about this post

Sebastian Osorno

Camila, I learnt a lot from this amazing post. It also shed some light on Hegel's philosophy, which I have sometimes found very difficult to read, so thank you 😅. This piece left me thinking:

1. We need less standardized understandings of and perspectives on AI, and on how to apply it, so that the dialectical process remains healthy. The debate appears polarized to me, and we may need more nuance to enrich what comes next.

2. I think AI ethics isn't necessarily the antithesis of AI evolution; in many respects, it is the antithesis of the AI industry's primary motives. Don't we need some strong moves, from bold players, to fill the voids left by the current players in this industry? In that sense, more than censorship, is anyone willing to form the antithesis's team? (I've seen some signals, but I'm not quite sure yet.) We definitely need it.

3. Many times, it feels to me that we are entering a new dark age rather than a renewed Enlightenment, despite whatever the tech industry's marketing machinery is trying to sell us.

Pulkit Gera

This is one of the best blog posts I have read. The way it connects to WW2 and IBM and the Holocaust actually made me shiver, because I can already see the parallels. I would argue that one reason the dialectic is broken is that the antithesis remains a concept rather than concrete proof. The antithesis argument almost always revolves around AI killing us or going rogue, which in a shadowy way promotes the technocrats' idea that only they can save us. Almost no one discusses the cost of annotation on people (annotators in Kenya needed counselling after labelling NSFW content) or the ecological cost. Very few labs have even quantified the misinformation cost. Unless the antithesis is well documented and quantified, it's hard to prove the effect. Until then it will only be Geoffrey Hinton shouting at a cloud.
