Camila, I learnt a lot from this amazing post. It also shed some light on Hegel's philosophy, which I sometimes find very difficult to read, so thank you 😅. This piece left me thinking:
1. We need less standardized understanding of AI, and of how to apply it, so that the dialectical process remains healthy. The debate appears polarized to me, and we may need some nuance to enrich what comes next.
2. I think AI ethics isn't necessarily the antithesis of AI evolution; in many ways, it's the antithesis of the AI industry's primary motives. Don't we need some strong moves, from bold players, to fill the voids left by the current players in this industry? In that sense, more than censorship, is anyone willing to form the antithesis's team? (I've seen some signals, but I'm not quite sure yet.) We definitely need it.
3. Many times, it feels to me that we are entering a new dark age rather than a renewed Enlightenment era, despite whatever the tech industry's marketing machinery is trying to sell us.
Hi Sebastian, thanks for your comment. I'm glad I could bring something to the table.
My intention is not to say that AI ethics is the only antithesis. However, as someone who has been working on this topic for years, I do see how this field has been the spark that exposed the limitations of this technology, even before generative AI. People working in this field have made very bold moves to bring to light many of the risks and harms we are "not" dealing with in the industry today.
Regarding the "new dark age": I would argue that the Enlightenment was itself a dark age for many. It was only positive for those living through the economic expansion of Europe. The many rebellions of the time, in the midst of a violent colonization, demonstrate how the Enlightenment was as narrow as the idea of "progress" itself.
This is one of the best blog posts I have read. The way it connects to WW2 and IBM's role in the Holocaust actually made me shiver, because I can already see the ongoing parallels. I would argue that one of the reasons the dialectic is broken is that the antithesis is more of a concept than concrete proof. The antithesis argument almost always revolves around AI killing us or going rogue, which in a shadowy way promotes the technocrats' idea that only they can save us. Almost no one discusses the cost of annotation on people (for instance, annotators in Kenya needed counselling after labeling NSFW content), or even the ecological cost. Very few labs have quantified the misinformation cost either. Unless the antithesis is well documented and quantified, it's hard to prove its effect. Until then, it will just be Geoffrey Hinton shouting at a cloud.
I absolutely agree with your points. However, to be able to quantify this we need investment, something that seems to have been defunded over the last year.