Thank you for this piece, Sebastian. I completely agree that we try to make things apparently easier and more efficient by building costly, resource-consuming, and complex technology to a) solve problems that don't necessarily matter; b) create amazing new processes on top of ones that have long been broken, instead of re-engineering what is there, forgetting that great models cannot fix broken processes; and c) solve problems that don't require technology at all, but rather human critical thinking, empathetic behaviour, and plain common sense. After 20+ years in tech solving business efficiency challenges, I have learnt that it's not always about more tech; it's about smarter choices and more intentional use of it, keeping humans at the centre of ethical design, implementation, and deployment. Technology is a powerful tool, but it won't solve all our human problems unless it's designed with care, intention, and a clear focus on solving what truly matters.
Loved this take... but it also forced me to sit down and articulate why I write with AI every day. In short -- the bugs for most are features for writers and creatives.
https://aiwritersroom.substack.com/p/why-i-write-with-ai?r=5sl6
Thanks for reading and for your interesting piece, Fred.
Sebastian, loved how you pointed out the "psychological comfort" aspect of AI—it's spot on! We often lean on these tools not because they're flawless or even fully trustworthy, but because it feels good to have something external validate our decisions. It’s like having someone else say, “Hey, you're doing fine!” even if deep down we know it's just echoing our own ideas. Maybe that's the real magic trick AI pulls off: making us feel a bit less alone with our choices. Great reflections, really got me thinking!
Right... and sometimes you can find new tones and colors in the echo of your own voice. Thanks for reading, Carlos.
So recognisable. The confidently-wrong streak LLMs have is risky, and keeping them on the rails almost requires building your own model for your personal use case, thereby negating any financial or efficiency gain. Better to use systems with guaranteed output for any given input than these one-armed bandits. Most business processes work 0% of the time if the output is less than 100% correct, or at least 100% predictable and expected. A very expensive and wasteful toy, for the most part, for now.
Absolutely, this is no joke, Ivan.