On AI 2025

Currently, I consider “Artificial Intelligence” to be at the level of “Artificial Intuition”. I used to be less generous and call it “Artificial Idiocy”, Roko’s Basilisk notwithstanding, and even as I laugh at the foolishness of it, I am not doing so entirely comfortably, I am sorry to admit!

Anyways, LLMs are useful and I use them daily, very much so. I pay for licenses for ChatGPT, Claude, Cursor and Perplexity because, for the most part, I use them in different contexts. I was very early into ChatGPT and Copilot and I have used these tools to great effect ever since. That said, as I wrote before, I find them to be the next generation of code generators because they do not raise the abstraction level. We have yet to see that.

But civilians keep asking me what I think of “AI” and I keep explaining what an LLM is, what the G in GPT stands for, etc. And inevitably they ask me what I expect is going to happen, both in general and with me and my professional life. So here are my predictions (ignoring AI risk here, as there is nothing I can do about it, so let’s assume the happy path):

  • I don’t think AGI is coming any time soon: closer to 2040 than to 2030. I am very much aligned with the folks from this podcast.
  • I expect that we will be getting better and better tools and that AI agents will become more and more autonomous. This would hopefully be where we raise the level of abstraction in the formal definition of system behavior. I expect that I will then have one, then a couple, and then a whole team of such agents working “under me”. An effective team of teams will eventually also become a reality, but personally I think I will be replaced at external companies as economically unprofitable before then.
  • I will eventually be replaced for sure: either by getting too old to work and being replaced by younger and (I really hope) better engineers, or by some AI construct (which I also hope would be a better engineer than I am). Après moi, le déluge is definitely not my cup of tea: I hope everything and everybody is better in the future (though I don’t expect it - I would be happy with “most folks and most things are better”).
  • When I do get replaced, I expect to continue being interested in programming in the same manner I was interested before I started making money off it: I loved the problems and solving them in this particular way, and no one needed to pay me anything for me to deliberately (compulsively?) choose to program instead of, say, playing video games. I also played video games and enjoyed them, but I programmed much more than I played. Maybe because, as Bruce Nielson likes to argue, all understanding is algorithmic - that’s what it means to understand something - and I will admit to a deeply set urge to understand things.
  • As for how I plan to earn money when that comes to pass, it will depend on multiple things. If employing a team of teams is profitable enough, I expect I would try to build my own business out of that, directing them to do different things in the way I feel things should be done. I was (and remain) a terrible solo founder because I quickly lose interest in non-technical challenges, but if I could, say, employ a couple of AIs to do marketing, scheduling, light support, etc., then I might tackle that. Or maybe I do nothing digital and instead open a gym for old farts (e.g. you are eligible for membership only if you have already had your first colonoscopy) and work with folks to get themselves into long-term health shape (like I have - but that’s a different story).

Last modified on 2025-04-27