If you're building AI agents, be empathic.
If you're used to classical programming, it's easy to expect what I'd call 'determinism' from the machine you're working with.
If-then-else-etc... Predictable outcomes.
But once you unlock the language layer in programming, you have to realize it's a whole different paradigm, if you want to do it properly.
This requires a different way of thinking too.
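To make the contrast concrete, here's a minimal sketch (in Python, with a hypothetical call_llm stand-in instead of any real API): a classical branch always returns the same label for the same input, while a model call returns free-form text that can vary from run to run, so you have to normalize and validate it yourself.

```python
import random


def classify_deterministic(message: str) -> str:
    # Classical branching: the same input gives the same output, every time.
    text = message.lower()
    if "refund" in text:
        return "billing"
    if "password" in text:
        return "account"
    return "general"


def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call; a real LLM is similarly
    # free to vary its wording (or its answer) from run to run.
    return random.choice(["billing", "Billing.", "account", "I'd say: general"])


def classify_with_llm(message: str) -> str:
    # Language-layer version: the output is text, not a guaranteed value,
    # so the caller has to normalize, validate, and handle drift.
    raw = call_llm(f"Classify this support message as billing, account, or general: {message}")
    label = raw.strip().lower().rstrip(".")
    return label if label in {"billing", "account", "general"} else "general"
```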
Working on Setter AI, I realized I've hit a ceiling with the 'scrappy' solution that grew organically over time.
My mindset has always been quite practical: what is the user asking for now(!), what problems are they facing now(!), and so on.
Still think this is the way to go when starting out.
There's too much uncertainty and complexity at that stage, so it wouldn't make sense not to be practical about things when you're starting a startup - think premature optimization, over-engineering, or whatever you want to call it...
But at a certain point you'll need to put your head down and rethink things from first principles again.
In software engineering it's part of the game to reach a point where - if you were to rebuild an organically grown, aged project from scratch - you'd build it differently than you did the first time.
I'll set aside some time to deepen my knowledge of building complex, production-ready AI agents by diving into state-of-the-art standards, frameworks, and learnings from industry pioneers and experts.
To circle back to the beginning: today I learned from @barry_zyj and @ErikSchluntz, in 'Tips for building AI agents', the advice to be empathic with the AI/LLM/agent in order to build good systems.
That's the start of me focusing on the craft again and taking Setter AI to the next level.
Will report back.