8 Comments
Alain Di Chiappari's avatar

Great points, a few thoughts:

On the simplification gap: you're right that LLMs currently reward those who already have strong fundamentals. This is probably more an ecosystem-maturity issue than a permanent reality, though I share some of your skepticism. Early frameworks also lacked good documentation and examples; the community built those over time. We're already seeing prompt-engineering patterns, structured guides, and better tooling emerge.

On self-improvement: I'd frame it slightly differently. LLMs don't need to self-improve the way frameworks iterate; the ecosystem around them does. Better fine-tuning, RAG pipelines, evaluation tools, agents, and each new model generation all represent iteration. The improvement cycle is different from traditional frameworks', but it's arguably faster.

On preparing for tomorrow: this is the timeless engineering challenge, and LLMs don't change the principle, only the tools. Strong fundamentals, adaptability, and continuous learning have always been the answer.

Thanks for your comment btw, appreciated!

AH's avatar

Thanks for this, real food for thought. My immediate reaction is two-fold.

One, I don't see how the models we have today really solve the "Simplification" problem the way frameworks do. I think LLMs are a real multiplier for top engineers, but I feel that mid- and low-level engineers will struggle to level up in the new age of engineering.

The real danger is that no one seems interested in solving this. How do we develop prompt techniques and train new developers? For example, I come from the third world and am only now starting to incorporate models into my development workflow. If I want to use a framework, I immediately get access to documentation on what to do and lots of examples of how to use it. With LLMs, though, all I get from the providers are toy examples, and even the larger ecosystem seems to be lacking in documentation. My biggest fear is that the divide between the haves and have-nots will only grow wider.

The second counterpoint is that I'm not convinced models can self-improve, at least at the moment. Frameworks evolve over time, improving features, architecture, or ease of use. How will LLMs do this? Will we be stuck with the architecture the model was trained on? In short, are we sure LLMs can "think about" the problems in what they generate, research them, and improve their output?

And though you say engineering is about solving the problems you have right now rather than the ones we might encounter tomorrow, we still have to prepare for tomorrow's problems. How will that be done?

Yuuki's avatar

I completely agree with your point of view.

I'm currently building my own library, and I have absolutely no need for a highly complex library burdened with legacy baggage. Furthermore, I believe this is an era of creation: we can reference projects like Spec Kit and OpenClaw, but more importantly, we should explore and experiment on our own.

Trust your own taste. The best taste often comes from individuals, not organizations.

ktsangop's avatar

Good points overall!

However, I wonder: what's more complex than an AI agent these days? What else requires entire data centers, massive amounts of tangible resources (energy, water, minerals, etc.), and massive amounts of human labour and skill to operate?

News stories even claim that these data centers consume and literally destroy whole towns around them by demanding that the entire local ecosystem work for them.

I am not against artificial intelligence. I am against wasting so many resources just to have a few senior devs output more code. That doesn't sound intelligent at all.

Also, it would be nice to mention how much you spend on subscriptions to do the job you describe. I suppose it's rather cheap compared to the labour cost of someone like you, and that's because AI companies still have tons of cash to burn (for now).

I would love to see what the real cost is, but no company dares to publish any real data on how much is required to run the show.

I understand the excitement, but let's try to see the forest.

Peace!

p.s. I really don't understand how LLMs work, and it seems that not even those who built them really do. I have no clue whether, in the future, you could run one on your own low-powered PC. That might be a real intelligence revolution. For now it looks like infinite complexity and a maximum transfer of power to Google, Meta, OpenAI, etc.

Steve C's avatar

I definitely agree the cost of creating bespoke software is plummeting and engineering wisdom is growing in value (and it’s fun!). But I think [any non-trivial software application includes a framework](https://mrclay.org/2014/06/11/on-frameworks/), and you’re suggesting that a bespoke framework will have advantages that outweigh the value of a published one, which has prewritten docs, pre-solved problems you didn’t know you had, and a plugin ecosystem that keeps chugging along, solving new problems and publishing the solutions for you and the LLMs to stumble upon. And, of course, the ability to find other humans who have worked with it and understand its quirks and weaknesses. I think it’ll really depend on the size of the team.

Docs, public code, and public writing around older frameworks will already be in the training data, whereas you’ll pay repeatedly to shove yours into context over and over. Just another thing to consider. Thinking longer term, I still wonder how much pricing will rise as investors demand profit, and whether changes in the web (due to LLMs) will make LLMs dumber in an impactful way. The true cost of working in this new way is still really unclear.

Alain Di Chiappari's avatar

I totally agree with what you're saying. Carefully considering when to reuse other people's wisdom is absolutely necessary, especially when it concerns a crucial part of our application rather than its business logic (where I still prefer total freedom, leaving optionality to adapt and optimize in the future). Clearly my post is a bit provocative and aims to prompt reflection on the blind acceptance of massive/useless/cumbersome frameworks and libraries where they aren't really needed.

Sébastien Lorber's avatar

We will see what the future holds, and maybe agents will be able to ship machine code directly.

I still wonder if frameworks, or at least abstraction, aren't needed. For distribution, we still want to minimize the amount of code we ship to browsers/apps and avoid repetition. Maybe the framework for AI won't be React, but rather an ever-evolving framework that is tailor-made by the AI for your specific app?

Alain Di Chiappari's avatar

Hi Sébastien, absolutely valid points. Abstractions do have their place, and choosing which ones to use and how to build our software on them is critical, especially in the design phase. A wrong abstraction can cause headaches at best, and the failure of a project at worst.

On your point about an ever-evolving framework tailor-made by the AI: this is an interesting perspective, and it's somewhat what I see happening today.

In my case, I find that starting with a solid design, including a clear idea of the main pillars on which the app is built (and continuous, AI-generated documentation as the codebase evolves), almost naturally keeps the AI on rails rather than randomly putting pieces here and there. At the peak of this process, I've seen agents creating "their own" abstractions that they then reuse.

When I wrote another post around a year ago, this was absolutely unthinkable: the bias for addition was the standard, and refactoring or deleting code was rare. Now agents seem able to recognize the common patterns in a codebase and either do the refactoring autonomously or plan it and get it approved by us. The first time I saw this was last December with Opus 4.5, and it was crazy to see.