From time to time I like playing the archaeologist a bit, reading about old tech, old papers, old tech-related books. I even started (and later abandoned) a podcast called "Dead Tech". Anyway, my point is that I like learning how people used to build stuff.
Recently I stumbled upon a short paper from 1995 by Niklaus Wirth, called "A Plea for Lean Software." His main point was that software was getting slower faster than hardware was getting faster. Every year, Moore's Law handed engineers a faster machine. Every year, software found new ways to consume the gain before users could notice it. He called this bloat, and he blamed it on developer laziness enabled by cheap hardware. Why optimize when the next CPU upgrade will cover for you?

The industry didn't really listen. We got richer hardware, richer frameworks, richer abstractions. The law held.

I was thinking that things are about to get way worse, if they haven't already. As engineers, we got AI assistants, Claude and the like, trained on thirty years of accumulated habits. I think Wirth would find this funny.
LLMs didn't learn to code by running profilers. They learned by reading code, tutorials, documentation, Stack Overflow answers. That corpus has a strong bias toward code that teaches rather than code that runs fast. I have written in a similar style on this blog many times.
The deeper pattern is that LLMs write defensively at every layer. They copy arrays before sorting when mutation is safe. They introduce an intermediate variable for each transformation step because that's how you explain code to someone who is learning. The model was rewarded for clarity in its training data. It has no idea where this function sits in the callgraph, whether it's invoked once or a million times, or what the latency budget is.
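A hypothetical example of the pattern, with the function names and data made up for illustration:

```python
# The "explainable" version an assistant tends to produce:
def top_three(scores):
    sorted_scores = sorted(scores)          # copies the list before sorting
    reversed_scores = sorted_scores[::-1]   # another intermediate copy
    top = reversed_scores[:3]               # one named variable per step
    return top

# The lean version, fine when the caller owns the list and mutation is safe:
def top_three_lean(scores):
    scores.sort(reverse=True)               # sorts in place, no copies
    return scores[:3]
```

Both return the same answer; the first just allocates two extra copies of the input along the way. Harmless in a script, not so harmless in a hot loop.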
Wirth's Law, in its original form, was driven by individual developers making careless choices, one at a time. The new version is the same mechanism at a different scale.
Before AI assistants, a team of five engineers might produce 2,000 lines of new production code per week. Now the same team produces 10,000. The code is (mostly) correct, it passes tests, it deploys, it may even handle edge cases if the intent is communicated clearly. It's also, on average, a few percent heavier per function than hand-crafted code for the same task.
A few percent sounds irrelevant. Multiply it across 50,000 lines of AI-assisted code, though, and you have a service that needs 40% more memory than it should, a frontend that ships 30% more JavaScript than it needs to, and so on.
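The jump from "a few percent per function" to 40% is less surprising once you remember that overhead compounds across layers: a request rarely touches one function, it passes through many. A back-of-envelope sketch, with both numbers assumed for illustration rather than measured:

```python
# Illustrative only: per-function overhead compounding across call layers.
per_function_overhead = 0.04   # "a few percent" heavier per function
call_depth = 8                 # layers a typical request passes through

total = (1 + per_function_overhead) ** call_depth
print(f"{(total - 1) * 100:.0f}% overall overhead")  # prints "37% overall overhead"
```

The exact figures don't matter; the shape does. Small per-function costs stop being small once they stack.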
Hardware isn't improving at the rate it used to. Moore's Law has effectively stalled. Memory bandwidth hasn't kept pace with compute, and memory itself has gotten expensive. The idea that "the next instance size will cover it" is less reliable than it was in 2010. I know you can use this or that skill for performance improvements, or ask the model to do it, but do those really work? And how many engineers actually bother? My prediction is that software will get dramatically slower. It's already happening.
The answer isn't to stop using AI tools. It's to recognize which parts of the codebase can absorb the overhead and which can't. There are places where the performance characteristics don't matter enough to be worth the friction of manually crafting the perfect version, and there are other places where every millisecond counts.
The models write the most explainable version. You need to adjust it to the right one for the job.