I've been writing code for humans for ~15 years. Clear names, small functions, cohesive modules, the usual. Recently I even created a small skill for Claude based on Kent Beck's book, Tidy First?. In recent months I've reflected on how I write code nowadays, and the thing is, the reader I spend the most time accommodating is not a person. It's a language model that wakes up every session with no memory of yesterday, no idea why we chose Postgres over DynamoDB, and no clue that the Experiments module is held together by two hacks and a prayer.
And yet I need it to write production code in that codebase. Every day.
This has changed how I think about code organisation. Not because the old principles were wrong, but because there's now a second audience with fundamentally different constraints, and there are architectural choices that serve it better than the traditional ways of organising a codebase.
Vertical Slice Architecture (VSA)
Most codebases are organised by technical layer. You have a controllers/ folder, a services/ folder, a repositories/ folder, maybe a models/ folder. A single feature, say "create an order", is spread across four or five directories. The code that works together lives apart.
src/
├── controllers/
│   ├── OrderController.ts
│   ├── UserController.ts
│   └── ProductController.ts
├── services/
│   ├── OrderService.ts
│   ├── UserService.ts
│   └── ProductService.ts
├── repositories/
│   ├── OrderRepository.ts
│   ├── UserRepository.ts
│   └── ProductRepository.ts
└── models/
    ├── Order.ts
    ├── User.ts
    └── Product.ts
Vertical Slice Architecture flips this. Instead of grouping by what the code is (controller, service, repository), you group by what the code does (create order, cancel order, list products). Each feature is a self-contained slice that holds everything it needs in one place: the handler, the validation, the data access, the types.
src/
├── orders/
│   ├── create-order/
│   │   ├── handler.ts
│   │   ├── validation.ts
│   │   ├── repository.ts
│   │   └── types.ts
│   ├── cancel-order/
│   │   ├── handler.ts
│   │   ├── validation.ts
│   │   ├── repository.ts
│   │   └── types.ts
│   └── shared/
│       └── order.model.ts
├── products/
│   ├── list-products/
│   │   ├── handler.ts
│   │   ├── query.ts
│   │   └── types.ts
│   └── shared/
│       └── product.model.ts
└── users/
    └── ...
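To make the slice concrete, here's a minimal sketch of what a create-order slice might contain, collapsed into a single listing for illustration. All names and shapes here are hypothetical, not taken from any real codebase; in the actual layout each section would live in its own file.

```typescript
// types.ts — the slice owns its own request/response shapes
interface CreateOrderRequest {
  customerId: string;
  items: { productId: string; quantity: number }[];
}

interface CreateOrderResponse {
  orderId: string;
  status: "created";
}

// validation.ts — validation logic lives next to the handler that uses it
function validateCreateOrder(req: CreateOrderRequest): string[] {
  const errors: string[] = [];
  if (!req.customerId) errors.push("customerId is required");
  if (req.items.length === 0) errors.push("order must contain at least one item");
  return errors;
}

// repository.ts — data access scoped to this slice only (in-memory stand-in)
const orders = new Map<string, CreateOrderRequest>();
function saveOrder(id: string, req: CreateOrderRequest): void {
  orders.set(id, req);
}

// handler.ts — the entry point composes the pieces above
function createOrderHandler(req: CreateOrderRequest): CreateOrderResponse {
  const errors = validateCreateOrder(req);
  if (errors.length > 0) throw new Error(errors.join("; "));
  const orderId = `ord-${orders.size + 1}`;
  saveOrder(orderId, req);
  return { orderId, status: "created" };
}
```

Everything a reader, human or agent, needs to understand "create an order" is in these four files and nowhere else.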
The idea isn't new. Jimmy Bogard has been talking about it since at least 2018. I first found it a great idea for human readers: you don't have to hunt across files and folders to find what's happening. But it also turns out to be exactly what AI coding agents need.
Humans
You read what you need. When you're fixing a bug in order creation, you open one directory. You don't grep across four folders trying to reconstruct the flow. The cognitive overhead of "where does this feature live" drops to zero.
Changes are local. Adding a field to the create-order payload touches files in create-order/. It doesn't ripple through a shared OrderService that six other features depend on. This is cohesion in its most literal sense: the code that changes together lives together.
Coupling is visible. When a slice imports something from another slice, that dependency is explicit and obvious. In a layered architecture, the coupling is hidden inside a shared service layer that everything touches.
Deletion is easy. Want to remove a feature? Delete the folder. In a layered architecture, removing a feature means surgically extracting code from five shared files without breaking the other features that share those files.
AI agents
A human engineer builds a mental model of the codebase over weeks. They know that OrderService also handles refunds because they were in the PR where someone added it. They know the payments module is sensitive because they got paged when it broke. They carry context that no documentation can fully capture.
An AI agent has none of that. Every session starts from zero. Its only context is what you feed it, or what it can discover by reading files. This makes the structure of the codebase not just a convenience but a hard constraint on the agent's effectiveness.
Context boundaries are token boundaries. When an agent works on a feature in a vertical slice, it can load the entire slice into its context window (handler, validation, repository, types) and have everything it needs. In a layered architecture, the agent has to load OrderController, then jump to OrderService, then to OrderRepository, then to the shared Order model, then to whatever middleware or base class those inherit from. Each jump costs tokens and risks losing focus.
Predictable structure reduces hallucination. When every feature slice follows the same pattern (handler.ts, validation.ts, repository.ts, types.ts) the agent can rely on convention. It knows where to look for the validation logic. It knows the types file exists. It doesn't have to guess or infer. Predictability is the cheapest form of context you can provide.
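One way to make that convention more than a habit is a small shared interface that every slice implements, so the expected shape is checked by the compiler rather than remembered. This is a hypothetical sketch, not something the article's codebase necessarily does:

```typescript
// Hypothetical convention: every slice exports the same two members,
// so both humans and agents know exactly what each directory contains.
interface Slice<Req, Res> {
  validate(req: Req): string[]; // always defined in validation.ts
  handle(req: Req): Res;        // always defined in handler.ts
}

// products/list-products implements the same shape as every other slice
const listProducts: Slice<{ limit: number }, string[]> = {
  validate: (req) => (req.limit > 0 ? [] : ["limit must be positive"]),
  handle: (req) => ["apple", "banana", "cherry"].slice(0, req.limit),
};
```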
Blast radius is contained. When an agent modifies a vertical slice, the worst case is that it breaks that feature. When an agent modifies a shared service in a layered architecture, the worst case is that it breaks every feature that depends on that service. This isn't hypothetical: I've watched Devin confidently refactor a shared utility function and introduce subtle regressions in three downstream features.
Two readers
The properties that make vertical slices good for AI are the same properties that make them good for humans. The reason these align is that both problems are fundamentally about bounded context, not in the DDD sense, but in the information-theory sense. Both a human brain and an LLM context window are finite. Both perform better when the relevant information is co-located and the irrelevant information is excluded.
What VSA doesn't solve
Vertical slices are not a silver bullet, and the failure modes are worth understanding.
Duplication. When each slice is self-contained, you will inevitably duplicate logic. Two slices that both need to calculate tax will each have their own implementation unless you extract it into a shared/ module. This is a feature, not a bug, up to a point. The rule of three applies: duplicate until the pattern is clear, then extract. But you need discipline to actually do the extraction, or you end up with five slightly different tax calculations.
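The extraction step might look like this: a hypothetical sketch of the moment the rule of three triggers and a shared tax helper replaces the per-slice copies (all names are illustrative):

```typescript
// shared/tax.ts — extracted once a third slice needed tax calculation.
// Working in cents avoids floating-point rounding surprises.
function calculateTax(subtotalCents: number, rate: number): number {
  return Math.round(subtotalCents * rate);
}

// orders/create-order and orders/refund-order now both import the shared
// version instead of keeping their own slightly different copies.
function createOrderTotal(subtotalCents: number): number {
  return subtotalCents + calculateTax(subtotalCents, 0.2);
}

function refundTotal(subtotalCents: number): number {
  return subtotalCents + calculateTax(subtotalCents, 0.2);
}
```

Before the extraction, each slice would simply contain its own private calculateTax; the shared module only appears once the duplication is proven.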
Cross-cutting concerns. Authentication, logging, error handling, rate limiting, these don't belong to any single feature. Vertical slices don't have an obvious answer for where this code lives. The practical solution is a thin shared layer (middleware, base handlers, or something like the skeleton in a Skeleton Architecture) that all slices compose with.
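That thin shared layer can be as simple as higher-order functions wrapped around each slice's handler. A hypothetical sketch, with illustrative names:

```typescript
type Handler<Req, Res> = (req: Req) => Res;

// shared/middleware.ts — logging is implemented once, not per slice
function withLogging<Req, Res>(name: string, handler: Handler<Req, Res>): Handler<Req, Res> {
  return (req) => {
    console.log(`[${name}] handling request`);
    return handler(req);
  };
}

// shared/middleware.ts — uniform error handling for every slice
function withErrorBoundary<Req, Res>(handler: Handler<Req, Res>, fallback: Res): Handler<Req, Res> {
  return (req) => {
    try {
      return handler(req);
    } catch {
      return fallback;
    }
  };
}

// orders/cancel-order/handler.ts — the slice itself stays focused on the feature
const cancelOrder: Handler<{ orderId: string }, { ok: boolean }> = ({ orderId }) => {
  if (!orderId) throw new Error("missing orderId");
  return { ok: true };
};

// The slice composes with the shared layer at its edge
const cancelOrderEndpoint = withLogging(
  "cancel-order",
  withErrorBoundary(cancelOrder, { ok: false })
);
```

The slice remains self-contained; the cross-cutting behaviour is applied at the boundary rather than woven through the feature code.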
Discovery at scale. In a small codebase, vertical slices are trivially navigable. In a large codebase with 200 features, finding the right slice is its own problem. This is where documentation files like CLAUDE.md or feature-level READMEs earn their keep: they serve as an index for both human and AI readers.
Final thoughts
The broader principle here is that we are in an era where code has two audiences with different strengths and weaknesses. Humans have institutional memory but limited attention. AI agents have vast attention but lack historic context. Good architecture has always tried to minimise the amount of context a reader needs to hold in their head. Vertical slices are one expression of that. Documentation files that explain why, not just what, are another. Predictable naming conventions, explicit dependencies, and small surface areas all serve both readers.
None of this is revolutionary. It's the same set of principles we've taught junior engineers for years: locality, cohesion, explicit dependencies, small interfaces. What's changed is that we now have a second reader that "enforces" these principles. A human will navigate a messy codebase with patience and institutional knowledge. An AI agent will simply produce worse output. The mess doesn't slow it down; it degrades it. And unlike a human, it won't complain. It'll just silently generate code that doesn't quite fit.
The codebases that will work best with AI assistance are the ones that were already well-structured for humans. The AI just makes the cost of poor structure more visible.