You have a monolith, and you are proud of it. One repo, one deployment, one database - no Kubernetes, no service mesh. You have watched teams burn months “extracting microservices” and come out slower than when they started. You stayed sensible. You kept the monolith.
Here is the uncomfortable part - your monolith may already be a distributed system. Not in the infrastructure sense but in the only sense that actually matters: coupling, contracts, failure modes, and coordination overhead. You are paying the price of distribution without any of the benefits. Let me explain…
Database Is A Global Variable
The most dangerous coupling in any large monolith is not in the code. It is in the database.
A shared relational database behaves like a global variable at the scale of the entire system. Any module can read any table. Any migration can touch any table's data or shape. Any schema change is, in effect, a broadcast to every part of the system that has ever issued a query against that table.
Think about what happens when you rename a column in your orders table.
You search the codebase for references. You find the obvious ones – the OrderRepository, the OrderService, and the admin panel query. You miss the subtle ones: the analytics module that pulls orders directly to avoid going through the service layer.
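That subtle consumer often looks something like this – a hypothetical reporting helper that embeds the column name in a raw query string, so the rename fails at runtime instead of in review:

# analytics/reports.py – a hypothetical module that bypasses the service layer.
# The column name lives in a string, invisible to any refactoring tool.
DAILY_REVENUE_SQL = """
    SELECT customer_id, SUM(total) AS revenue
    FROM orders
    GROUP BY customer_id
"""

def daily_revenue(db):
    # `db` stands in for whatever DB-API connection the app passes around.
    return db.execute(DAILY_REVENUE_SQL).fetchall()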
Distributed systems fail when Service A changes its contract, and Service B breaks silently at runtime. The monolith has the same failure mode. The only difference is that in a distributed system, the contract is an API schema. In the monolith, the contract is the database schema – and it is implicit, unversioned, and invisible.
-- "Safe" schema change in a monolith
ALTER TABLE orders RENAME COLUMN customer_id TO user_id;
-- This breaks the report scheduler that runs at 2am,
-- which you will find out about at 9am.
If your monolith has a single database and more than three or four distinct feature areas, your database schema is already acting as a distributed coordination point. It is the pub/sub bus you never intended to build.
Modules Are Services
At some point in the life of every growing monolith, someone draws a diagram. Boxes and arrows. UserModule calls OrderModule, which calls PaymentModule, which calls NotificationModule. The diagram looks clean. The team feels good about the architecture.
The problem is that those boxes are not enforced by anything. There is no compilation boundary between them. There is no ownership model. There is no contract that governs what OrderModule is allowed to ask of PaymentModule. The arrows on the diagram are “aspirational”.
In practice, the UserModule calls the PaymentModule directly because the person who needed that data was under a deadline, and the indirection through OrderModule seemed unnecessary. The NotificationModule pulls from the users table directly because it needed a field that was not on the API that the UserModule exposed. The diagram no longer matches the code, and nobody is sure which is wrong.
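In code, the drift looks something like this – every name here is illustrative:

# notifications/email.py (hypothetical)
def send_welcome(db, user_module, user_id):
    # What the diagram says should happen:
    user = user_module.get_user(user_id)

    # What shipped under deadline: the interface did not expose the opt-in
    # flag, so the code reaches past it into a table UserModule owns.
    row = db.execute(
        "SELECT email, marketing_opt_in FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return user, row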
This is temporal coupling. This is domain coupling. These are the exact failure modes that distributed systems architects spend careers trying to design around, and they are sitting inside your monolith right now.
When you finally decide to extract any one of those modules – and at some point you will – you will discover invisible coupling threaded through the entire codebase.
Synchronous Chains
A function call inside a monolith takes nanoseconds. This makes it feel free. Teams develop a habit of composing functionality through long synchronous chains because there is no visible cost: no network latency, no retries, no circuit breakers. The call stack grows deeper every quarter.
def process_checkout(cart_id):
    cart = cart_service.get(cart_id)
    user = user_service.get(cart.user_id)
    inventory = inventory_service.check(cart.items)
    payment = payment_service.charge(user, cart.total)
    order = order_service.create(cart, payment)
    notification_service.send_confirmation(user, order)
    analytics_service.track_conversion(user, order)
    return order
This reads cleanly. Every step is obvious. Every step is also synchronous, and every step must succeed for the operation to complete. The failure domain of this function is the union of the failure domains of every service it touches.
Your monolith has the coordination costs of a distributed system. It has just traded visible network calls for invisible function call chains, and made the failure modes harder to reason about in the process. Imagine any one of the calls in that chain slowing down or throwing an exception: the whole checkout slows or fails with it.
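One way to see the cost is to draw the line explicitly. A sketch, assuming some task queue exposes an enqueue(fn, *args) helper – the name is hypothetical; any job queue works:

def process_checkout(cart_id):
    # The synchronous core: steps that must succeed for a checkout to exist.
    cart = cart_service.get(cart_id)
    user = user_service.get(cart.user_id)
    inventory_service.check(cart.items)
    payment = payment_service.charge(user, cart.total)
    order = order_service.create(cart, payment)

    # The async edge: steps that should never be able to fail a checkout.
    enqueue(notification_service.send_confirmation, user, order)
    enqueue(analytics_service.track_conversion, user, order)
    return order

The core stays synchronous because a checkout without a payment is not a checkout. The edge moves off the critical path because a checkout without an analytics event still is.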
Shared Libraries Are a Service Mesh
Microservices teams discover early that shared code is coupling in disguise. If Service A and Service B both depend on payments-client@v1.2.3, then upgrading that library is no longer a local decision. You are coordinating a deployment across service boundaries.
Monoliths have the same problem, and it is often worse because there is no version number to make the coupling visible.
The BaseRepository that every module extends. The ApplicationContext singleton that holds global configuration. The EventEmitter that any part of the codebase can subscribe to or emit on. These are the shared libraries of the monolith, and they are the reason that a change to a “utility” class causes five unrelated test suites to fail.
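The EventEmitter is the sharpest version of this. Here is a minimal sketch of an in-process bus with three hypothetical subscribers:

from collections import defaultdict

# A minimal in-process event bus, the kind most monoliths grow organically.
_subscribers = defaultdict(list)

def on(event, handler):
    _subscribers[event].append(handler)

def emit(event, payload):
    for handler in _subscribers[event]:
        handler(payload)

# Three modules, in three different files, sharing one implicit contract
# on the event name and the payload shape:
on("order.created", lambda order: print("email for", order["id"]))
on("order.created", lambda order: print("dashboard for", order["id"]))
on("order.created", lambda order: print("warehouse sync for", order["id"]))

# Renaming the event or reshaping the payload is a breaking API change,
# but no version number or compiler will tell you who depends on it.
emit("order.created", {"id": 42, "total": 99.0})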
Single Deploy Is An Illusion
Teams love citing the monolith’s single deployable artifact as a virtue, and in many ways it is. But it is also a hidden cost that scales with team size.
A single deploy means that every team’s changes land at the same time. This sounds like coordination, because it is. As the team grows, the deploy becomes a ritual: a changelog spanning twenty pull requests, a QA pass that blocks on the slowest team, a rollback that takes down features that were working fine because they shared the deploy unit with the thing that broke.
Distributed systems have the same problem when services are tightly coupled – but that world has spent decades developing patterns to escape it: semantic versioning, feature flags, backward-compatible migrations, canary deployments. Monolith teams rarely adopt those patterns, because the single deploy feels so much simpler on paper.
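For scale, here is the smallest version of one of those patterns – a feature flag that separates "deployed" from "live". The dict-backed flag store is a stand-in for real config:

# Flags are read at call time, so flipping one does not require a deploy.
FLAGS = {"new_pricing_engine": False}

def legacy_pricing(cart):
    return sum(item["price"] for item in cart)

def new_pricing(cart):
    return sum(item["price"] for item in cart) * 0.95  # hypothetical change

def price_cart(cart):
    # If new_pricing misbehaves, turn the flag off; nothing else rolls back.
    if FLAGS["new_pricing_engine"]:
        return new_pricing(cart)
    return legacy_pricing(cart)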
What Your Monolith Has Actually Become
If your monolith is more than a few years old and more than a few engineers have worked on it, do a quick audit:
- How many distinct feature areas read from the same database tables?
- Can you draw the actual call graph between your “modules” – not the intended one, the real one?
- How many synchronous chains touch more than four distinct concerns?
- How many engineers need to coordinate on a deployment?
- When was the last time a change to a “utility” or “shared” class caused unexpected breakage somewhere else?
If the answers to these questions are not small, you already have a distributed system. The difference is that microservices teams know they have one, and yours is invisible.
The invisible distributed system is worse.
Not An Argument for Microservices
To be clear, none of this is an argument for splitting your monolith. That is a separate conversation, and for most teams the answer is probably no.
The argument is narrower: if you are going to keep the monolith, keep it well.
- Define real boundaries inside it.
- Own the data per module: one module writes to a given set of tables, and every other module reads them through its interface, never the tables directly (see the sketch after this list).
- Make the synchronous chains visible.
- Consciously decide which parts could be async.
- Treat shared utilities with the same care you would treat a shared library in a distributed system, because that is what they are.
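As a sketch of the data-ownership rule, with illustrative module and table names:

# orders/store.py – the only module allowed to touch the orders tables.
# Every other module imports these functions instead of writing SQL.

from dataclasses import dataclass

@dataclass
class OrderSummary:
    order_id: int
    user_id: int
    total: float

def get_order(db, order_id):
    row = db.execute(
        "SELECT id, user_id, total FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    return OrderSummary(order_id=row[0], user_id=row[1], total=row[2])

Renaming a column is now a change inside one module, plus one explicit dataclass that every consumer imports – a contract you can see.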
A well-structured monolith is one of the best architectures you can have. It gives you a deployment simplicity and a refactoring confidence that distributed systems genuinely cannot match. The compiler catches your mistakes. Calls complete in nanoseconds. There are no distributed transactions to debug.
But a poorly-structured monolith is not a monolith at all. It is a distributed system with the network calls replaced by function calls and the explicit contracts replaced by implicit database coupling.
The pride is understandable. The architecture might not deserve it yet.