
Modular Monoliths: The Hard Parts

Published: at 08:19 PM

Let’s get some things out of the way:

Should you build modular monoliths? Absolutely. They offer real and important advantages. They can almost certainly handle your “scale”, and while you might outgrow your monolith someday, that day is likely further away than you think.

But that’s not what this post is about.

What This Post Is About

I am not here to convince you to build modular monoliths. You’ve probably heard all the [arguments](https://www.fearofoblivion.com/build-a-modular-monolith-first) for them already. Instead, I’d like to talk about the experience of building modular monoliths & what is difficult about them.

So I’m going to assume you know what modular monoliths are and that you are familiar with their much proclaimed benefits. Being familiar with those benefits is important because I am going to write about what it takes to earn them.

Context

It’s important to be clear about what kind of modular monolith we’re discussing here. Just like microservices, modular monoliths exist on a spectrum (size, level of isolation etc.). In this post, I am describing modular monoliths that are always ready to be decomposed into microservices. This is a high bar, but I believe this kind of modular monolith minimizes the downside & preserves the most upside for engineering teams.

One more clarification. The “modules” in our ideal modular monolith are functional. They map well to the bounded contexts we can identify in our applications. They are not technical modules or shared code modules.

Seductive Entropy

The hardest thing about building modular monoliths is keeping them modular.

One of the major benefits of a modular monolith is how easy it is to share code. But that same ease is also their biggest flaw. Modular monoliths are beset by a seductive entropy, a natural tendency towards disorder. Deliberately or inadvertently, modules become coupled in inappropriate ways until the monolith degrades into a big ball of mud. This entropy feels harmless at first; reusing a table from another module, just this once. Or calling another module’s internal methods as a “temporary” workaround. Under the pressure of urgent deadlines, unexpected requirements & adjacent tech debt, the infection spreads silently through your monolith. Ironically, code sharing becomes the main vector of contamination.

The stakes are high. Most of your organization’s code is in this monolith, and any decay naturally affects the productivity of multiple teams.

This won’t get better on its own. Entropy in modular monoliths must be actively resisted. Pushing back takes a surprising amount of engineering discipline. After all, every commit potentially introduces more entropy into the system. But vigilance alone isn’t enough to stave off entropy; we must design against it.

Decouple All The Things

If entropy is the disease, then isolation is the cure. If engineering discipline is the immune system, then decoupling modules is an inoculation against disorder. We must try to simulate the physical isolation of microservices within the confines of the monolith, so that each module stays independent and ready to be extracted.

This is where things get interesting, and challenging. You’re not just building a simple microservice; you’re building a whole bunch of independent services inside your monolith. And they all require the same generic capabilities: db migrations, connection pools, object storage access, email sending, user context, trace context, json serialization etc.

The good news? You can share a lot of code. The bad news? You can share a lot of code.

So how do we do it? How do we share without coupling? How do we decouple all the things?

Structural Decoupling

It helps immensely if your runtime, framework, or tooling provides physical isolation between modules while still allowing them to be composed. For example, .NET has the concept of projects, which provide physical isolation and can be composed. This means we can set up a modular monolith like this:

Module Monolith Project Structure
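As a rough sketch of what that can look like (the module and project names below are illustrative, not taken from the screenshot), a solution might be organized like this:

MyApp.sln
  Host/                         composition root: wires the modules together and hosts the API
  Modules/
    Accounts/
      Accounts.Contracts/       public commands, queries & events other modules may reference
      Accounts.Application/     the module's application layer (command & query handlers)
      Accounts.Infrastructure/  persistence, migrations, external integrations
    Billing/
      Billing.Contracts/
      Billing.Application/
      Billing.Infrastructure/
  Shared/                       generic building blocks available to every module

Other modules may reference only a module’s Contracts project; the Host project composes everything into a single deployable unit.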

Technology Choices Matter

This structure can work in any environment that supports a module-like concept, but not all languages / runtimes / frameworks are created equal. The truth is that your tech stack choices matter more in a modular monolith. The level of support for modular concepts matters, flexibility in code visibility matters, inheritance vs composition matters, IDE features matter. It won’t please the cool kids, but full-featured (heavy, even) stacks (.NET, Java etc.) work best for modular monoliths. All those language features, and the years of investment in IDEs, actually pay off here. But sometimes full-featured stacks struggle too.

Data Decoupling

Each module should deal with data independently. That means separate schemas in the same database or entirely different databases depending on your needs. This means you’re dealing with many sets of database migrations, potentially setting up many connection pools (if you don’t want modules to have access to each other’s schemas or databases) and so on.
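As an illustration, here’s a minimal sketch of that isolation, assuming EF Core on top of a shared PostgreSQL server (the module name, schema name, and connection string key are all illustrative):

using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public sealed class AccountsDbContext : DbContext
{
    public AccountsDbContext(DbContextOptions<AccountsDbContext> options) : base(options) { }

    // Every table owned by the Accounts module lives in its own schema.
    protected override void OnModelCreating(ModelBuilder modelBuilder) =>
        modelBuilder.HasDefaultSchema("accounts");
}

public static class AccountsModule
{
    // Each module registers its own DbContext with a dedicated connection string
    // (and therefore its own pool) and keeps its migrations history in its own schema.
    public static IServiceCollection AddAccountsPersistence(this IServiceCollection services, IConfiguration configuration) =>
        services.AddDbContext<AccountsDbContext>(options =>
            options.UseNpgsql(
                configuration.GetConnectionString("Accounts"),
                npgsql => npgsql.MigrationsHistoryTable("__EFMigrationsHistory", "accounts")));
}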

It’s fairly easy to figure these things out for one module, but across many modules there are decisions to make and questions to ask.

Cross-Module Communication

Cross-module communication presents the most temptation to couple modules inappropriately. The code is right there after all, and when deadlines loom it can make sense to couple things and be done with it. To resist this temptation, it is important to set clear rules for cross-module communication.

Synchronous Communications

We’ve established that it isn’t a good idea for modules to call other modules’ internal methods. But that doesn’t mean we have to abandon synchronous or in-memory communication entirely. We can have our cake and eat it too by making a few key decisions:

  1. Application Core / Layer: Design each module with an application core / layer that accepts commands or queries and returns results, if any. This layer is made up of handlers that each handle a specific command or query (e.g. CreateAccountCommand or GetAccountByIdQuery). You can interact with this layer by sending commands or queries to it via HTTP (an API) or some other means of communication (e.g. CLI commands). The key point here is that this application layer is the module’s only public entry point.

Here are sample interfaces the handlers in such an application layer might implement.

public interface ICommandHandler<TCommand>
{
   Task<Result> Handle(TCommand command);
}

public interface IQueryHandler<TQuery, TResult>
{
   Task<TResult?> Handle(TQuery query);
}

public interface ICollectionQueryHandler<TQuery, TResult>
{
   Task<PagedResult<TResult>> Handle(TQuery query, QueryPaging? paging = null);
}

Some of the ideas above will be familiar to those aware of the Hexagonal / Onion architecture. As frequently maligned as those patterns are, they’re useful in this context.

  2. Module Bus: Use a module bus to invoke commands & queries in other modules. A module bus is just a class / function that can send commands or queries from one module to another in-memory. The module bus dispatches a command or query to the application layer of the target module, where it is routed to the right handler.

A module bus facilitates clean, synchronous communication between modules without the modules having any knowledge of each other’s internals. A module only needs to know about the public contract of another module to communicate with it.
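Here’s a minimal sketch of such a bus, assuming the handler interfaces above and Microsoft.Extensions.DependencyInjection for resolving handlers (the type names are illustrative):

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

public interface IModuleBus
{
    Task<Result> Send<TCommand>(TCommand command);
    Task<TResult?> Query<TQuery, TResult>(TQuery query);
}

public sealed class ModuleBus : IModuleBus
{
    private readonly IServiceProvider _services;

    public ModuleBus(IServiceProvider services) => _services = services;

    // Resolve the target module's command handler from the container and invoke it in-memory.
    public Task<Result> Send<TCommand>(TCommand command) =>
        _services.GetRequiredService<ICommandHandler<TCommand>>().Handle(command);

    // Queries work the same way: the caller only references the other module's public contract types.
    public Task<TResult?> Query<TQuery, TResult>(TQuery query) =>
        _services.GetRequiredService<IQueryHandler<TQuery, TResult>>().Handle(query);
}

A caller in another module only needs a reference to the target module’s contracts (e.g. CreateAccountCommand) to call the bus, never a reference to its internals.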

Asynchronous Communication

Just like with microservices, it’s useful to continually ask if synchronous communication between modules is really necessary given its downsides (additional latency, potential two phase commit etc.). We should prefer asynchronous communication via publishing and subscribing to events. The public contracts of a module include the events it publishes, and since other modules can reference those contracts, it’s fairly straightforward to subscribe and be notified when important events occur in other modules.

Module Monolith Project Structure

You can implement this fairly easily with any lightweight pubsub implementation.
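For illustration, a bare-bones in-memory version might look like the sketch below (no outbox, retries, or ordering guarantees; AccountCreated is a hypothetical event from an Accounts module’s public contracts):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// An event that would live in the publishing module's public contracts (hypothetical).
public sealed record AccountCreated(Guid AccountId);

public sealed class InMemoryEventBus
{
    private readonly ConcurrentDictionary<Type, List<Func<object, Task>>> _subscribers = new();

    // Modules register handlers for the contract events they care about.
    public void Subscribe<TEvent>(Func<TEvent, Task> handler) =>
        _subscribers.GetOrAdd(typeof(TEvent), _ => new List<Func<object, Task>>())
                    .Add(e => handler((TEvent)e));

    // The publishing module fires the event without knowing who is listening.
    public async Task Publish<TEvent>(TEvent @event)
    {
        if (_subscribers.TryGetValue(typeof(TEvent), out var handlers))
            foreach (var handle in handlers)
                await handle(@event!);
    }
}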

Two-Phase Commit

Be wary of two-phase commit. The fact that modules live side by side can be deceiving if the modules have separate databases: trying to perform cross-module transactions bears the same risks as doing so across microservices. The usual solutions (sagas, distributed transactions, event-based asynchronous communication etc.) apply. That can be hard to swallow given the decision to build a modular monolith, but it is the price one pays for optionality, i.e. being able to decompose the monolith easily in the future.
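One common shape of the event-based option is a transactional outbox: the state change and the outgoing event are saved in the same local transaction, and a background worker publishes the event afterwards. A rough sketch, reusing hypothetical types from the earlier snippets (Account, the fields of CreateAccountCommand, and Result.Success are illustrative):

using System;
using System.Text.Json;
using System.Threading.Tasks;

// A row the module writes alongside its own data (hypothetical shape).
public sealed record OutboxMessage(Guid Id, string Type, string Payload, DateTime OccurredAtUtc);

public sealed class CreateAccountHandler : ICommandHandler<CreateAccountCommand>
{
    private readonly AccountsDbContext _db;

    public CreateAccountHandler(AccountsDbContext db) => _db = db;

    public async Task<Result> Handle(CreateAccountCommand command)
    {
        var account = new Account(command.Name); // hypothetical entity

        // The account row and the outbox row are written in one local transaction;
        // a background worker later reads the outbox and publishes AccountCreated to other modules.
        _db.Add(account);
        _db.Add(new OutboxMessage(
            Guid.NewGuid(),
            nameof(AccountCreated),
            JsonSerializer.Serialize(new AccountCreated(account.Id)),
            DateTime.UtcNow));

        await _db.SaveChangesAsync();
        return Result.Success(); // hypothetical factory on the Result type
    }
}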

What Can’t Be Decoupled?

DRY (Don’t Repeat Yourself) is far more crucial in a modular monolith because most of your modules need the same generic capabilities. The pressure to share code for these capabilities, to use the same infrastructure, and to do everything the same way across modules is immense. Here’s just one part of the code all the modules in the screenshots above share.

Module Monolith Shared Code

This “sameness” can be liberating for organizations tired of technology sprawl. There are hiring and resource allocation benefits to all teams sharing a tech stack. But there is danger too. It can become difficult to take advantage of newer technologies or attract new hires if your stack is outdated. Rigidity can set in.

Organizations and teams building modular monoliths need to be highly aligned across many dimensions (e.g. technology choices, local architecture, infrastructure needs, code styles etc.). This alignment & cohesion is non-negotiable for modular monoliths and cannot be decoupled away. The larger an organization grows, the harder it becomes to maintain this alignment and at some point it might make sense to graduate to microservices. A task made infinitely easier by having started with a modular monolith.

Final Thoughts

This seems like…a lot? And it is, compared to a single microservice. But it’s a far more productive & easier-to-maintain environment than a bevy of microservices and their attendant infrastructure. Modular monoliths offer an exceptional middle ground between messy monoliths and microservices, allowing organizations to scale engineering productivity and systems without incurring exponential costs. And when the time comes to evolve into microservices, they’ll be ready.

Ultimately, the success of a modular monolith isn’t just about code structure—it’s about cultivating a team culture of discipline, cohesion, and careful decision-making. The temptation to let modules entangle themselves will always be there, but with thoughtful decoupling, careful communication patterns, and clear boundaries, you can harness the real benefits of this architecture.

A great modular monolith, well structured and ready for decomposition, is the ultimate proof that a team can handle microservices.

