Highly COHESIVE Software Design to tame Complexity

What is cohesion and why should you care? Highly cohesive software design can reduce complexity and coupling. But what is cohesion? It's the degree to which the elements inside a module belong together. How you group operations together can lead to widely different degrees of cohesion. Informational Cohesion groups operations by the data they act on. Functional Cohesion groups operations by the task they perform. It's directly related to the Single Responsibility Principle, which you might also define differently than most.


Check out my YouTube channel where I post all kinds of content that accompanies my posts including this video showing everything that is in this post.


Most people generally hear about Cohesion in relation to Coupling. I’ve done a post about how to Write Stable Code using Coupling Metrics, but I haven’t yet touched directly on Cohesion. To me, Cohesion and Coupling are like the yin and yang of software design.

To give it a simple definition:

“degree to which the elements inside a module belong together”

Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design

However, most people are likely more familiar with the Single Responsibility Principle, which is directly related to Cohesion. The Single Responsibility Principle states:

A class should have one, and only one, reason to change

 Agile Software Development, Principles, Patterns, and Practices

But what does that really mean? What is "one reason"? Does that mean that a class/module should only have one job? What about dependencies? If they change, doesn't that affect other classes (coupling), forcing them to change too?

Robert C. Martin, the author, wrote about this years ago on his blog to clarify:

When you write a software module, you want to make sure that when changes are requested, those changes can only originate from a single person, or rather, a single tightly coupled group of people representing a single narrowly defined business function. You want to isolate your modules from the complexities of the organization as a whole, and design your systems such that each module is responsible (responds to) the needs of just that one business function.


What’s funny about this is that most classes/modules are not organized by business function, but rather they are organized by data.

If you read any of my other posts, you know that I advocate thinking about business capabilities and not technical concerns, which I describe more in my post AVOID Entity Services by Focusing on Capabilities.

Informational Cohesion

I think the common practice is to organize operations (methods) in a class/module using an informational cohesion approach. Informational Cohesion is about grouping operations by the data they act on.

For example, a ProductService class that does data access for a Product Entity (Data Model).
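To sketch what that might look like (the exact members here are illustrative, not the original code):

```csharp
using System.Collections.Generic;

// Product entity (data model): illustrative.
public class Product
{
    public string Sku { get; set; } = "";
    public decimal Price { get; set; }
    public int QuantityOnHand { get; set; }
}

// Informational cohesion: these operations live together only because
// they all operate on the same Product data.
public interface IProductService
{
    Product GetProductBySku(string sku);
    void CreateProduct(Product product);
    void UpdateProductPrice(string sku, decimal price);
    void IncreaseQuantityOnHand(string sku, int quantity);
}

// Minimal in-memory implementation, purely for illustration.
public class ProductService : IProductService
{
    private readonly Dictionary<string, Product> _products = new();

    public Product GetProductBySku(string sku) => _products[sku];
    public void CreateProduct(Product product) => _products[product.Sku] = product;
    public void UpdateProductPrice(string sku, decimal price) => _products[sku].Price = price;
    public void IncreaseQuantityOnHand(string sku, int quantity) => _products[sku].QuantityOnHand += quantity;
}
```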

These operations are grouped in this Interface/Class because they operate on the same information/data.

Functional Cohesion

Functional Cohesion is about grouping operations by the task they accomplish. As the Single Responsibility clarification above mentions, that means a narrowly defined business function.

This means that you don't group by Entity/Data/Information, but rather by the boundary in which users perform actions within your system. In other words, grouping based more on roles. Cohesive software design, to me, is focusing on functional cohesion.


To illustrate this more, here's an example of a class that depends on the IProductService defined above.
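A sketch of such a class, with a hypothetical AddProductToBasketHandler and made-up pricing logic:

```csharp
public class Product
{
    public string Sku { get; set; } = "";
    public decimal Price { get; set; }
}

public interface IProductService
{
    Product GetProductBySku(string sku);
    void CreateProduct(Product product);
    void UpdateProductPrice(string sku, decimal price);
}

// The handler takes a dependency on the whole interface,
// yet only ever calls GetProductBySku.
public class AddProductToBasketHandler
{
    private readonly IProductService _productService;

    public AddProductToBasketHandler(IProductService productService)
        => _productService = productService;

    public decimal Handle(string sku, int quantity)
    {
        var product = _productService.GetProductBySku(sku);
        return product.Price * quantity; // line total
    }
}
```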

The problem with the Informational Cohesion occurring in IProductService is that our class requires nothing more than one single method (GetProductBySku).

To test this class, we have a couple of options. We can create a stub of the interface or we can mock it. Most would probably choose to use a mocking library so we only have to mock the one method.
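For instance, a hand-written stub has to implement every member of the interface, even the ones the test never touches (names here are illustrative):

```csharp
public class Product
{
    public string Sku { get; set; } = "";
    public decimal Price { get; set; }
}

public interface IProductService
{
    Product GetProductBySku(string sku);
    void CreateProduct(Product product);
    void UpdateProductPrice(string sku, decimal price);
}

// The stub must satisfy the entire interface, including members
// that are irrelevant to the behavior under test.
public class StubProductService : IProductService
{
    public Product GetProductBySku(string sku)
        => new Product { Sku = sku, Price = 10m };

    public void CreateProduct(Product product) { }                // not needed by the test
    public void UpdateProductPrice(string sku, decimal price) { } // not needed by the test
}
```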

However, if we change the actual implementation to use a different method on our IProductService interface, this test is going to fail because we haven’t mocked every method on the interface.

Do we really want a dependency on that entire interface just to use one method? When you need a class for one method, you don't really want the interface, you want a function.

In C#, Delegates are… functions!

Instead of an interface, I’ve defined delegates within a static class that are specific for the Catalog.
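A sketch of those delegate definitions (the exact signatures are illustrative):

```csharp
public class Product
{
    public string Sku { get; set; } = "";
    public decimal Price { get; set; }
}

// Delegates grouped by the Catalog boundary rather than by entity.
// Each delegate is a single, narrowly defined function signature.
public static class Catalog
{
    public delegate Product GetProductBySku(string sku);
    public delegate void UpdateProductPrice(string sku, decimal price);
}
```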

Now instead of depending on the interface, we depend on the delegate.
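A sketch with hypothetical names: the handler now asks for exactly the function it uses.

```csharp
public class Product
{
    public string Sku { get; set; } = "";
    public decimal Price { get; set; }
}

public static class Catalog
{
    public delegate Product GetProductBySku(string sku);
}

// The dependency is now exactly one function, not a wide interface.
public class AddProductToBasketHandler
{
    private readonly Catalog.GetProductBySku _getProductBySku;

    public AddProductToBasketHandler(Catalog.GetProductBySku getProductBySku)
        => _getProductBySku = getProductBySku;

    public decimal Handle(string sku, int quantity)
        => _getProductBySku(sku).Price * quantity;
}
```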

Now our test becomes much more explicit in how we create this Handler to test. We need to stub the delegate. There is no mock. There is no mocking framework.
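A sketch of such a test (hypothetical names): the stub is just a lambda that matches the delegate's signature.

```csharp
// Stubbing the delegate is a one-line lambda: no mock, no framework.
var handler = new AddProductToBasketHandler(
    sku => new Product { Sku = sku, Price = 10m });

var lineTotal = handler.Handle("ABC123", 3);
// lineTotal == 30m

public class Product
{
    public string Sku { get; set; } = "";
    public decimal Price { get; set; }
}

public static class Catalog
{
    public delegate Product GetProductBySku(string sku);
}

public class AddProductToBasketHandler
{
    private readonly Catalog.GetProductBySku _getProductBySku;

    public AddProductToBasketHandler(Catalog.GetProductBySku getProductBySku)
        => _getProductBySku = getProductBySku;

    public decimal Handle(string sku, int quantity)
        => _getProductBySku(sku).Price * quantity;
}
```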

If we change the implementation to use a different delegate, if the signature of the delegate is the same, then the test will still pass. If the signature changes, then our code won’t even compile. We’re being as explicit as we can about what we depend on.

Cohesive Software Design

When I'm thinking about Cohesive Software Design, I'm thinking about grouping by business functions and capabilities. Grouping by tasks, by roles, by what the business processes and workflows actually are.

By increasing cohesion you can reduce coupling.

Source Code

Developer-level members of my CodeOpinion YouTube channel get access to the full source for any working demo application that I post on my blog or YouTube. Check out the membership for more info.

Follow @CodeOpinion on Twitter

Enjoy this post? Subscribe!

Subscribe to our weekly Newsletter and stay tuned.

Event Based Architecture: What do you mean by EVENT?

The term “Event” is really overloaded. There are many different utilities that leverage events. Event Sourcing, Event Carried State Transfer, and Event Notifications. None of these are for the same purpose. When talking about an Event Based architecture, realize which one you’re using and for what purpose.



With the popularity of Microservices, Event Driven Architecture, Event Sourcing, and tooling, the term “Events” has become pretty overloaded and I find has been causing some confusion.

There's a lot of content available online, through blogs or videos, that uses the term Event incorrectly or ambiguously. Almost every time I see the term used incorrectly, the topic being covered is Microservices or Event Driven Architecture.

Here are the three different ways that the term “Event” is being used and in what context or pattern and for what purpose.

Event Sourcing

Event Sourcing is a different approach to storing data. Instead of storing the current state, you’re instead going to be storing events. Events represent the state transitions of things that have occurred in your system. Events are facts.

To illustrate, take the exact same product with SKU ABC123 that has a current state quantity of 59. This is how we would store that data using event sourcing.
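A minimal sketch, with made-up event names and quantities; only the derived total of 59 matches the example:

```csharp
using System.Linq;

// Replaying the stream derives the current state: 50 - 11 + 20 == 59.
var stream = new object[]
{
    new ProductReceived("ABC123", 50),
    new ProductShipped("ABC123", 11),
    new ProductReceived("ABC123", 20),
};

var quantityOnHand = stream.Sum(e => e switch
{
    ProductReceived received => received.Quantity,
    ProductShipped shipped => -shipped.Quantity,
    _ => 0,
});
// quantityOnHand == 59

// Events are facts: state transitions that have occurred.
public record ProductReceived(string Sku, int Quantity);
public record ProductShipped(string Sku, int Quantity);
```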

This means that your events are the point of truth and you can derive current state from them.

The confusion when talking about Event Based Architecture comes in because I most often see people refer to Event Sourcing in a Microservices architecture as a means to communicate between services. Microservices and event sourcing are orthogonal. You do not need to be event sourcing in order to have a Microservice. Event sourcing has nothing to do with communication to other services. Again, event sourcing is about how you store data.

The reason, I believe, that there’s confusion is because when you’re event sourcing, you might be tempted to expose those events to other services. I say tempted because the events you’re storing as facts are not something you directly want to expose to other services. These events are internal and not used for integration.

Also, depending on the database or event store you’re using, it may also act as a message broker. This is often used within a service/boundary in order to create projections or read models from your event stream.

For more on projections, check out my post Projections in Event Sourcing: Build ANY model you want!

Event Carried State Transfer

The most common way I see events being used and explained is for state propagation. Meaning, you’re publishing events about state changes within a service, so other services (consumers) can keep a local cache copy of the data.

This is often referred to as Event Carried State Transfer.

The reason services want a local cache copy of another service's data is so they do not need to make RPC calls to other services to get it. The issue with making an RPC call is that if there are availability or latency problems, the call might fail. To remain available when other services are unavailable, they keep the data they need locally.

In the example above, Warehouse and Billing depend on the Sales service. If the Sales service is unavailable, they may also be unavailable. To alleviate this, having the relevant data locally prevents them from having to make RPC calls to Sales.

What this looks like in practice is to use fat messages that generally contain all the data related to an entity.

Sales will publish a ProductChanged event that both the Warehouse and Billing will consume to update their local cache copies of a Product.

The contents of ProductChanged will generally look something like this:
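A sketch of such a fat message; the exact properties are an assumption:

```csharp
// Event Carried State Transfer: the event carries the entire
// current state of the Product, not just what changed.
public record ProductChanged(
    string Sku,
    string Name,
    string Description,
    decimal Price,
    int QuantityOnHand);
```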

The event will represent the entire current state of the entity. Meaning these events can get pretty large depending on the size of the entity.

While this approach is often used, I'd argue that in most places where you're doing this, you probably have some boundaries that are wrong. For more on defining boundaries, check out my post Defining Service Boundaries by Splitting Entities.

Events as Notifications

The reason Event Carried State Transfer is so popular when discussing Event Based Architecture is that events are meant to be used as notifications to other services, yet they often end up being used incorrectly.

Events used for notifications are generally pretty slim; they don't contain much data. If a consumer handling an event needs more information, for example to react and perform some action, it might have to make an RPC call back to the producing service to get it. This is what leads people to Event Carried State Transfer: avoiding those RPC calls.

To illustrate, the Sales service publishes an OrderPlaced event.
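A slim notification event might carry little more than an identifier; this shape is an assumption:

```csharp
using System;

// Events as notifications: just enough for a consumer to react,
// not the full order data.
public record OrderPlaced(Guid OrderId, DateTime PlacedAtUtc);
```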

The Billing Service is consuming this event so it can then create an Invoice.

But because the event doesn’t contain much information, the Billing service then needs to make an RPC call back to Sales to get more data.

And this is how people land on using Event Carried State Transfer: to avoid these RPC calls back to the producer.

Again, I sound like a broken record. But the likely reason this is occurring is because of incorrect boundaries. Services should own the data that relates to the capabilities they perform.

If you do have boundaries correct and each service has the relevant data for its capabilities, this means that events are used as a part of a workflow or long-running process. Events are used as notifications to tell other services that something has occurred.

To illustrate this again, Sales is publishing an OrderPlaced event that Billing is consuming.

Since Billing has all the data it needs, it creates an Invoice and publishes an OrderBilled event.

Next, the Warehouse service will consume the OrderBilled event so that it can create a ShippingLabel for the order to be shipped.

Once the shipping label has been created by the Warehouse, it publishes a LabelCreated event.

Finally, Sales consumes the LabelCreated event so that it can update its Order status to show that the Order has been billed and is ready to be shipped.

This is called Event Choreography and is driven by events that are used for notifications. For more info check out my post Event Choreography & Orchestration (Sagas)

Hopefully, this clears up some confusion about how the term Event is used in different situations around event based architecture. I also hope it illustrates why Event Carried State Transfer exists and the problem it's trying to solve. However, that problem is likely caused by incorrect boundaries.


Do Microservices require Containers/Docker/Kubernetes?

Containers, Docker, Kubernetes, and Serverless are often used when explaining a Microservices architecture. However, focusing on physical deployment is missing the point of Microservices entirely. Microservices (or any size services) are about logical separation and not about physical deployment. Deployment flexibility is a by-product of having well-defined boundaries for services that are autonomous.



Like many terms in our industry, Microservices often gets confused or conflated with other concepts and loses its original meaning. Martin Fowler calls this Semantic Diffusion. I think Microservices falls into that category, as I don't think you would get the same definition from a group of people.

I’ll use the definition from Adrian Cockcroft:

Loosely coupled service oriented architecture with bounded contexts

When it comes to a Bounded Context, a concept that comes from Domain-Driven Design:

If you have to know too much about surrounding services you don’t have a bounded context.

It's all about coupling and autonomy. You want to be autonomous and not coupled to other services. If you have to coordinate with another service for a change you're making, then you're not autonomous. If you're sharing a database with other services, make a schema change, and need to coordinate with those services so they also make the required change, you aren't autonomous.

I advocate often for Domain-Driven Design and the concept of boundaries in most of my posts/videos. This is why I like to describe a service as the authority of a set of business capabilities.


The vast majority of the content you'll find about Microservices focuses almost exclusively on physical deployment and all the technical complexities that come from it. The focus then becomes Containers, Docker, Kubernetes, or Serverless. But this misses the point of Microservices entirely.

Microservices is about logical separation, not physical.

A bounded context is a logical boundary. It represents a portion of a subdomain of the larger system.

4+1 Architectural View Model

Somewhere along the rise of Microservices came the idea that a logical boundary is also a single source code repository that is built into a single deployable unit.

For example, that a service has its own git repository and is built into a container image.

Although this is practical in some situations, it doesn’t need to be if you think about logical, development, and physical views separately.

4+1 Architectural View Model
(Original: Mdd; Vector: Wikimpan, CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0>, via Wikimedia Commons)

The above diagram is the 4+1 Architectural View Model. It’s used for describing an architecture based on multiple concurrent views.

The logical view represents a bounded context. The development view represents the source code and repository: what you see in your editor or IDE. The physical view is the deployment topology.

Most examples of Microservices treat these as one unified concept, without making any distinction between the views.

In the example above, each Service (Sales, Warehouse, Billing) lives in its own container, talks to its own database, and communicates with a message broker. There's nothing wrong with this. However, you can still have logical separation without physical separation of deployment.

At build time, however, you could compose all the different services and run them within the same executable. Their communication is unchanged: they aren't calling each other directly within the process; they are simply hosted together in the same process.
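A toy sketch of that idea, with hypothetical names: an in-memory broker and services composed into one process that still communicate only through the broker.

```csharp
using System;
using System.Collections.Generic;

// Composition root: logical services hosted in one executable.
var broker = new InMemoryBroker();
var billing = new BillingService(broker);

broker.Publish("OrderPlaced", new { OrderId = "123" });
// billing.InvoicesCreated == 1: Billing reacted via the broker,
// not via a direct in-process call from Sales.

// Toy broker: the same publish/subscribe contract would hold if the
// services were deployed in separate containers instead.
public class InMemoryBroker
{
    private readonly Dictionary<string, List<Action<object>>> _subscribers = new();

    public void Subscribe(string topic, Action<object> handler)
    {
        if (!_subscribers.TryGetValue(topic, out var handlers))
            _subscribers[topic] = handlers = new List<Action<object>>();
        handlers.Add(handler);
    }

    public void Publish(string topic, object message)
    {
        if (_subscribers.TryGetValue(topic, out var handlers))
            foreach (var handler in handlers) handler(message);
    }
}

public class BillingService
{
    public int InvoicesCreated;

    public BillingService(InMemoryBroker broker)
        => broker.Subscribe("OrderPlaced", _ => InvoicesCreated++);
}
```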

I’m not advocating one way or the other, but illustrating that logical separation and physical separation are different concerns.

Do Microservices require Containers/Docker/Kubernetes?

No, Microservices are about logical separation, not physical.
