
Domain-Driven Hexagon Architecture for JavaScript / TypeScript Developers


Architecture

Mainly based on:

And many other sources (more links below in every chapter).

Before we begin, here are the PROS and CONS of using a complete architecture like this:

Pros:

  • Independent of external frameworks, technologies, databases, etc. Frameworks and external resources can be plugged/unplugged with much less effort.
  • Easily testable and scalable.
  • More secure. Some security principles are baked into the design itself.
  • The solution can be worked on and maintained by different teams, without stepping on each other’s toes.
  • Easier to add new features. As the system grows over time, the difficulty in adding new features remains constant and relatively small.
  • If the solution is properly broken apart along bounded context lines, it becomes easy to convert pieces of it into microservices if needed.

Cons:

  • This is a sophisticated architecture which requires a firm understanding of quality software principles, such as SOLID, Clean/Hexagonal Architecture, Domain-Driven Design, etc. Any team implementing such a solution will almost certainly require an expert to drive the solution and keep it from evolving the wrong way and accumulating technical debt.
  • Some of the practices presented here are not recommended for small or medium-sized applications without much business logic. There is added up-front complexity to support all those building blocks and layers, boilerplate code, abstractions, data mapping etc., so implementing a complete architecture like this is generally ill-suited to simple CRUD applications and could over-complicate such solutions. Some of the principles described below can be used in smaller applications, but only after analyzing and understanding all the pros and cons.

Diagram

The diagram is mostly based on this one, plus others found online.

In short, data flow looks like this (from left to right):

  • Request/CLI command/event is sent to the controller using plain DTO;
  • Controller parses this DTO, maps it to a Command/Query object format and passes it to an Application service;
  • Application service handles this Command/Query; it executes business logic using domain services and/or entities and uses the infrastructure layer through ports;
  • Infrastructure layer uses a mapper to convert data to the format it needs, uses repositories to fetch/persist data and adapters to send events or do other I/O communications, maps data back to domain format and returns it back to the Application service;
  • After the application service finishes its job, it returns data/confirmation back to the controller;
  • Controllers return data back to the user (if application has presenters/views, those are returned instead).

Each layer is in charge of its own logic and has building blocks that usually should follow the Single-responsibility principle when possible and when it makes sense (for example, using Repositories only for database access, Entities for business logic, etc.).

Keep in mind that different projects can have more or fewer steps/layers/building blocks than described here. Add more if the application requires it, and skip some if the application is not that complex and doesn’t need all that abstraction.

General recommendation for any project: analyze how big/complex the application will be, find a compromise, and use as many layers/building blocks as the project needs, skipping the ones that may over-complicate things.

More details on each step below.

Modules

This project’s code examples use separation by modules (also called components). Each module gets its own folder with a dedicated codebase, and each use case inside that module gets its own folder to store most of the things it needs (this is also called Vertical Slicing).

It is easier to work on things that change together if they are gathered relatively close to each other. Try not to create dependencies between modules or use cases; move shared logic into separate files and make both depend on that instead of depending on each other.

Try to make every module independent and keep interactions between modules minimal. Think of each module as a mini application bounded by a single context. Try to avoid direct imports between modules (like importing a service from another domain), since this creates tight coupling. Communication between modules can be done using events, public interfaces, or through a port/adapter (more on that topic below).

This approach ensures loose coupling, and, if bounded contexts are defined and designed properly, each module can be easily separated into a microservice if needed without touching any domain logic.

Read more about modular programming benefits:

Each module is separated into the layers described below.

Application Core

This is the core of the system which is built using DDD building blocks:

Domain layer:

  • Entities
  • Aggregates
  • Domain Services
  • Value Objects

Application layer:

  • Application Services
  • Commands and Queries
  • Ports

More building blocks may be added if needed.

Commands and Queries

This principle is called Command–Query Separation (CQS). When possible, methods should be separated into Commands (state-changing operations) and Queries (data-retrieval operations). To make a clear distinction between those two types of operations, input objects can be represented as Commands and Queries. Before a DTO reaches the domain, it is converted into a Command/Query object.

Commands

  • Commands are used for state-changing actions, like creating a new user and saving it to the database. Create, Update and Delete operations are considered state-changing.

Data retrieval is the responsibility of Queries, so Command methods should not return anything. There are some options on how to achieve this.

That said, violating the CQS rule for Commands and returning a bare minimum (like the ID of a created item or a confirmation message) may simplify things for most APIs.

Note: Command has nothing to do with the Command Pattern; it is just a convenient name to represent that this object invokes a CQS Command. Both Commands and Queries in this example are just simple objects with data.

Example of command object: create-user.command.ts
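For illustration, a minimal sketch of what such a command object might look like (the class and property names here are assumptions, not the exact file contents):

export class CreateUserCommand {
  // plain readonly data, no behavior
  readonly email: string;
  readonly country: string;

  constructor(props: CreateUserCommand) {
    this.email = props.email;
    this.country = props.country;
  }
}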

Queries

  • Query is used for retrieving data and should not make any state changes (like writes to the database, files, etc.).

Queries are usually just a data retrieval operation with no business logic involved; so, if needed, the application and domain layers can be bypassed completely. Though, if some additional non-state-changing logic has to be applied before returning a query response (like calculating something), it should be done in a corresponding application service.

Example of query bypassing application/domain layers completely: find-user-by-email.http.controller.ts
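As a rough sketch of what such a bypass might look like (all names and shapes here are made up for illustration), a thin controller can depend on a read-only repository directly and skip application services entirely:

class FindUserByEmailHttpController {
  constructor(
    private readonly userRepo: {
      findOneByEmail(email: string): Promise<{ id: string; email: string } | undefined>;
    },
  ) {}

  // no business logic here: fetch, map to a response shape, return
  async handle(email: string): Promise<{ id: string; email: string } | undefined> {
    return this.userRepo.findOneByEmail(email);
  }
}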

Ports

Ports (for Driven Adapters) are interfaces that define contracts which must be implemented by infrastructure adapters in order to execute actions that are more related to technology details than to business logic. Ports act as abstractions for technology details that business logic does not care about.

In the Application Core, dependencies point inwards. Outer layers can depend on inner layers, but inner layers never depend on outer layers. The Application Core shouldn’t depend on frameworks or access external resources directly. Any external calls to out-of-process resources, or retrieval of data from remote processes, should be done through ports (interfaces), with class implementations created somewhere in the infrastructure layer and injected into the application’s core (Dependency Injection and Dependency Inversion). This makes business logic independent of technology, facilitates testing, and allows plugging/unplugging/swapping any external resource easily, making the application modular and loosely coupled.

  • Ports are basically just interfaces that define what has to be done and don’t care about how it is done.
  • Ports can be created to abstract I/O operations, technology details, invasive libraries, legacy code etc. from the Domain.
  • Ports should be created to fit the Domain needs, not simply mimic the tools APIs.
  • Mock implementations can be passed to ports while testing. Mocking makes your tests faster and independent from the environment.
  • When designing ports, remember the Interface Segregation Principle. Split large interfaces into smaller ones when it makes sense, but keep in mind not to overdo it when not necessary.
  • Ports can also help to delay decisions. The Domain layer can be implemented before even deciding what technologies (frameworks, database, etc.) will be used.

Note: since most port implementations are injected and executed in application services, the Application Layer can be a good place to keep those ports. But there are times when the Domain Layer’s business logic depends on executing some external resource; in that case those ports can be put in the Domain Layer.

Note: creating ports in smaller applications/APIs may overcomplicate such solutions by adding unnecessary abstractions. Using concrete implementations directly instead of ports may be enough in such applications. Consider all pros and cons before using this pattern.
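To illustrate the idea, here is a minimal sketch of a port and its adapter (interface and class names are assumptions):

// Port (application core): describes what has to be done, not how
export interface EmailSenderPort {
  send(to: string, subject: string, body: string): Promise<void>;
}

// Adapter (infrastructure layer): a concrete implementation, injected wherever the port is needed
export class SmtpEmailSender implements EmailSenderPort {
  async send(to: string, subject: string, body: string): Promise<void> {
    // ...call an SMTP client or a third-party API here
  }
}

In tests, a mock implementing EmailSenderPort can be injected instead of SmtpEmailSender.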

Example files:

Domain Layer

This layer contains application’s business rules.

Domain should only operate using domain objects, most important ones are described below.

Entities

Entities are the core of the domain. They encapsulate Enterprise wide business rules and attributes. An entity can be an object with properties and methods, or it can be a set of data structures and functions.

Entities represent business models and express what properties a particular model has, what it can do, and when and under what conditions it can do it. Examples of business models are a User, Product, Booking, Ticket, Wallet, etc.

Entities must always protect their invariants:

Domain entities should always be valid entities. There are a certain number of invariants for an object that should always be true. For example, an order item object always has to have a quantity that must be a positive integer, plus an article name and price. Therefore, invariants enforcement is the responsibility of the domain entities (especially of the aggregate root) and an entity object should not be able to exist without being valid.

Entities:

  • Contain Domain business logic. Avoid having business logic in your services when possible, as this leads to an Anemic Domain Model (domain services are an exception for business logic that can’t be put in a single entity).
  • Have an identity that defines them and makes them distinguishable from others. Their identity is consistent during their life cycle.
  • Equality between two entities is determined by comparing their identifiers (usually the id field).
  • Can contain other objects, such as other entities or value objects.
  • Are responsible for collecting all the understanding of state and how it changes in the same place.
  • Are responsible for coordinating operations on the objects they own.
  • Know nothing about upper layers (services, controllers, etc.).
  • Should have their data modelled to accommodate business logic, not some database schema.
  • Must protect their invariants; try to avoid public setters: update state using methods and execute invariant validation on each update if needed (this can be a simple validate() method that checks if business rules are not violated by the update).
  • Must be consistent on creation. Validate Entities and other domain objects on creation and throw an error on the first failure. Fail Fast.
  • Avoid no-arg (empty) constructors; accept and validate all required properties through a constructor.
  • For optional properties that require some complex setting up, a Fluent interface and the Builder Pattern can be used.
  • Should be partially immutable. Identify which properties shouldn’t change after creation and make them readonly (for example id or createdAt).

Note: A lot of people tend to create one module per entity, but this approach is not very good. Each module may have multiple entities. One thing to keep in mind is that putting entities in a single module requires those entities to have related business logic; don’t group unrelated entities in one module.
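A condensed sketch of an entity following the rules above (a hypothetical Wallet, not taken from the example files):

export class Wallet {
  private readonly _id: string; // identity never changes
  private _balance: number;

  constructor(id: string, initialBalance: number) {
    this._id = id;
    this._balance = initialBalance;
    this.validate(); // fail fast: the entity must be valid on creation
  }

  get id(): string {
    return this._id;
  }

  withdraw(amount: number): void {
    this._balance -= amount;
    this.validate(); // re-check invariants on every state change
  }

  private validate(): void {
    if (this._balance < 0) {
      throw new Error('Wallet balance cannot be negative');
    }
  }
}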

Example files:

Read more:

Aggregates

An Aggregate is a cluster of domain objects that can be treated as a single unit. It encapsulates entities and value objects which conceptually belong together. It also contains a set of operations that can be performed on those domain objects.

  • Aggregates help to simplify the domain model by gathering multiple domain objects under a single abstraction.
  • Aggregates should not be influenced by data model. Associations between domain objects are not the same as database relationships.
  • An Aggregate Root is an entity that contains other entities/value objects and all the logic to operate on them.
  • The Aggregate Root has global identity. Entities inside the boundary have local identity, unique only within the Aggregate.
  • The Aggregate Root is a gateway to the entire aggregate. Any references from outside the aggregate should only go to the aggregate root.
  • Any operations on an aggregate must be transactional: either everything gets saved/updated/deleted or nothing.
  • Only Aggregate Roots can be obtained directly with database queries. Everything else must be done through traversal.
  • Similar to Entities, aggregates must protect their invariants through their entire lifecycle. When a change to any object within the Aggregate boundary is committed, all invariants of the whole Aggregate must be satisfied. Simply said, all objects in an aggregate must be consistent, meaning that if one object inside an aggregate changes state, this shouldn’t conflict with other domain objects inside the aggregate (this is called a Consistency Boundary).
  • Objects within the Aggregate can hold references to other Aggregate Roots. Prefer referencing external aggregates only by their globally unique identity, rather than holding a direct object reference.
  • Try to avoid aggregates that are too big; this can lead to performance and maintenance problems.
  • Aggregates can publish Domain Events (more on that below).

All of these rules come from the idea of creating a boundary around Aggregates. The boundary simplifies the business model, as it forces us to consider each relationship very carefully, and within a well-defined set of rules.
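A small sketch to make the boundary idea concrete (a hypothetical Order aggregate; names and the item limit are assumptions):

// OrderItem has only local identity and is never referenced from outside the aggregate
class OrderItem {
  constructor(
    readonly productId: string,
    readonly quantity: number,
  ) {}
}

// Order is the aggregate root: every change goes through it,
// so it can enforce invariants for the whole aggregate
class Order {
  private readonly items: OrderItem[] = [];

  constructor(readonly id: string) {}

  addItem(productId: string, quantity: number): void {
    if (!Number.isInteger(quantity) || quantity <= 0) {
      throw new Error('Quantity must be a positive integer');
    }
    if (this.items.length >= 10) {
      throw new Error('Order cannot contain more than 10 items');
    }
    this.items.push(new OrderItem(productId, quantity));
  }
}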

Example files:

Read more:

Domain Events

Domain event indicates that something happened in a domain that you want other parts of the same domain (in-process) to be aware of.

For example, if a user buys something, you may want to:

  • Send confirmation email to that user;
  • Send notification to a corporate slack channel;
  • Notify shipping department;
  • Perform other side effects that are not concern of an original buy operation domain.

The typical approach is to execute all this logic in the service that performs the buy operation, but this creates coupling between different subdomains.

A better approach is publishing a Domain Event. Any side-effect operations can be performed by subscribing to a concrete Domain Event and creating as many event handlers as needed, without gluing any unrelated code into the original domain’s service that sends the event.

Domain events are just messages pushed to a domain event dispatcher in the same process. Out-of-process communications (like between microservices) use Integration Events. If a Domain Event needs to reach an external process, a domain event handler should send an Integration Event.

Domain Events may be useful for creating an audit log to track all changes to important entities by saving each event to the database. Read more on why audit logs may be useful: Why soft deletes are evil and what to do instead.

There may be different ways of implementing Domain Events, for example using some kind of internal event bus/emitter, like an Event Emitter, or using patterns like Mediator or a slightly modified Observer.

Examples:

  • domain-events.ts — this class is responsible for providing publish/subscribe functionality for anyone who needs to emit or listen to events.
  • user-created.domain-event.ts — simple object that holds data related to published event.
  • user-created.event-handler.ts — this is an example of Domain Event Handler that executes actions and side-effects when a domain event is raised (in this case, user is created). Domain event handlers belong to Application layer.
  • typeorm.repository.base.ts — repository publishes all events for execution right before or right after persisting transaction.

Events can be published right before or right after the insert/update/delete transaction; choose whichever option is better for a particular project:

  • Before: to make side-effects part of that transaction. If any event fails all changes should be reverted.
  • After: to persist transaction even if some event fails. This makes side-effects independent, but in that case eventual consistency should be implemented.

Both options have pros and cons.

Note: this project uses a custom implementation for Domain Events. The reason for not using the Node Event Emitter is that an event emitter executes events immediately when called instead of when we want it (before/after the transaction), and has no option to await all events to finish, which is useful when making those events a part of the transaction.
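A minimal sketch of what such a custom dispatcher might look like (this is a simplification, not the project’s actual domain-events.ts):

type EventHandler = (event: unknown) => Promise<void>;

class DomainEvents {
  private static handlers = new Map<string, EventHandler[]>();

  static subscribe(eventName: string, handler: EventHandler): void {
    const list = this.handlers.get(eventName) ?? [];
    list.push(handler);
    this.handlers.set(eventName, list);
  }

  // unlike Node's Event Emitter, publishing can be awaited,
  // so it can happen exactly before/after a transaction
  static async publish(eventName: string, event: unknown): Promise<void> {
    const list = this.handlers.get(eventName) ?? [];
    await Promise.all(list.map((handler) => handler(event)));
  }
}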

To get a better understanding of domain events and their implementation, read these:

For integration events in distributed systems, here are some patterns that may be useful in some cases:

Enforcing invariants of Domain Objects

Replacing primitives with Value Objects

Most code bases operate on primitive types — strings, numbers etc. In the Domain Model, this level of abstraction may be too low.

Significant business concepts can be expressed using specific types and classes. Value Objects can be used instead of primitives to avoid primitive obsession. So, for example, an email of type string:

email: string;

could be represented as a Value Object instead:

email: Email;

Now the only way to make an email is to create a new instance of the Email class first; this ensures it will be validated on creation and a wrong value won’t get into Entities.
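A minimal sketch of such a Value Object (the validation rule here is simplified for illustration):

export class Email {
  private readonly value: string;

  constructor(value: string) {
    // a basic sanity check; real rules belong to the domain
    if (!/^\S+@\S+\.\S+$/.test(value)) {
      throw new Error('Email is invalid');
    }
    this.value = value.toLowerCase();
  }

  toString(): string {
    return this.value;
  }
}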

Also, important behavior of the domain primitive is encapsulated in one place. By having the domain primitive own and control domain operations, you reduce the risk of bugs caused by a lack of detailed domain knowledge of the concepts involved in the operation.

Creating an object for primitive values may be cumbersome, but it somewhat forces a developer to study the domain in more detail instead of just throwing a primitive type around without even thinking about what that value represents in the domain.

Using Value Objects for primitive types is also known as using domain primitives. The concept and naming were proposed in the book "Secure by Design".

Using Value Objects instead of primitives:

  • Makes code easier to understand by using ubiquitous language instead of just string.
  • Improves security by ensuring invariants of every property.
  • Encapsulates specific business rules associated with a value.

An alternative to creating an object may be a type alias, just to give the primitive a semantic meaning.

Example files:

Recommended to read:

Use Value Objects/Domain Primitives and Types system to make illegal states unrepresentable in your program.

Some people recommend using objects for every value:

Quote from John A De Goes:

Making illegal states unrepresentable is all about statically proving that all runtime values (without exception) correspond to valid objects in the business domain. The effect of this technique on eliminating meaningless runtime states is astounding and cannot be overstated.

Let’s distinguish two types of protection from illegal states: at compile time and at runtime.

At compile time

Types give useful semantic information to a developer. Good code should be easy to use correctly and hard to use incorrectly. The type system can be a good help for that. It can prevent some nasty errors at compile time, so the IDE will show type errors right away.

The simplest example may be using enums instead of constants, and using those enums as input types for something. When passing anything that is not intended, the IDE will show a type error.

Or, for example, imagine that business logic requires having contact info of a person: either an email, or a phone, or both. Both email and phone could be represented as optional, for example:

interface ContactInfo {
  email?: Email;
  phone?: Phone;
}

But what happens if a programmer provides neither? The business rule is violated. An illegal state is allowed.

Solution: this could be represented as a union type instead:

type ContactInfo = Email | Phone | [Email, Phone];

Now either an Email, or a Phone, or both must be provided. If nothing is provided, the IDE will show a type error right away. Business rule validation is thus moved from runtime to compile time, which makes the application more secure and gives faster feedback when something is not used as intended.

This is called a typestate pattern.

The typestate pattern is an API design pattern that encodes information about an object’s run-time state in its compile-time type.

Read more about typestates:

At runtime

Things that can’t be validated at compile time (like user input) are validated at runtime.

Domain objects have to protect their invariants. Having some validation rules here will protect their state from corruption.

A Value Object can represent a typed value in the domain (a domain primitive). The goal here is to encapsulate validations and business logic related only to the represented fields, and to make it impossible to pass around raw values by forcing the creation of valid Value Objects first. This object only accepts values which make sense in its context.

If every argument and return value of a method is valid by definition, you’ll have input and output validation in every single method in your codebase without any extra effort. This will make the application more resilient to errors and will protect it from a whole class of bugs and security vulnerabilities caused by invalid input data.

Data should not be trusted. There are a lot of cases when invalid data may end up in the domain: for example, if data comes from an external API or a database, or if it’s just a programmer error.

Enforcing self-validation will inform you immediately when data is corrupted. Not validating domain objects allows them to be in an incorrect state, which leads to problems.

Without domain primitives, the remaining code needs to take care of validation, formatting, comparing, and lots of other details. Entities represent long-lived objects with a distinguished identity, such as articles in a news feed, rooms in a hotel, and shopping carts in online sales. The functionality in a system often centers around changing the state of these objects: hotel rooms are booked, shopping cart contents are paid for, and so on. Sooner or later the flow of control will be guided to some code representing these entities. And if all the data is transmitted as generic types such as int or String, responsibilities fall on the entity code to validate, compare, and format the data, among other tasks. The entity code will be burdened with a lot of tasks, rather than focusing on the central business flow of state changes that it models. Using domain primitives can counteract the tendency for entities to grow overly complex.

Quote from: Secure by design: Chapter 5.3 Standing on the shoulders of domain primitives

Note: Though primitive obsession is a code smell, some people consider making a class/object for every primitive to be over-engineering. For less complex and smaller projects it definitely may be. For bigger projects, there are people who advocate both for and against this approach. If creating a class for every primitive is not preferred, create classes just for those primitives that have specific rules or behavior, or validate them outside of the domain using some validation framework. Here are some thoughts on this topic: From Primitive Obsession to Domain Modelling — Over-engineering?.

Recommended to read:

How to do simple validation?

For simple validation like checking for nulls, empty arrays, input length, etc., a library of guards can be created.

Example file: guard.ts
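A couple of such guards might look like this (a sketch, not the full contents of guard.ts):

export class Guard {
  // checks for 'emptiness': null/undefined, empty string, empty array
  static isEmpty(value: unknown): boolean {
    if (value === null || value === undefined) return true;
    if (typeof value === 'string' && value.trim() === '') return true;
    if (Array.isArray(value) && value.length === 0) return true;
    return false;
  }

  static lengthIsBetween(value: string, min: number, max: number): boolean {
    return value.length >= min && value.length <= max;
  }
}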

Read more: Refactoring: Guard Clauses

Another solution would be using an external validation library, but it is not good practice to tie the domain to external libraries, and this is not usually recommended.

Although exceptions can be made if needed, especially for very specific validation libraries that validate only one thing (like specific IDs, for example a bitcoin wallet address). Tying only one or just a few Value Objects to such a specific library won’t cause any harm, unlike general-purpose validation libraries, which will be tied to the domain everywhere and will be troublesome to change in every Value Object if the old library is no longer maintained, contains critical bugs, or is compromised by hackers.

Though, it is fine to do full sanity checks using a validation framework or library outside of the domain (for example class-validator decorators in DTOs), and do only some basic checks inside of Value Objects (besides business rules), like checking for null or undefined, checking length, matching against a simple regexp, etc., to check if the value makes sense and for extra security.

Note about using regexp

There are other strategies for doing validation inside the domain, like passing a validation schema as a dependency when creating a new Value Object, but this creates extra complexity.

Whether to use an external library/framework for validation inside the domain is a tradeoff; analyze all the pros and cons and choose what is more appropriate for the application at hand.

For some projects, especially smaller ones, it might be easier and more appropriate to just use validation library/framework.

Keep in mind that not all validations can be done in a single Value Object; it should validate only rules shared by all contexts. There are cases when validation may differ depending on the context, or one field may involve another field, or even a different entity. Handle those cases accordingly.

Types of validation

There are some general recommendations for validation order. Cheap operations like checking for null/undefined and checking the length of data come early, and more expensive operations that require calling the database come later. A short sketch applying this order follows the list below.

Preferably in this order:

  • Origin — Is the data from a legitimate sender? When possible, accept data only from authorized users / whitelisted IPs etc. depending on the situation.
  • Existence — are provided data not empty? Further validations make no sense if data is empty. Check for empty values: null/undefined, empty objects and arrays.
  • Size — Is it reasonably big? Before any further steps, check length/size of input data, no matter what type it is. This will prevent validating data that is too big which may block a thread entirely (sending data that is too big may be a DoS attack).
  • Lexical content — Does it contain the right characters and encoding? For example, if we expect data that only contains digits, we scan it to see if there’s anything else. If we find anything else, we draw the conclusion that the data is either broken by mistake or has been maliciously crafted to fool our system.
  • Syntax — Is the format right? Check if data format is right. Sometimes checking syntax is as simple as using a regexp, or it may be more complex like parsing a XML or JSON.
  • Semantics — Does the data make sense? Check data in connection with the rest of the system (like database, other processes etc). For example, checking in a database if ID of item exists.
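For illustration, a hypothetical check of a user-supplied numeric ID applying this order (origin checks like authentication would happen earlier in the pipeline; userRepo is an assumed dependency):

declare const userRepo: { exists(id: string): Promise<boolean> }; // assumed dependency

async function validateUserId(input: unknown): Promise<string> {
  // Existence: further checks make no sense on empty data
  if (input === null || input === undefined || input === '') {
    throw new Error('Input is empty');
  }
  const value = String(input);
  // Size: reject oversized input before doing any further work
  if (value.length > 64) {
    throw new Error('Input is too long');
  }
  // Lexical content / syntax: only digits are expected here
  if (!/^\d+$/.test(value)) {
    throw new Error('Input contains invalid characters');
  }
  // Semantics: the expensive database check comes last
  if (!(await userRepo.exists(value))) {
    throw new Error('User does not exist');
  }
  return value;
}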

Read more about validation types described above:

Using libraries inside application’s core

Whether or not to use libraries in the application layer, and especially the domain layer, is the subject of much debate. In the real world, injecting every library instead of importing it directly is not always practical, so exceptions can be made for some single-responsibility libraries that help to implement domain logic (like working with numbers).

The main recommendation to keep in mind is that libraries imported into the application’s core shouldn’t expose:

  • Functionality to access any out-of-process resources (http calls, database access, etc.);
  • Functionality not relevant to the domain (frameworks, technology details like ORMs, loggers, etc.);
  • Functionality that brings randomness (generating random IDs, timestamps, etc.), since this makes tests unpredictable (though in the TypeScript world it is not that big of a deal, since this can be mocked by a test library without using DI);
  • Frameworks can be a real nuisance because, by definition, they want to be in control. Isolate them within the adapters and keep the domain model clean of them;
  • If a library changes often or has a lot of dependencies of its own, it most likely shouldn’t be used in the domain layer.

To use such libraries, consider creating an anti-corruption layer by using the adapter or facade patterns.
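For example, a library that brings randomness could be hidden behind a tiny adapter so the core depends only on an interface it owns (a sketch; the 'uuid' package is real, the port name is an assumption):

import { v4 as uuidv4 } from 'uuid';

// Port, owned by the application core
export interface IdGenerator {
  generate(): string;
}

// Adapter in the infrastructure layer, wrapping the library
export class UuidGenerator implements IdGenerator {
  generate(): string {
    return uuidv4();
  }
}

Swapping or mocking the library now touches a single class instead of the whole domain.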

We sometimes tolerate libraries in the center: libraries are not in control, so they are less intrusive. But be careful with general-purpose libraries that may scatter across many domain objects; it will be hard to replace those libraries if needed. Tying only one or just a few domain objects to some single-responsibility library should be fine: it is way easier to replace a specific library that is tied to one or a few objects than a general-purpose library that is everywhere.

Offload as many irrelevant responsibilities as possible from the core, especially from the domain layer. In addition, try to minimize the usage of dependencies in general. The more dependencies your software has, the more potential errors and security holes it has. One technique for making software more robust is to minimize what your software depends on — the less that can go wrong, the less that will go wrong. On the other hand, removing all dependencies would be counterproductive, as replicating that functionality would be a huge amount of work and less reliable than just using a widely-used dependency. Finding a good balance is important; this skill requires experience.

Read more:

Persistence models

Using a single entity for domain logic and database concerns leads to a database-centric architecture. In the DDD world, the domain model and the persistence model should be separated.

Since domain Entities have their data modeled to best accommodate domain logic, it may not be in the best shape to save in a database. For that purpose, Persistence models can be created that have a shape better suited to the particular database being used. The Domain layer should not know anything about persistence models, and it should not care.

There can be multiple models optimized for different purposes, for example:

  • Domain with its own models — Entities, Aggregates and Value Objects.
  • Persistence layer with its own models — ORM for SQL, Schemas for NoSQL, Read/Write models if databases are separated into a read and write db (CQRS), etc.

Over time, as the amount of data grows, there may be a need to make changes to the database, like improving performance or data integrity by re-designing some tables, or even changing the database entirely. Without an explicit separation between Domain and Persistence models, any change to the database will lead to changes in your domain Entities or Aggregates. For example, when performing database normalization, data can spread across multiple tables rather than being in one table, or vice-versa for denormalization. This may force a team to do a complete refactoring of the domain layer, which may cause unexpected bugs and challenges. Separating Domain and Persistence models prevents that.

An alternative to using Persistence Models may be raw queries or some sort of query builder; in this case you may not need to create ORM Entities or Schemas.

Note: separating domain and persistence models may be overkill for smaller applications; consider all pros and cons before making this decision.
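A simplified sketch of the separation (shapes and names are made up for illustration):

// Domain entity: modeled for business logic, with a nested value object
class User {
  constructor(
    readonly id: string,
    readonly address: { country: string; street: string },
  ) {}
}

// Persistence model: flattened to match the table schema
interface UserOrmEntity {
  id: string;
  country: string;
  street: string;
}

// Mapper converts between the two, keeping the domain free of database concerns
class UserMapper {
  toPersistence(user: User): UserOrmEntity {
    return { id: user.id, country: user.address.country, street: user.address.street };
  }

  toDomain(record: UserOrmEntity): User {
    return new User(record.id, { country: record.country, street: record.street });
  }
}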

Example files:

Read more:

General recommendations on architectures, best practices, design patterns and principles

Different projects will most likely have different requirements. Some principles/patterns can be implemented in a simplified form; some can be skipped. Follow the YAGNI principle and don’t overengineer.

Sometimes complex architecture and principles like SOLID can be incompatible with YAGNI and KISS. A good programmer should be pragmatic and able to combine their skills and knowledge with common sense to choose the best solution for the problem.

You need some experience with object-oriented software development in real world projects before they are of any use to you. Furthermore, they don’t tell you when you have found a good solution and when you went too far. Going too far means that you are outside the “scope” of a principle and the expected advantages don’t appear.

Principles, Heuristics, ‘laws of engineering’ are like hint signs, they are helpful when you know where they are pointing to and you know when you have gone too far. Applying them requires experience, that is trying things out, failing, analysing, talking to people, failing again, fixing, learning and failing some more. There is no short cut as far as I know.

Before implementing any pattern, always analyze whether the benefit it gives is worth the extra code complexity.

Effective design argues that we need to know the price of a pattern is worth paying — that’s its own skill.

However, remember:

It’s easier to refactor over-design than it is to refactor no design.

Read more:

Alternatives to exceptions

There is an alternative approach of not throwing exceptions, but instead returning some kind of Result object type with a Success or a Failure (an Either monad from functional languages like Haskell). Unlike throwing exceptions, this approach allows defining types for exceptional outcomes and forces handling those cases explicitly instead of using try/catch. For example:

// A minimal Either type: either a success value or a typed failure
type Either<TValue, TError> =
  | { success: true; value: TValue }
  | { success: false; error: TError };

class User {
  // ...
  public static create(email: string): Either<User, EmailInvalidException> {
    // ...code for creating a user
    if (!isValidEmail(email)) { // isValidEmail is assumed to exist elsewhere
      return { success: false, error: new EmailInvalidException() }; // <- returning instead of throwing
    }
    return { success: true, value: new User(email) };
  }
}
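Calling code would then handle both outcomes explicitly, something like:

const result = User.create('not-an-email');
if (result.success) {
  // ...proceed with result.value
} else {
  console.error(result.error); // <- the compiler forces handling of the failure case
}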

This approach has its advantages and may work nicely in some languages, especially functional languages which support the Either type natively, but it is not widely used in the TypeScript/JavaScript world.

Advantages:

  • Explicitly shows type of each exception that a method can return so you can handle it accordingly.
  • Complex domains may have a lot of exceptions that need special handling and are part of a business logic (like seat already booked, choose another one). In those cases explicit error types may be useful.
  • Makes error tracing easier.

Downsides:

  • If used incorrectly, i.e. for technical (connection failed) or validation (incorrect input) errors, it may cause security issues and goes against the Fail-fast principle. Instead of terminating the program flow, this approach continues program execution and allows it to run in an incorrect state, which may lead to more unexpected errors, so it’s generally better to throw in those cases.
  • It adds extra complexity. Exception cases returned somewhere deep inside the application have to be handled by functions in upper layers until they reach the controllers, which may add a lot of extra if statements.
  • More boilerplate code.

In most applications it makes more sense to just throw an exception and notify the user immediately. Use Result/Either error types carefully, only if you really need them and know what you are doing (unless you’re using a language like Rust, which has this functionality built in).

Read more:

Testing

Software testing helps catch bugs early. A properly tested software product ensures reliability, security and high performance, which further results in time savings, cost effectiveness and customer satisfaction.

Let’s review two types of software testing:

Testing module/use-case internal structures (creating a test for every file/class) is called White Box testing. White Box testing is a widely used technique, but it has disadvantages: it creates coupling to implementation details, so every time you refactor business logic code, this may also require refactoring the corresponding tests.

Use case requirements may change mid-work, your understanding of a problem may evolve, or you may start noticing new patterns that emerge during development; in other words, you start noticing the “big picture”, which may lead to refactoring. For example: imagine that you defined a White Box test for a class, and while developing this class you start noticing that it does too much and should be separated into two classes. Now you also have to refactor your unit test. After some time, while implementing a new feature, you notice that this new feature uses some code from the class you defined before, so you decide to separate that code and make it reusable, creating a third class (which originally was one), which leads to changing your unit tests yet again. Use case requirements, input, output and behavior never changed, but the unit tests had to be changed multiple times. This is inefficient and time-consuming.

To solve this and get the most out of your tests, prefer Black Box testing (Behavioral Testing). This means that tests should focus on the user-facing behavior users care about (your code’s public API), not the implementation details of the individual units inside. This avoids coupling, protects tests from changes that may happen while refactoring, and makes tests easier to understand and maintain, thus saving time.

Tests that are independent of implementation details are easier to maintain since they don’t need to be changed each time you make a change to the implementation.

Try to avoid White Box testing when possible. However, it’s worth mentioning that there are cases when White Box testing may be useful. For instance, we may need to go deeper into the implementation when it is required to reduce the combinations of testing conditions: for example, if a class uses several plug-in strategies, it is easier to test those strategies one at a time. In such cases White Box tests may be appropriate.

Use White Box testing only when it is really needed and as an addition to Black Box testing, not the other way around.

It’s all about investing only in the tests that yield the biggest return on your effort.

Behavioral tests can be divided in two parts:

  • Fast: use case tests in isolation which test only your business logic, with all I/O (external API or database calls, file reads, etc.) mocked. This makes tests fast, so they can be run all the time (after each change or before every commit), informing you as fast as possible when something fails. Finding bugs early is critical and saves a lot of time (see the sketch after this list).
  • Slow: full End-to-End (e2e) tests which test a use case from the end-user standpoint. Instead of injecting I/O mocks, these tests should have all infrastructure up and running: database, API routes, etc. They check how everything works together and are slower, so they can be run only before pushing/deploying. Though e2e tests live in the same project/repository, it is good practice to keep them independent from the project’s code. In bigger projects, e2e tests are usually written by a separate QA team.
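A minimal sketch of a fast behavioral test using Jest (the service, its constructor, the command shape and the repository interface are all assumptions for illustration):

describe('create user use case', () => {
  it('persists a new user when input is valid', async () => {
    // all I/O is mocked, so the test stays fast and environment-independent
    const userRepo = { exists: jest.fn().mockResolvedValue(false), save: jest.fn() };
    const service = new CreateUserService(userRepo); // hypothetical use case service

    const id = await service.execute({ email: 'john@doe.com', country: 'England' });

    // assert on observable behavior (what was persisted/returned), not on internals
    expect(userRepo.save).toHaveBeenCalledTimes(1);
    expect(id).toBeDefined();
  });
});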

Note: some people try to make e2e tests faster by using in-memory or embedded databases (like sqlite3). This makes tests faster, but reduces the reliability of those tests and should be avoided. Read more: Don’t use In-Memory Databases for Tests.

Example files: // TODO

Read more:

Logging

  • Try to log all meaningful events in a program that can be useful to anybody in your team.
  • Use proper log levels: log/info for events that are meaningful during production, debug for events useful while developing/debugging, and warn/error for unwanted behavior at any stage.
  • Write meaningful log messages and include metadata that may be useful. Try to avoid cryptic messages that only you understand.
  • Never log sensitive data: passwords, emails, credit card numbers etc. since this data will end up in log files. If log files are not stored securely this data can be leaked.
  • Avoid default logging tools (like console.log). Use mature logger libraries (for example Winston) that support features like enabling/disabling log levels, convenient log formats that are easy to parse (like JSON) etc.
  • Consider including the user id in logs. It will facilitate investigation if a user creates an incident ticket.
  • In distributed systems, a gateway can generate a unique id for each request and pass it to every system that processes the request. Logging this id will make it easier to find related logs across different systems/files.
  • Use consistent structure across all logs. Each log line should represent one single event and contain things like timestamp, context, unique user/request id and/or id of entity/aggregate that is being modified, as well as additional metadata if required.
  • Use log management systems. This will allow you to track and analyze logs as they happen in real time. Here is a short list of log managers: Sentry, Loggly, Logstash, Splunk, etc.
  • Send notifications of important events that happen in production to a corporate chat like Slack or even by SMS.
  • Don’t write logs to a file from your program. Write all logs to stdout (to a terminal window) and let other tools handle writing logs to a file (for example docker supports writing logs to a file). Read more: Why should your Node.js application not handle log routing?
  • Logs can be visualized by using a tool like Kibana.
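Putting a few of these recommendations together, a Winston logger writing structured JSON to stdout might be configured like this (a sketch; the metadata field names are just examples):

import * as winston from 'winston';

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL ?? 'info',
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  // stdout only; routing logs to files is left to the environment (e.g. docker)
  transports: [new winston.transports.Console()],
});

// one event per line, with a correlation id and useful metadata
logger.info('User created', { requestId: 'a1b2c3', userId: '42', module: 'user' });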

Read more:

Health monitoring

In addition to logging tools, when something unexpected happens in production, it’s critical to have thorough monitoring in place. As software hardens more and more, unexpected events will become more and more infrequent, and reproducing those events will become harder and harder. So when one of those unexpected events happens, there should be as much data available about it as possible. Software should be designed from the start to be monitored. The monitoring aspects of software are almost as important as the functionality of the software itself, especially in big systems, since unexpected events can lead to money and reputation loss for a company. Monitoring helps with fixing, and sometimes preventing, unexpected behavior like failures, slow response times, errors, etc.

Health monitoring tools are a good way to keep track of system performance, identify causes of crashes or downtime, monitor behavior, availability and load.

Some health monitoring tools already include logging management and error tracking, as well as alerts and general performance monitoring.

Here are some basic recommendations on what can be monitored:

  • Connectivity — verify that a user can successfully send a request to the API endpoint and get a response with the expected HTTP status code. This confirms that the API endpoint is up and running. It can be achieved by creating some kind of ‘health check’ endpoint (see the sketch after this list).
  • Performance — make sure the response time of the API is within acceptable limits. Long response times cause bad user experience.
  • Error rate — errors immediately affect your customers; you need to know when errors happen right away and fix them.
  • CPU and Memory usage — spikes in CPU and memory usage can indicate that there are problems in your system, for example badly optimized code, an unwanted process running, memory leaks, etc. This can result in loss of money for your organization, especially when cloud providers are used.
  • Storage usage — servers run out of storage. Monitoring storage usage is essential to avoid data loss.
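As an example of the first point, a trivial ‘health check’ endpoint that a monitoring tool can poll might look like this (a sketch assuming an Express application):

import express from 'express';

const app = express();

// returns 200 while the process is up; a real check could also ping the database
app.get('/health', (_req, res) => {
  res.status(200).json({ status: 'ok', uptime: process.uptime() });
});

app.listen(3000);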

Choose health monitoring tools depending on your needs; here are some examples:

Read more:

Folder and File Structure

So instead of using the typical layered style, where the whole application is divided into services, controllers, etc., we divide everything by modules. Now, how do we structure files inside those modules?

A lot of people tend to do the same thing as before: create separate folders/files for services, controllers, etc. and keep all of the module’s use-case logic there, making those controllers and services bloated with responsibilities. This is the same approach that makes navigation harder.

Using this approach, every time something in a service changes, we might have to go to another folder to change the controllers, then to the dtos folder to change the corresponding DTO, etc.

It is more logical to separate every module by components and keep all the related files close together. Now if a use case changes, those changes are usually made in a single use-case component, not everywhere across the module.

This is called the Common Closure Principle (CCP). The folder/file structure in this project uses this principle. Related files that usually change together (and are not used by anything else outside of that component) are stored close together, in a single use-case folder. Check the user use-cases folder for examples.

Shared files (like domain objects, repositories, etc.) are stored apart, since they are reused by multiple use-cases. The domain layer is isolated, and use-cases, which are essentially wrappers around business logic, are treated as components. This approach makes navigation and maintenance easier. Check the user folder for an example.
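For orientation, a hypothetical module laid out this way might look like the following (folder and file names are illustrative, loosely modeled on this project’s structure):

src/modules/user/
  commands/
    create-user/                  <- a single use-case component
      create-user.http.controller.ts
      create-user.service.ts
      create-user.command.ts
  domain/                         <- shared by multiple use-cases
    user.entity.ts
    value-objects/
  database/
    user.repository.ts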

The aim here is to be strategic and place classes that, from experience, often change together into the same component.

Keep in mind that this project’s folder/file structure is an example and might not work for everyone. Main recommendations here are:

  • Separate your application into modules;
  • Keep files that change together close to each other (Common Closure Principle);
  • Group files by the behavior that changes together, not by the type of functionality they provide;
  • Keep files that are reused by multiple components apart;
  • Respect boundaries in your code; keeping files together doesn’t mean inner layers can import outer layers;
  • Try to avoid deeply nested folders;
  • Move files around until it feels right.

There are different approaches to file/folder structuring, like explicitly separating each layer into a corresponding folder. This defines boundaries more clearly but is harder to navigate. Choose whatever suits the project or personal preference better.

Custom utility types

Consider creating a bunch of shared custom utility types for different situations.

Some examples can be found in types folder.

Pre-push/pre-commit hooks

Consider launching tests/code formatting/linting every time you do git push or git commit. This prevents bad code from getting into your repo. Husky is a great tool for that.

Read more:

Prevent massive inheritance chains

This can be achieved by making classes final.

Note: in TypeScript, unlike some other languages, there is no built-in way to make a class final. But there is a way around it using a custom decorator.
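One possible implementation of such a decorator (a sketch; the actual final.decorator.ts may differ):

export function final<T extends { new (...args: any[]): object }>(target: T) {
  return class Final extends target {
    constructor(...args: any[]) {
      // new.target is the subclass when someone extends the decorated class
      if (new.target !== Final) {
        throw new Error(`Class ${target.name} is final and cannot be extended`);
      }
      super(...args);
    }
  };
}

Applying @final to a class will then throw at runtime if anyone tries to instantiate a subclass of it.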

Example file: final.decorator.ts

Read more:

Conventional commits

Conventional commits add useful prefixes to your commit messages, for example:

  • feat: added ability to delete user's profile

This creates a common language that makes it easier to communicate the nature of changes to teammates, and it may also be useful for automatic package versioning and release-notes generation.

Read more:

Additional resources

Articles

Repositories

Documentation

Blogs

Videos

Books
