
Golang. Ways to Design API. Server/Client multimodule repository.

Alexander Blackman

In different teams I’ve seen very different approaches to organising API interfaces.

For me, the most comfortable and effective from a system design point of view is the Design First pattern.

With it, we are not interested in the internal implementation of the individual parts of the system. We treat each unit of the system as an abstraction that communicates with us in a strictly defined language. Reminiscent of Test Driven Development, but in the design world.

I want to review the different approaches I’ve seen, describe their advantages and disadvantages, and solve one unpleasant problem that arises in one of these cases.

gRPC lets you organise contracts well out of the box.
The Design First pattern is effectively built into the approach itself. You can organise one big repository with the common service API and develop standards within the team: versioning, documentation and naming in a single place. On the downside, the business has to invest in code generation, its maintenance and usability.

Multiple microservices around a monolith in a monorepository is another case.
Just as convenient. Usually a microservice appears next to a monolith when you need to solve a finite, isolated task. Microservices in Go are written quickly; sometimes an HTTP server in a single main function is enough to get an MVP running. At the same time, it is convenient to place the server’s client inside the same package. Everything is quickly and thoroughly covered by tests using httptest. The headache here is mainly for the devops team, which has to organise the deploy, and for the team itself, which has to establish correct work with the monorepository. Monorepository vs. multirepository is an ancient holy war)

In a classic microservice architecture based on synchronous HTTP REST interactions between individual services, there are different ways to organise where the servers and their clients live.

In general, if your system has a lot of synchronous interactions, then I have bad news for you. Perhaps you are a happy owner of a distributed monolith. How to deal with it is a separate story.

The first approach is Design First code generation. First we describe the OpenAPI/Swagger contract. On its basis we generate a skeleton in which we then implement all the necessary logic from top to bottom. It is a perfect fit for companies that have arrived at unified microservice templates, where new services appear frequently and the process is well established. The advantages are the same as with the protobuf description. The minuses — exactly the same) A lot of the development team’s effort goes into creating and supporting the code generation system.

Here is a good article on this topic

https://goa.design/

Swagger and OpenAPI have their own libraries for code generation too.
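
To give a flavour of what design first looks like in Go, goa describes the contract in a Go DSL before any handler code exists, and the transport layer is generated from that description. A rough sketch (the service and field names here are made up; see goa.design for the exact current DSL):

package design

import . "goa.design/goa/v3/dsl"

// A hypothetical "orders" service described design-first:
// payload, result and HTTP mapping are fixed before any implementation is written.
var _ = Service("orders", func() {
	Method("create", func() {
		Payload(func() {
			Field(1, "item", String, "Item identifier")
			Required("item")
		})
		Result(String)
		HTTP(func() {
			POST("/api/v1/orders")
			Response(StatusOK)
		})
	})
})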

On the other hand — using a swagger on its own may not be effective.
I’ve seen a lot of dead swaggers.

Like any documentation, a swagger becomes useless as soon as it is no longer maintained. What would it take to use it effectively outside of a Design First approach? You need to answer a few questions: Where will it be stored? Who will be responsible for it? What do you need to know and be able to do to update it? What do you need to do to roll back to a previous version? What happens if we change the code but not the contract?
Some of these questions have to be answered in the other cases too, but Design First solves the last one for you. You first document the strict language of the APIs inside the system and only then implement the logic. Documentation is always one step ahead of development. Without this, any factor that disrupts the development process can destroy documentation that has been maintained for a long time, and it will take a lot of effort to rebuild it. Problems with deadlines, unstructured processes within the team, the departure of part of the team, the human factor, lack of discipline — anything, and the swagger is dead.

This is exactly the situation I have encountered on more than one occasion.

For example, we had separate repositories with their own versioning for clients.

At the same time we had services with their own versioning and documentation, and the documentation on the project lived a life of its own. Restoring the swagger was pointless (very expensive), and it was impossible to justify to the business the need to dig into it. So we decided to fix the contract in Jira: by the task number from the commit you can always go and see what the person was doing and what interface they left behind.

Cheap and practical, it came in handy more than once.

Especially after half a year, when you are asked to take some ancient business logic into account.

There was another problem.
Releasing any change in the contract was long and very tedious: make the changes to the server, push them, go to the repository, test the stable version, roll out a tag, make the same changes to the client, push them, go to the other repository, roll out another tag. Every time you want to add veryImportantBytes to the system contract…

Another annoyance — the transport layer models are duplicated in the server and the client.

The way out was not so complicated and solved several problems at once.

As in the monorepo case — place the client next to the server.

Now I’ll tell you how to do it neatly in go:

For example, you have a very simple HTTP server with a single handler.
(github link)

package main

import (
	"encoding/json"
	"io"
	"net/http"

	"github.com/AgeroFlynn/server-client/pkg/server"
)

// PostHandler accepts a JSON RequestDTO, runs the business logic
// and answers with a JSON ResponseDTO.
func PostHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "Invalid request method", http.StatusMethodNotAllowed)
		return
	}

	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "Read body error", http.StatusBadRequest)
		return
	}

	var dto server.RequestDTO
	if err = json.Unmarshal(body, &dto); err != nil {
		http.Error(w, "Unmarshal error", http.StatusBadRequest)
		return
	}

	// ---
	// do some business logic
	// ---

	result := server.ResponseDTO{Result: "some result"}

	data, err := json.Marshal(&result)
	if err != nil {
		http.Error(w, "Response marshal error", http.StatusInternalServerError)
		return
	}

	if _, err = w.Write(data); err != nil {
		http.Error(w, "Failed to write response", http.StatusInternalServerError)
	}
}

func main() {
	http.HandleFunc("/api/v1/method", PostHandler)
	if err := http.ListenAndServe(":8080", nil); err != nil {
		panic(err)
	}
}

This example does not pay close attention to naming or technical implementation details; the point is only the multimodule repository use case and placing the client next to the server.

Here is the project structure:
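
Roughly (the exact file names are illustrative; what matters is the two separate go.mod files):

server-client/
├── go.mod              // module github.com/AgeroFlynn/server-client
├── main.go             // the HTTP server from the listing above
└── pkg/
    └── server/
        ├── go.mod      // module github.com/AgeroFlynn/server-client/pkg/server
        ├── client.go   // the client below
        └── dto.go      // RequestDTO / ResponseDTO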

And client code:

package server

import (
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
)

var (
	errEmptyRequest = errors.New("empty request")
)

type client struct {
	BaseURL string
	*http.Client
}

// NewClient returns a client for the server API located at baseUrl.
func NewClient(baseUrl string) *client {
	return &client{
		BaseURL: baseUrl,
		Client:  &http.Client{},
	}
}

// V1MethodPost sends dto to POST /api/v1/method and decodes the response.
func (c *client) V1MethodPost(dto *RequestDTO) (*ResponseDTO, error) {
	if dto == nil {
		return nil, errEmptyRequest
	}

	payloadBuf := new(bytes.Buffer)
	if err := json.NewEncoder(payloadBuf).Encode(dto); err != nil {
		return nil, fmt.Errorf("failed encode data V1Method from API Server: %v", err)
	}

	res, err := c.Post(c.BaseURL+"/api/v1/method", "application/json", payloadBuf)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()

	data, err := io.ReadAll(res.Body)
	if err != nil {
		return nil, fmt.Errorf("failed ReadAll body response V1Method from API Server: %v", err)
	}

	if res.StatusCode != http.StatusOK {
		if res.StatusCode == http.StatusNoContent {
			return nil, nil
		}

		return nil, fmt.Errorf("not success status V1Method from API Server. Actual status: %d\nBody response: %s", res.StatusCode, data)
	}

	result := &ResponseDTO{}
	if err = json.Unmarshal(data, result); err != nil {
		return nil, fmt.Errorf("failed decode response V1Method from API Server: %v", err)
	}

	return result, nil
}

Server — an abstract name for the microservice we are writing a client for. It could be, for example, an order service; then at the place of use the call becomes order.NewClient(baseUrl).

  1. Run go mod init inside the client directory (pkg/server), so the client becomes its own module.
  2. Then create a tag named like pkg/server/v0.0.0 (tags for a module in a subdirectory must be prefixed with its path).
  3. Then, in the module that consumes the client, run go mod edit -replace github.com/AgeroFlynn/server-client/pkg/server=../pkg/server (the relative path depends on where that go.mod lives); a sketch of the resulting go.mod follows the list.
  4. Official documentation for Golang multimodule repos
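
After these steps the consuming module’s go.mod looks roughly like this (a sketch assuming the root module of the repository is the consumer, so the replace points to ./pkg/server; adjust the relative path and Go version to your layout):

module github.com/AgeroFlynn/server-client

go 1.21

require github.com/AgeroFlynn/server-client/pkg/server v0.0.0

replace github.com/AgeroFlynn/server-client/pkg/server => ./pkg/server

The replace directive is only needed while developing locally; once the pkg/server/v0.0.0 tag is pushed, other repositories can simply require the module at v0.0.0.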

Profit)

Now you have the server and the client in a single repo.
The HTTP transport logic is encapsulated in the client.
You can write unit tests for the client -> transport -> server layer using httptest.
And the main advantage — a single place for the DTO structs. The server and the consuming microservices use the same dependency with its own versioning.
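
A minimal sketch of such a test: it lives next to main.go in package main, so it can drive the real PostHandler through the real client over an httptest server (the expected "some result" value comes from the handler above):

package main

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/AgeroFlynn/server-client/pkg/server"
)

func TestPostHandlerThroughClient(t *testing.T) {
	// Serve the real handler on an ephemeral test server.
	ts := httptest.NewServer(http.HandlerFunc(PostHandler))
	defer ts.Close()

	// Point the real client at it: the whole transport layer is exercised.
	c := server.NewClient(ts.URL)

	got, err := c.V1MethodPost(&server.RequestDTO{})
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if got == nil || got.Result != "some result" {
		t.Fatalf("unexpected response: %+v", got)
	}
}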

You can import the client like this:
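
For example, from another microservice, after go get github.com/AgeroFlynn/server-client/pkg/server@v0.0.0 (the pkg/server prefix of the tag is stripped from the version), a hypothetical caller looks like this:

package main

import (
	"fmt"

	"github.com/AgeroFlynn/server-client/pkg/server"
)

func main() {
	// The base URL is illustrative; point it at the deployed service.
	c := server.NewClient("http://server:8080")

	resp, err := c.V1MethodPost(&server.RequestDTO{})
	if err != nil {
		panic(err)
	}

	if resp != nil {
		fmt.Println(resp.Result)
	}
}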

Disadvantages:

  1. go test ./… from the root no longer covers the submodules. You have to list them manually in your CI.
  2. Your team has to agree on and remember that the client needs its own scheme and order of naming tags. Of course, you can automate this.

To summarise — there are no perfect approaches.

Especially since legacy can force you to conform to the current state of affairs. Every problem has its own solution, and you have to weigh the pros and cons. It seems to me that the Design First pattern pays off in the long run when building distributed systems. Let’s strive to write good code)