As promised in part 1, we will look at the lib directory and its contents first.
lib
├── config
│   └── config.go
├── http
│   └── http.go
├── logger
│   └── logger.go
├── password
│   ├── password.go
│   └── password_test.go
└── random
    └── random.go
There comes a time in any mid-sized or big project where you'll need code that "doesn't fit anywhere" in your domain. I've seen people create packages called "utils" or "helpers" and dump a lot of unrelated code in there together. It's an antipattern. As a project grows, the utils package keeps accumulating code, and sooner or later you'll need just one function from it but have to import the whole thing — far from optimal. Readability is another issue: a package called utils tells people reading your code nothing at all.
I decided to create a directory called lib and fill it with small packages that each group code belonging together. In my opinion this is a clear approach: it helps readability, and you don't have to litter your domain with unrelated pieces of code.
config.go
package config

import (
	"fmt"
	"time"

	"github.com/spf13/viper"
)

type Config struct {
	Environment         string        `mapstructure:"ENV"`
	LogLevel            string        `mapstructure:"LOG_LEVEL"`
	DBConnString        string        `mapstructure:"DB_CONN_STRING"`
	HTTPServerAddress   string        `mapstructure:"HTTP_SERVER_ADDRESS"`
	PASETOSecret        string        `mapstructure:"PASETO_SYMMETRIC_KEY"`
	AccessTokenDuration time.Duration `mapstructure:"ACCESS_TOKEN_DURATION"`
}

// Load reads configuration from file or environment variables.
func Load(path string) (config Config, err error) {
	viper.AddConfigPath(path)
	viper.SetConfigName("development")
	viper.SetConfigType("env")
	viper.AutomaticEnv()

	if err = viper.ReadInConfig(); err != nil {
		return Config{}, fmt.Errorf("viper couldn't read in the config file: %w", err)
	}

	if err = viper.Unmarshal(&config); err != nil {
		return Config{}, fmt.Errorf("viper could not unmarshal the configuration: %w", err)
	}

	return config, nil
}
As mentioned in part 1, I'm using Viper for configuration. I'm reading in the development.env file, which contains all my configuration as environment variables. Then we need a struct that represents these variables; each struct field is mapped to an environment variable. I really like this approach because, as you'll see, you won't have to create variables for each piece of config, you won't have to use os.Getenv(), and everything is nicely typed. I also fancy it from a readability perspective, because this just looks really clean:
l := logger.New(config.LogLevel)
The thing we have to pay attention to is that when we add more config pieces from the environment, we must not forget to update this struct. I am not a fan of huge structs (and you shouldn't be either), so maybe we could create separate structs in the future for config pieces belonging together, e.g. ServerConfig, DatabaseConfig, etc.
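For reference, a development.env file matching the struct above could look something like this — note that these values are made-up placeholders, not the project's real configuration (Viper's Unmarshal parses duration strings like "15m" into time.Duration via its default decode hook):

```env
ENV=development
LOG_LEVEL=info
DB_CONN_STRING=postgres://user:password@localhost:5432/notes?sslmode=disable
HTTP_SERVER_ADDRESS=0.0.0.0:8080
PASETO_SYMMETRIC_KEY=32-byte-long-secret-change-me!!!
ACCESS_TOKEN_DURATION=15m
```

The PASETO symmetric key must be exactly 32 bytes long, so the placeholder above is padded to that length.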
logger.go
package logger

import (
	"os"

	"github.com/rs/zerolog"
	"github.com/rs/zerolog/pkgerrors"
)

var logger zerolog.Logger

func New(level string) zerolog.Logger {
	zerolog.ErrorStackMarshaler = pkgerrors.MarshalStack

	// Fall back to Info if the configured level cannot be parsed.
	loglevel, err := zerolog.ParseLevel(level)
	if err != nil {
		loglevel = zerolog.InfoLevel
	}
	zerolog.SetGlobalLevel(loglevel)

	logger = zerolog.New(os.Stdout).
		With().
		Timestamp().
		CallerWithSkipFrameCount(2).
		Logger()

	zerolog.DefaultContextLogger = &logger

	return logger
}
Ahh, my good old friend, Zerolog. We don’t need extremely complicated things here, we just want the damn logs!
Here we create a global instance of the logger and return it from a factory function called New. In the first line of the function, we make sure we can extract stack traces from errors by setting the ErrorStackMarshaler. This is important, as we want to trace the root of our errors.
After setting the log level and configuring our logger with the builder pattern, we set DefaultContextLogger. To understand why I'm doing this, we need to think about how logging is used in general. There are a few options available, but I always find myself passing the logger in either as a function parameter or as a struct field.
If we look at our code, we have quite a few HTTP handler functions, and we are really interested in what's happening inside them. Do we want to pass the logger in every time as a function parameter? I'd say no. Instead, with this line I'm telling my code that this instance of the logger should be used by Zerolog's Ctx() function whenever a context has no logger attached, so that I can set the logger context on the request context and use my logger nicely throughout the lifetime of the request. The implementation of this can be found in my httplib package, where I created a helper that's used in every handler.
Before you flip the desk and shout "this tutorial is utter s***, we all know that you should NOT use context in Go to pass around values!", take a moment and think about why that advice exists. Context propagation and cancellation are genuinely dangerous if not handled properly and can easily break your code; there's no debate around this. But what happens to the logger we pass along if the HTTP request gets cancelled? You guessed it: nothing. If the request is cancelled, we don't need to log anything around it anymore, so we simply stop using the logger stored in that context.
Throughout the application, I'm only using the Error and Info levels. In the future, we can add the Debug level as well, when we implement tracing with Jaeger (getting ahead of myself again…).
http.go
package httplib

import (
	"context"
	"encoding/json"
	"net/http"
	"time"

	"github.com/rs/zerolog"
)

type Msg map[string]string

func JSON(w http.ResponseWriter, payload interface{}, code int) {
	response, err := json.Marshal(payload)
	if err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		w.Write([]byte("Internal server error while marshalling the response"))
		return
	}

	w.WriteHeader(code)
	w.Write(response)
}

func SetupHandler(w http.ResponseWriter, ctx context.Context) (*zerolog.Logger, context.Context, context.CancelFunc) {
	w.Header().Set("Content-Type", "application/json")
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	l := zerolog.Ctx(ctx)
	return l, ctx, cancel
}

func SetCookie(w http.ResponseWriter, name string, token string, expiresAt time.Time) {
	http.SetCookie(w, &http.Cookie{
		Name:     name,
		Value:    token,
		Expires:  expiresAt,
		HttpOnly: true,
		Secure:   true,
	})
}
func SetCookie(w http.ResponseWriter, name string, token string, expiresAt time.Time) {
http.SetCookie(w, &http.Cookie{
Name: name,
Value: token,
Expires: expiresAt,
HttpOnly: true,
Secure: true,
})
}
The httplib package contains nothing out of the ordinary: just some helpers to write JSON messages back to the client (Chi does not have a built-in JSON function as Gin does, although you can use the render package if you want) and a SetCookie wrapper function. The SetupHandler function is where I do what I described in the section above. We derive a context with a timeout from the request's context (which is passed in as a function parameter from every request), and we look up the logger stored in that context. Then, because we earlier did
zerolog.DefaultContextLogger = &logger
our global logger instance can be used in every request!
password_test.go and password.go
When users register in our application, they need to provide an email address, a username, and a password. We cannot just store their password in plain text in our Postgres DB; we need to hash it. For this I used the bcrypt library, which is just perfect for the job. We also need to be able to validate their password when they log in to the site.
package password

import (
	"errors"
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

var ErrTooShort = errors.New("the given password is too short")

func Hash(password string) (string, error) {
	if len(password) < 5 {
		return "", ErrTooShort
	}

	hash, err := bcrypt.GenerateFromPassword([]byte(password), 10)
	if err != nil {
		return "", fmt.Errorf("could not hash user password: %w", err)
	}

	return string(hash), nil
}

func Validate(hashedPassword string, password string) error {
	if len(password) < 5 {
		return ErrTooShort
	}

	if err := bcrypt.CompareHashAndPassword([]byte(hashedPassword), []byte(password)); err != nil {
		return fmt.Errorf("error while comparing the hashed and plain-text passwords: %w", err)
	}

	return nil
}
Now the tests:
package password

import (
	"testing"

	"github.com/adykaaa/online-notes/lib/random"
	"github.com/stretchr/testify/require"
)

func TestHashUserPassword(t *testing.T) {
	t.Run("password hashing OK", func(t *testing.T) {
		hpw, err := Hash(random.NewString(10))
		require.NoError(t, err)
		require.NotEmpty(t, hpw)
	})

	t.Run("different hashes for the same password", func(t *testing.T) {
		pw := random.NewString(20)
		hpw1, err := Hash(pw)
		require.NoError(t, err)
		hpw2, err := Hash(pw)
		require.NoError(t, err)
		// bcrypt salts every hash, so hashing the same password
		// twice must yield different outputs.
		require.NotEqual(t, hpw1, hpw2)
	})

	t.Run("different hashes for different passwords", func(t *testing.T) {
		hpw1, err := Hash(random.NewString(20))
		require.NoError(t, err)
		hpw2, err := Hash(random.NewString(20))
		require.NoError(t, err)
		require.NotEqual(t, hpw1, hpw2)
	})

	t.Run("fails if too short", func(t *testing.T) {
		hpw, err := Hash(random.NewString(4))
		require.Error(t, err)
		require.Empty(t, hpw)
	})

	t.Run("fails if empty", func(t *testing.T) {
		hpw, err := Hash("")
		require.Error(t, err)
		require.Empty(t, hpw)
	})

	t.Run("fails if too long", func(t *testing.T) {
		// bcrypt rejects passwords longer than 72 bytes.
		hpw, err := Hash(random.NewString(100))
		require.Error(t, err)
		require.Empty(t, hpw)
	})
}

func TestValidate(t *testing.T) {
	const pw1 = "abc123!"
	const pw2 = "abc321!"

	t.Run("password validation OK", func(t *testing.T) {
		hpw, err := Hash(pw1)
		require.NoError(t, err)
		err = Validate(hpw, pw1)
		require.NoError(t, err)
	})

	t.Run("fails with a different password", func(t *testing.T) {
		hpw, err := Hash(pw1)
		require.NoError(t, err)
		err = Validate(hpw, pw2)
		require.Error(t, err)
	})
}
As I said, the vast majority of my tests are table-driven tests — these are not. I found these easier to implement and to read. When I'm designing tests, I like to start with the OK case where everything works and everyone is happy, and then think about the edge cases where my function can break.
In general, it's really important to test your code by providing input and asserting on the expected output; in no way, shape, or form should your tests rely on implementation details. What does this mean in our case? Well, we test two functions here: hashing and validation. Do we care how our functions actually do the hashing or the validation? No. We only care about feeding in the input and getting the expected output. This is of crucial importance, because if we were to change the hashing mechanism from bcrypt to another library, our tests would still pass — as they should.
Let me take a quick detour and express my short opinion on TDD here. I could lie and tell you that I developed this code using the TDD methodology — I didn't. The reason is, I think TDD only works in the books. I think good software is written at least two or three times. You need to experiment, experience the edge cases, and see where your code can break — so that later, when you rewrite it into an actually acceptable form, you can follow TDD, because you'll know what to test for. Don't get me wrong, I think TDD is of vital importance — it provides a really strong safety net and makes you think about writing good code — but how it's written in the books is no more than an ideal scenario that rarely matches real life. Also, good luck following strict TDD when the requirements of your software change by the hour :).
random.go
Not much to talk about here — just some things to make testing easier.
package random

import (
	"database/sql"
	"math/rand"
	"strings"
	"time"

	db "github.com/adykaaa/online-notes/db/sqlc"
	"github.com/google/uuid"
)

const chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

// NewString returns a random alphanumeric string of length n.
func NewString(n int) string {
	var sb strings.Builder
	k := len(chars)

	for i := 0; i < n; i++ {
		c := chars[rand.Intn(k)]
		sb.WriteByte(c)
	}

	return sb.String()
}

// NewDBNote returns a Note filled with random data for use in tests.
func NewDBNote(id uuid.UUID) *db.Note {
	note := db.Note{
		ID:        id,
		Title:     NewString(15),
		Username:  NewString(10),
		Text:      sql.NullString{String: NewString(60), Valid: true},
		CreatedAt: time.Now(),
		UpdatedAt: time.Now(),
	}
	return &note
}
So, this concludes our lib directory tour. In part 3, we will FINALLY start looking at how main.go is structured (as of the time of writing), and start diving deeper into NoteService and its tests.