
Test Smarter, Not Harder: Harnessing Table Tests in Go

Benjamin Cane · Published in Level Up Coding · Jul 31

I’m a big fan of testing; it is a critical part of software engineering and how reliable software gets built. I am also a big fan of making tests easier to manage and maintain. The easier a set of tests is to maintain, the more it gets maintained, and the more effective it becomes.

In today’s article, I will discuss a technique popular within the Go community for simplifying repetitive tests: Table Tests. I will show how to use Table Tests to reduce the number of test functions while simultaneously increasing code coverage.

What are Table Tests?

Table Tests are a great way to test functions with varying inputs and results. The idea is that, rather than writing a unique test function for each combination of input and output, a single test function iterates over a table of inputs and validates the expected results.

Basic Greeter

To best explain the concept of Table Tests, we will first create a simple set of functions that output various greetings in different languages.

We will then use Table Tests to iterate through the multiple languages and expected greetings.

// Config is used to configure a Greeter based on desired language parameters.
type Config struct {
    // Language is the language to use while greeting.
    Language string
}

// Greeter produces greetings in the language specified by the Config.
type Greeter struct {
    language string
}

// New returns a new Greeter based on the provided Config.
func New(cfg Config) (*Greeter, error) {
    switch cfg.Language {
    case "en", "fr", "es", "de", "jp", "cn":
        return &Greeter{language: cfg.Language}, nil
    default:
        return nil, fmt.Errorf("unsupported language: %s", cfg.Language)
    }
}

// Greeting returns a greeting in the language specified by the Greeter.
func (g *Greeter) Greeting() string {
    switch g.language {
    case "en":
        return "Hello"
    case "fr":
        return "Bonjour"
    case "es":
        return "Hola"
    case "de":
        return "Hallo"
    case "jp":
        return "こんにちは"
    case "cn":
        return "你好"
    }
    return ""
}

To understand the example code, we must first look at the New() function. This function takes a Config struct that allows users to specify a desired language and returns a Greeter struct (or an error if the configuration is invalid). The Greeter struct has a Greeting() method that returns a string value.
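
To see how these pieces fit together outside of a test, here is a minimal usage sketch. It assumes the Greeter code above lives in the same package; the main function and panic-based error handling are only for illustration.

package main

import "fmt"

func main() {
    // Build a French greeter; New returns an error for unsupported languages.
    g, err := New(Config{Language: "fr"})
    if err != nil {
        panic(err)
    }

    // Prints "Bonjour"
    fmt.Println(g.Greeting())
}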

To write a test for this code, we must identify the critical decision points of the code.

While executing, the New() function will validate the user-provided Config struct by ensuring the language is supported. The function will return an error if the language is not supported. To properly test this function, we must call it with both supported and unsupported languages to ensure it produces the expected results.

Another critical decision point is within the Greeting() method, which maps the configured language to its pre-set greeting when executed. Ensuring the correct mapping of language to greeting will be very important.

The need to pass multiple languages and validate multiple outcomes makes the example code an excellent prospect for Table Tests. But before we get to Table Tests, let’s first write a simple unit test.

func TestExampleEn(t *testing.T) {
    g, err := New(Config{Language: "en"})
    if err != nil {
        // Stop here; calling Greeting() on a nil Greeter would panic
        t.Fatalf("Error creating new greeter: %s", err)
    }

    if g.Greeting() != "Hello" {
        t.Errorf("Expected greeting to be 'Hello', got '%s'", g.Greeting())
    }
}

The unit test above is straightforward. We created a new Greeter struct for the English language, ensured there was no error, and validated that the greeting returned from Greeting() was correct.

But this is just one language; how would we test all the supported languages? We could write a new test function for each language, but that would be a lot of repetitive code. Instead, let’s use a Table Test.

Introducing Table Tests

Rather than writing a new test function for each language and greeting, an alternative is to create a single test function that iterates over a map of languages and greetings.

func TestExampleTable(t *testing.T) {
    tt := map[string]string{
        "en": "Hello",
        "es": "Hola",
        "fr": "Bonjour",
        "de": "Hallo",
        "jp": "こんにちは",
        "cn": "你好",
    }

    for k, v := range tt {
        g, err := New(Config{Language: k})
        if err != nil {
            // Move on to the next language; g is nil when New returns an error
            t.Errorf("Error creating new greeter: %s", err)
            continue
        }

        if g.Greeting() != v {
            t.Errorf("Expected greeting to be '%s', got '%s'", v, g.Greeting())
        }
    }
}

The above test function demonstrates the basic concept of Table Tests. Within this test, we have a map of languages and greetings, which we iterate over.

With each iteration, we create a new Greeter for the language and validate that the returned greeting is correct.

When adding a new language and greeting, we simply add a new test case to the map. This approach is much easier to maintain than writing a new test function for each language and greeting.

But, there are some gaps.

For example, it isn’t easy to differentiate the results of each test case in the map. If one language were to fail, the test output would not clearly identify which one. There is also no easy way to validate negative test cases: how do we test that an unsupported language returns an error?

Advanced Table Tests

To test more complex behavior, such as negative scenarios, we can replace the map of languages and greetings with a TestCase struct that contains the language, the expected greeting, and a field that indicates whether the test case should pass or fail.

type TestCase struct {
    lang     string
    greeting string
    pass     bool
}

In our test function, we can create a slice of TestCase structs and iterate over the slice.

func TestExampleTable2(t *testing.T) {
    tt := []TestCase{
        {"en", "Hello", true},
        {"es", "Hola", true},
        {"fr", "Bonjour", true},
        {"de", "Hallo", true},
        {"jp", "こんにちは", true},
        {"cn", "你好", true},
        {"xx", "", false},
    }

    for _, tc := range tt {
        t.Run("Validate Language - "+tc.lang, func(t *testing.T) {
            // Create a new greeter
            g, err := New(Config{Language: tc.lang})
            if err != nil && tc.pass {
                // Stop this sub-test; g is nil when New returns an error
                t.Fatalf("Error creating new greeter: %s", err)
            }

            // Escape if we expect an error
            if !tc.pass {
                if err == nil {
                    t.Errorf("Expected error, got nil")
                }
                return
            }

            // Validate the greeting
            if g.Greeting() != tc.greeting {
                t.Errorf("Expected greeting to be '%s', got '%s'", tc.greeting, g.Greeting())
            }
        })
    }
}

We can drive intricate test behavior using a custom TestCase struct, such as validating whether a test case should pass or fail. Our example contains valid and invalid languages, and based on whether pass is true or false, our test code expects a success or failure.

This same approach enables complex testing scenarios, such as validating expected error messages, passing in custom function arguments, and more. This methodology can open the door to very elaborate testing logic.
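
For instance, here is a minimal sketch of a table test that checks the exact error message New returns for unsupported languages. The ErrorTestCase struct, the wantErr field, and the TestExampleErrors name are illustrative choices, not part of the example code above.

type ErrorTestCase struct {
    lang    string
    wantErr string
}

func TestExampleErrors(t *testing.T) {
    tt := []ErrorTestCase{
        {"xx", "unsupported language: xx"},
        {"", "unsupported language: "},
    }

    for _, tc := range tt {
        t.Run("Validate Error - "+tc.lang, func(t *testing.T) {
            // We expect New to fail for every case in this table
            _, err := New(Config{Language: tc.lang})
            if err == nil {
                t.Fatalf("Expected error, got nil")
            }

            // Compare the error text with the expected message
            if err.Error() != tc.wantErr {
                t.Errorf("Expected error '%s', got '%s'", tc.wantErr, err.Error())
            }
        })
    }
}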

In the example, we also introduced the t.Run() function, which creates a sub-test within a test function. Sub-tests give users a way to programmatically group tests into smaller segments. By using sub-tests within our for-loop, we can quickly identify which test cases failed and why.

Sub-Tests

Sub-tests are a great way to provide additional context during test execution; what makes them powerful is that they can be nested, allowing for fine-grained test grouping and execution.

func TestExampleTable3(t *testing.T) {
    tt := []TestCase{
        {"en", "Hello", true},
        {"es", "Hola", true},
        {"fr", "Bonjour", true},
        {"de", "Hallo", true},
        {"jp", "こんにちは", true},
        {"cn", "你好", true},
        {"xx", "", false},
    }

    for _, tc := range tt {
        t.Run("Validate Language - "+tc.lang, func(t *testing.T) {
            // Create a new greeter
            g, err := New(Config{Language: tc.lang})
            if err != nil && tc.pass {
                // Stop this sub-test; g is nil when New returns an error
                t.Fatalf("Error creating new greeter: %s", err)
            }

            // Escape if we expect an error
            if !tc.pass {
                if err == nil {
                    t.Errorf("Expected error, got nil")
                }
                return
            }

            t.Run("Validate Greeting", func(t *testing.T) {
                // Validate the greeting
                if g.Greeting() != tc.greeting {
                    t.Errorf("Expected greeting to be '%s', got '%s'", tc.greeting, g.Greeting())
                }
            })
        })
    }
}

Our example test function breaks the greeting validation out into its own sub-test, so if that validation fails, we can quickly identify which part of the test failed. This breakout may be overkill for our example, but more complex scenarios with multiple validation points can benefit from using sub-tests.
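
Another benefit of named sub-tests is that go test can target them directly. For example, go test -run 'TestExampleTable3/Validate_Language_-_en' executes only the English test case; note that spaces in sub-test names are replaced with underscores in the names go test matches against.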

Table Tests with Benchmarks

Table Tests are also a great way to simplify benchmark tests. Just as with our unit tests, we should measure the performance and allocations of the Greeting() method for each language. To do this, we can create a Table Test that works similarly to our previous examples.

type BenchmarkTestCase struct {
    language string
    greeting string
}

func BenchmarkExample(b *testing.B) {
    tt := []BenchmarkTestCase{
        {"en", "Hello"},
        {"fr", "Bonjour"},
        {"es", "Hola"},
        {"de", "Hallo"},
        {"jp", "こんにちは"},
        {"cn", "你好"},
    }

    for _, tc := range tt {
        b.Run("Benchmark Language - "+tc.language, func(b *testing.B) {
            g, err := New(Config{Language: tc.language})
            if err != nil {
                b.Fatalf("Error creating greeter: %v", err)
            }

            for i := 0; i < b.N; i++ {
                _ = g.Greeting()
            }
        })
    }
}

The example above works just like our previous examples; we define a BenchmarkTestCase for each scenario and execute it using a for-loop and a sub-benchmark.

With just a few extra lines of code, we can avoid writing a unique benchmark function for every possible language and greeting combination.
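
To see allocation numbers alongside the timings, run the benchmarks with go test -bench=. -benchmem, or call b.ReportAllocs() inside each sub-benchmark; both report allocations per operation for every language.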

Conclusion

Table Tests are a great way to write both unit tests and benchmarks. While they can be a bit complex to understand initially, they are a great way to keep your testing code clean.

Using Table Tests helps me identify new testing scenarios quickly and makes me think through my testing more thoroughly, which, in turn, leads to higher code coverage and better-quality software.