Strategies for Successful Component Tests

Writing good component tests (a.k.a. API tests or integration tests) is hard, and they can become difficult to maintain as the service they test keeps growing and changing. This post explains strategies that help you write a stable and maintainable test suite.

Testable Service

Component tests can be easy or hard to implement, depending on the service under test. In my opinion, a testable service is a prerequisite for being able to write component tests at all:

  1. External dependencies like databases, external caches etc. can be replaced with in-memory versions for component tests (see the first sketch after this list). This has the advantage that the tests are easy to run anywhere, and we do not have problems with ‘leftover’ test data.
    For sceptics: problems with these dependencies are not usually found in component tests anyway. The code accessing them should be either generated or a standard library. To test high-load scenarios or error handling, a separate suite of integration tests will be ‘cheaper’ (easier, faster and more concise) to implement.
  2. ALWAYS use a generated client or standard library to access downstream APIs. Handwritten client code introduces the potential for bugs and needs to be tested itself, so the testing effort is much higher.
  3. Consider tests when implementing timeouts and other options (see the second sketch after this list).
    1. They should always be configurable, even if they have default values.
    2. Timeouts should be measured in seconds at most – how long do we want tests to wait?
  4. The service should be easy to configure and deploy in a CI/CD pipeline.
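
As a minimal sketch of the first point – assuming the service hides its database behind a small interface (all names here are illustrative, not from a real code base) – component tests can swap in an in-memory implementation:

```csharp
using System.Collections.Concurrent;

// Production code depends on this abstraction, not on a concrete database.
public interface IUserStore
{
    void Save(int id, string name);
    string? Find(int id);
}

// Component tests wire in this in-memory version instead of the real database:
// it runs anywhere and leaves no 'leftover' test data behind.
public sealed class InMemoryUserStore : IUserStore
{
    private readonly ConcurrentDictionary<int, string> _users = new();

    public void Save(int id, string name) => _users[id] = name;

    public string? Find(int id) =>
        _users.TryGetValue(id, out var name) ? name : null;
}
```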
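
And for the point about timeouts, a sketch of an options class with production defaults that component tests can shorten (again, a hypothetical class, not from the post):

```csharp
using System;

// Every timeout is configurable but ships with a sensible production default.
public sealed class DownstreamClientOptions
{
    public TimeSpan RequestTimeout { get; set; } = TimeSpan.FromSeconds(10);
    public TimeSpan RetryDelay { get; set; } = TimeSpan.FromSeconds(1);
}

// In a component test nobody wants to wait ten seconds for a timeout path:
// var options = new DownstreamClientOptions
// {
//     RequestTimeout = TimeSpan.FromMilliseconds(200),
//     RetryDelay = TimeSpan.Zero,
// };
```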

A good way to end up with a testable service is to write tests as early as possible in the life cycle: adding some rudimentary component tests as soon as you start implementing a new feature has two major advantages.

  1. Developers get value from the tests early on, and
  2. anything that is hard to test will surface and be fixed. This is much harder to achieve once developers have moved on to the next feature.

Generate Code

Generate as much code as possible in both test and production code. This saves you the work of writing clients or models yourself and ensures that you can trust your mocks: since the code is generated from the API contract, it stays in sync with that contract, which reduces the potential for bugs. Check my post about how to write Mocks you Trust for details.
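
To illustrate the idea, here is a rough sketch. It assumes the client models are generated from the downstream service's OpenAPI spec (the `UserDto` record below stands in for such a generated model) and uses WireMock.Net as an example mock server; the mock serializes the generated model instead of handwriting JSON, so the mock and the client cannot drift apart:

```csharp
using System.Text.Json;
using WireMock.Server;
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;

// Stand-in for a model generated from the downstream OpenAPI spec.
public sealed record UserDto(int Id, string Name);

public static class UserMocks
{
    public static void RegisterUser(WireMockServer server, UserDto user) =>
        server
            .Given(Request.Create().WithPath($"/users/{user.Id}").UsingGet())
            .RespondWith(Response.Create()
                .WithStatusCode(200)
                .WithHeader("Content-Type", "application/json")
                // The body comes from the generated model, not handwritten JSON.
                .WithBody(JsonSerializer.Serialize(user)));
}
```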

Only Test Once

When automating tests, the default instinct is often to worry about not testing enough. However, testing too much – and in particular re-testing the same behaviour – means that more tests are likely to break when the API behaviour changes. The test suite becomes brittle and hard to maintain. It also takes more time to write the tests in the first place, since mocking is ‘expensive’.

How can we make sure we test ‘everything’ without duplicating tests? If in doubt, it can help to discuss the strategy with QA and developers together. Here is a sample of the tests that might be implemented around a standard API endpoint (a sketch of the first two follows the list).

  1. Happy path – the API returns a 2xx response. Here we check everything:
    • The mock verifies the request path and all parameters / the request body
    • Check that the request headers are correct. Headers which are applied to all requests may not need to be checked for every single request.
    • The API returns the correct response, based on the mocked response. There may be multiple scenarios to cover here.
  2. Error scenarios – downstream services and/or the API return an error response: since we already know that the API builds the request correctly, we do not need to re-check it. Mocks can return an error without matching a request body or other parameters.
  3. Additional behaviours like caching or resilience, e.g. retries with Polly. Again, we do not need to check that the request is correct. We may not even need to check the full response, since all the ‘remaining’ logic is the same; it might even be enough to check the response code.
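
As a rough sketch of points 1 and 2 – using xUnit and WireMock.Net as example tooling, with a made-up endpoint – note how strictly the happy-path mock matches compared to the error-scenario mock:

```csharp
using WireMock.Server;
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;
using Xunit;

public class UserEndpointTests
{
    [Fact]
    public void HappyPath_VerifiesRequestAndResponse()
    {
        using var downstream = WireMockServer.Start();

        // Strict matching: path, query parameter and header are all verified,
        // so a malformed request from the service simply finds no match.
        downstream
            .Given(Request.Create()
                .WithPath("/users/42")
                .WithParam("expand", "profile")
                .WithHeader("Accept", "application/json")
                .UsingGet())
            .RespondWith(Response.Create()
                .WithStatusCode(200)
                .WithHeader("Content-Type", "application/json")
                .WithBody("{\"id\":42,\"name\":\"Ada\"}"));

        // ... start the service under test pointing at downstream.Url,
        // call the endpoint and assert on the full mapped response ...
    }

    [Fact]
    public void DownstreamError_IsHandled()
    {
        using var downstream = WireMockServer.Start();

        // Request correctness is already covered by the happy path, so this
        // mock matches loosely and only simulates the failure.
        downstream
            .Given(Request.Create().WithPath("/users/42").UsingGet())
            .RespondWith(Response.Create().WithStatusCode(500));

        // ... call the service and assert only on the error handling,
        // e.g. the response code it maps the failure to ...
    }
}
```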

This is a big topic, and I will try to add a separate post about analysing an API for test coverage if I find the time.

Less is More: Keep Mocks Simple

When starting to write component tests, it is very tempting to simply record the traffic between services and use that to mock a downstream service. This way we usually end up with one very big mock that mocks the whole service.

In my experience that works very well at the beginning and makes mocking very easy, but it does not scale. With this approach you have to live with the following limitations:

  1. It will be difficult to mock scenarios you cannot easily record via end-to-end tests, such as intermittent errors or scenarios which need manual setup.
  2. You will likely start to ‘edit’ the big mock on the fly to test additional scenarios. This gets ugly very quickly.
  3. All tests are coupled together. Changing the big mock becomes really hard, because the effect on all tests needs to be considered. This leads to more variations of responses being added to the big mock, and it becomes even harder to tell what is used where.
  4. To understand and analyse problems in a test, you need to find which stub in the big mock file was triggered. The bigger the file, the harder that gets.
  5. It is impossible to test scenarios which are not yet implemented in the downstream service, or to test backwards compatibility.

The solution to this problem is not as daunting as it may sound: write mocks yourself, and do not re-use them!
These two techniques will help to make mocking easy and to make tests easier to read and maintain:

  1. Generate code as much as possible (see above)
  2. Use the builder pattern to build mocks (sketched below)
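
Here is a sketch of such a builder – hypothetical names, again assuming WireMock.Net as the mock server. Sensible defaults keep tests short, and each test composes exactly the mock it needs instead of sharing one big recorded file:

```csharp
using WireMock.Server;
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;

// Hypothetical builder for a downstream /users endpoint. Defaults cover
// everything a test does not care about; overrides stay visible in the test.
public sealed class UserMockBuilder
{
    private int _id = 1;
    private string _name = "default-user";
    private int _statusCode = 200;

    public UserMockBuilder WithId(int id) { _id = id; return this; }
    public UserMockBuilder WithName(string name) { _name = name; return this; }
    public UserMockBuilder WithStatusCode(int code) { _statusCode = code; return this; }

    public void RegisterOn(WireMockServer server) =>
        server
            .Given(Request.Create().WithPath($"/users/{_id}").UsingGet())
            .RespondWith(Response.Create()
                .WithStatusCode(_statusCode)
                .WithHeader("Content-Type", "application/json")
                .WithBodyAsJson(new { id = _id, name = _name }));
}

// Usage in a test – readable and independent of every other test:
// new UserMockBuilder().WithId(42).WithName("Ada").RegisterOn(downstream);
```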

The Soft Stuff

Writing good test automation is really hard, and it is a team effort. The learning curve for component tests is also steeper than for ‘simple’ UI or end-to-end tests. Here are some more thoughts on the conditions required for successful component tests:

  1. Everything around the tests needs to be easy: running, mocking, adding new tests and maintenance.
  2. The tests need to add value for everybody, otherwise it will be impossible to get the required help from developers.
  3. QA: Trust that developers are not sloppy morons trying to hide bugs! Each bug which needs to be fixed after we have started a new piece of work is a serious disruption to our development flow. To write a reliable test suite, which is resilient to change and covers all important features, the team needs to work together.
  4. Time: Under high feature pressure, component tests will likely deteriorate, since developers rushing to pump out features will neglect helping with them. I have heard neglected test automation quoted in post-mortems as a partial reason for bugs making it into production.
  5. The tests should run automatically as often and as early as possible, ideally in a CI/CD pipeline before changes are merged to the main branch. A previous development lead of mine always said ‘Tests are not automated until they run automated’, and I agree. There is some overhead in setting up the pipelines, but in our case it was well worth it.

I hope this helps. I am really curious to try these strategies on some new services when I start my new position in a month's time 🙂

Angela Evans

Senior Software Engineer at Diligent in Christchurch, New Zealand