Last year I was asked to talk at a conference about my experience with component tests for microservices in distributed systems. The talk still has not happened, since it keeps being postponed due to Covid. This has given me more time to prepare my slides, and to realize what helped us to succeed. I originally planned this post as a companion article to my talk, but it is also a good starting point for anybody who wants to know more about component tests.
Note: This and all linked articles are based on my personal experience working with ASP.Net Core services. Every team/service/product is different and all I am writing is my subjective opinion. Pick and use what works for you, and keep in mind that everything has its trade-offs.
Why Do We Need Component Tests?
Component tests are not easy. They are somewhere between unit tests (developer owned) and end-to-end tests (QA owned). Some people are very skeptical and do not trust them:
Are they any better than unit tests? You cannot trust mocks, they will be wrong! Too hard to maintain!
My own thoughts and chats within our team, before getting into it
Yes, they are very different from unit tests, and they prove that a service is working as expected. Unlike unit tests, which sometimes only prove that a class with a quarter of a responsibility calls three others to do the simplest operation, component tests exercise the whole service.
Martin Fowler has written a very good article about testing microservices which contains a section about component tests. Reading this article opened my eyes to the world of testing in distributed systems! Component tests are just one type of test, but an often neglected one.
My Thoughts
Systems (at least successful ones) grow over time and need good test automation. Even if they start as a single service, they often turn into distributed systems. Most services consist mainly of library/pipeline/generated code with a bit of business logic, glued together by dependency injection. No other test type really verifies that a particular service works correctly.
- End-to-end tests cannot cover all test cases, especially error scenarios. At best, the data/environment setup becomes very complex and slow. In our case it took hours to manually set up the test data, and the tests were not reliable. We also could not reproduce a lot of scenarios in a downstream service.
- Unit tests do not prove that a service actually works. They test little bits of the service, but not that it is ‘plumbed together’ correctly. In addition, unit tests often break when the internal architecture changes (i.e. most refactoring).
- Component tests hit a ‘sweet spot’: They allow testing a whole service in isolation, without setting up and configuring an environment. They are generally fast and easy to run everywhere, can cover important flows (both happy path and edge cases), and allow refactoring inside a service without breaking all tests. As long as input and output do not change, they pass.
However, they cannot cover all the details that unit tests can, nor prove that multiple services really work together as expected.
Below is a picture of a typical ASP.NET Core service. As you can see, only a small portion of the code is written and unit tested by the team. How do we test that everything is plumbed together correctly?
Note: Ideally you want to write only as many component tests as necessary. Cover as much functionality with unit or other lower level tests as possible!
Getting Started with Component Tests for a Microservice
Before starting to write any component tests, you need to make some decisions.
First, you need to decide between in-process or external component tests. Read this post for a comparison. I used to be very skeptical about in-process tests, in particular about mocking downstream services. However, this changed when I discovered that with WireMock.Net the mocking is as realistic as with external component tests. Check my post WireMock.Net for better Integration Tests for details.
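To give a flavour of what that looks like, here is a minimal sketch of a WireMock.Net stub as it could appear inside an in-process test. The endpoint path and payload are invented for illustration and are not taken from any of the linked posts:

```csharp
// Minimal sketch: start a real HTTP server in the test process and stub a
// downstream endpoint. The path and payload are hypothetical examples.
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;
using WireMock.Server;

var downstream = WireMockServer.Start();

downstream
    .Given(Request.Create().WithPath("/api/customers/42").UsingGet())
    .RespondWith(Response.Create()
        .WithStatusCode(200)
        .WithBodyAsJson(new { id = 42, name = "Jane Doe" }));

// The service under test is then configured to call downstream.Urls[0] instead
// of the real dependency, so its own HTTP client code runs completely unchanged.
```

Because the stub listens on a real socket, the service's serialization, retries and timeouts are exercised just as they would be against an external mock server.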
Next, you need to analyze the external service dependencies and decide what to mock / replace with in-memory implementations. We did the following:
- Replace the data store with an in-memory implementation (cache, database, …). Chances are that an in-memory implementation already exists. Otherwise you can usually write a simple one yourself (see the sketch after this list). In my experience this does not invalidate the tests. Data access code is usually a standard library or generated. Problems usually occur under high load and are more likely to surface in other types of tests.
- Use mountebank or any other mock service to replace downstream services. For in-process tests WireMock.Net is the tool of choice.
- Ignore metrics, APM, logs for external component tests. We later added some in-process tests for metrics, where we wrote an in-memory metrics recorder.
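As a rough sketch of how these decisions can be wired together in an in-process test host: the `Startup`, `IOrderRepository`, `InMemoryOrderRepository` and the `Downstream:BaseUrl` setting below are hypothetical placeholders standing in for your own service's types and configuration, not something taken from the linked posts.

```csharp
// Hypothetical test host: swaps the real data store for an in-memory
// implementation and points the downstream base URL at a WireMock.Net server.
using System.Collections.Generic;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using WireMock.Server;

public class ComponentTestFactory : WebApplicationFactory<Startup>
{
    public WireMockServer Downstream { get; } = WireMockServer.Start();

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        // Point the service's HTTP clients at the in-process WireMock server.
        builder.ConfigureAppConfiguration((_, config) =>
            config.AddInMemoryCollection(new Dictionary<string, string>
            {
                ["Downstream:BaseUrl"] = Downstream.Urls[0]
            }));

        // Replace the real repository registration with an in-memory double.
        builder.ConfigureTestServices(services =>
            services.AddSingleton<IOrderRepository, InMemoryOrderRepository>());
    }
}
```

Because the factory owns the WireMock server, each test can set up exactly the downstream behaviour it needs, and nothing outside the test process has to be running.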
Check the following posts for getting started:
- Best Practices for ASP.NET Core Integration Tests has some additional tips on how to set up the in-process Microsoft Integration tests.
- Easy API Tests with Mountebank shows how to get started with external tests with Mocha/Typescript using Mountebank to mock downstream services.
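Putting the pieces together, a component test then drives the whole service over HTTP and asserts only on observable behaviour. Again a hedged sketch using the hypothetical factory from above; the endpoints and expected response are made up:

```csharp
// Hypothetical component test: stub the downstream call, hit the service's
// public API over HTTP, and assert only on the response it returns.
using System.Net;
using System.Threading.Tasks;
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;
using Xunit;

public class OrderEndpointTests : IClassFixture<ComponentTestFactory>
{
    private readonly ComponentTestFactory _factory;

    public OrderEndpointTests(ComponentTestFactory factory) => _factory = factory;

    [Fact]
    public async Task Get_order_returns_ok_when_downstream_knows_the_customer()
    {
        // Arrange: stub the downstream dependency for this scenario.
        _factory.Downstream
            .Given(Request.Create().WithPath("/api/customers/42").UsingGet())
            .RespondWith(Response.Create()
                .WithStatusCode(200)
                .WithBodyAsJson(new { id = 42, name = "Jane Doe" }));

        var client = _factory.CreateClient();

        // Act: call the service's public API like any external client would.
        var response = await client.GetAsync("/orders/42");

        // Assert: only on the externally observable result.
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```

Because the test only talks HTTP to the service and its stubbed dependencies, it keeps passing through internal refactorings as long as the request and response contracts stay the same.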
Maximising ROI
Writing and maintaining tests is expensive, and it is often difficult to quantify the gain. Component tests also need developer input (or technically adept test automation engineers). So how do we make them pay off? What can we do to minimize the effort to write and maintain a component test suite? How do we get the maximum return in time, quality and productivity?
Here is what helps in my experience:
- Component tests are useless if you do not trust the mocks! Check this post to see how to write mocks you trust, with less effort.
- Build a simple and scalable test suite by following some simple strategies for successful component tests.
- Make sure the tests are easy to run, without much setup:
  - On development machines, so devs can get fast feedback during implementation or refactoring.
  - In your CI pipelines.
- Run the tests automatically! Before each merge into the main branch and regularly on the main branch. My favourite development manager once said: ‘Tests are not automated if they need to be started manually’. This will stop bugs which are covered by test automation from sneaking onto the main branch.
- Write features and tests in parallel. One option is that developers implement a minimal test when they add the feature. This is usually really quick and saves repeated manual testing. It also prevents potential testability issues, which would be difficult to fix later. Writing tests and features in parallel can also change the team dynamics: in our team we communicated much more, and everybody felt more involved.
Final Words
When we started, it took us a long time to arrive at good strategies for mocking and writing the tests, and we continually re-evaluated them. Nobody does everything right from the start – neither in test nor in production code. We treated the tests in a similar way as production code: Following design principles, fixing technical debt, and continually improving them. As a result, they helped us to deliver high-quality code faster. With less stress and more fun.
But regardless of how good the test suite is, tests always cost time up front and will be the first part of the software to be neglected under time pressure.
All my experience around test automation stems from my time at Diligent. I am about to start a new position soon and am very curious to see how they test. Are there any component/integration tests? I am looking forward to learning how a different system works, and how my current experience applies to it.