Mocking REST Services: How to Write Mocks You Trust

Getting started with mocking REST services is easy, thanks to tools like Mountebank, WireMock, and others. However, if there is any doubt about the correctness of the mocks, the tests lose their value. This post shows how to create mocks you can trust in two steps.


Summary / TL;DR

The first impulse to make sure that mocks are correct might be to record real traffic and replay it as mocks. But that does not scale well as the service grows, and scenarios that are hard to reproduce are also hard to record.

Using the OpenAPI specification to generate code wherever possible solves a lot of problems:

  • In production code to call downstream services (standard libraries provided by the API owner fulfil the same purpose),
  • To generate models for creating mocks (response & request models) and
  • To generate clients for the service under test, if no library exists.

Read further to see why generating code, rather than writing it, improves the quality of your code and tests, so that you can trust your mocks.

Mock Problems

We need to look at the potential problems when mocking REST services before we can fix them. The following parts of a mock could be incorrect:

  1. The path for an endpoint
  2. Request parameter names or body
  3. Response body
  4. Upper/lower case
  5. Headers

At least the first four problems are straightforward to fix with the strategies outlined below.
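
To make these failure points concrete, here is a minimal sketch of a Mountebank imposter, posted to Mountebank's admin API on its default port 2525. The /users endpoint, its query parameter, and the body fields are invented for illustration; notice that every hand-typed string in the stub corresponds to one of the five problems above.

```typescript
// A minimal Mountebank imposter created via the admin API. Every quoted
// string below - path, query parameter name, body fields, header names -
// is a place where a typo silently breaks the mock.
const imposter = {
  port: 4545,
  protocol: "http",
  stubs: [
    {
      predicates: [
        {
          equals: {
            method: "GET",
            path: "/users",            // 1. the endpoint path
            query: { pageSize: "10" }, // 2. request parameter names
          },
        },
      ],
      responses: [
        {
          is: {
            statusCode: 200,
            headers: { "Content-Type": "application/json" },        // 5. headers
            body: JSON.stringify([{ userId: 1, userName: "Ada" }]), // 3./4. body and casing
          },
        },
      ],
    },
  ],
};

await fetch("http://localhost:2525/imposters", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(imposter),
});
```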

Prerequisite

Services we want to mock need to publish OpenAPI / Swagger documentation. This makes it possible to generate clients and parts of the mocks. Generating code instead of writing it yourself makes the difference between

  1. Mocks you can trust and which are easy to write or
  2. Mocks which require a lot of effort to write and may be wrong.

Mocking will be much harder and more time-consuming if everything needs to be handwritten.

Step 1: Use Generated REST Clients in Production Code

Solved Problem: The mocks are incorrect and mask a bug. This can happen if the service under test has a bug and sends incorrect requests; if the mocks share the same mistake (wrong path, parameter name, body, or case), they ‘hide’ the problem.

In most cases, developers will use standard libraries or generate REST clients from the OpenAPI specification to communicate with downstream services. This ensures that the requests to downstream services are syntactically correct. Handwritten clients, in contrast, invite bugs around spelling, names, and more.
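
As a sketch of what this looks like, assuming a client generated with openapi-generator's typescript-axios target from a hypothetical users API (UsersApi, getUsers, and the base path all depend on the actual spec and generator options):

```typescript
// Sketch: calling a downstream service through a generated client.
// UsersApi and getUsers are hypothetical names produced from the spec.
import { Configuration, UsersApi } from "./generated";

const api = new UsersApi(
  new Configuration({ basePath: "https://users.internal" })
);

// The generated method encodes the path and parameter names, so typos
// like "/userz" or "pagesize" can never reach the wire.
const { data: users } = await api.getUsers(/* pageSize */ 10);
```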

Removing one possible source of bugs makes it easier to troubleshoot problems. If the test/mock does not work as expected, we can ‘debug’ the traffic (see this post for an example with Mountebank). There could still be a logical problem in the service under test, but we can trust that any request which is made is syntactically correct.

There may be cases where production code uses handwritten clients, so it is worth checking whether this is the case. Common reasons for not generating clients are that the developers did not know any better, or that client generation was not easy to set up.

Step 2: Generate Models for Mocks

Component tests use models for two reasons:

  1. Verify that the request body is correct and
  2. Send back a response with the correct structure.

In the beginning, it can be much easier and faster to write the request and response models by hand. But the bigger a service grows, the more effort it takes to keep those models up to date. And how do you make sure that they are correct?

It is not always easy to generate code from the OpenAPI specification (see this post), but do not give up: once services mature, it pays off. Maintaining models manually quickly turns into a nightmare: writing models takes time, spelling mistakes creep in, and models end up incomplete. When this happens, trust erodes, and writing tests becomes harder and more frustrating.

A customizable code generator that only generates models would be ideal, but whatever generator you use to create the REST client for the service under test should work. We have done this before and simply ignored all the generated files the tests do not use.
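
A minimal sketch of what this buys you, assuming a generated User model (the interface name and its fields are illustrative):

```typescript
// Sketch: building a mock response from a generated model instead of a
// hand-written object literal. The User interface comes from the same
// OpenAPI spec the real service publishes.
import { User } from "./generated/models";

// Misspelled or missing fields now fail the build instead of silently
// producing a mock the service under test cannot parse.
const mockUser: User = { id: 1, userName: "Ada", email: "ada@example.com" };

const stubResponse = {
  is: {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify([mockUser]),
  },
};
```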

Note: Generated models may use different casing from the real service responses. This can happen if the tests are written in a different language from the mocked API.

  • For request verification, the casing can be ignored when a generated client is used in production code (since the service under test will send correctly cased requests). Mountebank ignores upper/lower case in request matching by default (sketched below).
  • For responses, some mapping logic may not be exercised, but I have not found this to be a problem as long as the service can still handle the ‘real’ responses (i.e. end-to-end tests pass).
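
For reference, the Mountebank predicate parameter behind this default is caseSensitive; a small sketch with an invented endpoint:

```typescript
// Mountebank predicates match case-insensitively unless caseSensitive is
// set, so a generated client sending 'pageSize' still matches a mock
// written with 'pagesize'.
const predicate = {
  equals: { path: "/users", query: { pagesize: "10" } },
  caseSensitive: false, // the default, shown explicitly
};
```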

Final Words

Depending on how headers are set in the production code, they could still contain bugs. However, headers are often cross-cutting and do not change with every request, so they are not as difficult to check and keep up to date as the models for all requests.
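
Because such headers are shared across requests, one predicate can verify them for every stub; here is a sketch with an assumed Authorization header:

```typescript
// Sketch: verifying a cross-cutting header in one place. The header name
// and token value are assumptions for illustration.
const headerStub = {
  predicates: [{ equals: { headers: { Authorization: "Bearer test-token" } } }],
  responses: [{ is: { statusCode: 200 } }],
};
```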

There are more ways to set up mocks incorrectly, but getting the basics right is a good start for writing component tests.

Generating code improves the quality of both production and test code, and saves time in the long run.

For best practices around component / API tests, check my post about Strategies for Successful Component Tests.

Angela Evans

Senior Software Engineer at Diligent in Christchurch, New Zealand