Best Practices for ASP.NET Core Integration Tests

ASP.NET Core integration tests are a very easy way to get started with test automation for ASP.NET Core APIs. In fact, they are not much harder to write than unit tests, and just as easy to add to CI pipelines. No deployment is required!

There are good posts describing how to write the tests, how to mock dependencies, and more. However, some basic practices which help you work more efficiently are rarely mentioned. I will try to cover some of these in this post.


The sample code for this blog is available at https://github.com/AngelaE/blog-integration-test.

Prerequisite: Create the First Test

This post assumes you have already started to write some ASP.NET Core Integration tests. If this is not the case, the official Microsoft Documentation is a very good starting point. For example, a simple test could look like the one below:

    public class PingTest : IClassFixture<WebApplicationFactory<Startup>>
    {
        private readonly WebApplicationFactory<Startup> _factory;

        public PingTest(WebApplicationFactory<Startup> factory)
        {
            _factory = factory;
        }

        [Fact]
        public async Task Ping_returns_success()
        {
            // Arrange - this test proves that the service is up and running
            var client = _factory.CreateClient();

            // Act
            var response = await client.GetAsync("/api/service/ping");

            // Assert
            response.EnsureSuccessStatusCode(); 
        }
    }

Use a Generated Client to Call the API

In all tutorials I have seen, the HTTP client is used directly to access the API. That means URLs are specified explicitly, for example '/api/service/ping'. In addition, request and response models are hand-coded. This works well for a few requests, but becomes tedious as the service grows.

Why not use a generated client? Every API should have an OpenAPI specification these days, so using it should be the standard, for the following reasons:

  1. Dogfood the API swagger.json to make sure it really works. Surprisingly, the swagger.json generated in ASP.NET Core by default cannot be used to generate autorest clients! Check my post on how to fix common problems when generating autorest clients for details.
  2. Write less code: Autorest generates the classes to call the service and all models which are used. Now you can concentrate on writing the tests without thinking about the models! A test using the generated client then looks like the one below:
[Fact]
public async Task Get_Books_returns_empty_list()
{
	// Arrange - create the generated client on top of the test server's HTTP client
	var client = _factory.CreateClient();
	var bookApiClient = new BookApiClient(client, false);

	// Act
	var books = await bookApiClient.Books.GetAllAsync();

	// Assert
	books.Should().NotBeNull();
	books.Count.Should().Be(0);
}

Autorest generates all request and response models. Writing tests just got a lot easier!

With a generated client, endpoints are called via methods and responses are strongly typed.

Handle Error Responses

After implementing some happy path tests, we will likely want to test the edge cases.

Autorest clients throw exceptions for all responses which are not defined in the OpenAPI specification. For this reason, testing errors is easiest if the swagger.json only contains definitions for the success responses. Every error response then causes an HttpOperationException and can be handled as below:

// Act - define a function which will cause an error response
Func<Task> requestBook = async () => await bookApiClient.Books.GetAsync(-1);

// Assert
requestBook.Should().Throw<HttpOperationException>()
	.Where(e => e.Response.StatusCode == System.Net.HttpStatusCode.NotFound);

Test Bad Requests

One feature of a generated client is that it makes it harder to send invalid requests to an API. Autorest clients can send strings which do not match a pattern, or numbers which are out of range. But an autorest client does NOT allow sending wrong data types, for example strings instead of numbers.

Does this need to be tested? The ASP.NET Core framework de-serializes the request into the request model. When de-serialization fails, the framework rejects the request with a 400 Bad Request response. As a result, the ‘bad data’ never even reaches ‘our code’.

However, if the service needs to be tested with invalid request payloads, we have to go back to using the HTTP client directly. The sample below creates a valid model first and then invalidates a single property. Thus we can be sure the bad request is not caused by a different problem.

// Arrange - first create a valid model
var client = _factory.CreateClient();
var bookString = JsonSerializer.Serialize(new Book { Id = 3, Title = "test title", Author = "author", Type = "Hardcover" });
// then invalidate a property (send a string instead of a number)
bookString = bookString.Replace("3", "\"invalid\"");

var content = new StringContent(bookString,
	Encoding.UTF8,
	"application/json");

// Act
var response = await client.PostAsync("Books", content);

// Assert
response.StatusCode.Should().Be(HttpStatusCode.BadRequest);

Logs instead of Debugging

It is very easy to debug failing tests locally when there are issues. But what about the CI pipeline? Or when there are real problems in production? The integration tests offer a good opportunity to ‘test’ whether the API logs sufficient information to pinpoint the cause of problems.

By default, the test output only contains the logs from the client's perspective. But to solve problems, we generally need the service logs. To show the service logs and any console logs in the test output, we need to configure an xUnit logger.

For example, below is the test output for a failure after breaking the dependency injection. Unfortunately it is not very useful:

The default test output does not contain any useful information to analyze the cause of the error response.

After the API is configured to use the xUnit logger, the test output contains an option to show ‘additional output for this result’. This information can help us solve the problem without debugging. Debugging may seem easier, but being able to solve problems from logs alone is very useful.

The problem is that the IBookStore cannot be resolved!

My conclusion: if the service logs do not help us find the majority of problems without debugging, we will not be able to analyze problems in production either. Therefore we should rely on them in tests.

How to Capture Service Logs in xUnit Test Output

Unfortunately I cannot take any credit for the following code; I found it in a good post which explains logging within xUnit tests. Thanks, https://www.meziantou.net/! Here is the abbreviated version of how to capture log output in ASP.NET Core integration tests.

Step 1
First, implement a custom ILogger which logs to the xUnit ITestOutputHelper, and a corresponding ILoggerProvider. The sample code is available here.
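
As a rough orientation, a minimal sketch of these two classes could look like the one below (the full version in the linked post also handles scopes and structured state; the class names match those used in Step 2):

public class XUnitLoggerProvider : ILoggerProvider
{
	private readonly ITestOutputHelper _testOutputHelper;

	public XUnitLoggerProvider(ITestOutputHelper testOutputHelper)
	{
		_testOutputHelper = testOutputHelper;
	}

	public ILogger CreateLogger(string categoryName) => new XUnitLogger(_testOutputHelper, categoryName);

	public void Dispose() { }
}

public class XUnitLogger : ILogger
{
	private readonly ITestOutputHelper _testOutputHelper;
	private readonly string _categoryName;

	public XUnitLogger(ITestOutputHelper testOutputHelper, string categoryName)
	{
		_testOutputHelper = testOutputHelper;
		_categoryName = categoryName;
	}

	public IDisposable BeginScope<TState>(TState state) => null;

	public bool IsEnabled(LogLevel logLevel) => true;

	public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
	{
		// ITestOutputHelper throws when a message arrives after the test
		// has finished, so late log entries are simply swallowed.
		try
		{
			_testOutputHelper.WriteLine($"{logLevel} [{_categoryName}] {formatter(state, exception)}");
			if (exception != null)
			{
				_testOutputHelper.WriteLine(exception.ToString());
			}
		}
		catch (InvalidOperationException)
		{
		}
	}
}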

Step 2
Then define a custom WebApplicationFactory which registers the xUnit logger with the service. To capture output for a test, the ITestOutputHelper needs to be set in each test fixture.

public class DefaultTestWebApplicationFactory<TStartup> : WebApplicationFactory<TStartup>
	where TStartup : class
{
	public ITestOutputHelper TestOutputHelper { get; set; }

	protected override void ConfigureWebHost(IWebHostBuilder builder)
	{
		// Register the xUnit logger
		builder.ConfigureLogging(loggingBuilder =>
		{
			loggingBuilder.AddProvider(new XUnitLoggerProvider(TestOutputHelper));
		});
	}
}

Step 3
Finally, add the ITestOutputHelper as a constructor parameter to the test fixture and set the property on the WebApplicationFactory.

public class BookControllerTests : IClassFixture<DefaultTestWebApplicationFactory<Startup>>
{
	private readonly DefaultTestWebApplicationFactory<Startup> _factory;
	private ITestOutputHelper _outputHelper;

	public BookControllerTests(DefaultTestWebApplicationFactory<Startup> factory, ITestOutputHelper outputHelper)
	{
		_factory = factory;
		_outputHelper = outputHelper;
		_factory.TestOutputHelper = outputHelper;
	}
	
	[...]

Further Thoughts

Develop for Easy Testing

Integration tests allow mocking any class configured in dependency injection. On the downside, this means that mocks need to be implemented just for testing purposes.

Another option is to provide in-memory components for the development environment by default. As a result, developers do not need to install databases and other downstream components. One example of a first-class in-memory component is the Entity Framework Core in-memory database provider; another is an in-memory implementation of the service's own interfaces, as sketched below.
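
As a rough sketch of this idea, the sample's IBookStore interface could be wired to an in-memory implementation in the development environment. Note that InMemoryBookStore, SqlBookStore and the injected _environment field (an IWebHostEnvironment from the Startup constructor) are assumptions here, not code from the sample repository:

// Sketch: in Startup.ConfigureServices, register a first-class in-memory
// component by default for the development environment.
public void ConfigureServices(IServiceCollection services)
{
	services.AddControllers();

	if (_environment.IsDevelopment())
	{
		// No database needs to be installed for local development,
		// and integration tests get the in-memory store for free.
		services.AddSingleton<IBookStore, InMemoryBookStore>();
	}
	else
	{
		services.AddScoped<IBookStore, SqlBookStore>();
	}
}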

Some people argue that using in-memory components increases the chance of bugs. In my opinion this argument is flawed:

  1. End-to-End tests and some manual testing will exercise the ‘real’ components.
  2. Problems with downstream components often occur under load. The bugs which in-memory components miss usually do not surface in functional tests anyway, whether those tests are automated or manual.
  3. When an API is designed for easy testing, more automated tests are implemented. Thus more bugs are discovered.

Project Structure

Each test fixture has an overhead of ~500 ms on my laptop, since the API is started in memory. This makes it worthwhile to group tests together.

On the other hand, all tests within a fixture share the same data store. This means that tests either need to be split into multiple fixtures or must tolerate data manipulated by other tests, as sketched below.
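
One way to tolerate manipulated data is to key each test's data on a unique value, so that assertions do not depend on what other tests have done to the shared store. A sketch (assuming the generated client exposes an AddAsync operation for creating books):

[Fact]
public async Task Added_book_can_be_retrieved()
{
	// Arrange
	var client = _factory.CreateClient();
	var bookApiClient = new BookApiClient(client, false);

	// A unique title avoids conflicts with data created by other tests
	// which share this fixture's data store.
	var uniqueTitle = $"test title {Guid.NewGuid()}";
	await bookApiClient.Books.AddAsync(new Book { Id = 42, Title = uniqueTitle, Author = "author", Type = "Hardcover" });

	// Act
	var books = await bookApiClient.Books.GetAllAsync();

	// Assert - look for this test's book instead of asserting a total count
	books.Should().Contain(b => b.Title == uniqueTitle);
}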

Conclusion

ASP.NET Core integration tests are a great way to start testing services. Give them a go! I have to admit that I was very sceptical before I started.

Angela Evans

Senior Software Engineer at Diligent in Christchurch, New Zealand