
Testing the Database Layer

Colin Vipurs

There’s one thing we can agree on when it comes to database tests: they ain’t easy. In this JAX Magazine preview, testing guru and JAX London speaker Colin Vipurs runs through the strengths and weaknesses of common approaches to testing databases.

Over my many years of software development I’ve had to perform various levels of testing against many different database instances and types, including RDBMS and NoSQL, and one thing remains constant – it’s hard. There are several approaches that can be taken when testing the database layer of your code, and I’d like to go over a few of them, pointing out the strengths and weaknesses of each.

Mocking

This is a technique that I have used in the past but I would highly recommend against doing now. In my book “Tests Need Love Too” I discuss why you should never mock any third-party interface, but just in case you haven’t read it (you really should!) I’ll go over it again.

As with mocking any code you don’t own, what you’re validating is that you’re calling the third-party code in the way you think you should, but – and here’s the important part – this might be incorrect. Unless you have higher-level tests covering your code, you’re not going to know until it hits production. In addition to this, mocking raw JDBC is hard, like really hard. Take for example the test code snippet in Listing 1.

Within this test, not only is there a huge number of expectations to set up, but in order to verify that all the calls happen in the correct order, jMock “states” are used extensively. Because of the way JDBC works, this test also violates the guideline of never having mocks returning mocks, and in fact goes several levels deep! Even if you manage to get all of this working, something as simple as a typo in your SQL can mean that although your tests are green, your code will still fail when it goes to production.
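The original listing isn’t reproduced here, but the mocks-returning-mocks problem can be demonstrated even without jMock. The sketch below (an illustration of mine, not code from the article) hand-rolls “mocks” for the JDBC interfaces using the JDK’s dynamic proxies; the `readName` method, the `users` table and the deliberately misspelt column are all hypothetical:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Illustrative sketch only: hand-rolling "mocks" for JDBC with JDK dynamic
// proxies to show how deep the mocks-returning-mocks chain goes.
public class JdbcMockDepth {

    @SuppressWarnings("unchecked")
    static <T> T fake(Class<T> type, InvocationHandler handler) {
        return (T) Proxy.newProxyInstance(
                type.getClassLoader(), new Class<?>[] {type}, handler);
    }

    public static String readName() {
        // Level 3: a ResultSet mock scripting next() and getString()
        ResultSet rs = fake(ResultSet.class, (proxy, method, args) -> {
            if (method.getName().equals("next")) return Boolean.TRUE;
            if (method.getName().equals("getString")) return "bob";
            return null;
        });
        // Level 2: a PreparedStatement mock returning the ResultSet mock
        PreparedStatement ps = fake(PreparedStatement.class, (proxy, method, args) ->
                method.getName().equals("executeQuery") ? rs : null);
        // Level 1: a Connection mock returning the PreparedStatement mock
        Connection conn = fake(Connection.class, (proxy, method, args) ->
                method.getName().equals("prepareStatement") ? ps : null);

        // "Code under test": note the typo in the SQL -- the mocks don't care,
        // so this stays green while production would blow up
        try (PreparedStatement stmt =
                     conn.prepareStatement("SELECT nmae FROM users WHERE id = ?")) {
            stmt.setLong(1, 42L);
            try (ResultSet results = stmt.executeQuery()) {
                results.next();
                return results.getString("name");
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readName()); // prints "bob" despite the broken SQL
    }
}
```

Three nested layers of fakery to satisfy one query – and the misspelt column sails straight through, which is exactly the point the article makes.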


A final note on mocking – no sane developer these days would be using raw JDBC, but rather one of the higher-level abstractions available, and the same rules apply for these. Imagine a suite of tests set up to mock against JDBC, and your code switches to Spring JdbcTemplate, jOOQ or Hibernate. Your tests will now have to be rewritten to mock against those frameworks instead – not an ideal solution.

Testing Against a Real Database

It may sound silly, but the best way to verify that your database interaction code works as expected is to actually have it interact with a database! As well as ensuring you’re using your chosen API correctly, this technique can verify things that mocking never can – for example, that your SQL is syntactically correct and does what you hope.

In-Memory Databases: One of the easiest and quickest ways to get set up with a database to test against is to use one of the in-memory versions available, e.g. H2, HSQL or Derby. If you’re happy introducing a Spring dependency into your code, then the test setup can be as easy as this (Listing 2).

This code will create an instance of the H2 database, load the schema defined in schema.sql and any test data in test-data.sql. The returned object implements javax.sql.DataSource so can be injected directly into any class that requires it.
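The setup described above (the article’s Listing 2) looks roughly like the following sketch, using Spring’s embedded-database support – assuming spring-jdbc and H2 are on the test classpath:

```java
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

// Sketch: spin up an in-memory H2 database, apply the schema and seed data.
// The returned object implements javax.sql.DataSource.
DataSource dataSource = new EmbeddedDatabaseBuilder()
        .setType(EmbeddedDatabaseType.H2)
        .addScript("schema.sql")      // create the schema
        .addScript("test-data.sql")   // load the test data
        .build();
```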

One of the great benefits of this approach is that it is fast. You can spin up a new database instance for each and every test requiring it, giving you a cast-iron guarantee that the data is clean. You also don’t need any extra infrastructure on your development machine as it’s all done within the JVM. This mechanism isn’t without its drawbacks though.

Unless you’re deploying against the same in-memory database that you’re using in your tests, inevitably you will run up against compatibility issues that won’t surface until you hit higher-level testing or, god forbid, production. Because you’re using a different DataSource to your production instance, it can be easy to miss configuration options required to make the Driver operate correctly. Recently I came across such a setup where H2 was configured to use a DATETIME column requiring millisecond precision. The same schema definition was used on a production MySQL instance, which not only required this to be DATETIME(3) but also needed the useFractionalSeconds=true parameter provided to the driver. This issue was only spotted after the tests were migrated from using H2 to a real MySQL instance.

Real Databases: Where possible I would highly recommend testing against a database that’s as close as possible to the one being run in your production environment. A variety of factors can make this difficult or even impossible – commercial databases, for example, require a license fee, meaning that installing on each and every developer machine is prohibitively costly. A classic way to get around this problem is to have a single development database available for everyone to connect to. This in itself can cause a different set of problems, not least of which are performance (these always seem to get installed on the cheapest and oldest hardware) and test repeatability. The issue with sharing a database with other developers is that two or more people running the tests at the same time can lead to inconsistent results and data shifting in unexpected ways. As the number of people using the database grows, this problem gets worse – throw the CI server into the mix and you can waste a lot of time re-running tests and trying to find out if anyone else is running tests right now in order to get a clean build.

If you’re running a “free” database such as MySQL or one of the many free NoSQL options, installing on your local development machine can still be problematic – issues such as needing to run multiple versions concurrently, or keeping everyone informed of exactly what infrastructure needs to be up and what ports it needs to be bound to. This model also requires the software to be up and running prior to performing a build, making onboarding staff onto a new project more time-consuming than it needs to be.

Thankfully over the last few years several tools have appeared to ease this, the most notable being Vagrant and Docker. As an example, spinning up a local version of MySQL in Docker can be as easy as issuing the following command:

$ docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=bob mysql

This will start up a self-contained instance of the latest MySQL mapped to local port 3306, using the root password provided. Even on my 4-year-old MacBook Pro, after the initial image download, this only takes 12 seconds. If you need Redis 2.8 running as well, you can tell Docker to do that too:

$ docker run -p 6379:6379 redis:2.8

Or the latest version running on a different local port:

$ docker run -p 6390:6379 redis:latest

This can be easily plugged into your build system to make the whole process automated, meaning the only software your developers need on the local machine is Docker (or Vagrant) – the infrastructure required for the build can be packaged into the build script!

Testing Approach: Now that you have your database up and running, the question becomes “how should I test?”. Depending on what you’re doing, the answer will vary. A greenfield project might see a relational schema changing rapidly in the early stages, whereas an established project will care more about reading existing data. Is the data transient or long-lived? Most* applications making use of Redis would be using it as a cache, so you need to worry less about reading existing data.

* Most, not all. I’ve worked with a fair few systems where Redis is the primary data store.

The first thing to note is that for functional tests the best thing to do is start with a clean, empty database. Repeatability is key, and an empty database is a surefire way to ensure it. My preference is for the test itself to take care of this, purging all data at the beginning of the test, not the end. In the event of a test failure, having the database still populated is an easy way to diagnose problems; cleaning up state at the end of the test leaves you no trace to inspect. As long as every test purges at the start, you’re all good.
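In a JUnit test, this purge-at-the-start pattern might look like the following sketch (the injected `dataSource` field and the `users` table are hypothetical):

```java
// Sketch: purge at the START of each test, never the end, so a failing
// test leaves its data behind for diagnosis (assumes JUnit 4 and an
// injected javax.sql.DataSource field named dataSource)
@Before
public void purgeDatabase() throws Exception {
    try (Connection conn = dataSource.getConnection();
         Statement stmt = conn.createStatement()) {
        stmt.executeUpdate("DELETE FROM users");
    }
}
```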

A popular technique for seeding test data is to use a tool like DbUnit, which lets you express your data in files and have it easily loaded. I have two problems with this: the first is that if you’re using a relational database there is duplication between the DB schema itself and the test data, so a schema change requires changing the dataset file(s); the second is that the test data is no longer in the test class itself, meaning a context switch between tests and data. For an example of a DbUnit XML file, see Listing 3.
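The original Listing 3 isn’t reproduced here, but a typical DbUnit flat XML dataset for a hypothetical users table looks something like this – note how the column names duplicate the schema:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<dataset>
    <!-- each element names a table; attributes are column values -->
    <users id="1" name="bob"/>
    <users id="2" name="alice"/>
</dataset>
```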

One question I usually hear from newcomers to DB testing is whether they should round-trip the data or poke the database directly for verification. Round-tripping is an important part of the testing cycle, as you really need to know that the data you’re writing can be read back. An issue with this, though, is that you’re essentially testing two things at once, so if there is a failure on one side it can be hard to determine which it is. If you’re using TDD (of course you are) then tackling the problem this way will likely feel very uncomfortable, as the time between red and green can be quite long and you won’t be getting the fast feedback you’re used to.

The technique I have adopted is a hybrid approach that lets me get the best of both approaches while mostly avoiding the drawbacks of each. The first test I write is a pure read test that inserts the data by hand within the test itself. Although this seems like duplication, and it is a little bit, the test code will bypass any logic the write path might apply. For example, where the production insert has an “ON DUPLICATE KEY” clause, the hand-rolled insert can skip it and assume the record does not exist, as the test is in complete control of the state of the data. The test then uses the production code to read back what the test has inserted and presto, the read path is verified. An example of a read test can be seen in Listing 4.
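The article’s Listing 4 isn’t reproduced here, but a read test in this style might be sketched as follows (`userDao`, `User`, `findById` and the `users` table are all hypothetical names):

```java
@Test
public void readsBackARecordInsertedByHand() throws Exception {
    // Arrange: insert directly with SQL, bypassing the production write path
    try (Connection conn = dataSource.getConnection();
         Statement stmt = conn.createStatement()) {
        stmt.executeUpdate("INSERT INTO users (id, name) VALUES (42, 'bob')");
    }

    // Act: read back through the production code
    User user = userDao.findById(42);

    // Assert: the production read path found the hand-inserted row
    assertEquals("bob", user.getName());
}
```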

Once the read path is green, the write tests will round-trip the data using production code for both writing and reading. Because the read path is known to be good, there is only the write path to worry about. A failure on the read path at some point in the future will cause both sets of tests to fail, but a failure only on the write path helps isolate where the problem is. In addition, if you’re using a test DSL for verifying the read path, it can be reused here to save you time writing those pesky assertions! An example of a round-trip test can be seen in Listing 5.
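The article’s Listing 5 isn’t reproduced here either, but a round-trip test against the same hypothetical DAO might be sketched as:

```java
@Test
public void roundTripsARecord() throws Exception {
    // Write through the production code -- the path under test
    userDao.save(new User(42, "bob"));

    // Read back through the path the earlier read tests have already proven
    User readBack = userDao.findById(42);
    assertEquals("bob", readBack.getName());
}
```

Because the read path is covered separately, a failure here points squarely at the write path.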


Author
Colin Vipurs
Colin Vipurs started professional software development in 1998 and released his first production bug shortly after. He has spent his career working in a variety of industries using a wide range of technologies, always attempting to release bug-free code. He holds an MSc from Liverpool University and currently works at Shazam as a Developer/Evangelist. He has spoken at numerous conferences worldwide.
