Automated end-to-end testing using Vagrant, Puppet and JBehave
What do you do when your intelligence solution reaches its limit? Dr. Claire Fautsch tells us what aspects of testing emerged as the most important and most efficient when her team at Goodgame Studios began migrating data and making production changes.
Due to a rapidly increasing number of new players using our games, our existing business intelligence solution, based on PHP and MySQL, had reached its limit, triggering the need for a new data warehouse solution. At Goodgame Studios, we decided against a “big bang” migration because of the amount of data to be migrated and the importance of the existing solution in the day-to-day work of not only our data analysts, but also, for example, our marketing and conversion teams.
Instead, we opted for a step-by-step migration using an agile project methodology. This approach would also enable a smooth transition of users and reporting tools from the old system to the new one.
It soon became obvious that (regression) testing was a crucial part of the development process, for two reasons. On the one hand, data consistency between the old and the new system is essential. On the other, as the development team is working with new, partially unknown technologies and systems, continuous rework is necessary. This includes improving and extending newly developed components and optimizing already migrated ELT (extract, load, transform) processes.
In this article, we will describe how we have automated end-to-end (E2E) testing. This enables us to fully test ELT processes and run regression tests before applying any changes to production, while at the same time reducing the often tedious manual effort that regression testing requires.
High Level Architecture Overview
To better understand the setup of E2E tests, we will briefly outline the high level architecture of our data warehouse (DWH) solution (see Figure 1). The data in our DWH is based on events, which various sources send to RabbitMQ servers in a JSON format (see our previous article for more information).
Once the events reach the servers, they are consumed by a Consumer component, which reads the events from the RabbitMQ servers, validates the input, separates them based on their ID, and persists them to HDFS.
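At a high level, the Consumer's validate-and-separate logic can be sketched as follows. This is a minimal, hypothetical Java sketch: the class name, field names, and path layout are invented, and the real component consumes from RabbitMQ and persists to HDFS rather than returning strings.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch of the Consumer's routing step: validate an incoming
 * event and derive a per-ID target location. Names and layout are invented.
 */
public class EventRouter {

    /** A minimal validity check: the event must carry an ID and a payload. */
    public static boolean isValid(Map<String, Object> event) {
        return event.containsKey("id") && event.containsKey("payload");
    }

    /** Separate events by their ID into a per-event target directory. */
    public static String targetPath(Map<String, Object> event) {
        return "/events/" + event.get("id") + "/";
    }

    public static void main(String[] args) {
        Map<String, Object> event = new HashMap<>();
        event.put("id", "login_event");
        event.put("payload", "{\"user_id\": 42}");
        if (isValid(event)) {
            System.out.println("persist to " + targetPath(event));
        }
    }
}
```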
From HDFS, events are loaded into our analytical database (HP Vertica) using ELT jobs. The jobs are scheduled and triggered by a component, referred to as Core. In addition, external data sources provide the supplementary data necessary for aggregations. This includes, for example, marketing or application download information.
The requirements for the tests were very simple:
1. Tests should be automated (i.e., no manual intervention or data preparation necessary)
2. There should be the option to execute full regression tests at any moment
3. Subsets of tests (e.g., related to a specific event) can be executed separately
4. The tests should be executed in a self-contained environment close to production (not necessarily in terms of resources)
5. There should be the option to mock data from external sources (due to some request limitations on APIs, for example)
6. It should be easy for each developer to execute tests on his development machine
Taking requirements 4 and 6 into account, it quickly became clear that the test environment should be some kind of virtual machine.
If we want to be as close to the production environment as possible, we cannot have the different components run on only one machine. We need separate VMs for HDFS, Vertica, RabbitMQ, Consumer, and Core. Furthermore, the different machines need to be able to talk to each other. After some research, we came to the conclusion that Vagrant matches our needs perfectly.
When it comes to adding content to the virtual machines, i.e., provisioning them, Vagrant offers different out-of-the-box possibilities, such as shell scripts, Chef cookbooks, or Puppet modules. Finally, we use JBehave together with Maven as a test framework. This helps us to fulfill requirements 1, 2, 3, and 6. In the following sections, we will briefly describe each of the frameworks used and outline why it was our tool of choice.
Vagrant is an open-source tool developed for creating virtual (development) environments. Its main purpose is to provide a framework for lowering development environment setup time and to avoid excuses such as “… but it works on my computer”.
Vagrant is operated from the command line and provides a set of commands for basic usage. Its main building blocks are boxes: preconfigured virtual machines that serve as templates for the environments Vagrant sets up. Boxes from several public catalogs can be added with the vagrant box command, or you can package your own.
Once the box is set up, you will want to install software or change configurations, and doing this manually would defeat the goal of automation. Vagrant offers so-called provisioners for this purpose, which handle it automatically. Either simple shell scripts or configuration management systems such as Puppet or Chef can be used for provisioning. If the company already uses a configuration management system, its provisioning scripts can be reused to set up development environments that are as close to production as possible.
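A multi-machine setup with Puppet provisioning can be described in a Vagrantfile along these lines. This is a hedged sketch: the base box, hostnames, and manifest paths are invented and do not reflect our actual configuration.

```ruby
# Hypothetical Vagrantfile: one VM per component, each provisioned with Puppet.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"   # invented base box

  ["rabbitmq", "hdfs", "vertica", "consumer", "core"].each do |name|
    config.vm.define name do |node|
      node.vm.hostname = name
      # Private network so the machines can talk to each other.
      node.vm.network "private_network", type: "dhcp"
      # Provision each machine with its own Puppet manifest.
      node.vm.provision "puppet" do |puppet|
        puppet.manifests_path = "puppet/manifests"
        puppet.manifest_file  = "#{name}.pp"
        puppet.module_path    = "puppet/modules"
      end
    end
  end
end
```

With such a file, vagrant up brings up all five machines, and vagrant ssh consumer, for instance, opens a shell on an individual one.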
We get asked relatively frequently why we use Vagrant and not Docker. Indeed, Docker would also have met our needs. The main reason for not opting for Docker is that we wanted to remain as close to our production setup as possible. Of course, this is only possible within certain limits – mainly due to the limited resources of the developers' workstations – but using virtual machines rather than Docker containers at least keeps us a bit closer.
To keep everything fully automated and simple for the end user (the developer), we decided to use a configuration management system to provision our Vagrant boxes. A configuration management system allows system administrators to define the desired state of the company's IT infrastructure and have it maintained automatically.
As no configuration management system was in use at our company when we implemented the test environment, we opted for Puppet, since its initial learning curve seemed less steep than Chef's and it fulfilled all our needs. Puppet comes with its own declarative language. The system's state is defined via Puppet manifest files, which are basically the Puppet programs. The manifest files contain a set of resources describing the system. Manifests can be split into modules to give them a clearer structure and to keep similar functionality grouped together and reusable.
For example, say we would like to set up one server with a MySQL database, a Java installation, and a monitoring agent, and a second one with only a MySQL database. We would then write a module for the MySQL setup, which could be reused to provision both servers.
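Such a reusable module might look like the following sketch. The module layout and resource names are invented for illustration; in practice, ready-made Forge modules such as puppetlabs-mysql offer far more options.

```puppet
# Hypothetical module: modules/mysql/manifests/init.pp
class mysql {
  package { 'mysql-server':
    ensure => installed,
  }
  service { 'mysqld':
    ensure  => running,
    enable  => true,
    require => Package['mysql-server'],
  }
}

# Node definitions reusing the module (e.g., in site.pp);
# the java and monitoring classes are assumed to exist as separate modules.
node 'server-one' {
  include mysql
  include java
  include monitoring
}

node 'server-two' {
  include mysql
}
```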
Puppet installations usually follow a client-server principle, with one Puppet master (the server) and a Puppet agent on every node to be provisioned. For our Vagrant provisioning, however, we opted for a masterless setup to keep things simple and easy to run on every developer's machine.
JBehave is a Java-based framework for test automation designed for behavior-driven development (BDD). The framework allows you to write test cases in a natural language. Each test case is a scenario, and related scenarios are grouped in one story.
Each scenario is itself a set of steps. Each step is of one of the types Given, When, or Then. A Given step represents a precondition for a test case. For example, it allows you to define what the input data should look like. A When step, on the other hand, defines an action happening, and a Then step describes the outcome. You can also use And to combine several steps of one type.
For example, a story could look like in Listing 1.
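To give a flavor of the format, a story of this kind might look like the following sketch; the event, job, and table names are invented, not taken from our actual test suite.

```
Scenario: Login events are loaded into the fact table

Given an event login_event with user_id 42 on the queue
When the load_logins ELT job has run
Then the table fact_logins should contain 1 row for user_id 42
```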
This natural-language syntax even enables non-developers, such as business people, to write tests. At some point, however, we still need to tell the system what to do with each step.
For each step known to the system, there is an annotated Java method (within a POJO) representing its implementation. For the example given above, this would look (at a high level) like Listing 2.
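As a rough illustration, such a step class might look like this. The annotations are JBehave's, but the class, step texts, and method bodies are invented placeholders for real queue, job, and database access.

```java
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

public class EltJobSteps {

    @Given("an event $eventId with user_id $userId on the queue")
    public void givenAnEventOnTheQueue(String eventId, long userId) {
        // Placeholder: publish a JSON event to the RabbitMQ test queue.
    }

    @When("the $jobName ELT job has run")
    public void whenTheJobHasRun(String jobName) {
        // Placeholder: have the Core component trigger the job and wait for it.
    }

    @Then("the table $table should contain $count row for user_id $userId")
    public void thenTheTableContainsRow(String table, int count, long userId) {
        // Placeholder: query the analytical database and assert the row count.
    }
}
```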
Although we use JBehave for a slightly different purpose than the one it is intended for (namely BDD), it proved to be a good tool for our use case. Most of the ELT jobs we want to E2E test have different content but often perform similar steps, for example:
- Given an Event X as input
- When we run job X
- Then database table X should contain data Y
This means that once we have a set of test steps defined by a developer, even a non-developer can write new test cases.
Putting it all together
With Vagrant and Puppet, we have a self-contained test environment, provisioned to reflect the production systems. Each developer can use the environment not only for testing but also for development purposes.
The JBehave tests need an environment to run against, ideally a clean one for each set of test runs to avoid side effects from previously failed runs. We have therefore set up our JBehave steps to work on the Vagrant environment, and before each single test, we wipe the database, HDFS, and the queues so that we start from a clean slate.
As this environment is used solely for the automated tests, we are not destroying anyone's data or risking breaking anything else. Each test case should include the data it needs in its Given steps.
We use Maven to launch the tests, which JBehave's Maven plugin makes easy. Using Maven properties, we can also run only specific test cases. Currently, we start the Vagrant environment manually with the vagrant up command; it needs to be fully up before the tests can run. In the future, we would like to include the Vagrant Maven plugin so that this, too, is handled completely automatically.
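The current workflow therefore looks roughly like the commands below. The vagrant and mvn invocations are standard; the exact Maven property used to filter stories depends on how the jbehave-maven-plugin is configured, so the property name here is an assumption.

```
# Bring up the full multi-VM test environment (must finish before tests run).
vagrant up

# Run the complete regression suite.
mvn clean verify

# Run only a subset of stories, e.g. for one event
# (the meta.filter property name depends on the plugin configuration).
mvn clean verify -Dmeta.filter="+event login"

# Tear the environment down afterwards.
vagrant halt
```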
We did not introduce automated tests at the start of the project, but only a few months later, once a lot of the code and jobs had already been written. Consequently, setting up and configuring the environment and writing test cases for the already existing jobs required a lot of extra up-front effort.
However, once you close that initial gap, developing automated test cases becomes part of the standard development process. It also now takes us a lot less time to execute regression tests than it did before (manually). Furthermore, we have observed that writing these automated test cases is a helpful step in the review process, especially if they are not written by the developer himself. Bugs can be identified at a much earlier stage, especially when testing edge and special cases.
In general, we can conclude that the additional amount of time spent developing those test cases is still less than the time spent on reviews, regression tests, and bug fixes. Additionally, the Vagrant environment has the nice side effect that each developer has his very own development environment at hand. The main points we have concluded from our setup are that:
- Automated tests are not only a way to accelerate regression and E2E tests but also assist in the review of newly developed features.
- Don’t think: “Oh, but we will lose time developing those tests”. The time you will gain by avoiding bugs and by minimizing manual tests, as well as the improved quality you will deliver, makes up for it completely.
- Don’t wait until your project is far along to start automating your (end-to-end) tests. Start from the beginning.
- Plan to write automated tests as “standard” parts of your development process, the same as documentation or unit tests.
Goodgame Studios is a German online games company which develops and publishes free-to-play web and mobile games. Founded in 2009, their portfolio now comprises nine games with over 200 million registered users.