
Testing Java Microservices: Not a big problem?

Daniel Bryant

The microservice architecture has become the de facto style for implementing web-based applications. However, with the “great power” provided by microservices come great responsibilities and challenges. In this article, Daniel Bryant, Chief Scientist at OpenCredo and JAX DevOps speaker, talks about the challenges of testing Java-based microservices.

It should come as no surprise to any developer who regularly reads articles on the web that many organisations are looking to migrate away from (or augment) their current monolithic Java applications. Rightly or wrongly, the microservice architecture has become the de facto style for implementing web-based applications. However, with the “great power” provided by microservices come great responsibilities and challenges.

Indeed, much has been written on the challenges associated with implementing a microservice-based architecture, modelling the data, and the operational (and organisational) aspects. However, not much has been written about the challenges of testing with this new architectural style, with the exception of Toby Clemson’s excellent article “Testing Strategies in a Microservices Architecture”. This article aims to add to the discussion around testing Java-based microservices.

As part of my work with SpectoLabs and OpenCredo, we have helped several organisations build and deploy microservice-based applications, both for greenfield prototypes and brownfield integrations and migrations. We’ve learned a lot along the way, and today we are keen to share our findings on how to design, build and test microservice-based systems:

Design the system: Determine service boundaries

  1. Identify areas of business functionality in the existing (or new) system. We often define these areas as domain-driven design (DDD) ‘bounded contexts’.
  2. This step can take some time (and will also be iterative), but the output is typically a context map which represents the first pass at defining the application service boundaries.

Design the service APIs: Determine service functionality

  1. Work with the relevant business owners, domain experts and the development team to define service functionality and APIs.
  2. Use the behaviour-driven development (BDD) technique known as the ‘Three Amigos’.
  3. The typical outputs from this step include:
    • A series of BDD-style acceptance tests that assert component-level (single microservice) requirements, for example acceptance test scripts written in Cucumber’s Gherkin syntax;
    • An API specification, for example a Swagger or RAML file, which the test scripts will operate against.
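The BDD-style acceptance tests described above can be sketched in plain Java. This is a minimal, framework-free illustration of the given/when/then structure; in practice the scenario would live in a Gherkin feature file with a Cucumber step definition behind it, and the `OrderService` and its pricing logic here are hypothetical.

```java
// A minimal, framework-free sketch of a BDD-style acceptance test.
// In a real project this would be a Gherkin scenario backed by a Cucumber
// step definition; OrderService and its API are illustrative only.
public class OrderAcceptanceTest {

    // Hypothetical slice of business functionality under test
    static class OrderService {
        int total(int unitPriceInCents, int quantity) {
            return unitPriceInCents * quantity;
        }
    }

    public static void main(String[] args) {
        // Given a customer ordering two items priced at 50 cents each
        OrderService service = new OrderService();

        // When the order total is calculated
        int total = service.total(50, 2);

        // Then the total should be 100 cents
        if (total != 100) {
            throw new AssertionError("Expected 100 but was " + total);
        }
        System.out.println("acceptance test passed");
    }
}
```

The value of writing the scenario in business language first is that the same given/when/then text is readable by the domain experts who helped define it in the ‘Three Amigos’ session.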

Build services outside-in

  1. Now that we have our API specification and associated (service-level) business requirements, we can begin building the service functionality outside-in!
  2. Following Toby Clemson’s excellent article on microservice testing, this is where we use both integration testing and unit testing (both social and solitary), frequently using a double-loop TDD approach.
  3. We often use JUnit, Lambda-behave, Mockito, and Hoverfly Java for foundational unit testing.

Component test

  1. In combination with building a service outside-in we also work on component-level testing. This differs from the integration testing mentioned above, in that component testing operates via the public API and tests an entire slice of business functionality.
  2. Typically the first wave of component tests utilises the acceptance test scripts we defined while designing the service APIs (step 2.3 above), and these assert that we have implemented the business functionality correctly within this service.
  3. Tooling we like here includes REST-assured and Spring Boot testing features.
  4. Test non-functional requirements (NFRs). Examples include:
    • Performance testing of a series of core happy paths offered by the service e.g. JMeter (often triggered via the Jenkins Performance Plugin) or Gatling (often run via flood.io).
    • Basic security testing using a framework like Continuum Security’s bdd-security, which includes the awesome OWASP ZAP.
    • Fault-tolerance testing, where we deterministically simulate failures using Hoverfly and associated middleware (and in the past, Saboteur).
    • Visibility testing, which asserts that the service offers the expected endpoints for metrics and health checks.
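The visibility testing mentioned above can be sketched without a running server by faking the HTTP layer in-process. The routing table and endpoint paths below are illustrative; a real test would issue HTTP requests (for example with REST-assured) against a deployed instance of the service.

```java
// Sketch of a "visibility" check: assert that the service exposes the
// expected operational endpoints. The HTTP layer is faked here with a
// simple routing table; endpoint paths are illustrative assumptions.
import java.util.Map;

public class VisibilityTestSketch {

    // Stand-in for a deployed service's routing table
    static final Map<String, Integer> ROUTES = Map.of(
            "/health", 200,
            "/metrics", 200
    );

    // Returns the status code a request to the given path would receive
    static int statusOf(String path) {
        return ROUTES.getOrDefault(path, 404);
    }

    public static void main(String[] args) {
        if (statusOf("/health") != 200) {
            throw new AssertionError("/health endpoint missing");
        }
        if (statusOf("/metrics") != 200) {
            throw new AssertionError("/metrics endpoint missing");
        }
        System.out.println("visibility checks passed");
    }
}
```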

Contract test – verify the component interactions

  1. Verify the proposed interaction between components.
  2. A popular approach for this in the microservice world is by using consumer-driven contracts, and this can be implemented using frameworks like Spring Cloud Contract, Pact-JVM or Pacto.
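The essence of a consumer-driven contract can be shown in a few lines of plain Java: the consumer states which parts of the provider's response it depends on, and the provider's build verifies it still satisfies that expectation. This is a hand-rolled sketch with illustrative field names; in practice Spring Cloud Contract or Pact-JVM would generate, share and verify the contract.

```java
// Minimal, framework-free sketch of a consumer-driven contract check.
// Field names are illustrative; real projects would use Spring Cloud
// Contract or Pact-JVM to express and verify contracts like this.
import java.util.Map;
import java.util.Set;

public class ContractTestSketch {

    // The consumer records the response fields it actually relies on
    static final Set<String> CONSUMER_CONTRACT = Set.of("id", "status");

    // A sample response the provider currently produces
    static final Map<String, Object> PROVIDER_RESPONSE =
            Map.of("id", 42, "status", "SHIPPED", "carrier", "ACME");

    // The provider satisfies the contract if every required field is present;
    // extra fields (like "carrier") are allowed and do not break consumers
    static boolean satisfies(Map<String, Object> response, Set<String> contract) {
        return response.keySet().containsAll(contract);
    }

    public static void main(String[] args) {
        if (!satisfies(PROVIDER_RESPONSE, CONSUMER_CONTRACT)) {
            throw new AssertionError("provider breaks the consumer contract");
        }
        System.out.println("contract verified");
    }
}
```

The key property this preserves is that the provider can evolve freely (adding fields, for instance) so long as the fields its consumers declared remain intact.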

End-to-end (E2E) tests: Asserting system-level business functionality and NFRs

  • E2E automated tests essentially assert core user journeys and application functionality (and also prevent regression).
  • Test non-functional/cross-functional requirements, for example, asserting that all critical business journeys are working, respond within a certain time, and are secure.
  • When E2E tests touch systems that are not available, use tooling like Hoverfly to simulate the API. Latency or failures can be injected via Hoverfly middleware.
  • Outputs from this step of the process should include:
    • A correctly functioning and robust system;
    • The automated validation of the system;
    • Happy customers!
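The API simulation with latency injection described above can be sketched in the spirit of how Hoverfly middleware is used, though hand-rolled here for brevity: the unavailable downstream dependency is replaced by a simulated implementation that delays each response. The `PaymentGateway` interface and its behaviour are illustrative assumptions, not a real API.

```java
// Sketch of simulating an unavailable downstream API with injected latency,
// hand-rolled for illustration (in practice Hoverfly and its middleware
// would do this). PaymentGateway and its responses are hypothetical.
public class ApiSimulationSketch {

    interface PaymentGateway {           // downstream dependency to simulate
        String charge(int cents);
    }

    // Build a simulated gateway that delays each call by latencyMillis
    static PaymentGateway simulated(long latencyMillis) {
        return cents -> {
            try {
                Thread.sleep(latencyMillis);   // injected latency
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "OK:" + cents;              // canned successful response
        };
    }

    public static void main(String[] args) {
        PaymentGateway gateway = simulated(50);

        long start = System.nanoTime();
        String result = gateway.charge(1000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        if (!result.equals("OK:1000")) {
            throw new AssertionError("unexpected response: " + result);
        }
        if (elapsedMs < 40) {
            throw new AssertionError("latency was not injected");
        }
        System.out.println("simulated call took " + elapsedMs + "ms");
    }
}
```

Running the E2E suite against such a simulation lets the team assert how the system behaves when a dependency is slow, without waiting for the real system to degrade.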

Final words

If you have enjoyed reading this article, then a more comprehensive version can be found at specto.io: “A Proposed Recipe for Designing, Building and Testing Microservices”.


This post was originally published in the January 2017 issue of the Eclipse Newsletter: Exploring New Technologies

For more information and articles check out the Eclipse Newsletter.


Author
Daniel Bryant
Daniel Bryant is a Principal Consultant at OpenCredo and CTO at SpectoLabs. He specialises in enabling agility within organisations, creating and leading effective software development teams, and maximising the impact of software delivery. Daniel’s current work includes introducing better requirement gathering and planning techniques, focusing on the relevance of architecture within agile development, and facilitating continuous integration/delivery.
