Arquillian: A Component Model for Integration Testing
If you've ever attended a No Fluff Just Stuff event, you likely came away with the message ingrained in your head that testing is important... correction, crucial, and you should do it. Period.
As Venkat Subramaniam, a long-time speaker on the No Fluff Just Stuff symposium series, often reminds audiences:
Testing is the engineering rigor of software development.
In other words, if you're going to carry around the title Software Engineer, you had better put ample effort into engineering a plethora of tests.
But this article is not about the importance of testing. At this point, we'll assume its importance is well established. What I'm here to address is why the best intentions to write tests are so often foiled and how to prevent it. The main obstacle is that your tests live in a different world than your application.

To overcome this discrepancy, I'll introduce you to the idea of having a component model for your tests, an architecture provided by Arquillian. Arquillian is a container-oriented testing framework for Java that brings your test to the runtime rather than requiring you to manage the runtime from the test. This strategy eliminates setup code and allows the test to behave more like the components it's testing. The end result is that integration testing becomes no more complex than unit testing. You'll believe it when you see how much power you get out of a simple Arquillian-based test.
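To give you a taste of what that looks like, here is a minimal sketch of an Arquillian test (JUnit 4 style). `Greeter` is a hypothetical CDI bean used for illustration, and exact imports and APIs vary by Arquillian and ShrinkWrap version:

```java
import javax.inject.Inject;
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class)
public class GreeterTest {

    // Define a "micro-deployment" containing only what the test needs;
    // Arquillian deploys it to the container for you.
    @Deployment
    public static JavaArchive createDeployment() {
        return ShrinkWrap.create(JavaArchive.class)
            .addClass(Greeter.class)
            .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    // The container injects the real component, just as in production.
    @Inject
    Greeter greeter;

    @Test
    public void shouldGreetUser() {
        assertEquals("Hello, Earthling!", greeter.greet("Earthling"));
    }
}
```

Notice there is no container lookup, no bootstrap code, and no teardown: the test reads like a unit test even though it runs against real components inside a container.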
I want to start by studying this transition from unit to integration tests and establishing why it knocks the wind out of the sails of so many testing initiatives. We'll then look at how a component model for tests can ease this transition. Of course, we'll look at some examples of how to use Arquillian along the way.
Why Don't We Do It?
Why don't we test? (For those of you who test religiously, think about a scenario when you've hesitated or decided not to write a test.) I've asked this question at several NFJS events and it doesn't take long before I hear all the responses I'm seeking:
• Too hard
• Not possible
• Not enough time
• Too slow
• Not enjoyable
• No proper tools
Unless you have an enlightened manager who embraces agile software practices (perhaps because she attended an NFJS event), you likely fear her walking by to find you working on something that isn't the application code (i.e., tests). Worse is if she catches you writing your own testing framework, a tangent even by an enlightened manager's standards. So we can generalize why testing gets the axe by saying there isn't enough time.
The reason there isn't enough time is that we don't have the proper tools. I'm not talking about those fancy graphical tools; those likely just compound the problem (because they require additional learning). What we're missing is a tool in the form of a programming model that makes us efficient at solving the other testing challenges listed above.
But wait, isn't that what a unit testing framework is for? It is, to a point. Unit testing frameworks, such as JUnit and TestNG, provide a programming model for writing unit tests. They don't specifically address integration testing concerns (though they can drive integration tests). What lies between these two classifications is a large chasm.
The Testing Bandgap
Unit tests are fine-grained, simple and fast. They exercise individual components while ignoring the rest of the system. Since the test plugins for IDEs were designed with this type of test in mind, executing them in the IDE is easy.
A majority of books and articles that cover testing demonstrate the concept with unit testing. Indeed, it looks so simple and promises to keep regression errors from creeping in that you feel motivated to run back to your application and start adding tests. But when you give it a try, you soon discover it's harder than it looks.
Where things fall apart is when you transition to integration testing. Integration tests are coarse-grained and generally perceived as complex and slow (though this article will hopefully change that perception in your mind). They exercise multiple components from a subset of the application, verifying their communication and interactions.
Once you start involving multiple components, the task of isolating the code being tested from the rest of the system becomes more difficult. On top of that, getting components to interact often requires bootstrapping the environment they need in order to function (e.g., dependency injection). All of a sudden, the task of writing a test isn't so simple.
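To see what that bootstrapping burden looks like, here is a rough sketch of wiring up a CDI environment by hand in a test, using Weld SE. (`Greeter` is a hypothetical bean, and the Weld API details vary across versions; this is the kind of ceremony Arquillian is designed to eliminate.)

```java
import org.jboss.weld.environment.se.Weld;
import org.jboss.weld.environment.se.WeldContainer;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class GreeterManualBootstrapTest {

    private WeldContainer container;

    // Boilerplate: start and stop the CDI container around every test.
    @Before
    public void startContainer() {
        container = new Weld().initialize();
    }

    @After
    public void stopContainer() {
        container.shutdown();
    }

    @Test
    public void shouldGreetUser() {
        // Look up the bean manually instead of having it injected.
        Greeter greeter = container.select(Greeter.class).get();
        assertEquals("Hello, Earthling!", greeter.greet("Earthling"));
    }
}
```

Multiply that lifecycle management across every test class, and for environments heavier than CDI (a servlet container, a full application server), and the setup cost quickly dwarfs the test itself.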
I describe the sharp increase in configuration complexity between unit and integration testing as the testing bandgap, represented in Figure ALL-1.
The shift to integration testing requires a significant increase in mental effort when any non-trivial setup is required (which is typical). This interrupts your flow.
Work stops, your browser history becomes flooded with search results, the build script multiplies and the code-build-test cycle becomes a code-build...-catch up on social timeline-test cycle. Testing becomes tedious and costly, two things that directly lead to its demise. We've got to find a way past this complexity barrier because integration is all around us.