Testing strategies and challenges – Dos and don’ts
Are you ready to learn more about testing strategies and challenges from Mark Price? You’re in luck: you can attend his session and/or workshop at JAX Finance! Here is a small teaser to get you even more hyped!
JAXenter: What is the ideal testing strategy for companies? How do they sometimes fall short?
Mark Price: Ideally, a company should have different levels of testing, starting with unit testing and moving up to integration and acceptance testing, with each level broader in scope than the previous one. If system performance is a business goal, then it makes sense to test it alongside a product’s functional features.
All these tests should be part of a continuous integration strategy so that failures and regressions can be caught quickly. One of the most important things is making the state of tests visible so that trends can be identified and everyone is aware when something is broken.
JAXenter: What is a performance test harness? Why do companies need to have this as a part of their testing strategy?
Mark Price: A performance test harness is simply another type of test interaction that can be used to model user behavior. It is typically focused on measuring how well the system performs under varying levels of load. Reporting of performance tests may be different from that of functional tests; for instance, it may be difficult to define what constitutes a test failure.
No business wants their product to fail when it suddenly becomes popular, so as part of capacity planning it is important to know how much load the system can handle. Having a test harness in place allows us to inform the business of how popular the product can become before there are going to be scaling issues. It also provides a platform for analyzing and improving system performance in a safe environment, where experiments can be performed without worrying about affecting production systems.
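To make the idea of measuring performance under load concrete, here is a minimal sketch of a latency-measuring harness. Everything in it is hypothetical and simplified: `fake_request` stands in for a call to the real system under test, and a production harness would use a proper histogram library and concurrent load generation rather than a sequential loop.

```python
import random
import time
from statistics import quantiles


def fake_request() -> None:
    """Stand-in for a call to the system under test (hypothetical)."""
    time.sleep(random.uniform(0.0001, 0.001))  # 0.1-1 ms of simulated work


def run_load(requests: int) -> list[int]:
    """Fire requests and record each one's latency in microseconds."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter_ns()
        fake_request()
        latencies.append((time.perf_counter_ns() - start) // 1_000)
    return latencies


def report(latencies: list[int]) -> dict[str, float]:
    """Summarise the latency distribution; percentiles, not averages,
    show what users at the tail actually experience."""
    cuts = quantiles(latencies, n=100)
    return {"p50": cuts[49], "p99": cuts[98], "max": max(latencies)}


if __name__ == "__main__":
    print(report(run_load(200)))
```

Running the load at increasing request rates and watching where the p99 latency starts to climb is one simple way to tell the business how much headroom the system has before scaling issues appear.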
JAXenter: How can we determine that the tests accurately reflect what is actually happening?
Mark Price: This is probably the starting point for building a test harness. First, we need to understand how systems are being used in production, so some form of traffic analysis is required (e.g. looking at requests per second, distribution of request types, etc.). While the test harness is being developed and run, we can analyze the traffic coming from the harness to make sure that it has the same “shape” as that seen in production. It’s also possible to automate this as an extra validation step to ensure that production traffic loads are not diverging from the test model.
JAXenter: What’s the biggest challenge in testing? How do you mitigate it?
Mark Price: Performance regressions are one of the hardest things to track down. Due to the duration of most performance tests (they may run for several minutes, or even hours), a lot of change is usually incorporated into each run. When a regression occurs, something like ‘git bisect’ is very useful for tracking down the change that caused it. Sometimes a regression is caused by a configuration change at the system level, so it is also important to have a record of any changes made external to the actual application (e.g. OS updates).
JAXenter: What can attendees expect from your workshop?
Mark Price: We will cover more detail about the whys and whats of performance testing, and look at techniques to ensure that we are accurately measuring system performance. Really trusting your tools (in this case a test harness) involves understanding how they work at a very low level. The workshop covers the use of profilers and other monitoring tools, along with low-latency coding techniques that will result in a test harness that can measure system performance down to the microsecond.
Thank you very much!
Mark Price will be delivering a talk and a workshop at JAX Finance in London.
In a world of highly-distributed systems and microservices, applications need to communicate with reliable speed. In the quest for ever-faster messaging, a number of techniques have emerged aimed at gaining the lowest-possible latency when processing network traffic.
In his talk, Mark Price will explore these techniques: why they work, and how they are utilized by current state-of-the-art event-processing frameworks. Heading further down the stack, we will then take a look at how to optimize the operating system components involved in getting our application data from the network fabric to our business code.
During his full-day workshop, you will explore how to validate that your load-testing harness is producing accurate results, and develop and iterate on a load-test harness to measure the responsiveness of a simple microservice. The course will cover how to measure and report system throughput and latency, and how to instrument the system under test to understand where bottlenecks lie.