Debugging tests left you tired and scarred? Gauge is here to stop the bleeding!
Do you value test automation highly, but find that debugging tests gives you a hard time? Meet Gauge, an open source test automation framework that's here to take care of your test wounds!
At each stage of building a pipeline, teams rely on tests to detect and fix problems quickly. As the team behind Gauge realized, and sometimes brutally experienced, it is tough to maintain and debug tests, especially acceptance tests, even if you follow the test pyramid.
And that was the driver behind the conception of Gauge!
Over time, teams over-engineer test suites with design opinions, making it harder and harder to write new tests or maintain existing ones. We believe it's important to minimize or eliminate this bloated design process. Gauge removes this overhead by making test automation a natural part of the software development cycle and by removing the hurdles that stand in the way of writing and maintaining acceptance tests.
In short, Gauge is:
- An easy-to-set-up single binary with no dependencies, available for all major languages and platforms.
- Easy to learn with markdown syntax and a simple implementation.
- Easy to maintain by focusing on creating readable and reusable tests without design overhead and with less code.
- Easy to extend with its modular design and plugins.
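To give a sense of that markdown syntax, here is a minimal, hypothetical Gauge specification (the spec name, scenario, and steps below are illustrative, not from the Gauge docs): a top-level heading names the specification, a `##` heading opens a scenario, and each `*` bullet is a step that maps to a function in your implementation language.

```markdown
# Customer Login

## Successful login
* Navigate to the login page
* Enter username "john" and password "secret"
* Verify the dashboard is displayed
```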
What the future holds
Moving out of beta is a big step, but it is still just the beginning. The Gauge roadmap gives us a taste of what appears to be a very bright future. To mention just one of the upcoming features, version 1.1 will come with support for .NET Core in VS Code.
Eager to give it a try? Get started here.
While we are on the topic of testing, check out our interview with Mark Price on the most important dos and don’ts in testing.
JAXenter: How can we determine that the tests are reflecting what is actually happening accurately?
Mark Price: This is probably the starting point for building a test harness. First, we need to understand how systems are being used in production, so some form of traffic analysis is required (e.g. looking at requests per second, the distribution of request types, etc.). While the test harness is being developed and run, we can analyze the traffic coming from the harness to make sure that it has the same "shape" as that which is seen in production. It's also possible to automate this as an extra validation step to ensure that production traffic loads are not diverging from the test model.
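The shape comparison Price describes can be sketched as a simple per-request-type distribution check. Everything below (the request types, the 5% tolerance, the helper names) is an illustrative assumption, not part of any real harness:

```python
from collections import Counter

def traffic_shape(requests):
    """Normalize request-type counts into a distribution (the traffic 'shape')."""
    counts = Counter(requests)
    total = sum(counts.values())
    return {rtype: n / total for rtype, n in counts.items()}

def shapes_diverge(prod, harness, tolerance=0.05):
    """True if any request type's share differs by more than `tolerance`."""
    types = set(prod) | set(harness)
    return any(abs(prod.get(t, 0.0) - harness.get(t, 0.0)) > tolerance
               for t in types)

# Toy samples standing in for logged production and harness traffic.
production = ["read"] * 80 + ["write"] * 15 + ["search"] * 5
harness = ["read"] * 78 + ["write"] * 17 + ["search"] * 5

prod_shape = traffic_shape(production)
harn_shape = traffic_shape(harness)
print(shapes_diverge(prod_shape, harn_shape))  # prints False: within 5% per type
```

Running the same check periodically against fresh production logs is the automated validation step he mentions: if the real traffic mix drifts away from the test model, the divergence flag trips.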
JAXenter: What’s the biggest challenge in testing? How do you mitigate it?
Mark Price: Performance regressions are among the hardest things to track down. Due to the duration of most performance tests (they may run for several minutes, or even hours), there is usually a lot of change incorporated into each run. When a regression occurs, it is very useful to use something like `git bisect` to track down the change that caused it. Sometimes, a regression is caused by a configuration change at the system level, so it is also important to have a record of any changes made external to the actual application (e.g. OS updates).
See the full interview here.