Introducing the Varnish API Engine
A new API management tool is promising 20,000 API calls per second in a lightweight first release that’s optimised for mobile and IoT. Varnish CTO Per Buer walks us through version 1.0 of the Varnish API Engine.
Over the last couple of years there’s been an explosion in the use of HTTP-based APIs. We’ve seen them go from being a rather slow but interesting technology fifteen years ago to today’s high-performance RESTful interfaces. These now power much of the web and most of the app space, and connect the ‘things’ with the internet.
However, many of today’s API management tools date back to the time of the API explosion 15 years ago. That was a time when API calls were counted per hour and performance wasn’t much of an issue. If you look at API management tool evaluations, they typically have long lists of criteria, with performance usually left off. That might be fine in certain environments, but not where IoT and mobile are concerned. For these environments the number of API calls has increased to the point that even the typical rate of 200 API calls per second is no longer enough.
Reducing licensing costs and server farms
Relying on these solutions to scale APIs is cumbersome and expensive. In order to deliver something like 10,000 managed API calls per second, some of the bigger API publishers require a farm of up to 50 servers. The licensing and operational costs shoot through the roof.
Performance is central to Varnish. The whole reason the Varnish Cache project got started back in 2005 was poor HTTP caching performance. Since its inception, Varnish Cache has been used for HTTP-based APIs. Varnish Cache’s principal configuration mechanism is the Varnish Configuration Language (VCL), a domain-specific language. VCL’s combination of caching, performance and flexibility made it an ideal proxy for APIs.
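To give a flavour of what VCL looks like as an API proxy, here is a minimal sketch in standard Varnish Cache VCL 4.0. The backend address and the 30-second TTL are illustrative choices, not part of the API Engine itself:

```vcl
vcl 4.0;

# Backend serving the API; host and port are illustrative.
backend api {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Cache only safe, idempotent API calls; pass everything else
    # straight to the backend.
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
}

sub vcl_backend_response {
    # Cache JSON API responses for 30 seconds unless the backend
    # sets its own caching headers.
    if (beresp.http.Content-Type ~ "application/json") {
        set beresp.ttl = 30s;
    }
}
```

A handful of lines like these turn Varnish into a caching API proxy, which is the foundation the API Engine builds on.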
A couple of years back a customer asked us to expand on their Varnish-based API proxy. They wanted authentication and authorization in the proxy layer in addition to the caching. This wasn’t long after we’d created Varnish Paywall, a content control tool that enables users to set rules on how viewers can access controlled content. That made us confident in Varnish as a security gateway for HTTP.
Since then we’ve gradually added more features, like metering and throttling. At a certain point we realized that we had all the bits for an API management tool: Varnish API Engine.
Built on Varnish, the Varnish API Engine focuses primarily on performance. Early customer trials indicate that it can handle more than 20,000 API calls per second. It concentrates on the basic functions of API management, excluding any fancy features that could slow down the engine.
The initial release supports the following:
- Security: The Varnish API engine adds an authentication and authorization layer on your API. Authentication happens through API keys and authorization rules can be added to grant access to individual APIs based on the client’s identity.
- Throttling: Throttling of API requests is essential to avoid running your API into the ground. The API engine allows you to set quotas on the extent to which clients can access each API.
- Metering: The API Engine leverages VCS (Varnish Custom Statistics) to gather data on how API calls flow through the solution. Usage per key and per API, timing information, error rates, etc. are gathered in VCS.
- Caching: Since this is built on Varnish it naturally offers caching (except for SOAP-based APIs).
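The security and throttling features above can be sketched in plain VCL. The following is an illustrative fragment, not the API Engine’s actual implementation: the `X-API-Key` header name and the quota numbers are assumptions, and the throttling uses the `vsthrottle` VMOD from the open-source varnish-modules collection:

```vcl
vcl 4.0;

import vsthrottle;

sub vcl_recv {
    # Authentication: reject requests without an API key
    # (header name is illustrative).
    if (!req.http.X-API-Key) {
        return (synth(401, "Missing API key"));
    }

    # Throttling: allow each key at most 100 requests per
    # 10 seconds; deny with 429 once the quota is exceeded.
    if (vsthrottle.is_denied(req.http.X-API-Key, 100, 10s)) {
        return (synth(429, "Quota exceeded"));
    }
}
```

In the product these rules are managed centrally rather than hand-written, but the underlying mechanics are the same.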
The product itself is API-driven and it takes the average developer less than 60 minutes to deploy. It comes with two interfaces for administration. One is an API, naturally, and the other is a command line interface (CLI). Work on a graphical interface is underway.
In version 1.0, deployment is driven through the CLI, which configures the engine by generating VCL.
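As a rough idea of what such generated configuration might look like, here is a hypothetical VCL fragment routing requests to published APIs by URL prefix. The backend names, addresses and URL paths are all invented for illustration and are not taken from the product:

```vcl
vcl 4.0;

# Illustrative backends for two published APIs.
backend orders    { .host = "10.0.0.10"; .port = "8080"; }
backend customers { .host = "10.0.0.11"; .port = "8080"; }

sub vcl_recv {
    # Route each request to the backend owning its URL prefix;
    # anything outside the published APIs gets a 404.
    if (req.url ~ "^/v1/orders/") {
        set req.backend_hint = orders;
    } elsif (req.url ~ "^/v1/customers/") {
        set req.backend_hint = customers;
    } else {
        return (synth(404, "Unknown API"));
    }
}
```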
The API Engine is licensed on a per-server basis. A basic deployment starts with a three-node cluster plus an admin server.