Traffic management technology

Intelligent traffic management in the modern application ecosystem

Kris Beevers

As application architecture continues to undergo change, modern applications are now living in increasingly distributed and dynamic infrastructure. Meanwhile, DNS and traffic management markets are finally shifting to accommodate the changing reality.

Internet-based applications are built markedly differently today than they were even just a few years ago. Application architectures are largely molded by the capabilities of the infrastructure and core services upon which the applications are built. In recent years we’ve seen tectonic shifts in the ways infrastructure is consumed, code is deployed and data is managed.

A decade ago, most online properties lived on physical infrastructure in colocation environments, with dedicated connectivity and big-iron database back ends, managed by swarms of down-in-the-muck systems administrators with arcane knowledge of config files, firewall rules and network topologies. Applications were deployed in monolithic models, usually in a single datacentre – load balancers fronting web heads backed by large SQL databases, maybe with a caching layer thrown in for good measure.

Since the early 2000s, we’ve seen a dramatic shift toward “cloudification” and infrastructure automation. This evolution has led to an increase in distributed application topologies, especially when combined with the explosion of database technologies that solve replication and consistency challenges, and configuration management tools that keep track of dynamically evolving infrastructures. Today, most new applications are built to be deployed — at minimum — in more than one datacentre, for redundancy in disaster recovery scenarios. Increasingly, applications are deployed at the far-flung “edges” of the Internet to beat latency and provide lightning-fast response times to users who’ve come to expect answers (or cat pictures) in milliseconds.

As applications become more distributed, the tools we use to get eyeballs to the “right place” and to provide the best service in a distributed environment have lagged behind. When an application is served from a single datacentre, the right service endpoint to select is obvious and there’s no decision to be made, but the moment an application is in more than one datacentre, endpoint selection can have a dramatic impact on user experience.

Imagine someone in California interacting with an application served out of datacentres in New York and San Jose. If the user is directed to a server in New York, they’ll almost always have a significantly worse experience than if they’d connected to a server in San Jose: an additional 60-80 milliseconds of round-trip time is tacked onto every request sent to New York, drastically degrading the application’s performance. Modern sites often embed 60-70 assets in a page, and poor endpoint selection can inflate the time to load every single one of them.
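To see why this adds up, here is a rough back-of-envelope estimate of the cumulative cost of that extra round trip. All figures are illustrative assumptions drawn from the ranges above (asset count, added RTT, browser connection count), not measurements:

```python
# Back-of-envelope estimate of the extra page load time caused by
# poor endpoint selection. All numbers are illustrative assumptions
# from the scenario above, not measurements.

EXTRA_RTT_MS = 70   # added round trip to the distant datacentre
ASSETS = 65         # embedded assets on a typical page
CONNECTIONS = 6     # parallel browser connections (assumption)

# Each connection fetches roughly ASSETS / CONNECTIONS assets in
# sequence, and every fetch pays the extra round trip at least once.
extra_load_time_ms = (ASSETS / CONNECTIONS) * EXTRA_RTT_MS
print(f"~{extra_load_time_ms:.0f} ms of added load time")
```

Even under these conservative assumptions, routing the user to the wrong coast can add most of a second to the page load.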

Solving endpoint selection

How have we solved endpoint selection problems in the past? The answer is, we haven’t – at least, not very effectively.

If you operate a large network and have access to deep pockets and a lot of low-level networking expertise, you might take advantage of IP anycasting, a technique for routing traffic to the same IP address across multiple datacentres. Anycasting has proven too costly and complex to be applied to most web applications.

Most of the time, endpoint selection is solved by DNS, the domain name system that translates hostnames to IP addresses. A handful of DNS providers support simple notions of endpoint selection for applications hosted in multiple datacentres. For example, the provider might ping your servers, and if a server stops responding, it is removed from the endpoint selection rotation. More interestingly, the provider may use a GeoIP database or other mechanism to take a guess at who’s querying the domain and where they’re located, and send the user to the geographically closest application endpoint. These two simple mechanisms form the basis of many large distributed infrastructures on the Internet today, including some of the largest content delivery networks (CDNs).
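These two mechanisms — drop unhealthy endpoints, then answer with the geographically closest survivor — can be sketched in a few lines. The endpoint records, coordinates and IPs below are illustrative, not any provider’s real API:

```python
import math

# Minimal sketch of the two classic DNS traffic management mechanisms:
# (1) health-check filtering, (2) GeoIP-style closest-endpoint routing.
# Endpoint data and coordinates are illustrative assumptions.
ENDPOINTS = [
    {"name": "nyc", "ip": "198.51.100.1", "lat": 40.7, "lon": -74.0, "healthy": True},
    {"name": "sjc", "ip": "203.0.113.1", "lat": 37.3, "lon": -121.9, "healthy": True},
]

def distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (haversine formula)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def select_endpoint(user_lat, user_lon):
    # 1. Health check: servers that stop responding leave the rotation.
    alive = [e for e in ENDPOINTS if e["healthy"]]
    # 2. GeoIP routing: answer with the closest surviving endpoint.
    return min(alive, key=lambda e: distance(user_lat, user_lon, e["lat"], e["lon"]))

# A Californian user (approximate GeoIP coordinates) gets San Jose.
print(select_endpoint(36.8, -119.8)["name"])
```

In a real deployment the user’s coordinates come from a GeoIP lookup on the resolver’s (or client’s) IP address, which is itself only a guess — one reason this approach has limits.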


Modern DNS and traffic management providers are beginning to incorporate real-time feedback from application infrastructures, network sensors, monitoring networks and other sources into endpoint selection decisions. While basic health checking and geographic routing remain tools of the trade, more complex and nuanced approaches for shifting traffic across datacentres are emerging. For example, some of the largest properties on the Internet, including major CDNs, are today making traffic management decisions based not only on whether a server is “up” or “down,” but on how loaded it is, in order to utilize the datacentre to capacity, but not beyond.
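A load-aware scheme of this kind might look like the following sketch: endpoints at or beyond a capacity threshold leave the rotation entirely, and the rest receive traffic in proportion to their spare capacity. The load figures and threshold are hypothetical:

```python
import random

# Hypothetical sketch of load-aware endpoint selection: fill each
# datacentre toward capacity, but not beyond. Load values are
# illustrative fractions of capacity reported by the infrastructure.

def pick_by_load(endpoints, capacity_threshold=0.9):
    # Endpoints at or beyond the threshold leave the rotation entirely.
    usable = [e for e in endpoints if e["load"] < capacity_threshold]
    if not usable:
        raise RuntimeError("all datacentres at capacity")
    # Weight the survivors by spare capacity, so a nearly-full
    # datacentre still serves, but receives far less new traffic.
    weights = [capacity_threshold - e["load"] for e in usable]
    return random.choices(usable, weights=weights, k=1)[0]

endpoints = [
    {"name": "nyc", "load": 0.85},  # nearly full: gets little new traffic
    {"name": "sjc", "load": 0.30},  # plenty of headroom: gets most traffic
]
print(pick_by_load(endpoints)["name"])
```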

Several traffic management providers have emerged that measure response times and other metrics between an application’s end users and its datacentres. These solutions leverage that data in real time to route each user to the application endpoint currently providing the best service for the user’s network, ditching geographic routing altogether. Additional traffic management techniques, previously impossible in the context of DNS, are finding their way to market, such as endpoint stickiness, complex weighting and prioritizing of endpoints, ASN and IP prefix based endpoint selection and more.
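The core idea — route on measured latency per user network rather than on geography — reduces to a lookup and a minimum. The per-ASN measurement table below is a hypothetical stand-in for the real-time telemetry such providers collect:

```python
# Sketch of latency-based routing: pick the endpoint with the best
# measured response time for the querying user's network. The RTT
# table is an illustrative stand-in for real-time telemetry.

# Median recent RTTs (ms) per user network (ASN) — hypothetical data.
RTT_BY_ASN = {
    7922:  {"nyc": 78, "sjc": 22},   # a west-coast ISP: sjc is faster
    12271: {"nyc": 15, "sjc": 81},   # an east-coast ISP: nyc is faster
}

def route(asn, default="nyc"):
    measurements = RTT_BY_ASN.get(asn)
    if not measurements:
        return default  # no telemetry yet: fall back to a static answer
    # Answer with the endpoint currently serving this network fastest.
    return min(measurements, key=measurements.get)
```

Note the fallback: real-user measurements are never complete, so geographic or static routing typically remains as a backstop for networks with no telemetry.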

The mechanisms and interfaces for managing DNS configuration are improving, as new tools mature for making traffic management decisions in the context of DNS queries. While legacy DNS providers restrict developers to a few proprietary DNS record types to enact simplistic traffic management behaviours, modern providers offer far more flexible toolkits. These enable developers either to write actual code that makes endpoint selection decisions, or to use flexible, easy-to-use rules engines that mix and match traffic routing algorithms into complex behaviours.
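A toy rules engine in this spirit chains small routing filters, each narrowing or reordering the answer set for a query. The filter names and record layout below are hypothetical, not any particular provider’s API:

```python
# Toy "rules engine": small routing filters chained in order, each
# narrowing or reordering the candidate answers for a DNS query.
# Filter names and record fields are hypothetical illustrations.

def up(answers, query):
    return [a for a in answers if a["up"]]          # health filter

def geofence(answers, query):
    in_region = [a for a in answers if a["region"] == query["region"]]
    return in_region or answers                     # fall through if no match

def priority(answers, query):
    return sorted(answers, key=lambda a: a["prio"]) # prefer lower priority value

def resolve(answers, query, filters):
    for f in filters:        # apply each rule in order
        answers = f(answers, query)
    return answers[0]["ip"]  # first remaining answer wins

answers = [
    {"ip": "198.51.100.1", "up": True,  "region": "us-east", "prio": 2},
    {"ip": "203.0.113.1",  "up": True,  "region": "us-west", "prio": 1},
    {"ip": "192.0.2.1",    "up": False, "region": "us-west", "prio": 0},
]
print(resolve(answers, {"region": "us-west"}, [up, geofence, priority]))
```

Reordering or swapping filters yields different routing behaviours from the same building blocks, which is the appeal of the rules-engine approach.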

What’s next for traffic management technology?

As with many industries, traffic management will be driven by data. Leading DNS and traffic management providers, such as NSONE, already leverage telemetry from application infrastructure and Internet sensors. The volume and granularity of this data will only increase, as will the sophistication of the algorithms that act on it to automate traffic management decisions.

DNS and traffic management providers have found additional uses for this data outside of making real-time endpoint selection decisions. DNS providers are already working with larger customers to leverage DNS and performance telemetry to identify opportunities for new datacentre deployments to maximize performance impact. DNS based traffic management will be an integral part of a larger application delivery puzzle that sees applications themselves shift dynamically across datacentres in response to traffic, congestion and other factors.

Applications and their underlying infrastructure have changed significantly in the last decade. Now, the tools and systems we rely on to get users to the applications are finally catching up.


Kris Beevers

Kris is an internet infrastructure geek and serial entrepreneur who’s started two companies, built the tech for two others, and has a particular specialty in architecting high-volume, globally distributed internet infrastructure. Before NSONE, Kris built CDN, cloud, bare metal, and other infrastructure products at Voxel, a NY-based hosting company that sold to Internap (NASDAQ:INAP) in 2011.
