In search of application agility

Jim Scott

Technical expert and author Jim Scott offers a candid look at the transition from monolithic architectures through microservices and containers to, potentially, serverless computing in this exclusive extract from his latest book.

Converged infrastructure is the key on-ramp to digital transformation. The once-in-a-generation replatforming currently underway is synonymous with infrastructure agility, because enabling next-gen applications requires a next-gen infrastructure.

Application agility depends upon an equally agile infrastructure, and both share similar characteristics. Agile applications are deconstructed, distributed, and dynamically assembled as needed. The whole point of microservices and containers is to eliminate as much vertically integrated, hard-coded logic as possible and to remove dependencies on specific servers and hardware components.

Today, organizations have an unprecedented variety of options for deploying infrastructure. There’s traditional on-premises infrastructure – the data center – as well as public cloud, private cloud, and hybrid cloud. There is also a huge selection of software as a service (SaaS) providers who expose their services through APIs that their customers can use to extend their own applications.

The downside of choice, of course, is complexity. That’s why agile infrastructure should be designed to be deployed on as broad a combination of platforms as possible. Resources should be shared. All platforms should support multi-tenancy for the greatest deployment flexibility. This is even desirable within the on-premises data center. As much as possible, platforms should have common operating systems, containers, automation tools, permissions, security, and even pathnames. If something is in a home directory on-premises, there should be a duplicate home directory and path on a cloud platform, so applications don’t come to a halt over things like simple naming conventions.

IT infrastructure is undergoing a transformation that is no less radical than that being seen in data and applications. Virtualization has brought unprecedented flexibility to resource provisioning, a foundation that containers build upon. Software-defined everything is close to becoming a reality. In the future, infrastructure components, such as storage, networks, security, and even desktops, will be defined in software, enabling resources to be quickly reconfigured and reallocated according to capacity needs.

Agile infrastructure is also highly automated. Processes that once took weeks, such as the introduction of a new storage controller or network switch, can be reduced to minutes with minimal operator intervention. Policy-based automation predicts and adjusts resource requirements automatically. A data-intensive process requiring rapid response can be automatically provisioned with flash storage, while a batch process uses a lower-cost spinning disk.
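
As a rough sketch of what policy-driven storage provisioning can look like, the example below uses the Kubernetes Python client to request volumes from two different tiers by storage class. The class names fast-ssd and standard-hdd are assumptions standing in for whatever tiers a cluster actually defines.

```python
# A minimal sketch: requesting different storage tiers declaratively,
# assuming the cluster defines "fast-ssd" and "standard-hdd" StorageClasses.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

def request_volume(name: str, storage_class: str, size: str) -> None:
    """Create a PersistentVolumeClaim against the named storage tier."""
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name=storage_class,
            resources=client.V1ResourceRequirements(requests={"storage": size}),
        ),
    )
    core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

# Latency-sensitive workload gets flash; the batch job gets spinning disk.
request_volume("analytics-cache", "fast-ssd", "100Gi")
request_volume("nightly-batch", "standard-hdd", "1Ti")
```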

This kind of agility will be essential to achieving comparable nimbleness in applications and data. Users shouldn’t have to care whether software is running on a hard disk or flash. Networks should automatically re-provision according to capacity needs, so that a videoconference doesn’t break down for lack of bandwidth. Storage will be available in whatever quantity is needed. Containers will spin up fully configured with required services. An administrator should be able to make these types of changes without a reengineering effort.

Most importantly, distinctions between on-premises and cloud infrastructure will fade. Open standards and on-premises mirrors of cloud environments, such as Microsoft Azure Stack and Oracle Cloud at Customer, are among the forces moving toward complete cloud transparency. Developers will be able to build on-premises and deploy in the cloud, or vice versa. Users should expect workloads to shift back and forth between environments without their knowledge or intervention. Infrastructure agility is effectively infrastructure transparency.

This kind of flexible, self-provisioning infrastructure will be required to support big data and analytics. In that scenario, agile infrastructure includes the following five foundational principles:

  1. Massive, multi-temperature, reliable, global storage with a single global namespace
  2. A high-scale, asynchronous, occasionally connected, global streaming data layer that is persistent (because who knows when a network connection will fail)
  3. Support for multiple types of analytical workloads, machine learning, and compute engines
  4. The ability to operationalize whatever happens, with operational applications running on the same platform
  5. Utility-grade cloud architecture with disaster recovery (DR), workload management, and scale-out

Seen this way, the next-generation infrastructure is not an incremental improvement of existing approaches. It truly is a radical replatforming, able to bridge new modern applications with legacy systems.

Big data platforms are changing the way we manage data. Legacy systems often require throwing away older data, making trade-offs about which data to maintain, moving large data sets from one silo to another, or spending exorbitant amounts to handle growth. Those compromises are becoming a thing of the past. Scale, speed, and agility are front and center in the modern data architectures designed for big data, and data integrity, security, and reliability remain critical goals as well. The notion of a 'converged application' represents the next generation of business applications for today and the future.

Containers and clouds

One of the most compelling advantages of cloud computing is developer productivity. Developers can quickly spin up their own cloud instances, provision the tools they want, and scale up and down easily.

Containers are an ideal tool for developers shifting between on-premises, private cloud, and public cloud architectures. Because container images carry their dependencies with them and are largely independent of the underlying host and infrastructure, they can be moved quickly and with minimal disruption. Some organizations even use multiple public clouds and shift workloads back and forth, depending upon price and special offers from the service providers. Containers make this process simple.
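
As a small illustration of that portability, the sketch below uses the Docker SDK for Python to pull and run the same image against two different Docker hosts. The endpoint URLs are placeholders for an on-premises daemon and a cloud daemon you actually control.

```python
# A minimal sketch of image portability using the Docker SDK for Python.
# Both endpoints are hypothetical; only the target changes, not the image.
import docker

ENDPOINTS = {
    "on_prem": "tcp://docker.onprem.example.com:2376",      # hypothetical
    "public_cloud": "tcp://docker.cloud.example.com:2376",  # hypothetical
}

for name, url in ENDPOINTS.items():
    client = docker.DockerClient(base_url=url)
    client.images.pull("nginx:1.25")
    # The same image runs unchanged on either target.
    client.containers.run("nginx:1.25", name=f"web-{name}",
                          detach=True, ports={"80/tcp": 8080})
```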

Networking containers with Kubernetes

Microservices bring their own set of security challenges. Instead of protecting a few monolithic applications, administrators must attend to a much larger number of federated services, all communicating with one another and generating a large amount of network traffic. Service discovery is a capability that enables administrators to automatically identify new services by pinpointing real-time service interactions and performance.
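
To make the discovery idea a little more concrete, here is a minimal sketch that uses the official Kubernetes Python client to enumerate the services in a namespace along with the pod endpoints currently backing each one. The namespace name is an assumption.

```python
# A minimal sketch of service discovery via the Kubernetes API:
# list every Service in a namespace and the pod addresses behind it.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

namespace = "production"  # assumed namespace name
for svc in core.list_namespaced_service(namespace).items:
    endpoints = core.read_namespaced_endpoints(svc.metadata.name, namespace)
    addresses = [
        addr.ip
        for subset in (endpoints.subsets or [])
        for addr in (subset.addresses or [])
    ]
    print(f"{svc.metadata.name}: {addresses}")
```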

Cluster-based micro-segmentation is another useful tool. Network segments can be set up with their own security policies at a high level – for example, separating the production environment from the development environment – or in a more granular fashion, such as governing interactions between a CRM system and customer financial information. These policies are enforced at the cluster level.
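
As a rough illustration of how such a policy might be expressed on a Kubernetes cluster, the sketch below creates a NetworkPolicy that only admits traffic to the financial data pods from pods labelled as the CRM tier. The label values and namespace are assumptions for illustration.

```python
# A minimal sketch of micro-segmentation with a Kubernetes NetworkPolicy:
# only pods labelled app=crm may reach pods labelled app=financial-db.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-crm-to-financial"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "financial-db"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "crm"})
                )]
            )
        ],
    ),
)
net.create_namespaced_network_policy(namespace="production", body=policy)
```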

Automation is also the security administrator’s friend. The complexity of a containerized microservices environment naturally lends itself to human error. Using automation for tasks such as defining policies and managing SSL certificates significantly reduces that risk.

In the early days of the container wave, security was a weak point of the technology. Much progress has been made in just the past few years, though. By using the techniques noted above, your containerized microservices environment should be no less secure than your existing VMs.

New technologies are rapidly emerging to make containers more appropriate for enterprise-class applications. Kubernetes (K8s) is a highly functional and stable platform that is quickly becoming the favoured orchestration manager for organizations that are adopting containerized microservices, and it has been a big step toward making containers mainstream.

Kubernetes introduced a high-level abstraction called a “pod”, a group of interdependent containers that are deployed and managed as a unit, which simplifies the administration of large containerized environments. Kubernetes also handles load balancing to ensure that each container gets the necessary resources. It monitors container health and can automatically roll back changes or shut down containers that don’t respond to pre-defined health checks. It automatically restarts failed containers, reschedules containers when nodes die, and can shift containers seamlessly between servers on-premises and in the cloud. Altogether, these features give IT organizations unprecedented productivity benefits, enabling a single administrator to manage thousands of containers running simultaneously.
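
A minimal sketch of those health-check and restart mechanics, using the Kubernetes Python client: the pod below declares a liveness probe, and Kubernetes restarts the container whenever the probe fails repeatedly. The image name and health endpoint are assumptions.

```python
# A minimal sketch of a pod whose container is restarted automatically
# when its liveness check fails. Image name and health path are assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

container = client.V1Container(
    name="orders-api",
    image="registry.example.com/orders-api:1.4.2",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=8080)],
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=10,
        period_seconds=15,
        failure_threshold=3,  # restart after three consecutive failures
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="orders-api", labels={"app": "orders"}),
    spec=client.V1PodSpec(containers=[container], restart_policy="Always"),
)
core.create_namespaced_pod(namespace="production", body=pod)
```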

Security

Having a robust but flexible security architecture is integral to the success of these technologies. The use of containers and microservices may significantly increase the number of instances running in your organization compared to virtual machines, which demands attention to security policies and to the physical location of containers. Without proper controls, you would want to avoid, for example, running a public web server and an internal financial application in containers on the same server: someone who compromises the web server to gain administrative privileges might be able to access data in the financial application.

Containers increase the complexity of the computing infrastructure because they can be dynamically orchestrated across services or even across multiple clouds. Self-provisioning means that administrators don’t necessarily know which containers are running, and a container’s IP address may be invisible outside of the local host.

Containers differ from virtual machines when it comes to security. They rely on the same kernel security features as LXC containers, an operating-system-level virtualization method for running multiple isolated Linux systems on a control host with a single Linux kernel. When a container is started, the runtime creates a set of namespaces and control groups for it. Namespaces ensure that processes running within a container cannot see or interfere with processes running in other containers. Each container also has its own network stack, which prevents privileged access to the sockets or interfaces of another container.
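
A quick way to see that namespace isolation at work, assuming a local Docker daemon and the Docker SDK for Python: a process listing taken inside a fresh container shows only the container’s own processes, not the host’s.

```python
# A minimal sketch of PID namespace isolation: a freshly started container
# sees only its own process tree, not the host's or other containers'.
import docker

client = docker.from_env()

# Run `ps` inside a throwaway container and print what it can see.
output = client.containers.run("alpine:3.19", command="ps", remove=True)
print(output.decode())
```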

Containers can interact with each other through specified ports for actions like pinging, sending and receiving packets, and establishing TCP connections, all of which can be regulated by security policies. In effect, containers behave like physical machines connected through a common Ethernet switch.

Stateful containers present somewhat more complicated security considerations because they connect directly to underlying storage. This raises the possibility that a rogue container could intercept a neighbour’s read or write operations and compromise its privileges. Using a persistent data store with built-in security minimizes that risk. The data store should have the following features:

  • A pre-built, certified container image with predefined permissions. This image should be used as a template for any new containers, so that new security issues aren’t introduced.
  • Security tickets. Users can pass a MapR ticket file into the container at runtime, with all data access authorized and audited according to the authenticated identity in the ticket file. This ensures that operations are performed as the authenticated user. A separate ticket should be created for each container that is launched (see the sketch after this list).
  • Secure authentication at the container level. This ensures that containerized applications only have access to data for which they are authorized.
  • Encryption. Any storage- or network-related communications should be encrypted.
  • Configuration via Dockerfile scripts. These can be used as a basis for defining security privileges, with the flexibility to customize the image for specific application needs.
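
As a hedged sketch of the ticket approach described above (not MapR’s official tooling), the example below uses the Docker SDK for Python to mount a per-container ticket file read-only at launch and point the application at it. The paths, image name, and the MAPR_TICKETFILE_LOCATION environment variable are assumptions to verify against your platform’s documentation.

```python
# A minimal sketch: inject a per-container ticket file read-only at launch,
# so data access inside the container is authorized and audited as the
# ticket's identity. Paths, image name, and the environment variable are
# assumptions for illustration.
import docker

client = docker.from_env()

TICKET_ON_HOST = "/secure/tickets/orders-app.ticket"   # hypothetical path
TICKET_IN_CONTAINER = "/tmp/maprticket"

client.containers.run(
    "registry.example.com/orders-app:1.0",  # hypothetical image
    detach=True,
    volumes={TICKET_ON_HOST: {"bind": TICKET_IN_CONTAINER, "mode": "ro"}},
    environment={"MAPR_TICKETFILE_LOCATION": TICKET_IN_CONTAINER},
)
```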

This is an excerpt from the book, A Practical Guide to Microservices and Containers: Mastering the Cloud, Data, and Digital Transformation – you can download the eBook version here.

Author

Jim Scott

Jim Scott is Director, Enterprise Strategy & Architecture at MapR and an experienced leader having worked in financial services, regulatory, digital advertising, IoT, manufacturing, healthcare, chemicals and geographical management systems. He is a cofounder of the Chicago Hadoop Users Group (CHUG) where he helped grow a now flourishing community around next generation technologies. Scott has built systems scaling to 50+ billion transactions per day, and his work with high-throughput computing at Dow Chemical was a precursor to more standardized big data concepts. His passion is in building combined big data and blockchain solutions.