Modularity, Microservices and Containers
Modularity makes complexity manageable. As long as design rules are obeyed, different parts of a modular application may be independently configured, deployed and upgraded.
Microservices and containerization are the latest examples of this ongoing drive towards greater modularity. Microservices improve modularity by building applications from internal lightweight services, each of which may be independently replaced. Containerization improves environment isolation: each component is isolated from its physical platform and from other tenants by its host container. The small size of container images, compared to Virtual Machine images, makes it practical to deploy multiple individual application services in containers.
Recognizing the fundamental importance of modularity, OSGi™ was established as the modularity framework for Java fifteen years ago. Today, OSGi’s Service-centric approach, Requirements & Capabilities model and Remote Services are powerful and generically applicable concepts that provide the basis for a compelling microservices and containerized solution.
Let’s examine this in more detail to see how OSGi is a great fit for developing and deploying microservices-based applications, including dependencies that can be provided by containerized service instances.
Microservices is an architectural style in which a single application is composed of small, independent processes that use lightweight communications and language-agnostic APIs.
A microservices architecture is different from a service-oriented architecture (SOA) because microservices belong to a single application while SOA services are typically used to integrate between multiple business applications.
While there is no precise definition of the term ‘microservices,’ it can be characterized by the following features:
- easy to replace
- can be deployed independently and automatically
- typically run as separate processes
- lightweight communication (often REST over HTTP)
- can be implemented using different programming languages
These characteristics facilitate building modular applications. However, microservices also have some drawbacks:
- separate processes add complexity and new problems including network latency and serialization overheads
- the management of dependencies and deployment is more complex
Let’s see how OSGi services can help:
Services have been at the heart of OSGi since its creation. They have been called microservices ever since SOA adopted the word ‘service’ for its much heavier abstraction. Now that the term microservice is gaining popularity, OSGi services are often referred to as µServices, to distinguish them from the comparatively heavier REST-based microservices.
OSGi µServices fit into the general microservices definition: lightweight communications, independently deployable etc, but they can also mitigate some of the criticisms of microservices cited above.
Service boundaries are hard to predict in advance, so rather than forcing all services to run as separate processes, OSGi allows the decision to go remote to be deferred until deployment.
OSGi has a flexible capabilities model for handling service (and other) dependencies. Components specify the capabilities they provide and those they require, which facilitates automated resolution and deployment.
Let’s look at an example from the recent OSGi IoT Demo at the 2015 OSGi Community Event:
We have a TrackController component that requires SignalSegmentController services to operate the train signals. The requirement is indicated by the @Reference annotation:
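A minimal sketch of such a component is shown below. The class body and method names are illustrative assumptions, not the demo’s actual code:

```java
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

@Component
public class TrackController {

    // Declarative Services injects every available SignalSegmentController,
    // whether it runs locally or (via Remote Services) on another machine.
    @Reference(cardinality = ReferenceCardinality.MULTIPLE,
               policy = ReferencePolicy.DYNAMIC)
    void addSignalSegmentController(SignalSegmentController signal) {
        // track the signal controller for this segment
    }

    // DS infers this as the unbind method for addSignalSegmentController
    void removeSignalSegmentController(SignalSegmentController signal) {
        // handle the signal controller going away
    }
}
```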
The TrackController does not specify whether the SignalSegmentController is running locally, in the same process or on a remote machine. During initial development, it is often convenient to run services locally to reduce testing time.
The local SignalSegmentController services are defined as shown below. This actually defines a factory service so that a new instance is created for each signal configuration. The @Component annotation implicitly declares the component as providing a SignalSegmentController service.
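A sketch of such a local implementation follows; the class name, the use of a required factory configuration, and the `setSignal` method are illustrative assumptions:

```java
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ConfigurationPolicy;

// ConfigurationPolicy.REQUIRE means no instance is created until a
// configuration exists; with factory configurations, Declarative Services
// creates one component instance per signal configuration.
@Component(configurationPolicy = ConfigurationPolicy.REQUIRE)
public class LocalSignalSegmentController implements SignalSegmentController {

    // Implementing SignalSegmentController implicitly registers this
    // component as a provider of that service.
    @Override
    public void setSignal(SignalState state) {
        // drive the (simulated) signal hardware
    }
}
```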
The OSGi Declarative Services runtime automatically constructs the configured Signal services and injects them into the TrackController. This allows us to quickly test in a single process.
In the IoT demo, the physical signals are controlled by remote Raspberry Pis. So we need to move the service boundaries. How do we modify the application so that the Signal services can run remotely? There are a few options, but the simplest is to use OSGi Remote Services.
As the OSGi services model is already dynamic (it can handle services disappearing and reappearing), it is possible to inject proxies to remote services without changing the application. The extra failure conditions that remoting introduces can be handled within the existing services model.
All that a service provider needs to do is indicate their willingness for their service to be exported remotely. This is done by adding the highlighted line to the Signal service definition:
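The “highlighted line” corresponds to the standard `service.exported.interfaces` property. In Declarative Services it can be set via the component’s `property` element, roughly as follows (other details as in the local sketch above):

```java
@Component(
    configurationPolicy = ConfigurationPolicy.REQUIRE,
    property = "service.exported.interfaces=*"   // <-- makes the service remotable
)
public class LocalSignalSegmentController implements SignalSegmentController {
    // implementation unchanged
}
```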
The service could also be made remotable, without modifying the code, by setting the same property using Configuration Admin.
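As a sketch, using the standard Configuration Admin API (the PID used here is illustrative):

```java
import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

// Given a ConfigurationAdmin instance (e.g. injected with @Reference),
// set the export property on the signal component's configuration:
Configuration cfg = configAdmin.getConfiguration("com.example.signal", "?");
Dictionary<String, Object> props = cfg.getProperties();
if (props == null) {
    props = new Hashtable<>();
}
props.put("service.exported.interfaces", "*");
cfg.update(props);
```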
The OSGi runtime also needs to be configured with a Remote Services distribution provider, but this does not require any changes to the application code. It is now easy to deploy Signal services to multiple Raspberry Pis and have them discovered and injected into a (remote) TrackController.
An alternative distribution mechanism is to use REST layered over OSGi services, using the whiteboard pattern. This is currently being considered for standardization in OSGi R7 via OSGi Alliance RFP-173: JAX-RS Services (publicly available under RFPs at GitHub). Until then, https://github.com/hstaudacher/osgi-jax-rs-connector works well.
Operating system virtualization isn’t new. It started back in the 1960s, but it wasn’t until 1999 that VMware introduced its first x86 virtualization product. Virtualization became hugely popular as a method to run workloads of all sizes, as well as enabling the many self-service cloud virtualization portals available today.
Virtual Machine images provided a mechanism to reliably deploy applications, as all their dependencies, including the operating system and patches, were contained in the image. However, this is also a problem: images can be very large (~500 MB), since they contain a whole operating system as well as the target application, and performance is slower due to the virtualization layer.
Containers are an approach that overcomes some of these Virtual Machine image shortcomings, and it’s worth recognizing that operating system container technology has also been around for a long time:
- FreeBSD Jails (2000)
- Solaris Zones (2004)
- Linux cgroups (2007) – various projects use cgroups as their basis, including:
  - LXC (LinuX Containers, 2008)
  - CoreOS (2013)
  - Docker (2013)
Container instances share the underlying operating system kernel while providing an isolated environment for their target application. This allows container images to be much smaller than VM images, and containers to run faster, because they use the operating system’s normal system call interface without an extra virtualization layer.
Containers have many advantages:
- Simplify application deployment using container images
  - similar to using VM images, except they are much smaller
- Consistent way to ship and run applications
  - once you know how to run a Docker image, you can run any image
- Increased security/isolation
  - compared to running multiple applications on the same host
- Powerful for deploying opaque, 3rd party components of your application
  - MySQL, Hadoop, ZooKeeper etc
- Allows OSGi applications to be deployed just like any other container image
  - bndtools 3.2 may support “export as Docker image”
However, containers also have some downsides. Their main disadvantage compared to virtual machines is that they share the underlying operating system kernel so cannot, for example, run any flavor of Windows on an underlying Linux kernel.
Plus, containers don’t solve the following problems:
- Modularity and Technical Debt
  - The application code is not changed by containerization. This is great for deploying opaque 3rd party apps, but it makes it all too easy to create new containerized monolithic applications that are just as hard to maintain and evolve as their non-containerized counterparts.
- Dependency management
  - The applications in a container may rely on having a particular OS kernel version, CPU architecture, or access to a GPU; there is no way to describe or enforce this.
  - Dependent services – we don’t want to embed MySQL and ZooKeeper in our application image, so how do we specify and resolve the dependencies?
  - Mechanisms to deploy an application and connect its dependencies across multiple distributed containers are only just starting to appear.
- Configuration approaches vary widely
Container orchestration using OSGi
OSGi has a mature capability model for specifying service (and other) dependencies.
It also provides:
- Consistent life-cycle and configuration mechanisms
- Strong isolation and well defined boundaries between software artifacts
Could these capabilities be used to orchestrate services running in Docker (and other) containers?
Introducing Paremus Packager
Initially announced at OSGi DevCon, Boston in 2013, Paremus Packager integrates the lifecycle of external (non-Java) applications with OSGi. It provides a consistent means to start, stop, configure and inter-connect services as part of the OSGi lifecycle.
Packager originally used an external process launcher, but it was extended earlier this year to support Docker containers:
- OSGi bundles can depend on containerized services
- The OSGi resolver can provision these containerized dependencies
When used with the Paremus Service Fabric, Packager allows any services to be deployed, scaled and rebalanced across thousands of computers; automatically managing service inter-connections as they are migrated.
This is clearly a very generic concept. A complete user-facing application may consist of many such services interacting with each other. For example, a web site could be implemented as a load-balancer service, talking to some web services, interacting with various REST back-end microservices and ultimately backed by a relational database service.
Some of the services in an application will be custom-built, but many will be off-the-shelf commercial products or open source components. Interconnecting these services can be challenging, with each service requiring custom configuration and being unaware of the location of other services unless explicitly configured.
Let’s look at an example application that depends on a MySQL database:
We cannot specify the dependency using the @Reference annotation, as we did for the TrackController, because the MySQL dependency is not defined by a Java interface. Instead, we express a dependency on our custom MySQL package directly, using the ‘Require-Capability’ manifest header from OSGi’s generic Requirements and Capabilities model.
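A sketch of what such a header might look like (the capability namespace and attribute names below are illustrative, not Packager’s actual ones):

```
Require-Capability: packager.package;
  filter:="(&(name=mysql)(version>=5.6))"
```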
As long as the OSGi resolver can find a bundle with a corresponding Provide-Capability manifest header, it will be able to administer and interconnect the services.
Anatomy of a Packager Package
Packager requires an OSGi bundle artifact that wraps or references the physical service implementation and mediates Packager’s access to it. In the case of a Docker containerized service, this is quite simple to implement.
The two main constituents of a Package are the Type and the Guard.
A Package Type controls how the package works. It knows how to install the physical implementation of the service. It knows how to start the executable service given appropriate launch properties. However, the Type does not actually launch the service itself because it doesn’t know when to launch or what launch properties to use.
A Package Guard controls when the package is launched, when it is reconfigured or shut down, and it knows how to define the launch properties using a combination of static configuration data and temporal state. The Guard also gets notified when the package service is successfully running, and can publish information that can be broadcast across the system and possibly used by other Guards.
The Type and the Guard are implemented as Java classes and are published as OSGi services. It is possible to have multiple Guards for a single Type, with each Guard configuring the launch properties of the Package differently.
Package Types are often platform-dependent, since many package implementations contain native executables. In this case, an OSGi requirement on the native platform can be added, for example:
```
Require-Capability: osgi.native;
  filter:="(&(osgi.native.osname=Linux)(osgi.native.processor=x86-64))"
```
A Docker Package Type simply has to generate a Docker Launch Descriptor and declare the Provide-Capability for the package it provides. In the case of MySQL, the Launch Descriptor contains the default port mapping and bindings for the data volumes.
A Guard must generate all the configuration required by the Type in order to launch the package. It can use:
- explicit configuration provided to the Guard.
- temporal data detected from the state of the environment. For example, locations of other discovered services.
Advertising Presence with OSGi Remote Services
Package Guards can advertise the successful launch of their package by publishing a service in the OSGi service registry. For this to be useful, the Guard needs to advertise its service across the network.
The easiest way of doing this is to use OSGi Remote Services and simply add the property service.exported.interfaces=* to the published service. A marker interface is normally sufficient; typically org.bndtools.service.endpoint.Endpoint is used.
Since this is a common pattern, Packager provides a utility class ServiceGuard that implements most of the pattern for us. The Guard class just needs to override the registerService() method, for example:
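A sketch of such a Guard follows. ServiceGuard and registerService() are the names given in the text, but the method signature and body here are illustrative assumptions, not Packager’s actual API:

```java
import java.util.Dictionary;
import org.bndtools.service.endpoint.Endpoint;
import org.osgi.framework.BundleContext;

// Illustrative only: extends Packager's ServiceGuard utility class and
// publishes a remotable Endpoint marker service once MySQL is running.
public class MySQLGuard extends ServiceGuard {

    @Override
    protected void registerService(BundleContext context,
                                   Dictionary<String, Object> props) {
        // Export the marker service so remote frameworks can discover it
        props.put("service.exported.interfaces", "*");
        context.registerService(Endpoint.class, new Endpoint() {}, props);
    }
}
```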
This enables higher-level services to use the OSGi Declarative Services @Reference annotation to declare their dependencies on potentially containerized services, just as they do for any other service. For example:
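A sketch of such a consumer (the class name and the target filter property are illustrative assumptions):

```java
import org.bndtools.service.endpoint.Endpoint;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component
public class WordPressConfigurer {

    // Injected when the (possibly containerized, possibly remote) MySQL
    // Endpoint advertised by its Guard appears in the service registry.
    @Reference(target = "(service.name=mysql)")
    Endpoint mysqlEndpoint;
}
```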
We can now extend the previous diagram to show the MySQL service Endpoint published by Packager and consumed by WordPress – another containerized application started with Packager:
I have shown that microservices and containerization are both currently fashionable expressions of a more fundamental and important underlying trend: increasing software modularity. This article has explained how OSGi can naturally address the life-cycle, configuration, dependency and discovery issues associated with managing a number of dynamically deployed containerized microservices.
The approach is based on Paremus Packager. Packager enables applications to leverage an OSGi resolver to manage dependencies on containerized components, and an OSGi Configuration Admin Service to configure those services in a consistent way. When combined with the Paremus Service Fabric, it enables any application to be simply deployed and robustly maintained and distributed across a fluid population of servers.
Finally, those developing microservices applications in Java can start with local OSGi µServices running within a single OSGi framework. This allows the decision to “go remote” to be deferred until later in the development cycle, when the service boundaries, target runtime platform and business SLAs are better defined.