
Tutorial: Managing Apache ServiceMix clusters with Fuse Fabric

Torsten Mielke

Torsten Mielke introduces some powerful OSGi and ESB concepts through the former FuseSource project, now led by Red Hat.

Managing a large number of ServiceMix instances with dozens of applications deployed is a non-trivial task, but the open source project Fuse Fabric can help reduce the complexity of your application deployment.

Apache ServiceMix is quite a popular open source ESB that is well suited for integration and SOA projects. It offers all the functionality one would expect from a commercial ESB, but in contrast to most commercial counterparts, at its core it is truly based on open standards and specifications.

ServiceMix leverages a number of very popular open source projects. Its excellent message routing capabilities are based on the Apache Camel framework. Apache Camel is a lightweight integration framework that uses the standard Enterprise Integration Patterns (EIPs) for defining integration routes in a variety of domain specific languages (DSLs).
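
To give a feel for these DSLs, below is a minimal sketch of a route in Camel's Java DSL. The class name, directories and routing condition are made up for illustration; the walkthrough later in this article uses the Blueprint XML DSL instead.

import org.apache.camel.builder.RouteBuilder;

public class OrderRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // consume files from an input directory and apply the
        // content-based router EIP to decide where each order goes
        from("file:work/input")
            .choice()
                .when(xpath("/order/customer/country = 'UK'"))
                    .to("file:work/output/uk")
                .otherwise()
                    .to("file:work/output/others");
    }
}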

The majority of integration projects require a reliable messaging infrastructure. ServiceMix supports reliable messaging by embedding an Apache ActiveMQ message broker, one of the most popular, fully JMS 1.1 compliant open source message brokers. It offers a long list of messaging features, scales to thousands of clients and supports many clustering and high-availability broker topologies.

Support for Web Services and RESTful services is achieved by integrating Apache CXF. CXF is perhaps the best known open source Web Services framework and has been fully integrated into ServiceMix. CXF supports both the JAX-WS and JAX-RS standards as well as all major WS-* specifications.

At the heart of ServiceMix is an OSGi container runtime. The OSGi framework is responsible for loading and running truly dynamic software modules, so-called OSGi bundles. An OSGi bundle is a plain Java jar file that contains additional OSGi-specific metadata about the classes and resources contained inside the jar.
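
For illustration, this metadata lives in the jar's META-INF/MANIFEST.MF. A minimal, hypothetical set of headers (symbolic name, versions and packages are made up) might look like this:

Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.orders
Bundle-Version: 1.0.0
Import-Package: org.apache.camel;version="[2.10,3)"
Export-Package: com.example.orders.api;version="1.0.0"

The OSGi framework uses these headers to wire bundles together at runtime and to enforce the declared package dependencies.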

The OSGi runtime used in ServiceMix is Apache Karaf, which offers many interesting features like hot deployment, dynamic configuration of OSGi bundles at runtime, a centralized logging system, remote management via JMX and an extensible shell console that can be used to manage all aspects of an OSGi runtime. Using Karaf one can manage all life cycle aspects of the deployed application modules individually. Karaf not only supports deploying OSGi bundles, but also plain Java jar files, Blueprint XML, Spring XML and war files. The flexible deployment options ease the migration of existing Java applications to OSGi.
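
To give a flavour of the shell, a few typical Karaf commands are shown below; the bundle coordinates and bundle id are hypothetical and the command output is omitted.

osgi:list                                      (list all deployed bundles and their state)
osgi:install mvn:com.example/orders/1.0.0      (install a bundle from a Maven repository)
osgi:start 215                                 (start the bundle with the given id)
features:install camel-jms                     (install a pre-packaged set of bundles, a so-called feature)
log:tail                                       (follow the central log)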

ServiceMix deploys these open source projects out-of-the-box on top of the Karaf OSGi runtime. ActiveMQ and Camel register additional shell commands into the Karaf shell that can manage the embedded JMS broker and Camel environment at runtime. It’s also possible to only deploy those ESB functions that are needed for a particular project. If support for a certain element, for example Web Services, is not needed, the CXF related OSGi bundles can all be uninstalled. This further reduces the already small runtime memory footprint of ServiceMix. Figure 1 summarizes the technologies and standards that Apache ServiceMix is built on.

Figure 1: ESB enabling technologies in ServiceMix 

ServiceMix leverages a number of very successful open source projects. Each of these projects is based on open standards and industry specifications and designed to provide a maximum level of interoperability. All of these aspects make ServiceMix a very popular ESB that is deployed at thousands of customer sites today, often in mission-critical applications. There is also professional, enterprise-level support available from companies like Red Hat (who acquired FuseSource in 2012) and Talend.

Introduction to Fuse Fabric

It’s no surprise that some companies have dozens or even hundreds of ServiceMix instances deployed in their IT infrastructure. Larger projects may spawn multiple ServiceMix containers because a single JVM instance cannot host the entire application. In addition, the same application may be deployed to multiple ServiceMix containers for load-balancing reasons. Each ServiceMix instance is an independent OSGi container with its own OSGi runtime and an individual set of deployed applications. However, managing a larger number of ServiceMix instances with dozens of applications deployed becomes a non-trivial task, as ServiceMix itself does not provide any tools to manage multiple ESB instances centrally.

Installing updates of an application deployed to multiple independent OSGi containers becomes a tedious and error-prone task. It is necessary to manually log into each OSGi container (e.g. using an ssh client session), stop the existing application, install and perhaps reconfigure the new version of the application and finally start the new application. These steps then need to be repeated on all the remaining ESB instances that run the same application. If anything goes wrong during such an upgrade, changes need to be reverted manually. This manual approach is cumbersome and chances are high that mistakes are made along the way.
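
As a rough illustration, a manual upgrade of a single container might look like this (host name, user, bundle ids and Maven coordinates are made up); the same sequence would then have to be repeated on every other container running the application.

ssh -p 8101 karaf@esb-node-1                   (log into the container's remote shell)
osgi:stop 215                                  (stop the old version of the application bundle)
osgi:uninstall 215
osgi:install mvn:com.example/orders/1.1.0      (install the new version)
osgi:start 216                                 (start it)
log:tail                                       (watch the log for errors)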

This is where Fuse Fabric comes in, an open source integration platform under the Apache license which began as a project within FuseSource. With Fuse Fabric you can group all ServiceMix container instances into one or several clusters, so-called Fabrics. All instances of such a cluster can then be managed from a central location, which can potentially be any ServiceMix instance within the Fabric. This includes both the configuration of all ESB instances in the cluster and the deployment of applications to each ServiceMix container.

Fabric extends the Karaf shell with additional commands for managing the cluster of OSGi containers, so users don’t need to use another tool for managing the Fabric. It also supports deploying applications to both private and public clouds. Using the jclouds library, all major cloud providers are supported. Applications may be deployed to the cloud with a single Karaf shell command and even the virtual machine in the cloud can be started by Fabric.

Fabric can also create ESB containers on demand. Not only can it create new ESB containers locally (sharing the existing installation of ServiceMix), it can also start new ESB containers on remote machines that do not even have ServiceMix pre-installed. Using ssh, Fabric is capable of streaming a full ServiceMix installation to a remote machine, unpacking and starting that ServiceMix installation and provisioning it with pre-configured applications.
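
A hedged sketch of what this looks like on the shell is shown below; the host, user, password and container name are made up, and the exact option names may differ between Fabric versions.

fabric:container-create-ssh --host esb-node-2 --user admin --password secret remote1

Fabric then copies a ServiceMix distribution to esb-node-2 over ssh, starts it there and registers the new container remote1 in the Fabric Registry.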

To better understand these features, let’s have a look into the mechanisms and concepts used by Fabric.

Fabric concepts

Fabric defines a couple of components that work together to offer a centralized integration platform.

Each Fabric contains one or more Fabric Registries. A Fabric Registry is an Apache ZooKeeper-based, distributed and highly available configuration service which stores the complete configuration and deployment information of all ESB containers making up the cluster.

The data is stored in a hierarchical, tree-like structure inside ZooKeeper. ESB containers get provisioned by Fabric based on the information stored in the configuration registry. There is also a runtime registry that stores details of the physical ESB instances making up the Fabric cluster, their physical locations and the services they are running. The runtime registry is used by clients to discover available services dynamically at runtime. The Fabric Registry can be made highly available by running replica instances. The example cluster in Figure 2 consists of three ESB instances that each run a registry replica.
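
Conceptually the registry is just a tree of ZooKeeper nodes. The layout below is purely illustrative (the actual node names vary between Fabric versions) but shows the split between configuration and runtime data:

/fabric
    /configs/versions/1.0/profiles/...      (configuration registry: profile definitions per version)
    /registry/containers/...                (runtime registry: which containers exist and where they run)
    /registry/clusters/...                  (runtime registry: endpoints registered by brokers and services)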

Figure 2: A Fabric cluster consisting of 3 ESB instances, all running a Fabric Registry.

Fabric Registries store all configuration and deployment information of all ESB instances. This information is described in Fabric Profiles: users fully describe their applications and the necessary configuration in these profiles. Profiles therefore become high-level deployment units in Fabric and specify which OSGi bundles, plain Java jar or war files, which configuration and which bundle repositories a particular application or application module requires.

A Profile can be deployed to many ESB containers and each ESB container may deploy multiple profiles. Profiles are versioned, support inheritance relationships, and are managed using a set of Karaf shell commands. It is possible to describe common configuration or deployment information in a base profile that other more specific profiles inherit from.
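
As a short, hypothetical example of this inheritance (profile names and Maven coordinates are made up; the same commands are used in the walkthrough below):

fabric:profile-create --parents default common-config
fabric:profile-create --parents common-config orders-app
fabric:profile-edit --bundles mvn:com.example/orders/1.0.0 orders-app

Here common-config could hold configuration shared by all applications, while orders-app adds the bundles of one specific application on top of it.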

Figure 3 shows some example profiles that are provided out-of-the-box. It shows a common base profile called default that all other profiles inherit from. The example also lists profiles named camel, mq and cxf. These profiles define the OSGi bundles and configuration for various ESB functions such as message routing (based on Camel), reliable messaging (based on ActiveMQ) and Web Services support (based on CXF). Users are encouraged to create their own profiles that inherit from these standard profiles.

Figure 3: Sample profiles

Profiles can be easily deployed to one or more ESB containers. Deploying a profile to a particular container is the task of the Fabric Agent. There is an agent running on each ESB container in the Fabric cluster. It connects to the Fabric Registry and evaluates the set of profiles it needs to deploy to its container. The agent further listens for changes to profile definitions and provisions the changes immediately to its container.

Finally, Fabric defines the notion of a Fabric Server or Fabric Container: every ESB container that is managed by Fabric is such a container, and each Fabric Server has a Fabric Agent running.

For true location transparency Fabric also defines a number of Fabric Extensions. Each CXF based Web Service, each Camel consumer endpoint (the start endpoint of a Camel integration route) and each ActiveMQ message broker instance can register its endpoint address in the Fabric runtime registry at start-up. Clients can query the registry for these addresses at runtime rather than having the addresses hard-coded. This allows you to move endpoints to different physical machines at runtime, to run replicas of endpoints for load-balancing reasons, or even to create master/slave topologies where a slave endpoint (e.g. a slave message broker) waits on standby until the master endpoint becomes unavailable. Fabric Extensions are outside the scope of this article, but the Fuse Fabric documentation explains them in full detail.

Fabric defines some really powerful concepts. All provisioning information is stored in a highly available Fabric Registry in the form of Fabric Profiles. Thanks to the Fabric Agents, these profiles can then be deployed quickly to any number of ESB instances inside the cluster. Fabric is also capable of creating new local and remote ESB instances on demand. Together with the Fabric Extensions this allows for very flexible deployments. If the load on a particular ESB container increases, it is possible to start another ESB container instance (perhaps in the cloud) that deploys the same set of applications and then load balance the overall work across all instances. Furthermore, ESB instances can be moved to different physical servers if there is a need to run on faster hardware, while clients automatically get rebalanced. With Fuse Fabric it is possible to adapt quickly and easily to any changes in your runtime requirements and have a fully flexible integration platform.


Walkthrough

Having introduced the concepts of Fabric, this last section aims to provide a quick introduction on how to practically use Fuse Fabric for deploying an integration project. Although one could download and run Fuse Fabric from its project web site, this part uses Fuse ESB Enterprise 7.1 as released by Red Hat. Fuse ESB Enterprise is based on Apache ServiceMix, already includes Fabric out-of-the-box and is fully documented by Red Hat. The default workflow when working with Fabric is as follows:

  1. Create a new Fabric. This starts the Fabric Registry and imports the default profiles.

  2. Create the Integration or SOA application using the technologies offered by ServiceMix.

  3. Define the deployment of the application plus its configuration in one or more Fabric Profiles.

  4. Create the required number of ESB containers and configure these containers for one or many profiles.

  5. Test or run the deployed application.

Let's go through these steps one by one.

Create a new Fabric

After installing Fuse ESB Enterprise 7.1, it can be started using the script ‘bin/fuseesb’. A few seconds later the welcome screen of the shell console is displayed.

 

[ASCII art banner: Fuse ESB]

  Fuse ESB (7.1.0.fuse-047) 

http://fusesource.com/products/fuse-esb-enterprise/

Hit '<tab>' for a list of available commands 
and '[cmd] --help' for help on a specific command. 
Hit '<ctrl-d>' or 'osgi:shutdown' to shutdown Fuse ESB. 

FuseESB:karaf@root>

 

Tip: All Karaf shell commands take the --help argument, which displays a quick man page for the command.

On its first start-up this ESB container does not have a Fabric pre-configured. It is only a standalone ServiceMix installation with a number of OSGi bundles deployed. It is necessary to create a Fabric first using the Karaf shell command ‘fabric:create’. This reconfigures the current ESB container, deploys and starts the Fabric Registry and imports the default profiles into the registry. Alternatively a container can join an existing Fabric cluster using the command ‘fabric:join’, providing the URL of the already running Fabric Registry. This Fabric-enabled ESB container does not deploy any ESB functionality by default (use the command ‘osgi:list’ to verify). ESB functions get enabled by deploying the relevant profiles.
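
In short, the first container of a new cluster runs something like the following; the registry URL in the join case is only an example value.

fabric:create                                  (turn this container into the first member of a new Fabric)
fabric:join zookeeper-host:2181                (alternative: join an already running Fabric Registry instead)
osgi:list                                      (verify that only the base bundles are deployed)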

Create the Integration or SOA Application

Fuse ESB Enterprise 7.1 also comes with a couple of demos from which this article picks the examples/jms demo. It demonstrates how to connect to an ActiveMQ broker and use JMS messaging between two Camel based integration routes. The demo works in a plain ServiceMix environment but in this part it will be deployed to a Fabric enabled ESB container. This demo has only one interesting file, which is the Camel route definition located in ‘examples/jms/src/main/resources/OSGI-INF/blueprint/camel-context.xml’.

 

<camelContext xmlns="http://camel.apache.org/schema/blueprint" 
              xmlns:order="http://fusesource.com/examples/order/v7" 
              id="jms-example-context"> 

  <route id="file-to-jms-route"> 
    <from uri="file:work/jms/input" /> 
      <log message="Receiving order ${file:name}"/> 
      <to uri="activemq:incomingOrders" /> 
  </route> 

  <route id="jms-cbr-route"> 
    <from uri="activemq:incomingOrders" /> 
      <choice> 
        <when> 
          <xpath>/order:order/order:customer/order:country = 'UK'</xpath> 
          <log message="Sending order ${file:name} to the UK"/> 
          <to uri="file:work/jms/output/uk"/> 
        </when> 
        <when> 
          <xpath>/order:order/order:customer/order:country = 'US'</xpath> 
          <log message="Sending order ${file:name} to the US"/> 
          <to uri="file:work/jms/output/us"/> 
        </when> 
        <otherwise> 
          <log message="Sending order ${file:name} to another country"/> 
          <to uri="file:work/jms/output/others"/> 
        </otherwise> 
      </choice> 
    <log message="Done processing ${file:name}"/> 
  </route> 
</camelContext> 

 

This Camel context defines two Camel routes. The first route, with id=file-to-jms-route, consumes messages from a directory on the local file system (work/jms/input). It then logs the file name and sends the content of each file to the incomingOrders queue on an external ActiveMQ broker.

The second Camel route, with id=jms-cbr-route, consumes messages from the incomingOrders JMS queue and performs content-based routing. Depending on the XML payload, each message is routed to a different target directory on the local file system. This is a simple yet fairly common integration use case. Some small additional configuration is needed to tell Camel how to connect to the external ActiveMQ broker.

 

<!-- connects to the ActiveMQ broker --> 
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent"> 
  <property name="brokerURL" value="discovery:(fabric:default)"/> 
  <property name="userName" value="admin"/> 
  <property name="password" value="admin"/> 
</bean> 

Notice the brokerURL property. Rather than using a hard-coded URL like tcp://localhost:61616, the real broker address is queried from the Fabric Registry at runtime using the Fabric MQ extension. That way the broker can be moved to a different physical machine and clients automatically reconnect to the new broker address.

The demo can be built by running ‘mvn install’. This installs the generated OSGi bundle into the local Maven repository.
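
For readers building their own bundles: such a jar is typically turned into an OSGi bundle by the Apache Felix maven-bundle-plugin. The pom fragment below is a minimal illustration, not the demo's actual pom.

<packaging>bundle</packaging>

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <extensions>true</extensions>
    </plugin>
  </plugins>
</build>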

Define deployment

Now it’s time to create the Fabric profiles that will deploy this integration project. Let’s assume there is a requirement to run the ActiveMQ broker externally in its own ESB container, which can be useful for various reasons like providing a common messaging infrastructure to a number of deployed applications. Therefore two ESB containers are required: one running the ActiveMQ broker, the other running the Camel integration route.

For running an ActiveMQ broker there is already a profile with the name mq provided out of the box. That ActiveMQ broker has a default configuration, which is sufficient for running this demo. The mq profile can simply be re-used so there is no need to create a new profile. The command ‘fabric:profile-list’ lists all available profiles. ‘fabric:profile-display profilename’ shows the content of a profile.

For running the Camel integration demo, the Camel runtime needs to be deployed to the ESB container. Furthermore, both Camel routes connect to the external ActiveMQ broker, so it is also necessary to deploy the ActiveMQ client libraries to this ESB container. ‘fabric:profile-list’ lists, among others, the following three profiles:

 

FuseESB:karaf@root> profile-list 
[id]                                     [# containers] [parents] 
activemq-client                          0              default 
camel                                    0              karaf 
camel-jms                                0              camel, activemq-client
...

The profile activemq-client deploys the ActiveMQ client libraries needed for connecting to an ActiveMQ broker. The profile camel deploys the core Camel runtime (but not the many Camel components). Finally, the profile camel-jms has the two parent profiles camel and activemq-client, so it deploys the Camel core runtime, the ActiveMQ client libraries and the camel-jms component. When using camel-jms as a parent, a profile therefore automatically pulls in both the Camel runtime and the ActiveMQ client runtime.

fabric:profile-create --parents camel-jms camel-jms-demo

This command creates a new profile called camel-jms-demo and uses the profile camel-jms as its parent. The new profile also needs to deploy the OSGi bundle of the ServiceMix demo. This bundle can be added using the demo’s Maven coordinates (the demo was previously built and installed into the local Maven repository) by invoking

fabric:profile-edit --bundles mvn:org.fusesource.examples/jms/7.1.0.fuse-047 camel-jms-demo

It modifies the camel-jms-demo profile and adds the demo’s OSGi bundle that is identified by its Maven coordinates org.fusesource.examples/jms/7.1.0.fuse-047. That’s all! Thanks to the out-of-the-box profiles it took only two Fabric shell commands to create a profile that fully deploys the Camel integration demo.
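
To double-check the result, the new profile can be inspected with the command introduced earlier (output not shown here):

fabric:profile-display camel-jms-demo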

Create ESB containers

The last step is to create the two ESB containers that run the ActiveMQ broker and the Camel demo. For running the ActiveMQ broker in its own ESB container this command is all that is needed:

fabric:container-create-child --profile mq root activemq-broker

It creates a new local ESB container called ‘activemq-broker’ (using the existing installation of Fuse ESB Enterprise) with the parent container being the root container. It also deploys the mq profile, which runs the ActiveMQ broker. The ESB container could also be created on a different machine using the command ‘fabric:container-create-ssh’. Running ‘fabric:container-list’ verifies that the new ESB container got started. It is possible to connect to that container using ‘fabric:container-connect activemq-broker’ and check the log file using ‘log:tail’. If the ActiveMQ broker got started successfully, the log will contain a line like:

Apache ActiveMQ 5.7.0.fuse-71-047 (activemq-broker, ID:XPS-49463-1357740918210-0:1) started.

With the broker running, it's time to deploy the camel-jms-demo profile. The existing root container currently only runs the Fabric Registry, so the demo can be deployed to the root container using the command

fabric:container-add-profile camel-jms-demo root

This reconfigures the root container to also deploy the camel-jms-demo profile (the JMS demo).

Test the application

The demo can finally be tested by copying a sample XML message to the work/jms/input folder that the first Camel route listens on. Fortunately some sample messages are provided with the demo. On a plain Unix or Windows shell run

cp examples/jms/src/test/data/order2.xml instances/camel-jms-demo/work/jms/input/

Right after copying, the file will be picked up by Camel, routed through the two Camel routes via JMS and finally put into the target directory instances/camel-jms-demo/work/jms/output/uk/order2.xml. This verifies that the demo works correctly.

For users who aren’t fans of command-line tools, it is also possible to manage all aspects of a Fabric using the Fuse Management Console (FMC). The FMC is a graphical, browser-based management tool for Fabric and a full alternative to the Karaf shell. It can be installed directly into the root ESB container using the command:

fabric:container-add-profile fmc root

Thereafter, it can be accessed from a browser at the URL http://localhost:8181/index.html (see Figure 4). Discussing the details of the Fuse Management Console is outside the scope of this article.

Figure 4: Fuse Management Console

Conclusion

Anyone who needs to manage multiple instances of ServiceMix should look into Fuse Fabric. The ability to describe all deployments centrally and roll them out to any number of ESB instances can greatly increase productivity and reduce management complexity.

Author Bio: 

Torsten Mielke works as a Senior Technical Support Engineer at Red Hat. He is part of the global professional support team at Red Hat and a specialist in open source enterprise integration and messaging systems. Torsten actively works on open source projects such as Apache ActiveMQ, Apache ServiceMix, Apache Camel and Apache CXF and is a committer on the Apache ActiveMQ project.

This article appeared in JAX Magazine: Socket to them!

Image courtesy of Razor512
