Let's get down to business process management

Tutorial: JBoss Enterprise BRMS Best Practices

Red Hat's Eric Schabell and Edson Tirelli offer some handy hints on how to get the most out of Red Hat’s JBoss Enterprise BRMS.

DISCLAIMER - The article represents the personal views of Schabell and Tirelli, not of Red Hat.

Introduction

Business experts and application developers in organizations of any size need to be able to model, automate, measure, and improve their critical processes and policies. Red Hat’s JBoss Enterprise BRMS makes it possible with fully integrated business rules management, business process management (BPM), and complex event processing (CEP).

This article provides an overview of our insights into creating process- and rule-based applications with JBoss Business Rules Management System (BRMS) that can be scaled to handle your current and future enterprise project needs. We cover both process-based and rule-based applications, giving you the basic insights you need to develop large-scale applications.

This article assumes you have a working knowledge of the JBoss BRMS product and will not cover the basics. A solid software architecture background is also useful, as we discuss the design decisions being made and what you need to do so that your project(s) will scale in your enterprise architecture moving forward.

Finally, we will not be examining the CEP component in this article.

Processes

To start with, we need to take a look at a typical enterprise landscape and then peel back the layers like an onion to see how we can deliver BPM projects that scale well. Figure 1 below shows the component layers where we will want to focus our attention:

  • Initialization Layer
  • Implementation Layer
  • Interaction Layer

The process initialization layer will be covered first, where we present best practices around you, your customer and how processes can be started. The process implementation layer is where processes are maintained, with help from the process repository, the tooling, and the business users and developers that design them. Here you will also find the various implementation details, such as domain-specific extensions to cover specific node types within our projects. Finally, the process interaction layer is where your processes connect to all manner of legacy systems, back-office systems, service layers, rule systems and even third-party systems and services.

Fig 1 - JBoss BRMS process architecture  

Initialization Layer

Taking a look at how to initialize your processes, we want to provide you with some of the best practices we have seen used by large enterprises over the years. The main theme is gathering the customer, user or system data that is needed to start your process and then injecting it into the call that starts the process instance. That call can be embedded in your application via the BRMS jBPM API, made through the RESTful service, or made via a standard Java web service call. No matter how you gather the data to initialize your process instances, you will want to think about how to scale out your initialization setup from the beginning. Initial projects are often set up without much thought for the future, so certain issues have not been taken into consideration.
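
To make the embedded option concrete, below is a minimal sketch that starts a process instance through the jBPM 5 knowledge API that ships with BRMS 5. The BPMN2 file name, process id and variable names are made up for illustration.

Listing 1: Starting a process instance with the embedded jBPM API

import java.util.HashMap;
import java.util.Map;

import org.drools.KnowledgeBase;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.process.ProcessInstance;

public class EmbeddedProcessStarter {

    public static void main(String[] args) {
        // Build a knowledge base from the BPMN2 process definition.
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("loanApproval.bpmn2"),
                ResourceType.BPMN2);
        KnowledgeBase kbase = kbuilder.newKnowledgeBase();

        // The data gathered from the customer, user or system becomes the process variables.
        Map<String, Object> processData = new HashMap<String, Object>();
        processData.put("customerId", "C-1234");
        processData.put("amount", 5000);

        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        try {
            ProcessInstance instance =
                    ksession.startProcess("com.example.loanApproval", processData);
            System.out.println("Started process instance " + instance.getId());
        } finally {
            ksession.dispose();
        }
    }
}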

Customers 

The customer defined here can be a person, system or user that provides the initial process starting data. In Figure 2 we provide a high-level look at how our customers provide process data that we then package into a request to be dropped into one of the process queues. From the queues we can then prioritize and let different mechanisms fetch these process requests and start a process instance with the provided request data. The EJBs, MDBs and clouds shown here represent any manner of scheduling that might be used to empty the process queues.

Fig 2: Queues and more

Queues

These queues can be as simple as database tables or as refined as message queues. They can be set up any way your project desires, such as Last-In-First-Out (LIFO) or First-In-First-Out (FIFO). The benefit of using message queues is that you can prioritize them from your polling mechanism.

The reason for this setup is twofold. First, by not starting the process instance directly from the customer interface, you have persisted the customer request; it can never be lost en route to the process engine. Second, you have the ability to prioritize requests that might otherwise miss project requirements, such as a new process request that has to start within 10 seconds of submission by the customer. If that request gets put at the bottom of a queue that takes an hour to work through, you have a problem. By prioritizing your queues you can adjust your polling mechanism to check the proper queues in the proper order each time.
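
As one way to implement the prioritization, the sketch below drops a customer request on a persistent JMS queue and raises the JMS priority for urgent requests. The queue name, payload format and the mapping of "urgent" to priority 9 are assumptions for illustration, not product defaults.

Listing 2: Publishing a prioritized process request to a JMS queue

import java.util.UUID;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class ProcessRequestPublisher {

    private final ConnectionFactory connectionFactory;
    private final Queue requestQueue; // e.g. bound as "queue/ProcessRequests" (made-up name)

    public ProcessRequestPublisher(ConnectionFactory connectionFactory, Queue requestQueue) {
        this.connectionFactory = connectionFactory;
        this.requestQueue = requestQueue;
    }

    public void publish(String customerPayload, boolean urgent) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(requestQueue);
            TextMessage message = session.createTextMessage(customerPayload);
            message.setStringProperty("requestId", UUID.randomUUID().toString());
            // Persistent delivery means the request survives a broker restart and is never lost.
            // JMS priorities run 0-9; urgent requests jump ahead of the default of 4.
            producer.send(message, DeliveryMode.PERSISTENT, urgent ? 9 : 4, 0);
        } finally {
            connection.close();
        }
    }
}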

Java / Cloud

The Java icons shown in Figure 2 represent any JEE mechanism you might want to use to deal with the process queues: EJBs, MDBs, a scheduler you write yourself, or whatever else you come up with to pick up process requests.
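
As an example, a message-driven bean can drain the request queue and hand each request to a central process-start service (sketched just below). The destination property names follow the JBoss convention and may differ per container; ProcessStartService and the payload mapping are made-up names for this sketch.

Listing 3: An MDB that empties the process request queue

import java.util.HashMap;
import java.util.Map;

import javax.ejb.ActivationConfigProperty;
import javax.ejb.EJB;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/ProcessRequests")
})
public class ProcessRequestListener implements MessageListener {

    @EJB
    private ProcessStartService processStartService; // the single jBPM-facing service

    public void onMessage(Message message) {
        try {
            String payload = ((TextMessage) message).getText();
            // Turn the raw request into process variables and delegate to the central service.
            processStartService.start("com.example.loanApproval", toProcessData(payload));
        } catch (JMSException e) {
            throw new RuntimeException("Unable to read process request", e);
        }
    }

    private Map<String, Object> toProcessData(String payload) {
        Map<String, Object> data = new HashMap<String, Object>();
        data.put("rawRequest", payload); // placeholder mapping for the sketch
        return data;
    }
}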

The cloud icons represent the services your software uses to actually call the final startProcess method, initializing the requested process instance and passing it its initial data. It is important to centralize this interaction with the jBPM API within a single service: it ensures minimal work if the API should change, eases possible version migrations in the future, and gives you a single place to extend the service interaction with jBPM in future projects.
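
A minimal sketch of such a central service might be nothing more than one small interface; the name and signature here are our own invention, and the important point is that only a single implementation class ever touches the jBPM API directly.

Listing 4: A single service interface isolating the jBPM API

import java.util.Map;

public interface ProcessStartService {

    /**
     * Starts the given process definition with the supplied data
     * and returns the new process instance id.
     */
    long start(String processId, Map<String, Object> processData);
}

Your queue consumers, web services and REST endpoints depend only on this interface, so an API change or a version migration stays contained in one implementation class.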

Implementation Layer

This layer focuses on your business process designs, the implementations of custom actions in your processes and the extensions to your ways of working with your processes. The adoption of the BPMN2 standard for process design and execution has taken a lot of the trouble out of this layer of your BPM architecture. Process engines are forced to adhere to and support the BPMN2 standard, which means you are limited in what you can do while designing your processes.

Within the JBoss BRMS BPM component there is one concept of particular interest for building highly scalable process architectures: the Stateful Knowledge Session (SKS). An SKS is created to hold your process information, both the data and an instance of your process specification. When running rules-based applications it is normal procedure to run a single knowledge session (note, not stateful!) with all your rules and data leveraging that single session. With an SKS and processes, we want to leverage a single SKS per process instance. We can bundle this functionality into a single service to allow for concurrency and to facilitate our process instance lifecycle management. Within this service you can also embed synchronous or asynchronous Business Activity Monitoring (BAM) event producers as desired.
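
As a sketch of what such a service could look like, the implementation below creates a fresh SKS for every process instance and registers a BAM event producer on it. ProcessStartService and BamEventProducer are the hypothetical pieces sketched elsewhere in this article (the producer appears in the interaction layer section), not product classes.

Listing 5: One stateful knowledge session per process instance

import java.util.Map;

import org.drools.KnowledgeBase;
import org.drools.event.process.DefaultProcessEventListener;
import org.drools.event.process.ProcessCompletedEvent;
import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.process.ProcessInstance;

public class JbpmProcessStartService implements ProcessStartService {

    private final KnowledgeBase kbase;          // shared, built once from the BPMN2 resources
    private final BamEventProducer bamProducer; // hypothetical BAM producer, see interaction layer

    public JbpmProcessStartService(KnowledgeBase kbase, BamEventProducer bamProducer) {
        this.kbase = kbase;
        this.bamProducer = bamProducer;
    }

    public long start(String processId, Map<String, Object> processData) {
        // One stateful knowledge session per process instance, never shared between instances.
        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();

        // Embed a BAM event producer into the session lifecycle.
        ksession.addEventListener(new DefaultProcessEventListener() {
            @Override
            public void afterProcessCompleted(ProcessCompletedEvent event) {
                bamProducer.publish(event.getProcessInstance().getId(), "COMPLETED");
            }
        });

        ProcessInstance instance = ksession.startProcess(processId, processData);
        // Disposing the session is tied to the process instance lifecycle management (omitted here).
        return instance.getId();
    }
}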

Interaction Layer

There is much to be gained by a good strategy for accessing the business logic, back-end systems, back-office systems, user interfaces, other applications, third-party services or whatever else your business processes need to get their jobs done. Many enterprises isolate these interactions behind a service layer within a Service Oriented Architecture (SOA), which provides flexibility and scales nicely across all the various workloads that may be encountered. Looking at the BPM layer here, we want to mention just a few of these back-end systems as an example of how to optimize your process projects in your enterprise.

The JBoss BRMS BPM architecture includes a separate Human Task (HT) server that runs as a service and implements the WS-HT specification. Because it is pluggable, nothing keeps you from avoiding yet another server in your enterprise by exposing the WS-HT task lifecycle in a service of your own. That service should use a synchronous invocation model, which vastly simplifies the standard product implementation that leverages a HornetQ messaging system by default.
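
One hedged sketch of that approach is a work item handler registered for the "Human Task" node type that calls an in-process task facade directly. LocalTaskFacade and its callback are made-up names standing in for your own WS-HT style task lifecycle service, while WorkItemHandler itself is the standard jBPM 5 extension point.

Listing 6: A synchronous human task work item handler

import java.util.Map;

import org.drools.runtime.process.WorkItem;
import org.drools.runtime.process.WorkItemHandler;
import org.drools.runtime.process.WorkItemManager;

// Made-up interface wrapping your in-process WS-HT style task lifecycle.
interface LocalTaskFacade {

    interface CompletionCallback {
        void completed(long workItemId, Map<String, Object> results);
    }

    void createTask(long workItemId, Map<String, Object> parameters, CompletionCallback callback);

    void cancelTask(long workItemId);
}

public class SynchronousHumanTaskHandler implements WorkItemHandler {

    private final LocalTaskFacade taskFacade;

    public SynchronousHumanTaskHandler(LocalTaskFacade taskFacade) {
        this.taskFacade = taskFacade;
    }

    public void executeWorkItem(WorkItem workItem, final WorkItemManager manager) {
        // Plain, in-process method call -- no HornetQ message hop to a separate HT server.
        taskFacade.createTask(workItem.getId(), workItem.getParameters(),
                new LocalTaskFacade.CompletionCallback() {
                    public void completed(long workItemId, Map<String, Object> results) {
                        // Signal the process engine once the human actor finishes the task.
                        manager.completeWorkItem(workItemId, results);
                    }
                });
    }

    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // The engine aborts the work item; we only need to cancel the human task itself.
        taskFacade.cancelTask(workItem.getId());
    }
}

The handler would then be registered on the session with ksession.getWorkItemManager().registerWorkItemHandler("Human Task", handler) in place of the default, message-based handler.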

A second service that you can implement to provide great reporting scalability is what we call a BAM service. You use this service to centralize the BAM events and push them onto JMS queues, which are both reliable and fast. A separate machine can then host these JMS BAM queues and process the messages without putting any load on the BPM engine itself: it writes to a separate BAM database, can be optimized with batch writing, and any clients that consume the BAM information again put no load on the BPM engine.
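
A hedged sketch of such a BAM producer is shown below: it only drops a small text message on a dedicated JMS queue, so the BPM engine never waits on BAM persistence. The BamEventProducer interface, the queue name and the payload format are assumptions made for this article.

Listing 7: Pushing BAM events to a JMS queue

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

// Made-up interface used by the process service sketched in the implementation layer.
interface BamEventProducer {
    void publish(long processInstanceId, String eventType);
}

public class JmsBamEventProducer implements BamEventProducer {

    private final ConnectionFactory connectionFactory;
    private final Queue bamQueue; // e.g. bound as "queue/BamEvents" (made-up name)

    public JmsBamEventProducer(ConnectionFactory connectionFactory, Queue bamQueue) {
        this.connectionFactory = connectionFactory;
        this.bamQueue = bamQueue;
    }

    public void publish(long processInstanceId, String eventType) {
        try {
            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(bamQueue);
                TextMessage message = session.createTextMessage(
                        processInstanceId + ";" + eventType + ";" + System.currentTimeMillis());
                producer.send(message, DeliveryMode.PERSISTENT, 4, 0);
            } finally {
                connection.close();
            }
        } catch (JMSException e) {
            // BAM must never break the business process itself; log and move on.
            e.printStackTrace();
        }
    }
}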

 
