It Has Channels, and It’s Not Mars

Spring Integration in Action - Part 2

 

Is choosing a channel all that difficult?

The selection of channels is based on both functional and nonfunctional requirements, and several decision factors can help us make the right choice. This section provides a brief overview of the technical criteria and best practices you should consider when selecting the correct channels.

Sharing context

▪ Do we need to propagate context information between the successive steps of a process? Thread-local variables are commonly used to propagate context that is needed in several places but where passing it via the call stack would needlessly increase coupling; a transaction context is a typical example.


▪ Relying on the thread context is in itself a subtle form of coupling, and it has an impact when considering the adoption of a highly asynchronous SEDA model. It may prevent splitting the processing into concurrently executing steps, make partial failures harder to handle, or introduce security risks such as leaking permissions into the processing of different messages.
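A minimal sketch of this coupling in plain Java (class, method, and value names are illustrative): a thread-local value set while processing one message is simply invisible once a step hops to another thread.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: thread-local context does not follow a message onto another thread.
public class ThreadLocalContext {
    static final ThreadLocal<String> TXN = new ThreadLocal<>();

    // The same thread sets and reads the context: the value is visible.
    static String readOnSameThread() {
        TXN.set("txn-42");
        return TXN.get();
    }

    // A hand-off to another thread: the context is silently lost.
    static String readOnOtherThread() {
        TXN.set("txn-42");
        AtomicReference<String> seen = new AtomicReference<>();
        Thread worker = new Thread(() -> seen.set(TXN.get()));
        worker.start();
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen.get(); // null: the worker thread has its own, empty slot
    }
}
```

This is exactly why switching a synchronous channel for an asynchronous one can break code that relies on thread-bound state such as transaction or security contexts.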

 

Atomic boundaries

▪ Do we have "all or nothing" scenarios? The classic example is a bank transaction, where the credit and the debit should either both succeed or both fail.

▪ This criterion typically decides transaction boundaries and is, because of that, a specific case of context sharing. It influences the threading model and therefore limits the available options when choosing a channel type.
Buffering messages

▪ Do we need to consider variable load? What is immediate and what can wait?

▪ The ability of a system to withstand high load is an important performance factor, but load typically fluctuates, so a thread-per-message processing model requires enough hardware resources to accommodate the peak load. Those resources sit unused when the load decreases, which is expensive and inefficient. Moreover, some of the steps may be very slow, so resources may be blocked for long periods of time.

▪ Consider what requires an immediate response and what can be delayed, and use a buffer to store incoming requests at the peak rate while allowing the system to process them at its own pace. Consider mixing the types of processing: for example, an online purchase system that acknowledges the receipt of the request, performs the mandatory steps immediately (credit card processing, order number generation), and responds to the client, but does the actual handling of the order later, at its own pace.
Blocking and non-blocking operations

▪ How many messages can we actually buffer? What should we do when we can't cope with demand?

▪ If our application is unable to cope with the number of messages being received and no limits are in place, we may simply exhaust our capacity for storing the message backlog, or breach quality-of-service guarantees in terms of response turnaround.

▪ Recognizing that the system cannot cope with demand is usually a better option than continuing to build up a backlog.

▪ A common approach is to apply a degree of self-limiting to the system. This can be achieved by blocking the acceptance of new messages when the system approaches its maximum capacity, commonly expressed as a maximum number of messages awaiting processing or a maximum rate of requests per second.

▪ Where the requester has a finite number of threads for issuing requests, blocking those threads for long periods may result in timeouts or quality-of-service breaches. It may be preferable to accept the message and discard it later if system capacity is exceeded, or to set a timeout on the blocking operation to avoid blocking the requester indefinitely.
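A sketch of such self-limiting in Spring Integration XML (the channel name and capacity are illustrative; the `int` namespace prefix is assumed to be declared): a bounded queue makes senders block once the buffer is full, and passing a timeout to `MessageChannel.send(message, timeout)` makes the send return `false` instead of blocking indefinitely.

```xml
<!-- Bounded buffer: at most 25 messages may be waiting; further sends block
     (or time out, if the sender passes a timeout) until space frees up. -->
<int:channel id="bookingRequests">
    <int:queue capacity="25"/>
</int:channel>
```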
Consumption model

▪ How many components are interested in receiving a particular message?

▪ There are two major messaging paradigms: point-to-point and publish-subscribe. In the former, a message is consumed by exactly one of the recipients connected to the channel (even if several are connected); in the latter, the message is received by all of them.

▪ If the requirement is that the same message should be handled by multiple consumers, they can do so concurrently, and a publish-subscribe channel can take care of that. An example is a mash-up application that aggregates results from searching flight bookings. Requests are broadcast simultaneously to all potential providers, which respond by indicating whether they can offer a booking or not.

▪ Conversely, if a request should always be handled by a single component (for example, processing a payment), a point-to-point strategy is what we are looking for.
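The two paradigms map directly onto channel types; a sketch in Spring Integration XML (channel names are illustrative; the `int` namespace prefix is assumed to be declared):

```xml
<!-- Point-to-point: each message is handed to exactly one of the consumers. -->
<int:channel id="paymentRequests"/>

<!-- Publish-subscribe: every subscriber receives its own copy of the message. -->
<int:publish-subscribe-channel id="flightQuoteRequests"/>
```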

Let's see how these criteria apply to our flight booking sample.

A channel selection example

Using the default channel throughout, we have three channels: one accepting requests and the other two connecting our services.

In Spring Integration, the default channels are SubscribableChannels, and the message transmission is synchronous. The effect of this is simple—we have one thread responsible for invoking the three services sequentially, as shown in figure 1.
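As a sketch of such a default configuration (bean and channel names are illustrative, apart from emailConfirmationRequests, which is used later in the text; the `int` namespace prefix is assumed to be declared):

```xml
<int:channel id="bookingRequests"/>
<int:service-activator input-channel="bookingRequests"
                       output-channel="billedBookings"
                       ref="creditCardService"/>

<int:channel id="billedBookings"/>
<int:service-activator input-channel="billedBookings"
                       output-channel="emailConfirmationRequests"
                       ref="seatAvailabilityService"/>

<int:channel id="emailConfirmationRequests"/>
<int:service-activator input-channel="emailConfirmationRequests"
                       ref="emailConfirmationService"/>
```

Because all three channels are synchronous, the thread that sends a message to bookingRequests carries it through all three service activators in turn.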

Because all operations are executing in a single thread, there is a single transaction encompassing those invocations. If the transaction configuration does not require new transactions to be created for any of the services, the three service invocations will occur within the scope of the same transaction.

Figure 2 shows what you get when you configure an application using the default channels, which are subscribable and synchronous.

Having all service invocations happening in one thread and encompassed by a single transaction is a mixed blessing: it can be a good thing in applications where all three operations must be executed atomically, but it takes its toll on the scalability and robustness of the application.

But email is slow and our servers are unreliable

The basic configuration is all well and good in the sunny-day case, when the email server is always up and responsive and the network is 100 percent reliable. Reality is different: our application needs to work in a real world where the email server is sometimes overloaded and the network sometimes fails. Analysing our actions in terms of what we need to do now and what we can afford to do later is a good way of deciding which service calls should block. Billing the credit card and updating the seat availability are clearly things we need to do now in order to respond with confidence that the booking has been made. Sending the confirmation email is not time critical, and we don't want to refuse bookings simply because the mail server is down. Introducing a queue between the mainline business logic and the confirmation email service allows us to do just that: charge the card, update availability, and send the email confirmation when we can.

Introducing a queue on our emailConfirmationRequests channel will allow the thread passing in the initial message to return as soon as the credit card has been charged and the seat availability has been updated. Changing the Spring Integration configuration to do this is as simple as adding a child <queue/> element to the <channel/> element.
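A sketch of that change (the capacity, poller interval, and service bean name are illustrative; the `int` namespace prefix is assumed to be declared): once the channel has a queue, the consumer side needs a poller, since messages are no longer pushed on the sender's thread.

```xml
<int:channel id="emailConfirmationRequests">
    <int:queue capacity="100"/>
</int:channel>

<!-- Consumers of a queue channel are driven by a poller, not the sender's thread. -->
<int:service-activator input-channel="emailConfirmationRequests"
                       ref="emailConfirmationService">
    <int:poller fixed-delay="1000"/>
</int:service-activator>
```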

Let's recap how the threading model changes by introducing the QueueChannel, as shown in figure 3.

Because now there isn't a single thread context that encompasses all invocations, the transaction boundaries change as well. Essentially, every operation that is executing on a separate thread executes in a separate transaction, as shown in figure 4.

We have replaced one of the default channels with a buffered QueueChannel and have set up an asynchronous communication model. What we gained is confidence that a long-running operation will not block the whole application just because some component is down or takes a long time to respond. But now we have another challenge: what if we need to connect one producer with not just one but two (or more) consumers?

 

Mark Fisher
Jonas Partner
Marius Bogoevici
Iwein Fuld

JAX Magazine - 2014 - 05