Event-driven architecture

Reactive Microservices with Scala and Akka

Vaughn Vernon

In this article, Vaughn Vernon, author of “Implementing Domain-Driven Design”, “Reactive Messaging Patterns with the Actor Model”, and “Domain-Driven Design Distilled”, will teach you a first approach to designing microservices, giving you a workable foundation to build on. He will introduce you to reactive software development and summarize how you can use Scala and Akka as a go-to toolkit for developing reactive microservices.

Do you ever get the impression that our industry is one of extremes? Sometimes I get that impression. Many seem to be polarized in their opinions about various aspects of how software should be written or how it should work, whereas I think we could find a better balance based on the problem space that we face.

One example that I am thinking of is the common opinion that data must be persisted using ACID transactions. Don’t get me wrong. It’s not that I think that ACID transactions are not useful. They are useful, but they are often overused. Should we find a better balance here? Consider the case when data that is spread across different subsystems must reflect some kind of harmony. Some hold to the strong opinion that all such dependent data must be transactionally consistent; that is, up-to-date across all subsystems all at once.


What about you? Do you agree with that? Do some of your software systems use global transactions in this way to bring data on multiple subsystems into harmony all at once? If so, why? Is it the business stakeholders who have insisted on this approach? Or was it a university professor who lectured you on why global transactional consistency is necessary? Or was it a database vendor who sold you on this service-level agreement? The most important opinion of these three is that of the business. It should be real business drivers that determine when data must be transactionally consistent. So, when was the last time you asked the business about its data’s consistency requirements, and what did the business say?

If we all took a look back to the time before most businesses were run by computers, we would find a very different picture of data consistency. Back then almost nothing was consistent. It could take hours or even days for some business transactions to be carried out to completion. Paper-based systems required physically carrying forms from one person’s desk or work area to another. In those days even a reasonable degree of data consistency was completely impossible. What is more, it didn’t result in problems. You know what? If today you got an authentic business answer to the question of necessary data consistency, you would very likely learn that a lot of data doesn’t require up-to-the-millisecond consistency. And yet, extreme viewpoints in our industry will still push many developers to strive for transactional consistency everywhere.


If you want to use microservices, you are going to need to embrace eventual consistency. Understand that the business will likely not be opposed, even if your university professor and your database sales representative will be. I explain how this works later in this article.

There’s another extreme that has cropped up recently. It’s service size, and specifically with regard to the use of “microservices.” What is a microservice anyway, and how does “micro” qualify the size of a service? Some guidance says: 100, 400, and 1,000. What? Yes, I have seen and heard that a microservice should be no more than 100 lines of code. Others say no more than 400 lines of code. And some are a bit more liberal at 1,000 lines of code. So, who is correct?

I am not sure that any of those extremes provide good guidance as to the size of a microservice. For example, if someone agrees with the rule that a microservice should be no more than 400 lines of code, and I create a microservice that is 537 lines of code, does that mean that my microservice is too big? If so, why? Is there something magical about 400 lines of code that makes a service just right?

On the other hand, some will take the opposite extreme view that software systems should be developed as monoliths. In such a case you could expect that nearly every subsystem required to support the core of the system will be housed in a single code base. When you deploy this system you must deploy all subsystems with it. If even one subsystem changes in a way that is non-intrusive to the other subsystems, the whole system must still be deployed for users to see the effects of that one isolated change. Clearly there seem to be disadvantages with this approach, but is 400 lines of code the answer?


I think that the emphasis on the word “micro” should convey more the meaning that your team should not be creating a monolith, but instead a number of smaller independent services that work together to accomplish some significant business goal. But to say that a service of “micro” status must be no more than 400 lines of code also seems imbalanced. Take for example segmenting an existing monolith system into components of no more than 400 lines each and deploying all of them. At 400 lines of code, you are probably talking in terms of just one entity type per microservice. You could actually end up with hundreds or even thousands of these very small microservice components. What kind of problems could your teams face in trying to administer all those microservices? The hardware and network infrastructure alone would be very complex. It could be quite difficult to create and maintain such a system, to say the least.

Still, if we strike a balance in terms of “micro” we might find a “measurement” of service size that is not only logical, but actually business driven. Since I am a big proponent of the Domain-Driven Design (DDD) approach to developing software, what I am suggesting here is to use two of the most important concepts of DDD to determine the size of a microservice: Bounded Context and the Ubiquitous Language.


If you have read the book “Building Microservices” by Sam Newman, you already know that Sam says that a microservice should be defined by a Bounded Context. Sam and I are in agreement. But actually saying that a microservice is a Bounded Context doesn’t exactly state what the size should be. That’s because the size of a Bounded Context is determined by its Ubiquitous Language. Thus, if you want to know the size of your Bounded Context (and microservice), you have to ask your Ubiquitous Language. It is the combination of the business experts and software developers working together that determines the Language of the Bounded Context. It’s important to understand that using business language drivers means that all linguistically cohesive components will live in the same Bounded Context. Those that are not linguistically cohesive with one Bounded Context thus belong in another.


What I find is that a Bounded Context that is truly constrained by an actual business-driven Ubiquitous Language is quite small. It’s typically bigger than one entity (although it may be just one), but also much, much smaller than a monolith. It’s impossible to state an exact number because the unique business domain drives the full scope of the Ubiquitous Language. But to throw out a number, many Bounded Contexts could be between 5 and 10 entity types total. Possibly 20 entity types is a large Bounded Context. So, a Bounded Context is small, yes, “micro” in size, especially when compared with a monolith. Now, if you think about dividing a monolith system into a number of Bounded Contexts, the deployment topology isn’t so extreme.

At a minimum I think that this is a good place to start if your team is determined to break up a legacy monolith into a number of microservices; that is, use Bounded Context and Ubiquitous Language as your first step into microservices and you will go a long way away from the monolith.

Reactive software is…

Reactive software is defined as having these four key characteristics:

  • Responsive
  • Resilient
  • Elastic
  • Message driven

A responsive system is one that responds to user requests and background integrations impressively fast. Using the Lightbend platform, you and your users will be impressed by the responsiveness that can be achieved. Often a well-designed microservice can handle a single write-based request in 20 milliseconds or less, and even roughly half that time is not uncommon.

Systems based on the Actor model using Akka can be designed with incredible resilience. Using supervisor hierarchies means that the parental chain of components is responsible for detecting and correcting failures, leaving clients to be concerned only about what service they require. Unlike code written in Java that throws exceptions, clients of actor-based services never have to concern themselves with handling failures from the actor from which they are requesting service. Instead, clients only have to understand the request-response contract that they have with a given service, and possibly retry requests if no response is given within some time frame.
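
As a minimal sketch of how this looks in code (the Worker and Supervisor actor names and the chosen directives are illustrative, not taken from a real system), a parent actor declares how its children’s failures are handled, and clients simply keep sending messages:

import akka.actor.{Actor, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.{Restart, Stop}
import scala.concurrent.duration._

// An illustrative worker that may fail while handling a message.
class Worker extends Actor {
  def receive: Receive = {
    case "boom" => throw new IllegalStateException("transient failure")
    case msg    => sender ! s"handled: $msg"
  }
}

// The parent decides how child failures are handled; clients never see them.
class Supervisor extends Actor {
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 1.minute) {
      case _: IllegalArgumentException => Stop    // unrecoverable input: stop the child
      case _: Exception                => Restart // transient failure: restart the child
    }

  private val worker = context.actorOf(Props[Worker], "worker")

  def receive: Receive = {
    case msg => worker forward msg
  }
}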

An elastic microservices platform is one that can scale up and down and out and in as demands require. One example is an Akka cluster that scaled to 2,400 nodes without degradation. Yet, elastic also means that when you don’t need 2,400 nodes, only what you currently need is allocated. You will probably find that when running Akka and other components of the Lightbend reactive platform, you will become accustomed to using far fewer servers than you would with other platforms (e.g. JEE). This is because Akka’s concurrency capabilities enable your microservices to make non-blocking use of each server’s full computing resources at all times.
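
For instance, assuming the akka-cluster module with classic remoting (the system name, seed host, and port below are placeholders), a new node can be added to a running system simply by joining a seed node, and removed again when the extra capacity is no longer needed:

import akka.actor.{ActorSystem, Address}
import akka.cluster.Cluster

object JoinCluster extends App {
  val system = ActorSystem("reactive-system")

  // Join an existing cluster through a seed node; more nodes can be added
  // (or removed) the same way as load rises and falls.
  val seedNode = Address("akka.tcp", "reactive-system", "seed-host", 2552)
  Cluster(system).join(seedNode)
}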

The Actor model with Akka is message driven to the core. To request a service of an actor, you send it a message that is delivered to it asynchronously. To respond to a request from a client, the service actor sends a message to the client, which again is delivered asynchronously. With the Lightbend platform, even the web components operate in an asynchronous way. When saving actor persistent state, the persistence mechanism is generally designed as an asynchronous component, meaning that even database interactions are either completely asynchronous or block only minimally. In fact, the point is that the actor that has requested persistence will not cause its thread to block while waiting for the database to do its thing. Asynchronous messaging makes all of this possible.
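
As a small illustration (the Ping, Pong, and Responder types are invented for the example), sending to an actor with tell is fire-and-forget, and even request-response with ask only hands back a Future rather than blocking the caller’s thread:

import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._

case class Ping(text: String)
case class Pong(text: String)

class Responder extends Actor {
  def receive: Receive = {
    case Ping(text) => sender ! Pong(s"echo: $text") // the reply is also an async message
  }
}

object AsyncMessaging extends App {
  val system = ActorSystem("messaging")
  val responder = system.actorOf(Props[Responder], "responder")

  // tell: fire-and-forget, fully asynchronous
  responder ! Ping("hello")

  // ask: asynchronous request-response; the caller never blocks waiting for the reply
  implicit val timeout: Timeout = Timeout(5.seconds)
  val reply: Future[Any] = responder ? Ping("hello again")
  import system.dispatcher
  reply.foreach(println)
}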

Reactive components

At each logical layer of the architecture, expect to find the following Lightbend platform components and microservice components that your team employs or develops:

[Figure: Lightbend platform components and microservice components at each logical layer of the architecture]

For example, an Akka-based persistent actor looks like this:

import akka.persistence.PersistentActor

// Supporting command, event, result, and state types; their shapes are inferred from the usage below.
case class CreateProduct(name: String, description: String, requestDiscussion: Boolean)
case class ProductCreated(productId: String, name: String, description: String)
case class CreateProductResult(productId: String, name: String, description: String, requestDiscussion: Boolean)
case class ProductState(name: String, description: String, discussionRequested: Boolean, discussionId: Option[String])

class Product(productId: String) extends PersistentActor {
  override def persistenceId = productId

  var state: Option[ProductState] = None

  // Command side: persist the resulting event, update state, then reply to the sender.
  override def receiveCommand: Receive = {
    case command: CreateProduct =>
      val event = ProductCreated(productId, command.name, command.description)
      persist(event) { persistedEvent =>
        updateWith(persistedEvent)
        sender ! CreateProductResult(
          productId,
          command.name,
          command.description,
          command.requestDiscussion)
      }
    // ...
  }

  // Recovery side: replay past events to reconstitute the entity's state.
  override def receiveRecover: Receive = {
    case event: ProductCreated => updateWith(event)
    // ...
  }

  def updateWith(event: ProductCreated): Unit = {
    state = Some(ProductState(event.name, event.description, false, None))
  }
}

This is a component based on event sourcing. The Product entity receives commands and emits events. The events are used to constitute the state of the entity, both as the commands are processed and when the entity is stopped and removed from memory and then recovered from its past events.

Now with those events we can derive other benefits. First, we can project the events into the views that users need to see, by creating use-case-optimized queries. Second, the events can be published to the broader range of microservices that must react to and integrate with the event-originating microservice.
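
For the query side, here is a minimal projection sketch using Akka Persistence Query; the LevelDB read journal, the persistence id, and the view update are assumptions made for illustration:

import akka.actor.ActorSystem
import akka.persistence.query.PersistenceQuery
import akka.persistence.query.journal.leveldb.scaladsl.LeveldbReadJournal
import akka.stream.ActorMaterializer

object ProductProjection extends App {
  implicit val system: ActorSystem = ActorSystem("catalog")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  val readJournal =
    PersistenceQuery(system).readJournalFor[LeveldbReadJournal](LeveldbReadJournal.Identifier)

  // Stream the Product's persisted events and fold them into a use-case-optimized view.
  readJournal
    .eventsByPersistenceId("p-12345", 0L, Long.MaxValue)
    .runForeach { envelope =>
      envelope.event match {
        case ProductCreated(id, name, _) =>
          // Update a query-side view here, e.g. a catalog listing read model.
          println(s"projecting product $id: $name")
        case _ => // ignore events this view does not care about
      }
    }
}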


What I am describing here is an event-driven architecture, one that is entirely reactive. The actors within each microservice make each one reactive, and the microservices that consume events from the others are also reactive. This is where eventual consistency in separate transactions comes into play. When other microservices see the events, they create and/or modify the state that they own in their own context, making the whole system agree over time.
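
As a sketch of such a consumer (the DiscussionStarter actor and the Discussion type are invented for the example, and the messaging channel that delivers the event is left out), a second Bounded Context might react to ProductCreated by updating only the state it owns:

import akka.actor.Actor

// State owned by this other Bounded Context.
case class Discussion(productId: String, topic: String)

class DiscussionStarter extends Actor {
  private var discussions = Map.empty[String, Discussion]

  def receive: Receive = {
    case ProductCreated(productId, name, _) =>
      // React in this context's own transaction: start a discussion for the new product.
      discussions += (productId -> Discussion(productId, name))
  }
}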

I have written three books on developing these kinds of microservices based on Domain-Driven Design, and I also teach two workshops on these topics. I encourage you to read my books and contact me for more information on how you can put this practical microservices architecture to work in your enterprise.

 


To read more about reactive programming, download the latest issue of JAX Magazine:

Reactive programming means different things to different people and we are not trying to reinvent the wheel or define this concept. Instead, we are allowing our authors to prove how Scala, Lagom, Spark, Akka and Play coexist and work together to create a reactive universe.

If the definition “stream of events” does not satisfy your thirst for knowledge, get ready to find out what reactive programming means to our experts in Scala, Lagom, Spark, Akka and Play. Plus, we talked to Scala creator Martin Odersky about the impending Scala 2.12, the current state of this programming language and the technical innovations that await us.

Thirsty for more? Open the magazine and see what we have prepared for you.

Author

Vaughn Vernon

Vaughn is a veteran software craftsman, with more than 30 years of experience in software design, development, and architecture. He is a thought leader in simplifying software design and implementation using innovative methods. Vaughn is the author of the books “Implementing Domain-Driven Design”, “Reactive Messaging Patterns with the Actor Model”, and “Domain-Driven Design Distilled”, all published by Addison-Wesley. Vaughn speaks and presents at conferences internationally, he consults, and he has taught his IDDD Workshop and Go Reactive with Akka workshop to hundreds of developers around the globe. He is also the founder of For Comprehension, a training and consulting company, found at http://ForComprehension.com. You may follow him on Twitter: @VaughnVernon

