Tutorial

Building Microservices Applications on AWS

Matthias Jung, Sascha Möllering

In this article, we briefly summarize the common characteristics of Microservices, discuss the main challenges of building Microservices, and describe how product teams can leverage AWS to overcome those challenges.

Microservices are an architectural and organizational approach to software development that speeds up deployment cycles, fosters innovation, and improves the maintainability and scalability of software applications. To this end, software is decomposed into small, independent services that communicate over well-defined APIs and are owned by small, self-contained teams.

The term “Microservices” has received increased attention in the past few years. Microservice architectures are not a completely new approach to software engineering, but rather a collection and combination of various successful and proven concepts such as object-oriented methodologies, agile software development, service-oriented architectures, API-first design, and Continuous Delivery.

Given that the term Microservices is an umbrella term, it is hard to pin down a precise definition. However, all Microservice architectures share some common characteristics:

  • Decentralized: Microservice architectures are distributed systems with decentralized data management. They don’t rely on a unifying schema in a central database. Each Microservice has its own view on data models. Those systems are decentralized also in the way they are developed, deployed, managed, and operated.
  • Independent: different components in a Microservices architecture can be changed, upgraded, or replaced independently and without affecting the functioning of other components. Similarly, the teams responsible for different Microservices are enabled to act independently from each other.
  • Do one thing well: every component is designed around a set of capabilities and with a focus on a specific domain. As soon as a component reaches a certain complexity, it might be a candidate to become its own Microservice.
  • Polyglot: Microservice architectures don’t follow a “one size fits all” approach. Teams have the freedom to choose the best platform for their specific problems. As a consequence, Microservice architectures are usually heterogeneous with regard to operating systems, programming languages, data stores, and tools – an approach called polyglot persistence and programming.
  • Black Box: individual components of Microservices are designed as a black box, i.e. they hide the details of their complexity from other components. Any communication between services happens via well-defined APIs. Generally, they avoid any kind of hidden communication that would impair the independence of the component such as sharing libraries or data.
  • You build it, you run it: typically, the team responsible for building a service is also responsible for operating and maintaining it in production – this principle is also known as DevOps. In addition to the benefit that teams can progress independently at their own pace, this also brings developers into close contact with the actual users of their software and improves their understanding of the customers’ needs and expectations. The organizational aspect of Microservices shouldn’t be underestimated: according to Conway’s law [1], system design is largely influenced by the organizational structure of the teams that build the system.

Benefits of Microservices

The distributed nature of Microservices fosters an organization of small independent teams that take ownership of their services. They are enabled to develop and deploy independently of and in parallel to other teams, which speeds up both development and deployment processes. The reduced complexity of the code base each team works on and the reduced number of dependencies minimize code conflicts, facilitate change, shorten test cycles, and eventually improve the time to market for new features and services. Feedback from customers can be integrated much faster into upcoming releases.

The fact that small teams can act autonomously and choose the appropriate technologies, frameworks and tools for their respective problem domains is an important driver for innovation. Responsibility and accountability foster a culture of ownership for services.

Establishing a DevOps culture by merging development and operational skills in the same group eliminates friction and conflicting goals. Agile processes no longer stop at deployment. Instead, the complete application life-cycle, from committing code to releasing it, can be automated. It becomes easy to try out new ideas and to roll back in case something doesn’t work. The low cost of failure creates a culture of change and innovation.

Organizing software engineering around Microservices can also improve the quality of code. The benefits of dividing software into small and well-defined modules are similar to those of object-oriented software engineering: improved reusability, composability, and maintainability of code.

Fine-grained decoupling is a best practice to build large scale systems. It’s a prerequisite for performance optimizations since it allows choosing the appropriate and optimal technologies for a specific service. Each service can be implemented with the appropriate programming languages and frameworks, leverage the optimal data persistence solution, and be fine-tuned with the best performing service configurations. Properly decoupled services can be scaled horizontally and independently from each other. Vertical scaling, i.e. running the same software on bigger machines, is limited by the capacity of individual servers and can incur downtime during the scaling process. Horizontal scaling, i.e. adding more servers to the existing pool, is highly dynamic and doesn’t run into limitations of individual servers. The scaling process can be completely automated. Furthermore, the resiliency of the application can be improved since failing components can be easily and automatically replaced.

Microservice architectures also make it easier to implement failure isolation. Techniques like health checking, caching, bulkheads, or circuit breakers help reduce the blast radius of a failing component and improve the overall availability of a given application.
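To make the circuit breaker idea more concrete, the following deliberately simplified sketch in Java (not tied to any particular library; the class name, threshold, and fallback handling are illustrative) short-circuits calls to a downstream service after a number of consecutive failures and returns a fallback during a cool-down period:

public class SimpleCircuitBreaker {
    private final int failureThreshold;
    private final long openMillis;
    private int consecutiveFailures = 0;
    private long openedAt = 0;

    public SimpleCircuitBreaker(int failureThreshold, long openMillis) {
        this.failureThreshold = failureThreshold;
        this.openMillis = openMillis;
    }

    public synchronized <T> T call(java.util.concurrent.Callable<T> action, T fallback) {
        // While the breaker is open, fail fast and return the fallback immediately.
        if (consecutiveFailures >= failureThreshold
                && System.currentTimeMillis() - openedAt < openMillis) {
            return fallback;
        }
        try {
            T result = action.call();
            consecutiveFailures = 0; // a successful call closes the breaker again
            return result;
        } catch (Exception e) {
            consecutiveFailures++;
            openedAt = System.currentTimeMillis();
            return fallback;
        }
    }
}

Production systems typically rely on a proven implementation such as Netflix Hystrix rather than a hand-rolled breaker.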

Simple Microservice architecture on AWS

[Figure 1: Reference architecture for a typical Microservice on AWS]

Figure 1 shows a reference architecture for a typical Microservice on AWS. The architecture is organized into four layers: Content Delivery, API Layer, Application Layer, and Persistence Layer.

The purpose of the content delivery layer is to accelerate the delivery of static and dynamic content and potentially off-load the backend servers of the API layer. Since clients of a Microservice are served from the closest edge location and get responses either from a cache or a proxy server with optimized connections to the origin, latencies can be significantly reduced. Microservices running close to each other don’t benefit from a CDN but might implement other caching mechanisms to reduce chattiness and minimize latencies.

The API layer is the central entry point for all client requests and hides the application logic behind a set of programmatic interfaces, typically an HTTP REST API. The API Layer is responsible for accepting and processing calls from clients and might implement functionality such as traffic management, request filtering, routing, caching, or authentication and authorization. Many AWS customers use Elastic Load Balancing (ELB) together with Amazon Elastic Compute Cloud (Amazon EC2) and Auto Scaling to implement an API Layer.

The application layer implements the actual application logic. Similar to the API Layer, it can be implemented using ELB, Auto-Scaling, and EC2.

The persistence layer centralizes the functionality needed to make data persistent. Encapsulating this functionality in a separate layer helps to keep the state out of the application layer and makes it easier to achieve horizontal scaling and fault-tolerance of the application layer. Static content is typically stored on Amazon S3 and delivered by Amazon CloudFront. Popular stores for session data are in-memory caches such as Memcached or Redis. AWS offers both technologies as part of the managed Amazon ElastiCache service. Putting a cache between application servers and database is a common mechanism to alleviate read load from the database which in turn may allow resources to be used to support more writes. Caches can also improve latency.
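As an illustration of the caching pattern described above, the following sketch shows a cache-aside read in Java using the Jedis client against a Redis endpoint (the endpoint, key schema, TTL, and loadFromDatabase helper are placeholders, not part of the original architecture):

import redis.clients.jedis.Jedis;

public class SessionCache {
    // The ElastiCache Redis endpoint is an assumption for this sketch.
    private final Jedis jedis = new Jedis("my-cluster.cache.amazonaws.com", 6379);

    public String getProfile(String userId) {
        String key = "profile:" + userId;
        String cached = jedis.get(key);            // 1. try the cache first
        if (cached != null) {
            return cached;
        }
        String profile = loadFromDatabase(userId); // 2. cache miss: read from the database
        jedis.setex(key, 300, profile);            // 3. populate the cache with a 5 minute TTL
        return profile;
    }

    private String loadFromDatabase(String userId) {
        // Placeholder for the actual database read (e.g. via JDBC or the AWS SDK).
        return "{\"userId\":\"" + userId + "\"}";
    }
}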

Relational databases are still very popular to store structured data and business objects. AWS offers six database engines (Microsoft SQL Server, Oracle, MySQL, MariaDB, PostgreSQL, and Amazon Aurora) as managed services.

[Figure 2: A Microservice architecture built entirely from managed services]

Figure 2 shows the architecture of a Microservice in which all layers are built from managed services. This eliminates the architectural burden of designing for scale and high availability and removes the operational effort of running and monitoring the Microservice’s underlying infrastructure.

Reducing complexity

The architecture above is already highly automated. Nevertheless, there is still room to further reduce the operational effort needed to run, maintain, and monitor the application.

Architecting, continuously improving, deploying, monitoring, and maintaining an API Layer can be a time-consuming task. Sometimes different versions of an API need to be run in parallel to assure backward compatibility for all clients. Different stages (such as dev, test, and prod) along the development cycle further multiply the operational effort.

Access authorization is a critical feature of every API, but it is usually complex to build and often involves repetitive work. When an API is published and becomes successful, the next challenge is to manage, monitor, and monetize the ecosystem of third-party developers utilizing the API.

Other important features and challenges include throttling requests to protect the backend, caching API responses, request and response transformation, and generating API definitions and documentation with tools such as Swagger [2].

Amazon API Gateway addresses those challenges and reduces the operational complexity of the API Layer. Amazon API Gateway allows customers to create their APIs programmatically, by importing Swagger definitions, or with a few clicks in the AWS Management Console. API Gateway serves as a front door to any web application running on Amazon EC2, on Amazon ECS, on AWS Lambda, or in any on-premises environment. In a nutshell: it allows running APIs without managing servers. Figure 2 visualizes how API Gateway handles API calls and interacts with other components. Requests from mobile devices, websites, or other backend services are routed to the closest Amazon CloudFront PoP to minimize latency and provide the optimum user experience. API Gateway first checks whether the request is in the cache and – if no cached record is available – forwards it to the backend for processing. Once the backend has processed the request, API call metrics are logged in Amazon CloudWatch and the content is returned to the client.
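To illustrate the programmatic option mentioned above, the following sketch creates an empty REST API with the AWS SDK for Java; the API name and description are illustrative, and resources, methods, and integrations would still have to be added afterwards (or imported from a Swagger definition):

import com.amazonaws.services.apigateway.AmazonApiGateway;
import com.amazonaws.services.apigateway.AmazonApiGatewayClientBuilder;
import com.amazonaws.services.apigateway.model.CreateRestApiRequest;
import com.amazonaws.services.apigateway.model.CreateRestApiResult;

public class CreateApiExample {
    public static void main(String[] args) {
        // Uses the default credential chain and region configuration.
        AmazonApiGateway apiGateway = AmazonApiGatewayClientBuilder.defaultClient();

        // Create an empty REST API; resources, methods, and integrations
        // are added in subsequent calls or via a Swagger import.
        CreateRestApiResult api = apiGateway.createRestApi(
                new CreateRestApiRequest()
                        .withName("orders-api")
                        .withDescription("API layer for the orders Microservice"));

        System.out.println("Created API with id " + api.getId());
    }
}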

AWS provides several options that facilitate deployment and further reduce the operational complexity of running and maintaining application services compared to operating ELB, Auto Scaling, and EC2 yourself. One option is AWS Elastic Beanstalk. The main idea behind Elastic Beanstalk is that developers simply upload their code and let Elastic Beanstalk automatically handle infrastructure provisioning and code deployment. Important infrastructure capabilities such as auto scaling, load balancing, and monitoring are part of the service.

AWS Elastic Beanstalk supports a large variety of platforms such as Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker, with familiar web servers such as Apache, Nginx, Phusion Passenger, and IIS.

Another approach to reducing the operational effort of deployment is container-based deployment. Container technologies like Docker have gained a lot of popularity in recent years due to several benefits:

  • Flexibility: containerization encourages decomposing applications into independent, fine-grained components, which makes it a perfect fit for Microservice architectures.
  • Efficiency: containers allow the explicit specification of resource requirements (CPU, RAM), which makes it easy to distribute containers across underlying hosts and significantly improve resource usage. Containers also have only a small performance overhead compared to virtualized servers and efficiently share resources on the underlying OS.
  • Speed: containers are well-defined and reusable units of work with characteristics such as immutability, explicit versioning and easy rollback, fine granularity and isolation – all characteristics that help to significantly increase developer productivity and operational efficiency.

Amazon ECS eliminates the need to install, operate, and scale your own cluster management infrastructure. With simple API calls, you can launch and stop Docker-enabled applications, query the complete state of your cluster, and access many familiar features like security groups, Elastic Load Balancing, EBS volumes, and IAM roles.
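As a hedged example of such an API call, the following Java sketch launches a task on an ECS cluster using the AWS SDK; the cluster name and task definition are placeholders:

import com.amazonaws.services.ecs.AmazonECS;
import com.amazonaws.services.ecs.AmazonECSClientBuilder;
import com.amazonaws.services.ecs.model.RunTaskRequest;
import com.amazonaws.services.ecs.model.RunTaskResult;

public class RunTaskExample {
    public static void main(String[] args) {
        AmazonECS ecs = AmazonECSClientBuilder.defaultClient();

        // Launch one task from a registered task definition on the given cluster.
        RunTaskResult result = ecs.runTask(new RunTaskRequest()
                .withCluster("microservices-cluster")
                .withTaskDefinition("order-service:1")
                .withCount(1));

        result.getTasks().forEach(task ->
                System.out.println("Started task " + task.getTaskArn()));
    }
}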

Serverless compute

The ultimate way to eliminate operational complexity is to stop using servers altogether. Customers simply upload their code and let AWS Lambda take care of everything required to run and scale the execution with high availability. Lambda supports several programming languages and can be triggered from other AWS services or be called directly from any web or mobile app. Lambda is highly integrated with Amazon API Gateway, and the possibility to make synchronous calls from Amazon API Gateway to AWS Lambda enables the creation of fully serverless applications.
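A minimal Lambda function in Java implements the RequestHandler interface. The sketch below assumes a simple integration in which the incoming request is passed in as a map; the class name and response are illustrative:

import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class HelloHandler implements RequestHandler<Map<String, Object>, String> {

    @Override
    public String handleRequest(Map<String, Object> input, Context context) {
        // Log the raw request and return a trivial response.
        context.getLogger().log("Received request: " + input);
        return "Hello from Lambda";
    }
}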

While relational databases are still very popular and well understood, they are not designed for endless scale, which can make it hard and time-consuming to apply techniques that support a very high number of queries. NoSQL databases have been designed to favor scalability, performance, and availability over the consistency of relational databases. One important element is that NoSQL databases typically do not enforce a strict schema. Data is distributed over partitions that can be scaled horizontally and is retrieved via partition keys.

Since individual Microservices are designed to do one thing well, they typically have a simplified data model that may well be suited to NoSQL persistence. You can use Amazon DynamoDB to create a database table that can store and retrieve any amount of data, and serve any level of request traffic. Amazon DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity specified by the customer and the amount of data stored, while maintaining consistent and fast performance.
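The following sketch uses the DynamoDB document API from the AWS SDK for Java to write and read an item via its partition key; the table name, key, and attributes are illustrative and assume the table already exists:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;

public class OrderStore {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
        DynamoDB dynamoDB = new DynamoDB(client);

        // The "Orders" table with partition key "orderId" is assumed to exist.
        Table table = dynamoDB.getTable("Orders");

        // Write an item keyed by its partition key.
        table.putItem(new Item()
                .withPrimaryKey("orderId", "1234")
                .withString("status", "NEW"));

        // Read it back via the same partition key.
        Item order = table.getItem("orderId", "1234");
        System.out.println(order.toJSONPretty());
    }
}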

Orchestrating deployments for a serverless architecture can be a challenging task because often several managed services have to be updated together. If you want to add new functionality to an existing API, the API in API Gateway has to be changed and new Lambda functions have to be deployed. Frameworks like Chalice [3] or Serverless [4] greatly simplify the deployment process. The following example shows how to set up and deploy a simple application managed by the Serverless Framework:

npm install serverless -g
serverless create --template aws-nodejs
serverless deploy

These three commands install the Serverless Framework using npm, create a simple project based on Node.js, and deploy the project to AWS. Serverless uses Amazon CloudFormation under the hood to store all necessary configuration: each resource created by Serverless is created through a central CloudFormation template. For example, if you want to add an additional S3 bucket for images, this can be configured in the central serverless.yml file:

service: lambda-images
provider: aws
functions:
  ...

resources:
  Resources:
    ImagesBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: images-lambda
        # Or you could reference an environment variable
        # BucketName: ${env:BUCKET_NAME}

With a simple serverless deploy, this custom resource will be deployed for you.

Service discovery

One of the primary challenges with Microservice architectures is allowing services to discover and interact with each other. The distributed characteristics of Microservice architectures not only make it hard for services to communicate but also present interesting challenges such as checking the health of those systems and announcing when new applications come online. In addition, we must decide how and where to store meta-information, such as configuration data, that can be used by applications. Below we explore several techniques for performing service discovery on AWS for Microservice-based architectures.

A fairly straightforward approach is to use Elastic Load Balancing (ELB). One of the advantages of this approach is that it provides health checks and automatic registration and de-registration of backend services in failure cases. Combining these features with DNS capabilities, it is possible to build a simple service discovery solution with minimal effort and at low cost. You can configure a custom domain name for each Microservice and associate the domain name with the ELB’s DNS name via a CNAME entry. The DNS names of the service endpoints are then published across the other applications that need access. With the new Application Load Balancer it is possible to create a listener with rules that forward requests based on the URL path. This is known as path-based routing. You can route traffic to multiple backend services using path-based routing; for example, you can route general requests to one target group and requests to render images to another target group.
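As a sketch of how such a rule could be created with the AWS SDK for Java, the example below forwards requests matching /images/* to a separate target group; the listener and target group ARNs are placeholders:

import com.amazonaws.services.elasticloadbalancingv2.AmazonElasticLoadBalancing;
import com.amazonaws.services.elasticloadbalancingv2.AmazonElasticLoadBalancingClientBuilder;
import com.amazonaws.services.elasticloadbalancingv2.model.Action;
import com.amazonaws.services.elasticloadbalancingv2.model.ActionTypeEnum;
import com.amazonaws.services.elasticloadbalancingv2.model.CreateRuleRequest;
import com.amazonaws.services.elasticloadbalancingv2.model.RuleCondition;

public class PathBasedRoutingExample {
    public static void main(String[] args) {
        AmazonElasticLoadBalancing elb = AmazonElasticLoadBalancingClientBuilder.defaultClient();

        // Listener and target group ARNs are placeholders for this sketch.
        String listenerArn = "arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/my-alb/abc/def";
        String imagesTargetGroupArn = "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/images/ghi";

        // Forward all requests whose path matches /images/* to the image-rendering service.
        elb.createRule(new CreateRuleRequest()
                .withListenerArn(listenerArn)
                .withPriority(10)
                .withConditions(new RuleCondition()
                        .withField("path-pattern")
                        .withValues("/images/*"))
                .withActions(new Action()
                        .withType(ActionTypeEnum.Forward)
                        .withTargetGroupArn(imagesTargetGroupArn)));
    }
}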

Amazon Route 53 is another option for holding service discovery information. Route 53 provides several features that can be leveraged for service discovery. The Private Hosted Zones feature allows you to hold DNS record sets for a domain or subdomains and restrict access to specific Virtual Private Clouds (VPCs). You would register IP addresses, hostnames, and port information as SRV records for a specific Microservice and restrict access to the VPCs of the relevant client Microservices. You can also configure health checks that regularly verify the status of the application and potentially trigger a fail-over among resource records.
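On the client side, a service could resolve such an SRV record with the JDK’s built-in JNDI DNS provider; the record name below is a placeholder for a record in a private hosted zone:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class SrvLookupExample {
    public static void main(String[] args) throws Exception {
        // Use the JDK's DNS provider via JNDI.
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        DirContext ctx = new InitialDirContext(env);

        // The record name is a placeholder for an SRV record in a private hosted zone.
        Attribute srv = ctx.getAttributes("_orders._tcp.service.internal", new String[] {"SRV"})
                           .get("SRV");

        // Each value has the form "priority weight port target".
        for (int i = 0; i < srv.size(); i++) {
            System.out.println(srv.get(i));
        }
    }
}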

A different approach is to use a key/value store for the discovery of Microservices. Although this approach takes more time to build than the others, it provides more flexibility and extensibility and doesn’t suffer from DNS caching issues. It also works naturally with client-side load-balancing techniques such as Netflix Ribbon. Client-side load balancing can help eliminate bottlenecks and simplify management.

Finally, many people use service discovery software like HashiCorp Consul or Netflix Eureka. Consul makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface. Additionally, it offers failure detection using health checking which prevents routing requests to unhealthy hosts. The following blog post shows how to set up service discovery via Consul with Amazon ECS [5].

Distributed monitoring

A Microservice architecture likely consists of many “moving parts” that have to be monitored. You can use Amazon CloudWatch to collect and track metrics, centralize and monitor log files, set alarms, and automatically react to changes in your AWS environment. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. It provides a reliable, scalable, and flexible monitoring solution that you can start using within minutes, so you no longer need to set up, manage, and scale your own monitoring systems and infrastructure.

In a Microservice architecture, the capability of monitoring custom metrics with Amazon CloudWatch is an additional benefit: developers can decide which metrics should be collected for each service, and dynamic scaling can be implemented based on those custom metrics (a short sketch of publishing such a metric follows below).

Consistent logging is critical for troubleshooting and identifying issues. Microservices allow the production of many more releases than ever before and encourage engineering teams to run experiments on new features in production. Understanding customer impact is crucial to improving an application gradually.
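As referenced above, publishing a custom metric is a single call to the CloudWatch API. The following Java sketch uses the AWS SDK; the namespace, metric name, and dimension are illustrative:

import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.PutMetricDataRequest;
import com.amazonaws.services.cloudwatch.model.StandardUnit;

public class CustomMetricExample {
    public static void main(String[] args) {
        AmazonCloudWatch cloudWatch = AmazonCloudWatchClientBuilder.defaultClient();

        // Publish a single data point for a business metric of this service.
        MetricDatum datum = new MetricDatum()
                .withMetricName("OrdersProcessed")
                .withUnit(StandardUnit.Count)
                .withValue(1.0)
                .withDimensions(new Dimension()
                        .withName("Service")
                        .withValue("order-service"));

        cloudWatch.putMetricData(new PutMetricDataRequest()
                .withNamespace("MyMicroservices")
                .withMetricData(datum));
    }
}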

Storing logs in one central place is essential for debugging and for getting an aggregated view of a distributed system. In AWS you can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. Amazon ECS includes support for the awslogs log driver, which allows the centralization of container logs in CloudWatch Logs. There are different options to centralize your log files; most AWS services already centralize log files out of the box, and the primary destinations for log files on AWS are Amazon S3 and Amazon CloudWatch Logs. Log files are a sensitive part of every system, and almost every process on a system generates them. A centralized logging solution aggregates all logs in a central location so that they can be searched and analyzed with tools like Amazon EMR or Amazon Redshift.

In many cases, a set of Microservices works together to handle a request. Imagine a complex system consisting of tens of Microservices in which an error occurs in one of the services in the call chain. Even if every Microservice logs properly and logs are consolidated in a central system, it can be very hard to find all relevant log messages.

The central idea of a correlation ID is that a specific ID is created when a user-facing service receives a request. This ID can be passed along to every other service, either in an HTTP header (e.g. a field like “x-correlation-id”) or in the payload, and it can be included in every log message to find all messages that are relevant to a specific request. In order to reconstruct the order of calls in the log files, a good approach is to also send a counter in the header that is incremented as the request flows through the architecture. Spring Cloud offers a correlation ID implementation called Sleuth [6] that can be used for distributed tracing, e.g. with Zipkin [7]. Zipkin is a distributed tracing system that helps gather the timing data needed to troubleshoot latency problems in Microservice architectures. Using Spring Cloud Sleuth is easy: just add Sleuth to the classpath of a Spring Boot application, and correlation data will be collected in the logs. An example HTTP handler whose log messages are enriched with correlation IDs by Sleuth could look like this:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class BaseController {
  private static final Logger log = LoggerFactory.getLogger(BaseController.class);

  @RequestMapping("/")
  public String base() {
    log.info("Handling base");
    // ... additional application logic ...
    return "Hello World";
  }
}
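If Sleuth is not an option, the same idea can be approximated manually. The following sketch is a servlet filter that reads the correlation ID from the incoming header (or generates one) and stores it in SLF4J’s MDC, so that the logging pattern can include it in every message; the class name and MDC key are illustrative:

import java.io.IOException;
import java.util.UUID;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.MDC;

public class CorrelationIdFilter implements Filter {

  @Override
  public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
      throws IOException, ServletException {
    // Reuse the incoming correlation ID or create a new one for user-facing requests.
    String correlationId = ((HttpServletRequest) request).getHeader("x-correlation-id");
    if (correlationId == null || correlationId.isEmpty()) {
      correlationId = UUID.randomUUID().toString();
    }
    MDC.put("correlationId", correlationId);
    try {
      chain.doFilter(request, response);
    } finally {
      MDC.remove("correlationId"); // don't leak the ID into unrelated requests
    }
  }

  @Override
  public void init(FilterConfig filterConfig) { }

  @Override
  public void destroy() { }
}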

Chattiness

By breaking monolithic applications into small Microservices, the communication overhead increases because Microservices have to talk to each other. In many implementations, REST over HTTP is used as the communication protocol. It is a fairly lightweight protocol, but high message volumes can cause issues. In some cases, it might make sense to consolidate services that send a lot of messages back and forth. If you find yourself in a situation where you consolidate more and more of your services just to reduce chattiness, you should review your problem domains and your domain model. For Microservices, it is quite common to use simple protocols like HTTP. Messages exchanged by services can be encoded in different ways, e.g. in a human-readable format like JSON or YAML or in an efficient binary format. HTTP is a synchronous protocol: the client sends a request and waits for a response. By using asynchronous I/O, the current thread of the client does not have to wait for the response but can do other work in the meantime.
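As a sketch of such a non-blocking call, the following example uses the HTTP client that ships with Java 11 and later (an assumption about the runtime; the service URL is a placeholder for another Microservice’s endpoint):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class AsyncCallExample {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();

        // The URL is a placeholder for another Microservice's endpoint.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://orders.service.internal/orders/1234"))
                .GET()
                .build();

        // sendAsync returns immediately; the calling thread is free to do other work
        // while the response is processed in the callback.
        CompletableFuture<Void> pending = client
                .sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenAccept(response -> System.out.println("Got response: " + response.body()));

        // ... do other work here ...

        pending.join(); // wait only when the result is actually needed
    }
}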

Conclusion

Microservice architectures are a distributed approach designed to overcome the limitations of traditional monolithic architectures. While Microservices help scale applications and organizations and improve cycle times, they also come with challenges that can add architectural complexity and operational burden.

AWS offers a large portfolio of managed services that help product teams build Microservice architectures and minimize architectural and operational complexity. This article has guided you through the relevant AWS services and shown how to implement typical patterns, such as service discovery, natively with AWS services.

Authors

Matthias Jung, Sascha Möllering

As a Solutions Architect at Amazon Web Services, Matthias has helped many German startups migrate their applications to the cloud and build highly reliable, scalable, and cost-efficient backend architectures. Prior to AWS, Matthias worked as a software engineer and architect and founded his own startup in the cloud security space. Matthias holds a PhD in Communication Systems and Network Protocols from the University of Nice (France) and a Master of Computer Science from the University of Mannheim.

Sascha Möllering works as a Solutions Architect at Amazon Web Services. He’s interested in automation, infrastructure as code, distributed computing, containers, and the JVM. Prior to AWS, Sascha worked as a software developer and software architect.

