Microservices made easy

MicroProfile, the microservice programming model made for Istio

Emily Jiang

Do you need a cloud-based platform for your microservices? In this article, Emily Jiang explores how the popular service mesh Istio can be used to harness the open source power of Eclipse MicroProfile to deploy microservices securely.

MicroProfile in a nutshell

MicroProfile is a fast-growing open community: a warm and friendly place for developers to come together and evolve a programming model for cloud-native microservices. Since it was established in June 2016, it has delivered 6 overall releases and 16 individual specification releases in less than 2 years. This page shows which application servers support MicroProfile at which version. Open Liberty is seen as one of the leading implementations of MicroProfile and is determined to implement MicroProfile's latest releases rapidly.

Below is the latest status of each individual specification:

The following specifications are in progress:

The cloud-native microservices created using MicroProfile can be freely deployed anywhere, including on a service mesh architecture such as Istio. In this article, we explore how a microservice built with MicroProfile functions on the Istio platform. First, let's look at Istio in a nutshell.

Istio in a nutshell

Cloud-native microservices are well-suited to be deployed to a cloud infrastructure. When there are many microservices, the communication among them needs to be orchestrated. This orchestration is managed by a so-called service mesh, a dedicated infrastructure layer that makes service-to-service communication fast, safe and reliable. It also provides discovery, load balancing, failure recovery, metrics and monitoring, and may also support A/B testing, canary releases, etc.

Istio is the most popular service mesh, designed to connect, manage and secure microservices. It is an open source project with an active community, which started from IBM, Google and Lyft. Istio 1.0 was released at the end of July, 2018.

Istio provides the following core functionalities:

  • Traffic management:
    • Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
    • Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
    • A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
  • Observability
    • Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
  • Security
    • Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.

Istio is platform-independent and designed to run in a variety of environments, such as Kubernetes, Mesos, etc. This article focuses on Istio running on Kubernetes.

Istio consists of a data plane and a control plane (see diagram below for Istio Architecture, taken from istio.io).

[Diagram: Istio architecture, taken from istio.io]

MicroProfile meets Istio

As mentioned in the previous section, MicroProfile offers

  • Config
  • Fault Tolerance: Retry, Circuit Breaker, Bulkhead, Timeout, Fallback
  • Health Check
  • Metrics
  • Open API
  • Open Tracing
  • Rest Client
  • JWT

Istio is capable of doing:

  • Fault Tolerance: Retry, Circuit Breaker, Limits on concurrent connections or requests, Timeouts
  • Metrics
  • Open Tracing
  • Fault Injection

At a quick glance, there is some overlap. Let's look at each individual MicroProfile specification and investigate how it can be used in Istio.

MicroProfile config in Istio

MicroProfile Config provides a solution to externalize the configuration. The default config sources include environment variables, system properties and microprofile-config.properties file on the classpath.
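The ordered lookup across these config sources can be illustrated with a few lines of plain Java. This is only a sketch of the idea (the real implementation uses ConfigSource objects with ordinals); the source list and property names here are made up for illustration.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

public class ConfigLookupSketch {
    // MicroProfile Config consults its sources in priority order:
    // system properties, then environment variables, then the
    // microprofile-config.properties file. A sketch of that lookup chain:
    static Optional<String> lookup(String name, List<Function<String, String>> sources) {
        for (Function<String, String> source : sources) {
            String value = source.apply(name);
            if (value != null) {
                return Optional.of(value); // first source that has the property wins
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        Map<String, String> fileProps = Map.of("greeting", "hello from file");
        List<Function<String, String>> sources = List.of(
            System::getProperty,   // highest ordinal
            System::getenv,
            fileProps::get);       // lowest ordinal
        System.out.println(lookup("greeting", sources).orElse("<unset>"));
    }
}
```

A higher-ordinal source (e.g. a system property set with `-Dgreeting=...`) would shadow the file value without any code change, which is the point of externalized configuration.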

The properties defined in a Kubernetes config map can be transformed into environment variables via the envFrom capability, as shown below.

kind: ConfigMap
apiVersion: v1
metadata:
  name: example-config
  namespace: default
data:
  EXAMPLE_PROPERTY_1: hello
  EXAMPLE_PROPERTY_2: world

# Use envFrom to load ConfigMaps into environment variables

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mydeployment
  labels:
    app: servicea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servicea
  template:
    metadata:
      labels:
        app: servicea
    spec:
      containers:
      - name: app
        image: microprofile/servicea:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: example-config

The config properties specified in the config map can then be injected into the microservice via the MicroProfile Config APIs.

@ApplicationScoped
public class Demo {

    @Inject
    @ConfigProperty(name = "example.property.1")
    String myProp1;

    @Inject
    @ConfigProperty(name = "example.property.2")
    String myProp2;

    public void echo() {
        System.out.println(myProp1 + myProp2);
    }
}

Note: You might have noticed that the Kubernetes ConfigMap above defines the properties EXAMPLE_PROPERTY_1 and EXAMPLE_PROPERTY_2, while the code snippet looks up example.property.1 and example.property.2. Why does this work?

According to IEEE Std 1003.1-2001 (POSIX), environment variable names used by the utilities in the Shell and Utilities volume consist solely of uppercase letters, digits, and the '_' (underscore) from the Portable Character Set, and do not begin with a digit. Other characters may be permitted by an implementation, and applications shall tolerate the presence of such names; however, some shells might not support anything besides letters, numbers, and underscores.

From version 1.3 onwards, MicroProfile Config directly maps any non-alphanumeric character in a property name (e.g. ".", which is invalid in environment variable names on some operating systems) to `_`. In this example, the configmap property names EXAMPLE_PROPERTY_1 and EXAMPLE_PROPERTY_2 are the mapped names of example.property.1 and example.property.2.
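The mapping rule can be shown in a few lines of plain Java. This is a sketch of one of the name variants the spec looks up (non-alphanumeric characters replaced by underscores, then upper-cased), not the MicroProfile implementation itself:

```java
public class EnvNameMapping {
    // MicroProfile Config 1.3+ looks up an environment variable by, among
    // other variants, replacing every non-alphanumeric character in the
    // property name with '_' and upper-casing the result.
    static String toEnvVariant(String propertyName) {
        return propertyName.replaceAll("[^A-Za-z0-9]", "_").toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(toEnvVariant("example.property.1")); // EXAMPLE_PROPERTY_1
    }
}
```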

MicroProfile health check in Istio

In a service mesh architecture, each pod has a lifecycle, usually managed by a Kubernetes cluster. Kubernetes needs to know when to kill a pod, and Istio needs to know when to route requests to a pod. In short, knowing each pod's health status is essential. A pod's health status is measured in terms of liveness and readiness.

Liveness

Many microservices run for long periods of time, and they might eventually transition to a broken state from which they cannot recover except by being restarted. This is the liveness aspect of the lifecycle.

Readiness

Sometimes, microservices are temporarily unable to serve traffic. For example, a microservice might need to load large data or configuration files during startup.

MicroProfile Health Check reports whether the microservice is live or ready. It exposes a /health endpoint; invoking the endpoint returns either UP (healthy) or DOWN (unhealthy).
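To make the endpoint concrete, here is a minimal sketch of a /health endpoint using only the JDK's built-in HTTP server. It mimics the shape of a MicroProfile Health response payload; it does not use the MicroProfile Health API itself, and the payload fields are illustrative.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthEndpoint {
    // Start a tiny HTTP server whose /health endpoint reports UP,
    // mimicking the payload shape of a MicroProfile Health response.
    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "{\"outcome\":\"UP\",\"checks\":[]}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(0); // 0 = pick any free port
        System.out.println("Health endpoint on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

A Kubernetes probe hitting this endpoint would see HTTP 200 with outcome UP, which is exactly the signal the liveness/readiness machinery consumes.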

Health Check of microservices in Istio

A service mesh such as Istio can utilize the readiness and liveness status from the underlying platform, such as Kubernetes. Kubernetes provides liveness and readiness probes to detect and remedy such situations: it checks the pods frequently, and if a pod is not live, it destroys the pod and starts a new one. If the application is not ready, Kubernetes does not kill it, but does not send it requests either.

Microservices in Istio can use the endpoint exposed by MicroProfile Health for their liveness probe, so that Kubernetes can decide whether to destroy the pod. Below is the configuration for a liveness probe. Any return code greater than or equal to 200 and less than 400 indicates success; any other code indicates failure, which will cause the pod to be destroyed.

livenessProbe:
  exec:
    command:
    - curl
    - -f
    - http://localhost:8080/health
  initialDelaySeconds: 10
  periodSeconds: 10
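A readiness probe can be declared the same way against the same /health endpoint. Here is a minimal sketch; the port and timing values are assumptions for illustration:

```yaml
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 5
```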

MicroProfile metrics in Istio

MicroProfile Metrics provides a unified way to export telemetry to management agents, along with APIs that microservice developers can use to add their own telemetry data. For instance, the following metric holds the number of systems in the inventory.

@Gauge(unit = MetricUnits.NONE, name = "inventorySizeGauge", absolute = true,
       description = "Number of systems in the inventory")
public int getTotal() {
    return invList.getSystems().size();
}

MicroProfile Metrics can provide application-specific metrics over and above what Istio gathers. It is complementary to Istio telemetry.

MicroProfile Open API in Istio

In a service mesh, it is important to be able to view the capability of each service so that the service can be discovered. MicroProfile Open API comes to the rescue, with the aim of providing a set of Java interfaces and programming models which allow Java developers to natively produce OpenAPI v3 documents for their JAX-RS applications.

@GET
@Path("/{hostname}")
@Produces(MediaType.APPLICATION_JSON)
@APIResponses(value = {
    @APIResponse(
        responseCode = "404",
        description = "Missing description",
        content = @Content(mediaType = "text/plain")),
    @APIResponse(
        responseCode = "200",
        description = "JVM system properties of a particular host.",
        content = @Content(mediaType = "application/json",
            schema = @Schema(implementation = Properties.class))) })
@Operation(
    summary = "Get JVM system properties for particular host",
    description = "Retrieves and returns the JVM system properties from the system "
        + "service running on the particular host.")
public Response getPropertiesForHost(
    @Parameter(
        description = "The host for whom to retrieve the JVM system properties for.",
        required = true,
        example = "foo",
        schema = @Schema(type = SchemaType.STRING))
    @PathParam("hostname") String hostname) {

    // Get properties for host
    Properties props = manager.get(hostname);
    if (props == null) {
        return Response.status(Response.Status.NOT_FOUND)
                       .entity("ERROR: Unknown hostname or the system service may "
                               + "not be running on " + hostname)
                       .build();
    }

    // Add host to inventory
    manager.add(hostname, props);
    return Response.ok(props).build();
}

The APIs can be viewed via the /openapi/ui endpoint.

[Screenshot: the OpenAPI UI]

This specification offers a great addition to Istio, as DevOps can use it to find out the details of each JAX-RS endpoint.

MicroProfile Open Tracing in Istio

In a service mesh architecture, an essential need is to trace the service invocations. A complete chain from the client to the final service will help visualize service invocation hops. If there is a problem, this can be used to identify the faulty service.

MicroProfile Open Tracing helps to achieve this goal. This specification defines behaviors and an API for accessing an OpenTracing-compliant Tracer object within JAX-RS microservices. All incoming and outgoing requests have OpenTracing spans automatically created. It works with tracer implementations such as Zipkin and Jaeger. Istio provides Jaeger and also works with Zipkin.

Istio requires microservices to propagate the following 7 headers, which are automatically propagated by MicroProfile Open Tracing, saving microservice developers from writing boilerplate code.

  • x-request-id
  • x-b3-traceid
  • x-b3-spanid
  • x-b3-parentspanid
  • x-b3-sampled
  • x-b3-flags
  • x-ot-span-context

MicroProfile JWT in Istio

Securing service-to-service communication is an essential requirement in a service mesh architecture. MicroProfile JWT defines a means to secure service-to-service communication, strongly related to RESTful security. One of the main strategies for propagating the security state from clients to services, and from service to service, is the use of security tokens. MicroProfile JWT uses OpenID Connect-based JSON Web Tokens (JWT) for role-based access control (RBAC) of microservice endpoints. The JWT token is used to authenticate the user and to authorize against the roles declared via the @RolesAllowed, @PermitAll and @DenyAll annotations defined in JSR-250.

// The JWT of the current caller. Since this is a request scoped resource, the
// JWT will be injected for each JAX-RS request.
@Inject
private JsonWebToken jwtPrincipal;

@GET
@RolesAllowed({ "admin", "user" })
@Path("/username")
public Response getJwtUsername() {
    return Response.ok(this.jwtPrincipal.getName()).build();
}

Istio security provides two types of authentications:

  • Transport authentication (service-to-service authentication): verifies the direct client making the connection, using mutual TLS as a full-stack solution for transport authentication. This can be used without any microservice code changes.
  • Origin authentication (end-user authentication): verifies the original client making the request, such as an end user or device. Istio only supports JWT for origin authentication. Istio can add this as an extra authentication layer alongside MicroProfile JWT authentication; origin authentication is particularly useful when microservices have no security embedded.

MicroProfile Rest Client in Istio

In Istio, service-to-service communication often goes via JAX-RS. One issue with JAX-RS is its lack of a type-safe client. To fix this, MicroProfile Rest Client defines a type-safe client programming model and also provides better validation for misconfigured JAX-RS clients.

@Dependent
@RegisterRestClient
@Path("/properties")
public interface SystemClient {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Properties getProperties() throws UnknownUrlException, ProcessingException;
}

Based on the SystemClient interface above, a client implementation is automatically built and generated, which sets up the client and connects to the remote service.

When the getProperties() method is invoked, the SystemClient instance sends a GET request to the /properties endpoint.

@ApplicationScoped
public class InventoryManager {

    @Inject
    @RestClient
    private SystemClient defaultRestClient;

    public Properties get(String hostname) {
        try {
            return defaultRestClient.getProperties();
        } catch (UnknownUrlException e) {
            System.err.println("The given URL is unreachable.");
        } catch (ProcessingException ex) {
            handleProcessingException(ex);
        }
        return null;
    }
}

The @Inject and @RestClient annotations inject an instance of SystemClient, called defaultRestClient, into the InventoryManager class; this instance is a type-safe client.

This specification is used inside the microservice and does not surface anything beyond it. Therefore, it has no direct interaction or conflict with Istio.

MicroProfile Fault Tolerance in Istio

Building resilient microservices is key to microservice design. Eclipse MicroProfile Fault Tolerance provides a simple, flexible and configurable solution for building fault-tolerant microservices. It offers the following fault tolerance policies:

  1. Timeout: define a duration for timeout.
  2. Retry: define criteria on when to retry.
  3. Bulkhead: isolate failures in part of the system so that the rest of the system can still function.
  4. CircuitBreaker: offer a way to fail fast by automatically failing execution, to prevent system overload and indefinite waits or timeouts by clients.
  5. Fallback: provide an alternative solution for a failed execution.

The main design principle is to separate the execution logic from the execution itself, so that the execution can be configured with fault tolerance policies.
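To illustrate that separation, here is a plain-Java sketch of a timeout policy wrapping an arbitrary piece of execution logic. This is illustrative only, not how the MicroProfile implementation works internally; the names and pool setup are assumptions.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class TimeoutPolicy {
    // Daemon threads so the JVM can exit even if a task is still running.
    private static final ExecutorService POOL = Executors.newCachedThreadPool(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Run the supplier, but give up with a TimeoutException after timeoutMs.
    // The business logic (the supplier) knows nothing about the policy.
    static <T> T withTimeout(long timeoutMs, Supplier<T> action) throws Exception {
        Callable<T> task = action::get;
        Future<T> future = POOL.submit(task);
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the still-running task
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        // Completes well inside the 400ms budget.
        System.out.println(withTimeout(400, () -> "ok"));
    }
}
```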

Istio also defines a set of opt-in failure recovery features, including:

  1. Timeouts
  2. Bounded retries with timeout budgets and variable jitter between retries
  3. Limits on number of concurrent connections and requests to upstream services
  4. Fine-grained circuit breakers (passive health checks) – applied per instance in the load balancing pool

Istio implements failure recovery via the Envoy proxy, which mediates outbound traffic, e.g. by duplicating requests. However, it cannot manipulate secure calls, e.g. HTTPS requests.

Let’s compare MicroProfile Fault Tolerance with Istio failure handling.

[Table: MicroProfile Fault Tolerance vs. Istio failure handling]

*The MicroProfile Fault Tolerance Circuit Breaker is owned by the client, with no sharing among different clients, while the Istio Circuit Breaker is owned by the backend, which means multiple connections can contribute towards the same Circuit Breaker.

Let's compare the failure handling in more detail by investigating each individual policy.

Timeout

For Timeout, MicroProfile Fault Tolerance uses the @Timeout annotation to specify the timeout period.

@Timeout(400) // timeout is 400ms
public void callService() {
    // calling ratings
}

Istio uses the following config rule to specify the timeout period.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    timeout: 10s

If both the Istio Timeout and the MicroProfile Fault Tolerance Timeout are specified, the more restrictive of the two will be triggered when failures occur.

Retry

How do you deal with unstable services? Retry is the obvious choice to increase the chance of success. MicroProfile Fault Tolerance uses @Retry to specify the retry policy.

/**
 * The configured max retries is 90, but the max duration is 1000ms.
 * Once the duration is reached, no more retries are performed,
 * even though the max retries has not been reached.
 */
@Retry(maxRetries = 90, maxDuration = 1000)
public void callService() {
    // calling ratings
}

/**
 * There should be 0-800ms delays (jitter is -400ms - 400ms)
 * between each invocation.
 * There should be at least 4 retries but no more than 10 retries.
 */
@Retry(delay = 400, maxDuration = 3200, jitter = 400, maxRetries = 10)
public Connection serviceA() {
    return connectionService();
}

/**
 * Sets the retry condition: a retry will be performed on
 * IOException.
 */
@Timeout(400)
@Retry(retryOn = { IOException.class })
public void callRating() {
    // call ratings
}
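Under the covers, a retry policy is essentially a loop around the invocation with a (jittered) delay between attempts. Here is a plain-Java sketch of that idea, mirroring the maxRetries, delay and jitter parameters; it is illustrative only, not the MicroProfile implementation.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

public class RetryPolicy {
    // Retry the action up to maxRetries times after the initial attempt,
    // sleeping delayMs +/- jitterMs between attempts. Rethrows the last
    // failure when all attempts are exhausted.
    static <T> T withRetry(int maxRetries, long delayMs, long jitterMs,
                           Callable<T> action) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxRetries) {
                    long jitter = jitterMs == 0 ? 0
                            : ThreadLocalRandom.current().nextLong(-jitterMs, jitterMs + 1);
                    Thread.sleep(Math.max(0, delayMs + jitter));
                }
            }
        }
        throw last;
    }
}
```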

Istio uses the following config rule to specify Retry.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    retries:
      attempts: 3
      perTryTimeout: 2s

The above config rule can be roughly mapped to MicroProfile Fault Tolerance Retry with @Retry(maxRetries = 3, delay = 2, delayUnit = ChronoUnit.SECONDS).

When both MicroProfile Fault Tolerance Retry and Istio Retry are specified, the retries effectively multiply. For instance, if MicroProfile Fault Tolerance specifies 3 retries and Istio specifies 3 retries, the maximum number of retries will be 9 (3×3), as each outgoing request is retried 3 times. Don't panic; read on. A solution is provided by MicroProfile Fault Tolerance.

Bulkhead

MicroProfile Bulkhead provides two different categories of bulkhead:

  • Thread isolation
    • Use a thread pool with a fixed number of threads and a waiting queue, by combining the @Asynchronous and @Bulkhead annotations:

// maximum 5 concurrent requests allowed, maximum 8 requests allowed in the waiting queue
@Asynchronous
@Bulkhead(value = 5, waitingTaskQueue = 8)
public Future<Connection> serviceA() {
    Connection conn = connectionService();
    return CompletableFuture.completedFuture(conn);
}

  • Semaphore isolation
    • Limit the number of concurrent requests:

// maximum 5 concurrent requests allowed
@Bulkhead(5)
public Connection serviceA() {
    return connectionService();
}
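Semaphore isolation can be sketched in plain Java with java.util.concurrent.Semaphore: admit up to N callers, and reject the rest immediately. This is only an illustration of the concept (MicroProfile throws a BulkheadException; here a plain IllegalStateException stands in for it).

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

public class SemaphoreBulkhead {
    private final Semaphore permits;

    SemaphoreBulkhead(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // Reject immediately when all permits are taken, as @Bulkhead does
    // in semaphore mode (no waiting queue).
    <T> T execute(Supplier<T> action) {
        if (!permits.tryAcquire()) {
            throw new IllegalStateException("bulkhead full");
        }
        try {
            return action.get();
        } finally {
            permits.release(); // free the slot for the next caller
        }
    }
}
```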

Istio can use Circuit Breaker config rules to configure the connection pool, limiting the number of concurrent requests. The following rule indicates that if you exceed more than one connection or request concurrently, you should see failures as the Istio proxy opens the circuit for further requests and connections.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1

Circuit breaker

Circuit Breaker is an important pattern for creating resilient microservices: it can prevent repeated timeouts by instantly rejecting requests. MicroProfile Fault Tolerance uses @CircuitBreaker to control client calls.

@CircuitBreaker(successThreshold = 10, requestVolumeThreshold = 4,
                failureRatio = 0.75, delay = 1000)
public Connection serviceA() {
    return connectionService();
}

The above code snippet applies the CircuitBreaker policy to the method serviceA: if 75% of the last 4 invocations failed (i.e. 3 out of 4), the circuit opens. The circuit stays open for 1000ms and then transitions to half-open. After 10 consecutive successful invocations, the circuit closes again. While a circuit is open, a CircuitBreakerOpenException is thrown instead of the method actually being invoked.
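The state machine just described (closed → open → half-open → closed) can be sketched in plain Java. This is a deliberately simplified model mirroring the MicroProfile parameters; the real implementation's rolling-window and concurrency handling are more involved, and the IllegalStateException here stands in for CircuitBreakerOpenException.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

public class CircuitBreakerSketch {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int requestVolumeThreshold;
    private final double failureRatio;
    private final long delayMs;
    private final int successThreshold;

    private State state = State.CLOSED;
    private final Deque<Boolean> window = new ArrayDeque<>(); // true = failure
    private long openedAt;
    private int halfOpenSuccesses;

    CircuitBreakerSketch(int requestVolumeThreshold, double failureRatio,
                         long delayMs, int successThreshold) {
        this.requestVolumeThreshold = requestVolumeThreshold;
        this.failureRatio = failureRatio;
        this.delayMs = delayMs;
        this.successThreshold = successThreshold;
    }

    synchronized <T> T execute(Supplier<T> action) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt < delayMs) {
                throw new IllegalStateException("circuit open"); // MP: CircuitBreakerOpenException
            }
            state = State.HALF_OPEN; // delay elapsed: probe the service again
            halfOpenSuccesses = 0;
        }
        try {
            T result = action.get();
            onResult(false);
            return result;
        } catch (RuntimeException e) {
            onResult(true);
            throw e;
        }
    }

    private void onResult(boolean failure) {
        if (state == State.HALF_OPEN) {
            if (failure) {
                open();
            } else if (++halfOpenSuccesses >= successThreshold) {
                state = State.CLOSED; // enough consecutive successes: close again
                window.clear();
            }
            return;
        }
        window.addLast(failure);
        if (window.size() > requestVolumeThreshold) {
            window.removeFirst(); // keep only the last N outcomes
        }
        if (window.size() == requestVolumeThreshold) {
            long failures = window.stream().filter(f -> f).count();
            if ((double) failures / requestVolumeThreshold >= failureRatio) {
                open();
            }
        }
    }

    private void open() {
        state = State.OPEN;
        openedAt = System.currentTimeMillis();
    }
}
```

With requestVolumeThreshold = 4 and failureRatio = 0.75, three failures in the last four calls trip the breaker, matching the annotation example above.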

Istio uses Circuit Breaker rules to limit the impact of failures, latency spikes, and other undesirable effects of network issues.

The following rule sets a connection pool size of 100 connections and 1000 concurrent HTTP/2 requests, with no more than 10 requests per connection to the "reviews" service. In addition, it configures upstream hosts to be scanned every 5 minutes, such that any host that fails 7 consecutive times is ejected for 15 minutes.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-cb-policy
spec:
  host: reviews.prod.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http2MaxRequests: 1000
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutiveErrors: 7
      interval: 5m
      baseEjectionTime: 15m

As you can see, the Istio Circuit Breaker covers aspects of both bulkhead and circuit breaker. The above configuration roughly translates to the following MicroProfile Fault Tolerance annotations.

@Bulkhead(1000)
@CircuitBreaker(requestVolumeThreshold = 7, failureRatio = 1.0,
                delay = 15, delayUnit = ChronoUnit.MINUTES)

The Circuit Breakers in MicroProfile and Istio differ: in MicroProfile Fault Tolerance, the policy is placed on the client, as the policy controls outbound requests, while Istio places the policy on the destination, so multiple clients can contribute towards the same Circuit Breaker.

Fallback

MicroProfile Fault Tolerance and Istio fault handling both exist to increase the chance of success. However, in reality, neither can guarantee a 100% success rate. You still need a contingency plan: what if my request fails? Fallback in MicroProfile Fault Tolerance comes to the rescue.

Istio does not provide any fallback capability. This makes sense, as only application developers can decide on the contingency plan, which requires business knowledge.

MicroProfile Fault Tolerance offers a great fallback capability via the @Fallback annotation.

In the following code snippet, when the method fails and the retry reaches its maximum, the fallback operation is performed. In this example, it just returns a string; you might instead choose to call a different backup service.

@Retry(maxRetries = 2)
@Fallback(fallbackMethod = "fallbackForServiceB")
public String serviceB() {
    counterForInvokingServiceB++;
    return nameService();
}

private String fallbackForServiceB() {
    return "myFallback";
}

Current ecosystem

When you read up to here, you might wonder:

Q: Can I use MicroProfile Fault Tolerance fallback together with Istio fault handling?
A: Yes, you can, by just using the @Fallback annotation.
This is a simple ecosystem. Let's go further.

Q: What if I want to use MicroProfile Fault Tolerance in development and testing, but use Istio fault handling once the application is deployed to Istio?
A: Thanks to the MP_Fault_Tolerance_NonFallback_Enabled configuration provided by MicroProfile Fault Tolerance, you can set this property in a configmap with the value false, which disables all MicroProfile Fault Tolerance capabilities except Fallback.

apiVersion: v1
kind: ConfigMap
metadata:
  name: servicea-config
data:
  MP_Fault_Tolerance_NonFallback_Enabled: "false"

This ecosystem is still basic, as it simply disables everything in MicroProfile Fault Tolerance except Fallback. The microservice developers' fault handling knowledge is completely thrown away, and DevOps has to create Istio config rules from scratch.

Future ecosystem in my view

Producing correct Istio config rules can be daunting. If we could use the MicroProfile Fault Tolerance annotations as input to Istio fault handling rule creation, it would be possible to generate the corresponding Istio config rules. In this way, developers' knowledge about timeouts or retries would be reflected in the config rules.

However, for HTTPS requests, Istio cannot intercept the request to add fault handling capabilities. The corresponding Istio config rules can still be generated, but MicroProfile Fault Tolerance will not be disabled. DevOps can modify the parameters in the rules, which will automatically take effect in the microservices, as all MicroProfile Fault Tolerance annotation parameters are configurable. The ecosystem can be summarised in the following table.

As mentioned earlier, if Istio fault handling is to be used, MicroProfile Fault Tolerance (except Fallback) can be disabled via the MP_Fault_Tolerance_NonFallback_Enabled configuration. The corresponding property can be set in a configmap as explained in the previous section, which disables the relevant fault tolerance capabilities.

For HTTPS requests, MicroProfile Fault Tolerance will handle the fault tolerance capabilities, since Istio cannot inject its fault handling.

The plan is to generate Istio config rules and then disable MicroProfile Fault Tolerance wherever Istio can handle the situation.

This section reflects my own thinking, and I would love to hear feedback from the community.

MicroProfile and Istio ecosystem in action

MicroProfile has set up a sample GitHub repository to explore the ecosystem, with a particular focus on MicroProfile Fault Tolerance. Two microservices, servicea and serviceb, have been set up to demonstrate the ecosystem; together they will demonstrate all MicroProfile specifications. If you are interested in this exercise, please join the Gitter room and the weekly call, the details of which can be found in the MicroProfile Calendar.

In summary, MicroProfile is seen as the programming model for developing microservices for Istio service mesh.

References

  1. MicroProfile website
  2. Istio
  3. MicroProfile service mesh repo
  4. MicroProfile service mesh samples A and B
  5. Open Liberty guides on MicroProfile
  6. MicroProfile Fault Tolerance article
  7. MicroProfile service mesh experience

 


This post was originally published in the September 2018 issue of the Eclipse Newsletter: Eclipse MicroProfile

For more information and articles check out the Eclipse Newsletter.

Author

Emily Jiang

Emily Jiang is Liberty Architect for MicroProfile and CDI at IBM. Based at IBM's Hursley laboratory in the UK, she has worked on WebSphere Application Server since 2006 and is heavily involved in the Java EE implementation in WebSphere Application Server releases. She is a key member of the MicroProfile and CDI Expert Groups, and leads the MicroProfile Config and Fault Tolerance specifications. Emily is also the Config JSR co-spec lead. She regularly speaks at conferences such as JAX, Voxxed, Devoxx US, Devoxx UK, Devoxx France, DevNexus, EclipseCon and CodeOne.