Implement stream-aware, reactive integration pipelines with Alpakka Kafka
If you are looking for a tool that understands streaming natively and provides a DSL for reactive and stream-oriented programming, then Alpakka Kafka is what you need. Let’s have a closer look.
Driven by the need for streaming data in a world of microservices and cloud deployments, where new components must interact with legacy systems, the Alpakka project was born!
Alpakka Kafka is an open source initiative to implement stream-aware, reactive integration pipelines for Java and Scala.
According to the GitHub repo, it is built on top of Akka Streams and has been designed from the ground up to understand streaming natively and provide a DSL for reactive and stream-oriented programming, with built-in support for backpressure.
Today, we take a closer look at what Alpakka Kafka has to offer and we’ll go over the important highlights featured in its first milestone release.
Here are the main features Alpakka Kafka has to offer:
- Provides Apache Kafka connectivity for Akka Streams
- Supports consuming messages from Kafka into Akka Streams with at-most-once, at-least-once and transactional semantics, and supports producing messages to Kafka
- Once consumed messages are in the Akka Stream, the whole flexibility of all Akka Stream operators becomes available
- Achieves back-pressure for consuming by automatically pausing and resuming its Kafka subscriptions
- Alpakka Kafka 1.0 uses the Apache Kafka Java client 2.1.0 internally
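To give a feel for that connectivity, here is a minimal sketch of consuming records from Kafka into an Akka Stream. The bootstrap server address, group id, and topic name are placeholders, and error handling and graceful shutdown are omitted for brevity:

```scala
import akka.actor.ActorSystem
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.serialization.StringDeserializer

object PlainConsumerExample extends App {
  implicit val system: ActorSystem = ActorSystem("alpakka-kafka-example")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // Consumer settings: broker address, group id and deserializers.
  // "localhost:9092", "example-group" and "example-topic" are placeholders.
  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092")
      .withGroupId("example-group")

  // A Source of Kafka records; back-pressure propagates to the Kafka
  // subscription by automatically pausing and resuming partitions.
  Consumer
    .plainSource(consumerSettings, Subscriptions.topics("example-topic"))
    .map(record => record.value)
    .runWith(Sink.foreach(println))
}
```

Once records flow through `plainSource`, any Akka Streams operator (`map`, `groupedWithin`, `throttle`, etc.) can be applied to them.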
The 1.0 release also features some important changes since 0.22:
- Upgrade to Kafka client 2.1.0 #660 – This upgrade makes it possible to use zstandard compression (with Kafka 2.1 brokers).
- Use Kafka client 2.x poll API #614
- No more `WakeupException` – The Kafka client API 2.x allows specifying a timeout when polling the Kafka broker, so you no longer need the cranky tool of Kafka's `WakeupException` to be sure a thread is not blocked. The settings to configure wake-ups are not used anymore.
- Alpakka Kafka consumers don't fail for non-responding Kafka brokers anymore (as they used to after a number of wake-ups)
- `Committer.flow` for standardized committing #622 and #644
- Commit with metadata #563 and #579
- Java APIs for all settings classes #616
- Alpakka Kafka testkit
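As a sketch of the new standardized committing, the snippet below wires `Committer.flow` into an at-least-once pipeline: each record is processed first, then its offset is passed downstream to be batched and committed. The broker address, group id, topic, and the `process` function are placeholder assumptions, not part of the release notes:

```scala
import akka.Done
import akka.actor.ActorSystem
import akka.kafka.{CommitterSettings, ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.{Committer, Consumer}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.serialization.StringDeserializer

import scala.concurrent.Future

object CommitterFlowExample extends App {
  implicit val system: ActorSystem = ActorSystem("committer-example")
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  import system.dispatcher

  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092") // placeholder broker address
      .withGroupId("example-group")           // placeholder group id

  // Placeholder business logic; replace with real processing.
  def process(value: String): Future[Done] = Future.successful(Done)

  Consumer
    .committableSource(consumerSettings, Subscriptions.topics("example-topic"))
    .mapAsync(parallelism = 1) { msg =>
      // Process the record first, then emit its offset downstream,
      // which yields at-least-once semantics.
      process(msg.record.value).map(_ => msg.committableOffset)
    }
    // Committer.flow batches offsets and commits them to Kafka,
    // using the defaults from the akka.kafka.committer config section.
    .via(Committer.flow(CommitterSettings(system)))
    .runWith(Sink.ignore)
}
```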
If you are interested in learning more about Akka, don’t miss Manuel Bernhardt’s “Tour of Akka Cluster” and “Akka anti-patterns” collections:
- A tour of Akka Cluster – Akka distributed data
- A tour of Akka Cluster – Eventual consistency, persistent actors, message delivery semantics
- A tour of Akka Cluster – Cluster sharding
- A tour of Akka Cluster – Testing with the multi-node-testkit and a handful of Raspberry Pis