I recently came across testcontainers-java, a Java library that supports JUnit tests by providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container (check out https://testcontainers.org/ for details!).
We can use it to run integration tests for Kafka-based applications against an actual Kafka instance. This blog will show you how to use this library to test Kafka Streams topologies along with the Kafka Streams test utility classes (discussed in an earlier blog post).
Example available on GitHub
In simple words, the testcontainers-java library allows you to spin up Docker containers programmatically. You could obviously use a Java Docker client such as this directly, but testcontainers-java provides additional benefits and ease of use. Its only prerequisite is Docker itself.
Kafka with testcontainers
With testcontainers-java, you can spin up an actual Kafka broker (or cluster) against which you can run your integration tests, as opposed to an embedded version.
You can start off by using the GenericContainer API to spin up a Kafka container, e.g. using the Confluent Kafka Docker image. This would look something like this:
public GenericContainer kafka = new GenericContainer<>("confluentinc/cp-kafka")
        .withExposedPorts(9092);
....
String host = kafka.getContainerIpAddress();
Integer port = kafka.getFirstMappedPort();
String bootstrapServer = host + ":" + port;
....
//use the bootstrapServer in tests...
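To make that last comment concrete, here is a minimal sketch (not part of the original example) of feeding the dynamically assigned bootstrapServer to a plain Kafka producer; the topic name test-topic is just a placeholder, and the classes come from the standard org.apache.kafka.clients.producer and org.apache.kafka.common.serialization packages:

Properties props = new Properties();
// point the producer at the containerized broker
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    // "test-topic" is a placeholder topic name for this sketch
    producer.send(new ProducerRecord<>("test-topic", "key", "value"));
}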
We spawn a Kafka container based on a Docker image, get the randomly generated port which is mapped to our local machine, and we are off to the races. But we can do better!
testcontainers-java is flexible and supports the concept of ready-to-use modules. There is one available for Kafka already, and it makes things a little easier. Thanks to the KafkaContainer module, all we need to do is start the Kafka container, e.g. using a JUnit @Rule or @ClassRule, which will start it before the tests run and tear it down after they end.
@ClassRule
public static KafkaContainer kafka = new KafkaContainer();
... or use it in combination with @Before/@BeforeClass and @After/@AfterClass if you need more control.
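For instance, a minimal sketch of the @BeforeClass/@AfterClass variant (not from the original post; the class name is just illustrative) could look like this:

public class KafkaIntegrationTest {

    static KafkaContainer kafka = new KafkaContainer();

    @BeforeClass
    public static void startKafka() {
        // start the broker once, before any test in this class runs
        kafka.start();
    }

    @AfterClass
    public static void stopKafka() {
        // stop (and remove) the container after all tests have finished
        kafka.stop();
    }
}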
Other noteworthy points include...
- You don't need to handle the Zookeeper dependency, but the module is flexible enough to give you the option of using an external one (if needed) e.g. KafkaContainer kafka = new KafkaContainer().withExternalZookeeper("zk-ext:2181");
- If you have containerized Kafka client applications, they can access the KafkaContainer instance as well (see the sketch after this list)
- Ability to select a specific version of the Confluent platform e.g. new KafkaContainer("5.4.0")
- Custom techniques such as using a Dockerfile instead of referring to a Docker image, or using a DSL to programmatically build a Dockerfile
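Here is a minimal sketch of the containerized-client scenario (this is not from the original post; the client image name my-kafka-client, the environment variable name, and the in-network broker port 9092 are assumptions on my part): place the broker and the client container on the same Docker network and point the client at the broker's network alias.

Network network = Network.newNetwork();

KafkaContainer kafka = new KafkaContainer()
        .withNetwork(network)
        .withNetworkAliases("kafka"); // alias resolvable by other containers on this network

// "my-kafka-client" is a placeholder image for a containerized Kafka client application
GenericContainer<?> clientApp = new GenericContainer<>("my-kafka-client")
        .withNetwork(network)
        // inside the shared network the broker is reachable via its alias; 9092 is the
        // internal listener port used by the KafkaContainer module (verify for your version)
        .withEnv("KAFKA_BOOTSTRAP_SERVERS", "kafka:9092");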
Example: How to use this for testing Kafka Streams apps?
Make sure you have the required dependencies, e.g. for Maven:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams-test-utils</artifactId>
    <version>2.4.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>kafka</artifactId>
    <version>1.13.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>hamcrest-core</artifactId>
    <version>1.3</version>
    <scope>test</scope>
</dependency>
Here is an example:
Use the @Before method for the setup: start the Kafka container and set the bootstrap server property for the Kafka Streams application.
public class AppTest {
    KafkaContainer kafka = new KafkaContainer();
    .....
    @Before
    public void setUp() {
        kafka.start();
        config = new Properties();
        config.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
        config.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, App.APP_ID);
        .....
    }
Here is a simple Topology...
static Topology filterWordsLongerThan5Letters() {
    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> stream = builder.stream(INPUT_TOPIC);
    stream.filter((k, v) -> v.length() > 5).to(OUTPUT_TOPIC);
    return builder.build();
}
... which can be tested as such:
@Test
public void shouldIncludeValueWithLengthGreaterThanFive() {
    topology = App.filterWordsLongerThan5Letters();
    td = new TopologyTestDriver(topology, config);
    inputTopic = td.createInputTopic(App.INPUT_TOPIC, Serdes.String().serializer(), Serdes.String().serializer());
    outputTopic = td.createOutputTopic(App.OUTPUT_TOPIC, Serdes.String().deserializer(), Serdes.String().deserializer());
    inputTopic.pipeInput("foo", "foobar");
    assertThat("output topic was empty", outputTopic.isEmpty(), is(false));
    assertThat(outputTopic.readValue(), equalTo("foobar"));
    assertThat("output topic was not empty", outputTopic.isEmpty(), is(true));
}
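The full sample on GitHub may do this differently, but a test like this would typically also release its resources. A minimal sketch of such a tear-down, assuming the td and kafka fields from the snippets above:

@After
public void tearDown() {
    // close the test driver (releases its resources and local state)
    td.close();
    // stop and remove the Kafka container
    kafka.stop();
}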
That's it! This was a quick introduction to testcontainers-java along with an example of how to use it alongside the Kafka Streams test utility (full sample on GitHub).