March 25, 2025

Using KRaft Kafka for Development and Kubernetes Deployment


With KRaft, Kafka no longer needs ZooKeeper. KRaft is Kafka's Raft-based consensus protocol that elects a leader among several server instances. That makes the Kafka setup much easier.

The new configuration is shown using the MovieManager project as an example.

Using Kafka for Development

For development, Kafka can be run with a simple Docker setup, which can be found in the runKafka.sh script:

#!/bin/sh
# network config for KRaft
docker network create app-tier --driver bridge
# Kafka with KRaft
docker run -d \
    -p 9092:9092 \
    --name kafka-server \
    --hostname kafka-server \
    --network app-tier \
    -e KAFKA_CFG_NODE_ID=0 \
    -e KAFKA_CFG_PROCESS_ROLES=controller,broker \
    -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \
    -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \
    -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-server:9093 \
    -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
    bitnami/kafka:latest
# start the kafka-server container again after a 'docker stop kafka-server'
docker start kafka-server

First, the Docker bridge network app-tier is created to enable the KRaft communication between the Kafka instances. Then the Kafka container is started with the docker run command; port 9092 needs to be published to the host. After the run command has been executed, the kafka-server container can be stopped and started again with the usual docker commands.
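
To check that the dockerized broker is reachable, a small Kafka AdminClient probe can help. This is a sketch: the class name and timeout are illustrative, and later metadata lookups can still need the hostname resolution that is described below:

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class KafkaConnectionCheck {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    // the port published by the docker run command
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
    try (AdminClient admin = AdminClient.create(props)) {
      // describeCluster() fails if no broker answers on localhost:9092
      String clusterId = admin.describeCluster().clusterId().get();
      System.out.println("Connected to Kafka cluster: " + clusterId);
    }
  }
}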

To run a Spring Boot application with Kafka in development, a kafka profile can be used. That enables application configurations with and without Kafka. An example configuration can look like the application-kafka.properties file:

kafka.server.name=${KAFKA_SERVER_NAME:kafka-server}
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.producer.compression-type=gzip
spring.kafka.producer.transaction-id-prefix=tx-
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.properties.enable.idempotence=true
spring.kafka.consumer.group-id=group_id
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.isolation-level=read_committed

The important lines are the first two. In the first line, kafka.server.name defaults to kafka-server, and in the second line, spring.kafka.bootstrap-servers is set to localhost:9092. That instructs the application to connect to the dockerized Kafka instance on localhost.
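
Because the producer is configured with a transaction-id-prefix and idempotence, sends should run inside a Kafka transaction. The following is a minimal sketch of a transactional send with Spring's KafkaTemplate; the service and topic names are assumptions, not MovieManager code:

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class MovieEventProducer {
  private final KafkaTemplate<String, String> kafkaTemplate;

  public MovieEventProducer(KafkaTemplate<String, String> kafkaTemplate) {
    this.kafkaTemplate = kafkaTemplate;
  }

  public void sendMovieEvent(String key, String payload) {
    // executeInTransaction() starts and commits a Kafka transaction
    // because spring.kafka.producer.transaction-id-prefix is set
    this.kafkaTemplate.executeInTransaction(operations ->
      operations.send("movie-topic", key, payload));
  }
}

The transactional producer pairs with the consumer's read_committed isolation level, so consumers never see records of aborted transactions.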

To connect to Kafka locally, the DefaultHostResolver has to be patched:

package org.apache.kafka.clients;

import java.net.InetAddress;
import java.net.UnknownHostException;

// The class shadows the DefaultHostResolver of the kafka-clients jar
// (same package and name), so this patched version is loaded instead.
public class DefaultHostResolver implements HostResolver {
  public static volatile String IP_ADDRESS = "";
  public static volatile String KAFKA_SERVER_NAME = "";
  public static volatile String KAFKA_SERVICE_NAME = "";

  @Override
  public InetAddress[] resolve(String host) throws UnknownHostException {
    if (host.startsWith(KAFKA_SERVER_NAME) && !IP_ADDRESS.isBlank()) {
      // map the Kafka server hostname to the configured IP address
      InetAddress[] addressArr = new InetAddress[1];
      addressArr[0] = InetAddress.getByAddress(host,
        InetAddress.getByName(IP_ADDRESS).getAddress());
      return addressArr;
    } else if (host.startsWith(KAFKA_SERVER_NAME) &&
      !KAFKA_SERVICE_NAME.isBlank()) {
      // in Kubernetes, resolve the Kafka service name instead
      host = KAFKA_SERVICE_NAME;
    }
    return InetAddress.getAllByName(host);
  }
}

The DefaultHostResolver handles the name resolution of the Kafka server hostname. If the hostname starts with the KAFKA_SERVER_NAME, it is resolved to the configured IP_ADDRESS. That is needed because the local name resolution of kafka-server does not work (unless you put it in the hosts file).
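
The static fields of the resolver have to be populated at application startup. Below is a minimal sketch of such wiring; the class name, the @PostConstruct approach, and the hard-coded localhost mapping are assumptions, not necessarily the MovieManager code:

import jakarta.annotation.PostConstruct;
import org.apache.kafka.clients.DefaultHostResolver;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaHostResolverConfig {
  @Value("${kafka.server.name}")
  private String kafkaServerName;

  // empty default keeps local development working without Kubernetes
  @Value("${KAFKA_SERVICE_NAME:}")
  private String kafkaServiceName;

  @PostConstruct
  public void init() {
    DefaultHostResolver.KAFKA_SERVER_NAME = kafkaServerName;
    DefaultHostResolver.KAFKA_SERVICE_NAME = kafkaServiceName;
    if (kafkaServiceName.isBlank()) {
      // local development: resolve the kafka-server hostname to localhost
      DefaultHostResolver.IP_ADDRESS = "127.0.0.1";
    }
  }
}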

Using Kafka in a Kubernetes Deployment

For a Kubernetes deployment, an updated configuration is needed. The values.yaml changes look like this:

...
kafkaServiceName: kafkaservice
...
secret:
  nameApp: app-env-secret
  nameDb: db-env-secret
  nameKafka: kafka-env-secret

envKafka:
  normal: 
    KAFKA_CFG_NODE_ID: 0
    KAFKA_CFG_PROCESS_ROLES: controller,broker
    KAFKA_CFG_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
    KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
    KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: 0@kafkaservice:9093
    KAFKA_CFG_CONTROLLER_LISTENER_NAMES: CONTROLLER 

Because ZooKeeper is no longer needed, all of the ZooKeeper configuration has been removed. The Kafka configuration is updated to the values that KRaft needs.

In the template, the ZooKeeper configuration has been removed. The Kafka deployment configuration is unchanged, because only its parameters change; they are provided by the values.yaml via the helpers.tpl script. The Kafka service configuration template needs to change to support the KRaft communication between the Kafka instances for leader election:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.kafkaServiceName }}
  labels:
    app: {{ .Values.kafkaServiceName }}
spec:
  ports:
  - name: tcp-client
    port: 9092
    protocol: TCP
  - name: tcp-interbroker
    port: 9093
    protocol: TCP
    targetPort: 9093
  selector:
    app: {{ .Values.kafkaName }}   

This is a normal service configuration that opens port 9092 internally and works for the Kafka deployment. The tcp-interbroker entry is for the KRaft leader election: it opens port 9093 internally and sets the targetPort to enable the Kafka instances to send requests to each other.

The application can now be run with the profiles prod and kafka and will start with the application-prod-kafka.properties configuration:

kafka.server.name=${KAFKA_SERVER_NAME:kafkaapp}
spring.kafka.bootstrap-servers=${KAFKA_SERVICE_NAME}:9092
spring.kafka.producer.compression-type=gzip
spring.kafka.producer.transaction-id-prefix=tx-
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.group-id=group_id
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.isolation-level=read_committed

The deployment configuration is very similar to the development configuration. The main difference is in the first two lines: because spring.kafka.bootstrap-servers=${KAFKA_SERVICE_NAME}:9092 has no default value, the application will not start if the environment variable KAFKA_SERVICE_NAME is not set. Failing fast like this surfaces an error in the deployment configuration of the MovieManager application that has to be fixed.
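
On the consumer side, enable-auto-commit=false lets the Spring Kafka listener container manage the offset commits, and read_committed hides records of aborted transactions. A minimal listener sketch follows; the class and topic names are assumptions, not MovieManager code:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class MovieEventListener {
  // receives only committed records because of
  // spring.kafka.consumer.isolation-level=read_committed
  @KafkaListener(topics = "movie-topic", groupId = "group_id")
  public void onMovieEvent(String payload) {
    System.out.println("Received: " + payload);
  }
}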

Conclusion

With KRaft, Kafka no longer needs ZooKeeper, making the configuration simpler. For development, two Docker commands and a simple Spring Boot configuration are enough to start a Kafka instance to develop against. For a Kubernetes deployment, the environment parameters of the development Docker setup can be reused to create the Kafka instances, and a second port configuration is needed for the KRaft communication.

The time when Kafka was more difficult to set up than other messaging solutions is over. It is now easy to develop against and easy to deploy. Kafka should now be used for all the use cases where it fits the requirements.


