Set Kafka default replication factor

This ensures that, when automatic Kafka topic creation is used with more than
one node in the Kafka cluster, all partitions in the topic are automatically
replicated. When a single node goes down in a >=3 node cluster, these topics
continue to accept writes provided there are at least two in-sync replicas.

In a two-node cluster, no failures are tolerated. In a three-node cluster, only
a single node failure is tolerated. In larger clusters the configuration may
need manual tuning.
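The fault-tolerance arithmetic above can be sketched as follows: with `acks=all` producers, a topic keeps accepting writes as long as at least `min.insync.replicas` replicas remain in sync, so the number of tolerated broker failures is the replication factor minus that minimum. A minimal illustrative sketch (the function name is ours, not from the change):

```python
def tolerated_write_failures(replication_factor: int, min_insync_replicas: int) -> int:
    """Broker failures a topic can absorb while still accepting acks=all writes."""
    return replication_factor - min_insync_replicas

# Two-node cluster (replication factor 2, min.insync.replicas 2): no failures.
print(tolerated_write_failures(2, 2))  # 0
# Three-node cluster (replication factor 3, min.insync.replicas 2): one failure.
print(tolerated_write_failures(3, 2))  # 1
```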

This configuration follows advice given here:

[1] https://docs.cloudera.com/documentation/kafka/1-2-x/topics/kafka_ha.html#xd_583c10bfdbd326ba-590cb1d1-149e9ca9886--6fec__section_d2t_ff2_lq

Closes-Bug: #1888522

Change-Id: I7d38c6ccb22061aa88d9ac6e2e25c3e095fdb8c3
Author: Doug Szumski
Date:   2020-07-22 17:18:26 +01:00
parent 61e32bb131
commit a273e28e20
2 changed files with 12 additions and 0 deletions


@@ -8,6 +8,7 @@ socket.receive.buffer.bytes=102400
 socket.request.max.bytes=104857600
 log.dirs=/var/lib/kafka/data
 min.insync.replicas={{ kafka_broker_count if kafka_broker_count|int < 3 else 2 }}
+default.replication.factor={{ kafka_broker_count if kafka_broker_count|int < 3 else 3 }}
 num.partitions=30
 num.recovery.threads.per.data.dir=1
 offsets.topic.replication.factor={{ kafka_broker_count if kafka_broker_count|int < 3 else 3 }}
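The Jinja expressions above clamp both settings to the broker count on clusters smaller than three nodes. A quick sketch of how they evaluate for a few illustrative values of `kafka_broker_count` (the helper names are ours, not part of the role):

```python
# Mirrors the Jinja conditionals in the template above.
def min_insync_replicas(broker_count: int) -> int:
    return broker_count if broker_count < 3 else 2

def default_replication_factor(broker_count: int) -> int:
    return broker_count if broker_count < 3 else 3

for n in (1, 2, 3, 5):
    print(n, min_insync_replicas(n), default_replication_factor(n))
# 1 1 1
# 2 2 2
# 3 2 3
# 5 2 3
```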


@@ -0,0 +1,11 @@
+---
+fixes:
+  - |
+    Fixes an issue where no redundant replicas were created in a multi-node
+    cluster when Kafka automatic topic creation was used to create a Kafka
+    topic. `LP#1888522 <https://launchpad.net/bugs/1888522>`__. This affects
+    Monasca, which uses Kafka, and was previously masked by the legacy Kafka
+    client used by Monasca, which has since been upgraded in Ussuri. Monasca
+    users with multi-node Kafka clusters should consult the Kafka
+    `documentation <https://kafka.apache.org/documentation/>`__ to increase
+    the number of replicas.