f4dce6c37d
When a large post (> 10s of MB) is made to the Monasca API, an attempt is made to write these metrics to the metrics topic in Kafka. However, due to the large size of the write, this can fail with a number of obscure errors which depend on exactly how much data is written.

This change supports splitting the post into chunks so that they can be written to Kafka in sequence. A default has been chosen so that the maximum write to Kafka should be comfortably under 1MB.

A future extension could support splitting the post by size, rather than by the number of measurements. A better time to look at this may be after the Python Kafka library has been upgraded.

Story: 2006059
Task: 34772
Change-Id: I588a9bc0a19cd02ebfb8c0c1742896f208941396
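The chunking strategy described above can be sketched as follows. This is an illustrative helper, not the actual Monasca implementation; the function name `chunk_measurements` and the batch size of 4 are assumptions made for the example.

```python
def chunk_measurements(measurements, max_messages):
    """Yield successive slices of at most max_messages measurements.

    Splitting by count (rather than byte size) keeps each Kafka write
    bounded, so a single oversized post no longer produces one huge
    produce request.
    """
    for i in range(0, len(measurements), max_messages):
        yield measurements[i:i + max_messages]


# Example: a post of 10 measurements split into batches of at most 4,
# which would then be written to Kafka in sequence.
batches = list(chunk_measurements(list(range(10)), 4))
```

With a suitably chosen `max_messages` default, each batch's serialized size should stay comfortably under the 1MB target mentioned above, at the cost of multiple sequential writes per large post.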
---
features:
  - A new config option, queue_buffering_max_messages, has been added to
    support controlling the size of posts to Kafka from the Monasca API.
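For illustration, the new option would be set in the Monasca API configuration file along these lines. The section name and the value shown here are hypothetical; the actual default and option group are defined by the change itself, not by this note.

```ini
# Hypothetical example only: section name and value are assumptions.
[kafka]
# Maximum number of measurements written to Kafka in a single post chunk.
queue_buffering_max_messages = 1000
```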