Lars Madsen: Hi guys - I want to follow up on a post I made in the forum a while back: https://forum.zeebe.io/t/memory-profile-for-zeebe-brokers/1648/3
We are still seeing the same behaviour with version 0.25.1. Checking the releases, I see this: https://github.com/zeebe-io/zeebe/issues/5882 - is it possible that this has caused our excessive memory usage?
Another thing I wanted to check: what impact does the maxMessageSize param have on memory usage? We're running the default (4 MB?). Would you expect a lower memory footprint if this were reduced to, say, 64 KB?
Lars Madsen: I see we are running with "executionMetricsExporterEnabled": false
Nicolas: The maxMessageSize has an impact on memory usage, but it shouldn't be a big one - it mostly affects performance due to cache/page faults.
That said, our current hypothesis is that most of the memory ends up being used by RocksDB, which is probably misconfigured at the moment (we hope to remedy that soon). With that in mind, lowering the message size might help a little, since it would reduce the size of the data stored in RocksDB, but I expect it wouldn't do much.
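If you do want to try a lower limit, it's a one-line change in the broker config - a sketch assuming the standard zeebe.broker.network.maxMessageSize layout (double-check the path against your version; the 64KB value just mirrors the one floated above):

```yaml
zeebe:
  broker:
    network:
      # Default is 4MB; 64KB is purely illustrative - size it to your
      # largest expected record payload.
      maxMessageSize: 64KB
```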
Your best bet is to use zeebe.broker.data.rocksdb.columnFamilyOptions to specify the following options:
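For example - the option names below are standard RocksDB ColumnFamilyOptions, and the values are illustrative starting points rather than recommendations:

```yaml
zeebe:
  broker:
    data:
      rocksdb:
        columnFamilyOptions:
          write_buffer_size: 8388608          # 8 MiB per memtable (illustrative)
          max_write_buffer_number: 2          # memtables kept per column family
          min_write_buffer_number_to_merge: 1
```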
This should help limit the memory usage of RocksDB while we figure out the correct configuration/usage (probably using fewer column families). Keep in mind that lowering the write buffer sizes has a performance impact, so you should tune these to get the performance you want while keeping memory usage under control. Also note that these settings provide a soft upper bound, not a hard cap - it's unclear whether a hard memory cap is at all possible with RocksDB.
Finally, if you’re on 0.26 (RC1 for now, but coming out January 12th), or if you turned on memory-mapped storage (ZEEBE_BROKER_DATA_USEMMAP="true"), keep in mind that the RSS (resident set size) will be over-inflated due to shared mappings. You should look more specifically at the PSS (proportional set size, if running on bare metal) or the WSS (working set size, if running on Kubernetes, as this is what the OOM killer monitors). Measuring the page cache is also a good idea: once memory starts to be limited, the first thing the OS will do is drop the page cache, which will drastically slow down your application.
Lars Madsen: That’s great @Nicolas! Thanks for your quick response!
Lars Madsen: Just a follow-up question - ideally we would like to avoid too much custom config compared to the "official" defaults. It sounds like you are planning to make these changes to the defaults as well? If so, do you know roughly when you expect to release them?
Nicolas: Probably not before May/June next year. Our immediate focus for Q1 is helping get Camunda Cloud out of beta, and it runs on preemptible nodes, meaning this issue isn't affecting that environment since the nodes are restarted at least once a day, probably more often.
Nicolas: Of course, this is an open source project, so others may contribute a fix in the meantime, meaning it could land earlier as well.
Lars Madsen: Great - thanks for your response. We’re looking forward to Camunda Cloud