
fix(deps): bump node-rdkafka to ^3.6.0 to fix cooperative-sticky rebalance bug #2728

Open

delthas wants to merge 1 commit into development/9.3 from bugfix/BB-760/bump-node-rdkafka

Conversation

@delthas (Contributor) commented Mar 26, 2026

Summary

Bumps node-rdkafka from ^2.12.0 (librdkafka 2.3.0) to ^3.6.0 (currently resolving to 3.6.1 / librdkafka 2.12.0) to pick up a fix for a cooperative-sticky partition assignor bug that causes partitions to become permanently orphaned during consumer group rebalances.

Investigation

This was discovered while investigating a flaky CI failure in the "Kafka Cleaner — Verify that consumed messages gets deleted by kafkacleaner" test (Zenko CI run). The test passed on attempts 1-2 but failed on attempt 6.

What the kafkacleaner test does

The test lists all Kafka topics, snapshots their low/high watermarks, then periodically checks that the low watermark advances (messages consumed and cleaned by kafka-cleaner) or that the topic is empty. It fails if any topic remains uncleaned after ~60 minutes.
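The per-topic check described above can be sketched as a small predicate (the function and parameter names are illustrative, not the actual test code):

```javascript
// Hypothetical sketch of the kafkacleaner test's per-topic condition.
// A topic counts as cleaned when it is (near-)empty, or when its low
// watermark has advanced past the snapshot taken at test start.
function topicLooksCleaned(snapshotLow, low, high) {
    const isEmpty = low + 1 >= high;     // empty topic: trivially cleaned
    const advanced = low > snapshotLow;  // kafka-cleaner deleted messages
    return isEmpty || advanced;
}

// A topic that never had messages passes trivially (0 + 1 >= 0),
// which is why earlier attempts with an empty backbeat-metrics passed.
```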

What failed

The backbeat-metrics topic had ~28 messages across partitions 0-4, but they were never consumed. With no committed offsets, kafka-cleaner could not advance the low watermark.

Root cause analysis

We traced the failure through three layers:

Layer 1 — Partitions orphaned for 50 minutes

During operator reconciliation, pods were rapidly replaced (18+ instances of each deployment). Tracing the rdkafka.assign/rdkafka.revoke events in fluentbit logs revealed that after a round of pod kills at ~07:38, the consumer group backbeat-metrics-group-crr ended up with a single surviving member (notification-producer) holding only partition 4 out of 5. Partitions 0-3 were left completely unassigned for 50 minutes — no consumer was reading from them.

Layer 2 — The librdkafka bug (root cause)

BackbeatConsumer._onRebalance calls this._consumer.commit() in the revoke callback (line 733 of BackbeatConsumer.js). With the cooperative-sticky protocol, this commit during rebalance bumps the Kafka generation ID. The next JoinGroup request then fails with "illegal generation", which causes librdkafka to rejoin the group — but in doing so, the current assignment is lost. The rebalance never converges, and partitions from dead members are never redistributed to surviving consumers.
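The problematic pattern can be sketched against a stubbed consumer (the stub and handler below are illustrative; the real logic lives in `BackbeatConsumer._onRebalance` and node-rdkafka itself):

```javascript
// librdkafka rebalance error codes (RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS
// and RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS).
const ERR__ASSIGN_PARTITIONS = -175;
const ERR__REVOKE_PARTITIONS = -174;

function onRebalance(consumer, err, assignment) {
    if (err.code === ERR__ASSIGN_PARTITIONS) {
        consumer.incrementalAssign(assignment);
    } else if (err.code === ERR__REVOKE_PARTITIONS) {
        // Committing here, during a cooperative-sticky rebalance, bumps the
        // group generation; with librdkafka < 2.10.0 the next JoinGroup then
        // fails with "illegal generation" and the assignment is lost.
        consumer.commit();
        consumer.incrementalUnassign(assignment);
    }
}

// Stub consumer that records which calls were made, in order.
const calls = [];
const stub = {
    commit: () => calls.push('commit'),
    incrementalAssign: () => calls.push('assign'),
    incrementalUnassign: () => calls.push('unassign'),
};
onRebalance(stub, { code: ERR__REVOKE_PARTITIONS },
    [{ topic: 'backbeat-metrics', partition: 0 }]);
```

The sketch only demonstrates the ordering (commit happens inside the revoke path); the actual generation-ID interaction is internal to librdkafka.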

This is a known librdkafka bug, present since v1.6.0, fixed in confluentinc/librdkafka#4908 (merged 2025-03-25). The fix was first released in librdkafka 2.10.0.

See also: confluentinc/librdkafka#4059 for the original issue report and detailed explanation.

Layer 3 — auto.offset.reset=latest skips orphaned messages

When new consumers finally got assigned partitions 0-3 (after 08:31, when new pods joined), there were no committed offsets for those partitions. MetricsConsumer does not set fromOffset when constructing BackbeatConsumer, so librdkafka defaults to auto.offset.reset=largest (latest). The new consumers started reading from the end of each partition, silently skipping all 23 messages that had been produced during the orphan window. The consumers then polled happily for 4+ hours, seeing nothing.
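The offset-reset behavior can be illustrated with a toy helper (this is not librdkafka code, just the decision it effectively makes when no committed offset exists):

```javascript
// Illustrative: where a consumer starts reading a partition, given an
// optional committed offset and the auto.offset.reset policy.
function resolveStartOffset(committed, reset, low, high) {
    if (committed !== null) {
        return committed;  // resume from the committed position
    }
    // 'largest'/'latest' (the default) starts at the end of the partition,
    // skipping anything produced before the consumer was assigned.
    return reset === 'earliest' ? low : high;
}
```

With 23 orphan-window messages at offsets 0-22 (low = 0, high = 23) and no committed offset, the default policy starts the consumer at offset 23, past all of them.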

Why the test passed on attempts 1-2

On earlier attempts, the backbeat-metrics topic was empty (0 messages produced), so the kafkacleaner test's low + 1 >= high condition (0 + 1 >= 0) was trivially true.

Verification on a live cluster

We verified on a live ARTESCA cluster (artesca-1) that MetricsConsumer consumption works correctly under normal conditions (no pod churn) — consumer groups were at lag 0 with all partitions assigned and messages consumed immediately. The bug only manifests during rapid pod replacement with multiple rebalances.

Solution

Bump node-rdkafka to ^3.6.0, currently resolving to 3.6.1 (librdkafka 2.12.0). This includes the fix from librdkafka 2.10.0 that prevents the "commit during rebalance" from derailing the cooperative-sticky protocol, ensuring partitions are correctly redistributed after member deaths.

Why ^3.6.0 specifically

| node-rdkafka | librdkafka | Has fix? |
| --- | --- | --- |
| 2.18.0 (latest 2.x) | 2.3.0 | No |
| 3.0.0 | 2.3.0 | No |
| 3.4.0 | 2.10.0 | Yes (minimum) |
| 3.5.0 | 2.11.1 | Yes |
| 3.6.1 | 2.12.0 | Yes (currently resolved) |

We set the floor to ^3.6.0 (librdkafka 2.12.0) since nothing in the 2.12.0 changes affects our use case: the KIP-848 consumer group protocol is opt-in (not enabled unless group.protocol=consumer is explicitly set) and the metadata recovery behavior change is minor.
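For clarity on what the `^3.6.0` floor permits, here is a simplified caret-range check (illustrative only; yarn applies the full semver rules, including special cases for 0.x versions that are ignored here):

```javascript
// Simplified semver caret check for major >= 1: ^3.6.0 allows any
// 3.x.y at or above 3.6.0, but never a new major (4.0.0).
function satisfiesCaret(version, floor) {
    const [vMaj, vMin, vPat] = version.split('.').map(Number);
    const [fMaj, fMin, fPat] = floor.split('.').map(Number);
    if (vMaj !== fMaj) {
        return false;          // crossing a major is never allowed
    }
    if (vMin !== fMin) {
        return vMin > fMin;    // higher minor within the same major is fine
    }
    return vPat >= fPat;       // same minor: patch must meet the floor
}
```

So `^3.6.0` resolves to 3.6.1 today and will pick up future 3.x releases, while excluding the 3.4.x/3.5.x builds on older librdkafka.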

Alternatives considered

  • Add backbeat-metrics to the kafkacleaner test's ignoredTopics: Would hide the symptom but not fix the underlying bug. The metrics topic isn't special — any topic consumed via BackbeatConsumer with cooperative-sticky assignment is affected.
  • Set fromOffset: 'earliest' in MetricsConsumer: Would fix the "skipped messages" symptom (layer 3) but not the partition orphaning (layer 2). Worth doing separately as defense-in-depth but not sufficient alone.
  • Remove commit() from the revoke callback: Would fix the root cause in BackbeatConsumer but is a riskier code change that could affect offset commit guarantees across all consumers. The librdkafka upgrade fixes the same issue at the library level without changing application semantics.

Upgrade safety

librdkafka 2.3.0 → 2.12.0: The librdkafka CHANGELOG shows no consumer-breaking changes in this range. The only notable breaking change is in v2.4.0 where INVALID_RECORD producer errors became non-retriable — this does not affect consumers. KIP-848 (new consumer group protocol) in 2.12.0 is opt-in (group.protocol=consumer must be explicitly set) and does not affect the default classic protocol. The metadata recovery behavior change in 2.12.0 (brokers not in metadata responses are removed and clients re-bootstrap) is a minor behavioral difference that should not impact normal operation.
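The opt-in nature of KIP-848 can be seen in the consumer configuration: the classic protocol is used unless `group.protocol` is set explicitly. A minimal sketch of a config like Backbeat's (property names are real librdkafka settings; the group id is taken from this investigation):

```javascript
// Without an explicit 'group.protocol': 'consumer', librdkafka 2.12.0
// keeps using the classic group protocol, so the upgrade does not change
// group membership behavior for existing consumers.
const consumerConfig = {
    'group.id': 'backbeat-metrics-group-crr',
    'partition.assignment.strategy': 'cooperative-sticky',
    // 'group.protocol': 'consumer',  // would opt in to KIP-848; left unset
};
```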

node-rdkafka 2.x → 3.x: The 3.0.0 release only dropped support for EOL Node.js versions — no API changes. Backbeat requires Node >= 20 and runs on Node 22.14.0.

Issue: BB-760

@bert-e (Contributor) commented Mar 26, 2026

Hello delthas,

My role is to assist you with the merge of this
pull request. Please type @bert-e help to get information
on this process, or consult the user documentation.

Available options

| name | description |
| --- | --- |
| `/after_pull_request` | Wait for the given pull request id to be merged before continuing with the current one. |
| `/bypass_author_approval` | Bypass the pull request author's approval |
| `/bypass_build_status` | Bypass the build and test status |
| `/bypass_commit_size` | Bypass the check on the size of the changeset TBA |
| `/bypass_incompatible_branch` | Bypass the check on the source branch prefix |
| `/bypass_jira_check` | Bypass the Jira issue check |
| `/bypass_peer_approval` | Bypass the pull request peers' approval |
| `/bypass_leader_approval` | Bypass the pull request leaders' approval |
| `/approve` | Instruct Bert-E that the author has approved the pull request. ✍️ |
| `/create_pull_requests` | Allow the creation of integration pull requests. |
| `/create_integration_branches` | Allow the creation of integration branches. |
| `/no_octopus` | Prevent Wall-E from doing any octopus merge and use multiple consecutive merges instead |
| `/unanimity` | Change review acceptance criteria from one reviewer at least to all reviewers |
| `/wait` | Instruct Bert-E not to run until further notice. |

Available commands

| name | description |
| --- | --- |
| `/help` | Print Bert-E's manual in the pull request. |
| `/status` | Print Bert-E's current status in the pull request TBA |
| `/clear` | Remove all comments from Bert-E from the history TBA |
| `/retry` | Re-start a fresh build TBA |
| `/build` | Re-start a fresh build TBA |
| `/force_reset` | Delete integration branches & pull requests, and restart merge process from the beginning. |
| `/reset` | Try to remove integration branches unless there are commits on them which do not appear on the source branch. |

Status report is not available.

@bert-e (Contributor) commented Mar 26, 2026

Waiting for approval

The following approvals are needed before I can proceed with the merge:

  • the author

  • 2 peers

@claude bot commented Mar 26, 2026

  • ^3.5.0 semver range allows 3.6.0+ (librdkafka 2.12.0), contradicting the PR description's goal of avoiding 2.12.0 metadata recovery changes. Pin tighter or clarify intent.

    Review by Claude Code

@codecov bot commented Mar 26, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 74.76%. Comparing base (51f94a6) to head (1735ada).

Additional details and impacted files

2 files with indirect coverage changes.

| Components | Coverage Δ |
| --- | --- |
| Bucket Notification | 80.37% <ø> (ø) |
| Core Library | 81.19% <ø> (+0.62%) ⬆️ |
| Ingestion | 71.14% <ø> (ø) |
| Lifecycle | 79.01% <ø> (ø) |
| Oplog Populator | 85.83% <ø> (ø) |
| Replication | 59.61% <ø> (ø) |
| Bucket Scanner | 85.76% <ø> (ø) |
```
@@                 Coverage Diff                 @@
##           development/9.3    #2728      +/-   ##
===================================================
+ Coverage            74.51%   74.76%   +0.25%
===================================================
  Files                  200      200
  Lines                13610    13610
===================================================
+ Hits                 10141    10176      +35
+ Misses                3459     3424      -35
  Partials                10       10
```
| Flag | Coverage Δ |
| --- | --- |
| api:retry | 9.14% <ø> (ø) |
| api:routes | 8.96% <ø> (ø) |
| bucket-scanner | 85.76% <ø> (ø) |
| ft_test:queuepopulator | 10.87% <ø> (+1.81%) ⬆️ |
| ingestion | 12.54% <ø> (ø) |
| lifecycle | 18.84% <ø> (-0.01%) ⬇️ |
| notification | 1.03% <ø> (ø) |
| oplogPopulator | 0.14% <ø> (ø) |
| replication | 18.47% <ø> (-0.01%) ⬇️ |
| unit | 51.11% <ø> (ø) |

Flags with carried forward coverage won't be shown.


@claude bot commented Mar 26, 2026

LGTM — the dependency bump is clean with no code changes required. The node-rdkafka 3.x release has no API breaking changes (only dropped EOL Node.js support), and the librdkafka upgrade path from 2.3.0 to 2.12.0 has no consumer-breaking changes. The root cause analysis in the PR body is thorough.

- PR body says ^3.5.0 but package.json has ^3.6.0 — update one to match the other

Review by Claude Code

Upgrades node-rdkafka to ^3.6.0 (resolving to 3.6.1 / librdkafka 2.12.0)
to fix a cooperative-sticky partition assignor bug that causes partitions
to become orphaned during consumer group rebalances.

Issue: BB-760
@delthas force-pushed the bugfix/BB-760/bump-node-rdkafka branch from f528083 to 1735ada on March 26, 2026 at 16:17
@claude bot commented Mar 26, 2026

Excellent investigation and write-up. The dependency bump itself is clean — only package.json and yarn.lock touched, no API changes in node-rdkafka 3.x, and the librdkafka upgrade path is well-justified.

- PR title says ^3.5.0 but the code sets the floor to ^3.6.0 — title should be updated to match

Review by Claude Code

@delthas delthas changed the title fix(deps): bump node-rdkafka to ^3.5.0 to fix cooperative-sticky rebalance bug fix(deps): bump node-rdkafka to ^3.6.0 to fix cooperative-sticky rebalance bug Mar 26, 2026