Commit 7f70dea

Merge pull request #1342 from tulios/v2.0.0-release

Release v2.0.0

2 parents 3323145 + 707827b

File tree

8 files changed: +242 −8 lines changed
CHANGELOG.md

Lines changed: 30 additions & 0 deletions
@@ -5,6 +5,36 @@ All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
 and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
 
+## [2.0.0] - 2022-05-06
+
+This is the first major version released in 4 years, and it contains a few important breaking changes. **A [migration guide](https://kafka.js.org/docs/migration-guide-v2.0.0) has been prepared to help with the migration process.** Be sure to read it before upgrading from older versions of KafkaJS.
+
+### Added
+- Validate `configEntries` when creating topics #1309
+- New `topics` argument for `consumer.subscribe` to subscribe to multiple topics #1313
+- Support duplicate header keys #1132
+
+### Removed
+- **BREAKING:** Drop support for Node 10 and 12 #1333
+- **BREAKING:** Remove deprecated enum `ResourceTypes` #1334
+- **BREAKING:** Remove deprecated argument `topic` from `admin.fetchOffsets` #1335
+- **BREAKING:** Remove deprecated method `getTopicMetadata` from the admin client #1336
+- **BREAKING:** Remove misspelled type `TopicPartitionOffsetAndMedata` #1338
+- **BREAKING:** Remove deprecated error property `originalError`, replaced by `cause` #1341
+
+### Changed
+- **BREAKING:** Change default partitioner to be Java-compatible #1339
+- Improve consumer performance #1258
+- **BREAKING:** Enforce request timeout by default #1337
+- Honor default replication factor and partition count when creating topics #1305
+- Increase default authentication timeout to 10 seconds #1340
+
+### Fixed
+- Fix invalid sequence numbers when producing concurrently with the idempotent producer #1050 #1172
+- Fix correlation id and sequence number overflow #1310
+- Fix consumer not restarting on retriable connection errors #1304
+- Avoid endless sleep loop #1323
+
 ## [1.16.0] - 2022-02-09
 
 ### Added
docs/Admin.md

Lines changed: 0 additions & 4 deletions
@@ -105,10 +105,6 @@ await admin.createPartitions({
 | count       | New partition count, mandatory          |      |
 | assignments | Assigned brokers for each new partition | null |
 
-## <a name="get-topic-metadata"></a> Get topic metadata
-
-Deprecated, see [Fetch topic metadata](#fetch-topic-metadata)
-
 ## <a name="fetch-topic-metadata"></a> Fetch topic metadata
 
 ```javascript

docs/Configuration.md

Lines changed: 10 additions & 0 deletions
@@ -289,6 +289,16 @@ new Kafka({
 })
 ```
 
+The request timeout can be disabled by setting `enforceRequestTimeout` to `false`.
+
+```javascript
+new Kafka({
+  clientId: 'my-app',
+  brokers: ['kafka1:9092', 'kafka2:9092'],
+  enforceRequestTimeout: false
+})
+```
+
 ## Default Retry
 
 The `retry` option can be used to set the configuration of the retry mechanism, which is used to retry connections and API calls to Kafka (when using producers or consumers).

docs/MigrationGuide-2-0-0.md

Lines changed: 191 additions & 0 deletions
@@ -0,0 +1,191 @@

---
id: migration-guide-v2.0.0
title: Migrating to v2.0.0
---

v2.0.0 is the first major version of KafkaJS released since 2018. For most users, the changes required to upgrade from 1.x.x are very minor, but it is still important to read through the list of changes to know what, if any, changes need to be made.

## Producer: New default partitioner

> 🚨&nbsp; **Important!** 🚨
>
> Not selecting the right partitioner will cause messages to be produced to different partitions than in versions prior to 2.0.0.

The default partitioner distributes messages consistently based on a hash of the message `key`. v1.8.0 introduced a new partitioner called `JavaCompatiblePartitioner` that behaves the same way, but fixes a bug where, in some circumstances, a message with the same key would be distributed to different partitions when produced with KafkaJS and the Java client.

**In v2.0.0 the following changes have been made**:

* `JavaCompatiblePartitioner` is renamed `DefaultPartitioner`.
* The partitioner previously called `JavaCompatiblePartitioner` is selected as the default partitioner if no partitioner is configured.
* The old `DefaultPartitioner` is renamed `LegacyPartitioner`.

If no partitioner is selected when creating the producer, a warning will be logged. This warning can be silenced either by specifying a partitioner to use or by setting the environment variable `KAFKAJS_NO_PARTITIONER_WARNING`. The warning will be removed in a future version.

### What do I need to do?

What you need to do depends on which partitioner you were previously using and whether or not co-partitioning is important to you.

#### "I was previously using the default partitioner and I want to keep the same behavior"

Import the `LegacyPartitioner` and configure your producer to use it:

```js
const { Partitioners } = require('kafkajs')
kafka.producer({ createPartitioner: Partitioners.LegacyPartitioner })
```

#### "I was previously using the `JavaCompatiblePartitioner` and I want to keep that behavior"

The new `DefaultPartitioner` is re-exported as `JavaCompatiblePartitioner`, so existing code will continue to work. However, that export will be removed in a future version, so it is recommended to either remove the partitioner from the configuration or explicitly configure the producer to use what is now the default partitioner:

```js
// Rely on the default partitioner being compatible with the Java partitioner
kafka.producer()

// Or explicitly use the default partitioner
const { Partitioners } = require('kafkajs')
kafka.producer({ createPartitioner: Partitioners.DefaultPartitioner })
```

#### "I use a custom partitioner"

Custom partitioners are unaffected, unless they build on either of the two built-in partitioners.
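For reference, a custom partitioner in KafkaJS is a factory function that returns a `({ topic, partitionMetadata, message }) => partition` function. A minimal sketch (the factory contract is the documented one; the routing logic below is purely illustrative):

```javascript
// Sketch of a custom partitioner. The factory shape follows the KafkaJS
// custom-partitioner contract; the key-length routing is just an example.
const MyPartitioner = () => {
  return ({ topic, partitionMetadata, message }) => {
    // partitionMetadata lists the topic's partitions
    const numPartitions = partitionMetadata.length
    const key = message.key ? message.key.toString() : ''
    // Illustrative only: route by key length
    return key.length % numPartitions
  }
}

// Usage: kafka.producer({ createPartitioner: MyPartitioner })
```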
#### "It's not important to me that messages with the same key end up in the same partition as in previous versions"

Use the new default partitioner:

```js
kafka.producer()
```

## Request timeouts enabled

v1.5.1 added a request timeout mechanism. Due to some issues with the initial implementation, it was not enabled by default, but could be enabled using the undocumented `enforceRequestTimeout` flag. The issues have long since been ironed out, and request timeout enforcement is now enabled by default in v2.0.0.

The request timeout mechanism can be disabled like so:

```javascript
new Kafka({ enforceRequestTimeout: false })
```

See [Request Timeout](/docs/2.0.0/configuration#request-timeout) for more details.

## Consumer: Supporting duplicate header keys

If a message has more than one header value for the same key, previous versions of KafkaJS would discard all but one of the values. The values for such a key are now returned as an array instead.

```js
/**
 * Given a message like this:
 * {
 *   headers: {
 *     event: "birthday",
 *     participants: "Alice",
 *     participants: "Bob"
 *   }
 * }
 */

// Before
> message.headers
{
  event: <Buffer 62 69 72 74 68 64 61 79>,
  participants: <Buffer 42 6f 62>
}

// After
> message.headers
{
  event: <Buffer 62 69 72 74 68 64 61 79>,
  participants: [
    <Buffer 41 6c 69 63 65>,
    <Buffer 42 6f 62>
  ]
}
```

Adapt your code by handling header values that may be arrays:

```js
// Before
const participants = message.headers["participants"].toString()

// After
const participants = Array.isArray(message.headers["participants"])
  ? message.headers["participants"].map(participant => participant.toString()).join(", ")
  : message.headers["participants"].toString()
```
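On the producing side, a sketch of what duplicate header keys could look like, assuming the producer accepts an array of values under a single key (the topic name and payload here are hypothetical; the `send` call is commented out because it needs a running broker):

```javascript
// Sketch: duplicate header keys expressed as an array of values (per #1132).
// Assumption: the v2 producer accepts array header values.
const message = {
  key: 'birthday',
  value: JSON.stringify({ event: 'birthday' }),
  headers: {
    event: 'birthday',
    participants: ['Alice', 'Bob'], // one entry per duplicated header value
  },
}

// await producer.send({ topic: 'events', messages: [message] })
```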
## Admin: `getTopicMetadata` removed

The `getTopicMetadata` method of the admin client has been replaced by `fetchTopicMetadata`. `getTopicMetadata` had limitations that prevented it from getting metadata for all topics in the cluster.

See [Fetch Topic Metadata](/docs/2.0.0/admin#a-name-fetch-topic-metadata-a-fetch-topic-metadata) for details.
## Admin: `fetchOffsets` accepts `topics` instead of `topic`

`fetchOffsets` used to only be able to fetch offsets for a single topic; now it can fetch offsets for multiple topics.

To adapt your existing code, pass in an array of `topics` instead of a single `topic` string, and handle the promise resolving to an array in which each item is an object with a topic and an array of partition-offsets.

```js
// Before
const partitions = await admin.fetchOffsets({ groupId, topic: 'topic-a' })
for (const { partition, offset } of partitions) {
  admin.logger().info(`${groupId} is at offset ${offset} of partition ${partition}`)
}

// After
const topicOffsets = await admin.fetchOffsets({ groupId, topics: ['topic-a', 'topic-b'] })
for (const { topic, partitions } of topicOffsets) {
  for (const { partition, offset } of partitions) {
    admin.logger().info(`${groupId} is at offset ${offset} of ${topic}:${partition}`)
  }
}
```

## Removed support for Node 10 and 12

KafkaJS supports all currently supported versions of Node.js. If you are currently using Node.js 10 or 12, you will get a warning when installing KafkaJS, and there is no guarantee that it will function. We **strongly** encourage you to upgrade to a supported, secure version of Node.js.

## `originalError` property replaced with `cause`

Some errors that are triggered by other errors, such as `KafkaJSNumberOfRetriesExceeded`, used to have a property called `originalError` that contained a reference to the cause. This property has been renamed to `cause`, to align more closely with the [Error Cause](https://tc39.es/proposal-error-cause/) specification.
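The rename can be illustrated without a broker; a minimal, Kafka-free sketch (the error values below are hypothetical, constructed only to show the property):

```javascript
// Sketch: read `err.cause` where you previously read `err.originalError`.
// These errors are hand-built stand-ins, not real KafkaJS errors.
const connectionError = new Error('connection refused')

const retriesExceeded = new Error('Retries exceeded')
retriesExceeded.name = 'KafkaJSNumberOfRetriesExceeded'
retriesExceeded.cause = connectionError // v2.0.0; previously `originalError`

// Before (v1.x): retriesExceeded.originalError.message
// After (v2.0.0):
console.log(retriesExceeded.cause.message) // "connection refused"
```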
## Typescript: `ResourceTypes` replaced by `AclResourceTypes` and `ConfigResourceTypes`

The `ResourceTypes` enum has been split into `AclResourceTypes` and `ConfigResourceTypes`. The enum values happened to be the same for the two, even though they were actually unrelated to each other.

To migrate, simply import `ConfigResourceTypes` instead of `ResourceTypes` when operating on configs, and `AclResourceTypes` when operating on ACLs.

```ts
// Before
import { ResourceTypes } from 'kafkajs'

await admin.describeConfigs({
  includeSynonyms: false,
  resources: [
    {
      type: ResourceTypes.TOPIC,
      name: 'topic-name'
    }
  ]
})

// After
import { ConfigResourceTypes } from 'kafkajs'

await admin.describeConfigs({
  includeSynonyms: false,
  resources: [
    {
      type: ConfigResourceTypes.TOPIC,
      name: 'topic-name'
    }
  ]
})
```

## Typescript: `TopicPartitionOffsetAndMedata` removed

Use `TopicPartitionOffsetAndMetadata` instead.

package.json

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 {
   "name": "kafkajs",
-  "version": "1.16.0",
+  "version": "2.0.0",
   "description": "A modern Apache Kafka client for node.js",
   "author": "Tulio Ornelas <[email protected]>",
   "main": "index.js",

src/index.js

Lines changed: 3 additions & 3 deletions
@@ -25,9 +25,9 @@ const DEFAULT_METADATA_MAX_AGE = 300000
 const warnOfDefaultPartitioner = once(logger => {
   if (process.env.KAFKAJS_NO_PARTITIONER_WARNING == null) {
     logger.warn(
-      `KafkaJS v2.0.0 switched default partitioner. To retain the same partitioning behavior as in previous versions, create the producer with the option "createPartitioner: Partitioners.LegacyPartitioner". See ${websiteUrl(
-        'docs/producing',
-        'default-partitioners'
+      `KafkaJS v2.0.0 switched default partitioner. To retain the same partitioning behavior as in previous versions, create the producer with the option "createPartitioner: Partitioners.LegacyPartitioner". See the migration guide at ${websiteUrl(
+        'docs/migration-guide-v2.0.0',
+        'producer-new-default-partitioner'
       )} for details. Silence this warning by setting the environment variable "KAFKAJS_NO_PARTITIONER_WARNING=1"`
     )
   }

website/i18n/en.json

Lines changed: 4 additions & 0 deletions
@@ -43,6 +43,9 @@
       "title": "A Brief Intro to Kafka",
       "sidebar_label": "Intro to Kafka"
     },
+    "migration-guide-v2.0.0": {
+      "title": "Migrating to v2.0.0"
+    },
     "pre-releases": {
       "title": "Pre-releases"
     },
@@ -331,6 +334,7 @@
     "Usage": "Usage",
     "Examples": "Examples",
     "API Reference": "API Reference",
+    "Migration Guides": "Migration Guides",
     "Developing KafkaJS": "Developing KafkaJS"
   }
 },

website/sidebars.json

Lines changed: 3 additions & 0 deletions
@@ -20,6 +20,9 @@
     "consumer-example"
   ],
   "API Reference": [],
+  "Migration Guides": [
+    "migration-guide-v2.0.0"
+  ],
   "Developing KafkaJS": [
     "contribution-guide",
     "development-environment",
