Improve performance by replacing reduce with spread with forEach #1436
We are using KafkaJS with a cluster that has thousands of topics. We want to migrate to KafkaJS 2, but unfortunately there is a very heavy computation happening in the
getActiveTopicPartitions
function (see screenshot). The issue is caused by using spread inside reduce, which has O(n²) complexity. With 12k topics, the current implementation takes 40 seconds to compute the result of this function, whereas the proposed implementation takes only 20 milliseconds.
See this article for a more thorough explanation.
This is a show-stopper for our upgrade.
Other places in the code have similar behavior. They are less critical for the tested flow, but could also be changed to forEach. WDYT?
Thank you.