Description
Within my integration test I have encountered a situation in which deleted topics were being recreated for no apparent reason by the producer instance (no messages were being produced to those already deleted topics anymore). I'm relying on the latest confluent-kafka-dotnet v2.5.2, but I'm writing here since the log looks like it originates from librdkafka.
How to reproduce
1. I create a long-lived producer that is used for the whole test (with settings: Acks.All, MaxInFlight = 1, and EnableDeliveryReports = true).
2. I create a batch of topics with a large number of partitions (for instance, 4 topics, each with ~200 partitions).
3. In parallel, I produce many messages to these topics (my goal is to cover all partitions) and await every produce call.
4. Once I complete my assertions, I delete those topics and prepare a new batch of topics to move forward (with a smaller partition count).
5. I reuse the same producer instance against the new batch of topics and repeat until the last topic has only 1 partition (a minimal sketch of this loop follows the list).
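Here is a minimal sketch of that loop (not the actual test code; the bootstrap address, topic names, batch sizes, and payload are placeholders I've picked for illustration):

```csharp
using System;
using System.Linq;
using Confluent.Kafka;
using Confluent.Kafka.Admin;

const string bootstrap = "localhost:9092"; // placeholder; the real broker runs in Docker

var producerConfig = new ProducerConfig
{
    BootstrapServers = bootstrap,
    Acks = Acks.All,
    MaxInFlight = 1,
    EnableDeliveryReports = true,
};

using var adminClient = new AdminClientBuilder(
    new AdminClientConfig { BootstrapServers = bootstrap }).Build();

// One long-lived producer reused across every topic batch.
using var producer = new ProducerBuilder<Null, string>(producerConfig).Build();

// Shrinking partition counts per batch (placeholder values), down to 1.
foreach (var partitions in new[] { 200, 100, 50, 10, 1 })
{
    // 1. Create a batch of topics with the current partition count.
    var topics = Enumerable.Range(0, 4)
        .Select(_ => $"{Guid.NewGuid():N}_Partitions_{partitions}")
        .ToList();
    await adminClient.CreateTopicsAsync(topics.Select(
        t => new TopicSpecification { Name = t, NumPartitions = partitions }));

    // 2. Produce one message to every partition, awaiting each delivery report.
    //    (The real test produces in parallel; sequential here for brevity.)
    foreach (var topic in topics)
        for (var p = 0; p < partitions; p++)
            await producer.ProduceAsync(
                new TopicPartition(topic, new Partition(p)),
                new Message<Null, string> { Value = "payload" });

    // 3. Assertions run here, then the batch is deleted; the same producer
    //    instance carries over to the next (smaller) batch.
    await adminClient.DeleteTopicsAsync(topics);
}
```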
On stderr I observe logs like:
```
%5|1724155743.093|PARTCNT|A.ff694163-3c88-42d0-885d-95129e189b26#producer-947| [thrd:main]: Topic beb73ee463ae43edb25576bf83f48c59_Partitions_200 partition count changed from 200 to 9
```
If I dispose of the producer instance and create a new one per batch, this does not occur.
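For reference, the variant that does not exhibit the issue just scopes the producer to the batch; assuming the same producerConfig as in the sketch above, with RunBatchAsync as a hypothetical helper standing in for the create/produce/assert/delete steps:

```csharp
// Workaround: a fresh producer per batch instead of one shared instance.
foreach (var partitions in new[] { 200, 100, 50, 10, 1 })
{
    using var producer = new ProducerBuilder<Null, string>(producerConfig).Build();
    await RunBatchAsync(producer, partitions); // hypothetical helper, see above
} // producer disposed here; the PARTCNT warnings no longer appear
```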
Since I await each produce request, I believe this happens when the producer requests metadata for the already deleted topics: because the broker has auto.create.topics.enable = true, they get recreated with the broker defaults, in my case 9 partitions.
Is this correct, and is it a bug or desired behavior?
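If my reading is right, disabling auto.create.topics.enable on the broker should make the symptom disappear, since metadata requests for the deleted topics would then fail with UNKNOWN_TOPIC_OR_PART instead of recreating them with the broker-default num.partitions (apparently 9 here). As a sanity check of that premise, the relevant broker settings can be read back with the admin client; a sketch assuming the adminClient from the snippet above and a placeholder broker id:

```csharp
// Read back the two broker settings the hypothesis depends on.
// Broker id "1" is a placeholder; take a real id from adminClient.GetMetadata.
var results = await adminClient.DescribeConfigsAsync(new[]
{
    new ConfigResource { Type = ResourceType.Broker, Name = "1" },
});
foreach (var entry in results[0].Entries.Values)
    if (entry.Name is "auto.create.topics.enable" or "num.partitions")
        Console.WriteLine($"{entry.Name} = {entry.Value}");
```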
Locally it occurs almost always when topics have more than 80 partitions; for lower partition counts it does not always occur. Below 50 partitions I have not spotted it.
Additionally, I tested against librdkafka 2.3.0 and the behavior is the same.
> Once I complete my assertions, I delete those topics and prepare a new batch of topics to move forward (with a smaller partition count).
Do all the topics in the new batch have different names than the old batch, or do they have the same names?
If they have the same names, this is normal: even when deleted, a topic stays in the metadata cache until its deletion is detected from a metadata response, and it's possible the deletion is detected directly as a partition count change.
So these recreated topics are empty.
Checklist
IMPORTANT: We will close issues where the checklist has not been completed.
Please provide the following information:
- librdkafka version: 2.5.0 (but also checked against 2.3.0)
- Operating system: Docker hosted on Ubuntu, also on GitLab runners in CI