
On batch-msg dispatching: broker should disconnect consumer which doesn't support batch-message #215

Merged · 3 commits into apache:master · Feb 24, 2017

Conversation

rdhabalia
Contributor

Motivation

Broker should verify the consumer's version before dispatching batch-messages to it. Right now, the broker dispatches batch-messages regardless of whether the consumer supports them, which causes a failure on the client side when a consumer without the batch-message feature tries to consume a batch-message.

Modifications

Disconnect the consumer when dispatching a batch-message if the consumer doesn't support batch-messages.

Result

The broker will not dispatch batch-messages to a consumer that doesn't support them.
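The version gate above can be sketched as follows. This is a minimal illustration, not Pulsar's actual API: the class, method names, and the version threshold are all hypothetical.

```java
// Sketch: gate batch-message dispatch on the consumer's protocol version.
// All names and the threshold value are illustrative, not Pulsar internals.
public class BatchDispatchGate {
    // Hypothetical protocol version at which batch-messages were introduced.
    static final int BATCH_MESSAGE_MIN_PROTOCOL_VERSION = 4;

    /** True if a consumer on this protocol version can decode batch-messages. */
    static boolean isBatchMessageCompatible(int protocolVersion) {
        return protocolVersion >= BATCH_MESSAGE_MIN_PROTOCOL_VERSION;
    }

    public static void main(String[] args) {
        // An old client fails the check; a current client passes it.
        System.out.println(isBatchMessageCompatible(3));
        System.out.println(isBatchMessageCompatible(4));
    }
}
```

The broker would consult such a check per consumer before handing it a batch-message, and take the disconnect/close path when it returns false.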

@rdhabalia added the type/bug label on Feb 16, 2017
@rdhabalia added this to the 1.17 milestone on Feb 16, 2017
@rdhabalia self-assigned this on Feb 16, 2017
Contributor

@merlimat left a comment


The only issue I see is that the consumer will keep reconnecting in a loop without backoff.

The backoff time for reconnection is reset every time a producer/consumer successfully re-establishes itself to the broker.

@rdhabalia
Contributor Author

> The only issue I see is that the consumer will keep reconnecting in a loop without backoff. The backoff time for reconnection is reset every time a producer/consumer successfully re-establishes itself to the broker.

That is correct. But we are targeting clients that run an old version, so we have to prevent it at the broker side. To address the reconnection, I modified the change:

  • close the consumer at the broker side without closing the connection
  • this removes the consumer from the subscription and ServerCnx, and resends pendingAck messages to other consumers

This prevents the reconnection, logs a message at the broker, and the consumer will not receive any further messages.
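The broker-side close described in the list above can be sketched with a toy model. All classes and fields here are hypothetical stand-ins, not Pulsar's actual subscription internals: the point is only that closing the consumer removes it from the subscription and reroutes its pending-ack messages, while the connection object stays up.

```java
import java.util.*;

// Toy model (hypothetical names): close a consumer at the broker side without
// closing its connection; pending-ack messages go back for redelivery.
public class SubscriptionSketch {
    final List<String> consumers = new ArrayList<>();
    final Map<String, List<Long>> pendingAcks = new HashMap<>(); // consumer -> unacked ids
    final Deque<Long> redeliveryQueue = new ArrayDeque<>();      // to be resent to others

    void addConsumer(String name) {
        consumers.add(name);
        pendingAcks.put(name, new ArrayList<>());
    }

    void dispatch(String consumer, long messageId) {
        pendingAcks.get(consumer).add(messageId);
    }

    /** Close only the consumer: it leaves the subscription, its unacked
     *  messages are queued for redelivery, and the connection is untouched. */
    void closeConsumer(String name) {
        consumers.remove(name);
        redeliveryQueue.addAll(pendingAcks.remove(name));
    }

    public static void main(String[] args) {
        SubscriptionSketch sub = new SubscriptionSketch();
        sub.addConsumer("old-client");
        sub.addConsumer("new-client");
        sub.dispatch("old-client", 1L);
        sub.dispatch("old-client", 2L);
        sub.closeConsumer("old-client");
        System.out.println(sub.consumers);             // only the supported consumer remains
        System.out.println(sub.redeliveryQueue.size()); // its unacked messages await redelivery
    }
}
```

Because only the consumer object is torn down, the client sees no connection failure to retry, which is what stops the reconnect loop.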

Any thoughts?

@merlimat
Contributor

Sounds good to me. Another option I was thinking of would be to mark the topic as a "topic that at some point got batched messages" and prevent the consumer from subscribing again. That way the backoff would kick in.

@rdhabalia
Contributor Author

rdhabalia commented Feb 17, 2017

> Another option I was thinking of would be to mark the topic as a "topic that at some point got batched messages" and prevent the consumer from subscribing again. That way the backoff would kick in

Yes, that is also a good option; at least that way the client will receive an error and the consumer would know what's going on.

However, there are a few complications in implementing it, and I'm not sure it's worth introducing to solve a problem that may affect a very small number of clients for now.

  • to prevent the same unsupported consumer from connecting again, we would need to store a set of unsupported consumerIds at the topic level (because once the consumer is disconnected, it will try to create a new connection on the topic)
  • however, consumer-ids are unique at the connection level but not at the topic level, so we can't identify a consumer by its consumerId alone.
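The collision in the second bullet can be shown with a toy example (all values hypothetical): because each connection assigns its own consumer-ids, a topic-level set keyed on consumerId alone would wrongly match an unrelated consumer on another connection.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrates why a topic-level Set<Long> of "unsupported" consumer-ids fails:
// consumer-ids are only unique per connection. Values are hypothetical.
public class ConsumerIdCollision {
    public static void main(String[] args) {
        // Two different connections can each independently assign consumerId = 1.
        long cnxA_consumerId = 1L; // the unsupported consumer we want to block
        long cnxB_consumerId = 1L; // an unrelated, fully supported consumer

        Set<Long> unsupportedOnTopic = new HashSet<>();
        unsupportedOnTopic.add(cnxA_consumerId);

        // The unrelated consumer on connection B is wrongly matched.
        System.out.println(unsupportedOnTopic.contains(cnxB_consumerId)); // true
    }
}
```

A correct key would have to include the connection identity as well, which is part of the complication being weighed here.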

Do you think we should still go with this approach?

@rdhabalia
Contributor Author

@merlimat I think I misunderstood your previous comment. I have updated the change; can you please take a look at it? I have introduced a new error code, UnsupportedVersionError, so that if one introduces a new client without initial support for batch-msg, it can also handle the error gracefully.

Contributor

@merlimat left a comment


Looks good

int permitsToReduce = 0;
Iterator<Entry> iter = entries.iterator();
boolean unsupportedVersion = false;
Contributor


What about :

boolean clientSupportBatchMessages = cnx.isBatchMessageCompatibleVersion();

@@ -77,6 +77,7 @@ enum ServerError {
ProducerBlockedQuotaExceededError = 7; // Unable to create producer because backlog quota exceeded
ProducerBlockedQuotaExceededException = 8; // Exception while creating producer because quota exceeded
ChecksumError = 9; // Error while verifying message checksum
UnsupportedVersionError = 10; // Error when broker/client doesn't support requested feature
Contributor


.. when an older client/version doesn't support a required feature
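A sketch of how a client could handle the new error code gracefully, as the comment above intends. The enum values mirror the proto diff above; the handler method and its messages are hypothetical, not Pulsar's client API.

```java
// Sketch: a client that knows UnsupportedVersionError can fail a subscribe
// with a clear message instead of crashing on an undecodable batch payload.
public class ServerErrorSketch {
    enum ServerError {
        ChecksumError(9),            // Error while verifying message checksum
        UnsupportedVersionError(10); // Older client/version doesn't support a required feature
        final int code;
        ServerError(int code) { this.code = code; }
    }

    /** Hypothetical client-side handling of a subscribe failure. */
    static String handle(ServerError err) {
        switch (err) {
            case UnsupportedVersionError:
                // Non-retriable: reconnecting cannot help an incapable client.
                return "subscribe failed: upgrade client to consume batch-messages";
            default:
                return "retriable error: " + err;
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(ServerError.UnsupportedVersionError));
    }
}
```

Treating the error as non-retriable is what lets a newer client surface the problem to the application rather than looping on reconnects.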

@rdhabalia
Contributor Author

Updated the changes.

@rdhabalia rdhabalia merged commit 82f9adb into apache:master Feb 24, 2017
rdhabalia added a commit that referenced this pull request Feb 25, 2017
…sn't support batch-message (#215)

* On batch-msg dispatching: broker should disconnect consumer which doesn't support batch-message

* Close consumer at broker-side without closing connection if consumer doesn't support batch-message

* Fail unsupportedBatchVersion-subscriber if topic has served batch-message
@rdhabalia rdhabalia deleted the batch_consumer branch June 21, 2017 18:55
sijie pushed a commit to sijie/pulsar that referenced this pull request Mar 4, 2018
hrsakai pushed a commit to hrsakai/pulsar that referenced this pull request Dec 10, 2020
Fix data race when accessing partition producer state
hangc0276 pushed a commit to hangc0276/pulsar that referenced this pull request May 26, 2021