Move deserialization from KafkaConsumerActor to KafkaConsumer #902
Conversation
Nice simplification!
I think I'll go ahead and merge this with the behaviour change, but put out a release candidate before the next version to give a chance to test it in the wild.
I think we should change the behavior.
Meaning - you think we should release the change from this PR as-is?
Yes. I don't think the current behavior could be considered intended or a "feature".
Postpones deserialising records until they are pushed onto a consumer stream, reducing the complexity of KafkaConsumerActor. This means that a deserialization error in a partitioned stream will cause only that specific partition stream to fail, rather than crashing the actor - I think that's a good thing, but maybe we should reproduce the existing behaviour (with a configurable toggle) in the 2.x series?
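For context, a rough sketch of what this looks like from the consuming side (the topic name, group id, bootstrap servers, and log-and-stop error handling are illustrative assumptions, not part of this PR): once records are deserialized on their way into the stream, a deserialization failure surfaces in the individual partition stream, where it can be handled without affecting the other partitions.

```scala
import cats.effect.{IO, IOApp}
import fs2.Stream
import fs2.kafka._

object PartitionedConsumerSketch extends IOApp.Simple {

  // Illustrative settings; deserializers for String are resolved implicitly.
  val settings: ConsumerSettings[IO, String, String] =
    ConsumerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092")
      .withGroupId("example-group")
      .withAutoOffsetReset(AutoOffsetReset.Earliest)

  def run: IO[Unit] =
    KafkaConsumer
      .stream(settings)
      .evalTap(_.subscribeTo("example-topic"))
      .flatMap(_.partitionedStream)
      .map { partitionStream =>
        partitionStream
          .evalMap(committable => IO.println(committable.record.value))
          .handleErrorWith { error =>
            // With deserialization happening as records are pushed onto the
            // stream, a bad record fails only this partition's stream; the
            // other partition streams keep running. Here we just log and let
            // this partition stream terminate.
            Stream.exec(IO.println(s"partition stream failed: $error"))
          }
      }
      .parJoinUnbounded
      .compile
      .drain
}
```

Whether a failed partition stream should be logged, restarted, or allowed to fail the whole application is up to the caller, which is part of the appeal of surfacing the error per partition rather than inside the actor.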