[CORE-6807] kafka: change offset_out_of_range condition in replicated_partition::prefix_truncate() #22905
Conversation
Force push: dc8b34a to a07b6c2
Force push to address Nicolae's comments: a07b6c2 to 568e20f
Previously, we would return an `error_code::offset_out_of_range` for a request where `kafka_truncation_offset <= start_offset`. This differs from the behavior most Kafka clients expect when issuing `DeleteRecords` requests. Change the condition to return early with an `error_code::none` result. Also, correct the tests that previously expected a truncate call with `offset <= start_offset` to fail, so they match the new, expected behavior (success).
Adds a wrapper function around `kafka-delete-records.sh` which can be used to issue `DeleteRecords` requests.
Uses `kafka-delete-records.sh` and `rpk trim-prefix` to assert that Redpanda and Kafka achieve parity for certain edge cases involving `DeleteRecords` requests. Uses both `redpanda` and `KafkaService` brokers as mixins.
Force push to address Andrew's comments: 568e20f to a3f7130
ducktape was retried in https://buildkite.com/redpanda/redpanda/builds/53078#01915dd0-b9b4-42ca-afa9-8e7a381f5281
/backport v24.2.x
/backport v24.1.x
Just guessing that we should have gotten a replication and maybe an enterprise reviewer on this PR.
i think any divergence from kafka at the API layer is considered a bug from the enterprise team's perspective, so the spirit of the change seems legit. nice tests!
-    if (
-      kafka_truncation_offset <= start_offset()
-      || kafka_truncation_offset > high_watermark()) {
+    if (kafka_truncation_offset <= start_offset()) {
nit: https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/cluster/Partition.scala#L1688
suggests that if truncation_offset < 0 it should throw an OffsetOutOfRangeException(); it's an edge case, so probably not a big deal.
Ah, there is even more nuance here than that: `-1` indicates a complete wipe up to `high_watermark`, while other values `< 0` are an `offset_out_of_range` error. Will have this covered in a follow-up 👍
Previously, we would return an `error_code::offset_out_of_range` for a request where `kafka_truncation_offset <= start_offset`. This is a divergence from the current Kafka behaviour, and can lead to errors with certain Kafka clients/consumers.
Remove the condition that throws an `offset_out_of_range` error if `kafka_truncation_offset <= start_offset()`, instead returning `error_code::none`.
More context and details in the JIRA link below.
JIRA Link: https://redpandadata.atlassian.net/browse/CORE-6807
Backports Required
Release Notes
Improvements
Allow `DeleteRecords` requests from Kafka clients or `rpk topic trim-prefix` to be called with `truncation_offset <= start_offset` without returning an error. The request is instead treated as a no-op.