
[v23.3.x] Change the kafka_latency_fetch_latency metric #17977

Conversation

vbotbuildovich (Collaborator)

Backport of PR #17720

@vbotbuildovich vbotbuildovich added this to the v23.3.x-next milestone Apr 22, 2024
@vbotbuildovich vbotbuildovich added the kind/backport PRs targeting a stable branch label Apr 22, 2024
@piyushredpanda piyushredpanda modified the milestones: v23.3.x-next, v23.3.13 Apr 23, 2024
@ballard26 (Contributor)

Need to push an additional commit to this PR before we can merge.

The `kafka_latency_fetch_latency` metric originally measured the time
it took to complete one fetch poll. A fetch poll would create a fetch
plan, then execute it in parallel on every shard. On a given shard,
`fetch_ntps_in_parallel` accounted for the majority of the execution time
of the plan.

Since fetches are no longer implemented by polling, there is no
exactly equivalent measurement that can be assigned to the metric.

This commit instead records in the metric the duration of the first call
to `fetch_ntps_in_parallel` on each shard. This first call takes about as
long as it would have during a fetch poll, so the resulting measurement
should be close to the duration of a fetch poll.

(cherry picked from commit 44bde8d)
@ballard26 ballard26 force-pushed the backport-pr-17720-v23.3.x-868 branch from 6537843 to 7c5f71d on April 25, 2024 15:19
@piyushredpanda piyushredpanda merged commit f445662 into redpanda-data:v23.3.x Apr 25, 2024
16 checks passed
Labels: area/redpanda, kind/backport (PRs targeting a stable branch)
4 participants