
Add 'negative' value type for negative events #3656

Closed · wants to merge 1 commit

Conversation

joanlopez (Contributor)

What?

It adds a new value type, negative, for metrics that represent negative events, and uses it in the default handleSummary to display either ✓ or ✗ for the number of passes/fails based on that.
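
For illustration, here is a minimal sketch in Go of the kind of change this could mean for metrics/value_type.go and for the summary's pass/fail marks. The constant name, the surrounding declarations, and the summaryMarks helper are assumptions made for this sketch, not the actual patch:

```go
// Sketch only: names and declarations are assumptions, not the contents of this PR.
package metrics

// A ValueType holds the type of values a metric contains.
type ValueType int

// Possible values for ValueType.
const (
	Default ValueType = iota // values are presented as-is
	Time                     // values are timestamps
	Data                     // values are data amounts (bytes)
	Negative                 // hypothetical: the metric counts a negative event, e.g. http_req_failed
)

// summaryMarks is a hypothetical helper showing how the end-of-test summary
// could pick its marks: for a "negative" metric the non-zero samples are the
// bad ones, so the two marks are swapped.
func summaryMarks(vt ValueType) (nonZeroMark, zeroMark string) {
	if vt == Negative {
		return "✗", "✓"
	}
	return "✓", "✗"
}
```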

Why?

Currently, handleSummary always displays ✓ for passes and ✗ for fails, but that's not accurate (and even confusing) for metrics that represent negative events (like http_req_failed).

So, I decided to give @na--'s proposal on the reported issue a try, as a solution that can be used by any other Rate metric, not only by http_req_failed.

Also, because I feel that using a label instead of ✓ or ✗ might be more difficult to generalize. For instance, what would fails or passes (to give an example) mean in the context of http_req_failed? And having to parameterize the labels would start making the metric model a little bit more complex, with no clear value.

Checklist

  • I have performed a self-review of my code.
  • I have added tests for my changes.
  • I have run linter locally (make lint) and all checks pass.
  • I have run tests locally (make tests) and all tests pass.
  • I have commented on my code, particularly in hard-to-understand areas.

Related PR(s)/Issue(s)

Closes #2306

@joanlopez joanlopez added the ux label Mar 20, 2024
@joanlopez joanlopez requested a review from mstoykov March 20, 2024 07:20
@joanlopez joanlopez self-assigned this Mar 20, 2024
@joanlopez joanlopez requested a review from a team as a code owner March 20, 2024 07:20
@joanlopez joanlopez requested review from oleiade and removed request for a team March 20, 2024 07:20
@codecov-commenter

Codecov Report

Attention: Patch coverage is 71.42857%, with 2 lines in your changes missing coverage. Please review.

Project coverage is 73.51%. Comparing base (f27cca5) to head (e5ca288).

❗ Current head e5ca288 differs from the pull request's most recent head dd70e22. Consider uploading reports for commit dd70e22 to get more accurate results.

| Files | Patch % | Lines |
|---|---|---|
| metrics/value_type.go | 66.66% | 2 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #3656      +/-   ##
==========================================
- Coverage   73.58%   73.51%   -0.07%     
==========================================
  Files         277      275       -2     
  Lines       20247    20248       +1     
==========================================
- Hits        14898    14885      -13     
- Misses       4401     4409       +8     
- Partials      948      954       +6     
| Flag | Coverage Δ |
|---|---|
| macos | ? |
| ubuntu | 73.51% <71.42%> (+0.01%) ⬆️ |
| windows | ? |

Flags with carried forward coverage won't be shown.


@joanlopez joanlopez added this to the v0.51.0 milestone Mar 21, 2024
@mstoykov (Contributor)

Hi, sorry for the slow reply :(

I have 2 problems with this solution:

  1. It makes metrics even more complicated, adding properties to them that are not really present in other metric presentations.
  2. It makes the whole thing even more confusing if you actually use it for thresholds, especially if you are used to the current behaviour (which is the least problematic part).

To me it seems like the best solution would have been for us to reverse this ... years ago, after this was first reported, and keep the old metric around for a while.

A bunch of people also seem to agree that not having checks/crosses would fix this as well, and it has none of the problems of the UI changing depending on what the metric is.

But, by the looks of it, that also depends on the negative changes above.

It is great that someone tried to fix this so we can see how it looks, but to me this doesn't seem like the correct solution.

I would also recommend showcasing how the UI changes when making changes to it.

old:
[screenshot of the current summary output]
this PR:
[screenshot of the summary output with this PR]

As an alternative solution, making it obvious that it is X out of N seems to look okay to me.

[screenshot of the alternative "X out of N" presentation]
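
For illustration only, a minimal sketch of what such an "X out of N" presentation could look like; the formatRate helper and the exact formatting are assumptions for this sketch, not k6 code:

```go
package main

import "fmt"

// formatRate illustrates the "X out of N" idea: instead of ✓/✗ marks, the
// summary states how many samples were non-zero out of the total, which
// reads the same whether the event is positive or negative.
func formatRate(name string, nonZero, total int64) string {
	pct := 0.0
	if total > 0 {
		pct = float64(nonZero) / float64(total) * 100
	}
	return fmt.Sprintf("%s: %.2f%%  %d out of %d", name, pct, nonZero, total)
}

func main() {
	fmt.Println(formatRate("http_req_failed", 3, 1000))
	// prints: http_req_failed: 0.30%  3 out of 1000
}
```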

Also, OpenTelemetry has units as part of the specification, which seems like a thing we can (ab)use to get this and the other "value type" cases under control, but arguably that should be part of a bigger refactor if we go down this route, one that likely should wait for some OpenTelemetry work to be done (cc @olegbespalov).

@joanlopez (Contributor, Author)

Hi, sorry for the slow reply :(

Don't worry at all! :)



To be honest, I don't have any strong preference. I opted for this approach because it seemed to me to be the one with the greatest acceptance, but it's true that this is difficult to measure. Also, I felt that having a mechanism for defining whether a metric is positive or negative (or neither), regardless of the unit, could be useful for other purposes. But it's also true that we don't have any immediate plans that justify that.

As an alternative solution making it obvious that is X out of N stuff seems to look okay to me.

I think the question is: do we really want that?
Or do we really want to postpone this until a major refactor happens (as you said)?

If so, I can easily do it. But this looks like one of those situations where deciding what to do is way more challenging than deciding how, and actually doing it.

@mstoykov (Contributor)

I think the question is: do we really want that?
Or do we really want to postpone this until a major refactor happens (as you said)?

I guess this is more a question for the issue?

I will also paste this there.

Arguably this solution has 2 positives:

  1. No code changes apart from visualization (potentially the same needs to happen for checks, btw).
  2. IMO it removes even more ambiguity: previously, relying on what one culture would take as a ✔️ and another as a ❌, when we are talking about a metric type such as Rate, which counts falsy and truthy values, was a bit questionable (a simplified sketch of how Rate counts samples follows below). Arguably this is the reason why it wasn't found to be so bad when first implemented.
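
To ground the point about Rate counting falsy and truthy values, here is a simplified sketch, modelled loosely on k6's Rate sink rather than copied from it, of how such a metric only counts non-zero samples out of the total, with no inherent notion of "good" or "bad":

```go
package metrics

// RateSink is a simplified stand-in for the sink behind a Rate metric:
// it only counts how many samples were non-zero ("truthy") out of the total.
type RateSink struct {
	Trues int64
	Total int64
}

// Add records one sample value.
func (r *RateSink) Add(value float64) {
	if value != 0 {
		r.Trues++
	}
	r.Total++
}

// Value returns the rate of truthy samples; nothing here says whether
// truthy means "good" (checks) or "bad" (http_req_failed).
func (r *RateSink) Value() float64 {
	if r.Total == 0 {
		return 0
	}
	return float64(r.Trues) / float64(r.Total)
}
```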

@codebien (Contributor) commented Apr 10, 2024

I think the question is: do we really want that?

I would prefer if we do it now. The text proposal (as originally suggested by Ned and now refined by Mihail) already sounds like a good solution.

@joanlopez joanlopez modified the milestones: v0.51.0, v0.52.0 Apr 25, 2024
@joanlopez joanlopez closed this by deleting the head repository May 14, 2024
Successfully merging this pull request may close these issues.

[Console output] http_req_failed : "succMark" & "failMark" seems inversed?