Proposal for adding a new standard http_req_failures metric #1828
I think the hook could be configured through the script options:

```javascript
import http from 'k6/http'

export let options = {
  metricEmissionHooks: {
    http: {
      successHook: 'myHttpSuccessHook',
    }
  }
};

export function myHttpSuccessHook(response) {
  return response.status >= 200 && response.status <= 204
}

export default function () {
  http.get("https://httpbin.test.k6.io/status/200");
}
```

Instead, we can define a naming convention for the hook method. For example:

```javascript
import http from 'k6/http'

export function didSuccessRequest(response) {
  return response.status >= 200 && response.status <= 204
}

export default function () {
  http.get("https://httpbin.test.k6.io/status/200");
}
```
I'm not that fond of this. I'd much rather see something along the lines of:

```javascript
export let options = {
  hooks: {
    http: {
      after: 'myHttpSuccessHook' // "after" might be a bad name, but you get the gist.
    }
  }
};
```

(without having to quote the name, if that's even possible, but I have some vague memory of the interop layer requiring it to be a string?)

The issue with these "named exports" is that we really can't guard against errors in a good way. If the user were to write

I've had issues myself more than once due to the current types being too permissive, not warning when you write
I am definitely for it not being one more magically named one. I was against the

Also, arguably, it will be nice if you can configure different ones for different scenarios, for example.

The "options" ultimately need to end up as JSON and be transported around as such (and not be recalculated), which means that whatever you have in it needs to be able to get to JSON and back in a way that lets us figure out what you meant. We can try to do "magical" stuff so that when you write the name of a function we automatically figure it out (no idea how, and it will probably be finicky), but then writing just the function definition there won't work, which is strange... so, yeah - we need it to be a string.
Do we really want to provide this? If you allow the user to set a callback, they will likely have enough flexibility to change the logic based on the current scenario execution. In this case, it is not compulsory to serialize the string name.

What you call "magically named" is just a documented API for me. @simskij provided another alternative, something like

IMO, this type of dynamic string execution seems unintuitive and error-prone. cc @legander
I'm quite fond of the tagging approach since it avoids adding additional metrics for each request, which are costly to process as it is. After #1321 is implemented this might not be as important, but we should still try to minimize adding new metrics if we can avoid it.

Are we sure that statsd still doesn't support tags? It seems tag support was added last year in statsd/statsd#697, released in v0.9.0. If this was the only output that didn't support them, and if we're moving towards supporting OpenMetrics as a generic output (which does support tags from what I've seen), then it might be worth it to consider the tag approach after all.

The performance penalty of executing the JS hook on each request is certainly a concern. Are we sure we need these checks to be dynamic? If status codes are the only criterion for determining this, maybe we can avoid the hooks and allow configuring them with something like:

```javascript
export let options = {
  http: {
    success: {
      status: ['>=200', '<400'],
    }
  }
}
```

Though I'm not sure if this would perform much better. OTOH, if we need the flexibility for custom failure states determined by headers or something else in the response, then I guess we need the hook-based approach. I'm ambivalent about the syntax, and we can certainly refine it later, though I'd also like to avoid using naming conventions for it.
A few short comments to start:
So, to step back from the details and speak in more general terms... This is a big feature that, even if we agree on everything (and I have plenty of objections 😅), is unlikely to land in k6 v0.31.0. But given that this feature completely depends on the hook/callback (#1716), and that the hook is actually the more complicated part, I think we should focus solely on that as the MVP for v0.31.0. I am not sure we'll manage to do even that for v0.31.0, but if we reach a consensus, I think for sure it's the place we should focus our energies first. And I also mean the "viable" part in "MVP" 😅 If the hook is a bit more generic, everything else here might be completely unnecessary... If we have a generic post-request hook, it can be used for the following three things:

So, it gives us the most flexibility. And for k6 v0.32.0, when we have had a bit of time to carefully consider things, we can simply set such a default callback in Go to something that we think is reasonable. For example, "do not emit

Example code:

```javascript
import http from 'k6/http';
import { Rate } from 'k6/metrics';

let requestFailures = new Rate('http_req_failures');

export let options = {
  thresholds: {
    http_req_failures: ['rate < 0.1'],
  },
};

http.setResponseCallback(function (response, metricsObject) {
  let requestFailed = response.status >= 399 || response.status === 0;
  requestFailures.add(requestFailed);
  if (requestFailed) {
    metricsObject.doNotEmitMetrics(); // not a final API, just illustrative :D
  }
})

export default function () {
  http.get("https://httpbin.test.k6.io/status/200");
  http.get("https://httpbin.test.k6.io/status/503");
  http.get("https://httpbin.test.k6.io/status/503");
  http.get("https://httpbin.test.k6.io/status/403");
}
```
I share your views! The only thing I'd like to expand on is that I think it would make sense to be able to define both default and per-call callbacks. As in:

```javascript
import { skipCallback } from 'http'

http.setDefaultResponseCallback(function (response, metricsObject) {
  let requestFailed = response.status >= 399 || response.status === 0;
  requestFailures.add(requestFailed);
  if (requestFailed) {
    metricsObject.doNotEmitMetrics(); // not a final API, just illustrative :D
  }
})

export default function () {
  http.get("https://httpbin.test.k6.io/status/200"); // Uses the one defined above
  http.get("https://httpbin.test.k6.io/status/503"); // Uses the one defined above
  http.get("https://httpbin.test.k6.io/status/503", { responseCallback: skipCallback }); // Overrides the one above with a noop
  http.get("https://httpbin.test.k6.io/status/403", { responseCallback: (res) => { /* ... */ } }); // Overrides the one above with something else
}
```

While switch cases and ifs in the default callback certainly are an option here, I'd argue that being able to do this per invocation makes sense, given that the URLs and other characteristics of the requests might be dynamic, but the invocations themselves will likely follow a pattern you can reason about.
I like the solution @na-- posted; it seems to provide the most bang for the buck (although initially requiring some setup on the user's end). Having a nice default (which is also brought up) would be nice. Perhaps the naming should indicate more clearly that the responseCallback is being processed before metrics are emitted? Like
👍 for per-request callbacks. Not sure if having a special

👍 for a better name as well; my opinion here is 👍 for
There are many good things with a generic callback that can manipulate all metrics, but there are bad things here as well:
I think we should have a generic hook as well, but I don't think it's required for this feature. We can have many hooks for many different purposes.
@sniku Isn't that specifically what is being mentioned here, or did I misinterpret it?
Agree with @sniku. IMO, one of the main advantages of this feature is being able to easily define a threshold on the error rate:

```javascript
import http from 'k6/http';

export let options = {
  thresholds: {
    http_req_failures: ['rate < 0.1'],
    // or
    'http_reqs{failed:true}': ['rate < 0.1'],
  },
};

export default function () {
  http.get("https://httpbin.test.k6.io/status/200");
  http.get("https://httpbin.test.k6.io/status/503");
  http.get("https://httpbin.test.k6.io/status/503");
  http.get("https://httpbin.test.k6.io/status/403");
}
```
Yes, what @legander said 😄 And, I think we've discussed this before, but as an API design principle, if you can choose between:

In my mind, the first is always better, since it delivers the same benefits but covers corner cases and alternative use cases much, much better. A huge percentage of the open k6 issues and existing technical debt exist because we've chosen the second option far too often. The whole

For example, the majority of people probably want

@ppcano, please explain how my suggestion precludes easily defining a threshold on the error rate?
```javascript
import http from 'k6/http';
import { Rate } from 'k6/metrics';

let requestFailures = new Rate('http_req_failures');

export let options = {
  thresholds: {
    http_req_failures: ['rate < 0.1'],
  },
};

http.setResponseCallback(function (response, metricsObject) {
  let requestFailed = response.status >= 399 || response.status === 0;
  requestFailures.add(requestFailed);
  if (requestFailed) {
    metricsObject.doNotEmitMetrics(); // not a final API, just illustrative :D
  }
})

export default function () {
  http.get("https://httpbin.test.k6.io/status/200");
  http.get("https://httpbin.test.k6.io/status/503");
  http.get("https://httpbin.test.k6.io/status/503");
  http.get("https://httpbin.test.k6.io/status/403");
}
```

@na-- I might have misunderstood your example, but it looks like the user needs to create a Rate metric, implement the hook, and track the values. That is a big difference compared with only:

```javascript
export let options = {
  thresholds: {
    http_req_failures: ['rate < 0.1'],
  },
};
```

Also, it would be more difficult to set a threshold on the error rate with the Test Builder.
@ppcano, you didn't misunderstand my example, you probably just didn't read the text right above it... and when @legander copy-pasted it...

So, when we set such a default (be it k6 v0.32.0, or v0.31.0 if we manage it), unless the user has some specific requirements that would force them to define a custom post-request hook, setting a threshold on

```javascript
import http from 'k6/http';

export let options = {
  thresholds: {
    http_req_failures: ['rate < 0.1'],
  },
};

export default function () {
  http.get("https://httpbin.test.k6.io/status/200");
  http.get("https://httpbin.test.k6.io/status/503");
  http.get("https://httpbin.test.k6.io/status/503");
  http.get("https://httpbin.test.k6.io/status/403");
}
```

This is the whole script - the default post-request hook takes care of updating

Btw, FWIW, having a default post-request hook also makes the breaking change much smaller. We all agree that a breaking change is necessary - the vast majority of people probably wouldn't want to count failed requests in

```javascript
import http from 'k6/http';

http.setDefaultPostRequestHook(null);
```

And I think another good API design principle is that, when you're making a breaking change, you should strive to at least provide an easy solution for people who don't like it and want to go back to how things were... In this specific case, I imagine some people who have adopted their own solution for tracking HTTP errors might make use of it while they transition to our new hook-based approach.
btw I'm starting to dislike
I was re-reading the issue and I see I didn't address two parts of @sniku's argument:

I don't accept the premise here; this can literally be an argument against almost any change. Especially everything k6 does above and beyond tools like

I strongly disagree on both counts. Besides thinking that it's a better solution for the specific problem we are discussing here, as I've expressed above, generic callbacks will solve plenty of other issues real k6 users have had. For example, this forum post linked in #1716. Or #927 / #1171, #800 (custom callback on the first HTTP digest request that has an expected 400 response), #884 (if we have

As to the "many hooks for many different purposes" - this will be quite difficult to do technically (please read through some of the objections in #1716, specifically how it's already tricky when you have
When writing this issue I explored the generic hook idea as well, but I couldn't come up with an API that was intuitive and didn't require the user to define/re-define the built-in

I would really like to see a concrete proposal for the generic hook to solve this specific problem before making up my mind. Here are some comments on the proposal from @na--:

```javascript
let requestFailures = new Rate('http_req_failures'); // redefinition of the built-in metric should not be allowed like this (we have an issue for this)

http.setResponseCallback(function (response, metricsObject) {
  let requestFailed = response.status >= 399 || response.status === 0;
  requestFailures.add(requestFailed); // user responsible for manipulating a built-in metric feels wrong, but acceptable. Tags for this metric are also missing (status, url, etc.)
  if (requestFailed) {
    metricsObject.doNotEmitMetrics(); // not convinced this is a good API :-) We do want to emit _some_ metrics.
  }
  // tagging of http_reqs is missing. I could not come up with a good suggestion for this.
})
```

Even if we do come up with a nice implementation for the default hook, the user will have to copy-paste our entire default implementation just to change the status ranges (let's say 403 is acceptable to the user). It's not possible to extend the default implementation; it needs to be fully replaced by the user. I like the

At the moment, I still think that boolean
I'll create an issue for a metric refactoring proposal, hopefully later today, for a prerequisite we need to implement #1435, #572, and #1443 / #961 (comment). In essence, it will make it so that it doesn't matter if the user has

Fair enough. I guess this is the next step for this issue, or maybe we should put this on hold and "go back" to #1716 to discuss it.

I am not sure this is the case. It depends on how we design the API, of course, but I think it might be possible to make it reasonably composable, we'll see. And even if it's not possible, the default implementation I suggested above would be something like these whopping... 6 lines of code 😄

```javascript
let requestFailures = new Rate('http_req_failures')

http.setResponseCallback(function (response, metricsObject) {
  let requestFailed = response.status !== 403 && (response.status >= 399 || response.status === 0);
  requestFailures.add(requestFailed);
  if (requestFailed) {
    metricsObject.doNotEmitMetrics(); // not a final API, just illustrative :D
  }
})
```

Having to copy-paste them is not perfect, but it's reasonably compact and understandable and, most importantly, it lacks any magic at a distance... And we can easily write a jslib wrapper that just accepts the
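The jslib wrapper idea is cut off above; a hedged sketch of what such a wrapper might look like (the `setResponseCallback` and `doNotEmitMetrics` names are the illustrative APIs proposed in this thread, not a shipped interface, and `trackFailures` is a hypothetical helper name):

```javascript
import http from 'k6/http';
import { Rate } from 'k6/metrics';

// Hypothetical jslib helper: accepts a failure predicate and wires up the
// boilerplate, so users don't have to copy-paste the default implementation.
export function trackFailures(isFailed) {
  let requestFailures = new Rate('http_req_failures');
  http.setResponseCallback(function (response, metricsObject) {
    let failed = isFailed(response);
    requestFailures.add(failed);
    if (failed) {
      metricsObject.doNotEmitMetrics(); // illustrative API, as above
    }
  });
}

// usage: trackFailures((res) => res.status !== 403 && (res.status >= 399 || res.status === 0));
```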
Sorry, @sniku, I completely missed the code comments you added. To respond:

I already discussed some of the current problems with metrics in that internal discussion we had, but I will elaborate more in the new k6 issue I plan to write today or on Monday about the metric refactoring that's a prerequisite for #1435, #572, and #1443 / #961 (comment). In summary, "creating" a new metric with the same name as a built-in system metric (e.g.

Also related to the above, but the distinction between built-in and custom metrics is quite artificial. And the API can be like this,

I said it was illustrative, and I agree, it's not a good API. But what metrics exactly do you think we should emit?
If we must allow users to manipulate built-in metrics, wouldn't it be better to allow users to import them rather than redefine them?

I have a different opinion. Built-in metrics have some guarantees:

I understand that it was a quick/initial mockup, but I really think we should get a reasonably final API proposal before deciding how to go forward. I sincerely doubt that the final implementation will be a "whopping... 6 lines of code".

The fact that we put the burden of all this on the user doesn't feel like good UX to me, but again, I would like to see a good API proposal before I agree/disagree :-)
I like the import of metrics, though that's not going to solve the issue of

Me too 😄 That was the whole reason for me pushing to define this task before we prioritize it... I guess now it's my responsibility to come up with this API, given that I dislike the original proposal and have suggested an alternative. I'll try to think about it some more and propose something over the next week, but here are some very preliminary answers to the questions you posed:

Anyway, I'll let this percolate somewhat and hopefully get back to you sometime next week.
So I'm just thinking a bit about how to maintain a JavaScript feel to it, and my suggestion might be way off. But here goes. As for the interface for manipulating the response event (or whatever you'd call it), I would go for something like the following:

```javascript
const redirect = new Counter("redirect")

http.on("response", (ev) => {
  console.log("default metric: ", ev.metric.name); // whatever k6 thinks it should be

  if (ev.request.url === "https://test.k6.io") {
    // You go ahead and handle this for me!
    return
  }

  const status = ev.response.status

  if (status >= 400) {
    // Change ev.metric to http_reqs_failure
    ev.fail()
  } else if (status >= 300) {
    redirect.add(1)
    ev.preventDefault() // tell k6 to ignore this ev
  } else {
    // Change ev.metric to http_reqs_success
    ev.succeed()
  }
})
```

The above just makes sense to me as a JavaScript developer. It matches how NodeJS and browsers handle these kinds of events. For instance, in the browser

We could also have specific events for

```javascript
// An evil API that has reversed error with success status
http.on("failure", ev => {
  ev.succeed()
})

http.on("success", ev => {
  ev.fail()
})
```

The list of what would be possible goes on:

```javascript
http.on("request", ev => {
  // I'm lazy and don't want to add this everywhere
  ev.request.headers.add("X-Request-Id", uuid())

  // Add tags for every request
  ev.addTags({
  })
})
```

I could go on and on. 😬
look at that, @allansson coming into the thread killing it with good suggestions! ;)
I agree with my proposal :). Thanks @na-- for bringing it up instead of me having to do it.

As a compromise, so that we don't have to have any JS callbacks for the first version (which likely will still not be in v0.31.0), I propose we have an additional built-in callback that is an extremely "special" object that can't be used for anything else (at least for now). But you can register it as the "callback" of

My proposal for the one that should be included is:

```javascript
http.setResponseCallback(http.expectedStatuses([200, 299], [301, 302], 308, 418));
```

The arguments are either an array of 2 ints or an int. If an int, it is just the status code we expect; if an array, it is the inclusive start and end of a range of codes we expect. This one callback will be easy to implement, is likely going to cover 95% of all use cases ever (famous last words), does not stop us from having JS callbacks later, and will mean that the performance for this simple case will likely be quite close to what it would be if we just hardcoded it. I would say that this way we at least won't need to figure out what the arguments to the JS callback need to be now, while still not leaving us with more technical debt, except having to support the built-in one, which I would've argued for in the first place. When we have JS callbacks, the code will just check whether it gets a built-in callback or not. Also, possibly the built-in ones will be able to be called with the same arguments as the JS ones and do what they usually do, possibly being used inside a JS callback - not really certain that will be all that useful, as at least for performance, calling JS -> Go -> JS is probably going to be more work than just the

👍 for a different API, @allansson; we actually discussed internally that this will likely be useful before doing requests, especially in redirect chains. So it is likely that we will need to either have multiple callback registers, or have an 'on' one.

I am somewhat of the opinion that it will be better if this is done with a tag "failed" than with a new metric and omitting the failed request metrics, as that way you won't know the performance of your failing requests (or you won't be able to mark them as failed). I think it is fairly important to know that your failed requests also don't take forever, and it is likely to be helpful, when finding out what is going on when they are failing, to know that they are failing in 10 seconds instead of 1, or that the TLS handshake took 20 seconds.

I would also again like to bring everybody's attention to the fact that k6 supports not only HTTP, so when we start moving this to other protocols, and add protocols, adding additional metrics or having HTTP-specific names is probably not going to be ... great.

To actually address @sniku's concerns with this:

This is also true for having more than 1 URL being hit by the script, as the URL is a tag again. So I am of the opinion that if tags aren't supported by an output, that output should probably not be recommended/supported.
I'm strongly against this. Conditional logic in argument usage is... weird and reaaaally error-prone. Especially if you don't label it. If you really want to be able to mix entries with min-max style ranges, I'd definitely vote for having a separate object you could pass for that, with min and max as properties, like:

```javascript
http.setResponseCallback(
  http.expectedStatuses(
    {min: 200, max: 299},
    308,
    /* ... */
  ));
```

If it takes an array as well, that array should be considered a list of single status codes. Other than that, I think having an optimized, predefined status comparator makes perfect sense.
I like almost all the ideas @mstoykov brought up here 👍 👍 👍 ❤️ for being the voice of reason and cutting down the scope while not compromising on future extendability. I'm also for the slight API modification proposed by @simskij:

```javascript
http.setResponseCallback(http.expectedStatuses([{min: 200, max: 299}, {min: 301, max: 302}, 308, 418]));
```

It makes it less error-prone. As far as I can tell, this proposal solves several issues:

I also very much like the callback API proposed by @allansson ❤️ but I think that proposal should be copy-pasted into #1716 rather than be a part of this issue.
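For context, a hedged sketch of how the agreed-upon built-in callback might look in a complete script (following the argument shape proposed above; the exact shipped API may differ):

```javascript
import http from 'k6/http';

// Register the built-in, Go-implemented callback once for the whole test.
http.setResponseCallback(
  http.expectedStatuses([{ min: 200, max: 299 }, { min: 301, max: 302 }, 308, 418])
);

export default function () {
  http.get('https://httpbin.test.k6.io/status/418'); // expected -> treated as passed
  http.get('https://httpbin.test.k6.io/status/500'); // unexpected -> counted as failed
}
```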
@allansson, you've given a lot of good suggestions, though they are things we probably won't be able to incorporate until we've started working on the new version of the k6 HTTP API, whenever that is. And we'd probably need event loops for most of them (#882)... 😞 And we should consider the performance impact of multiple different callback types, especially if they stack, since that might significantly affect something like

Also a heavy 👍 from me for @mstoykov's suggestion with @simskij's modifications 🎉 So, it seems like we have a consensus on this part 🎉 🎊 Now only this bit remains:

@mstoykov, I partially agree with you here. 60s timeouts and 10ms

That said, if we use a
An internal discussion later, the consensus is that there will be both a tag and a metric:
In the (far) future, when k6 has support for disabling metrics and having thresholds supporting something like
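A hedged sketch of what this tag-plus-metric consensus could enable, assuming the metric ends up named `http_req_failed` and the tag `passed:true` (the names used in the commit message below):

```javascript
export let options = {
  thresholds: {
    // threshold on the new failure-rate metric
    http_req_failed: ['rate<0.1'],
    // threshold on the duration of requests that passed the response callback
    'http_req_duration{passed:true}': ['p(95)<500'],
  },
};
```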
This is done through running a callback on every request before emitting the metrics. Currently only a built-in callback looking at good statuses is possible, but the possibility of future JS-based callbacks is left open. The implementation specifically makes it hard to figure out anything about the returned callback from JS and tries not to change any other code, so it makes future implementation easier, but instead tries to do the bare minimum without imposing any limitations on the future work. Additionally, because it turned out to be easy, setting the callback to null will make the http library neither tag requests as passed nor emit the new `http_req_failed` metric, essentially giving users a way to go back to the previous behaviour. The cloud output is also not changed, as `http_req_li` is already aggregated based on tags, so if an `http_req_li` is received that has the tag `passed:true`, then the whole `http_req_li` is about requests that have "passed". part of #1828
As per discussion with @sniku, we agreed upon the output by default being changed to look like:

which is basically the expanded version of
I think that:

I agree that this is a better experience. I am not certain that we shouldn't have an easy way to show the time for the failing ones as well ... but arguably this is just one more

If we agree on 3. (and because

```go
for _, name := range []string{
	"http_reqs{passed:true}",
	"http_req_duration{passed:true}",
	"http_req_blocked{passed:true}",
	"http_req_connecting{passed:true}",
	"http_req_tls_handshaking{passed:true}",
	"http_req_sending{passed:true}",
	"http_req_waiting{passed:true}",
	"http_req_receiving{passed:true}",
} {
	if _, ok := e.thresholds[name]; ok {
		continue
	}
	parent, sm := stats.NewSubmetric(name)
	e.submetrics[parent] = append(e.submetrics[parent], sm)
}
```

after https://github.com/loadimpact/k6/blob/11811171a3c416daf17f7f0ad182ca30aa9990bb/core/engine.go#L100-L108 seems to do the trick, and arguably it will be something kind of similar with #1321.

Arguably:

To me, the biggest problem (if we agree on 3.) seems to be #1806, which either needs to be done now, or the changes will need to be redone when we get to it, although arguably the changes will not be all that complicated, so 🤷‍♂️
Let's backtrack a little bit. I haven't looked at the code in #1856 yet, but it seems like it fulfills everything we all previously agreed on (#1828 (comment)), right? If so, given that there is a very easy workaround to surface specific sub-metrics in the end-of-test summary (grafana/k6-docs#205), I think the PR covers way more than 80% of the goals for this issue for less than 20% of the effort:

Again, this seems plenty good enough for such a contentious set of features, for now, i.e. for k6 v0.31.0... I am for continuing with things in this priority order: #1832, followed by #1806, followed by #1321, followed by changing the summary. And this changing of the summary should include showing

If we decide that we're not going to release any of these things in k6 v0.31.0, but start working on them now in preparation for k6 v0.32.0, I think there's plenty of time to get them done by the end of April... I am mostly against hardcoding a bunch of cruft in the

Remember, besides the needless clutter in the end-of-test summary, until we have #763, every
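For illustration, the sub-metric workaround referenced above (grafana/k6-docs#205) boils down to defining a threshold on the tagged sub-metric, since thresholds force a sub-metric to appear in the end-of-test summary; a minimal hedged sketch (tag name assumed to be `passed`, as elsewhere in this thread):

```javascript
export let options = {
  thresholds: {
    // an effectively always-passing threshold, used only so the tagged
    // sub-metric shows up in the end-of-test summary
    'http_req_duration{passed:true}': ['max>=0'],
  },
};
```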
As @na-- says, the #1856 PR goes a long way towards fulfilling this feature request. Still, even if #1856 is merged, the second basic requirement of this feature is not fulfilled.

Not getting

If I'm reading this sentence correctly, you are proposing to skip showing

I think we all firmly agree on this. We don't want to double the CPU and memory burden. While I'm personally not against the "clutter" in the output, I see your point. I agree that adding

I'm suggesting to display tagged values only for

The proposed output would look similar to this:
Yes, that was my proposal. And I think it's realistic to target k6 v0.32.0 for all of that, if we put these 4 issues on the priority track and start working on them now. The rationale is that if people have a threshold on

All of that said, I am not very opposed to a temporary hack to show

Why
I think the end state of this effort, after #1832, #1806, #1321, and the summary changes have been implemented, should look like this:

Where
Do you mean that the first 401 should always be treated as a success for digest/NTLM requests? Effectively bypassing the callback?

I haven't looked at the PR code yet, so I'm not sure what you mean by "effectively bypassing the callback", but yes, the first HTTP request in a digest/NTLM auth scenario should expect a 401 response and not treat that as a failure.
This is out of scope for this feature request, but we will likely need to add
Or we could stop adding extra metrics for information we already have encoded as tags of existing metrics, and fix the fundamental issues that prevent us from using them as intended? 🙄 Adding an extra tag to metrics is mostly fine (whether that's
This is done through running a callback on every request before emitting the metrics. Currently only a built-in callback looking at good statuses is possible, but the possibility of future JS-based callbacks is left open. The implementation specifically makes it hard to figure out anything about the returned callback from JS and tries not to change any other code, so it makes future implementation easier, but instead tries to do the bare minimum without imposing any limitations on the future work. Additionally, because it turned out to be easy, setting the callback to null will make the http library neither tag requests with `expected_response` nor emit the new `http_req_failed` metric, essentially giving users a way to go back to the previous behaviour. part of #1828
Some implementation notes and changes from #1828 (comment):
Shouldn't we close this issue now? 90% of the original issue is implemented, and the rest is hidden behind so much noise that a fresh issue would be a better place for it...
The NTLM handshake actually consists of two HTTP 401s before an HTTP 200 is received (see https://nav7neeet.medium.com/maintaining-session-in-ntlm-authentication-3e8c604d87e9).
I agree with @na-- and will take the liberty of closing this issue. If anyone wants to continue any part of the conversation, please create a new issue that is (preferably) narrowly focused on that specific part. (Also locking it to prevent additional input from getting lost in the void and to promote opening new issues.)
I'm opening a new issue instead of building on top of #1311 to start the discussion with a clean slate. I believe this proposal addresses most if not all issues brought up in the original proposal.
Before discussing the implementation details and effort required to build this feature, let's discuss why we want this feature at all.
Basic requirements
This basic script must:
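The example script and its requirements list did not survive the page extraction; below is a plausible, hedged reconstruction of the script, based on the snippets quoted later in the thread:

```javascript
import http from 'k6/http';

export let options = {
  thresholds: {
    // fail the test if more than 10% of requests fail
    http_reqs_failure: ['rate<0.1'],
  },
};

export default function () {
  http.get('https://httpbin.test.k6.io/status/200');
  http.get('https://httpbin.test.k6.io/status/503');
}
```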
Discussion about the metric type for http_reqs_failure

There are two possible metric types for the new `http_reqs_failure`: Rate or Counter. Both types have their advantages, and it's not entirely clear which one is better for this use case.

Advantages of Rate:

- `http_reqs_failure: rate<0.1`

Advantages of Counter:

- `http_reqs` metric.

The end-of-test summary for this test should look similar to this:

Output when using Counter metric

Output when using Rate metric

Neither `Rate` nor `Counter` covers all possible use cases. I think `Rate` is preferable over `Counter`. If we added `count` to the `Rate` metric, the output could possibly look similar to this.

Note, I'm not really advocating for this, just pointing out that neither Rate nor Counter covers all use cases.
Why do we have failures and successes as separate metrics?

The obvious critique of the above suggestion is to say that `http_reqs_success` is unnecessary because it's the opposite of `http_reqs_failure`. This is true, but some outputs don't allow defining logic, and therefore it's not possible to show `http_reqs_success` unless k6 itself produces it. Once the metric filtering feature is developed, I would suggest we exclude `http_reqs_success` by default.

http_req_duration and other http_req_* metrics

The core requirement of this feature is to be able to see http_req_duration for successful requests only. There are two possibilities here:

1. Don't emit `http_req_duration` for failures.
2. Tag `http_req_duration` with a `failed:true|false` tag and display filtered values.

Let's discuss both approaches.

Don't emit http Trend metrics for failed requests

In this approach, `http_req_duration` and other http metrics won't include failed requests towards the metric's internal state. Users who want to track error timings can define custom metrics like this:
withfailed:true|false
tag and display filtered values.With this approach, we would emit the
http_req_duration
and friends as we used to, but we will tag values withfailed:true|false
The default-end-of-test summary would display only successful requests like this:
The most problematic issue with this approach is that some outputs don't ingest tags and won't be able to display http_req_duration for successful requests only.
Examples of
http_req_duration
andhttp_reqs_failure
k6 should produce with this approach.Cloud support
There are additional considerations for the
k6 cloud
support.Performance insights
The "performance insights" feature and web app currently assume that successful requests have status 200-399.
Both approaches listed above solve these problems, although in different ways.
In approach 1, we would only get timings for successful requests and therefore we won't show timings for failed requests.
We will still get tagged
http_reqs_failure
metrics and therefore will be able to show errors without timings.We would probably redesign this UI to separate failures from successes in a better way.
In approach 2, we would get a new standard tag called
failed
to allhttp_req_*
metrics, includinghttp_req_li_all
.Timings would still be shown for errors (although probably not useful), but the background of the row would be determined by the
failed
tag.(alternative) Why don't we skip the new metrics and purely rely on
failed
tag?It's possible to extend the existing
http_reqs
counter metric by tagging requests withfailed
and changing the metric type toRate
. If that's done, the following script would be possible,Possible end-of-test summary:
Possible problems with this approach are:
I'm (currently) against this alternative.
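A hedged reconstruction of the script this alternative describes (the original was lost in extraction), assuming `http_reqs` becomes a Rate tagged with `failed:true|false`:

```javascript
import http from 'k6/http';

export let options = {
  thresholds: {
    // only possible if http_reqs is tagged with failed and turned into a Rate
    'http_reqs{failed:true}': ['rate<0.1'],
  },
};

export default function () {
  http.get('https://httpbin.test.k6.io/status/200');
  http.get('https://httpbin.test.k6.io/status/503');
}
```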
Defining failure
To determine if a request has failed or succeeded, a JavaScript hook function is invoked after the request, but before the metrics emission. This proposal builds on #1716.
Per-request handling
Sometimes users need to handle special cases.
Alternative 1 - handle inside the hook
Alternative 2 - override the hook per request
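The examples under the two alternatives were stripped from the page; a hedged sketch of both, reusing the hook and per-request `responseCallback` shapes proposed earlier in the thread (none of these names are final, and the hook is assumed to return whether the request succeeded):

```javascript
import http from 'k6/http';

// Alternative 1 - handle the special case inside the single global hook
http.setResponseCallback(function (response) {
  if (response.url.endsWith('/digest-auth')) {
    return response.status === 401; // the first 401 of a digest handshake is expected
  }
  return response.status >= 200 && response.status < 400;
});

export default function () {
  // Alternative 2 - override the hook for one specific request
  http.get('https://httpbin.test.k6.io/status/418', {
    responseCallback: (response) => response.status === 418,
  });
}
```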
What about the redirect chains?
This is up for discussion.
Should the hook fire on every request in the chain or only on the last one?
Concerns
`http_req_li_all` and other `http_req_*` metrics.