HTML Report is filled up with failed requests for long running tests #2328
Comments
PRs welcome. It was discussed a while ago how it could be useful to process the error message using regexes etc. (or maybe to disable grouping by message and just group by response code). You may want to try switching from HttpUser to FastHttpUser (or vice versa). Or, as a workaround, overwrite the failure message manually (using catch_response=True and resp.failure()).
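The regex idea mentioned above could look something like the following minimal sketch (purely illustrative; `normalize_failure_message` is a hypothetical helper, not part of Locust):

```python
import re

def normalize_failure_message(message: str) -> str:
    # Hypothetical helper: collapse per-request details (product ids,
    # long numbers) so that otherwise-identical failures group into a
    # single row in the HTML report instead of one row per unique URL.
    message = re.sub(r"/p/\d+", "/p/[id]", message)    # product-page ids
    message = re.sub(r"\b\d{4,}\b", "[num]", message)  # other long numbers
    return message
```

Such a normalizer could be applied to the error text before passing it to `response.failure()`, so the report aggregates on the normalized message.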
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 10 days.
If the issue is closed, am I still able to post comments on it? In the coming 1-2 weeks, I would like to post an example of your workaround that overwrites the failure message under some conditions.
Yes!
Sounds great. I fear I won't have time to implement the actual feature in a reasonable time, but I'd love to put a snippet here for documentation, in case somebody else stumbles upon it.
Since my private time for such topics was unfortunately very limited, here is a small sum-up (big thanks to the great documentation by Locust). The magic words I missed initially were grouping requests and validating responses, as @cyberw explained above. In my special case, I was testing an e-commerce page. Random product pages were grabbed from category pages and then visited, and some of them resulted in errors. Fortunately, my "visit a product page" task is quite simple and can look something like this:

```python
def product_id():
    # return a random product id
    ...

@task
def visit_product_page(self):
    rnd_product_id = product_id()
    self.client.get(f"/p/{rnd_product_id}")
```

To work around the result HTML being bloated with (mostly) the same error on different product pages, all I had to do was patch the visit function to override the failure with a custom error:

```python
@task
def visit_product_page(self):
    rnd_product_id = product_id()
    with self.client.get(f"/p/{rnd_product_id}", catch_response=True) as response:
        if response.text != "Success":
            response.failure("failed")
```

Of course, you can (and should!) extend this minimal example to more fine-grained error handling by evaluating the response.
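One way to make that error handling more fine-grained while keeping failures grouped (a sketch under my own assumptions, not from this thread; `failure_message` is a hypothetical helper) is to derive the failure message from the status code alone:

```python
from typing import Optional

def failure_message(status_code: int) -> Optional[str]:
    # Hypothetical helper: map a status code to a coarse failure label,
    # so the report groups failures by error class rather than by URL.
    # Returns None when the request should count as a success.
    if 200 <= status_code < 400:
        return None
    if status_code == 404:
        return "product page not found"
    if 500 <= status_code < 600:
        return "server error on product page"
    return f"unexpected status {status_code}"
```

Inside the `with` block above, one would compute `msg = failure_message(response.status_code)` and call `response.failure(msg)` only when `msg` is not `None`.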
```python
from locust import HttpUser, TaskSet, task, events

class MyTaskSet(TaskSet):
    ...

class MyUser(HttpUser):
    ...
```
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 10 days.
This issue was closed because it has been stalled for 10 days with no activity. This does not necessarily mean that the issue is bad, but it most likely means that nobody is willing to take the time to fix it. If you have found Locust useful, then consider contributing a fix yourself! |
Describe the bug
I use Locust for long-running tests (>30m and >60m, respectively) against a bigger system. Afterwards, HTML reports are generated (by Locust) and published (in various ways, with custom logic on my side).
For long-running tests, the "Failures Statistics" section is quite bloated. Unlike in the Request/Response Time Statistics above it, each failed request is listed separately with its unique request/response:
Lines 1 and 3 in this statistic are grouped under a category in the locustfile. However, they are then listed separately per request (in this example, I am visiting product pages on an e-commerce platform and would like to see how many visits failed in total).
Expected behavior
Requests which are otherwise grouped in the result statistics should be grouped also in the failure statistics.
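For context on the grouping referred to here: in a locustfile, requests with dynamic URLs are commonly grouped in the request statistics via the `name=` argument to the client methods. A minimal sketch (`stats_name` is a hypothetical helper; the `/p/...` path layout is taken from the example above):

```python
import re

def stats_name(path: str) -> str:
    # Hypothetical helper: replace the dynamic product id with a fixed
    # placeholder, so all product pages share one statistics entry.
    return re.sub(r"/p/\d+", "/p/[product_id]", path)

# In a task, this would be used roughly as:
#   path = f"/p/{rnd_product_id}"
#   self.client.get(path, name=stats_name(path))
```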
Actual behavior
HTML reports are bloated with unique failing requests even though they are grouped.
Steps to reproduce