libafl multimachine: disable ratelimiting #2558

Merged
merged 7 commits into main from fix_mm_ratelimiter on Sep 30, 2024
Conversation

rmalmain
Collaborator

Originally, there was a (primitive) rate limiter to handle the case where a node receives a lot of testcases at once.
This caused the buffer to not be emptied fast enough on some targets when inputs are heavy.
I will refactor rate limiting later if it turns out to be necessary; for now, disabling it should not cause any problems.
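For illustration, a minimal sketch of the kind of primitive receive-side limiter being described (all names here are invented for the example; this is not LibAFL's actual multimachine code):

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

// Hypothetical illustration: a receive loop with a primitive rate limiter
// that drains at most one testcase per interval.
struct NodeRx {
    buffer: VecDeque<Vec<u8>>, // testcases queued by the transport
    last_recv: Instant,
    min_interval: Duration, // the "rate limit" between two reads
}

impl NodeRx {
    // Returns at most one testcase per `min_interval`. When large inputs
    // arrive in bursts, the buffer fills faster than this loop empties it.
    fn try_recv(&mut self) -> Option<Vec<u8>> {
        if self.last_recv.elapsed() < self.min_interval {
            return None; // rate limited: the testcase stays in the buffer
        }
        self.last_recv = Instant::now();
        self.buffer.pop_front()
    }
}
```

With heavy inputs arriving in bursts, a cap like this on the drain side means the buffer grows faster than it empties, which is the behavior this PR removes.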

@domenukk
Member

The rate limiting should be on the sender side, IMHO

@tokatoka tokatoka merged commit 173aedd into main Sep 30, 2024
101 checks passed
@tokatoka tokatoka deleted the fix_mm_ratelimiter branch September 30, 2024 13:57
@rmalmain
Collaborator Author

> The rate limiting should be on the sender side, IMHO

Hmm, I'm not sure about this. A child doesn't know how many other children there are for a given parent, so it cannot know a priori which rate limit to choose statically.
I think it would only work if the parent dynamically assigned a rate limit to each child, but that would require much more extensive modifications to the current implementation.
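For context, a rough sketch of that dynamic scheme, with all names invented: the parent would split a global budget across its children and push each share out as a control message, which is the extra protocol machinery the current implementation lacks.

```rust
// Hypothetical sketch: the parent splits a global receive budget evenly
// across its children. `ChildHandle` and the rate recomputation are
// invented; a real version would need new control-message plumbing.
struct ChildHandle;

struct Parent {
    children: Vec<ChildHandle>,
    max_msgs_per_sec: u32, // total rate the parent is willing to absorb
}

impl Parent {
    // Recomputed whenever a child joins or leaves, then pushed to each
    // child as a control message over the multimachine link.
    fn per_child_rate(&self) -> u32 {
        let n = self.children.len().max(1) as u32;
        self.max_msgs_per_sec / n
    }
}
```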

@domenukk
Member

Just a global setting ("don't send more often than every x millis") is fine.
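A minimal sketch of such a global setting, again with invented names: the sender tracks its last send time and skips any attempt that comes sooner than the configured interval.

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch of the global knob: refuse to send more often
// than once per `min_send_interval`, regardless of topology.
struct RateLimitedSender {
    last_send: Option<Instant>,
    min_send_interval: Duration, // the "every x millis" setting
}

impl RateLimitedSender {
    fn maybe_send(&mut self, payload: &[u8], send: impl FnOnce(&[u8])) -> bool {
        let ready = self
            .last_send
            .map_or(true, |t| t.elapsed() >= self.min_send_interval);
        if ready {
            send(payload);
            self.last_send = Some(Instant::now());
        }
        ready // false means the caller should retry later
    }
}
```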

rmalmain added a commit that referenced this pull request Oct 7, 2024
* disable rate limiting for now

* fix

* clippy

---------

Co-authored-by: Dongjia "toka" Zhang <tokazerkje@outlook.com>