LLMP compression #46
that is a very bad idea, sorry :)
Good to know, we haven't benchmarked anything yet. The basic concept was to add a "compressed" flag to the llmp header, then, in send_buf, directly compress the complete payload to a (large) buffer allocated by llmp_alloc, and then add a …
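A minimal, self-contained sketch of that concept, assuming a simplified message layout (the `Msg` struct, the flag constant, and the function names below are illustrative stand-ins, not the actual LLMP types):

```rust
// Sketch only: compress the whole payload first, then place it in a message
// sized exactly to the compressed bytes, with a flag so the receiver inflates.
use miniz_oxide::deflate::compress_to_vec;
use miniz_oxide::inflate::decompress_to_vec;

const FLAG_COMPRESSED: u32 = 0x1;

/// Stand-in for an LLMP header + payload (hypothetical layout).
struct Msg {
    flags: u32,
    payload: Vec<u8>,
}

fn send_buf(payload: &[u8]) -> Msg {
    Msg {
        flags: FLAG_COMPRESSED,
        payload: compress_to_vec(payload, 6), // 6 = medium compression level
    }
}

fn recv_buf(msg: &Msg) -> Vec<u8> {
    if msg.flags & FLAG_COMPRESSED != 0 {
        decompress_to_vec(&msg.payload).expect("corrupt compressed payload")
    } else {
        msg.payload.clone()
    }
}

fn main() {
    let data = vec![0u8; 1 << 16]; // a mostly-zero, observer-map-like buffer
    let msg = send_buf(&data);
    assert_eq!(recv_buf(&msg), data);
    println!("{} bytes -> {} bytes on the wire", data.len(), msg.payload.len());
}
```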
Okay. I got the idea
I mean if you want an easy way out, you could allocate a compression buffer internally, and copy twice. But it's fine to write your proposal instead :)
I've had a look at the details of send_buf. So, I have an idea that we can compress the buffer directly after the call to alloc_next. My concern is: since alloc_next checks whether we need to allocate a new page depending on the requested size, would it be better to allocate a compression buffer internally, because then we can know the size after compression beforehand?
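For illustration, the "internal compression buffer" variant could be sketched like this (hypothetical names; the real allocator hands out shared-memory slots, simulated here with a Vec). Compressing first makes the exact allocation size known, at the cost of one extra copy:

```rust
// Hypothetical sketch: compress into a temporary buffer first, so the exact
// size to request from an alloc_next-style allocator is known up front.
use miniz_oxide::deflate::compress_to_vec;

fn main() {
    let payload = b"testcase bytes ".repeat(1000);

    // Step 1: compress into an internal buffer; only now is the size known.
    let tmp = compress_to_vec(&payload, 6);

    // Step 2: ask the page allocator for exactly tmp.len() bytes and copy.
    // (Simulated with a Vec; the real allocator would return a shmem slot.)
    let mut page_slot = vec![0u8; tmp.len()];
    page_slot.copy_from_slice(&tmp);

    println!("raw {} -> compressed {}", payload.len(), page_slot.len());
}
```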
as for a no_std gzip lib, I've found this one
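The link didn't survive here, but as one example of a pure-Rust DEFLATE implementation that works under `no_std` with `alloc`, `miniz_oxide` can be used like this (an illustration, not necessarily the crate that was linked):

```rust
// no_std-compatible compression, assuming a crate like miniz_oxide that
// only needs `alloc`. Build as a library crate.
#![no_std]
extern crate alloc;

use alloc::vec::Vec;
use miniz_oxide::{deflate::compress_to_vec, inflate::decompress_to_vec};

pub fn pack(data: &[u8]) -> Vec<u8> {
    compress_to_vec(data, 6)
}

pub fn unpack(packed: &[u8]) -> Vec<u8> {
    decompress_to_vec(packed).expect("invalid DEFLATE stream")
}
```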
Hello, I've run an experiment. I'll make a PR after making my code more decent
just ensure that compression is configurable and a client can send both, so that there is a flag. |
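One way to read "configurable" is compile-time gating, in the spirit of the `llmp_compress` cargo feature that was added later; the function and flag here are hypothetical:

```rust
// Illustrative sketch of compile-time configurability. With the feature off,
// flags stay 0 and the payload passes through untouched, so compressed and
// uncompressed clients can share one broker.
const LLMP_FLAG_COMPRESSED: u32 = 0x1;

#[cfg(feature = "llmp_compress")]
fn prepare_payload(buf: &[u8]) -> (u32, Vec<u8>) {
    // Compression compiled in: mark the message so receivers know to inflate.
    (LLMP_FLAG_COMPRESSED, miniz_oxide::deflate::compress_to_vec(buf, 6))
}

#[cfg(not(feature = "llmp_compress"))]
fn prepare_payload(buf: &[u8]) -> (u32, Vec<u8>) {
    (0, buf.to_vec())
}
```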
After a long struggle of debugging, I've found out that my implementation for the previous experiment was wrong... I fixed it and tried to test the performance again under the same conditions as the previous experiment. As @vanhauser-thc pointed out, it slows the fuzzer down to a huge degree (to less than 1/4 of the speed on my machine).
Are you sure you don't want to push the current code somewhere, so we can try to figure out improvements? My best guess is that you do more copies than needed. It would be a nice feature to have for network-connected fuzzers.
Even for network-connected fuzzers you would not want that slowdown. It takes longer to compress and send fewer packets than not to compress and send more packets. Only if you can really, really compress tightly (e.g. 50k of only 'A') does it make sense. But queue entries will usually not compress very much - between 10-25%. That is a lot of time for not many fewer packets sent.
We send over the observer maps, too. These will mostly be 0s, so they compress extremely well, but they take up quite some storage in the broker map and take longer to send. At least having the option (if only for low-mem devices) is nice.
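A toy measurement of that claim (not LibAFL code): a sparse, mostly-zero map compresses to a tiny fraction of its size, while high-entropy data barely shrinks.

```rust
use miniz_oxide::deflate::compress_to_vec;

fn main() {
    let mut map = vec![0u8; 1 << 16]; // 64 KiB coverage-map-like buffer
    for i in (0..map.len()).step_by(997) {
        map[i] = 1; // sprinkle a few hit counts
    }
    let packed = compress_to_vec(&map, 6);
    println!("sparse map: {} -> {} bytes", map.len(), packed.len());

    // Pseudo-random bytes as a stand-in for an already-dense queue entry.
    let mut x: u32 = 0x12345678;
    let noisy: Vec<u8> = (0..1 << 16)
        .map(|_| {
            // xorshift32: cheap pseudo-random bytes
            x ^= x << 13;
            x ^= x >> 17;
            x ^= x << 5;
            x as u8
        })
        .collect();
    let packed = compress_to_vec(&noisy, 6);
    println!("noisy buf:  {} -> {} bytes", noisy.len(), packed.len());
}
```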
My judgement might have been wrong. I'll test it again. |
Observer maps can make sense - if they are large enough - but then I would only compress those.
I've quickly run it 5 times to test
I'll open a draft PR to see how I can improve this one. By the way, I occasionally observe a weird behaviour in the 'exec/sec' with the current …
Something is causing the fluctuation.
From this fluctuation I think we can see that this kind of measurement does not really tell us anything.
* add compression
* modify event/llmp.rs
* rename to LLMP_TAG_COMPRESS
* remove compression code from bolts/llmp.rs
* add compress.rs
* handle compress & decompress in GzipCompress struct, compress if the size is large enough
* add code for benchmark
* remove LLMP_TAG_COMPRESS, use a flag instead
* cargo fmt
* rm test.sh
* passes the test
* comment benchmarks code out
* add recv_buf_with_flag()
* add the llmp_compress feature
* add send_buf, do not compile compression code if it's not used
* fix warning
* merged dev
* add error handling code
* doc for compress.rs
* remove tag from decompress
* rename every flag to flags
* fix some clippy.sh errors
* simplify recv_buf
* delete benchmark printf code
* cargo fmt
* fix doc

Co-authored-by: Dominik Maier <domenukk@gmail.com>
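The shape those commits describe can be sketched roughly as follows. The struct, threshold, and flag names mirror the commit messages, but the exact LibAFL API may differ:

```rust
// Sketch: a GzipCompress-style helper that only compresses payloads above a
// size threshold (small buffers aren't worth the CPU time) and signals the
// choice to the receiver via a message flag.
use miniz_oxide::deflate::compress_to_vec;
use miniz_oxide::inflate::decompress_to_vec;

pub const LLMP_FLAG_COMPRESSED: u32 = 0x1;

pub struct GzipCompressor {
    threshold: usize,
}

impl GzipCompressor {
    pub fn new(threshold: usize) -> Self {
        Self { threshold }
    }

    /// Returns `Some(compressed)` if the buffer was large enough, else `None`
    /// (the caller then sends the buffer uncompressed with flags = 0).
    pub fn compress(&self, buf: &[u8]) -> Option<Vec<u8>> {
        (buf.len() >= self.threshold).then(|| compress_to_vec(buf, 6))
    }

    pub fn decompress(&self, buf: &[u8]) -> Vec<u8> {
        decompress_to_vec(buf).expect("invalid compressed payload")
    }
}
```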
Hi, I am interested in "LLMP brotli compression" and have been browsing code related to LLMP lately. Is "LLMP brotli compression" referring to compressing (and decompressing) `Event`s that are passed between a broker and clients? So we want to compress large members of `Event`, like `Event::NewTestcase->observers_buf` or `Event::Log->message`, using the `brotli` crate?
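If that is the idea, a rough sketch with the `brotli` crate could look like this. The `brotli_pack`/`brotli_unpack` helpers and parameters are hypothetical; how this would hook into `Event` serialization is an open question:

```rust
// Sketch: compress a large, already-serialized Event member (such as an
// observers_buf) with the brotli crate, and inflate it on the other side.
use std::io::Write;

fn brotli_pack(observers_buf: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    {
        // quality 5 and lg_window_size 22 are middle-of-the-road settings
        let mut w = brotli::CompressorWriter::new(&mut out, 4096, 5, 22);
        w.write_all(observers_buf).expect("write to Vec cannot fail");
    } // the remaining bytes are flushed when the writer is dropped
    out
}

fn brotli_unpack(packed: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    let mut r = brotli::Decompressor::new(packed, 4096);
    std::io::copy(&mut r, &mut out).expect("valid brotli stream");
    out
}

fn main() {
    let observers_buf = vec![0u8; 1 << 16]; // mostly-zero map, as discussed
    let packed = brotli_pack(&observers_buf);
    assert_eq!(brotli_unpack(&packed), observers_buf);
    println!("{} -> {} bytes", observers_buf.len(), packed.len());
}
```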